Sample records for image analysis code

  1. Spatial transform coding of color images.

    NASA Technical Reports Server (NTRS)

    Pratt, W. K.

    1971-01-01

    The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
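
    To make the transform-coding idea concrete, here is a minimal numpy/scipy sketch (not the paper's coder): the RGB image is converted to a luminance/chrominance representation, each plane is coded with a block DCT and uniform quantization, and the chrominance planes receive much coarser quantization (fewer bits per element), in the spirit of the abstract. Block sizes and step sizes below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def rgb_to_yuv(rgb):
        # Standard luminance/chrominance transform (one of several the paper compares).
        m = np.array([[0.299, 0.587, 0.114],
                      [-0.147, -0.289, 0.436],
                      [0.615, -0.515, -0.100]])
        return rgb @ m.T

    def code_plane(plane, block=8, step=16.0):
        """Block DCT + uniform quantization of one image plane (quantize/dequantize round trip)."""
        h, w = plane.shape
        out = np.zeros_like(plane, dtype=float)
        for i in range(0, h - h % block, block):
            for j in range(0, w - w % block, block):
                coeffs = dctn(plane[i:i+block, j:j+block], norm='ortho')
                q = np.round(coeffs / step)            # this is where the bits are spent
                out[i:i+block, j:j+block] = idctn(q * step, norm='ortho')
        return out

    rgb = np.random.rand(64, 64, 3) * 255              # stand-in for a real image
    yuv = rgb_to_yuv(rgb)
    y_hat = code_plane(yuv[:, :, 0], step=8.0)          # finer quantization for luminance
    u_hat = code_plane(yuv[:, :, 1], step=32.0)         # coarse quantization for chrominance
    v_hat = code_plane(yuv[:, :, 2], step=32.0)
    ```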

  2. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  3. Displacement measurement with nanoscale resolution using a coded micro-mark and digital image correlation

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Ma, Chengfu; Chen, Yuhang

    2014-12-01

    A method for simple and reliable displacement measurement with nanoscale resolution is proposed. The measurement is realized by combining a common optical microscopy imaging of a specially coded nonperiodic microstructure, namely two-dimensional zero-reference mark (2-D ZRM), and subsequent correlation analysis of the obtained image sequence. The autocorrelation peak contrast of the ZRM code is maximized with well-developed artificial intelligence algorithms, which enables robust and accurate displacement determination. To improve the resolution, subpixel image correlation analysis is employed. Finally, we experimentally demonstrate the quasi-static and dynamic displacement characterization ability of a micro 2-D ZRM.
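
    The correlation step can be illustrated with a short numpy/scipy sketch: a reference image of the mark and a shifted image are cross-correlated, and the correlation peak is refined to sub-pixel accuracy with a parabolic fit. The ZRM code design and the code-optimization algorithms from the paper are not reproduced; this only shows displacement estimation by sub-pixel image correlation, assuming both images have the same (even) size.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def estimate_shift(ref, img):
        """Estimate (dy, dx) of `img` relative to `ref` via cross-correlation."""
        ref0 = ref - ref.mean()
        img0 = img - img.mean()
        corr = fftconvolve(img0, ref0[::-1, ::-1], mode='same')   # cross-correlation surface
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        dy = peak[0] - ref.shape[0] // 2
        dx = peak[1] - ref.shape[1] // 2

        # Parabolic sub-pixel refinement around the integer peak.
        def refine(c_m, c_0, c_p):
            denom = c_m - 2 * c_0 + c_p
            return 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom

        y, x = peak
        if 0 < y < corr.shape[0] - 1 and 0 < x < corr.shape[1] - 1:
            dy += refine(corr[y - 1, x], corr[y, x], corr[y + 1, x])
            dx += refine(corr[y, x - 1], corr[y, x], corr[y, x + 1])
        return dy, dx
    ```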

  4. Wavelet-based compression of M-FISH images.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R

    2005-05-01

    Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelengths. This paper presents embedded M-FISH image coding (EMIC), where the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.

  5. Visual communications and image processing '92; Proceedings of the Meeting, Boston, MA, Nov. 18-20, 1992

    NASA Astrophysics Data System (ADS)

    Maragos, Petros

    The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)

  6. Automatic removal of cosmic ray signatures in Deep Impact images

    NASA Astrophysics Data System (ADS)

    Ipatov, S. I.; A'Hearn, M. F.; Klaasen, K. P.

    The results of recognition of cosmic ray (CR) signatures on single images made during the Deep Impact mission were analyzed for several codes written by several authors. For automatic removal of CR signatures on many images, we suggest using the code imgclean ( http://pdssbn.astro.umd.edu/volume/didoc_0001/document/calibration_software/dical_v5/) written by E. Deutsch as other codes considered do not work properly automatically with a large number of images and do not run to completion for some images; however, other codes can be better for analysis of certain specific images. Sometimes imgclean detects false CR signatures near the edge of a comet nucleus, and it often does not recognize all pixels of long CR signatures. Our code rmcr is the only code among those considered that allows one to work with raw images. For most visual images made during low solar activity at exposure time t > 4 s, the number of clusters of bright pixels on an image per second per sq. cm of CCD was about 2-4, both for dark and normal sky images. At high solar activity, it sometimes exceeded 10. The ratio of the number of CR signatures consisting of n pixels obtained at high solar activity to that at low solar activity was greater for greater n. The number of clusters detected as CR signatures on a single infrared image is by at least a factor of several greater than the actual number of CR signatures; the number of clusters based on analysis of two successive dark infrared frames is in agreement with an expected number of CR signatures. Some glitches of false CR signatures include bright pixels repeatedly present on different infrared images. Our interactive code imr allows a user to choose the regions on a considered image where glitches detected by imgclean as CR signatures are ignored. In other regions chosen by the user, the brightness of some pixels is replaced by the local median brightness if the brightness of these pixels is greater by some factor than the median brightness. The interactive code allows one to delete long CR signatures and prevents removal of false CR signatures near the edge of the nucleus of the comet. The interactive code can be applied to editing any digital images. Results obtained can be used for other missions to comets.

  7. Visual information processing; Proceedings of the Meeting, Orlando, FL, Apr. 20-22, 1992

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1992-01-01

    Topics discussed in these proceedings include nonlinear processing and communications; feature extraction and recognition; image gathering, interpolation, and restoration; image coding; and wavelet transform. Papers are presented on noise reduction for signals from nonlinear systems; driving nonlinear systems with chaotic signals; edge detection and image segmentation of space scenes using fractal analyses; a vision system for telerobotic operation; a fidelity analysis of image gathering, interpolation, and restoration; restoration of images degraded by motion; and information, entropy, and fidelity in visual communication. Attention is also given to image coding methods and their assessment, hybrid JPEG/recursive block coding of images, modified wavelets that accommodate causality, modified wavelet transform for unbiased frequency representation, and continuous wavelet transform of one-dimensional signals by Fourier filtering.

  8. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method from CSI-based modified JPEG and standard JPEG under a given target bit rate, utilizing so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
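
    A rough Pillow-based sketch of the adaptive idea: code the image both directly with JPEG and with a downsample-before-coding / upsample-after-decoding path, then keep whichever reconstruction has the higher PSNR. The paper's cubic-spline interpolation and ρ-domain rate model are replaced here by plain bicubic resampling and a fixed quality setting, so this only illustrates the selection logic, not the published algorithm.

    ```python
    import io
    import numpy as np
    from PIL import Image

    def psnr(a, b):
        mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
        return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

    def jpeg_roundtrip(img, quality):
        buf = io.BytesIO()
        img.save(buf, format='JPEG', quality=quality)
        buf.seek(0)
        return Image.open(buf).convert('L')

    def adaptive_code(img, quality=20):
        """`img` is assumed to be a grayscale PIL image ('L' mode)."""
        # Path A: standard JPEG at the given quality.
        rec_a = jpeg_roundtrip(img, quality)
        # Path B: downsample by 2, JPEG, then upsample back (bicubic stands in for CSI).
        small = img.resize((img.width // 2, img.height // 2), Image.BICUBIC)
        rec_b = jpeg_roundtrip(small, quality).resize(img.size, Image.BICUBIC)
        # Keep the better reconstruction (a real coder would match rates more carefully).
        return ('downsampled', rec_b) if psnr(img, rec_b) > psnr(img, rec_a) else ('standard', rec_a)

    # Usage: mode, reconstruction = adaptive_code(Image.open('photo.png').convert('L'))
    ```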

  9. Image Analysis and Modeling

    DTIC Science & Technology

    1976-03-01

    This report summarizes the results of the research program on Image Analysis and Modeling supported by the Defense Advanced Research Projects Agency...The objective is to achieve a better understanding of image structure and to use this knowledge to develop improved image models for use in image ... analysis and processing tasks such as information extraction, image enhancement and restoration, and coding. The ultimate objective of this research is

  10. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser

    PubMed Central

    Almeida, Jonas S.; Iriabho, Egiebade E.; Gorrepati, Vijaya L.; Wilkinson, Sean R.; Grüneberg, Alexander; Robbins, David E.; Hackney, James R.

    2012-01-01

    Background: Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. Materials and Methods: ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Results: Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. Conclusions: The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local “download and installation”. PMID:22934238

  11. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser.

    PubMed

    Almeida, Jonas S; Iriabho, Egiebade E; Gorrepati, Vijaya L; Wilkinson, Sean R; Grüneberg, Alexander; Robbins, David E; Hackney, James R

    2012-01-01

    Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local "download and installation".

  12. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Huang, H. K.

    1990-07-01

    The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include: geometric and densitometric volumes and ejection fraction calculation from radionuclide and cine-angiograms; Fourier analysis of cardiac wall motion; vascular stenosis measurement; color-coded parametric display of regional flow distribution from dynamic coronary angiograms; and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. 1. Rationale and objectives of the project: Developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images. During the past decade

  13. Towards an Analysis of Visual Images in School Science Textbooks and Press Articles about Science and Technology

    NASA Astrophysics Data System (ADS)

    Dimopoulos, Kostas; Koulaidis, Vasilis; Sklaveniti, Spyridoula

    2003-04-01

    This paper aims at presenting the application of a grid for the analysis of the pedagogic functions of visual images included in school science textbooks and daily press articles about science and technology. The analysis is made using the dimensions of content specialisation (classification) and social-pedagogic relationships (framing) promoted by the images as well as the elaboration and abstraction of the corresponding visual code (formality), thus combining pedagogical and socio-semiotic perspectives. The grid is applied to the analysis of 2819 visual images collected from school science textbooks and another 1630 visual images additionally collected from the press. The results show that the science textbooks in comparison to the press material: a) use ten times more images, b) use more images so as to familiarise their readers with the specialised techno-scientific content and codes, and c) tend to create a sense of higher empowerment for their readers by using the visual mode. Furthermore, as the educational level of the school science textbooks (i.e., from primary to lower secondary level) rises, the content specialisation projected by the visual images and the elaboration and abstraction of the corresponding visual code also increases. The above results have implications for the terms and conditions for the effective exploitation of visual material as the educational level rises as well as for the effective incorporation of visual images from press material into science classes.

  14. The Pan-STARRS PS1 Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Magnier, E.

    The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It also is responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and to occasionally use the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel hardware configuration necessary to support PS1 operational requirements. Finally, results are presented of the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.

  15. Environmental Characterization for Target Acquisition. Report 2. Analysis of Thermal and Visible Imagery

    DTIC Science & Technology

    1993-11-01

    The available text for this record consists only of table-of-contents and appendix fragments, covering image metrics, analysis procedures, and listings of image processing software source code.

  16. Medical image classification based on multi-scale non-negative sparse coding.

    PubMed

    Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar

    2017-11-01

    With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap problem between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. Firstly, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Secondly, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain the discriminative sparse representation of medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is used to perform medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
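
    A small numpy sketch of the non-negative sparse coding step named above: each feature vector is approximated as a non-negative, sparse combination of dictionary atoms by projected gradient descent on an L1-regularized least-squares objective. The multi-scale decomposition, the Fisher-discriminative dictionary and the final SVM stage of the paper are not reproduced; the dictionary sizes and step sizes below are made up for illustration.

    ```python
    import numpy as np

    def nn_sparse_code(x, D, lam=0.1, lr=0.01, n_iter=500):
        """Approximately solve min_w ||x - D w||^2 + lam*sum(w) subject to w >= 0."""
        w = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ w - x) + lam        # gradient of the smooth part plus the L1 slope
            w = np.maximum(0.0, w - lr * grad)    # gradient step, then project onto w >= 0
        return w

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))            # dictionary: 256 atoms of dimension 64
    D /= np.linalg.norm(D, axis=0)                # unit-norm atoms keep the step size reasonable
    x = rng.standard_normal(64)
    w = nn_sparse_code(x, D)
    print('non-zero coefficients:', int((w > 0).sum()))
    ```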

  17. Multiplex Quantitative Histologic Analysis of Human Breast Cancer Cell Signaling and Cell Fate

    DTIC Science & Technology

    2010-05-01

    Breast cancer, cell signaling, cell proliferation, histology, image analysis 15. NUMBER OF PAGES - 51 16. PRICE CODE 17. SECURITY CLASSIFICATION...revealed by individual stains in multiplex combinations; and (3) software (FARSIGHT) for automated multispectral image analysis that (i) segments...Task 3. Develop computational algorithms for multispectral immunohistological image analysis FARSIGHT software was developed to quantify intrinsic

  18. Novel microscopy-based screening method reveals regulators of contact-dependent intercellular transfer

    PubMed Central

    Michael Frei, Dominik; Hodneland, Erlend; Rios-Mondragon, Ivan; Burtey, Anne; Neumann, Beate; Bulkescher, Jutta; Schölermann, Julia; Pepperkok, Rainer; Gerdes, Hans-Hermann; Kögel, Tanja

    2015-01-01

    Contact-dependent intercellular transfer (codeIT) of cellular constituents can have functional consequences for recipient cells, such as enhanced survival and drug resistance. Pathogenic viruses, prions and bacteria can also utilize this mechanism to spread to adjacent cells and potentially evade immune detection. However, little is known about the molecular mechanism underlying this intercellular transfer process. Here, we present a novel microscopy-based screening method to identify regulators and cargo of codeIT. Single donor cells, carrying fluorescently labelled endocytic organelles or proteins, are co-cultured with excess acceptor cells. CodeIT is quantified by confocal microscopy and image analysis in 3D, preserving spatial information. An siRNA-based screening using this method revealed the involvement of several myosins and small GTPases as codeIT regulators. Our data indicates that cellular protrusions and tubular recycling endosomes are important for codeIT. We automated image acquisition and analysis to facilitate large-scale chemical and genetic screening efforts to identify key regulators of codeIT. PMID:26271723

  19. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  20. [Implications of mental image processing in the deficits of verbal information coding during normal aging].

    PubMed

    Plaie, Thierry; Thomas, Delphine

    2008-06-01

    Our study specifies the contributions of image generation and image maintenance processes occurring at the time of imaginal coding of verbal information in memory during normal aging. The memory capacities of 19 young adults (average age of 24 years) and 19 older adults (average age of 75 years) were assessed using recall tasks according to the imagery value of the stimuli to be learned. The mental visual imagery capacities were assessed using tasks of image generation and temporary storage of mental imagery. The analysis of variance indicates a greater decrease with age in the concreteness effect. The major contribution of our study rests on the fact that the decline with age of dual coding of verbal information in memory would result primarily from the decline of image maintenance capacities and from a slowdown in image generation. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  1. Low-level processing for real-time image analysis

    NASA Technical Reports Server (NTRS)

    Eskenazi, R.; Wilf, J. M.

    1979-01-01

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.
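
    The chain-code step can be sketched in a few lines of numpy: starting from a boundary pixel of a binary edge/object map, the contour is followed and each move to the next boundary pixel is recorded as one of eight Freeman directions. The real-time pipeline hardware of the paper is of course not modeled, and the stopping rule below (return to the start pixel) is simplified.

    ```python
    import numpy as np

    # 8-connected Freeman directions, counter-clockwise starting from "east".
    DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

    def chain_code(mask):
        """Return a Freeman chain code of the outer contour of a binary mask."""
        ys, xs = np.nonzero(mask)
        start = (ys.min(), xs[ys == ys.min()].min())      # top-most, then left-most pixel
        code, cur, prev_dir = [], start, 0
        while True:
            for k in range(8):
                d = (prev_dir + 6 + k) % 8                # start the search "behind" the last move
                ny, nx = cur[0] + DIRS[d][0], cur[1] + DIRS[d][1]
                if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] and mask[ny, nx]:
                    code.append(d)
                    cur, prev_dir = (ny, nx), d
                    break
            if cur == start or not code:                  # simplified termination criterion
                break
        return code

    square = np.zeros((8, 8), dtype=bool)
    square[2:6, 2:6] = True                               # a filled square as a toy object
    print(chain_code(square))                             # 12 moves around the 4x4 square
    ```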

  2. Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1993-01-01

    Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a Hamming neural network, edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on WT, wavelet analysis of global warming, use of the WT for signal detection, perfect reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of image of natural objects, number-theoretic coding for iconic systems.

  3. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
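
    A small numpy illustration of the entropy-based selection idea: the first-order entropy of the raw image is compared with the entropy of a candidate remapping (only horizontal DPCM residuals are shown here), and the lowest-entropy representation would then be handed to an arithmetic or other entropy coder. The feature-based segmentation and the rule base from the paper are not reproduced.

    ```python
    import numpy as np

    def entropy_bits(values):
        """First-order entropy in bits/pixel of an integer-valued array."""
        _, counts = np.unique(values, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def dpcm_residual(img):
        """Horizontal previous-pixel prediction residual (one simple remapping)."""
        res = img.astype(np.int32).copy()
        res[:, 1:] -= img[:, :-1].astype(np.int32)
        return res

    # A smooth synthetic test image: DPCM residuals should have much lower entropy.
    img = (np.cumsum(np.random.randn(128, 128), axis=1) * 10).astype(np.int32)
    print('raw  entropy:', entropy_bits(img))
    print('dpcm entropy:', entropy_bits(dpcm_residual(img)))
    ```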

  4. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    PubMed

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.

  5. End-to-end imaging information rate advantages of various alternative communication systems

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1982-01-01

    The efficiency of various deep space communication systems which are required to transmit both imaging and a typically error sensitive class of data called general science and engineering (gse) are compared. The approach jointly treats the imaging and gse transmission problems, allowing comparisons of systems which include various channel coding and data compression alternatives. Actual system comparisons include an advanced imaging communication system (AICS) which exhibits the rather significant advantages of sophisticated data compression coupled with powerful yet practical channel coding. For example, under certain conditions the improved AICS efficiency could provide as much as two orders of magnitude increase in imaging information rate compared to a single channel uncoded, uncompressed system while maintaining the same gse data rate in both systems. Additional details describing AICS compression and coding concepts as well as efforts to apply them are provided in support of the system analysis.

  6. Image Based Biomarker of Breast Cancer Risk: Analysis of Risk Disparity among Minority Populations

    DTIC Science & Technology

    2013-03-01

    Principal Investigator: Fengshan Liu. The indexed report fragments discuss identifying the prevalence of women with incomplete visualization of the breast; the authors developed a code to estimate the breast cancer risks.

  7. An Open Source Agenda for Research Linking Text and Image Content Features.

    ERIC Educational Resources Information Center

    Goodrum, Abby A.; Rorvig, Mark E.; Jeong, Ki-Tai; Suresh, Chitturi

    2001-01-01

    Proposes methods to utilize image primitives to support term assignment for image classification. Proposes to release code for image analysis in a common tool set for other researchers to use. Of particular focus is the expansion of work by researchers in image indexing to include image content-based feature extraction capabilities in their work.…

  8. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    PubMed

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems that occur in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. A proposed application and a sample result of hyperspectral image analysis are also presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Low-rate image coding using vector quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makur, A.

    1990-01-01

    This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
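
    For reference, here is a compact sketch of the conventional vector quantization baseline the thesis starts from: 4x4 image blocks are vectorized, a codebook is trained with k-means, and each block is replaced by the index of its nearest codeword. The thesis-specific refinements (distributed blocks, weighted distortion, neural-network coder) are not shown, and all sizes below are illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def blocks(img, b=4):
        """Split a grayscale image into non-overlapping b x b blocks, one vector per block."""
        h, w = img.shape
        v = img[:h - h % b, :w - w % b].reshape(h // b, b, w // b, b).swapaxes(1, 2)
        return v.reshape(-1, b * b), (h // b, w // b)

    def vq_code(img, codebook_size=64, b=4):
        vecs, grid = blocks(img.astype(float), b)
        km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(vecs)
        indices = km.predict(vecs)                         # this index map is what gets transmitted
        rec = km.cluster_centers_[indices].reshape(*grid, b, b).swapaxes(1, 2)
        rec = rec.reshape(grid[0] * b, grid[1] * b)        # decoder: table lookup plus reassembly
        bpp = np.log2(codebook_size) / (b * b)             # e.g. 64 codewords, 4x4 blocks -> 0.375 bpp
        return indices, rec, bpp

    img = np.random.rand(64, 64) * 255                     # stand-in for a monochrome image
    indices, reconstruction, bits_per_pixel = vq_code(img)
    print('rate:', bits_per_pixel, 'bits/pixel')
    ```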

  10. Swimsuit issues: promoting positive body image in young women's magazines.

    PubMed

    Boyd, Elizabeth Reid; Moncrieff-Boyd, Jessica

    2011-08-01

    This preliminary study reviews the promotion of healthy body image to young Australian women, following the 2009 introduction of the voluntary Industry Code of Conduct on Body Image. The Code includes using diverse sized models in magazines. A qualitative content analysis of the 2010 annual 'swimsuit issues' was conducted on 10 Australian young women's magazines. Pictorial and/or textual editorial evidence of promoting diverse body shapes and sizes was regarded as indicative of the magazines' upholding aspects of the voluntary Code of Conduct for Body Image. Diverse sized models were incorporated in four of the seven magazines with swimsuit features sampled. Body size differentials were presented as part of the swimsuit features in three of the magazines sampled. Tips for diverse body type enhancement were included in four of the magazines. All magazines met at least one criterion. One magazine displayed evidence of all three criteria. Preliminary examination suggests that more than half of young women's magazines are upholding elements of the voluntary Code of Conduct for Body Image, through representation of diverse-sized women in their swimsuit issues.

  11. Disability in Physical Education Textbooks: An Analysis of Image Content

    ERIC Educational Resources Information Center

    Taboas-Pais, Maria Ines; Rey-Cao, Ana

    2012-01-01

    The aim of this paper is to show how images of disability are portrayed in physical education textbooks for secondary schools in Spain. The sample was composed of 3,316 images published in 36 textbooks by 10 publishing houses. A content analysis was carried out using a coding scheme based on categories employed in other similar studies and adapted…

  12. Image Analysis and Modeling

    DTIC Science & Technology

    1975-08-01

    image analysis and processing tasks such as information extraction, image enhancement and restoration, coding, etc. The ultimate objective of this research is to form a basis for the development of technology relevant to military applications of machine extraction of information from aircraft and satellite imagery of the earth’s surface. This report discusses research activities during the three month period February 1 - April 30,

  13. Experimental design and analysis of JND test on coded image/video

    NASA Astrophysics Data System (ADS)

    Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay

    2015-09-01

    The visual Just-Noticeable-Difference (JND) metric is characterized by the detectable minimum amount of difference between two visual stimuli. Conducting the subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrates by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate the difference in perception in a psychophysical experiment. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on image and video JND tests.
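
    The bisection procedure can be outlined in a few lines: given an anchor and an ordered list of coded versions, a binary forced-choice comparison is queried at the midpoint of the current interval until the JND boundary is bracketed. The observer function below is a stubbed placeholder assumption, not part of the paper.

    ```python
    def find_jnd(levels, anchor_idx, observer_says_different):
        """Return the index of the first level the observer can tell apart from the anchor.

        `levels` is ordered from the anchor quality downwards; `observer_says_different(i)`
        encapsulates one binary forced-choice trial of level i against the anchor.
        """
        lo, hi = anchor_idx, len(levels) - 1
        if not observer_says_different(hi):
            return None                      # even the worst level is indistinguishable
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if observer_says_different(mid):
                hi = mid                     # the JND lies at or before mid
            else:
                lo = mid                     # still indistinguishable; search lower quality
        return hi

    # Toy usage: pretend quality levels 0..9 exist and anything from level 6 on is visibly worse.
    print(find_jnd(list(range(10)), 0, lambda i: i >= 6))   # -> 6
    ```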

  14. Evaluation of a color-coded Landsat 5/6 ratio image for mapping lithologic differences in western South Dakota

    USGS Publications Warehouse

    Raines, Gary L.; Bretz, R.F.; Shurr, George W.

    1979-01-01

    From analysis of a color-coded Landsat 5/6 ratio image, a map of the vegetation density distribution over 25,000 sq km of western South Dakota has been produced by Raines. This 5/6 ratio image is produced by digitally calculating the ratio of bands 5 and 6 of the Landsat data and then color coding these ratios in an image. Bretz and Shurr compared this vegetation density map with published and unpublished data, primarily of the U.S. Geological Survey and the South Dakota Geological Survey; good correspondence is seen between this map and existing geologic maps, especially the soils map. We believe that this Landsat ratio image can be used as a tool to refine existing maps of surficial geology and bedrock, where bedrock is exposed, and to improve mapping accuracy in areas of poor exposure common in South Dakota. In addition, this type of image could be a useful additional tool in mapping areas that are unmapped.
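    
    A minimal numpy/matplotlib sketch of the product described above: the ratio of band 5 to band 6 is computed pixel by pixel and then colour-coded with a colormap, which is essentially how a vegetation-density ratio image is built. The synthetic data and the colormap choice are illustrative assumptions.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    def ratio_image(band5, band6, eps=1e-6):
        """Band 5 / band 6 ratio, used as a simple proxy for vegetation density."""
        return band5.astype(float) / (band6.astype(float) + eps)

    band5 = np.random.randint(10, 120, (200, 200))   # stand-ins for calibrated Landsat bands
    band6 = np.random.randint(10, 120, (200, 200))
    r = ratio_image(band5, band6)

    plt.imshow(r, cmap='RdYlGn')                      # colour-code the ratio values
    plt.colorbar(label='band 5 / band 6')
    plt.title('Colour-coded Landsat 5/6 ratio (synthetic data)')
    plt.show()
    ```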

  15. Large deformation image classification using generalized locality-constrained linear coding.

    PubMed

    Zhang, Pei; Wee, Chong-Yaw; Niethammer, Marc; Shen, Dinggang; Yap, Pew-Thian

    2013-01-01

    Magnetic resonance (MR) imaging has been demonstrated to be very useful for clinical diagnosis of Alzheimer's disease (AD). A common approach to using MR images for AD detection is to spatially normalize the images by non-rigid image registration, and then perform statistical analysis on the resulting deformation fields. Due to the high nonlinearity of the deformation field, recent studies suggest using the initial momentum instead, as it lies in a linear space and fully encodes the deformation field. In this paper we explore the use of initial momentum for image classification by focusing on the problem of AD detection. Experiments on the public ADNI dataset show that the initial momentum, together with a simple sparse coding technique, locality-constrained linear coding (LLC), can achieve a classification accuracy that is comparable to or even better than the state of the art. We also show that the performance of LLC can be greatly improved by introducing proper weights to the codebook.
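    
    A brief numpy sketch of the LLC step referred to above, using the standard approximated LLC solution (k nearest codebook atoms, a small regularized least-squares solve, codes normalized to sum to one). The initial-momentum features and the AD classification pipeline of the paper are not reproduced; codebook and feature sizes are made up for illustration.

    ```python
    import numpy as np

    def llc_encode(x, codebook, k=5, beta=1e-4):
        """Encode one feature vector `x` against `codebook` (atoms in rows) with approximate LLC."""
        d = np.linalg.norm(codebook - x, axis=1)
        idx = np.argsort(d)[:k]                              # locality: keep only the k nearest atoms
        z = codebook[idx] - x                                # shift the selected atoms to the feature
        C = z @ z.T                                          # local covariance
        C += beta * np.trace(C) * np.eye(k)                  # regularization for numerical stability
        w = np.linalg.solve(C, np.ones(k))
        w /= w.sum()                                         # enforce the sum-to-one constraint
        code = np.zeros(len(codebook))
        code[idx] = w
        return code

    codebook = np.random.randn(256, 64)                      # 256 atoms of dimension 64
    feature = np.random.randn(64)
    print(llc_encode(feature, codebook).nonzero()[0])        # only k entries are non-zero
    ```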

  16. Real-time chirp-coded imaging with a programmable ultrasound biomicroscope.

    PubMed

    Bosisio, Mattéo R; Hasquenoph, Jean-Michel; Sandrin, Laurent; Laugier, Pascal; Bridal, S Lori; Yon, Sylvain

    2010-03-01

    Ultrasound biomicroscopy (UBM) of mice can provide a testing ground for new imaging strategies. The UBM system presented in this paper facilitates the development of imaging and measurement methods with programmable design, arbitrary waveform coding, broad bandwidth (2-80 MHz), digital filtering, programmable processing, RF data acquisition, multithread/multicore real-time display, and rapid mechanical scanning (

  17. Imaging Analysis of the Hard X-Ray Telescope ProtoEXIST2 and New Techniques for High-Resolution Coded-Aperture Telescopes

    NASA Technical Reports Server (NTRS)

    Hong, Jaesub; Allen, Branden; Grindlay, Jonathan; Barthelmy, Scott D.

    2016-01-01

    Wide-field (greater than or approximately equal to 100 degrees squared) hard X-ray coded-aperture telescopes with high angular resolution (less than or approximately equal to 2 arcminutes) will enable a wide range of time domain astrophysics. For instance, transient sources such as gamma-ray bursts can be precisely localized without the assistance of secondary focusing X-ray telescopes to enable rapid follow-up studies. On the other hand, high angular resolution in coded-aperture imaging introduces a new challenge in handling the systematic uncertainty: the average photon count per pixel is often too small to establish a proper background pattern or model the systematic uncertainty in a timescale where the model remains invariant. We introduce two new techniques to improve detection sensitivity, which are designed for, but not limited to, a high-resolution coded-aperture system: a self-background modeling scheme which utilizes continuous scan or dithering operations, and a Poisson-statistics based probabilistic approach to evaluate the significance of source detection without background subtraction. We illustrate these new imaging analysis techniques for a high-resolution coded-aperture telescope using the data acquired by the wide-field hard X-ray telescope ProtoEXIST2 during a high-altitude balloon flight in fall 2012. We review the imaging sensitivity of ProtoEXIST2 during the flight, and demonstrate the performance of the new techniques using our balloon flight data in comparison with a simulated ideal Poisson background.
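    
    A short scipy illustration of the Poisson-statistics detection idea mentioned above: instead of subtracting a background map, the significance of the counts in a sky pixel is taken directly from the Poisson survival function given the expected (modelled) background counts. The numbers below are made up.

    ```python
    from scipy.stats import poisson, norm

    def detection_sigma(observed_counts, expected_background):
        """Gaussian-equivalent significance of observing >= `observed_counts` given the background."""
        p_value = poisson.sf(observed_counts - 1, expected_background)  # P(N >= observed)
        return norm.isf(p_value)                                        # convert the p-value to sigmas

    print(detection_sigma(42, expected_background=20.0))   # a clear excess over background
    print(detection_sigma(23, expected_background=20.0))   # consistent with background
    ```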

  18. Newspaper Coverage of the 1992 Presidential Campaign: A Content Analysis of Character/Competence/Image Issues versus Platform/Political Issues.

    ERIC Educational Resources Information Center

    Sims, Judy R.; Giordano, Joseph

    A research study assessed the amount of front-page newspaper coverage allotted to "character/competence/image" issues versus "platform/political" issues in the 1992 presidential campaign. Using the methodology of content analysis, researchers coded the front pages of the following 5 newspapers between August 1 and…

  19. Reduction and coding of synthetic aperture radar data with Fourier transforms

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1995-01-01

    Recently, aboard the Space Radar Laboratory (SRL), the two roles of Fourier Transforms for ocean image synthesis and surface wave analysis have been implemented with a dedicated radar processor to significantly reduce Synthetic Aperture Radar (SAR) ocean data before transmission to the ground. The object was to archive the SAR image spectrum, rather than the SAR image itself, to reduce data volume and capture the essential descriptors of the surface wave field. SAR signal data are usually sampled and coded in the time domain for transmission to the ground where Fourier Transforms are applied both to individual radar pulses and to long sequences of radar pulses to form two-dimensional images. High resolution images of the ocean often contain no striking features and subtle image modulations by wind generated surface waves are only apparent when large ocean regions are studied, with Fourier transforms, to reveal periodic patterns created by wind stress over the surface wave field. Major ocean currents and atmospheric instability in coastal environments are apparent as large scale modulations of SAR imagery. This paper explores the possibility of computing complex Fourier spectrum codes representing SAR images, transmitting the coded spectra to Earth for data archives and creating scenes of surface wave signatures and air-sea interactions via inverse Fourier transformations with ground station processors.
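    
    A toy numpy sketch of the reduction idea described above: the two-dimensional FFT of an ocean image chip is computed, only the low-wavenumber region that carries the surface-wave signature is kept (and would be coded for downlink), and a scene is later re-created by an inverse FFT. Real SAR focusing and the SRL processor details are far beyond this illustration; sizes are arbitrary.

    ```python
    import numpy as np

    def compress_spectrum(image, keep=32):
        """Keep a (2*keep) x (2*keep) low-frequency block of the centred 2-D spectrum."""
        spec = np.fft.fftshift(np.fft.fft2(image))
        cy, cx = np.array(spec.shape) // 2
        return spec[cy - keep:cy + keep, cx - keep:cx + keep]

    def reconstruct(spec_block, shape):
        """Zero-pad the archived spectrum block back to full size and invert the FFT."""
        full = np.zeros(shape, dtype=complex)
        cy, cx = np.array(shape) // 2
        k = spec_block.shape[0] // 2
        full[cy - k:cy + k, cx - k:cx + k] = spec_block
        return np.fft.ifft2(np.fft.ifftshift(full)).real

    chip = np.random.randn(256, 256)           # stand-in for a SAR ocean image chip
    block = compress_spectrum(chip)             # 64x64 complex values instead of 256x256 pixels
    approx = reconstruct(block, chip.shape)     # smoothed scene recreated from the archived spectrum
    ```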

  20. Design and validation of Segment--freely available software for cardiovascular image analysis.

    PubMed

    Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan

    2010-01-11

    Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.

  1. LittleQuickWarp: an ultrafast image warping tool.

    PubMed

    Qu, Lei; Peng, Hanchuan

    2015-02-01

    Warping images into a standard coordinate space is critical for many image computing related tasks. However, for multi-dimensional and high-resolution images, an accurate warping operation itself is often very expensive in terms of computer memory and computational time. For high-throughput image analysis studies such as brain mapping projects, it is desirable to have high performance image warping tools that are compatible with common image analysis pipelines. In this article, we present LittleQuickWarp, a swift and memory efficient tool that boosts 3D image warping performance dramatically and at the same time has high warping quality similar to the widely used thin plate spline (TPS) warping. Compared to the TPS, LittleQuickWarp can improve the warping speed 2-5 times and reduce the memory consumption 6-20 times. We have implemented LittleQuickWarp as an Open Source plug-in program on top of the Vaa3D system (http://vaa3d.org). The source code and a brief tutorial can be found in the Vaa3D plugin source code repository. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to the “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling due to the different characteristics of HS images in their spectral and shape domain of panchromatic imagery compared to traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector for different spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit distribution of the known pixel vector, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as the additional reference band together with the immediate previous band when we apply the HEVC. Every spectral band of an HS image is treated like it is an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102

  3. Automated optical inspection and image analysis of superconducting radio-frequency cavities

    NASA Astrophysics Data System (ADS)

    Wenskat, M.

    2017-05-01

    The inner surface of superconducting cavities plays a crucial role in achieving the highest accelerating fields and low losses. For an investigation of this inner surface of more than 100 cavities within the cavity fabrication for the European XFEL and the ILC HiGrade Research Project, an optical inspection robot OBACHT was constructed. To analyze up to 2325 images per cavity, an image processing and analysis code was developed and new variables to describe the cavity surface were obtained. The accuracy of this code is up to 97 % and the positive predictive value (PPV) 99 % within the resolution of 15.63 μm. The optically obtained surface roughness is in agreement with standard profilometric methods. The image analysis algorithm identified and quantified vendor-specific fabrication properties such as the electron beam welding speed and the different surface roughness due to the different chemical treatments. In addition, a correlation of ρ = -0.93 with a significance of 6 σ between an obtained surface variable and the maximal accelerating field was found.

  4. Some selected quantitative methods of thermal image analysis in Matlab.

    PubMed

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on some selected automatic quantitative methods for analysing thermal images. It shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods for the area of the skin of a human foot and face. The full source code of the developed application is also provided as an attachment. The main window of the program is shown during dynamic analysis of the foot thermal image. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Research on lossless compression of true color RGB image with low time and space complexity

    NASA Astrophysics Data System (ADS)

    Pan, ShuLin; Xie, ChengJun; Xu, Lin

    2008-12-01

    Correlated redundancy in space and energy is eliminated by using a DWT lifting scheme, and the complexity of the image is reduced by an algebraic transform among the RGB components. An improved Rice coding algorithm is proposed in this paper, which presents an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm codes and decodes the pixels of an image without backtracking. It supports LOCO-I and can also be applied to other coders/decoders. Simulation analysis indicates that the proposed method can achieve high image compression. Compared with Lossless-JPEG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc.), SPIHT and JPEG-LS, the lossless image compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5% and 10%, respectively, on 24 RGB images provided by KoDa Inc. On a Pentium IV CPU at 2.20 GHz with 256 MB RAM, the coding speed of the proposed coder is about 21 times that of SPIHT, with a performance-efficiency improvement of about 166%; the decoding speed is about 17 times that of SPIHT, with an improvement of about 128%.
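
    The sketch below illustrates two lossless building blocks of the kind mentioned in the abstract: a reversible algebraic transform among the RGB components (the JPEG2000-style RCT is assumed here purely as an example) and one level of an integer 5/3 lifting DWT. It is not the paper's enumerating/renormalizing scheme.

```python
# Hedged sketch: reversible color transform plus integer 5/3 lifting step.
import numpy as np

def rct_forward(rgb):
    r, g, b = (rgb[..., i].astype(np.int32) for i in range(3))
    y = (r + 2 * g + b) >> 2          # luma-like component
    u = r - g                         # chroma differences, exactly invertible
    v = b - g
    return y, u, v

def rct_inverse(y, u, v):
    g = y - ((u + v) >> 2)
    return np.stack([u + g, g, v + g], axis=-1)

def lift53_rows(x):
    """One level of the integer 5/3 lifting transform along the last axis
    (assumes an even number of columns; periodic boundary handling for brevity)."""
    x = x.astype(np.int32)
    even, odd = x[..., 0::2], x[..., 1::2]
    d = odd - ((even + np.roll(even, -1, axis=-1)) >> 1)   # predict step
    s = even + ((d + np.roll(d, 1, axis=-1) + 2) >> 2)     # update step
    return s, d                                            # low-pass, high-pass
```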

  6. Use of zerotree coding in a high-speed pyramid image multiresolution decomposition

    NASA Astrophysics Data System (ADS)

    Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo

    1995-03-01

    A Zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of ZT coding to give very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N² different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations, and as a consequence it can be easily implemented in VLSI hardware; this is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the quantized image residual levels created by the HSP algorithm into a bit stream. The use of ZTs compresses the already compressed image even further by taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmitting those that form all-zero branches. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.
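
    The sketch below shows the parent-child idea in its simplest form: marking coefficients whose entire descendant subtree across pyramid levels is zero. The coarse-to-fine layout and the four-child mapping are assumptions made for illustration and are not the HSP code's exact structure.

```python
# Hedged sketch: detecting all-zero descendant trees (zerotree roots) in a
# quantized multiresolution pyramid, levels ordered coarse to fine.
import numpy as np

def subtree_is_zero(levels, l, i, j):
    """True if coefficient (i, j) at level l and all of its descendants are zero."""
    if levels[l][i, j] != 0:
        return False
    if l + 1 == len(levels):          # finest level: no children
        return True
    return all(subtree_is_zero(levels, l + 1, 2 * i + di, 2 * j + dj)
               for di in (0, 1) for dj in (0, 1))

def zerotree_roots(levels):
    """Boolean map per level marking coefficients whose whole subtree is zero."""
    return [np.array([[subtree_is_zero(levels, l, i, j)
                       for j in range(lev.shape[1])]
                      for i in range(lev.shape[0])])
            for l, lev in enumerate(levels)]

# Example: three quantized residue levels of sizes 4x4, 8x8 and 16x16.
rng = np.random.default_rng(0)
levels = [np.where(rng.random((4 * 2**l, 4 * 2**l)) > 0.6, 1, 0) for l in range(3)]
roots = zerotree_roots(levels)
```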

  7. The application of coded excitation technology in medical ultrasonic Doppler imaging

    NASA Astrophysics Data System (ADS)

    Li, Weifeng; Chen, Xiaodong; Bao, Jing; Yu, Daoyin

    2008-03-01

    Medical ultrasonic Doppler imaging is one of the most important domains of modern medical imaging technology. Applying coded excitation technology in a medical ultrasonic Doppler imaging system offers potentially higher SNR and deeper penetration than a conventional pulse-echo imaging system; it also improves image quality and enhances sensitivity to weak signals, and a properly chosen code benefits the received spectrum of the Doppler signal. Firstly, this paper analyzes the application of coded excitation technology in medical ultrasonic Doppler imaging in general terms, showing the advantages and prospects of coded excitation, and then introduces its principle and theory. Secondly, we compare several code sequences (including chirp and chirp-like signals, Barker codes, Golay complementary sequences, M-sequences, etc.). Considering mainlobe width, range sidelobe level, signal-to-noise ratio and Doppler sensitivity, we choose Barker codes as the excitation sequence. Finally, we design the coded excitation circuit. The results in B-mode imaging and Doppler flow measurement matched our expectations and demonstrated the advantage of applying coded excitation technology in the Digital Medical Ultrasonic Doppler Endoscope Imaging System.
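
    A quick numerical illustration of why Barker codes score well on the criteria above: the length-13 Barker code has an autocorrelation peak of 13 with sidelobes of magnitude 1, about a 22.3 dB peak-to-sidelobe ratio before any weighting. The snippet below simply verifies this.

```python
# Hedged sketch: autocorrelation of the length-13 Barker code.
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
acf = np.correlate(barker13, barker13, mode="full")

peak = acf.max()                                          # 13
max_sidelobe = np.abs(np.delete(acf, len(acf) // 2)).max()  # 1
print(peak, max_sidelobe, 20 * np.log10(peak / max_sidelobe))
```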

  8. Stroke Severity Affects Timing: Time From Stroke Code Activation to Initial Imaging is Longer in Patients With Milder Strokes.

    PubMed

    Kwei, Kimberly T; Liang, John; Wilson, Natalie; Tuhrim, Stanley; Dhamoon, Mandip

    2018-05-01

    Optimizing the time it takes to get a potential stroke patient to imaging is essential in a rapid stroke response. At our hospital, door-to-imaging time comprises two periods: the time before a stroke is recognized, followed by the period after the stroke code is called, during which the stroke team assesses the patient and brings the patient to the computed tomography scanner. To control for delays due to triage, we isolated the period after a potential stroke has been recognized, as few studies have examined the biases of stroke code responders. This "code-to-imaging time" (CIT) encompassed the time from stroke code activation to initial imaging, and we hypothesized that perception of stroke severity would affect how quickly stroke code responders act. In consecutively admitted ischemic stroke patients at The Mount Sinai Hospital emergency department, we tested associations between National Institutes of Health Stroke Scale (NIHSS) scores, continuously and at different cutoffs, and CIT using spline regression, t tests for univariate analysis, and multivariable linear regression adjusting for age, sex, and race/ethnicity. In our study population, mean CIT was 26 minutes, and mean presentation NIHSS was 8. In univariate and multivariate analyses comparing CIT between mild and severe strokes, stroke scale scores <4 were associated with longer response times. Milder strokes are associated with a longer CIT, with a threshold effect at an NIHSS of 4.

  9. Binary encoding of multiplexed images in mixed noise.

    PubMed

    Lalush, David S

    2008-09-01

    Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
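
    The sketch below mimics the mixed-noise setting described above: sources are switched according to a binary coding matrix, each measurement carries a constant term plus a signal-dependent term, and the image values are recovered by inverting the matrix. The specific noise criterion and the genetic-algorithm search from the paper are not reproduced.

```python
# Hedged sketch: Monte Carlo estimate of decoded noise for a binary coding matrix.
import numpy as np

def decoded_noise_std(W, signal, sigma_const=1.0, alpha_prop=0.05, trials=2000):
    """Average per-element noise std after decoding multiplexed measurements with W."""
    rng = np.random.default_rng(0)
    n = W.shape[0]
    Winv = np.linalg.inv(W.astype(float))
    errs = []
    for _ in range(trials):
        clean = W @ signal                                          # multiplexed measurements
        noisy = clean + rng.normal(0.0, sigma_const, n)             # constant noise
        noisy += rng.normal(0.0, alpha_prop * np.sqrt(np.abs(clean)))  # signal-dependent noise
        errs.append(Winv @ noisy - signal)
    return np.std(errs, axis=0).mean()

# Compare single-source measurements (identity) against a simple binary code
# in which every measurement turns on 6 of the 7 sources.
signal = np.array([50.0, 20.0, 80.0, 10.0, 60.0, 30.0, 40.0])
identity = np.eye(7)
code = np.ones((7, 7)) - np.eye(7)
print(decoded_noise_std(identity, signal), decoded_noise_std(code, signal))
```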

  10. Advanced imaging techniques in brain tumors

    PubMed Central

    2009-01-01

    Perfusion, permeability and magnetic resonance spectroscopy (MRS) are now widely used in research and clinical settings. In the clinical setting, qualitative, semi-quantitative and quantitative approaches, ranging from review of color-coded maps to region-of-interest analysis and analysis of signal intensity curves, are being applied in practice. There are several pitfalls with all of these approaches. Some of these shortcomings are reviewed, such as the relatively low sensitivity of metabolite ratios from MRS and the effect of leakage on the appearance of color-coded maps from dynamic susceptibility contrast (DSC) magnetic resonance (MR) perfusion imaging, along with the correction and normalization methods that can be applied. Combining and applying these different imaging techniques in a multi-parametric, algorithmic fashion in the clinical setting can be shown to increase diagnostic specificity and confidence. PMID:19965287

  11. The Scientific Image in Behavior Analysis.

    PubMed

    Keenan, Mickey

    2016-05-01

    Throughout the history of science, the scientific image has played a significant role in communication. With recent developments in computing technology, there has been an increase in the kinds of opportunities now available for scientists to communicate in more sophisticated ways. Within behavior analysis, though, we are only just beginning to appreciate the importance of going beyond the printing press to elucidate basic principles of behavior. The aim of this manuscript is to stimulate appreciation of both the role of the scientific image and the opportunities provided by a quick response code (QR code) for enhancing the functionality of the printed page. I discuss the limitations of imagery in behavior analysis ("Introduction"), and I show examples of what can be done with animations and multimedia for teaching philosophical issues that arise when teaching about private events ("Private Events 1 and 2"). Animations are also useful for bypassing ethical issues when showing examples of challenging behavior ("Challenging Behavior"). Each of these topics can be accessed only by scanning the QR code provided. This contingency has been arranged to help the reader embrace this new technology. In so doing, I hope to show its potential for going beyond the limitations of the printing press.

  12. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems which use numerical approximations even in cases where closed-form solutions exist. AutoBayes is implemented in Prolog and comprises approximately 75,000 lines of code. In this paper, we take one typical scientific data analysis problem, analyzing planetary nebulae images taken by the Hubble Space Telescope, and show how AutoBayes can be used to automate the implementation of the necessary analysis programs. We initially follow the analysis described by Knuth and Hajian [KH02] and use AutoBayes to derive code for the published models. We show the details of the code derivation process, including the symbolic computations and automatic integration of library procedures, and compare the results of the automatically generated and manually implemented code. We then go beyond the original analysis and use AutoBayes to derive code for a simple image segmentation procedure based on a mixture model, which can be used to automate a manual preprocessing step. Finally, we combine the original approach with the simple segmentation, which yields a more detailed analysis. This also demonstrates that AutoBayes makes it easy to combine different aspects of data analysis.
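
    To make the mixture-model segmentation step concrete, the sketch below fits a two-component Gaussian mixture to pixel intensities by EM and assigns each pixel to its most likely component. It stands in for, and is not, the code that AutoBayes would synthesize from a model specification.

```python
# Hedged sketch: intensity-based mixture-model segmentation of an image.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_by_mixture(image, n_components=2):
    intensities = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(intensities)
    labels = gmm.predict(intensities).reshape(image.shape)
    # Relabel so that 0 is the component with the lowest mean (background).
    order = np.argsort(gmm.means_.ravel())
    return np.argsort(order)[labels]

# Synthetic example: a bright disc on a noisy background.
rng = np.random.default_rng(1)
img = rng.normal(10, 2, (64, 64))
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] += 25
mask = segment_by_mixture(img)
```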

  13. High dynamic range coding imaging system

    NASA Astrophysics Data System (ADS)

    Wu, Renfan; Huang, Yifan; Hou, Guangqi

    2014-10-01

    We present a high dynamic range (HDR) imaging system design scheme based on the coded aperture technique. This scheme can help us obtain HDR images with an extended depth of field. We adopt a sparse coding algorithm to design the coded patterns. Then we utilize the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images is reconstructed. We use existing algorithms to fuse those LDR images and display an HDR image. We build an optical simulation model and generate simulation images to verify the novel system.

  14. MO-F-CAMPUS-I-04: Characterization of Fan Beam Coded Aperture Coherent Scatter Spectral Imaging Methods for Differentiation of Normal and Neoplastic Breast Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, R; Albanese, K; Lakshmanan, M

    Purpose: This study intends to characterize the spectral and spatial resolution limits of various fan beam geometries for differentiation of normal and neoplastic breast structures via coded aperture coherent scatter spectral imaging techniques. In previous studies, pencil beam raster scanning methods using coherent scatter computed tomography and selected volume tomography have yielded excellent results for tumor discrimination. However, these methods don't readily conform to clinical constraints, primarily prolonged scan times and excessive dose to the patient. Here, we refine a fan beam coded aperture coherent scatter imaging system to characterize the tradeoffs between dose, scan time and image quality for breast tumor discrimination. Methods: An X-ray tube (125 kVp, 400 mAs) illuminated the sample with collimated fan beams of varying widths (3 mm to 25 mm). Scatter data was collected via two linear-array energy-sensitive detectors oriented parallel and perpendicular to the beam plane. An iterative reconstruction algorithm yields images of the sample's spatial distribution and respective spectral data for each location. To model in-vivo tumor analysis, surgically resected breast tumor samples were used in conjunction with lard, which has a form factor comparable to adipose (fat). Results: Quantitative analysis with the current setup geometry indicated optimal performance for beams up to 10 mm wide, with wider beams producing poorer spatial resolution. Scan time for a fixed volume was reduced by a factor of 6 when scanned with a 10 mm fan beam compared to a 1.5 mm pencil beam. Conclusion: The study demonstrates that fan beam coherent scatter spectral imaging for differentiation of normal and neoplastic breast tissues successfully reduces dose and scan times while sufficiently preserving spectral and spatial resolution. Future work to alter the coded aperture and detector geometries could potentially allow the use of even wider fans, thereby making coded aperture coherent scatter imaging a clinically viable method for breast cancer detection. United States Department of Homeland Security; Duke University Medical Center - Department of Radiology; Carl E Ravin Advanced Imaging Laboratories; Duke University Medical Physics Graduate Program.

  15. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compression ratios (30-40:1) that exceed those of all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and are 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding considered here exploits all the advantages of static VPIC and additionally reduces information along the temporal dimension, to achieve unprecedented image sequence coding performance.

  16. A Statistical Analysis of IrisCode and Its Security Implications.

    PubMed

    Kong, Adams Wai-Kin

    2015-03-01

    IrisCode has been used to gather iris data for 430 million people. Because of the huge impact of IrisCode, it is vital that it is completely understood. This paper first studies the relationship between bit probabilities and a mean of iris images (The mean of iris images is defined as the average of independent iris images.) and then uses the Chi-square statistic, the correlation coefficient and a resampling algorithm to detect statistical dependence between bits. The results show that the statistical dependence forms a graph with a sparse and structural adjacency matrix. A comparison of this graph with a graph whose edges are defined by the inner product of the Gabor filters that produce IrisCodes shows that partial statistical dependence is induced by the filters and propagates through the graph. Using this statistical information, the security risk associated with two patented template protection schemes that have been deployed in commercial systems for producing application-specific IrisCodes is analyzed. To retain high identification speed, they use the same key to lock all IrisCodes in a database. The belief has been that if the key is not compromised, the IrisCodes are secure. This study shows that even without the key, application-specific IrisCodes can be unlocked and that the key can be obtained through the statistical dependence detected.

  17. DynamiX, numerical tool for design of next-generation x-ray telescopes.

    PubMed

    Chauvin, Maxime; Roques, Jean-Pierre

    2010-07-20

    We present a new code aimed at the simulation of grazing-incidence x-ray telescopes subject to deformations and demonstrate its ability with two test cases: the Simbol-X and the International X-ray Observatory (IXO) missions. The code, based on Monte Carlo ray tracing, computes the full photon trajectories up to the detector plane, accounting for the x-ray interactions and for the telescope motion and deformation. The simulation produces images and spectra for any telescope configuration using Wolter I mirrors and semiconductor detectors. This numerical tool allows us to study the telescope performance in terms of angular resolution, effective area, and detector efficiency, accounting for the telescope behavior. We have implemented an image reconstruction method based on the measurement of the detector drifts by an optical sensor metrology. Using an accurate metrology, this method allows us to recover the loss of angular resolution induced by the telescope instability. In the framework of the Simbol-X mission, this code was used to study the impacts of the parameters on the telescope performance. In this paper we present detailed performance analysis of Simbol-X, taking into account the satellite motions and the image reconstruction. To illustrate the versatility of the code, we present an additional performance analysis with a particular configuration of IXO.

  18. Digital Image Correlation Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Dan; Crozier, Paul; Reu, Phil

    DICe is an open source digital image correlation (DIC) tool intended for use as a module in an external application or as a standalone analysis code. Its primary capability is computing full-field displacements and strains from sequences of digital images. These images are typically of a material sample undergoing a materials characterization experiment, but DICe is also useful for other applications (for example, trajectory tracking). DICe is machine portable (Windows, Linux and Mac) and can be effectively deployed on a high performance computing platform. Capabilities from DICe can be invoked through a library interface, via source code integration of DICe classes, or through a graphical user interface.
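
    For orientation, the sketch below shows the core DIC idea in its most basic form: tracking a small reference subset between two frames by maximizing zero-normalized cross correlation over integer displacements. It is a generic illustration and not DICe's own algorithm or interface.

```python
# Hedged sketch: integer-pixel subset tracking by zero-normalized cross correlation.
import numpy as np

def zncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def track_subset(ref, cur, y, x, half=10, search=5):
    """Find the integer displacement of the subset centred at (y, x)."""
    tpl = ref[y - half:y + half + 1, x - half:x + half + 1]
    best, best_uv = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[y + dy - half:y + dy + half + 1, x + dx - half:x + dx + half + 1]
            score = zncc(tpl, win)
            if score > best:
                best, best_uv = score, (dy, dx)
    return best_uv, best

# Synthetic test: a speckle-like image shifted by (2, -3) pixels.
rng = np.random.default_rng(2)
ref = rng.random((128, 128))
cur = np.roll(ref, (2, -3), axis=(0, 1))
print(track_subset(ref, cur, 64, 64))   # expected displacement (2, -3)
```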

  19. Visualization and Analysis of Microtubule Dynamics Using Dual Color-Coded Display of Plus-End Labels

    PubMed Central

    Garrison, Amy K.; Xia, Caihong; Wang, Zheng; Ma, Le

    2012-01-01

    Investigating spatial and temporal control of microtubule dynamics in live cells is critical to understanding cell morphogenesis in development and disease. Tracking fluorescently labeled plus-end-tracking proteins over time has become a widely used method to study microtubule assembly. Here, we report a complementary approach that uses only two images of these labels to visualize and analyze microtubule dynamics at any given time. Using a simple color-coding scheme, labeled plus-ends from two sequential images are pseudocolored with different colors and then merged to display color-coded ends. Based on object recognition algorithms, these colored ends can be identified and segregated into dynamic groups corresponding to four events, including growth, rescue, catastrophe, and pause. Further analysis yields not only their spatial distribution throughout the cell but also provides measurements such as growth rate and direction for each labeled end. We have validated the method by comparing our results with ground-truth data derived from manual analysis as well as with data obtained using the tracking method. In addition, we have confirmed color-coded representation of different dynamic events by analyzing their history and fate. Finally, we have demonstrated the use of the method to investigate microtubule assembly in cells and provided guidance in selecting optimal image acquisition conditions. Thus, this simple computer vision method offers a unique and quantitative approach to study spatial regulation of microtubule dynamics in cells. PMID:23226282
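
    The sketch below illustrates the dual color-coding idea described above: labels from two sequential frames are placed in separate color channels and merged, so ends present only in the first frame appear in one color, ends only in the second frame in another, and overlapping ends in the mixed color. The thresholds and the crude event counts are illustrative, not the paper's object-recognition pipeline.

```python
# Hedged sketch: dual color-coded display of plus-end labels from two frames.
import numpy as np

def color_code_pair(frame1, frame2, thresh=0.5):
    """Return an RGB image with frame1 labels in red and frame2 labels in green."""
    m1 = (frame1 > thresh).astype(float)
    m2 = (frame2 > thresh).astype(float)
    rgb = np.zeros(frame1.shape + (3,))
    rgb[..., 0] = m1          # red channel: earlier frame
    rgb[..., 1] = m2          # green channel: later frame
    return rgb                # overlap renders yellow (red + green)

def classify(rgb):
    """Crude stand-in for event grouping: red-only, green-only and overlapping ends."""
    red_only = (rgb[..., 0] > 0) & (rgb[..., 1] == 0)
    green_only = (rgb[..., 1] > 0) & (rgb[..., 0] == 0)
    both = (rgb[..., 0] > 0) & (rgb[..., 1] > 0)
    return red_only.sum(), green_only.sum(), both.sum()
```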

  20. Supercomputer description of human lung morphology for imaging analysis.

    PubMed

    Martonen, T B; Hwang, D; Guan, X; Fleming, J S

    1998-04-01

    A supercomputer code that describes the three-dimensional branching structure of the human lung has been developed. The algorithm was written for the Cray C94. In our simulations, the human lung was divided into a matrix containing discrete volumes (voxels) so as to be compatible with analyses of SPECT images. The matrix has 3840 voxels. The matrix can be segmented into transverse, sagittal and coronal layers analogous to human subject examinations. The compositions of individual voxels were identified by the type and respective number of airways present. The code provides a mapping of the spatial positions of the almost 17 million airways in human lungs and unambiguously assigns each airway to a voxel. Thus, the clinician and research scientist in the medical arena have a powerful new tool to be used in imaging analyses. The code was designed to be integrated into diverse applications, including the interpretation of SPECT images, the design of inhalation exposure experiments and the targeted delivery of inhaled pharmacologic drugs.

  1. Improvement of Speckle Contrast Image Processing by an Efficient Algorithm.

    PubMed

    Steimers, A; Farnung, W; Kohl-Bareis, M

    2016-01-01

    We demonstrate an efficient algorithm for the temporal- and spatial-based calculation of speckle contrast for the imaging of blood flow by laser speckle contrast analysis (LASCA). It reduces the numerical complexity of the necessary calculations, facilitates multi-core and many-core implementations of the speckle analysis, and makes the SNR independent of the temporal or spatial resolution. The new algorithm was evaluated for both spatial- and temporal-based analysis of speckle patterns with different image sizes and numbers of recruited pixels, as sequential, multi-core and many-core code.
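
    For reference, the basic LASCA quantity is the speckle contrast K = sigma / mean computed either over a small spatial window or over time. The sketch below computes both in a straightforward way; the window size and the uniform-filter approach are illustrative choices, not the paper's optimized algorithm.

```python
# Hedged sketch: spatial and temporal speckle contrast for LASCA.
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_speckle_contrast(frame, window=7):
    frame = frame.astype(float)
    mean = uniform_filter(frame, window)
    mean_sq = uniform_filter(frame * frame, window)
    var = np.clip(mean_sq - mean * mean, 0.0, None)
    return np.sqrt(var) / (mean + 1e-12)

def temporal_speckle_contrast(stack):
    """stack: (T, H, W) sequence of raw speckle frames."""
    stack = stack.astype(float)
    return stack.std(axis=0) / (stack.mean(axis=0) + 1e-12)
```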

  2. Carbon Nanostructure Examined by Lattice Fringe Analysis of High Resolution Transmission Electron Microscopy Images

    NASA Technical Reports Server (NTRS)

    VanderWal, Randy L.; Tomasek, Aaron J.; Street, Kenneth; Thompson, William K.

    2002-01-01

    The dimensions of graphitic layer planes directly affect the reactivity of soot towards oxidation and growth. Quantification of graphitic structure could be used to develop and test correlations between the soot nanostructure and its reactivity. Based upon transmission electron microscopy images, this paper provides a demonstration of the robustness of a fringe image analysis code for determining the level of graphitic structure within nanoscale carbon, i.e. soot. Results, in the form of histograms of graphitic layer plane lengths, are compared to their determination through Raman analysis.

  3. Carbon Nanostructure Examined by Lattice Fringe Analysis of High Resolution Transmission Electron Microscopy Images

    NASA Technical Reports Server (NTRS)

    VanderWal, Randy L.; Tomasek, Aaron J.; Street, Kenneth; Thompson, William K.; Hull, David R.

    2003-01-01

    The dimensions of graphitic layer planes directly affect the reactivity of soot towards oxidation and growth. Quantification of graphitic structure could be used to develop and test correlations between the soot nanostructure and its reactivity. Based upon transmission electron microscopy images, this paper provides a demonstration of the robustness of a fringe image analysis code for determining the level of graphitic structure within nanoscale carbon, i.e., soot. Results, in the form of histograms of graphitic layer plane lengths, are compared to their determination through Raman analysis.

  4. Moderate Deviation Analysis for Classical Communication over Quantum Channels

    NASA Astrophysics Data System (ADS)

    Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco

    2017-11-01

    We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes, we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.

  5. Survey Of Lossless Image Coding Techniques

    NASA Astrophysics Data System (ADS)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence their higher pel correlation leads to a greater removal of image redundancy.

  6. Towards an Analysis of Visual Images in School Science Textbooks and Press Articles about Science and Technology.

    ERIC Educational Resources Information Center

    Dimopoulos, Kostas; Koulaidis, Vasilis; Sklaveniti, Spyridoula

    2003-01-01

    Analyzes the pedagogic functions of visual images included in school science textbooks and daily press articles about science and technology. Indicates that the science textbooks (a) use 10 times more images, (b) use more images so as to familiarize their readers with the specialized techno-scientific content and codes, and (c) tend to create a…

  7. X-Ray Phase Imaging for Breast Cancer Detection

    DTIC Science & Technology

    2012-09-01

    the Gerchberg-Saxton algorithm in the Fresnel diffraction regime, and is much more robust against image noise than the TIE-based method. For details... developed efficient coding with the software modules for image registration, flat-field correction, and phase retrievals. In addition, we... X, Liu H. 2010. Performance analysis of the attenuation-partition based iterative phase retrieval algorithm for in-line phase-contrast imaging

  8. Research on pre-processing of QR Code

    NASA Astrophysics Data System (ADS)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR codes encode many kinds of information thanks to their advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size, highly efficient representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR codes, this paper researches pre-processing methods for QR (Quick Response) codes and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive binarization method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
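
    As a point of reference, the sketch below implements Sauvola-style local thresholding, the kind of adaptive binarization referred to above: T = m * (1 + k * (s / R - 1)) with local mean m and local standard deviation s. The window size, k and R are typical defaults, not values from the paper.

```python
# Hedged sketch: Sauvola-style adaptive binarization of a grayscale QR image.
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(gray, window=25, k=0.2, R=128.0):
    gray = gray.astype(float)
    mean = uniform_filter(gray, window)
    mean_sq = uniform_filter(gray * gray, window)
    std = np.sqrt(np.clip(mean_sq - mean * mean, 0.0, None))
    threshold = mean * (1.0 + k * (std / R - 1.0))
    return (gray > threshold).astype(np.uint8)   # 1 = light background, 0 = dark modules
```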

  9. An embedded barcode for "connected" malaria rapid diagnostic tests.

    PubMed

    Scherr, Thomas F; Gupta, Sparsh; Wright, David W; Haselton, Frederick R

    2017-03-29

    Many countries are shifting their efforts from malaria control to disease elimination. New technologies will be necessary to meet the more stringent demands of elimination campaigns, including improved quality control of malaria diagnostic tests, as well as an improved means for communicating test results among field healthcare workers, test manufacturers, and national ministries of health. In this report, we describe and evaluate an embedded barcode within standard rapid diagnostic tests as one potential solution. This information-augmented diagnostic test operates on the familiar principles of traditional lateral flow assays and simply replaces the control line with a control grid patterned in the shape of a QR (quick response) code. After the test is processed, the QR code appears on both positive and negative tests. In this report we demonstrate how this multipurpose code can be used not only to fulfill the control line role of test validation, but also to embed test manufacturing details, serve as a trigger for image capture, enable registration for image analysis, and correct for lighting effects. An accompanying mobile phone application automatically captures an image of the test when the QR code is recognized, decodes the QR code, performs image processing to determine the concentration of the malarial biomarker histidine-rich protein 2 at the test line, and transmits the test results and QR code payload to a secure web portal. This approach blends automated, sub-nanomolar biomarker detection with near real-time reporting to provide quality assurance data that will help to achieve malaria elimination.

  10. A Picture is Worth 1,000 Words. The Use of Clinical Images in Electronic Medical Records.

    PubMed

    Ai, Angela C; Maloney, Francine L; Hickman, Thu-Trang; Wilcox, Allison R; Ramelson, Harley; Wright, Adam

    2017-07-12

    To understand how clinicians utilize image uploading tools in a homegrown electronic health records (EHR) system, a content analysis of patient notes containing non-radiological images from the EHR was conducted. Images from 4,000 random notes from July 1, 2009, to June 30, 2010, were reviewed and manually coded. Codes were assigned to four properties of the image: (1) image type, (2) role of image uploader (e.g., MD, NP, PA, RN), (3) practice type (e.g., internal medicine, dermatology, ophthalmology), and (4) image subject. 3,815 images from image-containing notes stored in the EHR were reviewed and manually coded. Of those images, 32.8% were clinical and 66.2% were non-clinical. The most common types of clinical images were photographs (38.0%), diagrams (19.1%), and scanned documents (14.4%). MDs uploaded 67.9% of clinical images, followed by RNs with 10.2%, and genetic counselors with 6.8%. Dermatology (34.9%), ophthalmology (16.1%), and general surgery (10.8%) uploaded the most clinical images. The content of clinical images referencing body parts varied, with 49.8% of those images focusing on the head and neck region, 15.3% focusing on the thorax, and 13.8% focusing on the lower extremities. The diversity of image types, content, and uploaders within a homegrown EHR system reflected the versatility and importance of the image uploading tool. Understanding how users utilize image uploading tools in a clinical setting highlights important considerations for designing better EHR tools and the importance of interoperability between EHR systems and other health technology.

  11. Tunable wavefront coded imaging system based on detachable phase mask: Mathematical analysis, optimization and underlying applications

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Wei, Jingxuan

    2014-09-01

    The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress in Electronics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make the defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) A mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity. (2) The mathematical derivations show that the effective bandwidth of the wavefront coded imaging system is also tunable by making each component of the detachable phase mask move asymmetrically. An improved Fisher-information-based optimization procedure was also designed to ascertain the optimal mask parameters corresponding to a specific bandwidth. (3) Possible applications of the tunable bandwidth are demonstrated by simulated imaging.

  12. 'Strong is the new skinny': A content analysis of #fitspiration images on Instagram.

    PubMed

    Tiggemann, Marika; Zaccardo, Mia

    2018-07-01

    'Fitspiration' is an online trend designed to inspire viewers towards a healthier lifestyle by promoting exercise and healthy food. This study provides a content analysis of fitspiration imagery on the social networking site Instagram. A set of 600 images were coded for body type, activity, objectification and textual elements. Results showed that the majority of images of women contained only one body type: thin and toned. In addition, most images contained objectifying elements. Accordingly, while fitspiration images may be inspirational for viewers, they also contain a number of elements likely to have negative effects on the viewer's body image.

  13. Fringe image processing based on structured light series

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Li, Hongyan

    2009-11-01

    Code analysis of the fringe image plays a vital role in the data acquisition of structured light systems, affecting the precision, computational speed and reliability of the measurement process. Based on the self-normalizing characteristic, a fringe image processing method using a structured light series is proposed. In this method, a series of projected patterns is used to detect the fringe order of the image pixels. The structured light system geometry is presented, consisting of a white light projector and a digital camera: the former projects sinusoidal fringe patterns onto the object, and the latter acquires the fringe patterns deformed by the object's shape. Binary images with distinct white and black stripes can then be obtained, and the ability to resist image noise is improved greatly. The proposed method can be implemented easily and applied to profile measurement based on a special binary code over a wide field.
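
    The sketch below shows one common way to turn a series of projected binary patterns into a per-pixel fringe order: each captured pattern is binarized against the pixel-wise mean of the series (a simple self-normalizing threshold), and the resulting bits are combined. The Gray-code choice is an illustrative assumption, not necessarily the paper's exact code.

```python
# Hedged sketch: decoding a per-pixel fringe order from a Gray-coded pattern series.
import numpy as np

def decode_fringe_order(captured):
    """captured: (N, H, W) stack of captured binary-pattern images, MSB first."""
    mean = captured.mean(axis=0, keepdims=True)
    bits = (captured > mean).astype(np.uint8)        # self-normalized binarization
    binary = np.zeros(captured.shape[1:], dtype=np.int64)
    acc = np.zeros_like(binary)
    for b in bits:                                   # Gray -> binary, MSB to LSB
        acc ^= b
        binary = (binary << 1) | acc
    return binary                                    # fringe order per pixel
```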

  14. SDL: Saliency-Based Dictionary Learning Framework for Image Similarity.

    PubMed

    Sarkar, Rituparna; Acton, Scott T

    2018-02-01

    In image classification, obtaining adequate data to learn a robust classifier has often proven to be difficult in several scenarios. Classification of histological tissue images for health care analysis is a notable application in this context due to the necessity of surgery, biopsy or autopsy. To adequately exploit limited training data in classification, we propose a saliency guided dictionary learning method and subsequently an image similarity technique for histo-pathological image classification. Salient object detection from images aids in the identification of discriminative image features. We leverage the saliency values for the local image regions to learn a dictionary and respective sparse codes for an image, such that the more salient features are reconstructed with smaller error. The dictionary learned from an image gives a compact representation of the image itself and is capable of representing images with similar content, with comparable sparse codes. We employ this idea to design a similarity measure between a pair of images, where local image features of one image, are encoded with the dictionary learned from the other and vice versa. To effectively utilize the learned dictionary, we take into account the contribution of each dictionary atom in the sparse codes to generate a global image representation for image comparison. The efficacy of the proposed method was evaluated using three tissue data sets that consist of mammalian kidney, lung and spleen tissue, breast cancer, and colon cancer tissue images. From the experiments, we observe that our methods outperform the state of the art with an increase of 14.2% in the average classification accuracy over all data sets.

  15. Image Tracing: An Analysis of Its Effectiveness in Children's Pictorial Discrimination Learning

    ERIC Educational Resources Information Center

    Levin, Joel R.; And Others

    1977-01-01

    A total of 45 fifth grade students were the subjects of an experiment offering support for a component of learning strategy (memory imagery). Various theoretical explanations of the image-tracing phenomenon are considered, including depth of processing, dual coding and frequency. (MS)

  16. A method for determining electrophoretic and electroosmotic mobilities using AC and DC electric field particle displacements.

    PubMed

    Oddy, M H; Santiago, J G

    2004-01-01

    We have developed a method for measuring the electrophoretic mobility of submicrometer, fluorescently labeled particles and the electroosmotic mobility of a microchannel. We derive explicit expressions for the unknown electrophoretic and the electroosmotic mobilities as a function of particle displacements resulting from alternating current (AC) and direct current (DC) applied electric fields. Images of particle displacements are captured using an epifluorescent microscope and a CCD camera. A custom image-processing code was developed to determine image streak lengths associated with AC measurements, and a custom particle tracking velocimetry (PTV) code was devised to determine DC particle displacements. Statistical analysis was applied to relate mobility estimates to measured particle displacement distributions.

  17. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  18. Toward a standard reference database for computer-aided mammography

    NASA Astrophysics Data System (ADS)

    Oliveira, Júlia E. E.; Gueld, Mark O.; de A. Araújo, Arnaldo; Ott, Bastian; Deserno, Thomas M.

    2008-03-01

    Because of the lack of mammography databases with a large number of codified images and identified characteristics such as pathology, type of breast tissue, and abnormality, the development of robust systems for computer-aided diagnosis is hindered. Integrated into the Image Retrieval in Medical Applications (IRMA) project, we present an available mammography database developed from the union of: The Mammographic Image Analysis Society Digital Mammogram Database (MIAS), The Digital Database for Screening Mammography (DDSM), the Lawrence Livermore National Laboratory (LLNL), and routine images from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen. Using the IRMA code, standardized coding of tissue type, tumor staging, and lesion description was developed according to the American College of Radiology (ACR) tissue codes and the ACR breast imaging reporting and data system (BI-RADS). The import was done automatically using scripts for image download, file format conversion, file naming, and web page and information file browsing. Disregarding the resolution, this resulted in a total of 10,509 reference images, of which 6,767 images are associated with an IRMA contour information feature file. In accordance with the respective license agreements, the database will be made freely available for research purposes and may be used for image-based evaluation campaigns such as the Cross Language Evaluation Forum (CLEF). We have also shown that it can be extended easily with further cases imported from a picture archiving and communication system (PACS).

  19. Verification of the FBR fuel bundle-duct interaction analysis code BAMBOO by the out-of-pile bundle compression test with large diameter pins

    NASA Astrophysics Data System (ADS)

    Uwaba, Tomoyuki; Ito, Masahiro; Nemoto, Junichi; Ichikawa, Shoichi; Katsuyama, Kozo

    2014-09-01

    The BAMBOO computer code was verified by results for the out-of-pile bundle compression test with large diameter pin bundle deformation under the bundle-duct interaction (BDI) condition. The pin diameters of the examined test bundles were 8.5 mm and 10.4 mm, which are targeted as preliminary fuel pin diameters for the upgraded core of the prototype fast breeder reactor (FBR) and for demonstration and commercial FBRs studied in the FaCT project. In the bundle compression test, bundle cross-sectional views were obtained from X-ray computer tomography (CT) images and local parameters of bundle deformation such as pin-to-duct and pin-to-pin clearances were measured by CT image analyses. In the verification, calculation results of bundle deformation obtained by the BAMBOO code analyses were compared with the experimental results from the CT image analyses. The comparison showed that the BAMBOO code reasonably predicts deformation of large diameter pin bundles under the BDI condition by assuming that pin bowing and cladding oval distortion are the major deformation mechanisms, the same as in the case of small diameter pin bundles. In addition, the BAMBOO analysis results confirmed that cladding oval distortion effectively suppresses BDI in large diameter pin bundles as well as in small diameter pin bundles.

  20. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
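
    The sketch below illustrates the variable-blocksize idea in miniature: each 16x16 block is DCT-coded coarsely, and if its reconstruction distortion exceeds a threshold the block is re-coded as four 8x8 blocks with a finer quantizer. The block sizes, quantizer steps and threshold are illustrative stand-ins, not the paper's vector-quantized codebooks.

```python
# Hedged sketch: threshold-driven variable-blocksize DCT coding.
import numpy as np
from scipy.fft import dctn, idctn

def code_block(block, q):
    coeffs = np.round(dctn(block, norm="ortho") / q)
    return idctn(coeffs * q, norm="ortho")

def mixture_block_code(image, thresh=25.0, q_coarse=24.0, q_fine=8.0):
    """Assumes image dimensions are multiples of 16."""
    out = np.zeros_like(image, dtype=float)
    for y in range(0, image.shape[0], 16):
        for x in range(0, image.shape[1], 16):
            blk = image[y:y + 16, x:x + 16].astype(float)
            rec = code_block(blk, q_coarse)
            if np.mean((rec - blk) ** 2) > thresh:        # distortion criterion
                for dy in (0, 8):
                    for dx in (0, 8):
                        sub = blk[dy:dy + 8, dx:dx + 8]
                        rec[dy:dy + 8, dx:dx + 8] = code_block(sub, q_fine)
            out[y:y + 16, x:x + 16] = rec
    return out
```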

  1. A simple program to measure and analyse tree rings using Excel, R and SigmaScan

    PubMed Central

    Hietz, Peter

    2011-01-01

    I present a new software package that links a program for image analysis (SigmaScan), one for spreadsheets (Excel) and one for statistical analysis (R) for applications in tree-ring analysis. The first macro measures ring widths marked by the user on scanned images, stores raw and detrended data in Excel and calculates the distance to the pith and inter-series correlations. A second macro measures darkness along a defined path to identify the latewood-earlywood transition in conifers, and a third shows the potential for automatic detection of boundaries. Written in Visual Basic for Applications, the code makes use of the advantages of existing programs and is consequently very economical and relatively simple to adjust to the requirements of specific projects or to expand, making use of already available code. PMID:26109835

  2. A simple program to measure and analyse tree rings using Excel, R and SigmaScan.

    PubMed

    Hietz, Peter

    I present a new software package that links a program for image analysis (SigmaScan), one for spreadsheets (Excel) and one for statistical analysis (R) for applications in tree-ring analysis. The first macro measures ring widths marked by the user on scanned images, stores raw and detrended data in Excel and calculates the distance to the pith and inter-series correlations. A second macro measures darkness along a defined path to identify the latewood-earlywood transition in conifers, and a third shows the potential for automatic detection of boundaries. Written in Visual Basic for Applications, the code makes use of the advantages of existing programs and is consequently very economical and relatively simple to adjust to the requirements of specific projects or to expand, making use of already available code.

  3. Binary video codec for data reduction in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias

    2013-02-01

    Wireless Visual Sensor Networks (WVSNs) are formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSNs include environmental monitoring, health care, industrial process monitoring, and stadium/airport monitoring for security reasons, among many others. The energy budget in outdoor applications of WVSNs is limited to batteries, and frequent battery replacement is usually not desirable. So the processing as well as the communication energy consumption of the VSN needs to be optimized in such a way that the network remains functional for a longer duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing and wide communication bandwidth for transmitting the results. Image processing algorithms must be designed and developed in such a way that they are computationally less complex and provide a high compression rate. For some applications of WVSNs, the captured images can be segmented into bi-level images, and hence bi-level image coding methods will efficiently reduce the information amount in these segmented images. But the compression rate of bi-level image coding methods is limited by the underlying compression algorithm. Hence there is a need for other intelligent and efficient algorithms which are computationally less complex and provide a better compression rate than bi-level image coding. Change coding is one such algorithm: it is computationally less complex (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. The detection and coding of Regions of Interest (ROIs) in the change frame further reduces the information amount in the change frame. But if the number of objects in the change frames rises above a certain level, then the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSNs. We propose implementing all three compression techniques, i.e., image coding, change coding and ROI coding, at the VSN and then selecting the smallest bit stream among the results of the three compression techniques. In this way the compression performance of the BVC will never become worse than that of image coding. We conclude that the compression efficiency of BVC is always better than that of change coding and is always better than or equal to that of ROI coding and image coding.
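
    The sketch below mimics the selection logic described above for two of the three modes: code the binary frame directly, code the exclusive-OR change frame, and keep whichever bit stream is smaller. A simple run-length size estimate stands in for the (unspecified) bi-level image coder; it is not the actual BVC codec, and ROI coding is omitted.

```python
# Hedged sketch: choosing between image coding and change coding per frame.
import numpy as np

def run_length_bits(binary):
    """Rough coded-size estimate: 16 bits per run of equal values."""
    flat = binary.ravel()
    runs = 1 + np.count_nonzero(np.diff(flat))
    return 16 * runs

def bvc_select(prev_frame, cur_frame):
    change = np.bitwise_xor(prev_frame, cur_frame)   # change frame
    candidates = {
        "image coding": run_length_bits(cur_frame),
        "change coding": run_length_bits(change),
    }
    mode = min(candidates, key=candidates.get)
    return mode, candidates[mode]

# Example: a mostly static scene favours change coding.
rng = np.random.default_rng(3)
f0 = (rng.random((64, 64)) > 0.5).astype(np.uint8)
f1 = f0.copy()
f1[20:24, 20:24] ^= 1                                # small moving object
print(bvc_select(f0, f1))
```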

  4. CAFNA®, coded aperture fast neutron analysis for contraband detection: Preliminary results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, L.; Lanza, R.C.

    1999-12-01

    The authors have developed a near-field coded aperture imaging system for use with fast neutron techniques as a tool for the detection of contraband and hidden explosives through nuclear elemental analysis. The technique relies on the prompt gamma rays produced by fast neutron interactions with the object being examined. The position of the nuclear elements is determined by the location of the gamma emitters. For existing fast neutron techniques, in Pulsed Fast Neutron Analysis (PFNA), neutrons are used with very low efficiency; in Fast Neutron Analysis (FNA), the sensitivity for detection of the signature gamma rays is very low. For the Coded Aperture Fast Neutron Analysis (CAFNA®) the authors have developed, the efficiency for both using the probing fast neutrons and detecting the prompt gamma rays is high. For a probed volume of n³ volume elements (voxels) in a cube of n resolution elements on a side, they can compare the sensitivity with other neutron probing techniques. As compared to PFNA, the improvement in neutron utilization is n², where the total number of voxels in the object being examined is n³. Compared to FNA, the improvement in gamma-ray imaging is proportional to the total open area of the coded aperture plane; a typical value is n²/2, where n² is the number of total detector resolution elements or the number of pixels in an object layer. It should be noted that the actual signal-to-noise ratio of a system depends also on the nature and distribution of background events, and this comparison may somewhat reduce the effective sensitivity of CAFNA. They have performed analysis, Monte Carlo simulations, and preliminary experiments using low- and high-energy gamma-ray sources. The results show that a high-sensitivity 3-D contraband imaging and detection system can be realized by using CAFNA.

  5. Scalable Coding of Plenoptic Images by Using a Sparse Set and Disparities.

    PubMed

    Li, Yun; Sjostrom, Marten; Olsson, Roger; Jennehag, Ulf

    2016-01-01

    One of the light field capturing techniques is focused plenoptic capturing. By placing a microlens array in front of the photosensor, focused plenoptic cameras capture both spatial and angular information of a scene in each microlens image and across microlens images. The capturing results in a significant amount of redundant information, and the captured image usually has a large resolution. A coding scheme that removes the redundancy before coding can be of advantage for efficient compression, transmission, and rendering. In this paper, we propose a lossy coding scheme to efficiently represent plenoptic images. The format contains a sparse image set and its associated disparities. The reconstruction is performed by disparity-based interpolation and inpainting, and the reconstructed image is later employed as a prediction reference for the coding of the full plenoptic image. As an outcome of the representation, the proposed scheme inherits a scalable structure with three layers. The results show that plenoptic images are compressed efficiently, with over 60 percent bit rate reduction compared with High Efficiency Video Coding intra coding, and with over 20 percent compared with a High Efficiency Video Coding block copying mode.

  6. Information retrieval based on single-pixel optical imaging with quick-response code

    NASA Astrophysics Data System (ADS)

    Xiao, Yin; Chen, Wen

    2018-04-01

    The quick-response (QR) code technique is combined with ghost imaging (GI) to recover original information with high quality. An image is first transformed into a QR code. Then the QR code is treated as the input image in the input plane of a ghost imaging setup. After the measurements, the traditional correlation algorithm of ghost imaging is used to reconstruct a low-quality image (in QR code form). With this low-quality image as an initial guess, a Gerchberg-Saxton-like algorithm is used to improve its contrast, which is effectively a post-processing step. Taking advantage of the high error correction capability of QR codes, the original information can be recovered with high quality. Compared to the previous method, our method can obtain a high-quality image with comparatively fewer measurements, which means that the time-consuming post-processing procedure can be avoided to some extent. In addition, for conventional ghost imaging, the larger the image size is, the more measurements are needed. However, for our method, images of different sizes can be converted into QR codes of the same small size by using a QR generator. Hence, for larger images, the time required to recover the original information with high quality is dramatically reduced. Our method also makes it easy to recover a color image in a ghost imaging setup, because it is not necessary to divide the color image into three channels and recover them separately.
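
    For context, the traditional correlation reconstruction referred to above estimates the object from random illumination patterns P_i and single-pixel (bucket) intensities B_i as G = <B_i P_i> - <B_i><P_i>. The sketch below runs this on a synthetic object; the pattern count and object are illustrative only.

```python
# Hedged sketch: traditional ghost-imaging correlation reconstruction.
import numpy as np

rng = np.random.default_rng(4)
H = W = 32
obj = np.zeros((H, W))
obj[8:24, 12:20] = 1.0                               # stand-in for a QR-code-like object

n_patterns = 4000
patterns = rng.random((n_patterns, H, W))            # random illumination patterns
bucket = (patterns * obj).sum(axis=(1, 2))           # single-pixel detector values

g = (bucket[:, None, None] * patterns).mean(axis=0) \
    - bucket.mean() * patterns.mean(axis=0)          # correlation reconstruction
g_norm = (g - g.min()) / (g.max() - g.min() + 1e-12)  # low-quality estimate of obj
```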

  7. ImagePy: an open-source, Python-based and platform-independent software package for bioimage analysis.

    PubMed

    Wang, Anliang; Yan, Xiaolong; Wei, Zhijun

    2018-04-27

    This note presents the design of a scalable software package named ImagePy for analysing biological images. Our contribution is concentrated on facilitating extensibility and interoperability of the software through decoupling the data model from the user interface. Especially with assistance from the Python ecosystem, this software framework makes modern computer algorithms easier to apply in bioimage analysis. ImagePy is free and open source software, with documentation and code available at https://github.com/Image-Py/imagepy under the BSD license. It has been tested on the Windows, Mac and Linux operating systems. wzjdlut@dlut.edu.cn or yxdragon@imagepy.org.

  8. Changing Utilization of Noninvasive Diagnostic Imaging Over 2 Decades: A Family-Focused Analysis of Medicare Claims Using the Neiman Imaging Types of Service Categorization System.

    PubMed

    Rosman, David A; Duszak, Richard; Wang, Wenyi; Hughes, Danny R; Rosenkrantz, Andrew B

    2018-02-01

    The objective of our study was to use a new modality and body region categorization system to assess changing utilization of noninvasive diagnostic imaging in the Medicare fee-for-service population over a recent 20-year period (1994-2013). All Medicare Part B Physician Fee Schedule services billed between 1994 and 2013 were identified using Physician/Supplier Procedure Summary master files. Billed codes for diagnostic imaging were classified using the Neiman Imaging Types of Service (NITOS) coding system by both modality and body region. Utilization rates per 1000 beneficiaries were calculated for families of services. Among all diagnostic imaging modalities, growth was greatest for MRI (+312%) and CT (+151%) and was lower for ultrasound, nuclear medicine, and radiography and fluoroscopy (range, +1% to +31%). Among body regions, service growth was greatest for brain (+126%) and spine (+74%) imaging; showed milder growth (range, +18% to +67%) for imaging of the head and neck, breast, abdomen and pelvis, and extremity; and showed slight declines (range, -2% to -7%) for cardiac and chest imaging overall. The following specific imaging service families showed massive (> +100%) growth: cardiac CT, cardiac MRI, and breast MRI. NITOS categorization permits identification of temporal shifts in noninvasive diagnostic imaging by specific modality- and region-focused families, providing a granular understanding and reproducible analysis of global changes in imaging overall. Service family-level perspectives may help inform ongoing policy efforts to optimize imaging utilization and appropriateness.

  9. Fast ITTBC using pattern code on subband segmentation

    NASA Astrophysics Data System (ADS)

    Koh, Sung S.; Kim, Hanchil; Lee, Kooyoung; Kim, Hongbin; Jeong, Hun; Cho, Gangseok; Kim, Chunghwa

    2000-06-01

    Iterated Transformation Theory-Based Coding (ITTBC) suffers from very high computational complexity in the encoding phase, owing to its exhaustive search. In this paper, our proposed image coding algorithm preprocesses the original image into a subband segmentation image by wavelet transform before coding, in order to reduce encoding complexity. A similar block is searched for over the domain pool of the subband segmentation using 24 block pattern codes that encode the edge information of each image block. Numerical results show that the encoding time of the proposed method is reduced to 98.82% of that of Jacquin's method, while the loss in quality relative to Jacquin's method is about 0.28 dB in PSNR, which is visually negligible.

  10. Experimental study of an off-axis three mirror anastigmatic system with wavefront coding technology.

    PubMed

    Yan, Feng; Tao, Xiaoping

    2012-04-10

    Wavefront coding (WFC) is a computational imaging technique that controls defocus and defocus-related aberrations of optical systems by introducing a specially designed phase distribution to the pupil function. This technology has been applied in many imaging systems to improve performance and/or reduce cost. The application of WFC technology in an off-axis three mirror anastigmatic (TMA) system has been proposed, and the design and optimization of optics, the restoration of degraded images, and the manufacturing of wavefront coded elements have been researched in our previous work. In this paper, we describe the alignment, the imaging experiment, and the image restoration of the off-axis TMA system with WFC technology. The ideal wavefront map is set to be the system error of the interferometer to simplify the assembly, and the coefficients of certain Zernike polynomials are monitored to verify the result in the alignment process. A pinhole of 20 μm diameter and the third plate of WT1005-62 resolution patterns are selected as the targets in the imaging experiment. The comparison of the tail lengths of point spread functions is presented to show the invariance of the image quality in the extended depth of focus. The structural similarity (SSIM) index is applied to estimate the relationship among the captured images with varying defocus. We conclude that the experimental results agree with the earlier theoretical analysis.
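
    As a rough illustration of how structural similarity can be used to compare captured images across defocus, the sketch below uses scikit-image's SSIM implementation; the function name and the assumption that images are grayscale arrays are ours, not the paper's.

      import numpy as np
      from skimage.metrics import structural_similarity as ssim

      def defocus_similarity(reference, images):
          """Compare each captured image against an in-focus reference using
          SSIM.  A flat SSIM curve across defocus suggests the wavefront-coded
          system keeps the image structure nearly invariant with defocus."""
          return [ssim(reference, img, data_range=img.max() - img.min())
                  for img in images]

      # usage (hypothetical data): scores = defocus_similarity(ref, stack)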

  11. Methods of evaluating the effects of coding on SAR data

    NASA Technical Reports Server (NTRS)

    Dutkiewicz, Melanie; Cumming, Ian

    1993-01-01

    It is recognized that mean square error (MSE) is not a sufficient criterion for determining the acceptability of an image reconstructed from data that has been compressed and decompressed using an encoding algorithm. In the case of Synthetic Aperture Radar (SAR) data, it is also deemed to be insufficient to display the reconstructed image (and perhaps error image) alongside the original and make a (subjective) judgment as to the quality of the reconstructed data. In this paper we suggest a number of additional evaluation criteria which we feel should be included as evaluation metrics in SAR data encoding experiments. These criteria have been specifically chosen to provide a means of ensuring that the important information in the SAR data is preserved. The paper also presents the results of an investigation into the effects of coding on SAR data fidelity when the coding is applied in (1) the signal data domain, and (2) the image domain. An analysis of the results highlights the shortcomings of the MSE criterion, and shows which of the suggested additional criteria have been found to be most important.

  12. Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image

    NASA Astrophysics Data System (ADS)

    Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti

    2016-06-01

    An object in an image, when analyzed further, shows characteristics that distinguish it from other objects in the image. Characteristics used for object recognition include color, shape, pattern, texture, and spatial information, all of which can represent objects in a digital image. Methods developed recently for image feature extraction analyze object characteristics through simple-curve analysis and search features derived from the object's chain code. This study develops an algorithm for the analysis and recognition of curve type as the basis for object recognition in images, proposing the addition of complex-curve characteristics with a maximum of four branches to be used in the recognition process. A complex curve is defined as a curve that contains a point of intersection. Using several edge-detection images, the algorithm was able to analyze and recognize complex curve shapes well.
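
    A minimal sketch of the chain-code extraction that this kind of curve analysis builds on is given below; the 8-direction Freeman convention and the difference-code variant are standard, but the branch handling for complex curves described in the paper is not reproduced here.

      # 8-connected Freeman chain-code directions (dx, dy), indexed 0..7
      DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

      def chain_code(boundary_points):
          """Freeman chain code of an ordered list of 8-connected boundary
          pixel coordinates (x, y), as produced by a contour/edge tracer."""
          codes = []
          for (x0, y0), (x1, y1) in zip(boundary_points, boundary_points[1:]):
              codes.append(DIRS.index((x1 - x0, y1 - y0)))
          return codes

      def difference_code(codes):
          """Rotation-invariant variant often used for curve features."""
          return [(b - a) % 8 for a, b in zip(codes, codes[1:])]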

  13. Classification of breast tissue in mammograms using efficient coding.

    PubMed

    Costa, Daniel D; Campos, Lúcio F; Barros, Allan K

    2011-06-24

    Female breast cancer is the major cause of death by cancer in western countries. Efforts in computer vision have been made to improve the diagnostic accuracy of radiologists. Some methods of lesion diagnosis in mammogram images were developed based on principal component analysis, which has been used for efficient coding of signals, and on 2D Gabor wavelets, which are used for computer vision applications and for modeling biological vision. In this work, we present a methodology that uses efficient coding along with linear discriminant analysis to distinguish between mass and non-mass in 5090 regions of interest from mammograms. The results show that the best success rates reached with Gabor wavelets and principal component analysis were 85.28% and 87.28%, respectively. In comparison, the efficient coding model presented here reached up to 90.07%. Altogether, the results demonstrate that independent component analysis successfully performed the efficient coding needed to discriminate mass from non-mass tissues. In addition, we observed that LDA with ICA bases showed high predictive performance for some datasets and thus provides significant support for a more detailed clinical investigation.
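
    A simplified stand-in for the described pipeline (an efficient-coding basis followed by LDA) is sketched below with scikit-learn; the component count, cross-validation setup, and data layout are illustrative assumptions, and the Gabor-wavelet variant is not shown.

      import numpy as np
      from sklearn.decomposition import PCA, FastICA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      def mass_vs_nonmass_accuracy(rois, labels, n_components=40, use_ica=True):
          """rois: (N, H*W) flattened ROI patches; labels: 0 = non-mass, 1 = mass.
          Learns an 'efficient coding' basis (ICA or PCA), projects the ROIs
          onto it, and classifies the coefficients with LDA."""
          coder = (FastICA(n_components=n_components, random_state=0) if use_ica
                   else PCA(n_components=n_components, random_state=0))
          coeffs = coder.fit_transform(rois.astype(float))
          clf = LinearDiscriminantAnalysis()
          return cross_val_score(clf, coeffs, labels, cv=5).mean()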

  14. Structured Light Based 3d Scanning for Specular Surface by the Combination of Gray Code and Phase Shifting

    NASA Astrophysics Data System (ADS)

    Zhang, Yujia; Yilmaz, Alper

    2016-06-01

    Surface reconstruction using coded structured light is considered one of the most reliable techniques for high-quality 3D scanning. With a calibrated projector-camera stereo system, a light pattern is projected onto the scene and imaged by the camera. Correspondences between projected and recovered patterns are computed in the decoding process, which is used to generate a 3D point cloud of the surface. However, indirect illumination effects on the surface, such as subsurface scattering and interreflections, raise difficulties in reconstruction. In this paper, we apply the maximum min-SW gray code to reduce the indirect illumination effects on specular surfaces. We also analyze the errors when comparing the maximum min-SW gray code with the conventional gray code, which shows that the maximum min-SW gray code is significantly better at reducing indirect illumination effects. To achieve sub-pixel accuracy, we additionally project high-frequency sinusoidal patterns onto the scene. For specular surfaces, however, the high-frequency patterns are susceptible to decoding errors, and incorrect decoding of high-frequency patterns results in a loss of depth resolution. Our method resolves this problem by combining the low-frequency maximum min-SW gray code with the high-frequency phase shifting code, which achieves dense 3D reconstruction for specular surfaces. Our contributions include: (i) a complete setup of the structured light based 3D scanning system; (ii) a novel combination technique of the maximum min-SW gray code and phase shifting code, in which phase-shifting decoding first provides sub-pixel accuracy and the maximum min-SW gray code then resolves the phase ambiguity. According to the experimental results and data analysis, our structured light based 3D scanning system enables high-quality dense reconstruction of scenes with a small number of images. Qualitative and quantitative comparisons are performed to demonstrate the advantages of our new combined coding method.
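
    The combination of the two pattern families can be summarized as: phase shifting gives a wrapped phase with sub-pixel precision, and the gray code supplies the fringe order that removes the 2π ambiguity. The sketch below assumes N equally spaced phase shifts and a precomputed per-pixel fringe order; it illustrates the standard combination, not the authors' implementation.

      import numpy as np

      def wrapped_phase(shifted_images):
          """N-step phase shifting: shifted_images is (N, H, W), where pattern
          n has phase offset 2*pi*n/N.  Returns phase wrapped to (-pi, pi]."""
          n = np.arange(len(shifted_images))[:, None, None]
          delta = 2 * np.pi * n / len(shifted_images)
          num = np.sum(shifted_images * np.sin(delta), axis=0)
          den = np.sum(shifted_images * np.cos(delta), axis=0)
          return -np.arctan2(num, den)

      def absolute_phase(wrapped, fringe_order):
          """Combine the wrapped phase with the per-pixel fringe order decoded
          from the gray-code patterns to remove the 2*pi ambiguity."""
          return wrapped + 2 * np.pi * fringe_order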

  15. Image processing of angiograms: A pilot study

    NASA Technical Reports Server (NTRS)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  16. Discrete Cosine Transform Image Coding With Sliding Block Codes

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Pearlman, William A.

    1989-11-01

    A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image, followed by a search on a trellis-structured code. This code is a sliding block code that utilizes a constrained-size reproduction alphabet. The image is divided into blocks by the transform coding. The non-stationarity of the image is counteracted by grouping these blocks in clusters through a clustering algorithm, and then encoding the clusters separately. Mandela ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences. A separate search ensues on each of these Mandela ordered sequences. Padding sequences are used to improve the trellis search fidelity; they absorb the error caused by the building up of the trellis to full size. The simulations were carried out on a 256x256 image ('LENA'). The results are comparable to those of existing schemes. The visual quality of the image is enhanced considerably by the padding and clustering.

  17. Supervised graph hashing for histopathology image retrieval and classification.

    PubMed

    Shi, Xiaoshuang; Xing, Fuyong; Xu, KaiDi; Xie, Yuanpu; Su, Hai; Yang, Lin

    2017-12-01

    In pathology image analysis, morphological characteristics of cells are critical to grade many diseases. With the development of cell detection and segmentation techniques, it is possible to extract cell-level information for further analysis in pathology images. However, it is challenging to conduct efficient analysis of cell-level information on a large-scale image dataset because each image usually contains hundreds or thousands of cells. In this paper, we propose a novel image retrieval based framework for large-scale pathology image analysis. For each image, we encode each cell into binary codes to generate image representation using a novel graph based hashing model and then conduct image retrieval by applying a group-to-group matching method to similarity measurement. In order to improve both computational efficiency and memory requirement, we further introduce matrix factorization into the hashing model for scalable image retrieval. The proposed framework is extensively validated with thousands of lung cancer images, and it achieves 97.98% classification accuracy and 97.50% retrieval precision with all cells of each query image used. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Retina Image Screening and Analysis Software Version 2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Aykac, Deniz

    2009-04-01

    The software allows physicians or researchers to ground-truth images of retinas, identifying key physiological features and lesions that are indicative of disease. The software features methods to automatically detect the physiological features and lesions. The software contains code to measure the quality of images received from a telemedicine network; create and populate a database for a telemedicine network; review and report the diagnosis of a set of images; and also contains components to transmit images from a Zeiss camera to the network through SFTP.

  19. Selective object encryption for privacy protection

    NASA Astrophysics Data System (ADS)

    Zhou, Yicong; Panetta, Karen; Cherukuri, Ravindranath; Agaian, Sos

    2009-05-01

    This paper introduces a new recursive sequence called the truncated P-Fibonacci sequence, its corresponding binary code called the truncated Fibonacci p-code and a new bit-plane decomposition method using the truncated Fibonacci p-code. In addition, a new lossless image encryption algorithm is presented that can encrypt a selected object using this new decomposition method for privacy protection. The user has the flexibility (1) to define the object to be protected as an object in an image or in a specific part of the image, a selected region of an image, or an entire image, (2) to utilize any new or existing method for edge detection or segmentation to extract the selected object from an image or a specific part/region of the image, (3) to select any new or existing method for the shuffling process. The algorithm can be used in many different areas such as wireless networking, mobile phone services and applications in homeland security and medical imaging. Simulation results and analysis verify that the algorithm shows good performance in object/image encryption and can withstand plaintext attacks.

  20. a Virtual Trip to the Schwarzschild-De Sitter Black Hole

    NASA Astrophysics Data System (ADS)

    Bakala, Pavel; Hledík, Stanislav; Stuchlík, Zdenĕk; Truparová, Kamila; Čermák, Petr

    2008-09-01

    We developed a realistic, fully general relativistic computer code for the simulation of optical projection in a strong, spherically symmetric gravitational field. The standard theoretical analysis of optical projection for an observer in the vicinity of a Schwarzschild black hole is extended to black hole spacetimes with a repulsive cosmological constant, i.e., Schwarzschild-de Sitter (SdS) spacetimes. The influence of the cosmological constant is investigated for static observers and for observers radially free-falling from the static radius. The simulation includes effects of gravitational lensing, multiple images, Doppler and gravitational frequency shift, as well as the amplification of intensity. The code generates images of the static observer's sky and movie simulations for radially free-falling observers. Techniques of parallel programming are applied to achieve high performance and fast execution of the simulation code.

  1. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
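
    To illustrate the point that lossless coders benefit from a prior DPCM transform, the sketch below applies a simple left-neighbour predictor and compares compressed sizes using zlib's DEFLATE as a stand-in for the Huffman/LZW coders discussed in the article; the mod-256 differencing and 8-bit input are assumptions.

      import numpy as np
      import zlib

      def dpcm_then_deflate(image):
          """Compress an 8-bit image directly and after a reversible mod-256
          horizontal DPCM (predict each pixel from its left neighbour).
          Returns the two compressed sizes in bytes."""
          img = image.astype(np.int16)
          diff = img.copy()
          diff[:, 1:] = (img[:, 1:] - img[:, :-1]) % 256   # differential image
          raw = zlib.compress(image.astype(np.uint8).tobytes(), level=9)
          dpcm = zlib.compress(diff.astype(np.uint8).tobytes(), level=9)
          return len(raw), len(dpcm)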

  2. A blind dual color images watermarking based on IWT and state coding

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a state-coding based blind watermarking algorithm is proposed to embed a color image watermark into a color host image. The technique of state coding, which makes the state code of a data set equal to the hidden watermark information, is introduced. When embedding the watermark, using the Integer Wavelet Transform (IWT) and the rules of state coding, the R, G, and B components of the color image watermark are embedded into the Y, Cr, and Cb components of the color host image. Moreover, the rules of state coding are also used to extract the watermark from the watermarked image without resorting to the original watermark or original host image. Experimental results show that the proposed watermarking algorithm not only meets the demands of invisibility and robustness of the watermark, but also performs well compared with other methods considered in this work.

  3. Empirical validation of the triple-code model of numerical processing for complex math operations using functional MRI and group Independent Component Analysis of the mental addition and subtraction of fractions.

    PubMed

    Schmithorst, Vincent J; Brown, Rhonda Douglas

    2004-07-01

    The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.

  4. Document image retrieval through word shape coding.

    PubMed

    Lu, Shijian; Li, Linlin; Tan, Chew Lim

    2008-11-01

    This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.

  5. Towards automation of palynology 1: analysis of pollen shape and ornamentation using simple geometric measures, derived from scanning electron microscope images

    NASA Astrophysics Data System (ADS)

    Treloar, W. J.; Taylor, G. E.; Flenley, J. R.

    2004-12-01

    This is the first of a series of papers on the theme of automated pollen analysis. The automation of pollen analysis could result in numerous advantages for the reconstruction of past environments, with larger data sets made practical, objectivity and fine resolution sampling. There are also applications in apiculture and medicine. Previous work on the classification of pollen using texture measures has been successful with small numbers of pollen taxa. However, as the number of pollen taxa to be identified increases, more features may be required to achieve a successful classification. This paper describes the use of simple geometric measures to augment the texture measures. The feasibility of this new approach is tested using scanning electron microscope (SEM) images of 12 taxa of fresh pollen taken from reference material collected on Henderson Island, Polynesia. Pollen images were captured directly from a SEM connected to a PC. A threshold grey-level was set and binary images were then generated. Pollen edges were then located and the boundaries were traced using a chain coding system. A number of simple geometric variables were calculated directly from the chain code of the pollen and a variable selection procedure was used to choose the optimal subset to be used for classification. The efficiency of these variables was tested using a leave-one-out classification procedure. The system successfully split the original 12 taxa sample into five sub-samples containing no more than six pollen taxa each. The further subdivision of echinate pollen types was then attempted with a subset of four pollen taxa. A set of difference codes was constructed for a range of displacements along the chain code. From these difference codes probability variables were calculated. A variable selection procedure was again used to choose the optimal subset of probabilities that may be used for classification. The efficiency of these variables was again tested using a leave-one-out classification procedure. The proportion of correctly classified pollen ranged from 81% to 100% depending on the subset of variables used. The best set of variables had an overall classification rate averaging at about 95%. This is comparable with the classification rates from the earlier texture analysis work for other types of pollen.

  6. Pseudo color ghost coding imaging with pseudo thermal light

    NASA Astrophysics Data System (ADS)

    Duan, De-yang; Xia, Yun-jie

    2018-04-01

    We present a new pseudo color imaging scheme named pseudo color ghost coding imaging, based on ghost imaging but with a multiwavelength source modulated by a spatial light modulator. Compared with conventional pseudo color imaging, in which nondegenerate-wavelength spatial correlations yield no extra monochromatic images, both degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and signal beam can be obtained simultaneously. This scheme can obtain a more colorful image with higher quality than conventional pseudo color coding techniques. More importantly, a significant advantage of the scheme over conventional pseudo color coding imaging techniques is that images with different colors can be obtained without changing the light source or spatial filter.

  7. Iterative quantization: a Procrustean approach to learning binary codes for large-scale image retrieval.

    PubMed

    Gong, Yunchao; Lazebnik, Svetlana; Gordo, Albert; Perronnin, Florent

    2013-12-01

    This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
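
    A compact sketch of the ITQ alternation (threshold the rotated PCA projection, then solve an orthogonal Procrustes problem for the rotation) is given below; the bit length, iteration count, and scikit-learn PCA step are illustrative choices rather than the paper's exact configuration.

      import numpy as np
      from sklearn.decomposition import PCA

      def itq_binary_codes(X, n_bits=32, n_iter=50, seed=0):
          """Iterative Quantization sketch: PCA-project zero-centred data to
          n_bits dimensions, then alternate between binarization and an
          orthogonal Procrustes update of the rotation R."""
          rng = np.random.default_rng(seed)
          V = PCA(n_components=n_bits).fit_transform(X - X.mean(axis=0))
          R, _ = np.linalg.qr(rng.standard_normal((n_bits, n_bits)))  # random rotation
          for _ in range(n_iter):
              B = np.sign(V @ R)                    # fix R, update binary codes
              U, _, Vt = np.linalg.svd(V.T @ B)     # fix B, Procrustes update of R
              R = U @ Vt
          return (np.sign(V @ R) > 0).astype(np.uint8), R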

  8. DIC Challenge: Developing Images and Guidelines for Evaluating Accuracy and Resolution of 2D Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reu, Phillip L.; Toussaint, E.; Jones, Elizabeth M. C.

    With the rapid spread in use of Digital Image Correlation (DIC) globally, it is important there be some standard methods of verifying and validating DIC codes. To this end, the DIC Challenge board was formed and is maintained under the auspices of the Society for Experimental Mechanics (SEM) and the international DIC society (iDICs). The goal of the DIC Board and the 2D–DIC Challenge is to supply a set of well-vetted sample images and a set of analysis guidelines for standardized reporting of 2D–DIC results from these sample images, as well as for comparing the inherent accuracy of different approaches and for providing users with a means of assessing their proper implementation. This document will outline the goals of the challenge, describe the image sets that are available, and give a comparison between 12 commercial and academic 2D–DIC codes using two of the challenge image sets.

  9. DIC Challenge: Developing Images and Guidelines for Evaluating Accuracy and Resolution of 2D Analyses

    DOE PAGES

    Reu, Phillip L.; Toussaint, E.; Jones, Elizabeth M. C.; ...

    2017-12-11

    With the rapid spread in use of Digital Image Correlation (DIC) globally, it is important there be some standard methods of verifying and validating DIC codes. To this end, the DIC Challenge board was formed and is maintained under the auspices of the Society for Experimental Mechanics (SEM) and the international DIC society (iDICs). The goal of the DIC Board and the 2D–DIC Challenge is to supply a set of well-vetted sample images and a set of analysis guidelines for standardized reporting of 2D–DIC results from these sample images, as well as for comparing the inherent accuracy of different approaches and for providing users with a means of assessing their proper implementation. This document will outline the goals of the challenge, describe the image sets that are available, and give a comparison between 12 commercial and academic 2D–DIC codes using two of the challenge image sets.

  10. Subjective evaluation of compressed image quality

    NASA Astrophysics Data System (ADS)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we have subjectively evaluated the quality of medical images compressed with two different methods: an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Also, analysis of variance was used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  11. Optimized atom position and coefficient coding for matching pursuit-based image compression.

    PubMed

    Shoa, Alireza; Shirani, Shahram

    2009-12-01

    In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.

  12. Unique identification code for medical fundus images using blood vessel pattern for tele-ophthalmology applications.

    PubMed

    Singh, Anushikha; Dutta, Malay Kishore; Sharma, Dilip Kumar

    2016-10-01

    Identification of fundus images during transmission and storage in databases for tele-ophthalmology applications is an important issue in the modern era. The proposed work presents a novel, accurate method for generating a unique identification code for fundus images for tele-ophthalmology applications and storage in databases. Unlike existing methods of steganography and watermarking, this method does not tamper with the medical image, as nothing is embedded in this approach and there is no loss of medical information. A strategic combination of the unique blood vessel pattern and the patient ID is considered for generation of the unique identification code for the digital fundus images. The segmented blood vessel pattern near the optic disc is strategically combined with the patient ID to generate a unique identification code for the image. The proposed method of medical image identification is tested on the publicly available DRIVE and MESSIDOR databases of fundus images and the results are encouraging. Experimental results indicate the uniqueness of the identification code and lossless recovery of patient identity from the unique identification code for integrity verification of fundus images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
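
    The paper's "strategic combination" of the vessel pattern and the patient ID is not specified here. As a loose illustration only, the sketch below serialises a segmented vessel mask together with the patient ID and derives a fixed-length code with a cryptographic hash; this is a substitute scheme for demonstration, not the authors' method.

      import hashlib
      import numpy as np

      def fundus_identification_code(vessel_mask, patient_id):
          """Illustrative stand-in: combine the segmented blood-vessel pattern
          near the optic disc with the patient ID and reduce the result to a
          fixed-length identification code via SHA-256 (hypothetical scheme)."""
          payload = vessel_mask.astype(np.uint8).tobytes() + patient_id.encode()
          return hashlib.sha256(payload).hexdigest()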

  13. Quasi-real-time end-to-end simulations of ELT-scale adaptive optics systems on GPUs

    NASA Astrophysics Data System (ADS)

    Gratadour, Damien

    2011-09-01

    Our team has started the development of a code dedicated to GPUs for the simulation of AO systems at the E-ELT scale. It uses the CUDA toolkit and an original binding to Yorick (an open source interpreted language) to provide the user with a comprehensive interface. In this paper we present the first performance analysis of our simulation code, showing its ability to provide Shack-Hartmann (SH) images and measurements at the kHz scale for a VLT-sized AO system and in quasi-real-time (up to 70 Hz) for ELT-sized systems on a single top-end GPU. The simulation code includes multiple-layer atmospheric turbulence generation, ray tracing through these layers, image formation at the focal plane of every sub-aperture of a SH sensor using either natural or laser guide stars, and centroiding on these images using various algorithms. Turbulence is generated on the fly, giving the ability to simulate hours of observations without the need to load extremely large phase screens into global memory. Because of its performance, this code additionally provides the unique ability to test real-time controllers for future AO systems under nominal conditions.

  14. Color-coded fluid-attenuated inversion recovery images improve inter-rater reliability of fluid-attenuated inversion recovery signal changes within acute diffusion-weighted image lesions.

    PubMed

    Kim, Bum Joon; Kim, Yong-Hwan; Kim, Yeon-Jung; Ahn, Sung Ho; Lee, Deok Hee; Kwon, Sun U; Kim, Sang Joon; Kim, Jong S; Kang, Dong-Wha

    2014-09-01

    Diffusion-weighted image fluid-attenuated inversion recovery (FLAIR) mismatch has been considered to represent ischemic lesion age. However, the inter-rater agreement of diffusion-weighted image FLAIR mismatch is low. We hypothesized that color-coded images would increase its inter-rater agreement. Patients with ischemic stroke <24 hours of a clear onset were retrospectively studied. FLAIR signal change was rated as negative, subtle, or obvious on conventional and color-coded FLAIR images based on visual inspection. Inter-rater agreement was evaluated using κ and percent agreement. The predictive value of diffusion-weighted image FLAIR mismatch for identification of patients <4.5 hours of symptom onset was evaluated. One hundred and thirteen patients were enrolled. The inter-rater agreement of FLAIR signal change improved from 69.9% (k=0.538) with conventional images to 85.8% (k=0.754) with color-coded images (P=0.004). Discrepantly rated patients on conventional, but not on color-coded images, had a higher prevalence of cardioembolic stroke (P=0.02) and cortical infarction (P=0.04). The positive predictive value for patients <4.5 hours of onset was 85.3% and 71.9% with conventional and 95.7% and 82.1% with color-coded images, by each rater. Color-coded FLAIR images increased the inter-rater agreement of diffusion-weighted image FLAIR recovery mismatch and may ultimately help identify unknown-onset stroke patients appropriate for thrombolysis. © 2014 American Heart Association, Inc.

  15. Ultrasound strain imaging using Barker code

    NASA Astrophysics Data System (ADS)

    Peng, Hui; Tie, Juhong; Guo, Dequan

    2017-01-01

    Ultrasound strain imaging is showing promise as a new way of imaging soft tissue elasticity in order to help clinicians detect lesions or cancers in tissues. In this paper, the Barker code is applied to strain imaging to improve its quality. The Barker code, as a coded excitation signal, can be used to improve the echo signal-to-noise ratio (eSNR) in an ultrasound imaging system. For the Barker code of length 13, the sidelobe level of the matched filter output is -22 dB, which is unacceptable for ultrasound strain imaging, because a high sidelobe level will cause high decorrelation noise. Instead of using the conventional matched filter, we use the Wiener filter to decode the Barker-coded echo signal to suppress the range sidelobes. We also compare the performance of the Barker code and the conventional short pulse by simulation. The simulation results demonstrate that the performance of the Wiener filter is much better than that of the matched filter, and the Barker code achieves a higher elastographic signal-to-noise ratio (SNRe) than the short pulse in low-eSNR or great-depth conditions due to the increased eSNR.
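
    A small numerical sketch of the decoding comparison is shown below: the length-13 Barker sequence is pulse-compressed either with a matched filter or with a Wiener-type mismatched filter whose regularisation depends on an assumed SNR. The FFT length and SNR value are arbitrary illustrative parameters, not the paper's settings.

      import numpy as np

      barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

      def decode(echo, code, snr=100.0, n_fft=256, use_wiener=True):
          """Pulse compression of a Barker-coded echo.  The matched filter is
          the conjugate code spectrum; the Wiener (mismatched) filter divides
          by the code spectrum regularised by 1/SNR, lowering range sidelobes."""
          S = np.fft.fft(code, n_fft)
          E = np.fft.fft(echo, n_fft)
          H = np.conj(S) / (np.abs(S) ** 2 + 1.0 / snr) if use_wiener else np.conj(S)
          return np.real(np.fft.ifft(E * H))

      # sidelobe check on the code itself (no noise, no tissue model)
      out = decode(barker13, barker13, use_wiener=True)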

  16. The NJOY Nuclear Data Processing System, Version 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macfarlane, Robert; Muir, Douglas W.; Boicourt, R. M.

    The NJOY Nuclear Data Processing System, version 2016, is a comprehensive computer code package for producing pointwise and multigroup cross sections and related quantities from evaluated nuclear data in the ENDF-4 through ENDF-6 legacy card-image formats. NJOY works with evaluated files for incident neutrons, photons, and charged particles, producing libraries for a wide variety of particle transport and reactor analysis codes.

  17. Automatic morphological classification of galaxy images

    PubMed Central

    Shamir, Lior

    2009-01-01

    We describe an image analysis supervised learning algorithm that can automatically classify galaxy images. The algorithm is first trained using manually classified images of elliptical, spiral, and edge-on galaxies. A large set of image features is extracted from each image, and the most informative features are selected using Fisher scores. Test images can then be classified using a simple Weighted Nearest Neighbor rule such that the Fisher scores are used as the feature weights. Experimental results show that galaxy images from Galaxy Zoo can be classified automatically into spiral, elliptical, and edge-on galaxies with an accuracy of ~90% compared to classifications carried out by the author. Full compilable source code of the algorithm is available for free download, and its general-purpose nature makes it suitable for other uses that involve automatic image analysis of celestial objects. PMID:20161594
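
    The classification rule can be sketched as below: per-feature Fisher scores act as weights in a nearest-neighbour distance. The exact feature set and any normalisation used by the author are not reproduced; this is a generic illustration.

      import numpy as np

      def fisher_scores(features, labels):
          """Per-feature Fisher score: between-class variance divided by
          within-class variance, used here as feature weights."""
          classes = np.unique(labels)
          overall = features.mean(axis=0)
          between = sum((labels == c).sum()
                        * (features[labels == c].mean(axis=0) - overall) ** 2
                        for c in classes)
          within = sum(((features[labels == c]
                         - features[labels == c].mean(axis=0)) ** 2).sum(axis=0)
                       for c in classes)
          return between / (within + 1e-12)

      def weighted_nn_classify(query, train_feats, train_labels, weights):
          """Weighted Nearest Neighbor rule: Euclidean distance with each
          feature scaled by its Fisher score."""
          d = np.sqrt(((train_feats - query) ** 2 * weights).sum(axis=1))
          return train_labels[np.argmin(d)]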

  18. Study on the properties of infrared wavefront coding athermal system under several typical temperature gradient distributions

    NASA Astrophysics Data System (ADS)

    Cai, Huai-yu; Dong, Xiao-tong; Zhu, Meng; Huang, Zhan-hua

    2018-01-01

    Wavefront coding as an athermalization technique can effectively ensure stable imaging of an optical system over a large temperature range, with the additional advantages of compact structure and low cost. Simulating properties such as the PSF and MTF of a wavefront coding athermal system under several typical temperature gradient distributions helps characterize its working state in non-ideal temperature environments and supports meeting the system design targets. In this paper, we utilize the interoperability of data between SolidWorks and ZEMAX to simplify the traditional process of structural/thermal/optical integrated analysis. We design and build the optical model and the corresponding mechanical model of an infrared imaging wavefront coding athermal system. Axial and radial temperature gradients of different magnitudes are applied to the whole system using SolidWorks, from which the changes in curvature, refractive index, and lens spacing are obtained. We then import the deformed model into ZEMAX for ray tracing and obtain the changes in the PSF and MTF of the optical system. Finally, we discuss and evaluate the consistency of the PSF (MTF) of the wavefront coding athermal system and the restorability of the images, which provides a basis and reference for the optimal design of such systems. The results show that the adaptability of a single-material infrared wavefront coding athermal system to an axial temperature gradient reaches a temperature fluctuation limit of 60°C, which is much higher than its tolerance to a radial temperature gradient.

  19. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.

  20. SIMA: Python software for analysis of dynamic fluorescence imaging data.

    PubMed

    Kaifosh, Patrick; Zaremba, Jeffrey D; Danielson, Nathan B; Losonczy, Attila

    2014-01-01

    Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.

  1. Binary image encryption in a joint transform correlator scheme by aid of run-length encoding and QR code

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong

    2018-07-01

    We propose a binary image encryption method in a joint transform correlator (JTC) with the aid of run-length encoding (RLE) and Quick Response (QR) codes, which enables lossless retrieval of the primary image. The binary image is encoded with RLE to obtain highly compressed data, and the compressed binary image is then further scrambled using a chaos-based method. The compressed and scrambled binary image is transformed into one QR code that is finally encrypted in the JTC. The proposed method successfully, for the first time to our best knowledge, encodes a binary image into a QR code of identical size, and may therefore open a new way to extend the application of QR codes in optical security. Moreover, the preprocessing operations, including RLE, chaos scrambling, and the QR code translation, add an additional security level to the JTC. We present digital results that confirm our approach.
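
    As an illustration of the preprocessing chain (run-length encoding of the binary image, then translation into a QR code), the sketch below uses the third-party Python 'qrcode' package as the generator and a plain text payload; the chaos-based scrambling step and the JTC encryption itself are omitted, and the payload must stay within QR capacity for large images.

      import numpy as np
      import qrcode  # third-party 'qrcode' package, used here as the QR generator

      def run_length_encode(binary_image):
          """Flatten a 0/1 image and encode it as alternating run lengths,
          starting with the run of the first pixel value."""
          flat = binary_image.flatten().astype(np.uint8)
          change = np.flatnonzero(np.diff(flat)) + 1
          runs = np.diff(np.concatenate(([0], change, [flat.size])))
          return int(flat[0]), runs.tolist()

      def rle_to_qr(binary_image):
          first, runs = run_length_encode(binary_image)
          payload = f"{first}:" + ",".join(map(str, runs))  # compact text payload
          return qrcode.make(payload)                        # PIL image of the QR code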

  2. Sports Stars: Analyzing the Performance of Astronomers at Visualization-based Discovery

    NASA Astrophysics Data System (ADS)

    Fluke, C. J.; Parrington, L.; Hegarty, S.; MacMahon, C.; Morgan, S.; Hassan, A. H.; Kilborn, V. A.

    2017-05-01

    In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e., a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced Hi astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy to use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualization and analysis packages.

  3. Hello World Deep Learning in Medical Imaging.

    PubMed

    Lakhani, Paras; Gray, Daniel L; Pett, Carl R; Nagy, Paul; Shih, George

    2018-05-03

    There is recent popularity in applying machine learning to medical imaging, notably deep learning, which has achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries to simplify their use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.
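
    In the same spirit, a minimal two-class image classifier built with the Keras API is sketched below; the layer sizes, the 64x64 single-channel input, and the binary abnormal-versus-normal task are illustrative assumptions rather than the tutorial's published network.

      # Minimal "hello world" medical-image classifier (illustrative only).
      import tensorflow as tf
      from tensorflow.keras import layers, models

      def build_classifier(input_shape=(64, 64, 1)):
          model = models.Sequential([
              layers.Input(shape=input_shape),
              layers.Conv2D(16, 3, activation="relu"),
              layers.MaxPooling2D(),
              layers.Conv2D(32, 3, activation="relu"),
              layers.MaxPooling2D(),
              layers.Flatten(),
              layers.Dense(64, activation="relu"),
              layers.Dense(1, activation="sigmoid"),  # e.g. abnormal vs normal
          ])
          model.compile(optimizer="adam", loss="binary_crossentropy",
                        metrics=["accuracy"])
          return model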

  4. Medial orbitofrontal cortex codes relative rather than absolute value of financial rewards in humans.

    PubMed

    Elliott, R; Agnew, Z; Deakin, J F W

    2008-05-01

    Functional imaging studies in recent years have confirmed the involvement of orbitofrontal cortex (OFC) in human reward processing and have suggested that OFC responses are context-dependent. A seminal electrophysiological experiment in primates taught animals to associate abstract visual stimuli with differently valuable food rewards. Subsequently, pairs of these learned abstract stimuli were presented and firing of OFC neurons to the medium-value stimulus was measured. OFC firing was shown to depend on the relative value context. In this study, we developed a human analogue of this paradigm and scanned subjects using functional magnetic resonance imaging. The analysis compared neuronal responses to two superficially identical events, which differed only in terms of the preceding context. Medial OFC response to the same perceptual stimulus was greater when the stimulus predicted the more valuable of two rewards than when it predicted the less valuable. Additional responses were observed in other components of reward circuitry, the amygdala and ventral striatum. The central finding is consistent with the primate results and suggests that OFC neurons code relative rather than absolute reward value. Amygdala and striatal involvement in coding reward value is also consistent with recent functional imaging data. By using a simpler and less confounded paradigm than many functional imaging studies, we are able to demonstrate that relative financial reward value per se is coded in distinct subregions of an extended reward and decision-making network.

  5. Recursive time-varying filter banks for subband image coding

    NASA Technical Reports Server (NTRS)

    Smith, Mark J. T.; Chung, Wilson C.

    1992-01-01

    Filter banks and wavelet decompositions that employ recursive filters have been considered previously and are recognized for their efficiency in partitioning the frequency spectrum. This paper presents an analysis of a new infinite impulse response (IIR) filter bank in which these computationally efficient filters may be changed adaptively in response to the input. The filter bank is presented and discussed in the context of finite-support signals with the intended application in subband image coding. In the absence of quantization errors, exact reconstruction can be achieved and by the proper choice of an adaptation scheme, it is shown that IIR time-varying filter banks can yield improvement over conventional ones.

  6. Analysis of random point images with the use of symbolic computation codes and generalized Catalan numbers

    NASA Astrophysics Data System (ADS)

    Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.

    2016-11-01

    Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.

  7. The Images and Emotions of Bilingual Chinese Readers: A Dual Coding Analysis.

    ERIC Educational Resources Information Center

    Steffensen, Margaret S.; Goetz, Ernest T.; Cheng, Xiaoguang

    1999-01-01

    Investigates the nonverbal aspects of bilingual reading with 24 Chinese students who rated text segments for strength of imagery and emotional response. Provides insights into how the bilingual mind accomplishes the task of transforming images on a page into a message that allows the reader to enter and live in a created world. (NH)

  8. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.

  9. Three-dimensional integral imaging displays using a quick-response encoded elemental image array: an overview

    NASA Astrophysics Data System (ADS)

    Markman, A.; Javidi, B.

    2016-06-01

    Quick-response (QR) codes are barcodes that can store information such as numeric data and hyperlinks. The QR code can be scanned using a QR code reader, such as those built into smartphone devices, revealing the information stored in the code. Moreover, the QR code is robust to noise, rotation, and illumination when scanning due to error correction built in the QR code design. Integral imaging is an imaging technique used to generate a three-dimensional (3D) scene by combining the information from two-dimensional (2D) elemental images (EIs) each with a different perspective of a scene. Transferring these 2D images in a secure manner can be difficult. In this work, we overview two methods to store and encrypt EIs in multiple QR codes. The first method uses run-length encoding with Huffman coding and the double-random-phase encryption (DRPE) to compress and encrypt an EI. This information is then stored in a QR code. An alternative compression scheme is to perform photon-counting on the EI prior to compression. Photon-counting is a non-linear transformation of data that creates redundant information thus improving image compression. The compressed data is encrypted using the DRPE. Once information is stored in the QR codes, it is scanned using a smartphone device. The information scanned is decompressed and decrypted and an EI is recovered. Once all EIs have been recovered, a 3D optical reconstruction is generated.

  10. Accuracy assessment and characterization of x-ray coded aperture coherent scatter spectral imaging for breast cancer classification

    PubMed Central

    Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2017-01-01

    Although transmission-based x-ray imaging is the most commonly used imaging approach for breast cancer detection, it exhibits false negative rates higher than 15%. To improve cancer detection accuracy, x-ray coherent scatter computed tomography (CSCT) has been explored to potentially detect cancer with greater consistency. However, the 10-min scan duration of CSCT limits its possible clinical applications. The coded aperture coherent scatter spectral imaging (CACSSI) technique has been shown to reduce scan time through enabling single-angle imaging while providing high detection accuracy. Here, we use Monte Carlo simulations to test analytical optimization studies of the CACSSI technique, specifically for detecting cancer in ex vivo breast samples. An anthropomorphic breast tissue phantom was modeled, a CACSSI imaging system was virtually simulated to image the phantom, a diagnostic voxel classification algorithm was applied to all reconstructed voxels in the phantom, and receiver-operator characteristics analysis of the voxel classification was used to evaluate and characterize the imaging system for a range of parameters that have been optimized in a prior analytical study. The results indicate that CACSSI is able to identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) in tissue samples with a cancerous voxel identification area-under-the-curve of 0.94 through a scan lasting less than 10 s per slice. These results show that coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue within ex vivo samples. Furthermore, the results indicate potential CACSSI imaging system configurations for implementation in subsequent imaging development studies. PMID:28331884

  11. Analytic programming with FMRI data: a quick-start guide for statisticians using R.

    PubMed

    Eloyan, Ani; Li, Shanshan; Muschelli, John; Pekar, Jim J; Mostofsky, Stewart H; Caffo, Brian S

    2014-01-01

    Functional magnetic resonance imaging (fMRI) is a thriving field that plays an important role in medical imaging analysis, biological and neuroscience research and practice. This manuscript gives a didactic introduction to the statistical analysis of fMRI data using the R project, along with the relevant R code. The goal is to give statisticians who would like to pursue research in this area a quick tutorial for programming with fMRI data. References of relevant packages and papers are provided for those interested in more advanced analysis.

  12. Invited Article: Relation between electric and magnetic field structures and their proton-beam images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kugland, N. L.; Ryutov, D. D.; Plechaty, C.

    2012-10-15

    Proton imaging is commonly used to reveal the electric and magnetic fields that are found in high energy density plasmas. Presented here is an analysis of this technique that is directed towards developing additional insight into the underlying physics. This approach considers: formation of images in the limits of weak and strong intensity variations; caustic formation and structure; image inversion to obtain line-integrated field characteristics; direct relations between images and electric or magnetic field structures in a plasma; imaging of sharp features such as Debye sheaths and shocks. Limitations on spatial and temporal resolution are assessed, and similarities with optical shadowgraphy are noted. Synthetic proton images are presented to illustrate the analysis. These results will be useful for quantitatively analyzing experimental proton imaging data and verifying numerical codes.

  13. Fast Exact Search in Hamming Space With Multi-Index Hashing.

    PubMed

    Norouzi, Mohammad; Punjani, Ali; Fleet, David J

    2014-06-01

    There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not used as such, because doing so was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straightforward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
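
    A schematic Python sketch of the substring-indexing idea, assuming 64-bit codes split into four 16-bit substrings; any code within Hamming radius r of a query must match the query exactly in at least one substring whenever r is smaller than the number of substrings. The database here is random, and the paper's table sizing and search-budget details are not reproduced.

        import numpy as np
        from collections import defaultdict

        BITS, CHUNKS = 64, 4
        CHUNK_BITS = BITS // CHUNKS
        MASK = (1 << CHUNK_BITS) - 1

        def substrings(code):
            return [(code >> (i * CHUNK_BITS)) & MASK for i in range(CHUNKS)]

        # Build one hash table per substring position.
        rng = np.random.default_rng(0)
        database = [int(x) for x in rng.integers(0, (1 << 63) - 1, size=100000)]
        tables = [defaultdict(list) for _ in range(CHUNKS)]
        for idx, code in enumerate(database):
            for i, sub in enumerate(substrings(code)):
                tables[i][sub].append(idx)

        def search(query, radius):
            """Exact r-neighbor search for radius < CHUNKS (pigeonhole argument)."""
            candidates = set()
            for i, sub in enumerate(substrings(query)):
                candidates.update(tables[i][sub])
            return [idx for idx in candidates
                    if bin(database[idx] ^ query).count("1") <= radius]

        print(search(database[42], radius=3))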

  14. MicroCT parameters for multimaterial elements assessment

    NASA Astrophysics Data System (ADS)

    de Araújo, Olga M. O.; Silva Bastos, Jaqueline; Machado, Alessandra S.; dos Santos, Thaís M. P.; Ferreira, Cintia G.; Rosifini Alves Claro, Ana Paula; Lopes, Ricardo T.

    2018-03-01

    Microtomography is a non-destructive testing technique for quantitative and qualitative analysis. The investigation of multimaterial elements with large differences in density can result in artifacts that degrade image quality, depending on the combination of additional filters used. The aim of this study is to select the most appropriate parameters for the analysis of bone tissue with a metallic implant. The results show MCNPX simulations of the energy distribution without an additional filter and with aluminum, copper, and brass filters, together with the respective reconstructed images, demonstrating the importance of these parameter choices in the image acquisition process in computed microtomography.

  15. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Wynn, L. K.

    1985-01-01

    The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS, and were then combined utilizing IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The numbers of pixels comprising each CCM region were compared between the two map images, and the percent agreement between each pair of regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.997. In some cases, the IBIS-calculated map code differed from the DMA codes: analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.
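
    A hedged numpy sketch of the kind of per-region comparison reported above; the region codes, map contents, and agreement statistic are invented for illustration and are not the DMA data.

        import numpy as np

        rng = np.random.default_rng(1)
        codes = [1, 2, 3, 4]                                  # hypothetical CCM region codes
        manual = rng.choice(codes, size=(200, 200), p=[0.4, 0.3, 0.2, 0.1])
        # Simulated automated map that agrees with the manual map ~90% of the time.
        automated = np.where(rng.random((200, 200)) < 0.9,
                             manual, rng.choice(codes, size=(200, 200)))

        manual_counts = np.array([(manual == c).sum() for c in codes], dtype=float)
        auto_counts = np.array([(automated == c).sum() for c in codes], dtype=float)

        # Percent agreement between regional pixel counts, per region code.
        agreement = 100.0 * np.minimum(manual_counts, auto_counts) / np.maximum(manual_counts, auto_counts)
        print("per-region agreement (%):", np.round(agreement, 2))

        # Pearson correlation between the two sets of counts (bounded by +/-1).
        r = np.corrcoef(manual_counts, auto_counts)[0, 1]
        print("Pearson r:", round(r, 3))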

  16. Optimization of sparse synthetic transmit aperture imaging with coded excitation and frequency division.

    PubMed

    Behar, Vera; Adam, Dan

    2005-12-01

    An effective aperture approach is used for optimization of a sparse synthetic transmit aperture (STA) imaging system with coded excitation and frequency division. A new two-stage algorithm is proposed for optimization of both the positions of the transmit elements and the weights of the receive elements. In order to increase the signal-to-noise ratio in a synthetic aperture system, temporal encoding of the excitation signals is employed. When comparing the excitation by linear frequency modulation (LFM) signals and phase shift key modulation (PSKM) signals, the analysis shows that chirps are better for excitation, since at the output of a compression filter the sidelobes generated are much smaller than those produced by the binary PSKM signals. Here, an implementation of fast STA imaging is studied using spatial encoding with frequency division of the LFM signals. The proposed system employs a 64-element array with only four active elements used during transmit. The two-dimensional point spread function (PSF) produced by such a sparse STA system is compared to the PSF produced by an equivalent phased array system, using the Field II simulation program. The analysis demonstrates the superiority of the new sparse STA imaging system while using coded excitation and frequency division. Compared to a conventional phased array imaging system, this system acquires images of equivalent quality 60 times faster, when the transmit elements are fired in pairs consecutively and the power level used during transmit is very low. The fastest acquisition time is achieved when all transmit elements are fired simultaneously, which improves detectability, but at the cost of a slight degradation of the axial resolution. In real-time implementation, however, it must be borne in mind that the frame rate of a STA imaging system depends not only on the acquisition time of the data but also on the processing time needed for image reconstruction. Compared to phased array imaging, a significant increase in the frame rate of a STA imaging system is possible if and only if an equivalently time-efficient algorithm is used for image reconstruction.
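
    A brief numpy sketch of linear FM (chirp) excitation and matched-filter compression, the mechanism the analysis above compares against PSKM excitation; the sampling rate, bandwidth, and pulse length are arbitrary illustrative values rather than the parameters used in the study.

        import numpy as np

        fs = 100e6                          # sampling rate (Hz), illustrative
        t = np.arange(0, 10e-6, 1 / fs)     # 10 microsecond pulse
        f_start, bandwidth = 2e6, 2e6       # start frequency and sweep bandwidth (Hz)

        # Linear frequency-modulated (chirp) excitation pulse.
        sweep_rate = bandwidth / t[-1]
        chirp = np.sin(2 * np.pi * (f_start * t + 0.5 * sweep_rate * t ** 2))

        # Pulse compression: correlate the noisy received signal with the transmitted pulse.
        rng = np.random.default_rng(0)
        received = chirp + 0.1 * rng.standard_normal(chirp.size)
        compressed = np.correlate(received, chirp, mode="full")
        compressed /= np.abs(compressed).max()

        # The long pulse is compressed to a narrow mainlobe of roughly fs/bandwidth samples.
        mainlobe_width = int(np.sum(np.abs(compressed) > 0.5))
        print("samples above half maximum after compression:", mainlobe_width)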

  17. Imaging and image restoration of an on-axis three-mirror Cassegrain system with wavefront coding technology.

    PubMed

    Guo, Xiaohu; Dong, Liquan; Zhao, Yuejin; Jia, Wei; Kong, Lingqin; Wu, Yijian; Li, Bing

    2015-04-01

    Wavefront coding (WFC) technology is adopted in the space optical system to resolve the problem of defocus caused by temperature difference or vibration of satellite motion. According to the theory of WFC, we calculate and optimize the phase mask parameter of the cubic phase mask plate, which is used in an on-axis three-mirror Cassegrain (TMC) telescope system. The simulation analysis and the experimental results indicate that the defocused modulation transfer function curves and the corresponding blurred images have a perfect consistency in the range of 10 times the depth of focus (DOF) of the original TMC system. After digital image processing by a Wiener filter, the spatial resolution of the restored images is up to 57.14 line pairs/mm. The results demonstrate that the WFC technology in the TMC system has superior performance in extending the DOF and less sensitivity to defocus, which has great value in resolving the problem of defocus in the space optical system.
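
    A minimal frequency-domain Wiener restoration sketch in numpy/scipy, standing in for the Wiener filtering step mentioned above. The Gaussian blur is a placeholder for the (known) PSF of the coded system, and the constant noise-to-signal ratio is an assumption of this sketch, not a parameter from the paper.

        import numpy as np
        from scipy import ndimage

        def wiener_deconvolve(blurred, psf, nsr=1e-3):
            """Restore an image with a known PSF via a frequency-domain Wiener filter."""
            psf_pad = np.zeros_like(blurred, dtype=float)
            psf_pad[:psf.shape[0], :psf.shape[1]] = psf
            # Centre the PSF at the origin so the filter introduces no shift.
            psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
            H = np.fft.fft2(psf_pad)
            G = np.fft.fft2(blurred)
            F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
            return np.real(np.fft.ifft2(F_hat))

        # Placeholder scene and Gaussian blur standing in for the defocused PSF.
        image = np.zeros((128, 128))
        image[48:80, 48:80] = 1.0
        sigma = 2.0
        blurred = ndimage.gaussian_filter(image, sigma)

        y, x = np.mgrid[-7:8, -7:8]
        psf = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
        psf /= psf.sum()

        restored = wiener_deconvolve(blurred, psf)
        print("mean absolute error, blurred :", float(np.abs(blurred - image).mean()))
        print("mean absolute error, restored:", float(np.abs(restored - image).mean()))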

  18. Pyramid image codes

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1990-01-01

    All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.

  19. Validation of the Electromagnetic Code FACETS for Numerical Simulation of Radar Target Images

    DTIC Science & Technology

    2009-12-01

    S. Wong, DRDC Ottawa. Validation of the electromagnetic code FACETS for simulating radar images of a target is obtained through direct simulation-to-measurement comparisons. A 3-dimensional computer-aided design…

  20. Optical image encryption based on real-valued coding and subtracting with the help of QR code

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng

    2015-08-01

    A novel optical image encryption based on real-valued coding and subtracting is proposed with the help of a quick response (QR) code. In the encryption process, the original image is first transformed into the corresponding QR code, which is then encoded into two phase-only masks (POMs) by using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone-collected results show that the method is feasible and has strong tolerance to noise, to phase differences, and to the ratio between the intensities of the two decryption light beams.

  1. Side information in coded aperture compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Galvis, Laura; Arguello, Henry; Lau, Daniel; Arce, Gonzalo R.

    2017-02-01

    Coded aperture compressive spectral imagers sense a three-dimensional cube by using two-dimensional projections of the coded and spectrally dispersed source. These imaging systems often rely on focal plane array (FPA) detectors, spatial light modulators (SLMs), digital micromirror devices (DMDs), and dispersive elements. The use of DMDs to implement the coded apertures facilitates the capture of multiple projections, each admitting a different coded aperture pattern. The DMD makes it possible not only to collect a sufficient number of measurements for spectrally rich or spatially detailed scenes, but also to design the spatial structure of the coded apertures so as to maximize the information content of the compressive measurements. Although sparsity is the only signal characteristic usually assumed for reconstruction in compressive sensing, other forms of prior information, such as side information, have been included as a way to improve the quality of the reconstructions. This paper presents the coded aperture design in a compressive spectral imager with side information in the form of RGB images of the scene. The use of RGB images as side information in the compressive sensing architecture has two main advantages: the RGB image is used not only to improve the reconstruction quality but also to optimally design the coded apertures for the sensing process. The coded aperture design is based on the RGB scene, and thus the coded aperture structure exploits key features such as scene edges. Real reconstructions of noisy compressed measurements demonstrate the benefit of the designed coded apertures, in addition to the improvement in the reconstruction quality obtained by the use of side information.

  2. A novel three-dimensional image reconstruction method for near-field coded aperture single photon emission computerized tomography

    PubMed Central

    Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa

    2009-01-01

    Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered-subsets expectation maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769

  3. The design of wavefront coded imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Shun; Cen, Zhaofeng; Li, Xiaotong

    2016-10-01

    Wavefront coding is a new method to extend the depth of field that combines optical design and signal processing. Using the optical design software ZEMAX, we designed a practical wavefront coded imaging system based on a conventional Cooke triplet. Unlike a conventional optical system, the wavefront of this new system is modulated by a specially designed phase mask, which makes the point spread function (PSF) of the optical system insensitive to defocus. Therefore, a series of similarly blurred images is obtained at the image plane. In addition, the optical transfer function (OTF) of the wavefront coded imaging system is nearly independent of focus: it is almost constant with misfocus and has no regions of zeros. All object information can therefore be recovered through digital filtering at different defocus positions. The focus invariance of the MTF is selected as the merit function in this design, and the coefficients of the phase mask are set as optimization goals. Compared to the conventional optical system, the wavefront coded imaging system obtains better quality images over a range of object distances. Some deficiencies appear in the restored images due to the influence of the digital filtering algorithm, and these are also analyzed in this paper. The depth of field of the designed wavefront coded imaging system is about 28 times larger than that of the initial optical system, while keeping high optical power and resolution at the image plane.

  4. Coded Excitation Plane Wave Imaging for Shear Wave Motion Detection

    PubMed Central

    Song, Pengfei; Urban, Matthew W.; Manduca, Armando; Greenleaf, James F.; Chen, Shigao

    2015-01-01

    Plane wave imaging has greatly advanced the field of shear wave elastography thanks to its ultrafast imaging frame rate and the large field-of-view (FOV). However, plane wave imaging also has decreased penetration due to lack of transmit focusing, which makes it challenging to use plane waves for shear wave detection in deep tissues and in obese patients. This study investigated the feasibility of implementing coded excitation in plane wave imaging for shear wave detection, with the hypothesis that coded ultrasound signals can provide superior detection penetration and shear wave signal-to-noise-ratio (SNR) compared to conventional ultrasound signals. Both phase encoding (Barker code) and frequency encoding (chirp code) methods were studied. A first phantom experiment showed an approximate penetration gain of 2-4 cm for the coded pulses. Two subsequent phantom studies showed that all coded pulses outperformed the conventional short imaging pulse by providing superior sensitivity to small motion and robustness to weak ultrasound signals. Finally, an in vivo liver case study on an obese subject (Body Mass Index = 40) demonstrated the feasibility of using the proposed method for in vivo applications, and showed that all coded pulses could provide higher SNR shear wave signals than the conventional short pulse. These findings indicate that by using coded excitation shear wave detection, one can benefit from the ultrafast imaging frame rate and large FOV provided by plane wave imaging while preserving good penetration and shear wave signal quality, which is essential for obtaining robust shear elasticity measurements of tissue. PMID:26168181
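
    A short numpy sketch of phase-encoded (Barker-13) transmission and matched-filter decoding, the mechanism behind the SNR gain discussed above; the code length, noise level, and echo amplitude are illustrative and unrelated to the ultrasound parameters used in the study.

        import numpy as np

        barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

        rng = np.random.default_rng(0)
        # Weak echo buried in noise: the coded pulse arrives at sample 100 with amplitude 0.2.
        received = 0.1 * rng.standard_normal(512)
        received[100:100 + barker13.size] += 0.2 * barker13

        # Matched filtering concentrates the pulse energy into a single sample,
        # improving the effective SNR by roughly the code length (about 11 dB for N = 13).
        compressed = np.correlate(received, barker13, mode="valid")
        print("estimated arrival sample:", int(np.argmax(compressed)))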

  5. Research on compressive sensing reconstruction algorithm based on total variation model

    NASA Astrophysics Data System (ADS)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing breaks through the limit of the Nyquist sampling theorem and provides a strong theoretical basis for carrying out compressive sampling of image signals. In imaging procedures based on compressed sensing theory, it not only reduces the required storage space but also greatly reduces the demand for detector resolution. Using the sparsity of the image signal and solving the mathematical model of the inverse reconstruction problem, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and preserves edge information better. To verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes, and the reconstruction effect for the different coding methods is analyzed to verify the stability of the algorithm. This paper also compares and analyzes typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with the traditional classical TV-based algorithm, the proposed reconstruction algorithm has great advantages and can quickly and accurately recover the target image at low measurement rates.
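
    The abstract describes a TV-regularized reconstruction solved with an augmented Lagrangian term and the alternating direction method; the sketch below is a deliberately simpler illustration of the same total-variation idea, a gradient-descent TV denoiser on a synthetic image, not the paper's ADMM reconstruction from compressive measurements.

        import numpy as np

        def tv_denoise(noisy, lam=0.15, step=0.2, iters=200, eps=1e-6):
            """Minimise 0.5*||u - noisy||^2 + lam*TV(u) with a smoothed TV term."""
            u = noisy.copy()
            for _ in range(iters):
                gx = np.diff(u, axis=1, append=u[:, -1:])      # forward differences
                gy = np.diff(u, axis=0, append=u[-1:, :])
                mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
                px, py = gx / mag, gy / mag
                # Divergence of the normalised gradient field (backward differences).
                div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
                u -= step * ((u - noisy) - lam * div)
            return u

        rng = np.random.default_rng(0)
        clean = np.zeros((64, 64))
        clean[16:48, 16:48] = 1.0
        noisy = clean + 0.2 * rng.standard_normal(clean.shape)
        denoised = tv_denoise(noisy)
        print("noisy RMSE:   ", float(np.sqrt(((noisy - clean) ** 2).mean())))
        print("denoised RMSE:", float(np.sqrt(((denoised - clean) ** 2).mean())))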

  6. Real-time computer treatment of THz passive device images with the high image quality

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate a real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for passive THz devices: it can be applied to any such device as well as to active THz imaging systems. We applied the code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that processing images produced by different devices usually requires different spatial filters. The performance of the current version of the code is greater than one image per second for a THz image with more than 5000 pixels and a 24-bit number representation. Processing a single THz image produces about 20 output images simultaneously, corresponding to the various spatial filters. The code allows the number of pixels in processed images to be increased without a noticeable reduction of image quality, and its performance can be increased many times using parallel image processing algorithms. We developed original spatial filters that allow one to see objects with sizes of less than 2 cm in imagery produced by passive THz devices that captured objects hidden under opaque clothes. For images with high noise we developed an approach that suppresses the noise during computer processing and yields a good quality image. To illustrate the efficiency of the developed approach we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate, and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and represent a very promising solution to the security problem.

  7. A Spherical Active Coded Aperture for 4π Gamma-ray Imaging

    DOE PAGES

    Hellfeld, Daniel; Barton, Paul; Gunter, Donald; ...

    2017-09-22

    Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. However, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. To improve upon this limitation, we introduce a novel design that rearranges the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm³ CdZnTe coplanar grid detectors on a 14 cm diameter sphere with 192 available detector locations.

  8. QR code based noise-free optical encryption and decryption of a gray scale image

    NASA Astrophysics Data System (ADS)

    Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-03-01

    In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.

  9. Women Out of View. An Analysis of Female Characters on 1987-88 TV Programs.

    ERIC Educational Resources Information Center

    Steenland, Sally; Whittemore, Lauren

    This study of the images of women as portrayed on new television programs in 1987-88 not only compared them with the images of the last season, but examined the similarities and differences between these characters and real life women. Each continuing female character on every new show was coded for race, age, occupation, marital and socioeconomic…

  10. Coded mask telescopes for X-ray astronomy

    NASA Astrophysics Data System (ADS)

    Skinner, G. K.; Ponman, T. J.

    1987-04-01

    The principles of the coded mask technique are discussed together with the methods of image reconstruction. The coded mask telescopes built at the University of Birmingham, including the SL 1501 coded mask X-ray telescope flown on the Skylark rocket and the Coded Mask Imaging Spectrometer (COMIS) projected for the Soviet space station Mir, are described. A diagram of a coded mask telescope and some designs for coded masks are included.
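
    A toy numpy sketch of the coded mask principle referred to above: a point source casts a shifted shadow of the mask on the detector, and correlating the recorded shadowgram with the mask pattern recovers the source position. The random mask, circular geometry, and noise level are illustrative; real instruments use designed patterns such as uniformly redundant arrays and account for the true telescope geometry.

        import numpy as np

        rng = np.random.default_rng(0)
        mask = rng.integers(0, 2, size=(33, 33)).astype(float)   # open/closed elements

        # Shadow of the mask cast by an off-axis point source (shift in mask-element units).
        true_shift = (5, -3)
        shadow = np.roll(mask, true_shift, axis=(0, 1)) + 0.1 * rng.standard_normal(mask.shape)

        # Correlation decoding: the reconstructed "sky" peaks at the source position.
        mask_zero_mean = mask - mask.mean()
        sky = np.real(np.fft.ifft2(np.fft.fft2(shadow) * np.conj(np.fft.fft2(mask_zero_mean))))
        peak = np.unravel_index(np.argmax(sky), sky.shape)
        print("recovered shift:", peak)   # expected (5, 30), i.e. (5, -3) modulo 33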

  11. 3D-HST WFC3-selected Photometric Catalogs in the Five CANDELS/3D-HST Fields: Photometry, Photometric Redshifts, and Stellar Masses

    NASA Astrophysics Data System (ADS)

    Skelton, Rosalind E.; Whitaker, Katherine E.; Momcheva, Ivelina G.; Brammer, Gabriel B.; van Dokkum, Pieter G.; Labbé, Ivo; Franx, Marijn; van der Wel, Arjen; Bezanson, Rachel; Da Cunha, Elisabete; Fumagalli, Mattia; Förster Schreiber, Natascha; Kriek, Mariska; Leja, Joel; Lundgren, Britt F.; Magee, Daniel; Marchesini, Danilo; Maseda, Michael V.; Nelson, Erica J.; Oesch, Pascal; Pacifici, Camilla; Patel, Shannon G.; Price, Sedona; Rix, Hans-Walter; Tal, Tomer; Wake, David A.; Wuyts, Stijn

    2014-10-01

    The 3D-HST and CANDELS programs have provided WFC3 and ACS spectroscopy and photometry over ≈900 arcmin2 in five fields: AEGIS, COSMOS, GOODS-North, GOODS-South, and the UKIDSS UDS field. All these fields have a wealth of publicly available imaging data sets in addition to the Hubble Space Telescope (HST) data, which makes it possible to construct the spectral energy distributions (SEDs) of objects over a wide wavelength range. In this paper we describe a photometric analysis of the CANDELS and 3D-HST HST imaging and the ancillary imaging data at wavelengths 0.3-8 μm. Objects were selected in the WFC3 near-IR bands, and their SEDs were determined by carefully taking the effects of the point-spread function in each observation into account. A total of 147 distinct imaging data sets were used in the analysis. The photometry is made available in the form of six catalogs: one for each field, as well as a master catalog containing all objects in the entire survey. We also provide derived data products: photometric redshifts, determined with the EAZY code, and stellar population parameters determined with the FAST code. We make all the imaging data that were used in the analysis available, including our reductions of the WFC3 imaging in all five fields. 3D-HST is a spectroscopic survey with the WFC3 and ACS grisms, and the photometric catalogs presented here constitute a necessary first step in the analysis of these grism data. All the data presented in this paper are available through the 3D-HST Web site (http://3dhst.research.yale.edu).

  12. 3D-HST WFC3-SELECTED PHOTOMETRIC CATALOGS IN THE FIVE CANDELS/3D-HST FIELDS: PHOTOMETRY, PHOTOMETRIC REDSHIFTS, AND STELLAR MASSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skelton, Rosalind E.; Whitaker, Katherine E.; Momcheva, Ivelina G.

    The 3D-HST and CANDELS programs have provided WFC3 and ACS spectroscopy and photometry over ≈900 arcmin² in five fields: AEGIS, COSMOS, GOODS-North, GOODS-South, and the UKIDSS UDS field. All these fields have a wealth of publicly available imaging data sets in addition to the Hubble Space Telescope (HST) data, which makes it possible to construct the spectral energy distributions (SEDs) of objects over a wide wavelength range. In this paper we describe a photometric analysis of the CANDELS and 3D-HST HST imaging and the ancillary imaging data at wavelengths 0.3-8 μm. Objects were selected in the WFC3 near-IR bands, and their SEDs were determined by carefully taking the effects of the point-spread function in each observation into account. A total of 147 distinct imaging data sets were used in the analysis. The photometry is made available in the form of six catalogs: one for each field, as well as a master catalog containing all objects in the entire survey. We also provide derived data products: photometric redshifts, determined with the EAZY code, and stellar population parameters determined with the FAST code. We make all the imaging data that were used in the analysis available, including our reductions of the WFC3 imaging in all five fields. 3D-HST is a spectroscopic survey with the WFC3 and ACS grisms, and the photometric catalogs presented here constitute a necessary first step in the analysis of these grism data. All the data presented in this paper are available through the 3D-HST Web site (http://3dhst.research.yale.edu).

  13. Effect of color coding and subtraction on the accuracy of contrast echocardiography

    NASA Technical Reports Server (NTRS)

    Pasquet, A.; Greenberg, N.; Brunken, R.; Thomas, J. D.; Marwick, T. H.

    1999-01-01

    BACKGROUND: Contrast echocardiography may be used to assess myocardial perfusion. However, gray scale assessment of myocardial contrast echocardiography (MCE) is difficult because of variations in regional backscatter intensity, difficulties in distinguishing varying shades of gray, and artifacts or attenuation. We sought to determine whether the assessment of rest myocardial perfusion by MCE could be improved with subtraction and color coding. METHODS AND RESULTS: MCE was performed in 31 patients with previous myocardial infarction using a 2nd generation agent (NC100100, Nycomed AS) with harmonic triggered or continuous imaging; gain settings were kept constant throughout the study. Digitized images were post-processed by subtraction of baseline from contrast data and colorized to reflect the intensity of myocardial contrast. Gray scale MCE alone, MCE combined with baseline images, and subtracted colorized images were scored independently using a 16-segment model. The presence and severity of myocardial contrast abnormalities were compared with perfusion defined by rest MIBI-SPECT. Segments that were not visualized by continuous (17%) or triggered imaging (14%) after color processing were excluded from further analysis. The specificity of gray scale MCE alone (56%) or MCE combined with baseline 2D (47%) was significantly enhanced by subtraction and color coding (76%, p<0.001) of triggered images. The accuracy of the gray scale approaches (respectively 52% and 47%) was increased to 70% (p<0.001). Similarly, for continuous images, the specificity of gray scale MCE with and without baseline comparison was 23% and 42% respectively, compared with 60% after post-processing (p<0.001). The accuracy of colorized images (59%) was also significantly greater than gray scale MCE (43% and 29%, p<0.001). The sensitivity of MCE for both acquisitions was not altered by subtraction. CONCLUSION: Post-processing with subtraction and color coding significantly improves the accuracy and specificity of MCE for detection of perfusion defects.

  14. Digitizing zone maps, using modified LARSYS program. [computer graphics and computer techniques for mapping

    NASA Technical Reports Server (NTRS)

    Giddings, L.; Boston, S.

    1976-01-01

    A method for digitizing zone maps is presented, starting with colored images and producing a final one-channel digitized tape. This method automates the work previously done interactively on the Image-100 and Data Analysis System computers of the Johnson Space Center (JSC) Earth Observations Division (EOD). A color-coded map was digitized through color filters on a scanner to form a digital tape in LARSYS-2 or JSC Universal format. The taped image was classified by the EOD LARSYS program on the basis of training fields included in the image. Numerical values were assigned to all pixels in a given class, and the resulting coded zone map was written on a LARSYS or Universal tape. A unique spatial filter option permitted zones to be made homogeneous and edges of zones to be abrupt transitions from one zone to the next. A zoom option allowed the output image to have arbitrary dimensions in terms of number of lines and number of samples on a line. Printouts of the computer program are given and the images that were digitized are shown.

  15. Regulating alcohol advertising: content analysis of the adequacy of federal and self-regulation of magazine advertisements, 2008-2010.

    PubMed

    Smith, Katherine C; Cukier, Samantha; Jernigan, David H

    2014-10-01

    We analyzed beer, spirits, and alcopop magazine advertisements to determine adherence to federal and voluntary advertising standards. We assessed the efficacy of these standards in curtailing potentially damaging content and protecting public health. We obtained data from a content analysis of a census of 1795 unique advertising creatives for beer, spirits, and alcopops placed in nationally available magazines between 2008 and 2010. We coded creatives for manifest content and adherence to federal regulations and industry codes. Advertisements largely adhered to existing regulations and codes. We assessed only 23 ads as noncompliant with federal regulations and 38 with industry codes. Content consistent with the codes was, however, often culturally positive in terms of aspirational depictions. In addition, creatives included degrading and sexualized images, promoted risky behavior, and made health claims associated with low-calorie content. Existing codes and regulations are largely followed regarding content but do not adequately protect against content that promotes unhealthy and irresponsible consumption and degrades potentially vulnerable populations in its depictions. Our findings suggest further limitations and enhanced federal oversight may be necessary to protect public health.

  16. Hard x ray imaging graphics development and literature search

    NASA Technical Reports Server (NTRS)

    Emslie, A. Gordon

    1991-01-01

    This report presents work performed between June 1990 and June 1991 and has the following objectives: (1) a comprehensive literature search of imaging technology and coded aperture imaging as well as relevant topics relating to solar flares; (2) an analysis of random number generators; and (3) programming simulation models of hard x ray telescopes. All programs are compatible with the NASA/MSFC Space Science Laboratory VAX Cluster and are written in VAX FORTRAN and VAX IDL (Interactive Data Language).

  17. An imaging method of wavefront coding system based on phase plate rotation

    NASA Astrophysics Data System (ADS)

    Yi, Rigui; Chen, Xi; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2018-01-01

    Wave-front coding has great prospects for extending the depth of field of an optical imaging system and reducing optical aberrations, but the image quality and noise performance are inevitably reduced. Based on a theoretical analysis of the wave-front coding system and the phase function expression of the cubic phase plate, this paper analyzes and exploits the feature that the phase function expression is invariant in the new coordinate system when the phase plate is rotated by different angles around the z-axis, and proposes a method based on rotation of the phase plate and image fusion. First, when the phase plate is rotated by a certain angle around the z-axis, the shape and distribution of the PSF obtained on the image surface remain unchanged; the PSF rotation angle and direction are consistent with the rotation of the phase plate. Then, the intermediate blurred image is filtered with the point spread function of the corresponding rotation. Finally, the reconstructed images are fused by the Laplacian pyramid image fusion method and by the Fourier transform spectrum fusion method, and the results are evaluated subjectively and objectively. In this paper, we used Matlab to simulate the images. Using the Laplacian pyramid image fusion method, the signal-to-noise ratio of the image is increased by 19%-27%, the clarity is increased by 11%-15%, and the average gradient is increased by 4%-9%. Using the Fourier transform spectrum fusion method, the signal-to-noise ratio of the image is increased by 14%-23%, the clarity is increased by 6%-11%, and the average gradient is improved by 2%-6%. The experimental results show that image processing by the above method can improve the quality of the restored image, improve image clarity, and effectively preserve image information.
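
    A compact numpy/scipy sketch of the Laplacian pyramid fusion step mentioned above, fusing two differently degraded images by keeping, at each band-pass level, the coefficient with the larger magnitude and averaging the low-pass residual. The two-level pyramid, the max-magnitude rule, and the half-blurred test images are simplifications for illustration.

        import numpy as np
        from scipy import ndimage

        def build_pyramid(img, levels=3):
            """Gaussian/Laplacian pyramid via smoothing and 2x down/upsampling."""
            gauss = [img.astype(float)]
            for _ in range(levels - 1):
                gauss.append(ndimage.zoom(ndimage.gaussian_filter(gauss[-1], 1.0), 0.5))
            lap = [gauss[i] - ndimage.zoom(gauss[i + 1], 2.0) for i in range(levels - 1)]
            return lap + [gauss[-1]]                 # band-pass levels + low-pass residual

        def reconstruct(pyr):
            img = pyr[-1]
            for level in reversed(pyr[:-1]):
                img = level + ndimage.zoom(img, 2.0)
            return img

        def fuse(img_a, img_b, levels=3):
            pyr_a, pyr_b = build_pyramid(img_a, levels), build_pyramid(img_b, levels)
            detail = [np.where(np.abs(a) >= np.abs(b), a, b)
                      for a, b in zip(pyr_a[:-1], pyr_b[:-1])]
            return reconstruct(detail + [0.5 * (pyr_a[-1] + pyr_b[-1])])

        rng = np.random.default_rng(0)
        base = rng.random((64, 64))
        img_a, img_b = base.copy(), base.copy()
        img_a[:, 32:] = ndimage.gaussian_filter(base, 2.0)[:, 32:]   # right half degraded
        img_b[:, :32] = ndimage.gaussian_filter(base, 2.0)[:, :32]   # left half degraded

        for name, img in [("A", img_a), ("B", img_b), ("fused", fuse(img_a, img_b))]:
            print(name, "RMSE vs reference:", float(np.sqrt(((img - base) ** 2).mean())))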

  18. Affordable Imaging Lab for Noninvasive Analysis of Biomass and Early Vigour in Cereal Crops

    PubMed Central

    2018-01-01

    Plant phenotyping by imaging allows automated analysis of plants for various morphological and physiological traits. In this work, we developed a low-cost RGB imaging phenotyping lab (LCP lab) for low-throughput imaging and analysis using affordable imaging equipment and freely available software. The LCP lab, comprising an RGB imaging and analysis pipeline, is set up and demonstrated with early vigour analysis in wheat. Using this lab, a few hundred pots can be photographed in a day and the pots are tracked with QR codes. The software pipeline for both imaging and analysis is built from freely available software. The LCP lab was evaluated for early vigour analysis of five wheat cultivars. A high coefficient of determination (R² = 0.94) was obtained between the dry weight and the projected leaf area of 20-day-old wheat plants, and an R² of 0.9 for the relative growth rate between 10 and 20 days of plant growth. A detailed description of how to set up such a lab is provided together with custom scripts built for imaging and analysis. The LCP lab is an affordable alternative for analysis of cereal crops when access to a high-throughput phenotyping facility is unavailable or when the experiments require growing plants in highly controlled climate chambers. The protocols described in this work are useful for building an affordable imaging system for small-scale research projects and for education. PMID:29850536
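
    A minimal sketch of the projected-leaf-area measurement behind the reported R² values, using only numpy; the greenness threshold, pixel footprint, and synthetic pot image are illustrative stand-ins for the segmentation used in the LCP lab pipeline.

        import numpy as np

        def projected_leaf_area(rgb, pixel_area_mm2=0.05, green_margin=20):
            """Count pixels whose green channel dominates red and blue, and convert
            the count to an area using the known per-pixel footprint."""
            r = rgb[..., 0].astype(int)
            g = rgb[..., 1].astype(int)
            b = rgb[..., 2].astype(int)
            leaf_mask = (g > r + green_margin) & (g > b + green_margin)
            return leaf_mask.sum() * pixel_area_mm2, leaf_mask

        # Synthetic pot image: a green "plant" square on a soil-coloured background.
        img = np.zeros((200, 200, 3), dtype=np.uint8)
        img[...] = (120, 90, 60)
        img[60:140, 60:140] = (40, 160, 50)

        area_mm2, mask = projected_leaf_area(img)
        print("projected leaf area (mm^2):", area_mm2)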

  19. Optical Performance Modeling of FUSE Telescope Mirror

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Ohl, Raymond G.; Friedman, Scott D.; Moos, H. Warren

    2000-01-01

    We describe the Metrology Data Processor (METDAT), the Optical Surface Analysis Code (OSAC), and their application to the image evaluation of the Far Ultraviolet Spectroscopic Explorer (FUSE) mirrors. The FUSE instrument - designed and developed by the Johns Hopkins University and launched in June 1999 - is an astrophysics satellite which provides high resolution spectra (lambda/Delta(lambda) = 20,000 - 25,000) in the wavelength region from 90.5 to 118.7 nm. The FUSE instrument is comprised of four co-aligned, normal incidence, off-axis parabolic mirrors, four Rowland circle spectrograph channels with holographic gratings, and delay line microchannel plate detectors. The OSAC code provides a comprehensive analysis of optical system performance, including the effects of optical surface misalignments, low spatial frequency deformations described by discrete polynomial terms, mid- and high-spatial frequency deformations (surface roughness), and diffraction due to the finite size of the aperture. Both normal incidence (traditionally infrared, visible, and near ultraviolet mirror systems) and grazing incidence (x-ray mirror systems) systems can be analyzed. The code also properly accounts for reflectance losses on the mirror surfaces. Low frequency surface errors are described in OSAC by using Zernike polynomials for normal incidence mirrors and Legendre-Fourier polynomials for grazing incidence mirrors. The scatter analysis of the mirror is based on scalar scatter theory. The program accepts simple autocovariance (ACV) function models or power spectral density (PSD) models derived from mirror surface metrology data as input to the scatter calculation. The end product of the program is a user-defined pixel array containing the system Point Spread Function (PSF). The METDAT routine is used in conjunction with the OSAC program. This code reads in laboratory metrology data in a normalized format. The code then fits the data using Zernike polynomials for normal incidence systems or Legendre-Fourier polynomials for grazing incidence systems. It removes low order terms from the metrology data, calculates statistical ACV or PSD functions, and fits these data to OSAC models for the scatter analysis. In this paper we briefly describe the laboratory image testing of a FUSE spare mirror performed in the near and vacuum ultraviolet at Johns Hopkins University and OSAC modeling of the test setup performed at NASA/GSFC. The test setup is a double-pass configuration consisting of a Hg discharge source, the FUSE off-axis parabolic mirror under test, an autocollimating flat mirror, and a tomographic imaging detector. Two additional, small fold flats are used in the optical train to accommodate the light source and the detector. The modeling is based on Zernike fitting and PSD analysis of surface metrology data measured by both the mirror vendor (Tinsley) and JHU. The results of our models agree well with the laboratory imaging data, thus validating our theoretical model. Finally, we predict the imaging performance of FUSE mirrors in their flight configuration at far-ultraviolet wavelengths.

  20. An edge preserving differential image coding scheme

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1992-01-01

    Differential encoding techniques are fast and easy to implement. However, a major problem with the use of differential encoding for images is the rapid edge degradation encountered when using such systems. This makes differential encoding techniques of limited utility, especially when coding medical or scientific images, where edge preservation is of utmost importance. A simple, easy to implement differential image coding system with excellent edge preservation properties is presented. The coding system can be used over variable rate channels, which makes it especially attractive for use in the packet network environment.
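
    A bare-bones differential (DPCM) image coder in numpy, illustrating why simple previous-pixel prediction stresses edges: the residual is tiny in smooth regions but large at sharp transitions, which is where a coarse quantizer degrades edges. The quantizer step is an arbitrary illustrative value, and the paper's edge-preserving refinements are not reproduced here.

        import numpy as np

        def dpcm_encode(image, step=8):
            """Row-wise previous-pixel DPCM with a uniform quantizer on the residual."""
            image = image.astype(int)
            residuals = np.zeros_like(image)
            recon = np.zeros_like(image)
            for i in range(image.shape[0]):
                prev = 128                         # fixed predictor seed for each row
                for j in range(image.shape[1]):
                    q = int(np.round((image[i, j] - prev) / step))
                    residuals[i, j] = q
                    prev = int(np.clip(prev + q * step, 0, 255))
                    recon[i, j] = prev
            return residuals, recon

        img = np.zeros((32, 32), dtype=np.uint8)
        img[:, 16:] = 200                          # a single sharp vertical edge
        res, rec = dpcm_encode(img)
        print("max |residual| at the edge column:", int(np.abs(res[:, 16]).max()))
        print("max |residual| in the flat region:", int(np.abs(res[:, 17:]).max()))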

  1. Subband coding for image data archiving

    NASA Technical Reports Server (NTRS)

    Glover, Daniel; Kwatra, S. C.

    1993-01-01

    The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.
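
    A one-level 2-D Haar subband split in numpy, of the kind that provides the browsing and progressive-resolution behaviour described above: the low-low band serves as a quarter-size browse image. This is a generic Haar analysis stage, not the specific filter banks evaluated in the paper.

        import numpy as np

        def haar_subbands(img):
            """One level of 2-D Haar analysis; returns the LL, LH, HL, HH subbands."""
            img = img.astype(float)
            a, b = img[:, 0::2], img[:, 1::2]              # horizontal split
            lo, hi = (a + b) / 2.0, (a - b) / 2.0

            def vertical(x):
                return (x[0::2, :] + x[1::2, :]) / 2.0, (x[0::2, :] - x[1::2, :]) / 2.0

            ll, lh = vertical(lo)
            hl, hh = vertical(hi)
            return ll, lh, hl, hh

        img = np.tile(np.arange(64), (64, 1)).astype(float)   # simple ramp test image
        ll, lh, hl, hh = haar_subbands(img)
        print("browse image (LL) shape:", ll.shape)           # quarter-size approximation
        print("mean detail energy:", float((lh ** 2 + hl ** 2 + hh ** 2).mean()))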

  2. Subband coding for image data archiving

    NASA Technical Reports Server (NTRS)

    Glover, D.; Kwatra, S. C.

    1992-01-01

    The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.

  3. Advanced Imaging Optics Utilizing Wavefront Coding.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen

    2015-06-01

    Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.

  4. Characteristic extraction and matching algorithms of ballistic missile in near-space by hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Lu, Li; Sheng, Wen; Liu, Shihua; Zhang, Xianzhi

    2014-10-01

    Ballistic missile hyperspectral data from an imaging spectrometer on a near-space platform are generated by a numerical method. The characteristics of the ballistic missile hyperspectral data are extracted and matched using two different algorithms, called transverse counting and quantization coding. The simulation results show that both algorithms extract the characteristics of the ballistic missile adequately and accurately. The transverse counting algorithm has lower complexity and can be implemented more easily than the quantization coding algorithm. It also shows good immunity to disturbance signals and speeds up the matching and recognition of subsequent targets.

  5. IMAGE EXPLORER: Astronomical Image Analysis on an HTML5-based Web Application

    NASA Astrophysics Data System (ADS)

    Gopu, A.; Hayashi, S.; Young, M. D.

    2014-05-01

    Large datasets produced by recent astronomical imagers cause the traditional paradigm for basic visual analysis - typically downloading one's entire image dataset and using desktop clients like DS9, Aladin, etc. - to not scale, despite advances in desktop computing power and storage. This paper describes Image Explorer, a web framework that offers several of the basic visualization and analysis functions commonly provided by tools like DS9, on any HTML5 capable web browser on various platforms. It uses a combination of the modern HTML5 canvas, JavaScript, and several layers of lossless PNG tiles produced from the FITS image data. Astronomers are able to rapidly and simultaneously open up several images in their web browser, adjust the intensity min/max cutoff, its scaling function, and the zoom level, apply color-maps, view position and FITS header information, execute typically used data reduction codes on the corresponding FITS data using the FRIAA framework, and overlay tiles for source catalog objects, etc.

  6. Thread concept for automatic task parallelization in image analysis

    NASA Astrophysics Data System (ADS)

    Lueckenhaus, Maximilian; Eckstein, Wolfgang

    1998-09-01

    Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when changing the hardware. Therefore it is highly desirable to do the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context, but may follow different paths of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs by taking into account the available hardware. The tests made with our system prototype show that the thread concept combined with the agent paradigm is suitable to speed up image processing by an automatic parallelization of image analysis tasks.
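
    A small Python sketch of the tile-level task parallelism that the thread concept aims to automate, using the standard-library thread pool; the median-filter operator and tile size are placeholders, tile seams are ignored, and the agent-based scheduling described in the paper is not modelled.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor
        from scipy import ndimage

        def process_tile(tile):
            """Placeholder per-tile image analysis operator (here, a median filter)."""
            return ndimage.median_filter(tile, size=3)

        def parallel_filter(image, tile_rows=64, workers=4):
            """Split the image into horizontal tiles and process them in parallel threads."""
            tiles = [image[i:i + tile_rows] for i in range(0, image.shape[0], tile_rows)]
            with ThreadPoolExecutor(max_workers=workers) as pool:
                results = list(pool.map(process_tile, tiles))
            return np.vstack(results)

        rng = np.random.default_rng(0)
        image = rng.random((256, 256))
        print("output shape:", parallel_filter(image).shape)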

  7. Coding and transmission of subband coded images on the Internet

    NASA Astrophysics Data System (ADS)

    Wah, Benjamin W.; Su, Xiao

    2001-09-01

    Subband-coded images can be transmitted in the Internet using either the TCP or the UDP protocol. Delivery by TCP gives superior decoding quality but with very long delays when the network is unreliable, whereas delivery by UDP has negligible delays but with degraded quality when packets are lost. Although images are delivered currently over the Internet by TCP, we study in this paper the use of UDP to deliver multi-description reconstruction-based subband-coded images. First, in order to facilitate recovery from UDP packet losses, we propose a joint sender-receiver approach for designing optimized reconstruction-based subband transform (ORB-ST) in multi-description coding (MDC). Second, we carefully evaluate the delay-quality trade-offs between the TCP delivery of SDC images and the UDP and combined TCP/UDP delivery of MDC images. Experimental results show that our proposed ORB-ST performs well in real Internet tests, and UDP and combined TCP/UDP delivery of MDC images provide a range of attractive alternatives to TCP delivery.

  8. Three-dimensional holoscopic image coding scheme using high-efficiency video coding with kernel-based minimum mean-square-error estimation

    NASA Astrophysics Data System (ADS)

    Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai

    2016-07-01

    Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.

  9. A simplified and powerful image processing methods to separate Thai jasmine rice and sticky rice varieties

    NASA Astrophysics Data System (ADS)

    Khondok, Piyoros; Sakulkalavek, Aparporn; Suwansukho, Kajpanya

    2018-03-01

    Simplified and powerful image processing procedures to separate the paddy of KHAW DOK MALI 105 (Thai jasmine rice) and the paddy of the sticky rice variety RD6 were proposed. The procedures consist of image thresholding, image chain coding, and curve fitting using a polynomial function. From the fitting, three parameters of each variety (perimeter, area, and eccentricity) were calculated. Finally, the overall parameters were determined by using principal component analysis. The results show that these procedures can effectively separate the two varieties.
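
    A brief scikit-image sketch of the shape measurements listed above (area, perimeter, eccentricity) computed from a thresholded grain image; the synthetic ellipse stands in for a segmented rice kernel, and the subsequent polynomial fitting and principal component analysis steps are omitted.

        import numpy as np
        from skimage import measure

        # Synthetic binary image of a single elongated "grain" (an ellipse).
        yy, xx = np.mgrid[0:100, 0:100]
        grain = (((xx - 50) / 30.0) ** 2 + ((yy - 50) / 12.0) ** 2) <= 1.0

        labels = measure.label(grain)
        props = measure.regionprops(labels)[0]
        print("area:", props.area)
        print("perimeter:", round(props.perimeter, 1))
        print("eccentricity:", round(props.eccentricity, 3))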

  10. Pixel Statistical Analysis of Diabetic vs. Non-diabetic Foot-Sole Spectral Terahertz Reflection Images

    NASA Astrophysics Data System (ADS)

    Hernandez-Cardoso, G. G.; Alfaro-Gomez, M.; Rojas-Landeros, S. C.; Salas-Gutierrez, I.; Castro-Camus, E.

    2018-03-01

    In this article, we present a series of hydration mapping images of the foot soles of diabetic and non-diabetic subjects measured by terahertz reflectance. In addition to the hydration images, we present a series of RYG (red-yellow-green) color-coded images where pixels are assigned one of the three colors in order to easily identify areas at risk of ulceration. We also present the statistics of the number of pixels with each color as a potential quantitative indicator for diabetic foot-syndrome deterioration.

  11. 2-Step scalar deadzone quantization for bitplane image coding.

    PubMed

    Auli-Llinas, Francesc

    2013-12-01

    Modern lossy image coding systems generate a quality progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves same coding performance as that of USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
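
    A small numpy illustration of the uniform scalar deadzone quantization (USDQ) that 2SDQ refines; the Laplacian coefficient model, step size, and mid-point reconstruction offset are illustrative assumptions, and the two-step variant itself is not reproduced.

        import numpy as np

        def usdq(coeffs, step):
            """Uniform scalar deadzone quantization: signed indices of the coefficients."""
            return np.sign(coeffs) * np.floor(np.abs(coeffs) / step)

        def dequantize(indices, step, offset=0.5):
            """Mid-point style reconstruction; the deadzone (index 0) maps back to zero."""
            return np.sign(indices) * (np.abs(indices) + offset) * step * (indices != 0)

        rng = np.random.default_rng(0)
        coeffs = rng.laplace(scale=4.0, size=10000)   # wavelet-like coefficient statistics
        idx = usdq(coeffs, step=2.0)
        rec = dequantize(idx, step=2.0)
        print("fraction of coefficients in the deadzone:", float((idx == 0).mean()))
        print("mean squared quantization error:", float(((rec - coeffs) ** 2).mean()))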

  12. Ink-constrained halftoning with application to QR codes

    NASA Astrophysics Data System (ADS)

    Bayeh, Marzieh; Compaan, Erin; Lindsey, Theodore; Orlow, Nathan; Melczer, Stephen; Voller, Zachary

    2014-01-01

    This paper examines adding visually significant, human recognizable data into QR codes without affecting their machine readability by utilizing known methods in image processing. Each module of a given QR code is broken down into pixels, which are halftoned in such a way as to keep the QR code structure while revealing aspects of the secondary image to the human eye. The loss of information associated to this procedure is discussed, and entropy values are calculated for examples given in the paper. Numerous examples of QR codes with embedded images are included.

  13. Product code optimization for determinate state LDPC decoding in robust image transmission.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  14. Video coding for next-generation surveillance systems

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Fahlander, Olov

    1997-02-01

    Video is used as the recording medium in surveillance systems and, increasingly, by the Swedish Police Force. Methods for analyzing video using an image processing system have recently been introduced at the Swedish National Laboratory of Forensic Science, and new methods are in focus in a research project at Linkoping University, Image Coding Group. The accuracy of the result of those forensic investigations often depends on the quality of the video recordings, and one of the major problems when analyzing videos from crime scenes is the poor quality of the recordings. Enhancing poor image quality might add manipulative or subjective effects and does not seem to be the right way of getting reliable analysis results. The surveillance system in use today is mainly based on video techniques, VHS or S-VHS, and the weakest link is the video cassette recorder (VCR). Multiplexers for selecting one of many camera outputs for recording are another problem, as they often filter the video signal, and recording is limited to only one of the available cameras connected to the VCR. A way to get around the problem of poor recording is to simultaneously record all camera outputs digitally. It is also very important to build such a system bearing in mind that image processing analysis methods become more important as a complement to the human eye. Using one or more cameras gives a large amount of data, and the need for data compression is more than obvious. Crime scenes often involve persons or moving objects, and the available coding techniques are more or less useful. Our goal is to propose a possible system, being the best compromise with respect to what needs to be recorded, movements in the recorded scene, loss of information and resolution etc., to secure the efficient recording of the crime and enable forensic analysis. The preventive effect of having a well-functioning surveillance system and well-established image analysis methods is not to be neglected. Aspects of this next generation of digital surveillance systems are discussed in this paper.

  15. LAS - LAND ANALYSIS SYSTEM, VERSION 5.0

    NASA Technical Reports Server (NTRS)

    Pease, P. B.

    1994-01-01

    The Land Analysis System (LAS) is an image analysis system designed to manipulate and analyze digital data in raster format and provide the user with a wide spectrum of functions and statistical tools for analysis. LAS offers these features under VMS with optional image display capabilities for IVAS and other display devices as well as the X-Windows environment. LAS provides a flexible framework for algorithm development as well as for the processing and analysis of image data. Users may choose between mouse-driven commands or the traditional command line input mode. LAS functions include supervised and unsupervised image classification, film product generation, geometric registration, image repair, radiometric correction and image statistical analysis. Data files accepted by LAS include formats such as Multi-Spectral Scanner (MSS), Thematic Mapper (TM) and Advanced Very High Resolution Radiometer (AVHRR). The enhanced geometric registration package now includes both image to image and map to map transformations. The over 200 LAS functions fall into image processing scenario categories which include: arithmetic and logical functions, data transformations, fourier transforms, geometric registration, hard copy output, image restoration, intensity transformation, multispectral and statistical analysis, file transfer, tape profiling and file management among others. Internal improvements to the LAS code have eliminated the VAX VMS dependencies and improved overall system performance. The maximum LAS image size has been increased to 20,000 lines by 20,000 samples with a maximum of 256 bands per image. The catalog management system used in earlier versions of LAS has been replaced by a more streamlined and maintenance-free method of file management. This system is not dependent on VAX/VMS and relies on file naming conventions alone to allow the use of identical LAS file names on different operating systems. While the LAS code has been improved, the original capabilities of the system have been preserved. These include maintaining associated image history, session logging, and batch, asynchronous and interactive mode of operation. The LAS application programs are integrated under version 4.1 of an interface called the Transportable Applications Executive (TAE). TAE 4.1 has four modes of user interaction: menu, direct command, tutor (or help), and dynamic tutor. In addition TAE 4.1 allows the operation of LAS functions using mouse-driven commands under the TAE-Facelift environment provided with TAE 4.1. These modes of operation allow users, from the beginner to the expert, to exercise specific application options. LAS is written in C-language and FORTRAN 77 for use with DEC VAX computers running VMS with approximately 16Mb of physical memory. This program runs under TAE 4.1. Since TAE 4.1 is not a current version of TAE, TAE 4.1 is included within the LAS distribution. Approximately 130,000 blocks (65Mb) of disk storage space are necessary to store the source code and files generated by the installation procedure for LAS and 44,000 blocks (22Mb) of disk storage space are necessary for TAE 4.1 installation. The only other dependencies for LAS are the subroutine libraries for the specific display device(s) that will be used with LAS/DMS (e.g. X-Windows and/or IVAS). The standard distribution medium for LAS is a set of two 9track 6250 BPI magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. 
    This program was developed in 1986 and last updated in 1992.

  16. SiNC: Saliency-injected neural codes for representation and efficient retrieval of medical radiographs

    PubMed Central

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2017-01-01

    Medical image collections contain a wealth of information which can assist radiologists and medical experts in diagnosis and disease detection for making well-informed decisions. However, this objective can only be realized if efficient access is provided to semantically relevant cases from the ever-growing medical image repositories. In this paper, we present an efficient method for representing medical images by incorporating visual saliency and deep features obtained from a fine-tuned convolutional neural network (CNN) pre-trained on natural images. A saliency detector is employed to automatically identify regions of interest, such as tumors, fractures, and calcified spots, in images prior to feature extraction. Neuronal activation features, termed neural codes, from different CNN layers are comprehensively studied to identify the most appropriate features for representing radiographs. This study revealed that neural codes from the last fully connected layer of the fine-tuned CNN are the most suitable for representing medical images. The neural codes extracted from the entire image and the salient part of the image are fused to obtain the saliency-injected neural codes (SiNC) descriptor, which is used for indexing and retrieval. Finally, locality-sensitive hashing techniques are applied to the SiNC descriptor to acquire short binary codes for efficient retrieval in large-scale image collections. Comprehensive experimental evaluations on the radiology images dataset reveal that the proposed framework achieves high retrieval accuracy and efficiency for scalable image retrieval applications and compares favorably with existing approaches. PMID:28771497
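
    The indexing step can be illustrated with a small sketch: fuse the whole-image and salient-region neural codes (feature extraction from the fine-tuned CNN is assumed to have happened already) and hash the fused descriptor with random hyperplanes, a common locality-sensitive hashing scheme. The concatenation-based fusion and all names below are illustrative assumptions, not the paper's exact procedure:

        import numpy as np

        def sinc_descriptor(global_code, salient_code):
            """Fuse whole-image and salient-region neural codes (here by simple
            concatenation followed by L2 normalisation)."""
            d = np.concatenate([global_code, salient_code]).astype(np.float64)
            return d / (np.linalg.norm(d) + 1e-12)

        def lsh_binary_code(descriptor, n_bits=64, seed=0):
            """Random-hyperplane LSH: project onto random directions, keep the signs."""
            rng = np.random.default_rng(seed)
            planes = rng.standard_normal((n_bits, descriptor.size))
            return (planes @ descriptor > 0).astype(np.uint8)

        # usage with hypothetical 4096-d fc-layer codes of the full image and the salient crop
        g, s = np.random.rand(4096), np.random.rand(4096)
        short_code = lsh_binary_code(sinc_descriptor(g, s))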

  17. The optimal code searching method with an improved criterion of coded exposure for remote sensing image restoration

    NASA Astrophysics Data System (ADS)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-03-01

    Coded exposure photography makes motion deblurring a well-posed problem. In coded exposure, the integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is critical for coded exposure. In this paper, an improved criterion for optimal code searching is proposed by analyzing the relationship between code length and the number of ones in the code, and by considering the effect of noise on code selection using an affine noise model. The optimal code is then obtained with a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time required to search for the optimal code decreases with the presented method, and the restored image has better subjective quality and superior objective evaluation values.
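
    To make the role of the selection criterion concrete, here is a toy version of flutter-code scoring and search: candidate binary codes with a fixed length and number of ones are scored by the weakest magnitude of their frequency response (a standard broadband criterion), with a plain random search standing in for the paper's genetic algorithm; the improved criterion and the affine noise model from the paper are not reproduced here:

        import numpy as np

        def code_score(code, n_freq=256):
            """Score a binary flutter-shutter code by the minimum magnitude of its
            frequency response; a flat, zero-free spectrum keeps deblurring well-posed."""
            return np.abs(np.fft.fft(code, n=n_freq)).min()

        def search_code(length=52, ones=26, trials=20000, seed=1):
            """Random search stand-in for a genetic algorithm over candidate codes."""
            rng = np.random.default_rng(seed)
            best, best_score = None, -np.inf
            for _ in range(trials):
                code = np.zeros(length)
                code[rng.choice(length, ones, replace=False)] = 1.0
                s = code_score(code)
                if s > best_score:
                    best, best_score = code, s
            return best, best_score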

  18. A Degree Distribution Optimization Algorithm for Image Transmission

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Yang, Junjie

    2016-09-01

    The Luby Transform (LT) code is the first practical implementation of a digital fountain code. The coding behavior of an LT code is mainly determined by its degree distribution, which governs the relationship between source data and codewords. Two degree distributions were suggested by Luby; they work well in typical situations but are not optimal when the number of encoding symbols is finite. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme of sparse degrees for LT codes is introduced; the probability distribution is then optimized according to the selected degrees. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause loss of synchronization between the encoder and the decoder, so the proposed algorithm is designed for the image transmission setting. Moreover, optimal class partitioning is studied for image transmission with unequal error protection. The experimental results are quite promising: compared with an LT code using the robust soliton distribution, the proposed algorithm noticeably improves the final quality of the recovered images at the same overhead.
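
    For reference, the robust soliton distribution that the proposed algorithm is compared against can be generated as follows (standard construction; c and delta are tuning constants and k is the number of source symbols, with the values below chosen only for illustration):

        import numpy as np

        def robust_soliton(k, c=0.1, delta=0.5):
            """Robust soliton degree distribution for an LT code over k input symbols."""
            R = c * np.log(k / delta) * np.sqrt(k)
            d = np.arange(1, k + 1, dtype=float)
            rho = np.where(d == 1, 1.0 / k, 1.0 / (d * (d - 1)))   # ideal soliton part
            tau = np.zeros(k)
            spike = int(round(k / R))
            tau[:spike - 1] = R / (d[:spike - 1] * k)              # d = 1 .. k/R - 1
            tau[spike - 1] = R * np.log(R / delta) / k             # spike at d = k/R
            mu = rho + tau
            return mu / mu.sum()

        # sample degrees for encoding symbols when k = 1000
        p = robust_soliton(1000)
        degrees = np.random.choice(np.arange(1, 1001), size=10, p=p)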

  19. Image authentication using distributed source coding.

    PubMed

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.

  20. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    NASA Astrophysics Data System (ADS)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high-performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as the secret key and are shared with the authorized user, Bob. The information is first encoded by Alice into a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. The measurement results from the GI optical system's bucket detector constitute the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using GI and CS techniques, and further recovers the information by QR decoding. Experimental and numerical simulation results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is as high as 60% for the given number of measurements. With the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
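
    The decryption step that Bob performs can be illustrated, in its simplest (non-compressive) form, as the usual computational ghost imaging correlation between the bucket-detector values and the known illumination patterns derived from the key; a CS solver would replace this correlation to reduce the number of measurements. The array shapes and the toy object below are illustrative assumptions:

        import numpy as np

        def ghost_image(patterns, bucket):
            """Correlation-based computational ghost imaging reconstruction.
            patterns : (N, H, W) illumination intensity patterns (known from the key)
            bucket   : (N,) single-pixel bucket-detector measurements (the ciphertext)"""
            return np.tensordot(bucket - bucket.mean(),
                                patterns - patterns.mean(axis=0), axes=(0, 0)) / len(bucket)

        # toy example: "encrypt" a small binary image and reconstruct it
        obj = (np.random.rand(32, 32) > 0.5).astype(float)
        patterns = np.random.rand(4000, 32, 32)
        bucket = (patterns * obj).sum(axis=(1, 2))
        recon = ghost_image(patterns, bucket)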

  1. Three-Dimensional Terahertz Coded-Aperture Imaging Based on Single Input Multiple Output Technology.

    PubMed

    Chen, Shuo; Luo, Chenggao; Deng, Bin; Wang, Hongqiang; Cheng, Yongqiang; Zhuang, Zhaowen

    2018-01-19

    As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporal independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single input multiple output (SIMO) technology, which can reduce the coding and sampling times sharply. The coded aperture applied in the proposed TCAI architecture loads either purposive or random phase modulation factor. In the transmitting process, the purposive phase modulation factor drives the terahertz beam to scan the divided 3D imaging cells. In the receiving process, the random phase modulation factor is adopted to modulate the terahertz wave to be spatiotemporally independent for high resolution. Considering human-scale targets, images of each 3D imaging cell are reconstructed one by one to decompose the global computational complexity, and then are synthesized together to obtain the complete high-resolution image. As for each imaging cell, the multi-resolution imaging method helps to reduce the computational burden on a large-scale reference-signal matrix. The experimental results demonstrate that the proposed architecture can achieve high-resolution imaging with much less time for 3D targets and has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.

  2. Encryption of QR code and grayscale image in interference-based scheme with high quality retrieval and silhouette problem removal

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Wang, Hongjuan; Wang, Zhipeng; Gong, Qiong; Wang, Danchen

    2016-09-01

    In optical interference-based encryption (IBE) schemes, the currently available methods have to employ iterative algorithms in order to encrypt two images and retrieve cross-talk-free decrypted images. In this paper, we show that this goal can be achieved via an analytical process if one of the two images is a QR code. For decryption, the QR code is decrypted in the conventional architecture, and the decrypted result has a noisy appearance. Nevertheless, the robustness of the QR code against noise enables accurate acquisition of its content from the noisy retrieval, so the primary QR code can be exactly regenerated. Thereafter, a novel optical architecture is proposed to recover the grayscale image with the aid of the QR code. In addition, the proposal entirely eliminates the silhouette problem present in previous IBE schemes, and its effectiveness and feasibility have been demonstrated by numerical simulations.

  3. Integrated Idl Tool For 3d Modeling And Imaging Data Analysis

    NASA Astrophysics Data System (ADS)

    Nita, Gelu M.; Fleishman, G. D.; Gary, D. E.; Kuznetsov, A. A.; Kontar, E. P.

    2012-05-01

    Addressing many key problems in solar physics requires detailed analysis of non-simultaneous imaging data obtained in various wavelength domains with different spatial resolutions, and their comparison with each other, supported by advanced 3D physical models. To facilitate achieving this goal, we have undertaken major enhancements and improvements of IDL-based simulation tools developed earlier for modeling microwave and X-ray emission. The greatly enhanced object-based architecture provides an interactive graphical user interface that allows the user i) to import photospheric magnetic field maps and perform magnetic field extrapolations to almost instantly generate 3D magnetic field models, ii) to investigate the magnetic topology of these models by interactively creating magnetic field lines and associated magnetic field tubes, iii) to populate them with user-defined nonuniform thermal plasma and anisotropic nonuniform nonthermal electron distributions; and iv) to calculate the spatial and spectral properties of radio and X-ray emission. The application integrates DLLs and shared libraries containing fast gyrosynchrotron emission codes developed in FORTRAN and C++, soft and hard X-ray codes developed in IDL, and a potential field extrapolation DLL based on original FORTRAN code developed by V. Abramenko and V. Yurchishin. The interactive interface allows users to add any user-defined IDL or externally callable radiation code, as well as user-defined magnetic field extrapolation routines. To illustrate the tool's capabilities, we present a step-by-step live computation of microwave and X-ray images from realistic magnetic structures obtained from a magnetic field extrapolation preceding a real event, and compare them with the actual imaging data produced by the NORH and RHESSI instruments. This work was supported in part by NSF grants AGS-0961867, AST-0908344, AGS-0969761, and NASA grants NNX10AF27G and NNX11AB49G to New Jersey Institute of Technology, by a UK STFC rolling grant, the Leverhulme Trust, UK, and by the European Commission through the Radiosun and HESPE Networks.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrison, Cyrus; Larsen, Matt; Brugger, Eric

    Strawman is a system designed to explore the in situ visualization and analysis needs of simulation code teams running multi-physics calculations on many-core HPC architectures. It provides rendering pipelines that can leverage both many-core CPUs and GPUs to render images of simulation meshes.

  5. Rapid 3D bioprinting from medical images: an application to bone scaffolding

    NASA Astrophysics Data System (ADS)

    Lee, Daniel Z.; Peng, Matthew W.; Shinde, Rohit; Khalid, Arbab; Hong, Abigail; Pennacchi, Sara; Dawit, Abel; Sipzner, Daniel; Udupa, Jayaram K.; Rajapakse, Chamith S.

    2018-03-01

    Bioprinting of tissue has applications throughout medicine. Recent advances in medical imaging allow the generation of 3-dimensional models that can then be 3D printed. However, the conventional method of converting medical images to 3D printable G-Code instructions has several limitations, namely significant processing time for large, high-resolution images, and the loss of microstructural surface information from surface resolution and subsequent reslicing. We have overcome these issues by creating a Java program that skips the intermediate triangularization and reslicing steps and directly converts binary DICOM images into G-Code. In this study, we tested the two methods of G-Code generation on the generation of synthetic bone graft scaffolds. We imaged human cadaveric proximal femurs at an isotropic resolution of 0.03 mm using a high resolution peripheral quantitative computed tomography (HR-pQCT) scanner. These images, in the Digital Imaging and Communications in Medicine (DICOM) format, were then processed through two methods. In each method, slices and regions of print were selected, filtered to generate a smoothed image, and thresholded. In the conventional method, these processed images are converted to the STereoLithography (STL) format and then resliced to generate G-Code. In the new, direct method, these processed images are run through our Java program and directly converted to G-Code. File size, processing time, and print time were measured for each. We found that this new method produced a significant reduction in G-Code file size as well as processing time (92.23% reduction). This allows for more rapid 3D printing from medical images.
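
    The direct image-to-G-Code idea can be sketched as follows: threshold each slice, then emit one extrusion move per contiguous run of foreground pixels along every row, skipping any mesh or reslicing step. The authors' implementation is in Java; this Python sketch with placeholder feed rate and extrusion values is only meant to show the structure of the conversion:

        def slice_to_gcode(mask, z, pixel_mm=0.03, feed=600):
            """Emit simple G-Code for one binary slice: one G1 move per contiguous
            run of foreground pixels in each row (placeholder extrusion model)."""
            lines = ["G1 Z%.3f F%d" % (z, feed)]
            for r, row in enumerate(mask):
                c = 0
                while c < len(row):
                    if row[c]:
                        start = c
                        while c < len(row) and row[c]:
                            c += 1
                        y = r * pixel_mm
                        lines.append("G0 X%.3f Y%.3f" % (start * pixel_mm, y))
                        lines.append("G1 X%.3f Y%.3f E%.4f"
                                     % (c * pixel_mm, y, (c - start) * pixel_mm * 0.05))
                    else:
                        c += 1
            return lines

        # usage: mask = (smoothed_slice > threshold) for each DICOM slice, z increasing per layer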

  6. Clock Scan Protocol for Image Analysis: ImageJ Plugins.

    PubMed

    Dobretsov, Maxim; Petkau, Georg; Hayar, Abdallah; Petkau, Eugen

    2017-06-19

    The clock scan protocol for image analysis is an efficient tool to quantify the average pixel intensity within, at the border of, and outside (background) a closed or segmented convex-shaped region of interest, leading to the generation of an averaged integral radial pixel-intensity profile. This protocol was originally developed in 2006 as a Visual Basic 6 script, but as such it had limited distribution. To address this problem, and to join similar recent efforts by others, we converted the original clock scan protocol code into two Java-based plugins compatible with NIH-sponsored and freely available image analysis programs like ImageJ or Fiji ImageJ. Furthermore, these plugins have several new functions that further expand the range of capabilities of the original protocol, such as analysis of multiple regions of interest and image stacks. The latter feature of the program is especially useful in applications in which it is important to determine changes related to time and location. Thus, the clock scan analysis of stacks of biological images may potentially be applied to the spreading of Na+ or Ca++ within a single cell, as well as to the analysis of spreading activity (e.g., Ca++ waves) in populations of synaptically connected or gap-junction-coupled cells. Here, we describe these new clock scan plugins and show some examples of their applications in image analysis.
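
    The core measurement of the protocol, an averaged radial pixel-intensity profile, can be sketched in a few lines; this simplified version assumes a circular region of interest (the plugins handle arbitrary convex and segmented ROIs) and samples 20% beyond the border to capture the background:

        import numpy as np

        def clock_scan_profile(image, center, radius, n_rays=360, n_samples=120):
            """Average pixel intensity along rays from `center` as a function of
            normalised radius (1.0 = ROI border, values > 1.0 sample the background)."""
            cy, cx = center
            radii = np.linspace(0.0, 1.2, n_samples) * radius
            angles = np.linspace(0.0, 2 * np.pi, n_rays, endpoint=False)
            ys = cy + radii[None, :] * np.sin(angles)[:, None]
            xs = cx + radii[None, :] * np.cos(angles)[:, None]
            ys = np.clip(np.round(ys).astype(int), 0, image.shape[0] - 1)
            xs = np.clip(np.round(xs).astype(int), 0, image.shape[1] - 1)
            return image[ys, xs].mean(axis=0)   # profile averaged over all rays

        # profile = clock_scan_profile(img, center=(128, 140), radius=40)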

  7. Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison

    NASA Astrophysics Data System (ADS)

    van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder

    2000-04-01

    Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization, which is able to adapt in real-time (i.e. at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base-layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base-layer is computed, and the resulting FGS-residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating their statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and the frequency characteristic of SNR residual signals decays rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base-layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very similar. However, improved results can be obtained for the wavelet coder by deblocking the base-layer prior to the FGS residual computation. Based on the theoretical analysis and our measurements, we can conclude that for an optimal complexity versus coding-efficiency trade-off, only limited wavelet decomposition (e.g. 2 stages) needs to be performed for the FGS-residual signal. Also, it was observed that the good rate-distortion performance of a coding technique for a certain image type (e.g. natural still-images) does not necessarily translate into similarly good performance for signals with different visual characteristics and statistical properties.

  8. Survey of adaptive image coding techniques

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1977-01-01

    The general problem of image data compression is discussed briefly with attention given to the use of Karhunen-Loeve transforms, suboptimal systems, and block quantization. A survey is then conducted encompassing the four categories of adaptive systems: (1) adaptive transform coding (adaptive sampling, adaptive quantization, etc.), (2) adaptive predictive coding (adaptive delta modulation, adaptive DPCM encoding, etc.), (3) adaptive cluster coding (blob algorithms and the multispectral cluster coding technique), and (4) adaptive entropy coding.

  9. Eddy current-nulled convex optimized diffusion encoding (EN-CODE) for distortion-free diffusion tensor imaging with short echo times.

    PubMed

    Aliotta, Eric; Moulin, Kévin; Ennis, Daniel B

    2018-02-01

    To design and evaluate eddy current-nulled convex optimized diffusion encoding (EN-CODE) gradient waveforms for efficient diffusion tensor imaging (DTI) that is free of eddy current-induced image distortions. The EN-CODE framework was used to generate diffusion-encoding waveforms that are eddy current-compensated. The EN-CODE DTI waveform was compared with the existing eddy current-nulled twice refocused spin echo (TRSE) sequence as well as monopolar (MONO) and non-eddy current-compensated CODE in terms of echo time (TE) and image distortions. Comparisons were made in simulations, phantom experiments, and neuroimaging in 10 healthy volunteers. The EN-CODE sequence achieved eddy current compensation with a significantly shorter TE than TRSE (78 versus 96 ms) and a slightly shorter TE than MONO (78 versus 80 ms). Intravoxel signal variance was lower in phantoms with EN-CODE than with MONO (13.6 ± 11.6 versus 37.4 ± 25.8) and not different from TRSE (15.1 ± 11.6), indicating good robustness to eddy current-induced image distortions. Mean fractional anisotropy values in brain edges were also significantly lower with EN-CODE than with MONO (0.16 ± 0.01 versus 0.24 ± 0.02, P < 1 × 10^-5) and not different from TRSE (0.16 ± 0.01 versus 0.16 ± 0.01, P = nonsignificant). The EN-CODE sequence eliminated eddy current-induced image distortions in DTI with a TE comparable to MONO and substantially shorter than TRSE. Magn Reson Med 79:663-672, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  10. QR-on-a-chip: a computer-recognizable micro-pattern engraved microfluidic device for high-throughput image acquisition.

    PubMed

    Yun, Kyungwon; Lee, Hyunjae; Bang, Hyunwoo; Jeon, Noo Li

    2016-02-21

    This study proposes a novel way to achieve high-throughput image acquisition based on a computer-recognizable micro-pattern implemented on a microfluidic device. We integrated the QR code, a two-dimensional barcode system, onto the microfluidic device to simplify imaging of multiple ROIs (regions of interest). A standard QR code pattern was modified to arrays of cylindrical structures of polydimethylsiloxane (PDMS). Utilizing the recognition of the micro-pattern, the proposed system enables: (1) device identification, which allows referencing additional information of the device, such as device imaging sequences or the ROIs and (2) composing a coordinate system for an arbitrarily located microfluidic device with respect to the stage. Based on these functionalities, the proposed method performs one-step high-throughput imaging for data acquisition in microfluidic devices without further manual exploration and locating of the desired ROIs. In our experience, the proposed method significantly reduced the time for the preparation of an acquisition. We expect that the method will innovatively improve the prototype device data acquisition and analysis.

  11. Biometrics encryption combining palmprint with two-layer error correction codes

    NASA Astrophysics Data System (ADS)

    Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang

    2017-07-01

    To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometric encryption method is proposed based on combining palmprints with two-layer error correction codes. First, the randomly generated original keys are encoded by convolutional and cyclic two-layer coding: the first layer uses a convolutional code to correct burst errors, and the second layer uses a cyclic code to correct random errors. Then, palmprint features are extracted from the palmprint images and fused with the encoded keys by an XOR operation, and the result is stored in a smart card. Finally, to extract the original keys, the information in the smart card is XORed with the user's palmprint features and then decoded with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor and has higher accuracy than a single biometric factor.
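
    The bind/release logic of such a scheme can be sketched with a generic error-correcting code; below, a simple repetition code stands in for the paper's convolutional plus cyclic two-layer code, purely to illustrate how the XOR binding tolerates a few flipped palmprint feature bits:

        import numpy as np

        def enroll(key_bits, palm_bits, rep=3):
            """Bind a random key to palmprint feature bits: ECC-encode the key
            (repetition code as a stand-in), then XOR with the biometric bits."""
            codeword = np.repeat(key_bits, rep)
            return codeword ^ palm_bits[:codeword.size]       # stored on the smart card

        def release(stored, palm_query, rep=3):
            """Recover the key from a fresh palmprint sample: XOR, then majority-decode."""
            noisy = stored ^ palm_query[:stored.size]
            return (noisy.reshape(-1, rep).sum(axis=1) > rep // 2).astype(np.uint8)

        key = np.random.randint(0, 2, 128).astype(np.uint8)
        palm = np.random.randint(0, 2, 1024).astype(np.uint8)
        card = enroll(key, palm)
        palm_noisy = palm ^ (np.random.rand(1024) < 0.02).astype(np.uint8)  # a few bit flips
        print((release(card, palm_noisy) == key).mean())      # fraction of key bits recovered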

  12. Magnetic resonance image compression using scalar-vector quantization

    NASA Astrophysics Data System (ADS)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.

  13. Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters

    NASA Astrophysics Data System (ADS)

    Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi

    A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of different coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero-bit-planes that can be extracted from the JPEG 2000 codestream by only parsing the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.

  14. Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.

    PubMed

    Lan, Cuiling; Shi, Guangming; Wu, Feng

    2010-04-01

    Compound images are a combination of text, graphics, and natural image content. They present strong anisotropic features, especially in the text and graphics parts, and these anisotropic features often render conventional compression inefficient. This paper therefore proposes a novel coding scheme derived from H.264 intraframe coding. In the scheme, two new intra modes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intra-predicted residues are directly quantized and coded without a transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization: an image block is represented by several representative colors, referred to as base colors, and an index map, which are then compressed. Every block selects its coding mode from the two new modes and the existing H.264 intra modes by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves coding efficiency by more than 10 dB at most bit rates for compound images while keeping performance comparable to H.264 for natural images.
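
    The BCIM idea, representing a block by a handful of base colors plus a per-pixel index map, can be sketched with a tiny k-means; the mode decision, RDO, and entropy coding from the paper are omitted, and the number of base colors below is an arbitrary illustrative choice:

        import numpy as np

        def base_colors_index_map(block, n_colors=4, iters=10):
            """Adaptive colour quantisation of one block: return `n_colors` base
            colours and an index map assigning each pixel to its nearest base colour."""
            pixels = block.reshape(-1, block.shape[-1]).astype(float)   # (N, 3)
            centers = pixels[np.linspace(0, len(pixels) - 1, n_colors).astype(int)].copy()
            for _ in range(iters):                                      # plain k-means
                dists = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                idx = dists.argmin(axis=1)
                for c in range(n_colors):
                    if np.any(idx == c):
                        centers[c] = pixels[idx == c].mean(axis=0)
            index_map = idx.reshape(block.shape[:2])
            return np.round(centers).astype(np.uint8), index_map

        # the decoder rebuilds the block as centers[index_map]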

  15. Automated Spatio-Temporal Analysis of Remotely Sensed Imagery for Water Resources Management

    NASA Astrophysics Data System (ADS)

    Bahr, Thomas

    2016-04-01

    Since 2012, the state of California faces an extreme drought, which impacts water supply in many ways. Advanced remote sensing is an important technology to better assess water resources, monitor drought conditions and water supplies, plan for drought response and mitigation, and measure drought impacts. In the present case study latest time series analysis capabilities are used to examine surface water in reservoirs located along the western flank of the Sierra Nevada region of California. This case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language). Thus, ENVI analytics is running via the object-oriented and IDL-based ENVITask API. A time series from Landsat images (L-5 TM, L-7 ETM+, L-8 OLI) of the AOI was obtained for 1999 to 2015 (October acquisitions). Downloaded from the USGS EarthExplorer web site, they already were georeferenced to a UTM Zone 10N (WGS-84) coordinate system. ENVITasks were used to pre-process the Landsat images as follows: • Triangulation based gap-filling for the SLC-off Landsat-7 ETM+ images. • Spatial subsetting to the same geographic extent. • Radiometric correction to top-of-atmosphere (TOA) reflectance. • Atmospheric correction using QUAC®, which determines atmospheric correction parameters directly from the observed pixel spectra in a scene, without ancillary information. Spatio-temporal analysis was executed with the following tasks: • Creation of Modified Normalized Difference Water Index images (MNDWI, Xu 2006) to enhance open water features while suppressing noise from built-up land, vegetation, and soil. • Threshold based classification of the water index images to extract the water features. • Classification aggregation as a post-classification cleanup process. • Export of the respective water classes to vector layers for further evaluation in a GIS. • Animation of the classification series and export to a common video format. • Plotting the time series of water surface area in square kilometers. The automated spatio-temporal analysis introduced here can be embedded in virtually any existing geospatial workflow for operational applications. Three integration options were implemented in this case study: • Integration within any ArcGIS environment whether deployed on the desktop, in the cloud, or online. Execution uses a customized ArcGIS script tool. A Python script file retrieves the parameters from the user interface and runs the precompiled IDL code. That IDL code is used to interface between the Python script and the relevant ENVITasks. • Publishing the spatio-temporal analysis tasks as services via the ENVI Services Engine (ESE). ESE is a cloud-based image analysis solution to publish and deploy advanced ENVI image and data analytics to existing enterprise infrastructures. For this purpose the entire IDL code can be capsuled in a single ENVITask. • Integration in an existing geospatial workflow using the Python-to-IDL Bridge. This mechanism allows calling IDL code within Python on a user-defined platform. The results of this case study verify the drastic decrease of the amount of surface water in the AOI, indicative of the major drought that is pervasive throughout California. Accordingly, the time series analysis was correlated successfully with the daily reservoir elevations of the Don Pedro reservoir (station DNP, operated by CDEC).
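
    The water-extraction core of this workflow (MNDWI followed by thresholding) is compact enough to show directly; the band names, the zero threshold, and the 30 m pixel size are the usual Landsat conventions and are assumptions here rather than values quoted in the abstract:

        import numpy as np

        def mndwi_water_mask(green, swir1, threshold=0.0):
            """Modified Normalized Difference Water Index (Xu 2006):
            MNDWI = (Green - SWIR1) / (Green + SWIR1); pixels above the
            threshold are classified as open water."""
            green = green.astype(np.float64)
            swir1 = swir1.astype(np.float64)
            mndwi = (green - swir1) / (green + swir1 + 1e-10)
            return mndwi > threshold

        # surface-water area in km^2 for one acquisition date (30 m Landsat pixels)
        # mask = mndwi_water_mask(band_green, band_swir1)
        # area_km2 = mask.sum() * (30 * 30) / 1e6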

  16. Motion Detection in Ultrasound Image-Sequences Using Tensor Voting

    NASA Astrophysics Data System (ADS)

    Inba, Masafumi; Yanagida, Hirotaka; Tamura, Yasutaka

    2008-05-01

    Motion detection in ultrasound image sequences using tensor voting is described. We have been developing an ultrasound imaging system adopting a combination of coded excitation and synthetic aperture focusing techniques. In our method, the frame rate of the system at a distance of 150 mm reaches 5000 frames/s. A sparse array and short-duration coded ultrasound signals are used for high-speed data acquisition. However, many artifacts appear in the reconstructed image sequences because of the incompleteness of the transmitted code. To reduce these artifacts, we have examined the application of tensor voting to the imaging method that adopts both coded excitation and synthetic aperture techniques. In this study, the basis for applying tensor voting and the motion detection method to ultrasound images is derived. It was confirmed that velocity detection and feature enhancement are possible using tensor voting in the time and space of simulated three-dimensional ultrasound image sequences.

  17. Optical image encryption using QR code and multilevel fingerprints in gyrator transform domains

    NASA Astrophysics Data System (ADS)

    Wei, Yang; Yan, Aimin; Dong, Jiabin; Hu, Zhijuan; Zhang, Jingtao

    2017-11-01

    A new concept for a gyrator transform (GT) based encryption scheme is proposed in this paper. We present a novel optical image encryption method using a quick response (QR) code and multilevel fingerprint keys in GT domains. In this method, an original image is first transformed into a QR code, which is placed in the input plane of cascaded GTs. Subsequently, the QR code is encrypted into the ciphertext using multilevel fingerprint keys. The original image can be obtained easily by reading the high-quality retrieved QR code with hand-held devices. The main parameters used as private keys are the GTs' rotation angles and the multilevel fingerprints. Biometrics and cryptography are integrated with each other to improve data security. Numerical simulations are performed to demonstrate the validity and feasibility of the proposed encryption scheme. The method of applying QR codes and fingerprints in GT domains holds considerable potential for information security.

  18. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  19. Iris Image Classification Based on Hierarchical Visual Codebook.

    PubMed

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection.

  20. Transmission and storage of medical images with patient information.

    PubMed

    Acharya U, Rajendra; Subbanna Bhat, P; Kumar, Sathish; Min, Lim Choo

    2003-07-01

    Digital watermarking is a technique for hiding specific identification data for copyright authentication. This technique is adapted here to interleave patient information with medical images, reducing storage and transmission overheads. The text data is encrypted before interleaving with the images to ensure greater security, and graphical signals are interleaved with the image as well. Two types of error-control coding techniques are proposed to enhance the reliability of transmission and storage of medical images interleaved with patient information. Transmission and storage scenarios are simulated with and without error-control coding, and a qualitative as well as quantitative interpretation of the reliability enhancement obtained with commonly used error-control codes, such as repetition codes and the (7,4) Hamming code, is provided.
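
    As a reminder of the simplest code mentioned above, the (7,4) Hamming code maps every 4 data bits to a 7-bit codeword and corrects any single bit error; a minimal generator/parity-check sketch (the example data bits are arbitrary):

        import numpy as np

        # systematic generator and parity-check matrices of the (7,4) Hamming code
        G = np.array([[1,0,0,0, 1,1,0],
                      [0,1,0,0, 1,0,1],
                      [0,0,1,0, 0,1,1],
                      [0,0,0,1, 1,1,1]], dtype=np.uint8)
        H = np.array([[1,1,0,1, 1,0,0],
                      [1,0,1,1, 0,1,0],
                      [0,1,1,1, 0,0,1]], dtype=np.uint8)

        def hamming74_encode(nibble):
            return (nibble @ G) % 2          # 4 data bits -> 7-bit codeword

        def hamming74_syndrome(word):
            return (H @ word) % 2            # non-zero syndrome locates a single error

        cw = hamming74_encode(np.array([1, 0, 1, 1], dtype=np.uint8))
        cw[2] ^= 1                           # flip one bit "in transit"
        print(hamming74_syndrome(cw))        # equals column 2 of H, so the error is correctable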

  1. Design and implementation of a scene-dependent dynamically selfadaptable wavefront coding imaging system

    NASA Astrophysics Data System (ADS)

    Carles, Guillem; Ferran, Carme; Carnicer, Artur; Bosch, Salvador

    2012-01-01

    A computational imaging system based on wavefront coding is presented. Wavefront coding provides an extension of the depth-of-field at the expense of a slight reduction of image quality. This trade-off results from the amount of coding used. By using spatial light modulators, a flexible coding is achieved which permits it to be increased or decreased as needed. In this paper a computational method is proposed for evaluating the output of a wavefront coding imaging system equipped with a spatial light modulator, with the aim of thus making it possible to implement the most suitable coding strength for a given scene. This is achieved in an unsupervised manner, thus the whole system acts as a dynamically selfadaptable imaging system. The program presented here controls the spatial light modulator and the camera, and also processes the images in a synchronised way in order to implement the dynamic system in real time. A prototype of the system was implemented in the laboratory and illustrative examples of the performance are reported in this paper.
    Program summary
    Program title: DynWFC (Dynamic WaveFront Coding)
    Catalogue identifier: AEKC_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKC_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 10 483
    No. of bytes in distributed program, including test data, etc.: 2 437 713
    Distribution format: tar.gz
    Programming language: Labview 8.5 and NI Vision and MinGW C Compiler
    Computer: Tested on PC Intel® Pentium®
    Operating system: Tested on Windows XP
    Classification: 18
    Nature of problem: The program implements an enhanced wavefront coding imaging system able to adapt the degree of coding to the requirements of a specific scene. The program controls the acquisition by a camera, the display of a spatial light modulator and the image processing operations synchronously. The spatial light modulator is used to implement the phase mask with flexibility given the trade-off between depth-of-field extension and image quality achieved. The action of the program is to evaluate the depth-of-field requirements of the specific scene and subsequently control the coding established by the spatial light modulator, in real time.

  2. Spotted star light curve numerical modeling technique and its application to HII 1883 surface imaging

    NASA Astrophysics Data System (ADS)

    Kolbin, A. I.; Shimansky, V. V.

    2014-04-01

    We developed a code for imaging the surfaces of spotted stars by a set of circular spots with a uniform temperature distribution. The flux from the spotted surface is computed by partitioning the spots into elementary areas. The code takes into account the passing of spots behind the visible stellar limb, limb darkening, and overlapping of spots. Modeling of light curves includes the use of recent results of the theory of stellar atmospheres needed to take into account the temperature dependence of flux intensity and limb darkening coefficients. The search for spot parameters is based on the analysis of several light curves obtained in different photometric bands. We test our technique by applying it to HII 1883.

  3. Porcupine: A visual pipeline tool for neuroimaging analysis

    PubMed Central

    Snoek, Lukas; Knapen, Tomas

    2018-01-01

    The field of neuroimaging is rapidly adopting a more reproducible approach to data acquisition and analysis. Data structures and formats are being standardised and data analyses are getting more automated. However, as data analysis becomes more complicated, researchers often have to write longer analysis scripts, spanning different tools across multiple programming languages. This makes it more difficult to share or recreate code, reducing the reproducibility of the analysis. We present a tool, Porcupine, that constructs one’s analysis visually and automatically produces analysis code. The graphical representation improves understanding of the performed analysis, while retaining the flexibility of modifying the produced code manually to custom needs. Not only does Porcupine produce the analysis code, it also creates a shareable environment for running the code in the form of a Docker image. Together, this forms a reproducible way of constructing, visualising and sharing one’s analysis. Currently, Porcupine links to Nipype functionalities, which in turn accesses most standard neuroimaging analysis tools. Our goal is to release researchers from the constraints of specific implementation details, thereby freeing them to think about novel and creative ways to solve a given problem. Porcupine improves the overview researchers have of their processing pipelines, and facilitates both the development and communication of their work. This will reduce the threshold at which less expert users can generate reusable pipelines. With Porcupine, we bridge the gap between a conceptual and an implementational level of analysis and make it easier for researchers to create reproducible and shareable science. We provide a wide range of examples and documentation, as well as installer files for all platforms on our website: https://timvanmourik.github.io/Porcupine. Porcupine is free, open source, and released under the GNU General Public License v3.0. PMID:29746461

  4. Quantum image coding with a reference-frame-independent scheme

    NASA Astrophysics Data System (ADS)

    Chapeau-Blondeau, François; Belin, Etienne

    2016-07-01

    For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.

  5. The NOAO NVO Portal

    NASA Astrophysics Data System (ADS)

    Miller, C. J.; Gasson, D.; Fuentes, E.

    2007-10-01

    The NOAO NVO Portal is a web application for one-stop discovery, analysis, and access to VO-compliant imaging data and services. The current release allows for GUI-based discovery of nearly a half million images from archives such as the NOAO Science Archive, the Hubble Space Telescope WFPC2 and ACS instruments, XMM-Newton, Chandra, and ESO's INT Wide-Field Survey, among others. The NOAO Portal allows users to view image metadata, footprint wire-frames, FITS image previews, and provides one-click access to science quality imaging data throughout the entire sky via the Firefox web browser (i.e., no applet or code to download). Users can stage images from multiple archives at the NOAO NVO Portal for quick and easy bulk downloads. The NOAO NVO Portal also provides simplified and direct access to VO analysis services, such as the WESIX catalog generation service. We highlight the features of the NOAO NVO Portal (http://nvo.noao.edu).

  6. Image-Based Single Cell Profiling: High-Throughput Processing of Mother Machine Experiments

    PubMed Central

    Sachs, Christian Carsten; Grünberger, Alexander; Helfrich, Stefan; Probst, Christopher; Wiechert, Wolfgang; Kohlheyer, Dietrich; Nöh, Katharina

    2016-01-01

    Background Microfluidic lab-on-chip technology combined with live-cell imaging has enabled the observation of single cells in their spatio-temporal context. The mother machine (MM) cultivation system is particularly attractive for the long-term investigation of rod-shaped bacteria since it facilitates continuous cultivation and observation of individual cells over many generations in a highly parallelized manner. To date, the lack of fully automated image analysis software limits the practical applicability of the MM as a phenotypic screening tool. Results We present an image analysis pipeline for the automated processing of MM time-lapse image stacks. The pipeline supports all analysis steps, i.e., image registration, orientation correction, channel/cell detection, cell tracking, and result visualization. Tailored algorithms account for the specialized MM layout to enable a robust automated analysis. Image data generated in a two-day growth study (≈ 90 GB) is analyzed in ≈ 30 min with negligible differences in growth rate between automated and manual evaluation. The proposed methods are implemented in the software molyso (MOther machine AnaLYsis SOftware), which provides a new profiling tool for the unbiased analysis of hitherto inaccessible large-scale MM image stacks. Conclusion Presented is the software molyso, a ready-to-use open source software (BSD-licensed) for the unsupervised analysis of MM time-lapse image stacks. molyso source code and user manual are available at https://github.com/modsim/molyso. PMID:27661996

  7. Regulating Alcohol Advertising: Content Analysis of the Adequacy of Federal and Self-Regulation of Magazine Advertisements, 2008–2010

    PubMed Central

    Cukier, Samantha; Jernigan, David H.

    2014-01-01

    Objectives. We analyzed beer, spirits, and alcopop magazine advertisements to determine adherence to federal and voluntary advertising standards. We assessed the efficacy of these standards in curtailing potentially damaging content and protecting public health. Methods. We obtained data from a content analysis of a census of 1795 unique advertising creatives for beer, spirits, and alcopops placed in nationally available magazines between 2008 and 2010. We coded creatives for manifest content and adherence to federal regulations and industry codes. Results. Advertisements largely adhered to existing regulations and codes. We assessed only 23 ads as noncompliant with federal regulations and 38 with industry codes. Content consistent with the codes was, however, often culturally positive in terms of aspirational depictions. In addition, creatives included degrading and sexualized images, promoted risky behavior, and made health claims associated with low-calorie content. Conclusions. Existing codes and regulations are largely followed regarding content but do not adequately protect against content that promotes unhealthy and irresponsible consumption and degrades potentially vulnerable populations in its depictions. Our findings suggest further limitations and enhanced federal oversight may be necessary to protect public health. PMID:24228667

  8. Choroidal OCT

    NASA Astrophysics Data System (ADS)

    Esmaeelpour, Marieh; Drexler, Wolfgang

    Novel imaging devices, imaging strategies and automated image analysis with optical coherence tomography have improved our understanding of the choroid in health and pathology. Non-invasive, in-vivo, high-resolution choroidal imaging has had its highest impact in the investigation of macular diseases such as diabetic macular edema and age-related macular degeneration. Choroidal thickness may provide a clinically feasible measure of disease stage and treatment success. It will even support disease diagnosis and phenotyping, as is demonstrated in this chapter. Utilizing color-coded thickness mapping of the choroid and its Sattler's and Haller's layers may further strengthen the sensitivity of the investigation findings.

  9. Quantifying Therapeutic and Diagnostic Efficacy in 2D Microvascular Images

    NASA Technical Reports Server (NTRS)

    Parsons-Wingerter, Patricia; Vickerman, Mary B.; Keith, Patricia A.

    2009-01-01

    VESGEN is a newly automated, user-interactive program that maps and quantifies the effects of vascular therapeutics and regulators on microvascular form and function. VESGEN analyzes two-dimensional, black and white vascular images by measuring important vessel morphology parameters. This software guides the user through each required step of the analysis process via a concise graphical user interface (GUI). Primary applications of the VESGEN code are 2D vascular images acquired as clinical diagnostic images of the human retina and as experimental studies of the effects of vascular regulators and therapeutics on vessel remodeling.

  10. Combined chirp coded tissue harmonic and fundamental ultrasound imaging for intravascular ultrasound: 20–60 MHz phantom and ex vivo results

    PubMed Central

    Park, Jinhyoung; Li, Xiang; Zhou, Qifa; Shung, K. Kirk

    2013-01-01

    The application of chirp coded excitation to pulse inversion tissue harmonic imaging can increase the signal-to-noise ratio. On the other hand, the elevation of the range side lobe level, caused by leakage of the fundamental signal, has been problematic in mechanical scanners, which are still the most prevalent in high frequency intravascular ultrasound imaging. Fundamental chirp coded excitation imaging can achieve range side lobe levels lower than –60 dB with a Hanning window, but it yields higher side lobe levels than pulse inversion chirp coded tissue harmonic imaging (PI-CTHI). Therefore, in this paper a combined pulse inversion chirp coded tissue harmonic and fundamental imaging mode (CPI-CTHI) is proposed to retain the advantages of both chirp coded harmonic and fundamental imaging modes, demonstrated with 20–60 MHz phantom and ex vivo results. A simulation study shows that the range side lobe level of CPI-CTHI is 16 dB lower than PI-CTHI, assuming that the transducer translates incident positions by 50 μm when the two beamlines of a pulse inversion pair are acquired. CPI-CTHI is implemented for a prototype intravascular ultrasound scanner capable of combined data acquisition in real time. A wire phantom study shows that CPI-CTHI has a 12 dB lower range side lobe level and a 7 dB higher echo signal-to-noise ratio than PI-CTHI, while the lateral resolution and side lobe level are 50 μm finer and –3 dB less than fundamental chirp coded excitation imaging, respectively. Ex vivo scanning of a rabbit trachea demonstrates that CPI-CTHI is capable of visualizing blood vessels as small as 200 μm in diameter with 6 dB better tissue contrast than either PI-CTHI or fundamental chirp coded excitation imaging. These results clearly indicate that CPI-CTHI may enhance tissue contrast with a lower range side lobe level than PI-CTHI. PMID:22871273

  11. Computing Challenges in Coded Mask Imaging

    NASA Technical Reports Server (NTRS)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to build the telescope, i.e., when wide fields of view and very good angular resolution are required at energies too high for focusing optics or too low for Compton/tracker techniques. The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. The presentation includes a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of EXIST/HET with SWIFT/BAT, and details of the design of EXIST/HET.

  12. Interconnecting smartphone, image analysis server, and case report forms for automatic skin lesion tracking in clinical trials

    NASA Astrophysics Data System (ADS)

    Haak, Daniel; Doma, Aliaa; Gombert, Alexander; Deserno, Thomas M.

    2016-03-01

    Today, subjects' medical data in controlled clinical trials are captured digitally in electronic case report forms (eCRFs). However, eCRFs only insufficiently support the integration of subjects' image data, although medical imaging is looming large in studies today. For bed-side image integration, we present a mobile application (App) that utilizes the smartphone-integrated camera. To ensure high image quality with this inexpensive consumer hardware, color reference cards are placed in the camera's field of view next to the lesion. The cards are used for automatic calibration of geometry, color, and contrast. In addition, a personalized code is read from the cards that allows subject identification. For data integration, the App is connected to a communication and image analysis server that also holds the code-study-subject relation. In a second system interconnection, web services are used to connect the smartphone with OpenClinica, an open-source, Food and Drug Administration (FDA)-approved electronic data capture (EDC) system in clinical trials. Once the photographs have been securely stored on the server, they are released automatically from the mobile device. The workflow of the system is demonstrated by an ongoing clinical trial, in which photographic documentation is frequently performed to measure the effect of wound incision management systems. All 205 images, which have been collected in the study so far, have been correctly identified and successfully integrated into the corresponding subject's eCRF. Using this system, manual steps for the study personnel are reduced and, therefore, errors, latency, and costs are decreased. Our approach also increases data security and privacy.

  13. 110 °C range athermalization of wavefront coding infrared imaging systems

    NASA Astrophysics Data System (ADS)

    Feng, Bin; Shi, Zelin; Chang, Zheng; Liu, Haizheng; Zhao, Yaohong

    2017-09-01

    110 °C range athermalization is significant but difficult in the design of infrared imaging systems. Our wavefront coding athermalized infrared imaging system adopts an optical phase mask with smaller manufacturing errors and a decoding method based on a shrinkage function. Qualitative experiments show that our wavefront coding athermalized infrared imaging system has three prominent merits: (1) it works well over a temperature range of 110 °C; (2) it extends the focal depth up to 15.2 times; (3) its decoded images closely approximate the corresponding in-focus infrared images, with a mean structural similarity index (MSSIM) value greater than 0.85.

  14. Integrated software environment based on COMKAT for analyzing tracer pharmacokinetics with molecular imaging.

    PubMed

    Fang, Yu-Hua Dean; Asthana, Pravesh; Salinas, Cristian; Huang, Hsuan-Ming; Muzic, Raymond F

    2010-01-01

    An integrated software package, Compartment Model Kinetic Analysis Tool (COMKAT), is presented in this report. COMKAT is an open-source software package with many functions for incorporating pharmacokinetic analysis in molecular imaging research and has both command-line and graphical user interfaces. With COMKAT, users may load and display images, draw regions of interest, load input functions, select kinetic models from a predefined list, or create a novel model and perform parameter estimation, all without having to write any computer code. For image analysis, COMKAT image tool supports multiple image file formats, including the Digital Imaging and Communications in Medicine (DICOM) standard. Image contrast, zoom, reslicing, display color table, and frame summation can be adjusted in COMKAT image tool. It also displays and automatically registers images from 2 modalities. Parametric imaging capability is provided and can be combined with the distributed computing support to enhance computation speeds. For users without MATLAB licenses, a compiled, executable version of COMKAT is available, although it currently has only a subset of the full COMKAT capability. Both the compiled and the noncompiled versions of COMKAT are free for academic research use. Extensive documentation, examples, and COMKAT itself are available on its wiki-based Web site, http://comkat.case.edu. Users are encouraged to contribute, sharing their experience, examples, and extensions of COMKAT. With integrated functionality specifically designed for imaging and kinetic modeling analysis, COMKAT can be used as a software environment for molecular imaging and pharmacokinetic analysis.
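
    As an illustration of the kind of kinetic model such tools fit (a generic one-tissue compartment model written in plain Python, not the COMKAT API; the input function and rate constants below are made up), the sketch simulates a time-activity curve from dCt/dt = K1*Cp(t) - k2*Ct(t) and re-estimates K1 and k2 by nonlinear least squares.

```python
# Generic one-tissue compartment model, NOT the COMKAT API. A synthetic plasma
# input function Cp drives dCt/dt = K1*Cp(t) - k2*Ct(t); K1 and k2 are then
# recovered from a noisy time-activity curve by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 60, 121)                  # minutes
Cp = 10 * t * np.exp(-t / 3)                 # synthetic plasma input function

def one_tissue(t, K1, k2):
    """Ct(t) = K1 * [exp(-k2*t) convolved with Cp(t)] (discrete approximation)."""
    dt = t[1] - t[0]
    return K1 * np.convolve(Cp, np.exp(-k2 * t))[: t.size] * dt

rng = np.random.default_rng(0)
tac = one_tissue(t, 0.3, 0.1) + rng.normal(0, 0.05, t.size)   # "measured" curve

(K1_hat, k2_hat), _ = curve_fit(one_tissue, t, tac, p0=(0.1, 0.05),
                                bounds=(0, np.inf))
print(f"estimated K1 = {K1_hat:.3f} (true 0.3), k2 = {k2_hat:.3f} (true 0.1)")
```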

  15. Data Compression Techniques for Maps

    DTIC Science & Technology

    1989-01-01

    Lempel-Ziv compression is applied to the classified and unclassified images as well as to the output of the compression algorithms. The algorithms ... resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved ... using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image ...

  16. Coding visual features extracted from video sequences.

    PubMed

    Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2014-05-01

    Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
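
    The sketch below shows the analyze-then-compress idea in its simplest form (this is not the paper's intra/inter-frame feature codec): local features are extracted at the sensing node with OpenCV, using ORB as a stand-in for SIFT/SURF on a synthetic frame, and only keypoint coordinates plus compact descriptors are packed for transmission instead of the pixel data.

```python
# Analyze-then-compress in miniature (not the paper's codec): extract local
# features at the node and transmit compact descriptors instead of pixels.
# ORB stands in for SIFT/SURF; the frame here is synthetic noise.
import cv2
import numpy as np

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)  # stand-in frame

orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(frame, None)

# Naive intra-frame "coding": quantize keypoint coordinates to integer pixels
# and concatenate them with the 32-byte binary ORB descriptors.
coords = np.array([kp.pt for kp in keypoints], dtype=np.uint16)
payload = coords.tobytes() + descriptors.tobytes()
print(f"{len(keypoints)} features, payload of {len(payload)} bytes "
      f"vs {frame.size} bytes for the raw frame")
```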

  17. Identification Code of Interstellar Cloud within IRAF

    NASA Astrophysics Data System (ADS)

    Lee, Youngung; Jung, Jae Hoon; Kim, Hyun-Goo

    1997-12-01

    We present a code that identifies individual clouds in crowded regions using the IMFORT interface within the Image Reduction and Analysis Facility (IRAF). We define a cloud as an object composed of all pixels in longitude, latitude, and velocity that are simply connected and that lie above some threshold temperature. The code searches all pixels of the data cube efficiently to isolate individual clouds. Along with identifying clouds, it estimates their mean longitudes, latitudes, and velocities. In addition, a function for generating individual images (or cube data) of the identified clouds has been added. We also present individual clouds identified in a 12CO survey data cube of the Galactic Anticenter Region (Lee et al. 1997) as a test example. We used a threshold temperature of five times the rms noise level of the data. With a higher threshold temperature, we isolated subclouds of a huge cloud identified originally. As the threshold value is the most important parameter for identifying clouds, its effect on cloud size and velocity dispersion is discussed in detail.
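
    A minimal sketch of the same cloud definition, run on a synthetic position-position-velocity cube rather than with the IRAF/IMFORT implementation: voxels above a 5-sigma threshold are grouped by connected-component labelling and intensity-weighted mean coordinates are computed per cloud.

```python
# Sketch of the cloud definition on a synthetic position-position-velocity
# cube (not the IRAF/IMFORT code): voxels above 5 x the rms noise are grouped
# by connected-component labelling, then intensity-weighted mean coordinates
# are computed per cloud.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
cube = rng.normal(0.0, 1.0, (64, 64, 128))   # noise with rms = 1 (lat, lon, vel)
cube[20:28, 30:40, 50:70] += 8.0             # inject one synthetic "cloud"

mask = cube > 5.0                            # 5-sigma threshold, as in the abstract
labels, n_clouds = ndimage.label(mask)       # simply connected groups of voxels
print(f"{n_clouds} cloud(s) identified")

for cloud_id in range(1, n_clouds + 1):
    lat, lon, vel = np.where(labels == cloud_id)
    w = cube[lat, lon, vel]                  # weight by brightness temperature
    print(cloud_id, np.average(lat, weights=w), np.average(lon, weights=w),
          np.average(vel, weights=w))
```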

  18. Decomposition of the optical transfer function: wavefront coding imaging systems

    NASA Astrophysics Data System (ADS)

    Muyo, Gonzalo; Harvey, Andy R.

    2005-10-01

    We describe the mapping of the optical transfer function (OTF) of an incoherent imaging system into a geometrical representation. We show that for defocused traditional and wavefront-coded systems the OTF can be represented as a generalized Cornu spiral. This representation provides a physical insight into the way in which wavefront coding can increase the depth of field of an imaging system and permits analytical quantification of salient OTF parameters, such as the depth of focus, the location of nulls, and amplitude and phase modulation of the wavefront-coding OTF.

  19. Region-Based Prediction for Image Compression in the Cloud.

    PubMed

    Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine

    2018-04-01

    Thanks to the increasing number of images stored in the cloud, external image similarities can be leveraged to efficiently compress images by exploiting inter-image correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and super-pixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements compared with current image inter-coding solutions such as high efficiency video coding.

  20. Image management research

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1988-01-01

    Two types of research issues are involved in image management systems with space station applications: image processing research and image perception research. The image processing issues are the traditional ones of digitizing, coding, compressing, storing, analyzing, and displaying, but with a new emphasis on the constraints imposed by the human perceiver. Two image coding algorithms have been developed that may increase the efficiency of image management systems (IMS). Image perception research involves a study of the theoretical and practical aspects of visual perception of electronically displayed images. Issues include how rapidly a user can search through a library of images, how to make this search more efficient, and how to present images in terms of resolution and split screens. Other issues include optimal interface to an IMS and how to code images in a way that is optimal for the human perceiver. A test-bed within which such issues can be addressed has been designed.

  1. Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.

    PubMed

    Li, Yeqing; Liu, Wei; Huang, Junzhou

    2018-06-01

    Recently with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.
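
    For readers unfamiliar with the setting, the sketch below shows the basic binary-code pipeline that such methods accelerate (plain random-projection hashing, not the paper's sub-selective quantization): descriptors are mapped to short binary codes and database items are ranked by Hamming distance to the query code.

```python
# Background illustration only -- not the paper's sub-selective quantization.
# It shows the setting such methods accelerate: map real-valued image
# descriptors to compact binary codes (here plain random projections) and
# rank database items by Hamming distance to a query code.
import numpy as np

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(10_000, 128))        # database of image descriptors
query = descriptors[42] + 0.05 * rng.normal(size=128)

n_bits = 64
W = rng.normal(size=(128, n_bits))                  # random projection matrix
db_codes = descriptors @ W > 0                      # (10000, 64) binary codes
q_code = query @ W > 0

hamming = np.count_nonzero(db_codes != q_code, axis=1)
print("top-5 nearest by Hamming distance:", np.argsort(hamming)[:5])
```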

  2. Use of fluorescent proteins and color-coded imaging to visualize cancer cells with different genetic properties.

    PubMed

    Hoffman, Robert M

    2016-03-01

    Fluorescent proteins are very bright and available in spectrally distinct colors; they enable the imaging of color-coded cancer cells growing in vivo and therefore the distinction of cancer cells with different genetic properties. Non-invasive and intravital imaging of cancer cells with fluorescent proteins allows the visualization of distinct genetic variants of cancer cells down to the cellular level in vivo. Cancer cells with increased or decreased ability to metastasize can be distinguished in vivo. Gene exchange in vivo, which enables low-metastatic cancer cells to convert to high-metastatic cells, can be imaged in vivo with color coding. Cancer stem-like and non-stem cells can be distinguished in vivo by color-coded imaging. These properties also demonstrate the vast superiority of imaging cancer cells in vivo with fluorescent proteins over photon counting of luciferase-labeled cancer cells.

  3. The design of the CMOS wireless bar code scanner applying optical system based on ZigBee

    NASA Astrophysics Data System (ADS)

    Chen, Yuelin; Peng, Jian

    2008-03-01

    The traditional bar code scanner is constrained by the length of its data cable, while the maximum range of the wireless bar code scanners currently on the market is generally between 30 m and 100 m. By rebuilding the traditional CCD optical bar code scanner, a CMOS code scanner based on ZigBee is designed to meet market demands. The scanning system consists of a CMOS image sensor and the embedded chip S3C2401X. When a two-dimensional bar code is read, inaccurate or incorrect decoding results can arise from image contamination, interfering objects, poor imaging conditions, signal interference, and unstable system voltage. We therefore put forward a method that uses matrix evaluation and Reed-Solomon error correction to address these problems. To construct the complete wireless optical bar code system and to ensure that it can transmit bar code image signals digitally over long distances, ZigBee is used to transmit data to the base station; this module is designed around the image acquisition system, and the circuit diagram of the CC2430 wireless transmitting/receiving module is established. By porting the embedded Linux operating system to the MCU, a practical wireless CMOS optical bar code scanner with multi-task support is constructed. Finally, communication performance is tested with the SmartRF evaluation software. In open space, every ZigBee node can achieve reliable transmission over 50 m, and adding more ZigBee nodes extends the transmission distance to several thousand meters.

  4. ImageX: new and improved image explorer for astronomical images and beyond

    NASA Astrophysics Data System (ADS)

    Hayashi, Soichi; Gopu, Arvind; Kotulla, Ralf; Young, Michael D.

    2016-08-01

    The One Degree Imager - Portal, Pipeline, and Archive (ODI-PPA) has included the Image Explorer interactive image visualization tool since it went operational. Portal users were able to quickly open up several ODI images within any HTML5 capable web browser, adjust the scaling, apply color maps, and perform other basic image visualization steps typically done on a desktop client like DS9. However, the original design of the Image Explorer required lossless PNG tiles to be generated and stored for all raw and reduced ODI images thereby taking up tens of TB of spinning disk space even though a small fraction of those images were being accessed by portal users at any given time. It also caused significant overhead on the portal web application and the Apache webserver used by ODI-PPA. We found it hard to merge in improvements made to a similar deployment in another project's portal. To address these concerns, we re-architected Image Explorer from scratch and came up with ImageX, a set of microservices that are part of the IU Trident project software suite, with rapid interactive visualization capabilities useful for ODI data and beyond. We generate a full resolution JPEG image for each raw and reduced ODI FITS image before producing a JPG tileset, one that can be rendered using the ImageX frontend code at various locations as appropriate within a web portal (for example: on tabular image listings, views allowing quick perusal of a set of thumbnails or other image sifting activities). The new design has decreased spinning disk requirements, uses AngularJS for the client side Model/View code (instead of depending on backend PHP Model/View/Controller code previously used), OpenSeaDragon to render the tile images, and uses nginx and a lightweight NodeJS application to serve tile images thereby significantly decreasing the Time To First Byte latency by a few orders of magnitude. We plan to extend ImageX for non-FITS images including electron microscopy and radiology scan images, and its featureset to include basic functions like image overlay and colormaps. Users needing more advanced visualization and analysis capabilities could use a desktop tool like DS9+IRAF on another IU Trident project called StarDock, without having to download Gigabytes of FITS image data.

  5. Imaging Performance Analysis of Simbol-X with Simulations

    NASA Astrophysics Data System (ADS)

    Chauvin, M.; Roques, J. P.

    2009-05-01

    Simbol-X is an X-ray telescope operating in formation flight. This means that its optical performance will strongly depend on the drift of the two spacecraft and on its ability to measure these drifts for image reconstruction. We built a dynamical ray tracing code to study the impact of these parameters on the optical performance of Simbol-X (see Chauvin et al., these proceedings). Using the simulation tool we have developed, we have conducted detailed analyses of the impact of different parameters on the imaging performance of the Simbol-X telescope.

  6. Block-based scalable wavelet image codec

    NASA Astrophysics Data System (ADS)

    Bao, Yiliang; Kuo, C.-C. Jay

    1999-10-01

    This paper presents a high performance block-based wavelet image coder which is designed to have very low implementation complexity yet rich features. In this image coder, the Dual-Sliding Wavelet Transform (DSWT) is first applied to image data to generate wavelet coefficients in fixed-size blocks. Here, a block only consists of wavelet coefficients from a single subband. The coefficient blocks are directly coded with the Low Complexity Binary Description (LCBiD) coefficient coding algorithm. Each block is encoded using binary context-based bitplane coding. No parent-child correlation is exploited in the coding process. There is also no intermediate buffering needed in between DSWT and LCBiD. The compressed bit stream generated by the proposed coder is both SNR and resolution scalable, as well as highly resilient to transmission errors. Both DSWT and LCBiD process the data in blocks whose size is independent of the size of the original image. This gives more flexibility in the implementation. The codec achieves very good coding performance even when the block size is 16 x 16.

  7. Adaptive partially hidden Markov models with application to bilevel image coding.

    PubMed

    Forchhammer, S; Rasmussen, T S

    1999-01-01

    Partially hidden Markov models (PHMMs) have previously been introduced. The transition and emission/output probabilities from hidden states, as known from the HMMs, are conditioned on the past. This way, the HMM may be applied to images introducing the dependencies of the second dimension by conditioning. In this paper, the PHMM is extended to multiple sequences with a multiple token version and adaptive versions of PHMM coding are presented. The different versions of the PHMM are applied to lossless bilevel image coding. To reduce and optimize the model cost and size, the contexts are organized in trees and effective quantization of the parameters is introduced. The new coding methods achieve results that are better than the JBIG standard on selected test images, although at the cost of increased complexity. By the minimum description length principle, the methods presented for optimizing the code length may apply as guidance for training (P)HMMs for, e.g., segmentation or recognition purposes. Thereby, the PHMM models provide a new approach to image modeling.

  8. Ultrasound Elasticity Imaging System with Chirp-Coded Excitation for Assessing Biomechanical Properties of Elasticity Phantom

    PubMed Central

    Chun, Guan-Chun; Chiang, Hsing-Jung; Lin, Kuan-Hung; Li, Chien-Ming; Chen, Pei-Jarn; Chen, Tainsong

    2015-01-01

    The biomechanical properties of soft tissues vary with pathological phenomena. Ultrasound elasticity imaging is a noninvasive method used to analyze the local biomechanical properties of soft tissues in clinical diagnosis. However, the echo signal-to-noise ratio (eSNR) is diminished because of the attenuation of ultrasonic energy by soft tissues. Therefore, to improve the quality of elastography, the eSNR and depth of ultrasound penetration must be increased using chirp-coded excitation. Moreover, the low axial resolution of ultrasound images generated by a chirp-coded pulse must be increased using an appropriate compression filter. The main aim of this study is to develop an ultrasound elasticity imaging system with chirp-coded excitation using a Tukey window for assessing the biomechanical properties of soft tissues. In this study, we propose an ultrasound elasticity imaging system equipped with a 7.5-MHz single-element transducer and polymethylpentene compression plate to measure strains in soft tissues. Soft tissue strains were analyzed using cross correlation (CC) and absolute difference (AD) algorithms. The optimal parameters of CC and AD algorithms used for the ultrasound elasticity imaging system with chirp-coded excitation were determined by measuring the elastographic signal-to-noise ratio (SNRe) of a homogeneous phantom. Moreover, chirp-coded excitation and short pulse excitation were used to measure the elasticity properties of the phantom. The elastographic qualities of the tissue-mimicking phantom were assessed in terms of Young’s modulus and elastographic contrast-to-noise ratio (CNRe). The results show that the developed ultrasound elasticity imaging system with chirp-coded excitation modulated by a Tukey window can acquire accurate, high-quality elastography images. PMID:28793718
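
    A minimal sketch of the two transmit-side ingredients named above, with assumed frequencies and depths rather than the authors' 7.5-MHz system: a Tukey-windowed chirp excitation and matched-filter pulse compression of a simulated echo line.

```python
# Sketch of chirp-coded excitation and matched-filter pulse compression with a
# Tukey window (assumed frequencies, depth, and noise level, not the authors'
# system).
import numpy as np
from scipy.signal import chirp, correlate
from scipy.signal.windows import tukey

fs = 100e6                                     # sampling rate (Hz)
t = np.arange(0, 4e-6, 1 / fs)                 # 4-us coded transmit pulse
tx = chirp(t, f0=5e6, t1=t[-1], f1=10e6) * tukey(t.size, alpha=0.25)

rng = np.random.default_rng(0)
rx = 0.05 * rng.normal(size=2000)              # simulated noisy receive line
delay = int(10e-6 * fs)                        # echo from a scatterer at 10 us
rx[delay:delay + tx.size] += 0.3 * tx

compressed = correlate(rx, tx, mode="same")    # pulse compression (matched filter)
print("echo peak at sample", int(np.argmax(np.abs(compressed))),
      "- expected near", delay + tx.size // 2)
```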

  9. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.

  10. SimpleITK Image-Analysis Notebooks: a Collaborative Environment for Education and Reproducible Research.

    PubMed

    Yaniv, Ziv; Lowekamp, Bradley C; Johnson, Hans J; Beare, Richard

    2018-06-01

    Modern scientific endeavors increasingly require team collaborations to construct and interpret complex computational workflows. This work describes an image-analysis environment that supports the use of computational tools that facilitate reproducible research and support scientists with varying levels of software development skills. The Jupyter notebook web application is the basis of an environment that enables flexible, well-documented, and reproducible workflows via literate programming. Image-analysis software development is made accessible to scientists with varying levels of programming experience via the use of the SimpleITK toolkit, a simplified interface to the Insight Segmentation and Registration Toolkit. Additional features of the development environment include user friendly data sharing using online data repositories and a testing framework that facilitates code maintenance. SimpleITK provides a large number of examples illustrating educational and research-oriented image analysis workflows for free download from GitHub under an Apache 2.0 license: github.com/InsightSoftwareConsortium/SimpleITK-Notebooks .
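
    A minimal example of the kind of workflow the notebooks teach, assuming SimpleITK is installed and using a placeholder file name: read an image, smooth it, segment it with Otsu thresholding, and inspect the result as a NumPy array.

```python
# A minimal SimpleITK workflow of the kind the notebooks teach; the file name
# is a placeholder for any image SimpleITK can read.
import SimpleITK as sitk

image = sitk.ReadImage("ct_volume.nii.gz")                  # hypothetical input
smoothed = sitk.SmoothingRecursiveGaussian(image, sigma=2.0)
segmentation = sitk.OtsuThreshold(smoothed, 0, 1)           # inside=0, outside=1

array = sitk.GetArrayFromImage(segmentation)                # (z, y, x) ordering
print("segmented voxels:", int(array.sum()), "of", array.size)
sitk.WriteImage(segmentation, "ct_volume_otsu.nii.gz")
```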

  11. Code-modulated interferometric imaging system using phased arrays

    NASA Astrophysics Data System (ADS)

    Chauhan, Vikas; Greene, Kevin; Floyd, Brian

    2016-05-01

    Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promise to be extremely low cost. In this work, we present techniques which can allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power combine incoming signals prior to digitization, orthogonal code-modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
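
    The numerical sketch below (a behavioral toy model, not the phased-array hardware) illustrates the demultiplexing property the abstract relies on: with ±1 Walsh codes, the product of two orthogonal codes is a third orthogonal code, so the cross term of the squared, code-modulated sum can be recovered directly.

```python
# Behavioral sketch (not the phased-array hardware) of code-modulated
# interferometry: with +/-1 Walsh codes the product of two orthogonal codes is
# a third orthogonal code, so the 2*x1*x2 cross term of the squared sum can be
# demultiplexed directly.
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(1)
n_chips = 64
H = hadamard(n_chips)                      # rows are orthogonal +/-1 codes
c1, c2 = H[1], H[2]
c12 = c1 * c2                              # product code (another Hadamard row)

s = rng.normal()                           # common source signal
x1 = s + 0.1 * rng.normal()                # antenna 1, held constant over one code period
x2 = s + 0.1 * rng.normal()                # antenna 2

combined = c1 * x1 + c2 * x2               # code-modulated sum in shared hardware
visibility = (combined ** 2) @ c12 / n_chips   # square, then demultiplex the cross term
print("recovered cross term:", visibility, " direct 2*x1*x2:", 2 * x1 * x2)
```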

  12. Optical information authentication using compressed double-random-phase-encoded images and quick-response codes.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2015-03-09

    In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with the QR code. We compare this technique to the other two methods proposed in the literature, i.e., Fresnel domain information authentication based on the classical DRPE with holographic technique and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of optical information encryption and authentication systems.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hellfeld, Daniel; Barton, Paul; Gunter, Donald

    Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. But, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. In order to improve upon this limitation, we introduce a novel design by rearranging the detectors into an active coded spherical configuration, resulting in a 4pi isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm³ CdZnTe coplanar grid detectors on a 14 cm diameter sphere with 192 available detector locations.

  14. Optical encryption of digital data in form of quick response code using spatially incoherent illumination

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2016-11-01

    Applications of optical methods for encryption purposes have been attracting the interest of researchers for decades. The most popular are coherent techniques such as double random phase encoding. Its main advantage is high security, because the first random phase mask transforms the spectrum of the image to be encrypted into a white spectrum, yielding encrypted images with white spectra. Its downsides are the need for a holographic registration scheme and the speckle noise that arises from coherent illumination. These disadvantages can be eliminated by using incoherent illumination. In this case, phase registration no longer matters, which means that there is no need for a holographic setup, and speckle noise is gone. Recently, encryption of digital information in the form of binary images has become quite popular. Advantages of using a quick response (QR) code as a data container for optical encryption include: 1) any data represented as a QR code has a close-to-white Fourier spectrum (excluding the zero spatial frequency) that overlaps well with the encryption key spectrum; 2) the built-in algorithm for image scale and orientation correction simplifies decoding of decrypted QR codes; 3) the embedded error correction code allows for successful decryption of information even in the case of partial corruption of the decrypted image. Optical encryption of digital data in the form of QR codes using spatially incoherent illumination was experimentally implemented. Two liquid crystal spatial light modulators were used in the experimental setup for QR code and encrypting kinoform imaging, respectively. Decryption was conducted digitally. Successful decryption of encrypted QR codes is demonstrated.

  15. Multifrequency Aperture-Synthesizing Microwave Radiometer System (MFASMR). Volume 2: Appendix

    NASA Technical Reports Server (NTRS)

    Wiley, C. A.; Chang, M. U.

    1981-01-01

    A number of topics supporting the systems analysis of a multifrequency aperture-synthesizing microwave radiometer system are discussed. Fellgett's (multiple) advantage, interferometer mapping behavior, mapping geometry, image processing programs, and sampling errors are among the topics discussed. A FORTRAN program code is given.

  16. Design of the superconducting magnet for 9.4 Tesla whole-body magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Li, Y.; Wang, Q.; Dai, Y.; Ni, Z.; Zhu, X.; Li, L.; Zhao, B.; Chen, S.

    2017-02-01

    A superconducting magnet for 9.4 Tesla whole-body magnetic resonance imaging has been designed and fabricated at the Institute of Electrical Engineering, Chinese Academy of Sciences. In this paper, the electromagnetic design methods of the main coils and compensating coils are presented. Sensitivity analysis is performed for all superconducting coils. The design of the superconducting shimming coils is also presented, and the electromagnetic decoupling of the Z2 coils from the main coils is introduced. Stress and strain analysis with both averaged and detailed models is performed with the finite element method. We developed a quench simulation code based on an anisotropic continuum model and the control volume method, and verified it experimentally. By means of the quench simulation code, the quench protection system for the 9.4 T magnet is designed for the main coils, the compensating coils and the shimming coils. The magnet cryostat design with zero helium boil-off technology is also introduced.

  17. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  18. Measuring implosion velocities in experiments and simulations of laser-driven cylindrical implosions on the OMEGA laser

    NASA Astrophysics Data System (ADS)

    Hansen, E. C.; Barnak, D. H.; Betti, R.; Campbell, E. M.; Chang, P.-Y.; Davies, J. R.; Glebov, V. Yu; Knauer, J. P.; Peebles, J.; Regan, S. P.; Sefkow, A. B.

    2018-05-01

    Laser-driven magnetized liner inertial fusion (MagLIF) on OMEGA involves cylindrical implosions, a preheat beam, and an applied magnetic field. Initial experiments excluded the preheat beam and magnetic field to better characterize the implosion. X-ray self-emission as measured by framing cameras was used to determine the shell trajectory. The 1D code LILAC was used to model the central region of the implosion, and results were compared to 2D simulations from the HYDRA code. Post-processing of simulation output with SPECT3D and Yorick produced synthetic x-ray images that were used to compare the simulation results with the x-ray framing camera data. Quantitative analysis shows that higher measured neutron yields correlate with higher implosion velocities. The future goal is to further analyze the x-ray images to characterize the uniformity of the implosions and apply these analysis techniques to integrated laser-driven MagLIF shots to better understand the effects of preheat and the magnetic field.

  19. A Comparative Study on Diagnostic Accuracy of Colour Coded Digital Images, Direct Digital Images and Conventional Radiographs for Periapical Lesions – An In Vitro Study

    PubMed Central

    Mubeen; K.R., Vijayalakshmi; Bhuyan, Sanat Kumar; Panigrahi, Rajat G; Priyadarshini, Smita R; Misra, Satyaranjan; Singh, Chandravir

    2014-01-01

    Objectives: The identification and radiographic interpretation of periapical bone lesions is important for accurate diagnosis and treatment. The present study was undertaken to evaluate the feasibility and diagnostic accuracy of colour coded digital radiographs in terms of the presence and size of lesions, and to compare the diagnostic accuracy of colour coded digital images with direct digital images and conventional radiographs for assessing periapical lesions. Materials and Methods: Sixty human dry cadaver hemimandibles were obtained and periapical lesions were created in first and second premolar teeth at the junction of cancellous and cortical bone using a micromotor handpiece and carbide burs of sizes 2, 4 and 6. After each successive use of round burs, a conventional, RVG and colour coded image was taken for each specimen. All the images were evaluated by three observers. The diagnostic accuracy for each bur and image mode was calculated statistically. Results: Our results showed good interobserver (kappa > 0.61) agreement for the different radiographic techniques and for the different bur sizes. Conventional radiography outperformed digital radiography in diagnosing periapical lesions made with the size 2 bur. Both were equally diagnostic for lesions made with larger bur sizes. The colour coding method was the least accurate of all the techniques. Conclusion: Conventional radiography traditionally forms the backbone of the diagnosis, treatment planning and follow-up of periapical lesions. Direct digital imaging is an efficient technique in the diagnostic sense. Colour coding of digital radiographs was feasible but less accurate; however, this imaging technique, like any other, needs to be studied continuously, with emphasis on the safety of patients and the diagnostic quality of images. PMID:25584318

  20. Operational rate-distortion performance for joint source and channel coding of images.

    PubMed

    Ruf, M J; Modestino, J W

    1999-01-01

    This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology applied to different schemes results in operational rate-distortion performance which closely approach these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.

  1. Nucleotide sequence determination of guinea-pig casein B mRNA reveals homology with bovine and rat alpha s1 caseins and conservation of the non-coding regions of the mRNA.

    PubMed Central

    Hall, L; Laird, J E; Craig, R K

    1984-01-01

    Nucleotide sequence analysis of cloned guinea-pig casein B cDNA sequences has identified two casein B variants related to the bovine and rat alpha s1 caseins. Amino acid homology was largely confined to the known bovine or predicted rat phosphorylation sites and within the 'signal' precursor sequence. Comparison of the deduced nucleotide sequence of the guinea-pig and rat alpha s1 casein mRNA species showed greater sequence conservation in the non-coding than in the coding regions, suggesting a functional and possibly regulatory role for the non-coding regions of casein mRNA. The results provide insight into the evolution of the casein genes, and raise questions as to the role of conserved nucleotide sequences within the non-coding regions of mRNA species. PMID:6548375

  2. Measurement of myocardial perfusion and infarction size using computer-aided diagnosis system for myocardial contrast echocardiography.

    PubMed

    Du, Guo-Qing; Xue, Jing-Yi; Guo, Yanhui; Chen, Shuang; Du, Pei; Wu, Yan; Wang, Yu-Hang; Zong, Li-Qiu; Tian, Jia-Wei

    2015-09-01

    Proper evaluation of myocardial microvascular perfusion and assessment of infarct size is critical for clinicians. We have developed a novel computer-aided diagnosis (CAD) approach for myocardial contrast echocardiography (MCE) to measure myocardial perfusion and infarct size. Rabbits underwent 15 min of coronary occlusion followed by reperfusion (group I, n = 15) or 60 min of coronary occlusion followed by reperfusion (group II, n = 15). Myocardial contrast echocardiography was performed before and 7 d after ischemia/reperfusion, and images were analyzed with the CAD system on the basis of eliminating particle swarm optimization clustering analysis. The myocardium was quickly and accurately detected using contrast-enhanced images, myocardial perfusion was quantitatively calibrated and a color-coded map calibrated by contrast intensity and automatically produced by the CAD system was used to outline the infarction region. Calibrated contrast intensity was significantly lower in infarct regions than in non-infarct regions, allowing differentiation of abnormal and normal myocardial perfusion. Receiver operating characteristic curve analysis documented that -54-pixel contrast intensity was an optimal cutoff point for the identification of infarcted myocardium with a sensitivity of 95.45% and specificity of 87.50%. Infarct sizes obtained using myocardial perfusion defect analysis of original contrast images and the contrast intensity-based color-coded map in computerized images were compared with infarct sizes measured using triphenyltetrazolium chloride staining. Use of the proposed CAD approach provided observers with more information. The infarct sizes obtained with myocardial perfusion defect analysis, the contrast intensity-based color-coded map and triphenyltetrazolium chloride staining were 23.72 ± 8.41%, 21.77 ± 7.8% and 18.21 ± 4.40% (% left ventricle) respectively (p > 0.05), indicating that computerized myocardial contrast echocardiography can accurately measure infarct size. On the basis of the results, we believe the CAD method can quickly and automatically measure myocardial perfusion and infarct size and will, it is hoped, be very helpful in clinical therapeutics. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  3. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    PubMed

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements and placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image and therefore the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from received measurements in a framework of compressive sensing. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength of recovering fine details and sharp edges at low bit-rates.
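
    A sketch of the encoder-side idea, with an assumed kernel size and down-sampling factor rather than the paper's exact parameters: the anti-alias low-pass filter is replaced by a local random binary convolution kernel before polyphase down-sampling, and the resulting measurement image can be handed to any standard codec.

```python
# Encoder-side sketch with an assumed 4x4 kernel and factor-2 down-sampling
# (not the paper's exact parameters): replace the anti-alias low-pass filter
# with a local random binary convolution kernel, then polyphase down-sample.
# The measurement image stays an ordinary image for any standard codec.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
image = rng.random((256, 256)).astype(np.float32)    # stand-in for an input image

kernel = rng.integers(0, 2, size=(4, 4)).astype(np.float32)
kernel /= kernel.sum()                               # normalize (kernel not all zeros here)

prefiltered = convolve(image, kernel, mode="reflect")
measurements = prefiltered[::2, ::2]                 # polyphase down-sampling by 2
print("encoder output (measurement image):", measurements.shape)
```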

  4. A denoising algorithm for CT image using low-rank sparse coding

    NASA Astrophysics Data System (ADS)

    Lei, Yang; Xu, Dong; Zhou, Zhengyang; Wang, Tonghe; Dong, Xue; Liu, Tian; Dhabaan, Anees; Curran, Walter J.; Yang, Xiaofeng

    2018-03-01

    We propose a CT image denoising method based on low-rank sparse coding. The proposed method constructs an adaptive dictionary of image patches and estimates the sparse coding regularization parameters using a Bayesian interpretation. A low-rank approximation approach is used to simultaneously construct the dictionary and achieve sparse representation through clustering similar image patches. A variable-splitting scheme and a quadratic optimization are used to reconstruct the CT image based on the achieved sparse coefficients. We tested this denoising technology using phantom, brain and abdominal CT images. The experimental results showed that the proposed method delivers state-of-the-art denoising performance, both in terms of objective criteria and visual quality.
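
    The core low-rank step can be illustrated in a few lines (this omits the paper's dictionary construction, Bayesian parameter estimation, and variable splitting): a group of similar noisy patches is stacked into a matrix and denoised jointly by truncating its SVD.

```python
# Minimal low-rank patch-group denoising, illustrating only the low-rank step
# (not the paper's full method): stack similar noisy patches and truncate the SVD.
import numpy as np

rng = np.random.default_rng(0)
clean_patch = rng.random((8, 8))
group = np.stack([clean_patch + 0.1 * rng.normal(size=(8, 8)) for _ in range(20)])
X = group.reshape(20, -1)                        # 20 similar patches as rows

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                            # keep a rank-2 approximation
X_denoised = (U[:, :k] * s[:k]) @ Vt[:k]

err_noisy = np.abs(X - clean_patch.ravel()).mean()
err_denoised = np.abs(X_denoised - clean_patch.ravel()).mean()
print(f"mean abs error: noisy {err_noisy:.4f} -> low-rank {err_denoised:.4f}")
```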

  5. Photon Throughput Calculations for a Spherical Crystal Spectrometer

    NASA Astrophysics Data System (ADS)

    Gilman, C. J.; Bitter, M.; Delgado-Aparicio, L.; Efthimion, P. C.; Hill, K.; Kraus, B.; Gao, L.; Pablant, N.

    2017-10-01

    X-ray imaging crystal spectrometers of the type described in Refs. have become a standard diagnostic for Doppler measurements of profiles of the ion temperature and the plasma flow velocities in magnetically confined, hot fusion plasmas. These instruments have by now been implemented on major tokamak and stellarator experiments in Korea, China, Japan, and Germany and are currently also being designed by PPPL for ITER. A still missing part in the present data analysis is an efficient code for photon throughput calculations to evaluate the chord-integrated spectral data. The existing ray tracing codes cannot be used for a data analysis between shots, since they require extensive and time consuming numerical calculations. Here, we present a detailed analysis of the geometrical properties of the ray pattern. This method allows us to minimize the extent of numerical calculations and to create a more efficient code. This work was performed under the auspices of the U.S. Department of Energy by Princeton Plasma Physics Laboratory under contract DE-AC02-09CH11466.

  6. Super-Resolution Reconstruction of Remote Sensing Images Using Multifractal Analysis

    PubMed Central

    Hu, Mao-Gui; Wang, Jin-Feng; Ge, Yong

    2009-01-01

    Satellite remote sensing (RS) is an important contributor to Earth observation, providing various kinds of imagery every day, but low spatial resolution remains a critical bottleneck in a lot of applications, restricting higher spatial resolution analysis (e.g., intra-urban). In this study, a multifractal-based super-resolution reconstruction method is proposed to alleviate this problem. The multifractal characteristic is common in Nature. The self-similarity or self-affinity presented in the image is useful to estimate details at larger and smaller scales than the original. We first look for the presence of multifractal characteristics in the images. Then we estimate parameters of the information transfer function and noise of the low resolution image. Finally, a noise-free, spatial resolution-enhanced image is generated by a fractal coding-based denoising and downscaling method. The empirical case shows that the reconstructed super-resolution image performs well in detail enhancement. This method is not only useful for remote sensing in investigating Earth, but also for other images with multifractal characteristics. PMID:22291530

  7. Study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Kipp, G.

    1992-01-01

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.

  8. Images multiplexing by code division technique

    NASA Astrophysics Data System (ADS)

    Kuo, Chung J.; Rigas, Harriett

    Spread Spectrum System (SSS) or Code Division Multiple Access System (CDMAS) has been studied for a long time, but most of the attention was focused on the transmission problems. In this paper, we study the results when the code division technique is applied to the image at the source stage. The idea is to convolve N different images with their corresponding m-sequences to obtain the encrypted images. The superimposed image (the summation of the encrypted images) is then stored or transmitted. The benefit of this is that no one knows what is stored or transmitted unless the m-sequence is known. The original image is recovered by correlating the superimposed image with the corresponding m-sequence. Two cases are studied in this paper. First, the two-dimensional image is treated as a long one-dimensional vector and the m-sequence is employed to obtain the results. Secondly, a two-dimensional quasi m-array is proposed and used for the code division multiplexing. It is shown that the quasi m-array is faster when the image size is 256 x 256. The important features of the proposed technique are not only image security but also data compactness. The compression ratio depends on how many images are superimposed.

  9. Images Multiplexing By Code Division Technique

    NASA Astrophysics Data System (ADS)

    Kuo, Chung Jung; Rigas, Harriett B.

    1990-01-01

    Spread Spectrum System (SSS) or Code Division Multiple Access System (CDMAS) has been studied for a long time, but most of the attention was focused on the transmission problems. In this paper, we study the results when the code division technique is applied to the image at the source stage. The idea is to convolve N different images with their corresponding m-sequences to obtain the encrypted images. The superimposed image (the summation of the encrypted images) is then stored or transmitted. The benefit of this is that no one knows what is stored or transmitted unless the m-sequence is known. The original image is recovered by correlating the superimposed image with the corresponding m-sequence. Two cases are studied in this paper. First, the 2-D image is treated as a long 1-D vector and the m-sequence is employed to obtain the results. Secondly, the 2-D quasi m-array is proposed and used for the code division multiplexing. It is shown that the quasi m-array is faster when the image size is 256x256. The important features of the proposed technique are not only image security but also data compactness. The compression ratio depends on how many images are superimposed.
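
    A toy version of the scheme described in these two records, using pseudo-random ±1 codes as stand-ins for true m-sequences: each flattened image is circularly convolved with its own code, the encrypted images are summed, and each image is recovered approximately (with residual crosstalk from the superposition) by circular correlation with its code.

```python
# Toy code-division image multiplexing with pseudo-random +/-1 codes standing
# in for m-sequences: encrypt by circular convolution, superimpose, and recover
# (approximately, with crosstalk) by circular correlation with the right code.
import numpy as np

rng = np.random.default_rng(0)
n = 4096
img1, img2 = rng.random(n), rng.random(n)          # two flattened test "images"
c1, c2 = rng.choice([-1.0, 1.0], n), rng.choice([-1.0, 1.0], n)

def circ_conv(x, c):                               # encryption
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(c)))

def circ_corr(y, c):                               # recovery
    return np.real(np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(c)))) / n

superimposed = circ_conv(img1, c1) + circ_conv(img2, c2)   # stored or transmitted
rec1 = circ_corr(superimposed, c1)

print("correlation with image 1:", np.corrcoef(rec1, img1)[0, 1])   # clearly positive
print("correlation with image 2:", np.corrcoef(rec1, img2)[0, 1])   # near zero
```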

  10. A source-channel coding approach to digital image protection and self-recovery.

    PubMed

    Sarreshtedari, Saeed; Akhaee, Mohammad Ali

    2015-07-01

    Watermarking algorithms have been widely applied to the field of image forensics recently. One such forensic application is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering the lost reference bits still stands. This paper shows that, once the tampering location is known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate design of channel code can protect the reference bits against tampering. In the proposed method, the total watermark bit-budget is dedicated to three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, erasure locations detected by check bits help the channel erasure decoder to retrieve the original source encoded image. Experimental results show that our proposed scheme significantly outperforms recent techniques in terms of image quality for both watermarked and recovered images. The watermarked image quality gain is achieved by spending less bit-budget on the watermark, while image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes.
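
    The erasure-channel view can be demonstrated with a much simpler code than the one designed in the paper: below, four blocks of reference bits are protected by a single XOR parity block, and a tampered (erased) block whose location is known is restored exactly.

```python
# Toy demonstration of the erasure-channel view, using a far simpler code than
# the paper's: four blocks of reference bits are protected by one XOR parity
# block, so any single block erased by tampering (location known from the
# check bits) can be restored exactly.
import numpy as np

rng = np.random.default_rng(0)
blocks = [rng.integers(0, 2, 16, dtype=np.uint8) for _ in range(4)]  # reference bits
parity = np.bitwise_xor.reduce(blocks)                               # parity block

erased_index = 2                     # tampered block; location known from check bits
survivors = [b for i, b in enumerate(blocks) if i != erased_index]
recovered = np.bitwise_xor.reduce(survivors + [parity])              # XOR of the rest
print("recovered block matches original:",
      np.array_equal(recovered, blocks[erased_index]))
```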

  11. Skinny Is Not Enough: A Content Analysis of Fitspiration on Pinterest.

    PubMed

    Simpson, Courtney C; Mazzeo, Suzanne E

    2017-05-01

    Fitspiration is a relatively new social media trend nominally intended to promote health and fitness. Fitspiration messages are presented as encouraging; however, they might also engender body dissatisfaction and compulsive exercise. This study analyzed fitspiration content (n = 1050) on the image-based social media platform Pinterest. Independent raters coded the images and text present in the posts. Messages were categorized as appearance- or health-related, and coded for Social Cognitive Theory constructs: standards, behaviors, and outcome expectancies. Messages encouraged appearance-related body image standards and weight management behaviors more frequently than health-related standards and behaviors, and emphasized attractiveness as motivation to partake in such behaviors. Results also indicated that fitspiration messages include a comparable amount of fit praise (i.e., emphasis on toned/defined muscles) and thin praise (i.e., emphasis on slenderness), suggesting that women are not only supposed to be thin but also fit. Considering the negative outcomes associated with both exposure to idealized body images and exercising for appearance reasons, findings suggest that fitspiration messages are problematic, especially for viewers with high risk of eating disorders and related issues.

  12. Accessible and informative sectioned images, color-coded images, and surface models of the ear.

    PubMed

    Park, Hyo Seok; Chung, Min Suk; Shin, Dong Sun; Jung, Yong Wook; Park, Jin Seo

    2013-08-01

    In our previous research, we created state-of-the-art sectioned images, color-coded images, and surface models of the human ear. Our ear data would be more beneficial and informative if they were more easily accessible. Therefore, the purpose of this study was to distribute browsing software and a PDF file in which the ear images can be readily obtained and freely explored. Another goal was to inform other researchers of our methods for establishing the browsing software and the PDF file. To achieve this, sectioned images and color-coded images of the ear were prepared (voxel size 0.1 mm). In the color-coded images, structures related to hearing and equilibrium, as well as structures originating from the first and second pharyngeal arches, were additionally segmented. The sectioned and color-coded images of the right ear were added to the browsing software, which displays the images serially along with structure names. The surface models were reconstructed and combined into the PDF file, where they can be freely manipulated. Using the browsing software and PDF file, the sectional and three-dimensional shapes of ear structures can be comprehended in detail. Furthermore, using the PDF file, clinical knowledge can be acquired through virtual otoscopy. Therefore, the presented educational tools will be helpful to medical students and otologists by improving their knowledge of ear anatomy. The browsing software and PDF file can be downloaded without charge and registration at our homepage (http://anatomy.dongguk.ac.kr/ear/). Copyright © 2013 Wiley Periodicals, Inc.

  13. Coded-Aperture X- or gamma-ray telescope with least-squares image reconstruction. III. Data acquisition and analysis enhancements

    NASA Astrophysics Data System (ADS)

    Kohman, T. P.

    1995-05-01

    The design of a cosmic X- or gamma-ray telescope with least-squares image reconstruction and its simulated operation have been described (Rev. Sci. Instrum. 60, 3396 and 3410 (1989)). Use of an auxiliary open aperture ("limiter") ahead of the coded aperture limits the object field to fewer pixels than detector elements, permitting least-squares reconstruction with improved accuracy in the imaged field; it also yields a uniformly sensitive ("flat") central field. The design has been enhanced to provide for mask-antimask operation, which cancels the detector background and eliminates its uncertainties; the simulated results have virtually the same statistical accuracy (pixel-by-pixel output-input RMSD) as with a single mask alone. The simulations have been made more realistic by incorporating instrumental blurring of sources. A second-stage least-squares procedure has been developed to determine the precise positions and total fluxes of point sources responsible for clusters of above-background pixels in the field resulting from the first-stage reconstruction. Another program converts source positions in the image plane to celestial coordinates and vice versa, the image being a gnomonic projection of a region of the sky.
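
    A minimal sketch of the least-squares reconstruction step, under assumed dimensions (object pixels fewer than detector elements, as the limiter ensures); the response matrix and source are synthetic stand-ins, not the instrument's calibration.

```python
# Minimal sketch (assumed setup, not the instrument code): with fewer object pixels
# than detector elements, the coded-aperture measurement y = A @ x can be inverted
# by linear least squares, as in the telescope's image reconstruction.
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_det = 25, 64                                        # object pixels < detector elements
A = rng.integers(0, 2, size=(n_det, n_pix)).astype(float)    # coded-aperture response matrix
x_true = np.zeros(n_pix); x_true[7] = 100.0                  # a single point source
y = A @ x_true + rng.normal(0, 1.0, n_det)                   # noisy detector counts

x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)                # least-squares reconstruction
print(int(np.argmax(x_hat)), round(float(x_hat.max()), 1))   # recovered source pixel and flux
```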

  14. Self-organized Evaluation of Dynamic Hand Gestures for Sign Language Recognition

    NASA Astrophysics Data System (ADS)

    Buciu, Ioan; Pitas, Ioannis

    Two main theories exist with respect to face encoding and representation in the human visual system (HVS). The first one refers to the dense (holistic) representation of the face, where faces have a "holon"-like appearance. The second one claims that a more appropriate face representation is given by a sparse code, where only a small fraction of the neural cells corresponding to face encoding is activated. Theoretical and experimental evidence suggests that the HVS performs face analysis (encoding, storing, face recognition, facial expression recognition) in a structured and hierarchical way, where both representations have their own contribution and goal. According to neuropsychological experiments, it seems that encoding for face recognition relies on a holistic image representation, while a sparse image representation is used for facial expression analysis and classification. From the computer vision perspective, the techniques developed for automatic face and facial expression recognition fall into the same two representation types. As in neuroscience, the techniques that perform better for face recognition use a holistic image representation, while those suitable for facial expression recognition use a sparse or local image representation. The proposed mathematical models of image formation and encoding try to simulate the efficient storing, organization and coding of data in the human cortex. This is equivalent to embedding constraints in the model design regarding dimensionality reduction, redundant information minimization, mutual information minimization, non-negativity constraints, class information, etc. The presented techniques are applied as a feature extraction step followed by a classification method, which also heavily influences the recognition results.

  15. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    CT image reconstruction techniques were applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson, and blind deconvolution, this approach is new. In this method, coded-aperture processing is used for the first time independently of the point spread function of the imaging diagnostic system. In this way, the technical obstacle in traditional coded-pinhole image processing caused by the uncertainty of the point spread function is overcome. Based on the theoretical study, simulations of penumbral imaging and image reconstruction were carried out and provided fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse and volume scattering. Penumbral images were acquired with an aperture size of ~20 mm. Finally, the CT reconstruction technique was applied and yielded a fairly good reconstruction result.

  16. Ensemble coding of face identity is present but weaker in congenital prosopagnosia.

    PubMed

    Robson, Matthew K; Palermo, Romina; Jeffery, Linda; Neumann, Markus F

    2018-03-01

    Individuals with congenital prosopagnosia (CP) are impaired at identifying individual faces but do not appear to show impairments in extracting the average identity from a group of faces (known as ensemble coding). However, possible deficits in ensemble coding in a previous study (CPs n = 4) may have been masked because CPs relied on pictorial (image) cues rather than identity cues. Here we asked whether a larger sample of CPs (n = 11) would show intact ensemble coding of identity when the availability of image cues was minimised. Participants viewed a "set" of four faces and then judged whether a subsequent individual test face, either an exemplar or a "set average", was in the preceding set. Ensemble coding occurred when matching (vs. mismatching) averages were mistakenly endorsed as set members. We assessed both image- and identity-based ensemble coding, by varying whether test faces were either the same or different images of the identities in the set. CPs showed significant ensemble coding in both tasks, indicating that their performance was independent of image cues. As a group, CPs' ensemble coding was weaker than that of controls in both tasks, consistent with evidence that perceptual processing of face identity is disrupted in CP. This effect was driven by CPs (n = 3) who, in addition to having impaired face memory, also performed particularly poorly on a measure of face perception (CFPT). Future research, using larger samples, should examine whether deficits in ensemble coding may be restricted to CPs who also have substantial face perception deficits. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Image Coding Based on Address Vector Quantization.

    NASA Astrophysics Data System (ADS)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; this index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of the address is considered and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but the bit rate is about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ based on a probability transition matrix, which selects the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique" is presented. In addition to chapters 2 through 6, which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 which review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in conclusion.
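
    The following toy sketch illustrates the plain VQ baseline that the thesis extends (codebook training, index transmission, table-lookup decoding); the block size, codebook size, and use of SciPy's k-means routine are illustrative assumptions.

```python
# Toy sketch of plain vector quantization: 4x4 image blocks are clustered into a
# codebook; each block is replaced by the index of its nearest codeword, and the
# image is rebuilt by table lookup.
import numpy as np
from scipy.cluster.vq import kmeans, vq

rng = np.random.default_rng(3)
image = rng.random((64, 64))
blocks = image.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3).reshape(-1, 16)  # 4x4 blocks as vectors

codebook, _ = kmeans(blocks, 32)            # train 32 codewords (generalized Lloyd / k-means)
indices, _ = vq(blocks, codebook)           # encoding: only these indices are transmitted
decoded_blocks = codebook[indices]          # decoding: table lookup
decoded = decoded_blocks.reshape(16, 16, 4, 4).transpose(0, 2, 1, 3).reshape(64, 64)
print(np.mean((image - decoded) ** 2))      # distortion of the VQ approximation
```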

  18. Political leaders and the media. Can we measure political leadership images in newspapers using computer-assisted content analysis?

    PubMed

    Aaldering, Loes; Vliegenthart, Rens

    Despite the large amount of research into both media coverage of politics and political leadership, surprisingly little research has been devoted to the ways political leaders are discussed in the media. This paper studies whether computer-aided content analysis can be applied in examining political leadership images in Dutch newspaper articles. It first provides a conceptualization of political leader character traits that integrates different perspectives in the literature. Moreover, this paper measures twelve political leadership images in media coverage, based on a large-scale computer-assisted content analysis of Dutch media coverage (including almost 150,000 newspaper articles), and systematically tests the quality of the employed measurement instrument by assessing the relationship between the images, the variance in the measurement, and the over-time development of images for two party leaders, and by comparing the computer results with manual coding. We conclude that the computerized content analysis provides a valid measurement for the leadership images in Dutch newspapers. Moreover, we find that the dimensions political craftsmanship, vigorousness, integrity, communicative performances and consistency are regularly applied in discussing party leaders, but that portrayal of party leaders in terms of responsiveness is almost completely absent in Dutch newspapers.
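
    As a rough illustration of what computer-assisted content analysis of trait mentions can look like (not necessarily the method used in this study), the sketch below counts hits from small, invented trait dictionaries per article.

```python
# Dictionary-based trait counting as one common form of computer-assisted content
# analysis. The trait dictionaries below are invented examples, not the study's word lists.
import re
from collections import Counter

trait_dictionaries = {
    "integrity": {"honest", "trustworthy", "sincere"},
    "vigorousness": {"energetic", "decisive", "forceful"},
}

def score_article(text):
    # count how often each trait dictionary's terms occur in one article
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    return {trait: sum(words[w] for w in vocab) for trait, vocab in trait_dictionaries.items()}

article = "The party leader gave an energetic and decisive, yet honest, performance."
print(score_article(article))   # e.g. {'integrity': 1, 'vigorousness': 2}
```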

  19. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.

  20. ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging

    PubMed Central

    Ovesný, Martin; Křížek, Pavel; Borkovec, Josef; Švindrych, Zdeněk; Hagen, Guy M.

    2014-01-01

    Summary: ThunderSTORM is an open-source, interactive and modular plug-in for ImageJ designed for automated processing, analysis and visualization of data acquired by single-molecule localization microscopy methods such as photo-activated localization microscopy and stochastic optical reconstruction microscopy. ThunderSTORM offers an extensive collection of processing and post-processing methods so that users can easily adapt the process of analysis to their data. ThunderSTORM also offers a set of tools for creation of simulated data and quantitative performance evaluation of localization algorithms using Monte Carlo simulations. Availability and implementation: ThunderSTORM and the online documentation are both freely accessible at https://code.google.com/p/thunder-storm/. Contact: guy.hagen@lf1.cuni.cz. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24771516

  1. TLM-Tracker: software for cell segmentation, tracking and lineage analysis in time-lapse microscopy movies.

    PubMed

    Klein, Johannes; Leupold, Stefan; Biegler, Ilona; Biedendieck, Rebekka; Münch, Richard; Jahn, Dieter

    2012-09-01

    Time-lapse imaging in combination with fluorescence microscopy techniques enables the investigation of gene regulatory circuits and has uncovered phenomena such as culture heterogeneity. In this context, computational image processing for the analysis of single-cell behaviour plays an increasing role in systems biology and mathematical modelling approaches. Consequently, we developed a software package with a graphical user interface for the analysis of single bacterial cell behaviour. The new software, TLM-Tracker, allows for flexible and user-friendly segmentation, tracking and lineage analysis of microbial cells in time-lapse movies. The software package, including manual, tutorial video and examples, is available as Matlab code or executable binaries at http://www.tlmtracker.tu-bs.de.

  2. Modeling IrisCode and its variants as convex polyhedral cones and its security implications.

    PubMed

    Kong, Adams Wai-Kin

    2013-03-01

    IrisCode, developed by Daugman in 1993, is the most influential iris recognition algorithm. A thorough understanding of IrisCode is essential, because over 100 million persons have been enrolled by this algorithm and many biometric personal identification and template protection methods have been developed based on IrisCode. This paper indicates that a template produced by IrisCode or its variants is a convex polyhedral cone in a hyperspace. Its central ray, being a rough representation of the original biometric signal, can be computed by a simple algorithm, which can often be implemented in one Matlab command line. The central ray is an expected ray and also an optimal ray of an objective function on a group of distributions. This algorithm is derived from geometric properties of a convex polyhedral cone but does not rely on any prior knowledge (e.g., iris images). The experimental results show that biometric templates, including iris and palmprint templates, produced by different recognition methods can be matched through the central rays in their convex polyhedral cones and that templates protected by a method extended from IrisCode can be broken into. These experimental results indicate that, without a thorough security analysis, convex polyhedral cone templates cannot be assumed secure. Additionally, the simplicity of the algorithm implies that even junior hackers without knowledge of advanced image processing and biometric databases can still break into protected templates and reveal relationships among templates produced by different recognition methods.

  3. Weighted bi-prediction for light field image coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2017-09-01

    Light field imaging based on a single-tier camera equipped with a microlens array - also known as integral, holoscopic, and plenoptic imaging - has recently risen to prominence as a practical and promising approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require developing adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, self-similarity compensated prediction is a non-local spatial prediction scheme based on block matching that has been shown to achieve high efficiency for light field image coding based on the High Efficiency Video Coding (HEVC) standard. As previously shown by the authors, this is possible by simply averaging two predictor blocks that are jointly estimated from a causal search window in the current frame itself, referred to as self-similarity bi-prediction. However, theoretical analyses for motion compensated bi-prediction have suggested that it is still possible to achieve further rate-distortion performance improvements by adaptively estimating the weighting coefficients of the two predictor blocks. Therefore, this paper presents a comprehensive study of the rate-distortion performance for HEVC-based light field image coding when using different sets of weighting coefficients for self-similarity bi-prediction. Experimental results demonstrate that it is possible to extend the previous theoretical conclusions to light field image coding and show that the proposed adaptive weighting coefficient selection leads to bit savings of up to 5% compared to the previous self-similarity bi-prediction scheme.
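
    A schematic sketch of the weighted bi-prediction idea discussed above: the two predictor blocks are combined with several candidate weight pairs and the pair with the lowest error is kept; the weight set and the SSE criterion are simplifications of the codec's internal rate-distortion test.

```python
# Hedged sketch of weighted bi-prediction: instead of plain averaging of the two
# self-similarity predictor blocks, a small set of weight pairs is tried and the
# one with the lowest prediction error is kept.
import numpy as np

def best_weighted_prediction(block, pred0, pred1,
                             weight_pairs=((0.5, 0.5), (0.75, 0.25), (0.25, 0.75))):
    best = None
    for w0, w1 in weight_pairs:
        pred = w0 * pred0 + w1 * pred1            # weighted combination of the two predictors
        sse = float(np.sum((block - pred) ** 2))  # simple SSE in place of a full RD cost
        if best is None or sse < best[0]:
            best = (sse, (w0, w1), pred)
    return best[1], best[2]

rng = np.random.default_rng(4)
block = rng.random((8, 8))
p0, p1 = block + 0.1 * rng.random((8, 8)), block + 0.3 * rng.random((8, 8))
weights, prediction = best_weighted_prediction(block, p0, p1)
print(weights)
```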

  4. Design and implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples

    PubMed Central

    Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2016-01-01

    A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice. PMID:26962543

  5. Design and implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples.

    PubMed

    Lakshmanan, Manu N; Greenberg, Joel A; Samei, Ehsan; Kapadia, Anuj J

    2016-01-01

    A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice.

  6. Information theoretical assessment of digital imaging systems

    NASA Technical Reports Server (NTRS)

    John, Sarah; Rahman, Zia-Ur; Huck, Friedrich O.; Reichenbach, Stephen E.

    1990-01-01

    The end-to-end performance of image gathering, coding, and restoration as a whole is considered. This approach is based on the pivotal relationship that exists between the spectral information density of the transmitted signal and the restorability of images from this signal. The information-theoretical assessment accounts for (1) the information density and efficiency of the acquired signal as a function of the image-gathering system design and the radiance-field statistics, and (2) the improvement in information efficiency and data compression that can be gained by combining image gathering with coding to reduce the signal redundancy and irrelevancy. It is concluded that images can be restored with better quality and from fewer data as the information efficiency of the data is increased. The restoration properly accounts for the image gathering and coding processes and effectively suppresses the image-display degradations.

  7. Information theoretical assessment of digital imaging systems

    NASA Astrophysics Data System (ADS)

    John, Sarah; Rahman, Zia-Ur; Huck, Friedrich O.; Reichenbach, Stephen E.

    1990-10-01

    The end-to-end performance of image gathering, coding, and restoration as a whole is considered. This approach is based on the pivotal relationship that exists between the spectral information density of the transmitted signal and the restorability of images from this signal. The information-theoretical assessment accounts for (1) the information density and efficiency of the acquired signal as a function of the image-gathering system design and the radiance-field statistics, and (2) the improvement in information efficiency and data compression that can be gained by combining image gathering with coding to reduce the signal redundancy and irrelevancy. It is concluded that images can be restored with better quality and from fewer data as the information efficiency of the data is increased. The restoration properly accounts for the image gathering and coding processes and effectively suppresses the image-display degradations.

  8. Optical System Design for Noncontact, Normal Incidence, THz Imaging of in vivo Human Cornea.

    PubMed

    Sung, Shijun; Dabironezare, Shahab; Llombart, Nuria; Selvin, Skyler; Bajwa, Neha; Chantra, Somporn; Nowroozi, Bryan; Garritano, James; Goell, Jacob; Li, Alex; Deng, Sophie X; Brown, Elliott; Grundfest, Warren S; Taylor, Zachary D

    2018-01-01

    Reflection mode Terahertz (THz) imaging of corneal tissue water content (CTWC) is a proposed method for early, accurate detection and study of corneal diseases. Despite promising results from ex vivo and in vivo cornea studies, interpretation of the reflectivity data is confounded by the contact between corneal tissue and dielectric windows used to flatten the imaging field. Herein, we present an optical design for non-contact THz imaging of the cornea. A beam scanning methodology performs angular, normal incidence sweeps of a focused beam over the corneal surface while keeping the source, detector, and patient stationary. A quasioptical analysis method is developed to analyze the theoretical resolution and imaging field intensity profile. These results are compared to the electric field distribution computed with a physical optics analysis code. Imaging experiments validate the optical theories behind the design and suggest that quasioptical methods are sufficient for the design of THz corneal imaging systems. Successful imaging operations support the feasibility of non-contact in vivo imaging. We believe that this optical system design will enable the first, clinically relevant, in vivo exploration of CTWC using THz technology.

  9. Rock classification based on resistivity patterns in electrical borehole wall images

    NASA Astrophysics Data System (ADS)

    Linek, Margarete; Jungmann, Matthias; Berlage, Thomas; Pechnig, Renate; Clauser, Christoph

    2007-06-01

    Electrical borehole wall images represent grey-level-coded micro-resistivity measurements at the borehole wall. Different scientific methods have been implemented to transform image data into quantitative log curves. We introduce a pattern recognition technique applying texture analysis, which uses second-order statistics based on studying the occurrence of pixel pairs. We calculate so-called Haralick texture features such as contrast, energy, entropy and homogeneity. The supervised classification method is used for assigning characteristic texture features to different rock classes and assessing the discriminative power of these image features. We use classifiers obtained from training intervals to characterize the entire image data set recovered in ODP hole 1203A. This yields a synthetic lithology profile based on computed texture data. We show that Haralick features accurately classify 89.9% of the training intervals. We obtained misclassification for vesicular basaltic rocks. Hence, further image analysis tools are used to improve the classification reliability. We decompose the 2D image signal by the application of wavelet transformation in order to enhance image objects horizontally, diagonally and vertically. The resulting filtered images are used for further texture analysis. This combined classification based on Haralick features and wavelet transformation improved our classification up to a level of 98%. The application of wavelet transformation increases the consistency between standard logging profiles and texture-derived lithology. Texture analysis of borehole wall images offers the potential to facilitate objective analysis of multiple boreholes with the same lithology.
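
    The Haralick features named above can be computed from a grey-level co-occurrence matrix; the sketch below uses scikit-image on a synthetic patch standing in for a borehole-image window (older scikit-image releases spell the functions greycomatrix/greycoprops), with entropy computed by hand since it is not among the built-in properties.

```python
# Illustrative GLCM ("Haralick") feature computation with scikit-image on a random
# 8-bit test patch; the real input would be a grey-level-coded resistivity image window.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(5)
patch = rng.integers(0, 64, size=(64, 64), dtype=np.uint8)   # synthetic 64-level patch

glcm = graycomatrix(patch, distances=[1], angles=[0], levels=64,
                    symmetric=True, normed=True)
features = {
    "contrast": float(graycoprops(glcm, "contrast")[0, 0]),
    "energy": float(graycoprops(glcm, "energy")[0, 0]),
    "homogeneity": float(graycoprops(glcm, "homogeneity")[0, 0]),
}
p = glcm[:, :, 0, 0]
features["entropy"] = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))  # entropy is not built in
print(features)
```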

  10. Adaptive coded aperture imaging in the infrared: towards a practical implementation

    NASA Astrophysics Data System (ADS)

    Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley

    2008-08-01

    An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.

  11. Information theoretical assessment of image gathering and coding for digital restoration

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; John, Sarah; Reichenbach, Stephen E.

    1990-01-01

    The process of image-gathering, coding, and restoration is presently treated in its entirety rather than as a catenation of isolated tasks, on the basis of the relationship between the spectral information density of a transmitted signal and the restorability of images from the signal. This 'information-theoretic' assessment accounts for the information density and efficiency of the acquired signal as a function of the image-gathering system's design and radiance-field statistics, as well as for the information efficiency and data compression that are obtainable through the combination of image gathering with coding to reduce signal redundancy. It is found that high information efficiency is achievable only through minimization of image-gathering degradation as well as signal redundancy.

  12. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) with a higher compression ratio compared to typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
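
    The lossless principle is easy to demonstrate with a placeholder lossy stage standing in for the wavelet-fractal coder: whatever the lossy stage does, adding back an exactly coded residual reproduces the original image bit for bit (the Huffman coding of the residual is omitted in this sketch).

```python
# Schematic sketch of the hybrid lossless principle: a crude quantizer stands in for
# the wavelet + fractal coder/decoder round trip; the exact residual restores the image.
import numpy as np

rng = np.random.default_rng(6)
image = rng.integers(0, 256, size=(128, 128), dtype=np.int16)

def lossy_stage(img, step=16):
    # placeholder for the lossy wavelet-fractal coder/decoder round trip
    return (img // step) * step + step // 2

approx = lossy_stage(image)
residual = image - approx                     # small-amplitude residual, cheap to entropy-code
reconstructed = approx + residual             # decoder adds the residual back
assert np.array_equal(reconstructed, image)   # lossless: infinite PSNR
print(int(residual.min()), int(residual.max()))
```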

  13. Coded aperture detector: an image sensor with sub 20-nm pixel resolution.

    PubMed

    Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick

    2014-08-11

    We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.

  14. Adaptive bit plane quadtree-based block truncation coding for image compression

    NASA Astrophysics Data System (ADS)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low-bit-rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, this paper proposes a quadtree-based block truncation coding algorithm combined with adaptive bit-plane transmission. First, the edge direction in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is used to optimize the BTC, depending on the MSE loss when the block is encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to other state-of-the-art BTC variants, making it well suited to real-time image compression applications.
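
    For reference, a minimal sketch of the AMBTC baseline the paper builds on, coding each 4x4 block as a bitmap plus low/high reconstruction levels; the quadtree partitioning and adaptive bit-plane transmission of the proposed method are not shown.

```python
# Minimal AMBTC sketch: each 4x4 block is represented by a one-bit-per-pixel bitmap
# and two reconstruction levels (means of the pixels below/above the block mean).
import numpy as np

def ambtc_block(block):
    mean = block.mean()
    bitmap = block >= mean
    high = block[bitmap].mean() if bitmap.any() else mean
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap, low, high

def ambtc_decode(bitmap, low, high):
    return np.where(bitmap, high, low)

rng = np.random.default_rng(7)
image = rng.integers(0, 256, size=(8, 8)).astype(float)
out = np.zeros_like(image)
for r in range(0, 8, 4):
    for c in range(0, 8, 4):
        bmp, lo, hi = ambtc_block(image[r:r+4, c:c+4])
        out[r:r+4, c:c+4] = ambtc_decode(bmp, lo, hi)
print(np.mean((image - out) ** 2))   # per-block MSE of the kind that drives the adaptive choice
```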

  15. Toward uniform implementation of parametric map Digital Imaging and Communication in Medicine standard in multisite quantitative diffusion imaging studies.

    PubMed

    Malyarenko, Dariya; Fedorov, Andriy; Bell, Laura; Prah, Melissa; Hectors, Stefanie; Arlinghaus, Lori; Muzi, Mark; Solaiyappan, Meiyappan; Jacobs, Michael; Fung, Maggie; Shukla-Dave, Amita; McManus, Kevin; Boss, Michael; Taouli, Bachir; Yankeelov, Thomas E; Quarles, Christopher Chad; Schmainda, Kathleen; Chenevert, Thomas L; Newitt, David C

    2018-01-01

    This paper reports on results of a multisite collaborative project launched by the MRI subgroup of Quantitative Imaging Network to assess current capability and provide future guidelines for generating a standard parametric diffusion map Digital Imaging and Communication in Medicine (DICOM) in clinical trials that utilize quantitative diffusion-weighted imaging (DWI). Participating sites used a multivendor DWI DICOM dataset of a single phantom to generate parametric maps (PMs) of the apparent diffusion coefficient (ADC) based on two models. The results were evaluated for numerical consistency among models and true phantom ADC values, as well as for consistency of metadata with attributes required by the DICOM standards. This analysis identified missing metadata descriptive of the sources for detected numerical discrepancies among ADC models. Instead of the DICOM PM object, all sites stored ADC maps as DICOM MR objects, generally lacking designated attributes and coded terms for quantitative DWI modeling. Source-image reference, model parameters, ADC units and scale, deemed important for numerical consistency, were either missing or stored using nonstandard conventions. Guided by the identified limitations, the DICOM PM standard has been amended to include coded terms for the relevant diffusion models. Open-source software has been developed to support conversion of site-specific formats into the standard representation.

  16. Determination of Three-Dimensional Left Ventricle Motion to Analyze Ventricular Dyssyncrony in SPECT Images

    NASA Astrophysics Data System (ADS)

    Rebelo, Marina de Sá; Aarre, Ann Kirstine Hummelgaard; Clemmesen, Karen-Louise; Brandão, Simone Cristina Soares; Giorgi, Maria Clementina; Meneghetti, José Cláudio; Gutierrez, Marco Antonio

    2009-12-01

    A method to compute three-dimensional (3D) left ventricle (LV) motion, together with a color-coded visualization scheme for qualitative analysis in SPECT images, is proposed and used to investigate some aspects of Cardiac Resynchronization Therapy (CRT). The method was applied to 3D gated-SPECT image sets from normal subjects and from patients with severe Idiopathic Heart Failure, before and after CRT. Color-coded visualization maps representing regional LV motion showed significant differences between patients and normal subjects. Moreover, they indicated a difference between the two groups. Numerical results of regional mean values representing the intensity and direction of movement in the radial direction are presented. A difference of one order of magnitude in the intensity of movement in patients relative to normal subjects was observed. The quantitative and qualitative parameters give good indications of the potential application of the technique to the diagnosis and follow-up of patients submitted to CRT.

  17. An analysis of how The Irish Times portrayed Irish nursing during the 1999 strike.

    PubMed

    Clarke, J; O'Neill, C S

    2001-07-01

    The aim of this article is to explore the images of nursing that were presented in the media during the recent industrial action by nurses and midwives in the Republic of Ireland. Although both nurses and midwives took industrial strike action, the strike was referred to as 'the nurses' strike' and both nurses and midwives were generally referred to by the generic term 'nurses'. Data were gathered from the printed news media of The Irish Times over a period of one month--4 October to 4 November 1999--which included the nine days of the strike. Although we limited the source of our data to just one newspaper, the findings do provide an image of how nurses and nursing care are viewed by both health professionals and the public. This image appeared to give a higher value to masculine cultural codes and the performance of technical skills, whereas acts associated with feminine cultural codes of caring were considered of lower value.

  18. Synthetic Microwave Imaging Reflectometry diagnostic using 3D FDTD Simulations

    NASA Astrophysics Data System (ADS)

    Kruger, Scott; Jenkins, Thomas; Smithe, David; King, Jacob; Nimrod Team

    2017-10-01

    Microwave Imaging Reflectometry (MIR) has become a standard diagnostic for understanding tokamak edge perturbations, including the edge harmonic oscillations in QH-mode operation. These long-wavelength perturbations are larger than normal turbulent fluctuation levels, and thus normal analysis of synthetic signals becomes more difficult. To investigate this, we construct a synthetic MIR diagnostic for exploring density fluctuation amplitudes in the tokamak plasma edge using the three-dimensional, full-wave FDTD code Vorpal. The source microwave beam for the diagnostic is generated and reflected at the cutoff surface, which is distorted by 2D density fluctuations in the edge plasma. Synthetic imaging optics at the detector can be used to understand the fluctuation and background density profiles. We apply the diagnostic to understand the fluctuations in edge plasma density during QH-mode activity in the DIII-D tokamak, as modeled by the NIMROD code. This work was funded under DOE Grant Number DE-FC02-08ER54972.

  19. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  20. Permutation coding technique for image recognition systems.

    PubMed

    Kussul, Ernst M; Baidyk, Tatiana N; Wunsch, Donald C; Makeyev, Oleksandr; Martín, Anabel

    2006-11-01

    A feature extractor and neural classifier for image recognition systems are proposed. The proposed feature extractor is based on the concept of random local descriptors (RLDs). It is followed by an encoder based on the permutation coding technique, which makes it possible to take into account not only the detected features but also the position of each feature in the image, and to make the recognition process invariant to small displacements. The combination of RLDs and permutation coding permits us to obtain a sufficiently general description of the image to be recognized. The code generated by the encoder is used as input data for the neural classifier. Different types of images were used to test the proposed image recognition system. It was tested on the handwritten digit recognition problem, the face recognition problem, and the microobject shape recognition problem. The results of testing are very promising. The error rate for the Modified National Institute of Standards and Technology (MNIST) database is 0.44% and for the Olivetti Research Laboratory (ORL) database it is 0.1%.

  1. The location and recognition of anti-counterfeiting code image with complex background

    NASA Astrophysics Data System (ADS)

    Ni, Jing; Liu, Quan; Lou, Ping; Han, Ping

    2017-07-01

    Maintaining order in the cigarette market is a key issue for the tobacco business system. The anti-counterfeiting code, an effective anti-counterfeiting technology, can identify counterfeit goods and help maintain normal market order and consumers' rights and interests. Anti-counterfeiting code images obtained by the tobacco recognizer suffer from complex backgrounds, light interference, and other problems. To solve these problems, this paper proposes a locating method based on the SUSAN operator combined with a sliding window and line scanning. To reduce background and noise interference, we extract the red component of the image and convert the color image into a gray-scale image. For easily confused characters, recognition-result correction based on template matching is adopted to improve the recognition rate. With this method, the anti-counterfeiting code can be located and recognized correctly in images with complex backgrounds. The experimental results show the effectiveness and feasibility of the approach.
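
    A hedged sketch of the template-matching correction step described above, using OpenCV's normalized cross-correlation to pick the best-matching character template; the templates, sizes, and data here are illustrative stand-ins, not the paper's implementation.

```python
# An ambiguous character patch is compared against stored templates with normalized
# cross-correlation; the best-scoring template label is taken as the corrected result.
import cv2
import numpy as np

def correct_character(patch, templates):
    # patch: grayscale candidate character; templates: {label: grayscale template image}
    best_label, best_score = None, -1.0
    for label, tpl in templates.items():
        tpl = cv2.resize(tpl, (patch.shape[1], patch.shape[0]))
        score = float(cv2.matchTemplate(patch, tpl, cv2.TM_CCOEFF_NORMED).max())
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

rng = np.random.default_rng(8)
templates = {c: rng.integers(0, 255, (16, 12), dtype=np.uint8) for c in "0123456789"}
patch = templates["8"].copy()                  # pretend the recognizer was unsure about this patch
print(correct_character(patch, templates))    # template matching picks '8'
```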

  2. Discrete Sparse Coding.

    PubMed

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  3. Image gathering and coding for digital restoration: Information efficiency and visual quality

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; John, Sarah; Mccormick, Judith A.; Narayanswamy, Ramkumar

    1989-01-01

    Image gathering and coding are commonly treated as tasks separate from each other and from the digital processing used to restore and enhance the images. The goal is to develop a method that allows us to assess quantitatively the combined performance of image gathering and coding for the digital restoration of images with high visual quality. Digital restoration is often interactive because visual quality depends on perceptual rather than mathematical considerations, and these considerations vary with the target, the application, and the observer. The approach is based on the theoretical treatment of image gathering as a communication channel (J. Opt. Soc. Am. A 2, 1644 (1985); 5, 285 (1988)). Initial results suggest that the practical upper limit of the information contained in the acquired image data ranges typically from approximately 2 to 4 binary information units (bifs) per sample, depending on the design of the image-gathering system. The associated information efficiency of the transmitted data (i.e., the ratio of information over data) ranges typically from approximately 0.3 to 0.5 bif per bit without coding to approximately 0.5 to 0.9 bif per bit with lossless predictive compression and Huffman coding. The visual quality that can be attained with interactive image restoration improves perceptibly as the available information increases to approximately 3 bifs per sample. However, the perceptual improvements that can be attained with further increases in information are very subtle and depend on the target and the desired enhancement.

  4. Dentists' perspectives on caries-related treatment decisions.

    PubMed

    Gomez, J; Ellwood, R P; Martignon, S; Pretty, I A

    2014-06-01

    To assess the impact of patient risk status on Colombian dentists' caries-related treatment decisions for early to intermediate caries lesions (ICDAS codes 2 to 4). A web-based questionnaire assessed dentists' views on the management of early/intermediate lesions. The questionnaire included questions on demographic characteristics, five clinical scenarios with randomised levels of caries risk, and two questions on different clinical and radiographic sets of images with different thresholds of caries. Questionnaires were completed by 439 dentists. For the two scenarios describing occlusal lesions with ICDAS code 2, dentists chose to provide a preventive option in 63% and 60% of the cases. For the approximal lesion with ICDAS code 2, 81% of the dentists chose to restore. The main findings of the binary logistic regression analysis for the clinical scenarios suggest that, for ICDAS code 2 occlusal lesions, the odds of a high-caries-risk patient receiving a restoration are higher than for a low-caries-risk patient. For the questions describing different clinical thresholds of caries, most dentists would restore at ICDAS code 2 (55%), and for the question showing different radiographic threshold images, 65% of dentists would intervene operatively at the inner half of enamel. No significant differences with respect to risk were found for these questions with the logistic regression. The results of this study indicate that Colombian dentists have not yet fully adopted non-invasive treatment for early caries lesions.

  5. Immunochromatographic diagnostic test analysis using Google Glass.

    PubMed

    Feng, Steve; Caire, Romain; Cortazar, Bingen; Turan, Mehmet; Wong, Andrew; Ozcan, Aydogan

    2014-03-25

    We demonstrate a Google Glass-based rapid diagnostic test (RDT) reader platform capable of qualitative and quantitative measurements of various lateral flow immunochromatographic assays and similar biomedical diagnostics tests. Using a custom-written Glass application and without any external hardware attachments, one or more RDTs labeled with Quick Response (QR) code identifiers are simultaneously imaged using the built-in camera of the Google Glass that is based on a hands-free and voice-controlled interface and digitally transmitted to a server for digital processing. The acquired JPEG images are automatically processed to locate all the RDTs and, for each RDT, to produce a quantitative diagnostic result, which is returned to the Google Glass (i.e., the user) and also stored on a central server along with the RDT image, QR code, and other related information (e.g., demographic data). The same server also provides a dynamic spatiotemporal map and real-time statistics for uploaded RDT results accessible through Internet browsers. We tested this Google Glass-based diagnostic platform using qualitative (i.e., yes/no) human immunodeficiency virus (HIV) and quantitative prostate-specific antigen (PSA) tests. For the quantitative RDTs, we measured activated tests at various concentrations ranging from 0 to 200 ng/mL for free and total PSA. This wearable RDT reader platform running on Google Glass combines a hands-free sensing and image capture interface with powerful servers running our custom image processing codes, and it can be quite useful for real-time spatiotemporal tracking of various diseases and personal medical conditions, providing a valuable tool for epidemiology and mobile health.

  6. Immunochromatographic Diagnostic Test Analysis Using Google Glass

    PubMed Central

    2014-01-01

    We demonstrate a Google Glass-based rapid diagnostic test (RDT) reader platform capable of qualitative and quantitative measurements of various lateral flow immunochromatographic assays and similar biomedical diagnostics tests. Using a custom-written Glass application and without any external hardware attachments, one or more RDTs labeled with Quick Response (QR) code identifiers are simultaneously imaged using the built-in camera of the Google Glass that is based on a hands-free and voice-controlled interface and digitally transmitted to a server for digital processing. The acquired JPEG images are automatically processed to locate all the RDTs and, for each RDT, to produce a quantitative diagnostic result, which is returned to the Google Glass (i.e., the user) and also stored on a central server along with the RDT image, QR code, and other related information (e.g., demographic data). The same server also provides a dynamic spatiotemporal map and real-time statistics for uploaded RDT results accessible through Internet browsers. We tested this Google Glass-based diagnostic platform using qualitative (i.e., yes/no) human immunodeficiency virus (HIV) and quantitative prostate-specific antigen (PSA) tests. For the quantitative RDTs, we measured activated tests at various concentrations ranging from 0 to 200 ng/mL for free and total PSA. This wearable RDT reader platform running on Google Glass combines a hands-free sensing and image capture interface with powerful servers running our custom image processing codes, and it can be quite useful for real-time spatiotemporal tracking of various diseases and personal medical conditions, providing a valuable tool for epidemiology and mobile health. PMID:24571349

  7. Joint sparse coding based spatial pyramid matching for classification of color medical image.

    PubMed

    Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin

    2015-04-01

    Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification. Copyright © 2014 Elsevier Ltd. All rights reserved.
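
    A rough sketch of the joint-dictionary idea (not the authors' implementation): RGB patches are concatenated into single vectors so one learned dictionary captures per-channel structure and inter-channel correlation, and the resulting sparse codes can feed an SPM-style classifier. The dictionary size, sparsity level, and random patches are illustrative assumptions.

```python
# Joint sparse coding of color patches with a single dictionary over concatenated
# R|G|B vectors, using scikit-learn's dictionary learning as a generic stand-in.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(9)
patches = rng.random((200, 8, 8, 3))                       # 200 color patches (8x8, RGB)
X = patches.reshape(200, -1)                               # joint vector: R|G|B concatenated

dico = DictionaryLearning(n_components=32, transform_algorithm="omp",
                          transform_n_nonzero_coefs=5, max_iter=10, random_state=0)
codes = dico.fit(X).transform(X)                           # joint sparse codes (200 x 32)
print(codes.shape, float(np.mean(np.count_nonzero(codes, axis=1))))
```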

  8. Modeling of SOC-700 Hyperspectral Imagery with the CAMEO-SIM Code

    DTIC Science & Technology

    2007-10-26


  9. Design of wavefront coding optical system with annular aperture

    NASA Astrophysics Data System (ADS)

    Chen, Xinhua; Zhou, Jiankang; Shen, Weimin

    2016-10-01

    Wavefront coding can extend the depth of field of a traditional optical system by inserting a phase mask into the pupil plane. In this paper, the point spread function (PSF) of a wavefront coding system with an annular aperture is analyzed. The stationary phase method and the fast Fourier transform (FFT) method are used to compute the diffraction integral. The OTF invariance is analyzed for the annular aperture with a cubic phase mask under different obscuration ratios. Based on these results, a wavefront coding system using a Maksutov-Cassegrain configuration is designed. It is an F/8.21 catadioptric system with an annular aperture and a focal length of 821 mm. The strength of the cubic phase mask is optimized with a user-defined operand in Zemax. A Wiener filtering algorithm is used to restore the images, and numerical simulation confirms the validity of the design.
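
    The FFT route to the PSF can be illustrated with a short numerical sketch; the grid size, obscuration ratio, mask strength and defocus values below are illustrative assumptions, not the parameters of the designed system.

        # Numerical sketch: PSF of an annular pupil with a cubic phase mask via FFT.
        # Grid size, obscuration ratio, mask strength and defocus are illustrative.
        import numpy as np

        def psf_cubic_annular(n=512, eps=0.3, alpha=30.0, defocus=0.0):
            x = np.linspace(-1, 1, n)
            X, Y = np.meshgrid(x, x)
            r = np.hypot(X, Y)
            pupil = (r <= 1.0) & (r >= eps)                 # annular aperture
            phase = alpha * (X**3 + Y**3) + defocus * r**2  # cubic mask plus defocus (waves)
            field = pupil * np.exp(2j * np.pi * phase)
            psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
            return psf / psf.sum()

        # The PSF (and hence the OTF) stays nearly invariant over a range of defocus;
        # compare psf_cubic_annular(defocus=0.0) with psf_cubic_annular(defocus=3.0).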

  10. Hospital financing of ischaemic stroke: determinants of funding and usefulness of DRG subcategories based on severity of illness.

    PubMed

    Dewilde, Sarah; Annemans, Lieven; Pincé, Hilde; Thijs, Vincent

    2018-05-11

    Several Western and Arab countries, as well as over 30 States in the US, are using the "All-Patient Refined Diagnosis-Related Groups" (APR-DRGs) with four severity-of-illness (SOI) subcategories as a model for hospital funding. The aim of this study is to verify whether this is an adequate model for funding stroke hospital admissions, and to explore which risk factors and complications may influence the amount of funding. A bottom-up analysis of 2496 ischaemic stroke admissions in Belgium compares detailed in-hospital resource use (including length of stay, imaging, lab tests, visits and drugs) per SOI category and calculates total hospitalisation costs. A second analysis examines the relationship between the type and location of the index stroke, medical risk factors, patient characteristics, comorbidities and in-hospital complications on the one hand, and the funding level received by the hospital on the other hand. This dataset included 2513 hospitalisations reporting on 35,195 secondary diagnosis codes, all medically coded with the International Classification of Diseases (ICD-9). Total costs per admission increased by SOI (€3710-€16,735), with severe patients costing proportionally more in bed days (86%), and milder patients costing more in medical imaging (24%). In all resource categories (bed days, medications, visits and imaging and laboratory tests), the absolute utilisation rate was higher among severe patients, but also showed more variability. SOI 1-2 was associated with vague, non-specific stroke-related ICD-9 codes as primary diagnosis (71-81% of hospitalisations). 24% of hospitalisations had, in addition to the primary diagnosis, other stroke-related codes as secondary diagnoses. Presence of lung infections, intracranial bleeding, severe kidney disease, and do-not-resuscitate status were each associated with extreme SOI (p < 0.0001). APR-DRG with SOI subclassification is a useful funding model as it clusters stroke patients in homogeneous groups in terms of resource use. The data on medical care utilisation can be used with unit costs from other countries with similar healthcare set-ups to 1) assess stroke-related hospital funding versus actual costs; 2) inform economic models on stroke prevention and treatment. The data on diagnosis codes can be used to 3) understand which factors influence hospital funding; 4) raise awareness about medical coding practices.

  11. Interframe vector wavelet coding technique

    NASA Astrophysics Data System (ADS)

    Wus, John P.; Li, Weiping

    1997-01-01

    Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC coding method in conjunction with the FSVQ system and lattice VQ, a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ system where the current state is determined by the previous channel symbol only is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings are done in this tree-like structure from the lower subbands to the higher subbands in order to exploit the nature of subband analysis in terms of the parent-child relationship. Class A and Class B video sequences from the MPEG-4 testing evaluations are used in the evaluation of this coding method.

  12. Novel approach to multispectral image compression on the Internet

    NASA Astrophysics Data System (ADS)

    Zhu, Yanqiu; Jin, Jesse S.

    2000-10-01

    Still image coding techniques such as JPEG have traditionally been applied to individual image planes, and coding fidelity is typically used to measure the performance of such intra-plane coding methods. In many imaging applications it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes for further compression of the spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, extracts correlation among planes based on the human visual system and achieves a compact data representation and a high degree of compression.
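
    The paper's HVS-guided inter-plane transformation is not specified here; as a commonplace stand-in for the general idea of exploiting correlation among colour planes before per-plane JPEG-style coding, the sketch below decorrelates RGB into luma/chroma and subsamples the chroma planes.

        # Illustration only: decorrelating colour planes (RGB -> YCbCr) and
        # subsampling chroma before per-plane coding; the paper's HVS-weighted
        # inter-plane transformation is a different, unspecified mapping.
        import numpy as np

        def rgb_to_ycbcr(rgb):
            m = np.array([[ 0.299,     0.587,     0.114],
                          [-0.168736, -0.331264,  0.5],
                          [ 0.5,      -0.418688, -0.081312]])
            ycbcr = rgb.astype(np.float64) @ m.T
            ycbcr[..., 1:] += 128.0                      # offset chroma to mid-range
            return ycbcr

        def subsample_chroma(ycbcr):
            # 4:2:0-style reduction exploits the eye's lower acuity for colour.
            return ycbcr[..., 0], ycbcr[::2, ::2, 1], ycbcr[::2, ::2, 2]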

  13. Optical noise-free image encryption based on quick response code and high dimension chaotic system in gyrator transform domain

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Xu, Minjie; Tian, Ailing

    2017-04-01

    A novel optical image encryption scheme is proposed based on a quick response code and a high-dimension chaotic system, where only the intensity distribution of the encoded information is recorded as ciphertext. Initially, the quick response code is generated from the plain image and placed in the input plane of the double random phase encoding architecture. Then, the code is encrypted to a ciphertext with a noise-like distribution by using two cascaded gyrator transforms. In the process of encryption, the parameters such as rotation angles and random phase masks are generated as interim variables and functions based on the Chen system. A new phase retrieval algorithm is designed to reconstruct the initial quick response code in the process of decryption, in which a priori information such as the three position detection patterns is used as the support constraint. The original image can be obtained without any energy loss by scanning the decrypted code with mobile devices. The ciphertext image is a real-valued function, which is more convenient to store and transmit. Meanwhile, the security of the proposed scheme is enhanced greatly due to the high sensitivity of the initial values of the Chen system. Extensive cryptanalysis and simulation have been performed to demonstrate the feasibility and effectiveness of the proposed scheme.
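
    As a simplified stand-in for the described architecture, the sketch below implements classical double random phase encoding with Fourier transforms in place of the cascaded gyrator transforms and a logistic map in place of the Chen system; the phase-retrieval decryption with QR-pattern support constraints is not reproduced.

        # Simplified stand-in: classical double random phase encoding with Fourier
        # transforms replacing the cascaded gyrator transforms and a logistic map
        # replacing the Chen system. Only the forward (encryption) path is shown;
        # the phase-retrieval decryption with QR support constraints is omitted.
        import numpy as np

        def chaotic_phase(shape, x0, mu=3.99):
            vals = np.empty(int(np.prod(shape)))
            x = x0
            for i in range(vals.size):                   # chaotic sequence acting as the key
                x = mu * x * (1 - x)
                vals[i] = x
            return np.exp(2j * np.pi * vals.reshape(shape))

        def drpe_encrypt(qr_image, key1=0.3, key2=0.7):
            m1 = chaotic_phase(qr_image.shape, key1)
            m2 = chaotic_phase(qr_image.shape, key2)
            field = np.fft.fft2(qr_image * m1)           # first mask, first transform
            cipher = np.fft.fft2(field * m2)             # second mask, second transform
            return np.abs(cipher)                        # only the intensity is recorded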

  14. [Object Separation from Medical X-Ray Images Based on ICA].

    PubMed

    Li, Yan; Yu, Chun-yu; Miao, Ya-jian; Fei, Bin; Zhuang, Feng-yun

    2015-03-01

    X-ray medical images can reveal diseased tissue and have important reference value for medical diagnosis. To address the problems of noise, poor grey-level discrimination and overlapping (aliased) organs in traditional X-ray images, this paper proposes a method that combines multi-spectrum X-ray imaging with an independent component analysis (ICA) algorithm to separate the target object. First, image de-noising based on independent component analysis and sparse code shrinkage ensures the accuracy of target extraction. Then, according to the main proportion of each organ in the images, an aliasing thickness matrix is estimated for each pixel. Finally, ICA obtains a convergence matrix to reconstruct the target object using blind separation theory. It was found that when the number of iterations exceeds 40, the target objects separate successfully according to a subjective evaluation standard, and when the scale amplitude lies in the [25, 45] interval, the target images have high contrast and little distortion. The three-dimensional plot of peak signal-to-noise ratio (PSNR) shows that the number of iterations and the amplitude both have a strong influence on image quality. The contrast and edge information of the experimental images are best with 85 iterations and an amplitude of 35 in the ICA algorithm.
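
    A minimal sketch of the blind-separation step, assuming scikit-learn's FastICA on linearly mixed frames and omitting the denoising and thickness-matrix stages, is given below.

        # Minimal sketch of the blind-separation step with scikit-learn's FastICA;
        # the denoising and thickness-matrix stages of the paper are omitted.
        import numpy as np
        from sklearn.decomposition import FastICA

        def separate_objects(mixtures):
            # mixtures: array of shape (n_mixtures, H, W), e.g. multi-spectrum X-ray frames
            n, h, w = mixtures.shape
            X = mixtures.reshape(n, -1).T                # pixels as samples, frames as mixtures
            ica = FastICA(n_components=n, max_iter=500, random_state=0)
            sources = ica.fit_transform(X)               # independent components
            return sources.T.reshape(n, h, w)            # candidate separated objects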

  15. A novel method of the image processing on irregular triangular meshes

    NASA Astrophysics Data System (ADS)

    Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta

    2018-04-01

    The paper describes a novel method of image processing based on irregular triangular meshes. The triangular mesh adapts to the image content, and least mean square linear approximation is proposed for the basic interpolation within each triangle. Triangular numbers are used to simplify the use of local (barycentric) coordinates for further analysis: a triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. with "for" or "while" loops for access to the pixels. Moreover, the proposed representation allows a discrete cosine transform of simple "rectangular" symmetric form without additional pixel reordering (as required for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. Results of applying the method are presented. The advantage of the proposed method is the combination of the flexibility of image-adaptive irregular meshes with simple pixel indexing in local triangular coordinates and the use of common forms of discrete transforms on triangular meshes. The method is proposed for image compression, pattern recognition, image quality improvement, and image search and indexing. It may also be used as part of video coding (intra-frame or inter-frame coding, motion detection).

  16. A new code for the design and analysis of the heliostat field layout for power tower system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Xiudong; Lu, Zhenwu; Yu, Weixing

    2010-04-15

    A new code for the design and analysis of the heliostat field layout for a power tower system is developed. In the new code, a new method for the heliostat field layout is proposed based on the edge ray principle of nonimaging optics. The heliostat field boundary is constrained by the tower height, the receiver tilt angle and size, and the heliostat efficiency factor, which is the product of the annual cosine efficiency and the annual atmospheric transmission efficiency. With the new method, the heliostats can be placed with higher efficiency, and the design and optimization respond faster. A new module for the analysis of aspherical heliostats is included in the code. A new toroidal heliostat field is designed and analyzed using the code. Compared with a spherical heliostat, the solar image radius of the field is reduced by about 30% by using the toroidal heliostat if the mirror shape and the tracking are ideal. In addition, to maximize the utilization of land, suitable crops can be planted under the heliostats. To evaluate the feasibility of crop growth, a method for calculating the annual distribution of sunshine duration on the land surface is developed as well. (author)

  17. Vector quantizer based on brightness maps for image compression with the polynomial transform

    NASA Astrophysics Data System (ADS)

    Escalante-Ramirez, Boris; Moreno-Gutierrez, Mauricio; Silvan-Cardenas, Jose L.

    2002-11-01

    We present a vector quantization scheme acting on brightness fields based on distance/distortion criteria that correspond to psycho-visual aspects. These criteria quantify sensorial distortion between vectors that represent either portions of a digital image or, alternatively, coefficients of a transform-based coding system. In the latter case, we use an image representation model, namely the Hermite transform, that is based on some of the main perceptual characteristics of the human vision system (HVS) and its response to light stimuli. Energy coding in the brightness domain, determination of local structure, code-book training and local orientation analysis are all obtained by means of the Hermite transform. The paper is organized in six sections. The first briefly highlights the importance of newer and better compression algorithms and explains the most relevant characteristics of the HVS, including the advantages and disadvantages related to the behaviour of our vision in response to ocular stimuli. The second section gives a quick review of vector quantization techniques, focusing on their performance in image processing, as a preview of the image vector quantizer actually constructed in the fifth section. The third section summarizes the most important data gathered on brightness models and the construction of so-called brightness maps (a quantification of human perception of the reflectance of visible objects) in a two-dimensional model. The Hermite transform, a special case of polynomial transforms, and its usefulness are treated in an applicable discrete form in the fourth section. As we have learned from previous work, the Hermite transform is a useful and practical tool to efficiently code the energy within an image block and to decide which kind of quantization (scalar or vector) should be applied to it; it is also a unique tool to structurally classify the image block within a given lattice, and this operation is intended to be one of the main contributions of this work. The fifth section fuses the proposals derived from the study of these three main topics in order to propose an image compression model that uses vector quantizers in the brightness-transformed domain to determine the most important structures by finding the energy distribution inside the Hermite domain. The sixth and last section shows results obtained while testing the coding-decoding model; the criteria used to evaluate the compression performance were the compression ratio, SNR and psycho-visual quality. Some conclusions derived from the research and possible unexplored paths are also given in this section.

  18. Automatic single-image-based rain streaks removal via image decomposition.

    PubMed

    Kang, Li-Wei; Lin, Chia-Wen; Fu, Yu-Hsiang

    2012-04-01

    Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.
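
    A minimal sketch of the decomposition stage is given below, assuming OpenCV for the bilateral filter and scikit-learn for dictionary learning; the classification of atoms into rain and non-rain components is only indicated by a comment.

        # Sketch of the decomposition stage: bilateral filtering splits the image
        # into low- and high-frequency parts, and a dictionary is learned on HF
        # patches. Classifying atoms into rain / non-rain (e.g. by dominant
        # gradient orientation) and reconstructing without rain atoms is only
        # indicated here.
        import numpy as np
        import cv2
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import extract_patches_2d

        def split_and_learn(gray_u8, patch=16, n_atoms=256):
            lf = cv2.bilateralFilter(gray_u8, d=9, sigmaColor=75, sigmaSpace=75)
            hf = gray_u8.astype(np.float32) - lf.astype(np.float32)
            P = extract_patches_2d(hf, (patch, patch), max_patches=5000)
            X = P.reshape(len(P), -1)
            dico = MiniBatchDictionaryLearning(n_components=n_atoms).fit(X)
            atoms = dico.components_.reshape(n_atoms, patch, patch)
            # Rain atoms tend to share one dominant streak orientation; rebuilding
            # the HF part without them yields the de-rained image.
            return lf, hf, atoms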

  19. Efficient burst image compression using H.265/HEVC

    NASA Astrophysics Data System (ADS)

    Roodaki-Lavasani, Hoda; Lainema, Jani

    2014-02-01

    New imaging use cases are emerging as more powerful camera hardware is entering consumer markets. One family of such use cases is based on capturing multiple pictures instead of just one when taking a photograph. That kind of a camera operation allows e.g. selecting the most successful shot from a sequence of images, showing what happened right before or after the shot was taken or combining the shots by computational means to improve either visible characteristics of the picture (such as dynamic range or focus) or the artistic aspects of the photo (e.g. by superimposing pictures on top of each other). Considering that photographic images are typically of high resolution and quality and the fact that these kind of image bursts can consist of at least tens of individual pictures, an efficient compression algorithm is desired. However, traditional video coding approaches fail to provide the random access properties these use cases require to achieve near-instantaneous access to the pictures in the coded sequence. That feature is critical to allow users to browse the pictures in an arbitrary order or imaging algorithms to extract desired pictures from the sequence quickly. This paper proposes coding structures that provide such random access properties while achieving coding efficiency superior to existing image coders. The results indicate that using HEVC video codec with a single reference picture fixed for the whole sequence can achieve nearly as good compression as traditional IPPP coding structures. It is also shown that the selection of the reference frame can further improve the coding efficiency.

  20. Nomenclature for congenital and paediatric cardiac disease: historical perspectives and The International Pediatric and Congenital Cardiac Code.

    PubMed

    Franklin, Rodney C G; Jacobs, Jeffrey Phillip; Krogmann, Otto N; Béland, Marie J; Aiello, Vera D; Colan, Steven D; Elliott, Martin J; William Gaynor, J; Kurosawa, Hiromi; Maruszewski, Bohdan; Stellin, Giovanni; Tchervenkov, Christo I; Walters Iii, Henry L; Weinberg, Paul; Anderson, Robert H

    2008-12-01

    Clinicians working in the field of congenital and paediatric cardiology have long felt the need for a common diagnostic and therapeutic nomenclature and coding system with which to classify patients of all ages with congenital and acquired cardiac disease. A cohesive and comprehensive system of nomenclature, suitable for setting a global standard for multicentric analysis of outcomes and stratification of risk, has only recently emerged, namely, The International Paediatric and Congenital Cardiac Code. This review, will give an historical perspective on the development of systems of nomenclature in general, and specifically with respect to the diagnosis and treatment of patients with paediatric and congenital cardiac disease. Finally, current and future efforts to merge such systems into the paperless environment of the electronic health or patient record on a global scale are briefly explored. On October 6, 2000, The International Nomenclature Committee for Pediatric and Congenital Heart Disease was established. In January, 2005, the International Nomenclature Committee was constituted in Canada as The International Society for Nomenclature of Paediatric and Congenital Heart Disease. This International Society now has three working groups. The Nomenclature Working Group developed The International Paediatric and Congenital Cardiac Code and will continue to maintain, expand, update, and preserve this International Code. It will also provide ready access to the International Code for the global paediatric and congenital cardiology and cardiac surgery communities, related disciplines, the healthcare industry, and governmental agencies, both electronically and in published form. The Definitions Working Group will write definitions for the terms in the International Paediatric and Congenital Cardiac Code, building on the previously published definitions from the Nomenclature Working Group. The Archiving Working Group, also known as The Congenital Heart Archiving Research Team, will link images and videos to the International Paediatric and Congenital Cardiac Code. The images and videos will be acquired from cardiac morphologic specimens and imaging modalities such as echocardiography, angiography, computerized axial tomography and magnetic resonance imaging, as well as intraoperative images and videos. Efforts are ongoing to expand the usage of The International Paediatric and Congenital Cardiac Code to other areas of global healthcare. Collaborative efforts are underway involving the leadership of The International Nomenclature Committee for Pediatric and Congenital Heart Disease and the representatives of the steering group responsible for the creation of the 11th revision of the International Classification of Diseases, administered by the World Health Organisation. Similar collaborative efforts are underway involving the leadership of The International Nomenclature Committee for Pediatric and Congenital Heart Disease and the International Health Terminology Standards Development Organisation, who are the owners of the Systematized Nomenclature of Medicine or "SNOMED". The International Paediatric and Congenital Cardiac Code was created by specialists in the field to name and classify paediatric and congenital cardiac disease and its treatment. It is a comprehensive code that can be freely downloaded from the internet (http://www.IPCCC.net) and is already in use worldwide, particularly for international comparisons of outcomes. 
The goal of this effort is to create strategies for stratification of risk and to improve healthcare for the individual patient. The collaboration with the World Health Organization, the International Health Terminology Standards Development Organisation, and the healthcare industry will lead to further enhancement of the International Code and to its more universal use.

  1. High-resolution imaging gamma-ray spectroscopy with externally segmented germanium detectors

    NASA Technical Reports Server (NTRS)

    Callas, J. L.; Mahoney, W. A.; Varnell, L. S.; Wheaton, W. A.

    1993-01-01

    Externally segmented germanium detectors promise a breakthrough in gamma-ray imaging capabilities while retaining the superb energy resolution of germanium spectrometers. An angular resolution of 0.2 deg becomes practical by combining position-sensitive germanium detectors having a segment thickness of a few millimeters with a one-dimensional coded aperture located about a meter from the detectors. Correspondingly higher angular resolutions are possible with larger separations between the detectors and the coded aperture. Two-dimensional images can be obtained by rotating the instrument. Although the basic concept is similar to optical or X-ray coded-aperture imaging techniques, several complicating effects arise because of the penetrating nature of gamma rays. The complications include partial transmission through the coded aperture elements, Compton scattering in the germanium detectors, and high background count rates. Extensive electron-photon Monte Carlo modeling of a realistic detector/coded-aperture/collimator system has been performed. Results show that these complicating effects can be characterized and accounted for with no significant loss in instrument sensitivity.
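
    The correlation decoding behind coded-aperture imaging can be illustrated with a one-dimensional toy model; the sketch below ignores the complicating effects listed above (partial transmission, Compton scattering, background) and uses an arbitrary pseudo-random mask.

        # One-dimensional toy of coded-aperture imaging: the detector counts are
        # the source profile circularly convolved with a pseudo-random mask, and
        # correlation with a balanced decoding array recovers the source. Mask
        # penetration, Compton scatter and background are ignored.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 101
        mask = rng.integers(0, 2, n).astype(float)       # open (1) / closed (0) elements
        decoder = 2 * mask - 1                           # balanced decoding array

        source = np.zeros(n)
        source[30], source[70] = 5.0, 2.0                # two gamma-ray point sources

        counts = np.real(np.fft.ifft(np.fft.fft(source) * np.fft.fft(mask)))
        image = np.real(np.fft.ifft(np.fft.fft(counts) * np.conj(np.fft.fft(decoder))))
        # 'image' shows peaks near indices 30 and 70 with small residual sidelobes.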

  2. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.

  3. Coded excitation ultrasonic needle tracking: An in vivo study.

    PubMed

    Xia, Wenfeng; Ginsberg, Yuval; West, Simeon J; Nikitichev, Daniil I; Ourselin, Sebastien; David, Anna L; Desjardins, Adrien E

    2016-07-01

    Accurate and efficient guidance of medical devices to procedural targets lies at the heart of interventional procedures. Ultrasound imaging is commonly used for device guidance, but determining the location of the device tip can be challenging. Various methods have been proposed to track medical devices during ultrasound-guided procedures, but widespread clinical adoption has remained elusive. With ultrasonic tracking, the location of a medical device is determined by ultrasonic communication between the ultrasound imaging probe and a transducer integrated into the medical device. The signal-to-noise ratio (SNR) of the transducer data is an important determinant of the depth in tissue at which tracking can be performed. In this paper, the authors present a new generation of ultrasonic tracking in which coded excitation is used to improve the SNR without spatial averaging. A fiber optic hydrophone was integrated into the cannula of a 20 gauge insertion needle. This transducer received transmissions from the ultrasound imaging probe, and the data were processed to obtain a tracking image of the needle tip. Excitation using Barker or Golay codes was performed to improve the SNR, and conventional bipolar excitation was performed for comparison. The performance of the coded excitation ultrasonic tracking system was evaluated in an in vivo ovine model with insertions to the brachial plexus and the uterine cavity. Coded excitation significantly increased the SNRs of the tracking images, as compared with bipolar excitation. During an insertion to the brachial plexus, the SNR was increased by factors of 3.5 for Barker coding and 7.1 for Golay coding. During insertions into the uterine cavity, these factors ranged from 2.9 to 4.2 for Barker coding and 5.4 to 8.5 for Golay coding. The maximum SNR was 670, which was obtained with Golay coding during needle withdrawal from the brachial plexus. Range sidelobe artifacts were observed in tracking images obtained with Barker coded excitation, and they were visually absent with Golay coded excitation. The spatial tracking accuracy was unaffected by coded excitation. Coded excitation is a viable method for improving the SNR in ultrasonic tracking without compromising spatial accuracy. This method provided SNR increases that are consistent with theoretical expectations, even in the presence of physiological motion. With the ultrasonic tracking system in this study, the SNR increases will have direct clinical implications in a broad range of interventional procedures by improving visibility of medical devices at large depths.
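
    The sidelobe behaviour reported above follows from a basic property of Golay complementary pairs: their autocorrelations sum to a delta function. The short sketch below illustrates this property in isolation; it is not the tracking system's processing chain.

        # Property behind the reported sidelobe suppression: the autocorrelations
        # of a Golay complementary pair sum to a delta, unlike a single Barker code.
        import numpy as np

        def golay_pair(n_doublings=3):
            a, b = np.array([1.0]), np.array([1.0])
            for _ in range(n_doublings):                 # standard recursive construction
                a, b = np.concatenate([a, b]), np.concatenate([a, -b])
            return a, b

        a, b = golay_pair()                              # length-8 complementary pair
        combined = np.correlate(a, a, mode="full") + np.correlate(b, b, mode="full")
        # Each autocorrelation has sidelobes, but 'combined' is zero everywhere
        # except the central peak of value 2 * len(a).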

  4. A Regularization Approach to Blind Deblurring and Denoising of QR Barcodes.

    PubMed

    van Gennip, Yves; Athavale, Prashant; Gilles, Jérôme; Choksi, Rustum

    2015-09-01

    QR bar codes are prototypical images for which part of the image is a priori known (required patterns). Open source bar code readers, such as ZBar, are readily available. We exploit both these facts to provide and assess purely regularization-based methods for blind deblurring of QR bar codes in the presence of noise.

  5. Collaborative Research and Development (CR&D) III Task Order 0090: Image Processing Framework: From Acquisition and Analysis to Archival Storage

    DTIC Science & Technology

    2013-05-01

    contract or a PhD dissertation typically are a "proof-of-concept" code base that can only read a single set of inputs and are not designed ... AFRL-RX-WP-TR-2013-0210, Collaborative Research and Development (CR&D) III, Task Order 0090: Image Processing Framework: From Acquisition and Analysis to Archival Storage ... public release; distribution unlimited. See additional restrictions described on inside pages. STINFO COPY, Air Force Research Laboratory

  6. Human Motion Capture Data Tailored Transform Coding.

    PubMed

    Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He

    2015-07-01

    Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish themselves from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
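
    A minimal sketch of the data-dependent transform idea, using an SVD of a single mocap clip as the orthogonal basis and uniform quantization of the coefficients (entropy coding omitted), is given below; the matrix layout and parameter values are assumptions.

        # Sketch of a data-dependent orthogonal transform for one mocap clip: an
        # SVD supplies the basis, coefficients are uniformly quantized, and the
        # clip is reconstructed. Entropy coding of coefficients and basis is
        # omitted; the matrix layout (frames x joint coordinates) is an assumption.
        import numpy as np

        def code_clip(clip, k=10, step=0.05):
            mean = clip.mean(axis=0)
            U, s, Vt = np.linalg.svd(clip - mean, full_matrices=False)
            coeffs = U[:, :k] * s[:k]                    # transform coefficients
            q = np.round(coeffs / step)                  # uniform quantization
            recon = (q * step) @ Vt[:k] + mean           # decoder side
            return q, Vt[:k], mean, recon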

  7. Diderot: a Domain-Specific Language for Portable Parallel Scientific Visualization and Image Analysis.

    PubMed

    Kindlmann, Gordon; Chiw, Charisee; Seltzer, Nicholas; Samuels, Lamont; Reppy, John

    2016-01-01

    Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.

  8. Schlieren image velocimetry measurements in a rocket engine exhaust plume

    NASA Astrophysics Data System (ADS)

    Morales, Rudy; Peguero, Julio; Hargather, Michael

    2017-11-01

    Schlieren image velocimetry (SIV) measures velocity fields by tracking the motion of naturally-occurring turbulent flow features in a compressible flow. Here the technique is applied to measuring the exhaust velocity profile of a liquid rocket engine. The SIV measurements presented include discussion of visibility of structures, image pre-processing for structure visibility, and ability to process resulting images using commercial particle image velocimetry (PIV) codes. The small-scale liquid bipropellant rocket engine operates on nitrous oxide and ethanol as propellants. Predictions of the exhaust velocity are obtained through NASA CEA calculations and simple compressible flow relationships, which are compared against the measured SIV profiles. Analysis of shear layer turbulence along the exhaust plume edge is also presented.

  9. Improvement of single wavelength-based Thai jasmine rice identification with elliptic Fourier descriptor and neural network analysis

    NASA Astrophysics Data System (ADS)

    Suwansukho, Kajpanya; Sumriddetchkajorn, Sarun; Buranasiri, Prathan

    2012-11-01

    Instead of considering only the amount of fluorescent signal spatially distributed on the image of milled rice grains, this paper shows how our single-wavelength spectral-imaging-based Thai jasmine (KDML105) rice identification system can be improved by analyzing the shape and size of the image of each milled rice variety, especially during the image thresholding operation. The image of each milled rice variety is expressed as chain codes and elliptic Fourier coefficients. After that, a feed-forward back-propagation neural network model is applied, resulting in an improved average FAR of 11.0% and FRR of 19.0% in identifying KDML105 milled rice from the four unwanted milled rice varieties.
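
    As a rough illustration of the shape-description step, the sketch below computes elliptic Fourier coefficients from a closed contour (such as the outline of a thresholded grain image); the chain-code extraction and neural-network classifier are not shown.

        # Sketch: elliptic Fourier coefficients of a closed contour (x_i, y_i),
        # e.g. the outline of a thresholded rice grain; the chain-code extraction
        # and the neural-network classifier are not shown. Assumes no duplicate
        # contour points (non-zero segment lengths).
        import numpy as np

        def elliptic_fourier_coeffs(contour, n_harmonics=10):
            d = np.diff(contour, axis=0, append=contour[:1])    # close the contour
            dt = np.hypot(d[:, 0], d[:, 1])
            t = np.concatenate([[0.0], np.cumsum(dt)])
            T = t[-1]
            coeffs = []
            for n in range(1, n_harmonics + 1):
                c, s = np.cos(2 * np.pi * n * t / T), np.sin(2 * np.pi * n * t / T)
                k = T / (2 * np.pi**2 * n**2)
                coeffs.append((k * np.sum(d[:, 0] / dt * np.diff(c)),
                               k * np.sum(d[:, 0] / dt * np.diff(s)),
                               k * np.sum(d[:, 1] / dt * np.diff(c)),
                               k * np.sum(d[:, 1] / dt * np.diff(s))))
            return np.array(coeffs)      # (a_n, b_n, c_n, d_n) features per harmonic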

  10. Responsive Image Inline Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Ian

    2016-10-20

    RIIF is a contributed module for the Drupal PHP web application framework (drupal.org). It is written as a helper or sub-module of other code which is part of version 8 "core Drupal" and is intended to extend its functionality. It allows Drupal to resize images uploaded through the user-facing text editor within the Drupal GUI (a.k.a. "inline images") for various browser widths. This resizing is already done for other images through the parent "Responsive Image" core module. This code extends that functionality to inline images.

  11. Quality Scalability Aware Watermarking for Visual Content.

    PubMed

    Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios without affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by proposing a new wavelet domain blind watermarking algorithm guided by a quantization based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. The proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.

  12. #LancerHealth: Using Twitter and Instagram as a tool in a campus wide health promotion initiative.

    PubMed

    Santarossa, Sara; Woodruff, Sarah J

    2018-02-05

    The present study aimed to explore using popular technology that people already have and use as a health promotion tool, in a campus-wide social media health promotion initiative entitled #LancerHealth. During a two-week period the university community was asked to share photos on Twitter and Instagram answering "What does being healthy on campus look like to you?", while tagging the image with #LancerHealth. All publicly tagged media were collected using the Netlytic software and analysed. Text analysis (N=234 records, Twitter; N=141 records, Instagram) revealed that the majority of the conversation was positive and focused on health and the university. Social network analysis, based on five network properties, showed a small network with little interaction. Lastly, photo coding analysis (N=71 unique images) indicated that the majority of the shared images were of physical activity (52%) and on campus (80%). Further research into this area is warranted.

  13. New procedures to evaluate visually lossless compression for display systems

    NASA Astrophysics Data System (ADS)

    Stolitzka, Dale F.; Schelkens, Peter; Bruylants, Tim

    2017-09-01

    Visually lossless image coding in isochronous display streaming or plesiochronous networks reduces link complexity and power consumption and increases available link bandwidth. A new set of codecs developed within the last four years promise a new level of coding quality, but require new techniques that are sufficiently sensitive to the small artifacts or color variations induced by this new breed of codecs. This paper begins with a summary of the new ISO/IEC 29170-2, a procedure for evaluation of lossless coding and reports the new work by JPEG to extend the procedure in two important ways, for HDR content and for evaluating the differences between still images, panning images and image sequences. ISO/IEC 29170-2 relies on processing test images through a well-defined process chain for subjective, forced-choice psychophysical experiments. The procedure sets an acceptable quality level equal to one just noticeable difference. Traditional image and video coding evaluation techniques, such as, those used for television evaluation have not proven sufficiently sensitive to the small artifacts that may be induced by this breed of codecs. In 2015, JPEG received new requirements to expand evaluation of visually lossless coding for high dynamic range images, slowly moving images, i.e., panning, and image sequences. These requirements are the basis for new amendments of the ISO/IEC 29170-2 procedures described in this paper. These amendments promise to be highly useful for the new content in television and cinema mezzanine networks. The amendments passed the final ballot in April 2017 and are on track to be published in 2018.

  14. Optical head tracking for functional magnetic resonance imaging using structured light.

    PubMed

    Zaremba, Andrei A; MacFarlane, Duncan L; Tseng, Wei-Che; Stark, Andrew J; Briggs, Richard W; Gopinath, Kaundinya S; Cheshkov, Sergey; White, Keith D

    2008-07-01

    An accurate motion-tracking technique is needed to compensate for subject motion during functional magnetic resonance imaging (fMRI) procedures. Here, a novel approach to motion metrology is discussed. A structured light pattern specifically coded for digital signal processing is positioned onto a fiduciary of the patient. As the patient undergoes spatial transformations in 6 DoF (degrees of freedom), a high-resolution CCD camera captures successive images for analysis on a computing platform. A high-speed image processing algorithm is used to calculate spatial transformations in a time frame commensurate with patient movements (10-100 ms) and with a precision of at least 0.5 µm for translations and 0.1 deg for rotations.

  15. Low bit rate coding of Earth science images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1993-01-01

    In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.

  16. Designing an efficient LT-code with unequal error protection for image transmission

    NASA Astrophysics Data System (ADS)

    S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.

    2015-10-01

    The use of images from earth observation satellites is spread over different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by on-board imaging devices and must be transmitted to the Earth using a communication system. Even though a high resolution image can produce a better Quality of Service, it leads to transmitters with high bit rates, which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In a remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bit. A channel code can be used to reduce the effect of this failure. In 2002, the Luby Transform code (LT-code) was introduced and it was shown to be very efficient when the binary erasure channel model was used. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the algorithm for image compression recommended by CCSDS. In fact, to design an LT-code with unequal error protection, the bit stream produced by the algorithm recommended by CCSDS must be partitioned into M disjoint sets of bits. Using the weighted approach, the LT-code produces a different failure probability for each set of bits, p1, ..., pM, leading to a total probability of failure p, which is an average of p1, ..., pM. In general, the parameters of the LT-code with unequal error protection are chosen using a heuristic procedure. In this work, we analyze the problem of choosing the LT-code parameters to optimize two figures of merit: (a) the probability of achieving a minimum acceptable PSNR, and (b) the mean of the PSNR, given that the minimum acceptable PSNR has been achieved. Given the rate-distortion curve achieved by the CCSDS recommended algorithm, this work establishes a closed form of the mean of the PSNR (given that the minimum acceptable PSNR has been achieved) as a function of p1, ..., pM. The main contribution of this work is the study of a criterion to select the parameters p1, ..., pM to optimize the performance of image transmission.
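
    A minimal sketch of plain LT encoding with a robust soliton degree distribution is given below; unequal-error-protection variants such as the weighted approach bias the neighbour sampling toward the more important bit sets, which is not implemented here.

        # Sketch of plain LT encoding with a robust soliton degree distribution;
        # a weighted (unequal error protection) variant would bias the neighbour
        # sampling toward the more important bit sets instead of sampling uniformly.
        import numpy as np

        def robust_soliton(k, c=0.1, delta=0.5):
            # Ideal soliton plus the robust correction (assumes k reasonably large).
            s = c * np.log(k / delta) * np.sqrt(k)
            spike = int(round(k / s))
            rho = np.array([1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)])
            tau = np.zeros(k)
            tau[:spike - 1] = s / (k * np.arange(1, spike))
            tau[spike - 1] = s * np.log(s / delta) / k
            dist = rho + tau
            return dist / dist.sum()

        def lt_encode(symbols, n_out, seed=0):
            # symbols: 1-D integer array of source symbols (e.g. packed bytes)
            rng = np.random.default_rng(seed)
            k = len(symbols)
            dist = robust_soliton(k)
            encoded = []
            for _ in range(n_out):
                deg = rng.choice(np.arange(1, k + 1), p=dist)
                nbrs = rng.choice(k, size=deg, replace=False)
                encoded.append((nbrs, np.bitwise_xor.reduce(symbols[nbrs])))
            return encoded                               # (neighbour set, XOR value) pairs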

  17. Development of a Coded Aperture X-Ray Backscatter Imager for Explosive Device Detection

    NASA Astrophysics Data System (ADS)

    Faust, Anthony A.; Rothschild, Richard E.; Leblanc, Philippe; McFee, John Elton

    2009-02-01

    Defence R&D Canada has an active research and development program on detection of explosive devices using nuclear methods. One system under development is a coded aperture-based X-ray backscatter imaging detector designed to provide sufficient speed, contrast and spatial resolution to detect antipersonnel landmines and improvised explosive devices. The successful development of a hand-held imaging detector requires, among other things, a light-weight, ruggedized detector with low power requirements, supplying high spatial resolution. The University of California, San Diego-designed HEXIS detector provides a modern, large area, high-temperature CZT imaging surface, robustly packaged in a light-weight housing with sound mechanical properties. Based on the potential for the HEXIS detector to be incorporated as the detection element of a hand-held imaging detector, the authors initiated a collaborative effort to demonstrate the capability of a coded aperture-based X-ray backscatter imaging detector. This paper will discuss the landmine and IED detection problem and review the coded aperture technique. Results from initial proof-of-principle experiments will then be reported.

  18. Energy and Technology Review

    NASA Astrophysics Data System (ADS)

    Poggio, Andrew J.

    1988-10-01

    This issue of Energy and Technology Review contains: Neutron Penumbral Imaging of Laser-Fusion Targets--using our new penumbral-imaging diagnostic, we have obtained the first images that can be used to measure directly the deuterium-tritium burn region in laser-driven fusion targets; Computed Tomography for Nondestructive Evaluation--various computed tomography systems and computational techniques are used in nondestructive evaluation; Three-Dimensional Image Analysis for Studying Nuclear Chromatin Structure--we have developed an optic-electronic system for acquiring cross-sectional views of cell nuclei, and computer codes to analyze these images and reconstruct the three-dimensional structures they represent; Imaging in the Nuclear Test Program--advanced techniques produce images of unprecedented detail and resolution from Nevada Test Site data; and Computational X-Ray Holography--visible-light experiments and numerically simulated holograms test our ideas about an X-ray microscope for biological research.

  19. Regional tectonic analysis of Venus equatorial highlands and comparison with Earth-based Magellan radar images

    NASA Technical Reports Server (NTRS)

    Williams, David R.; Wetherill, George

    1993-01-01

    Research on regional tectonic analysis of Venus equatorial highlands and comparison with earth-based and Magellan radar images is presented. Over the past two years, the tectonic analysis of Venus performed centered on global properties of the planet, in order to understand fundamental aspects of the dynamics of the mantle and lithosphere of Venus. These include studies pertaining to the original constitutive and thermal character of the planet, as well as the evolution of Venus through time, and the present day tectonics. Parameterized convection models of the Earth and Venus were developed. The parameterized convection code was reformulated to model Venus with an initially hydrous mantle to determine how the cold-trap could affect the evolution of the planet.

  20. Single-wavelength based Thai jasmine rice identification with polynomial fitting function and neural network analysis

    NASA Astrophysics Data System (ADS)

    Suwansukho, Kajpanya; Sumriddetchkajorn, Sarun; Buranasiri, Prathan

    2013-06-01

    We previously showed that a combination of image thresholding, chain coding, elliptic Fourier descriptors, and artificial neural network analysis provided a low false acceptance rate (FAR) and false rejection rate (FRR) of 11.0% and 19.0%, respectively, in identifying Thai jasmine rice from three unwanted rice varieties. In this work, we show that a polynomial function fitted to the determined chain code, together with neural network analysis, is sufficient to obtain a very low FAR of < 3.0% and a very low FRR of 0.3% for the separation of Thai jasmine rice from the Chainat 1 (CNT1), Prathumtani 1 (PTT1), and Hom-Pitsanulok (HPSL) rice varieties. With this proposed approach, the analysis time is reduced from 4,250 seconds to 2 seconds, implying extremely high potential for practical deployment.

  1. On a Mathematical Theory of Coded Exposure

    DTIC Science & Technology

    2014-08-01

    formulae that give the MSE and SNR of the final crisp image 1. Assumes the Shannon-Whittaker framework that i) requires band-limited (with a fre... represents the ideal crisp image, i.e., the image that one would observe if there were no noise whatsoever and no motion, with a perfect optical system ... discrete. In addition, the image obtained by a coded exposure camera must undergo a deconvolution to obtain the final crisp image. Note that the

  2. Extending the imaging volume for biometric iris recognition.

    PubMed

    Narayanswamy, Ramkumar; Johnson, Gregory E; Silveira, Paulo E X; Wach, Hans B

    2005-02-10

    The use of the human iris as a biometric has recently attracted significant interest in the area of security applications. The need to capture an iris without active user cooperation places demands on the optical system. Unlike a traditional optical design, in which a large imaging volume is traded off for diminished imaging resolution and capacity for collecting light, Wavefront Coded imaging is a computational imaging technology capable of expanding the imaging volume while maintaining an accurate and robust iris identification capability. We apply Wavefront Coded imaging to extend the imaging volume of the iris recognition application.

  3. Wavelet-based compression of pathological images for telemedicine applications

    NASA Astrophysics Data System (ADS)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.
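
    As a generic illustration of wavelet-based lossy coding (not the codec evaluated in this study), the sketch below performs a multilevel DWT with PyWavelets, discards the smallest coefficients, reconstructs the image and reports the PSNR.

        # Generic illustration of wavelet-based lossy coding (not the evaluated
        # codec), assuming PyWavelets: multilevel DWT, discarding of the smallest
        # coefficients, reconstruction and PSNR for an 8-bit grayscale image.
        import numpy as np
        import pywt

        def wavelet_compress(img, wavelet="bior4.4", levels=4, keep=0.05):
            coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=levels)
            arr, slices = pywt.coeffs_to_array(coeffs)
            thresh = np.quantile(np.abs(arr), 1.0 - keep)   # keep the largest 5 %
            arr[np.abs(arr) < thresh] = 0.0
            coeffs_q = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
            recon = pywt.waverec2(coeffs_q, wavelet)[:img.shape[0], :img.shape[1]]
            mse = np.mean((img.astype(np.float64) - recon) ** 2)
            return recon, 10 * np.log10(255.0 ** 2 / mse)   # reconstruction and PSNR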

  4. Numerical Electromagnetic Code (NEC)-Basic Scattering Code. Part 2. Code Manual

    DTIC Science & Technology

    1979-09-01

    imaging of source axes for a magnetic source: A_x = x VSOURC(1,1) + y VSOURC(1,2) + z VSOURC(1,3) = x VIMAG(1,1) + y VIMAG(1,2) + z VIMAG(1,3); An = unit... VNC: x, y, and z components of the end cap unit normal. OUTPUT VARIABLE VIMAG: x, y, and z components defining the source image coordinate system axes

  5. Content and Effect of Children's Commercials.

    ERIC Educational Resources Information Center

    Solomon, Henry; And Others

    The 3 studies described in this paper focused on the image of the child in television advertising directed toward children between the ages of 2 and 11. Using an analysis code containing 16 operationally-defined categories, the first study analyzed the behaviors of children depicted in 37 commercials for toys, food, and clothing from Saturday…

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Chao

    Sparx, a new environment for Cryo-EM image processing (single particle reconstruction, principal component analysis). Hardware requirements: PC, Mac, supercomputer, mainframe, workstation (multiplatform). Software requirements: Unix operating system; C++ compiler; distributed as source code, object library, executable modules, compilation instructions, and sample problem input data. Location/transmission: http://sparx-em.org; user manual and paper: http://sparx-em.org

  7. Extracting the Data From the LCM vk4 Formatted Output File

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, James G.

    These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: the vk4 file produced by the Keyence VK software; custom analysis, since there is no off-the-shelf way to read the file; reading the binary data in a vk4 file; various offsets in decimal lines; finding the height image data directly in MATLAB; binary output at the beginning of the height image data; color image information; color image binary data; color image decimal and binary data; MATLAB code to read a vk4 file (choose a file, read the file, compute offsets, read the optical image and laser optical image, read and compute the laser intensity image, read the height image, timing, display the height image, display the laser intensity image, display the RGB laser optical images, display the RGB optical images, display beginning data and save images to the workspace, gamma correction subroutine); reading intensity from the vk4 file; linear behaviour in the low range; linear behaviour in the high range; gamma correction for vk4 files; computing the gamma intensity correction; observations.

  8. A novel class sensitive hashing technique for large-scale content-based remote sensing image retrieval

    NASA Astrophysics Data System (ADS)

    Reato, Thomas; Demir, Begüm; Bruzzone, Lorenzo

    2017-10-01

    This paper presents a novel class sensitive hashing technique in the framework of large-scale content-based remote sensing (RS) image retrieval. The proposed technique aims at representing each image with multi-hash codes, each of which corresponds to a primitive (i.e., land cover class) present in the image. To this end, the proposed method consists of a three-steps algorithm. The first step is devoted to characterize each image by primitive class descriptors. These descriptors are obtained through a supervised approach, which initially extracts the image regions and their descriptors that are then associated with primitives present in the images. This step requires a set of annotated training regions to define primitive classes. A correspondence between the regions of an image and the primitive classes is built based on the probability of each primitive class to be present at each region. All the regions belonging to the specific primitive class with a probability higher than a given threshold are highly representative of that class. Thus, the average value of the descriptors of these regions is used to characterize that primitive. In the second step, the descriptors of primitive classes are transformed into multi-hash codes to represent each image. This is achieved by adapting the kernel-based supervised locality sensitive hashing method to multi-code hashing problems. The first two steps of the proposed technique, unlike the standard hashing methods, allow one to represent each image by a set of primitive class sensitive descriptors and their hash codes. Then, in the last step, the images in the archive that are very similar to a query image are retrieved based on a multi-hash-code-matching scheme. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed technique in terms of retrieval accuracy when compared to the standard hashing methods.
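
    A minimal sketch of the final matching step, assuming each image is represented by a list of binary primitive-class codes and scoring similarity by Hamming-distance matches within a radius, is given below; the supervised hashing itself is not reproduced.

        # Sketch of the multi-hash-code matching step: each image carries one
        # binary code per primitive class, and similarity counts query codes that
        # find an archive code within a Hamming radius. The supervised hashing
        # itself is not reproduced; names and thresholds are assumptions.
        import numpy as np

        def multi_code_similarity(query_codes, archive_codes, radius=4):
            hits = 0
            for q in query_codes:
                if any(int(np.count_nonzero(q != a)) <= radius for a in archive_codes):
                    hits += 1
            return hits / max(len(query_codes), 1)

        def retrieve(query_codes, archive, radius=4, top_k=10):
            # archive: dict mapping image id -> list of binary primitive-class codes
            scores = [(name, multi_code_similarity(query_codes, codes, radius))
                      for name, codes in archive.items()]
            return sorted(scores, key=lambda t: t[1], reverse=True)[:top_k]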

  9. The InSAR Scientific Computing Environment

    NASA Technical Reports Server (NTRS)

    Rosen, Paul A.; Gurrola, Eric; Sacco, Gian Franco; Zebker, Howard

    2012-01-01

    We have developed a flexible and extensible Interferometric SAR (InSAR) Scientific Computing Environment (ISCE) for geodetic image processing. ISCE was designed from the ground up as a geophysics community tool for generating stacks of interferograms that lend themselves to various forms of time-series analysis, with attention paid to accuracy, extensibility, and modularity. The framework is python-based, with code elements rigorously componentized by separating input/output operations from the processing engines. This allows greater flexibility and extensibility in the data models, and creates algorithmic code that is less susceptible to unnecessary modification when new data types and sensors are available. In addition, the components support provenance and checkpointing to facilitate reprocessing and algorithm exploration. The algorithms, based on legacy processing codes, have been adapted to assume a common reference track approach for all images acquired from nearby orbits, simplifying and systematizing the geometry for time-series analysis. The framework is designed to easily allow user contributions, and is distributed for free use by researchers. ISCE can process data from the ALOS, ERS, EnviSAT, Cosmo-SkyMed, RadarSAT-1, RadarSAT-2, and TerraSAR-X platforms, starting from Level 0 or Level 1 as provided from the data source, and going as far as Level 3 geocoded deformation products. With its flexible design, it can be extended with raw/metadata parsers to enable it to work with radar data from other platforms.

  10. IQM: An Extensible and Portable Open Source Application for Image and Signal Analysis in Java

    PubMed Central

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM’s image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis. PMID:25612319

  11. IQM: an extensible and portable open source application for image and signal analysis in Java.

    PubMed

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM's image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis.

  12. Preliminary results of 3D dose calculations with MCNP-4B code from a SPECT image.

    PubMed

    Rodríguez Gual, M; Lima, F F; Sospedra Alfonso, R; González González, J; Calderón Marín, C

    2004-01-01

    Interface software was developed to generate the input file to run the Monte Carlo MCNP-4B code from a medical image in Interfile format version 3.3. The software was tested using a spherical phantom of tomography slices with a known cumulated activity distribution in Interfile format, generated with the IMAGAMMA medical image processing system. The 3D dose calculation obtained with the Monte Carlo MCNP-4B code was compared with the voxel S factor method. The results show a relative error between both methods of less than 1%.

  13. Classifying Facial Actions

    PubMed Central

    Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.

    2010-01-01

    The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284

  14. Teaching strategies for using projected images to develop conceptual understanding: Exploring discussion practices in computer simulation and static image-based lessons

    NASA Astrophysics Data System (ADS)

    Price, Norman T.

    The availability and sophistication of visual display images, such as simulations, for use in science classrooms has increased exponentially; however, it can be difficult for teachers to use these images to encourage and engage active student thinking. There is a need to describe flexible discussion strategies that use visual media to engage active thinking. This mixed methods study analyzes teacher behavior in lessons using visual media about the particulate model of matter that were taught by three experienced middle school teachers. Each teacher taught one half of their students with lessons using static overheads and taught the other half with lessons using a projected dynamic simulation. The quantitative analysis of pre-post data found significant gain differences between the two image mode conditions, suggesting that the students who were assigned to the simulation condition learned more than students who were assigned to the overhead condition. Open coding was used to identify a set of eight image-based teaching strategies that teachers were using with visual displays. Fixed codes for this set of image-based discussion strategies were then developed and used to analyze video and transcripts of whole class discussions from 12 lessons. The image-based discussion strategies were refined over time in a set of three in-depth 2x2 comparative case studies of two teachers teaching one lesson topic with two image display modes. The comparative case study data suggest that the simulation mode may have offered greater affordances than the overhead mode for planning and enacting discussions. The 12 discussions were also coded for overall teacher student interaction patterns, such as presentation, IRE, and IRF. When teachers moved during a lesson from using no image to using either image mode, some teachers were observed asking more questions when the image was displayed while others asked many fewer questions. The changes in teacher student interaction patterns suggest that teachers vary on whether they consider the displayed image a "tool-for-telling" or a "tool-for-asking." The study attempts to provide new descriptions of strategies teachers use to orchestrate image-based discussions designed to promote student engagement and reasoning in lessons with conceptual goals.

  15. Sandia Simple Particle Tracking (Sandia SPT) v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony, Stephen M.

    2015-06-15

    Sandia SPT is designed as software to accompany a methods book chapter that provides an introduction on how to label and track individual proteins. The Sandia Simple Particle Tracking code uses techniques common to the image processing community; its value is that it facilitates implementing the methods described in the book chapter by providing the necessary open-source code. The code performs single particle spot detection (or segmentation and localization) followed by tracking (or connecting the detected particles into trajectories). The book chapter, which along with the headers in each file constitutes the documentation for the code, is: Anthony, S.M.; Carroll-Portillo, A.; Timlon, J.A., Dynamics and Interactions of Individual Proteins in the Membrane of Living Cells. In Anup K. Singh (Ed.), Single Cell Protein Analysis, Methods in Molecular Biology. Springer.

  16. Analysis on the optical aberration effect on spectral resolution of coded aperture spectroscopy

    NASA Astrophysics Data System (ADS)

    Hao, Peng; Chi, Mingbo; Wu, Yihui

    2017-10-01

    The coded aperture spectrometer can achieve high throughput and high spectral resolution by replacing the traditional single slit with two-dimensional array slits manufactured by MEMS technology. However, the sampling accuracy of the coded spectrum image can be distorted by system aberrations, machining errors, fixing errors and so on, resulting in a declined spectral resolution. The factors influencing the spectral resolution come from the decoding error, the spectral resolution of each column, and the column spectrum offset correction. For the Czerny-Turner spectrometer, the spectral resolution of each column depends mostly on astigmatism; in this coded aperture spectroscopy, uncorrected astigmatism does result in degraded performance. Some methods must be used to reduce or remove the limiting astigmatism. The curvature of field and the spectral curvature can also result in spectrum correction errors.

  17. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System.

    PubMed

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-09-03

    While color-coding methods have improved the measuring efficiency of a structured light three-dimensional (3D) measurement system, they decreased the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate the error caused by the LCA. Firstly, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by LCA of the camera and projector. Then in measurements, error values of LCA are calculated and compensated to correct the projector image coordinates through the 3D error map and a tri-linear interpolation method. Eventually, 3D coordinates with higher accuracy are re-calculated according to the compensated image coordinates. The effectiveness of the proposed method is verified in the following experiments.
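
    The compensation step amounts to looking up an LCA correction at an arbitrary 3D point by trilinear interpolation of the error map sampled at the preselected grid points. The Python sketch below shows generic trilinear interpolation on a regular grid; the grid layout, spacing, and the two-component (du, dv) correction stored per node are assumptions for illustration, not the authors' data structure.

      # Trilinear interpolation of a 3D error map sampled on a regular grid.
      # error_map has shape (nx, ny, nz, 2): an assumed (du, dv) image-coordinate
      # correction at every grid node.
      import numpy as np

      def trilinear_correction(error_map, origin, spacing, point):
          p = (np.asarray(point, float) - origin) / spacing      # grid coordinates
          i0 = np.floor(p).astype(int)
          i0 = np.clip(i0, 0, np.array(error_map.shape[:3]) - 2)
          t = p - i0                                              # fractional part
          acc = np.zeros(error_map.shape[-1])
          for dx in (0, 1):
              for dy in (0, 1):
                  for dz in (0, 1):
                      w = ((t[0] if dx else 1 - t[0]) *
                           (t[1] if dy else 1 - t[1]) *
                           (t[2] if dz else 1 - t[2]))
                      acc += w * error_map[i0[0] + dx, i0[1] + dy, i0[2] + dz]
          return acc   # correction to apply to the projector image coordinates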

  18. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System

    PubMed Central

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-01-01

    While color-coding methods have improved the measuring efficiency of a structured light three-dimensional (3D) measurement system, they decreased the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate the error caused by the LCA. Firstly, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by LCA of the camera and projector. Then in measurements, error values of LCA are calculated and compensated to correct the projector image coordinates through the 3D error map and a tri-linear interpolation method. Eventually, 3D coordinates with higher accuracy are re-calculated according to the compensated image coordinates. The effectiveness of the proposed method is verified in the following experiments. PMID:27598174

  19. Agile Multi-Scale Decompositions for Automatic Image Registration

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Leija, Omar Navarro; Le Moigne, Jacqueline

    2016-01-01

    In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, this method imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features are computed. Compared to the previous prototype, this method introduces a flexibility to the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, to give a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.

  20. Ultrasound Transducer and System for Real-Time Simultaneous Therapy and Diagnosis for Noninvasive Surgery of Prostate Tissue

    PubMed Central

    Jeong, Jong Seob; Chang, Jin Ho; Shung, K. Kirk

    2009-01-01

    For noninvasive treatment of prostate tissue using high intensity focused ultrasound (HIFU), this paper proposes a design of an integrated multi-functional confocal phased array (IMCPA) and a strategy to perform both imaging and therapy simultaneously with this array. IMCPA is composed of triple-row phased arrays: a 6 MHz array in the center row for imaging and two 4 MHz arrays in the outer rows for therapy. Different types of piezoelectric materials and stack configurations may be employed to maximize their respective functionalities, i.e., therapy and imaging. Fabrication complexity of IMCPA may be reduced by assembling already constructed arrays. In IMCPA, reflected therapeutic signals may corrupt the quality of imaging signals received by the center row array. This problem can be overcome by implementing a coded excitation approach and/or a notch filter when B-mode images are formed during therapy. The 13-bit Barker code, which is a binary code with unique autocorrelation properties, is preferred for implementing coded excitation, although other codes may also be used. From both Field II simulation and experimental results, it was verified whether these remedial approaches would make it feasible to simultaneously carry out imaging and therapy by IMCPA. The results showed that the 13-bit Barker code with 3 cycles per bit provided acceptable performances. The measured −6 dB and −20 dB range mainlobe widths were 0.52 mm and 0.91 mm, respectively, and a range sidelobe level was measured to be −48 dB regardless of whether a notch filter was used. The 13-bit Barker code with 2 cycles per bit yielded −6 dB and −20 dB range mainlobe widths of 0.39 mm and 0.67 mm. Its range sidelobe level was found to be −40 dB after notch filtering. These results indicate the feasibility of the proposed transducer design and system for real-time imaging during therapy. PMID:19811994
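
    The reason the 13-bit Barker code suits coded excitation is its autocorrelation: a mainlobe of 13 with sidelobes of magnitude 1, so matched filtering (pulse compression) of the received echo restores axial resolution while raising SNR. The Python sketch below builds a simplified Barker-coded excitation and compresses an ideal echo; the carrier frequency, sampling rate, and cycles-per-bit modulation are illustrative assumptions, not the transmit scheme used in the study.

      # Sketch of 13-bit Barker coded excitation and pulse compression.
      # Carrier frequency, sampling rate, and cycles per bit are illustrative only.
      import numpy as np

      BARKER_13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)

      def excitation(fs=100e6, f0=6e6, cycles_per_bit=3):
          n = int(round(cycles_per_bit * fs / f0))          # samples per code bit
          t = np.arange(n) / fs
          carrier = np.sin(2 * np.pi * f0 * t)
          return np.concatenate([b * carrier for b in BARKER_13])

      def pulse_compress(echo, tx):
          # matched filter: correlate the received echo with the transmitted code
          return np.correlate(echo, tx, mode="same")

      tx = excitation()
      echo = np.concatenate([np.zeros(500), tx, np.zeros(500)])  # ideal point target
      compressed = pulse_compress(echo, tx)                      # peak at the target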

  1. Ultrasound transducer and system for real-time simultaneous therapy and diagnosis for noninvasive surgery of prostate tissue.

    PubMed

    Jeong, Jong Seob; Chang, Jin Ho; Shung, K Kirk

    2009-09-01

    For noninvasive treatment of prostate tissue using high-intensity focused ultrasound, this paper proposes a design of an integrated multifunctional confocal phased array (IMCPA) and a strategy to perform both imaging and therapy simultaneously with this array. IMCPA is composed of triple-row phased arrays: a 6-MHz array in the center row for imaging and two 4-MHz arrays in the outer rows for therapy. Different types of piezoelectric materials and stack configurations may be employed to maximize their respective functionalities, i.e., therapy and imaging. Fabrication complexity of IMCPA may be reduced by assembling already constructed arrays. In IMCPA, reflected therapeutic signals may corrupt the quality of imaging signals received by the center-row array. This problem can be overcome by implementing a coded excitation approach and/or a notch filter when B-mode images are formed during therapy. The 13-bit Barker code, which is a binary code with unique autocorrelation properties, is preferred for implementing coded excitation, although other codes may also be used. From both Field II simulation and experimental results, we verified whether these remedial approaches would make it feasible to simultaneously carry out imaging and therapy by IMCPA. The results showed that the 13-bit Barker code with 3 cycles per bit provided acceptable performances. The measured -6 dB and -20 dB range mainlobe widths were 0.52 mm and 0.91 mm, respectively, and a range sidelobe level was measured to be -48 dB regardless of whether a notch filter was used. The 13-bit Barker code with 2 cycles per bit yielded -6 dB and -20 dB range mainlobe widths of 0.39 mm and 0.67 mm. Its range sidelobe level was found to be -40 dB after notch filtering. These results indicate the feasibility of the proposed transducer design and system for real-time imaging during therapy.

  2. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS’s performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which lead to reducing onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281
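
    The entropy-coding stage described above maps prediction residuals to non-negative integers and encodes them with adaptive Golomb-Rice codes. The Python sketch below shows a plain, non-adaptive Golomb-Rice encoder with a zig-zag mapping of signed residuals; the fixed parameter k is a simplification of the adaptive, context-dependent parameter selection used in the FELICS-based coder.

      # Plain Golomb-Rice coding of prediction residuals (non-adaptive sketch).
      # The FELICS-style coder adapts k per context; a fixed k is assumed here.

      def zigzag_map(e):
          # map a signed residual to a non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4
          return 2 * e if e >= 0 else -2 * e - 1

      def rice_encode(value, k):
          # unary quotient, a terminating 0, then the k-bit binary remainder
          q, r = divmod(value, 1 << k)
          return "1" * q + "0" + (format(r, "0{}b".format(k)) if k else "")

      def encode_residuals(residuals, k=2):
          return "".join(rice_encode(zigzag_map(e), k) for e in residuals)

      # Example: small residuals produce short codewords, large ones longer unary parts.
      bits = encode_residuals([0, -1, 3, 2, -4], k=2)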

  3. The Statistical Consulting Center for Astronomy (SCCA)

    NASA Technical Reports Server (NTRS)

    Akritas, Michael

    2001-01-01

    The process by which raw astronomical data acquisition is transformed into scientifically meaningful results and interpretation typically involves many statistical steps. Traditional astronomy limits itself to a narrow range of old and familiar statistical methods: means and standard deviations; least-squares methods like chi(sup 2) minimization; and simple nonparametric procedures such as the Kolmogorov-Smirnov tests. These tools are often inadequate for the complex problems and datasets under investigation, and recent years have witnessed an increased usage of maximum-likelihood, survival analysis, multivariate analysis, wavelet and advanced time-series methods. The Statistical Consulting Center for Astronomy (SCCA) assisted astronomers with the use of sophisticated tools, and with matching these tools to specific problems. The SCCA operated with two professors of statistics and a professor of astronomy working together. Questions were received by e-mail, and were discussed in detail with the questioner. Summaries of those questions and answers leading to new approaches were posted on the Web (www.state.psu.edu/mga/SCCA). In addition to serving individual astronomers, the SCCA established a Web site for general use that provides hypertext links to selected on-line public-domain statistical software and services. The StatCodes site (www.astro.psu.edu/statcodes) provides over 200 links in the areas of: Bayesian statistics; censored and truncated data; correlation and regression; density estimation and smoothing; general statistics packages and information; image analysis; interactive Web tools; multivariate analysis; multivariate clustering and classification; nonparametric analysis; software written by astronomers; spatial statistics; statistical distributions; time series analysis; and visualization tools. StatCodes has received a remarkably high and constant hit rate of 250 hits/week (over 10,000/year) since its inception in mid-1997. It is of interest to scientists both within and outside of astronomy. The most popular sections are multivariate techniques, image analysis, and time series analysis. Hundreds of copies of the ASURV, SLOPES and CENS-TAU codes developed by SCCA scientists were also downloaded from the StatCodes site. In addition to formal SCCA duties, SCCA scientists continued a variety of related activities in astrostatistics, including refereeing of statistically oriented papers submitted to the Astrophysical Journal, talks in meetings including Feigelson's talk to science journalists entitled "The reemergence of astrostatistics" at the American Association for the Advancement of Science meeting, and published papers of astrostatistical content.

  4. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445

  5. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

    Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  6. Cross-indexing of binary SIFT codes for large-scale image search.

    PubMed

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost for storage. Besides, it benefits the computational efficiency since the computation of similarity can be efficiently measured by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of SIFT descriptor. It is unsupervised and the generated binary codes are demonstrated to be dispreserving. Besides, we propose a new searching strategy to find target features based on the cross-indexing in the binary SIFT space and original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on large-scale partial duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.

  7. Correlation estimation and performance optimization for distributed image compression

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Cao, Lei; Cheng, Hui

    2006-01-01

    Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
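
    The benefit of Gray code here comes from the fact that adjacent integers differ in exactly one bit, so a small difference between a pixel value and its side information flips few bit-plane symbols, which strengthens the bit-plane correlation exploited by the Slepian-Wolf coder. A tiny numeric illustration (not taken from the paper) follows.

      # Gray code vs. natural binary: count bit flips between a value and a
      # slightly different side-information value.  A small difference causes few
      # flips in Gray code, which is what improves bit-plane correlation.
      def gray(n):
          return n ^ (n >> 1)

      def bit_flips(a, b):
          return bin(a ^ b).count("1")

      x, y = 127, 128                        # neighbouring 8-bit pixel values
      print(bit_flips(x, y))                 # natural binary: 8 flips
      print(bit_flips(gray(x), gray(y)))     # Gray code: 1 flip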

  8. Volumetric image interpretation in radiology: scroll behavior and cognitive processes.

    PubMed

    den Boer, Larissa; van der Schaaf, Marieke F; Vincken, Koen L; Mol, Chris P; Stuijfzand, Bobby G; van der Gijp, Anouk

    2018-05-16

    The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation is conducted on 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees who all completed four volumetric image cases. Two types of data were obtained: scroll behavior and think-aloud data. Types of scroll behavior concerned oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded by a framework of knowledge and skills in radiology including three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs, whereas full runs coincided more often with perception than oscillations and half runs. Interruptions were characterized by synthesis and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. It suggests that the types of scroll behavior are relevant to describe how radiologists interact with and manipulate volumetric images.

  9. SUPRA: open-source software-defined ultrasound processing for real-time applications : A 2D and 3D pipeline from beamforming to B-mode.

    PubMed

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: First, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult. Second, most pipelines are implemented in special hardware, resulting in limited flexibility of implemented processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and regarding its run time. The pipeline shows image quality comparable to a clinical system and, backed by point spread function measurements, a comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up the research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies the development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.

  10. Infrastructure Upgrades to Support Model Longevity and New Applications: The Variable Infiltration Capacity Model Version 5.0 (VIC 5.0)

    NASA Astrophysics Data System (ADS)

    Nijssen, B.; Hamman, J.; Bohn, T. J.

    2015-12-01

    The Variable Infiltration Capacity (VIC) model is a macro-scale semi-distributed hydrologic model. VIC development began in the early 1990s and it has been used extensively, applied from basin to global scales. VIC has been applied in many use cases, including the construction of hydrologic data sets, trend analysis, data evaluation and assimilation, forecasting, coupled climate modeling, and climate change impact analysis. Ongoing applications of the VIC model include the University of Washington's drought monitor and forecast systems, and NASA's land data assimilation systems. The development of VIC version 5.0 focused on reconfiguring the legacy VIC source code to support a wider range of modern modeling applications. The VIC source code has been moved to a public Github repository to encourage participation by the model development community-at-large. The reconfiguration has separated the physical core of the model from the driver, which is responsible for memory allocation, pre- and post-processing and I/O. VIC 5.0 includes four drivers that use the same physical model core: classic, image, CESM, and Python. The classic driver supports legacy VIC configurations and runs in the traditional time-before-space configuration. The image driver includes a space-before-time configuration, netCDF I/O, and uses MPI for parallel processing. This configuration facilitates the direct coupling of streamflow routing, reservoir, and irrigation processes within VIC. The image driver is the foundation of the CESM driver, which couples VIC to CESM's CPL7 and a prognostic atmosphere. Finally, we have added a Python driver that provides access to the functions and datatypes of VIC's physical core from a Python interface. This presentation demonstrates how reconfiguring legacy source code extends the life and applicability of a research model.

  11. Interactive QR code beautification with full background image embedding

    NASA Astrophysics Data System (ADS)

    Lin, Lijian; Wu, Song; Liu, Sijiang; Jiang, Bo

    2017-06-01

    QR (Quick Response) code is a kind of two-dimensional barcode that was first developed in the automotive industry. Nowadays, QR code has been widely used in commercial applications like product promotion, mobile payment, product information management, etc. Traditional QR codes in accordance with the international standard are reliable and fast to decode, but lack the aesthetic appearance needed to present visual information to customers. In this work, we present a novel interactive method to generate aesthetic QR code. Given information to be encoded and an image to be decorated as the full QR code background, our method accepts a user's interactive strokes as hints to remove undesired parts of QR code modules based on the support of the QR code error correction mechanism and background color thresholds. Compared to previous approaches, our method follows the intention of the QR code designer, and thus can achieve a more pleasing result while keeping high machine readability.

  12. Third order harmonic imaging for biological tissues using three phase-coded pulses.

    PubMed

    Ma, Qingyu; Gong, Xiufen; Zhang, Dong

    2006-12-22

    Compared to the fundamental and the second harmonic imaging, the third harmonic imaging shows significant improvements in image quality due to the better resolution, but it is degraded by the lower sound pressure and signal-to-noise ratio (SNR). In this study, a phase-coded pulse technique is proposed to selectively enhance the sound pressure of the third harmonic by 9.5 dB whereas the fundamental and the second harmonic components are efficiently suppressed and SNR is also increased by 4.7 dB. Based on the solution of the KZK nonlinear equation, the axial and lateral beam profiles of harmonics radiated from a planar piston transducer were theoretically simulated and experimentally examined. Finally, the third harmonic images using this technique were performed for several biological tissues and compared with the images obtained by the fundamental and the second harmonic imaging. Results demonstrate that the phase-coded pulse technique yields a dramatically cleaner and sharper contrast image.
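
    A common way such phase-coded schemes select the third harmonic is to transmit three pulses whose phases are shifted by 0, 2π/3, and 4π/3 and to sum the received echoes: the fundamental and second-harmonic terms rotate through a full set of phases and cancel, while the third-harmonic terms rotate by multiples of 2π and add coherently. The exact pulse design of this study may differ; the Python sketch below is only a numerical check of that cancellation argument with an assumed toy echo model.

      # Simplified check: summing echoes of three pulses phase-shifted by 2*pi/3
      # cancels the fundamental and the 2nd harmonic but adds the 3rd harmonic.
      import numpy as np

      fs, f0 = 200e6, 2e6
      t = np.arange(0, 5e-6, 1 / fs)

      def echo(phase):
          # toy nonlinear echo containing the first three harmonics (assumed amplitudes)
          return (np.cos(2 * np.pi * f0 * t + phase)
                  + 0.3 * np.cos(2 * (2 * np.pi * f0 * t + phase))
                  + 0.1 * np.cos(3 * (2 * np.pi * f0 * t + phase)))

      summed = sum(echo(p) for p in (0, 2 * np.pi / 3, 4 * np.pi / 3))
      # summed ~ 3 * 0.1 * cos(3*w*t): only the 3rd harmonic survives the summation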

  13. Infrared imaging - A validation technique for computational fluid dynamics codes used in STOVL applications

    NASA Technical Reports Server (NTRS)

    Hardman, R. R.; Mahan, J. R.; Smith, M. H.; Gelhausen, P. A.; Van Dalsem, W. R.

    1991-01-01

    The need for a validation technique for computational fluid dynamics (CFD) codes in STOVL applications has led to research efforts to apply infrared thermal imaging techniques to visualize gaseous flow fields. Specifically, a heated, free-jet test facility was constructed. The gaseous flow field of the jet exhaust was characterized using an infrared imaging technique in the 2 to 5.6 micron wavelength band as well as conventional pitot tube and thermocouple methods. These infrared images are compared to computer-generated images using the equations of radiative exchange based on the temperature distribution in the jet exhaust measured with the thermocouple traverses. Temperature and velocity measurement techniques, infrared imaging, and the computer model of the infrared imaging technique are presented and discussed. From the study, it is concluded that infrared imaging techniques coupled with the radiative exchange equations applied to CFD models are a valid method to qualitatively verify CFD codes used in STOVL applications.

  14. Neural networks for data compression and invariant image recognition

    NASA Technical Reports Server (NTRS)

    Gardner, Sheldon

    1989-01-01

    An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.

  15. Objective definition of rosette shape variation using a combined computer vision and data mining approach.

    PubMed

    Camargo, Anyela; Papadopoulou, Dimitra; Spyropoulou, Zoi; Vlachonasios, Konstantinos; Doonan, John H; Gay, Alan P

    2014-01-01

    Computer-vision based measurements of phenotypic variation have implications for crop improvement and food security because they are intrinsically objective. It should be possible therefore to use such approaches to select robust genotypes. However, plants are morphologically complex and identification of meaningful traits from automatically acquired image data is not straightforward. Bespoke algorithms can be designed to capture and/or quantitate specific features but this approach is inflexible and is not generally applicable to a wide range of traits. In this paper, we have used industry-standard computer vision techniques to extract a wide range of features from images of genetically diverse Arabidopsis rosettes growing under non-stimulated conditions, and then used statistical analysis to identify those features that provide good discrimination between ecotypes. This analysis indicates that almost all the observed shape variation can be described by 5 principal components. We describe an easily implemented pipeline including image segmentation, feature extraction and statistical analysis. This pipeline provides a cost-effective and inherently scalable method to parameterise and analyse variation in rosette shape. The acquisition of images does not require any specialised equipment and the computer routines for image processing and data analysis have been implemented using open source software. The source code for data analysis is written in R. The equations to calculate the image descriptors are also provided.
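
    The described pipeline (segmentation, feature extraction, and statistical reduction to a handful of principal components) was implemented with open source tools and R. The Python sketch below reproduces the idea with scikit-image region properties and scikit-learn PCA; it assumes rosette masks have already been segmented (one plant per mask) and uses a small, illustrative feature set rather than the paper's full descriptor list.

      # Sketch: shape features from binary rosette masks followed by PCA.
      # Assumes masks are already segmented, one rosette per image.
      import numpy as np
      from skimage.measure import label, regionprops
      from sklearn.decomposition import PCA

      def shape_features(mask):
          props = regionprops(label(mask.astype(int)))[0]   # first labelled region
          return [props.area, props.perimeter, props.eccentricity,
                  props.solidity, props.extent, props.major_axis_length,
                  props.minor_axis_length]

      def rosette_pca(masks, n_components=5):
          X = np.array([shape_features(m) for m in masks], float)
          X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)   # standardise features
          pca = PCA(n_components=n_components).fit(X)
          return pca.transform(X), pca.explained_variance_ratio_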

  16. Modified Mean-Pyramid Coding Scheme

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Romer, Richard

    1996-01-01

    The modified mean-pyramid coding scheme requires transmission of slightly less data: the data-expansion factor is reduced from 1/3 to 1/12. Mean-pyramid schemes provide progressive transmission of image data in a sequence of frames in such a way that a coarse version of the image is reconstructed after receipt of the first frame and an increasingly refined version of the image is reconstructed after receipt of each subsequent frame.
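
    In a mean pyramid, each coarser level stores the 2×2 block means of the level below, so the coarser levels together add about 1/4 + 1/16 + ... ≈ 1/3 of the base image size, which is the data-expansion factor that the modified scheme reduces to 1/12. A minimal sketch of building such a pyramid (a generic construction, not the modified scheme itself) is shown below.

      # Build a mean pyramid: level i+1 holds the 2x2 block means of level i.
      # The coarser levels together occupy roughly 1/3 of the base image size.
      import numpy as np

      def mean_pyramid(img):
          levels = [img.astype(float)]
          while min(levels[-1].shape) > 1:
              a = levels[-1]
              h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
              a = a[:h, :w]                       # crop to even dimensions
              coarse = (a[0::2, 0::2] + a[0::2, 1::2] +
                        a[1::2, 0::2] + a[1::2, 1::2]) / 4.0
              levels.append(coarse)
          return levels   # levels[0] is the original, levels[-1] is the coarsest

      pyr = mean_pyramid(np.random.rand(256, 256))
      overhead = sum(l.size for l in pyr[1:]) / pyr[0].size    # ~0.333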

  17. Recent Trends in Imaging Use in Hospital Settings: Implications for Future Planning.

    PubMed

    Levin, David C; Parker, Laurence; Rao, Vijay M

    2017-03-01

    To compare trends in utilization rates of imaging in the three hospital-based settings where imaging is conducted. The nationwide Medicare Part B databases for 2004-2014 were used. All discretionary noninvasive diagnostic imaging (NDI) CPT codes were selected and grouped by modality. Procedure volumes of each code were available from the databases and converted to utilization rates per 1,000 Medicare enrollees. Medicare's place-of-service codes were used to identify imaging examinations done in hospital inpatients, hospital outpatient departments (HOPDs), and emergency departments (EDs). Trends were observed over the life of the study. Trendlines were strongly affected by code bundling in echocardiography in 2009, nuclear imaging in 2010, and CT in 2011. However, even aside from these artifactual effects, important trends could be discerned. Inpatient imaging utilization rates of all modalities are trending downward. In HOPDs, the utilization rate of conventional radiographic examinations (CREs) is declining but rates of CT, MRI, echocardiography, and noncardiac ultrasound (US) are increasing. In EDs, utilization rates of CREs, CT, and US are increasing. In the 3 years after 2011, when no further code bundling occurred, the total inpatient NDI utilization rate dropped 15%, whereas the rate in EDs increased 12% and that in HOPDs increased 1%. The trends in utilization of NDI in the three hospital-based settings where imaging occurs are distinctly different. Radiologists and others who are involved in deciding what kinds of equipment to purchase and where to locate it should be cognizant of these trends in making their decisions. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  18. Learning Short Binary Codes for Large-scale Image Retrieval.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms prove to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually a code length shorter than 100 b) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of the data, MCR can generate a one-bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to the state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
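
    The overall recipe (binarize each data dimension into one bit, score how well each bit separates the data, and keep only the top-ranked bits) can be sketched as follows. The thresholding rule and the per-dimension score used below (mean threshold, variance as a separability proxy) are illustrative assumptions and not the paper's actual cost function.

      # Sketch of learning a short binary code: one bit per dimension, then keep
      # only the most "discriminative" bits.  The ranking score used here
      # (per-dimension variance) is an illustrative stand-in for the MCR cost.
      import numpy as np

      def learn_short_code(X, n_bits=64):
          thresholds = X.mean(axis=0)                 # one threshold per dimension
          score = X.var(axis=0)                       # assumed separability score
          selected = np.argsort(-score)[:n_bits]      # top-ranked dimensions
          return thresholds, selected

      def encode(X, thresholds, selected):
          return (X[:, selected] > thresholds[selected]).astype(np.uint8)

      # Usage: fit on a training matrix, encode both database and queries, then
      # compare the resulting short codes with the Hamming distance.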

  19. FISH Finder: a high-throughput tool for analyzing FISH images

    PubMed Central

    Shirley, James W.; Ty, Sereyvathana; Takebayashi, Shin-ichiro; Liu, Xiuwen; Gilbert, David M.

    2011-01-01

    Motivation: Fluorescence in situ hybridization (FISH) is used to study the organization and the positioning of specific DNA sequences within the cell nucleus. Analyzing the data from FISH images is a tedious process that invokes an element of subjectivity. Automated FISH image analysis offers savings in time as well as gaining the benefit of objective data analysis. While several FISH image analysis software tools have been developed, they often use a threshold-based segmentation algorithm for nucleus segmentation. As fluorescence signal intensities can vary significantly from experiment to experiment, from cell to cell, and within a cell, threshold-based segmentation is inflexible and often insufficient for automatic image analysis, leading to additional manual segmentation and potential subjective bias. To overcome these problems, we developed a graphical software tool called FISH Finder to automatically analyze FISH images that vary significantly. By posing the nucleus segmentation as a classification problem, compound Bayesian classifier is employed so that contextual information is utilized, resulting in reliable classification and boundary extraction. This makes it possible to analyze FISH images efficiently and objectively without adjustment of input parameters. Additionally, FISH Finder was designed to analyze the distances between differentially stained FISH probes. Availability: FISH Finder is a standalone MATLAB application and platform independent software. The program is freely available from: http://code.google.com/p/fishfinder/downloads/list Contact: gilbert@bio.fsu.edu PMID:21310746

  20. A new security solution to JPEG using hyper-chaotic system and modified zigzag scan coding

    NASA Astrophysics Data System (ADS)

    Ji, Xiao-yong; Bai, Sen; Guo, Yu; Guo, Hui

    2015-05-01

    Though JPEG is an excellent compression standard of images, it does not provide any security performance. Thus, a security solution to JPEG was proposed in Zhang et al. (2014). But there are some flaws in Zhang's scheme and in this paper we propose a new scheme based on discrete hyper-chaotic system and modified zigzag scan coding. By shuffling the identifiers of zigzag scan encoded sequence with hyper-chaotic sequence and accurately encrypting the certain coefficients which have little relationship with the correlation of the plain image in zigzag scan encoded domain, we achieve high compression performance and robust security simultaneously. Meanwhile we present and analyze the flaws in Zhang's scheme through theoretical analysis and experimental verification, and give the comparisons between our scheme and Zhang's. Simulation results verify that our method has better performance in security and efficiency.
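
    Shuffling identifiers with a chaotic sequence is typically done by iterating the chaotic map, sorting the generated sequence, and using the sort order as a key-dependent permutation. The Python sketch below illustrates that generic pattern with a logistic map standing in for the discrete hyper-chaotic system of the paper; the map, its parameters, and the burn-in length are assumptions.

      # Generic chaotic shuffling sketch: a logistic map (stand-in for the paper's
      # hyper-chaotic system) generates a key-dependent permutation of identifiers.
      import numpy as np

      def chaotic_permutation(n, x0=0.37, r=3.99, burn_in=100):
          x, seq = x0, []
          for i in range(n + burn_in):
              x = r * x * (1 - x)            # logistic map iteration
              if i >= burn_in:
                  seq.append(x)
          return np.argsort(seq)             # permutation derived from the sequence

      def shuffle_identifiers(identifiers, perm):
          return [identifiers[p] for p in perm]

      # The decoder regenerates the same permutation from the shared key (x0, r)
      # and applies its inverse to restore the original identifier order.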

  1. Mission science value-cost savings from the Advanced Imaging Communication System (AICS)

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1984-01-01

    An Advanced Imaging Communication System (AICS) was proposed in the mid-1970s as an alternative to the Voyager data/communication system architecture. The AICS achieved virtually error free communication with little loss in the downlink data rate by concatenating a powerful Reed-Solomon block code with the Voyager convolutionally coded, Viterbi decoded downlink channel. The clean channel allowed AICS sophisticated adaptive data compression techniques. Both Voyager and the Galileo mission have implemented AICS components, and the concatenated channel itself is heading for international standardization. An analysis that assigns a dollar value/cost savings to AICS mission performance gains is presented. A conservative value or savings of $3 million for Voyager, $4.5 million for Galileo, and as much as $7 to 9.5 million per mission for future projects such as the proposed Mariner Mar 2 series is shown.

  2. Impact of differences in the solar irradiance spectrum on surface reflectance retrieval with different radiative transfer codes

    NASA Technical Reports Server (NTRS)

    Staenz, K.; Williams, D. J.; Fedosejevs, G.; Teillet, P. M.

    1995-01-01

    Surface reflectance retrieval from imaging spectrometer data as acquired with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has become important for quantitative analysis. In order to calculate surface reflectance from remotely measured radiance, radiative transfer codes such as 5S and MODTRAN2 play an increasing role for removal of scattering and absorption effects of the atmosphere. Accurate knowledge of the exo-atmospheric solar irradiance (E(sub 0)) spectrum at the spectral resolution of the sensor is important for this purpose. The present study investigates the impact of differences in the solar irradiance function, as implemented in a modified version of 5S (M5S), 6S, and MODTRAN2, and as proposed by Green and Gao, on the surface reflectance retrieved from AVIRIS data. Reflectance measured in situ is used as a basis of comparison.

  3. ACSYNT - A standards-based system for parametric, computer aided conceptual design of aircraft

    NASA Technical Reports Server (NTRS)

    Jayaram, S.; Myklebust, A.; Gelhausen, P.

    1992-01-01

    A group of eight US aerospace companies together with several NASA and Navy centers, led by the NASA Ames Systems Analysis Branch, and Virginia Tech's CAD Laboratory agreed in 1990, through the assistance of the American Technology Initiative, to form the ACSYNT (Aircraft Synthesis) Institute. The Institute is supported by a Joint Sponsored Research Agreement to continue the research and development in computer aided conceptual design of aircraft initiated by NASA Ames Research Center and Virginia Tech's CAD Laboratory. The result of this collaboration, a feature-based, parametric computer aided aircraft conceptual design code called ACSYNT, is described. The code is based on analysis routines begun at NASA Ames in the early 1970s. ACSYNT's CAD system is based entirely on the ISO standard Programmer's Hierarchical Interactive Graphics System and is graphics-device independent. The code includes a highly interactive graphical user interface, automatically generated Hermite and B-Spline surface models, and shaded image displays. Numerous features to enhance aircraft conceptual design are described.

  4. Thermal analysis of combinatorial solid geometry models using SINDA

    NASA Technical Reports Server (NTRS)

    Gerencser, Diane; Radke, George; Introne, Rob; Klosterman, John; Miklosovic, Dave

    1993-01-01

    Algorithms have been developed using Monte Carlo techniques to determine the thermal network parameters necessary to perform a finite difference analysis on Combinatorial Solid Geometry (CSG) models. Orbital and laser fluxes as well as internal heat generation are modeled to facilitate satellite modeling. The results of the thermal calculations are used to model the infrared (IR) images of targets and assess target vulnerability. Sample analyses and validation are presented which demonstrate code products.

  5. Image processing for safety assessment in civil engineering.

    PubMed

    Ferrer, Belen; Pomares, Juan C; Irles, Ramon; Espinosa, Julian; Mas, David

    2013-06-20

    Behavior analysis of construction safety systems is of fundamental importance to avoid accidental injuries. Traditionally, measurements of dynamic actions in civil engineering have been done through accelerometers, but high-speed cameras and image processing techniques can play an important role in this area. Here, we propose using morphological image filtering and Hough transform on high-speed video sequence as tools for dynamic measurements on that field. The presented method is applied to obtain the trajectory and acceleration of a cylindrical ballast falling from a building and trapped by a thread net. Results show that safety recommendations given in construction codes can be potentially dangerous for workers.

  6. Color-coded perfusion analysis of CEUS for pre-interventional diagnosis of microvascularisation in cases of vascular malformations.

    PubMed

    Teusch, V I; Wohlgemuth, W A; Piehler, A P; Jung, E M

    2014-01-01

    The aim of our pilot study was the application of a contrast-enhanced color-coded ultrasound perfusion analysis in patients with vascular malformations to quantify microcirculatory alterations. 28 patients (16 female, 12 male, mean age 24.9 years) with high-flow (n = 6) or slow-flow (n = 22) malformations were analyzed before intervention. An experienced examiner performed a color-coded Doppler sonography (CCDS) and a Power Doppler as well as a contrast-enhanced ultrasound after intravenous bolus injection of 1 - 2.4 ml of a second-generation ultrasound contrast medium (SonoVue®, Bracco, Milan). The contrast-enhanced examination was documented as a cine sequence over 60 s. The quantitative analysis based on color-coded contrast-enhanced ultrasound (CEUS) images included percentage peak enhancement (%peak), time to peak (TTP), area under the curve (AUC), and mean transit time (MTT). No side effects occurred after intravenous contrast injection. The mean %peak in arteriovenous malformations was almost twice as high as in slow-flow malformations. The area under the curve was 4 times higher in arteriovenous malformations compared to the mean value of other malformations. The mean transit time was 1.4 times higher in high-flow malformations compared to slow-flow malformations. There was no difference regarding the time to peak between the different malformation types. The comparison between all vascular malformations and surrounding tissue showed statistically significant differences for all analyzed data (%peak, TTP, AUC, MTT; p < 0.01). High-flow and slow-flow vascular malformations had statistically significant differences in %peak (p < 0.01), AUC analysis (p < 0.01), and MTT (p < 0.05). Color-coded perfusion analysis of CEUS seems to be a promising technique for the dynamic assessment of microvasculature in vascular malformations.
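
    The reported perfusion parameters are derived from the time-intensity curve of the contrast bolus: %peak is the maximum enhancement relative to baseline, TTP the time of that maximum, AUC the integral of the enhancement, and MTT an intensity-weighted mean transit time. The Python sketch below uses these common textbook definitions, which may differ in detail from the vendor software used in the study.

      # Common time-intensity-curve (TIC) parameters; the definitions are generic
      # and assume a positive, non-zero baseline intensity.
      import numpy as np

      def _trapz(y, x):
          # explicit trapezoidal integration (portable across numpy versions)
          return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

      def tic_parameters(t, intensity, baseline=None):
          t = np.asarray(t, float)
          base = float(intensity[0]) if baseline is None else float(baseline)
          enh = np.clip(np.asarray(intensity, float) - base, 0, None)  # enhancement
          peak_pct = 100.0 * enh.max() / base                          # %peak
          ttp = float(t[int(np.argmax(enh))])                          # time to peak
          auc = _trapz(enh, t)                                         # area under curve
          mtt = _trapz(t * enh, t) / auc if auc > 0 else float("nan")  # mean transit time
          return {"%peak": peak_pct, "TTP": ttp, "AUC": auc, "MTT": mtt}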

  7. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
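
    The block-multiplier mechanism can be sketched in a few lines: one base quantization matrix, scaled per 8 x 8 block. Here a crude contrast-based multiplier stands in for the perceptual-error equalization proposed in the paper, so the code illustrates the data flow rather than the actual optimization.

      import numpy as np
      from scipy.fftpack import dct, idct

      Q_BASE = np.full((8, 8), 16.0)          # placeholder luminance quantization matrix

      def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
      def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

      def encode_block(block, multiplier):
          q = Q_BASE * multiplier             # block-specific quantization matrix
          return np.round(dct2(block - 128) / q), q

      image = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
      recon = np.empty_like(image)
      for i in range(0, 64, 8):
          for j in range(0, 64, 8):
              blk = image[i:i+8, j:j+8]
              # higher-contrast blocks tolerate coarser quantization (more masking)
              m = 1.0 + blk.std() / 64.0
              coeffs, q = encode_block(blk, m)
              recon[i:i+8, j:j+8] = idct2(coeffs * q) + 128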

  8. Sodium 3D COncentration MApping (COMA 3D) using 23Na and proton MRI

    NASA Astrophysics Data System (ADS)

    Truong, Milton L.; Harrington, Michael G.; Schepkin, Victor D.; Chekmenev, Eduard Y.

    2014-10-01

    Functional changes of sodium 3D MRI signals were converted into millimolar concentration changes using an open-source, fully automated MATLAB toolbox. These concentration changes are visualized via 3D sodium concentration maps, and they are overlaid on conventional 3D proton images to provide high-resolution co-registration for easy correlation of functional changes with anatomical regions. Nearly 5000 concentration maps per hour were generated on a personal computer (ca. 2012) using 21.1 T 3D sodium MRI brain images of live rats with a spatial resolution of 0.8 × 0.8 × 0.8 mm3 and imaging matrices of 60 × 60 × 60. The produced concentration maps allowed for non-invasive quantitative measurement of in vivo sodium concentration in the normal rat brain as a functional response to migraine-like conditions. The presented work can also be applied to sodium-associated changes in migraine, cancer, and other metabolic abnormalities that can be sensed by molecular imaging. The MATLAB toolbox allows for automated analysis of 3D images acquired on the Bruker platform and can be extended to other imaging platforms. The resulting images are presented in the form of a series of 2D slices in all three dimensions in native MATLAB and PDF formats. The following is provided: (a) MATLAB source code for image processing, (b) the detailed processing procedures, (c) description of the code and all sub-routines, (d) example data sets of initial and processed data. The toolbox can be downloaded at: http://www.vuiis.vanderbilt.edu/ truongm/COMA3D/.
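
    A hedged sketch of the central conversion step, assuming the sodium signal is linear in concentration and calibrated against a reference phantom of known concentration; the calibration values, volumes and overlay slice below are placeholders, not the toolbox's actual procedure.

      import numpy as np
      import matplotlib.pyplot as plt

      sodium = np.random.rand(60, 60, 60)          # placeholder 23Na volume (a.u.)
      proton = np.random.rand(60, 60, 60)          # placeholder co-registered 1H volume

      ref_signal, ref_conc = 0.85, 150.0           # phantom signal (a.u.) and concentration (mM)
      conc = sodium * (ref_conc / ref_signal)      # assumes signal is linear in concentration

      z = 30                                       # mid-brain slice
      plt.imshow(proton[:, :, z], cmap="gray")
      plt.imshow(conc[:, :, z], cmap="hot", alpha=0.4)   # concentration map overlay
      plt.colorbar(label="[Na+] (mM)")
      plt.savefig("coma3d_overlay.png", dpi=150)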

  9. The analysis of image feature robustness using cometcloud

    PubMed Central

    Qi, Xin; Kim, Hyunjoo; Xing, Fuyong; Parashar, Manish; Foran, David J.; Yang, Lin

    2012-01-01

    The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin-stained breast tissue microarray slides, assessed while simulating different imaging challenges including defocus, changes in magnification, and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features that were evaluated include the co-occurrence matrix, center-symmetric auto-correlation, the texture feature coding method, the local binary pattern, and textons. Because each transformation and texture descriptor is independent, a network-structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All combinations of the image transformations and deformations were calculated, and the whole feature extraction procedure was completed in 70 minutes using a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperforms the other four texture descriptors but also requires the longest computational time; it is roughly 10 times slower than the local binary pattern and textons. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval. PMID:23248759
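
    As an illustration of one of the evaluated descriptors, the sketch below computes a local binary pattern histogram with scikit-image on a synthetic grayscale core; the tissue-microarray core extraction and the other four descriptors are omitted.

      import numpy as np
      from skimage.feature import local_binary_pattern

      core = np.random.default_rng(1).random((256, 256))       # placeholder tissue core
      P, R = 8, 1                                               # 8 neighbours at radius 1
      lbp = local_binary_pattern(core, P, R, method="uniform")  # rotation-invariant uniform LBP
      hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
      print("LBP feature vector:", np.round(hist, 3))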

  10. Current and future trends in marine image annotation software

    NASA Astrophysics Data System (ADS)

    Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.

    2016-12-01

    Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation - the process of transposing objects or events represented in a video or still image to the semantic level - may involve human interaction and computer-assisted solutions. Marine image annotation software (MIAS) have enabled over 500 publications to date. We review their functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input are basically graphical user interfaces with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software by their capability of integrating data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating after acquisition, and interacting with a database. These range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow input and display of data from multiple sensors or multiple annotators via intranet or internet. Post hoc human-mediated annotation often includes tools for data display and image analysis (e.g., length, area, image segmentation, point count) and, in a few cases, the possibility of browsing and editing previous dive logs or analyzing the annotations. The interaction with a database allows the automatic integration of annotations from different surveys, repeated and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post-processing, for stable platforms or still images. Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of specific software are outlined, the ideal software is discussed, and future trends are presented.

  11. Medicine, material science and security: the versatility of the coded-aperture approach.

    PubMed

    Munro, P R T; Endrizzi, M; Diemoz, P C; Hagen, C K; Szafraniec, M B; Millard, T P; Zapata, C E; Speller, R D; Olivo, A

    2014-03-06

    The principal limitation to the widespread deployment of X-ray phase imaging in a variety of applications is probably versatility. A versatile X-ray phase imaging system must be able to work with polychromatic and non-microfocus sources (for example, those currently used in medical and industrial applications), have physical dimensions sufficiently large to accommodate samples of interest, be insensitive to environmental disturbances (such as vibrations and temperature variations), require only simple system set-up and maintenance, and be able to perform quantitative imaging. The coded-aperture technique, based upon the edge illumination principle, satisfies each of these criteria. To date, we have applied the technique to mammography, materials science, small-animal imaging, non-destructive testing and security. In this paper, we outline the theory of coded-aperture phase imaging and show an example of how the technique may be applied to imaging samples with a practically important scale.

  12. Lossless Compression of JPEG Coded Photo Collections.

    PubMed

    Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng

    2016-04-06

    The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.

  13. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
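
    A simplified stand-in for the reconstruction idea: alternate a smoothing step (a crude surrogate for the non-Gaussian MRF prior) with projection of block-DCT coefficients back into the quantization cells implied by the received data. The quantization step size and smoothing strength are illustrative.

      import numpy as np
      from scipy.fftpack import dctn, idctn
      from scipy.ndimage import gaussian_filter

      Q = 24.0                                              # scalar quantization step
      rng = np.random.default_rng(0)
      original = gaussian_filter(rng.random((8, 8)) * 255, 1.0)
      q_indices = np.round(dctn(original, norm='ortho') / Q)   # what the decoder receives

      x = idctn(q_indices * Q, norm='ortho')                # standard (centroid) reconstruction
      for _ in range(20):
          x = gaussian_filter(x, 0.7)                       # prior step: favour smooth images
          c = dctn(x, norm='ortho')
          c = np.clip(c, (q_indices - 0.5) * Q, (q_indices + 0.5) * Q)  # stay in the same cell
          x = idctn(c, norm='ortho')                        # consistent, less blocky estimate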

  14. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called difference-mapped shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 19.5:1 (0.41 bpp) with an RMS error of 15.8 pixels and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC-compatible computer. Modules were developed for the tasks of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
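
    A small sketch of the vector-quantization step, with k-means playing the role of codebook design over 7-channel pixel vectors; the data are random stand-ins for the CAMS imagery and the codebook size is illustrative.

      import numpy as np
      from scipy.cluster.vq import kmeans2

      rows, cols, channels = 128, 128, 7
      cube = np.random.default_rng(0).random((rows, cols, channels)).astype(np.float32)
      vectors = cube.reshape(-1, channels)              # one 7-channel vector per pixel

      codebook, labels = kmeans2(vectors, k=256, minit='++', seed=0)   # 256-entry codebook
      reconstructed = codebook[labels].reshape(rows, cols, channels)
      rms = np.sqrt(np.mean((cube - reconstructed) ** 2))
      print(f"codebook 256, RMS error {rms:.4f}")       # 8-bit labels replace the 7 channels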

  15. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    NASA Astrophysics Data System (ADS)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile, truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5 × 5 × 2 in. each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24 × 2.5 × 3 in., called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton-scattered events and coded aperture events. In this thesis, the developed coded aperture, Compton and hybrid imaging algorithms will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as Global Positioning System (GPS) and Inertial Navigation System (INS) data, must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented, as well as localization capability. It will be shown that utilizing imaging information yields signal-to-noise gains over spectroscopic algorithms alone.

  16. SamuROI, a Python-Based Software Tool for Visualization and Analysis of Dynamic Time Series Imaging at Multiple Spatial Scales.

    PubMed

    Rueckl, Martin; Lenzi, Stephen C; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W

    2017-01-01

    The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale, (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales.

  17. SamuROI, a Python-Based Software Tool for Visualization and Analysis of Dynamic Time Series Imaging at Multiple Spatial Scales

    PubMed Central

    Rueckl, Martin; Lenzi, Stephen C.; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W.

    2017-01-01

    The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale, (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales. PMID:28706482

  18. Preliminary Numerical and Experimental Analysis of the Spallation Phenomenon

    NASA Technical Reports Server (NTRS)

    Martin, Alexandre; Bailey, Sean C. C.; Panerai, Francesco; Davuluri, Raghava S. C.; Vazsonyi, Alexander R.; Zhang, Huaibao; Lippay, Zachary S.; Mansour, Nagi N.; Inman, Jennifer A.; Bathel, Brett F.

    2015-01-01

    The spallation phenomenon was studied through numerical analysis using a coupled Lagrangian particle tracking code and a hypersonic aerothermodynamics computational fluid dynamics solver. The results show that carbon emission from spalled particles results in a significant modification of the gas composition of the post-shock layer. Preliminary results from a test campaign at the NASA Langley HYMETS facility are presented. Using automated image processing of high-speed images, two-dimensional velocity vectors of the spalled particles were calculated. In a 30 second test at 100 W/cm2 of cold-wall heat flux, more than 1300 particles were detected, with an average velocity of 102 m/s and a most frequently observed velocity of 60 m/s.

  19. Magnetohydrodynamic modelling of exploding foil initiators

    NASA Astrophysics Data System (ADS)

    Neal, William

    2015-06-01

    Magnetohydrodynamic (MHD) codes are currently being developed, and used, to predict the behaviour of electrically-driven flyer-plates. These codes are of particular interest to the design of exploding foil initiator (EFI) detonators but there is a distinct lack of comparison with high-fidelity experimental data. This study aims to compare a MHD code with a collection of temporally and spatially resolved diagnostics including PDV, dual-axis imaging and streak imaging. The results show the code's excellent representation of the flyer-plate launch and highlight features within the experiment that the model fails to capture.

  20. Massively Multithreaded Maxflow for Image Segmentation on the Cray XMT-2

    PubMed Central

    Bokhari, Shahid H.; Çatalyürek, Ümit V.; Gurcan, Metin N.

    2014-01-01

    Image segmentation is a very important step in the computerized analysis of digital images. The maxflow-mincut approach has been successfully used to obtain minimum-energy segmentations of images in many fields. Classical algorithms for maxflow in networks do not directly lend themselves to efficient parallel implementations on contemporary parallel processors. We present the results of an implementation of the Goldberg-Tarjan preflow-push algorithm on the Cray XMT-2 massively multithreaded supercomputer. This machine has hardware support for 128 threads in each physical processor, a uniformly accessible shared memory of up to 4 TB and hardware synchronization for each 64 bit word. It is thus well-suited to the parallelization of graph theoretic algorithms, such as preflow-push. We describe the implementation of the preflow-push code on the XMT-2 and present the results of timing experiments on a series of synthetically generated as well as real images. Our results indicate very good performance on large images and pave the way for practical applications of this machine architecture for image analysis in a production setting. The largest images we have run are 32000² pixels in size, which are well beyond the largest previously reported in the literature. PMID:25598745
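
    The maxflow/mincut formulation itself can be shown on a toy example. The sketch below builds a tiny source/sink graph over a 1-D "image" and solves it with SciPy's serial maxflow solver; the paper's contribution is a parallel preflow-push implementation, which is not reproduced here.

      import numpy as np
      from scipy.sparse import lil_matrix
      from scipy.sparse.csgraph import maximum_flow

      pixels = np.array([20, 22, 30, 200, 210, 205])      # a tiny 1-D "image"
      n = len(pixels)
      source, sink = n, n + 1
      g = lil_matrix((n + 2, n + 2), dtype=np.int32)

      for i, p in enumerate(pixels):
          g[source, i] = int(p)                 # affinity to the "bright" terminal (source)
          g[i, sink] = 255 - int(p)             # affinity to the "dark" terminal (sink)
      for i in range(n - 1):
          g[i, i + 1] = g[i + 1, i] = 50        # smoothness cost between neighbouring pixels

      result = maximum_flow(g.tocsr(), source, sink)
      print("maxflow value:", result.flow_value)   # the corresponding mincut labels the pixels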

  1. Embedding intensity image into a binary hologram with strong noise resistant capability

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-11-01

    A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by the error diffusion method or the bit truncation coding method. However, the fidelity of the watermark image retrieved from the binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers from serious noise contamination. Numerical simulation results show that the image quality of the intensity image retrieved with our proposed method is superior to that of state-of-the-art work reported previously.
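
    A hedged sketch of the encoding pipeline: JPEG-compress the watermark, protect the byte stream with an error-correcting code, and write the protected bits into a binary host. Reed-Solomon (the reedsolo package) stands in for the BCH code used in the paper, and the "hologram" here is just a random binary array.

      import io
      import numpy as np
      from PIL import Image
      from reedsolo import RSCodec

      watermark = Image.fromarray(
          np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8))
      buf = io.BytesIO()
      watermark.save(buf, format="JPEG", quality=75)            # JPEG-compressed bit stream
      payload = RSCodec(32).encode(buf.getvalue())              # add error-correction parity

      bits = np.unpackbits(np.frombuffer(bytes(payload), dtype=np.uint8))
      hologram = np.random.default_rng(1).integers(0, 2, 512 * 512, dtype=np.uint8)
      hologram[:bits.size] = bits                               # naive embedding: overwrite pixels
      print(f"{bits.size} payload bits embedded in a {hologram.size}-pixel binary hologram")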

  2. Development and evaluation of a Hadamard transform imaging spectrometer and a Hadamard transform thermal imager

    NASA Technical Reports Server (NTRS)

    Harwit, M.; Swift, R.; Wattson, R.; Decker, J.; Paganetti, R.

    1976-01-01

    A spectrometric imager and a thermal imager, which achieve multiplexing by the use of binary optical encoding masks, were developed. The masks are based on orthogonal, pseudorandom digital codes derived from Hadamard matrices. Spatial and/or spectral data is obtained in the form of a Hadamard transform of the spatial and/or spectral scene; computer algorithms are then used to decode the data and reconstruct images of the original scene. The hardware, algorithms and processing/display facility are described. A number of spatial and spatial/spectral images are presented. The achievement of a signal-to-noise improvement due to the signal multiplexing was also demonstrated. An analysis of the results indicates both the situations for which the multiplex advantage may be gained, and the limitations of the technique. A number of potential applications of the spectrometric imager are discussed.
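
    The multiplexing and decoding can be illustrated with a Hadamard matrix directly; real instruments use 0/1 mask rows rather than the ±1 entries assumed below, so this is only a sketch of the transform relationship.

      import numpy as np
      from scipy.linalg import hadamard

      n = 64
      H = hadamard(n)                         # orthogonal ±1 matrix, H @ H.T = n * I
      scene = np.random.default_rng(0).random(n)            # unknown spectrum or scanline
      noise = 0.01 * np.random.default_rng(1).standard_normal(n)
      measurements = H @ scene + noise        # multiplexed detector readings
      recovered = (H.T @ measurements) / n    # decoding: inverse Hadamard transform
      print("max reconstruction error:", np.abs(recovered - scene).max())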

  3. Mobility aid for the blind

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A project to develop an effective mobility aid for blind pedestrians which acquires consecutive images of the scenes before a moving pedestrian, which locates and identifies the pedestrian's path and potential obstacles in the path, which presents path and obstacle information to the pedestrian, and which operates in real-time is discussed. The mobility aid has three principal components: an image acquisition system, an image interpretation system, and an information presentation system. The image acquisition system consists of a miniature, solid-state TV camera which transforms the scene before the blind pedestrian into an image which can be received by the image interpretation system. The image interpretation system is implemented on a microprocessor which has been programmed to execute real-time feature extraction and scene analysis algorithms for locating and identifying the pedestrian's path and potential obstacles. Identity and location information is presented to the pedestrian by means of tactile coding and machine-generated speech.

  4. Quantum image encryption based on restricted geometric and color transformations

    NASA Astrophysics Data System (ADS)

    Song, Xian-Hua; Wang, Shen; Abd El-Latif, Ahmed A.; Niu, Xia-Mu

    2014-08-01

    A novel encryption scheme for quantum images based on restricted geometric and color transformations is proposed. The new strategy comprises efficient permutation and diffusion properties for quantum image encryption. The core idea of the permutation stage is to scramble the codes of the pixel positions through restricted geometric transformations. Then, a new quantum diffusion operation is implemented on the permuted quantum image based on restricted color transformations. The encryption keys of the two stages are generated by two sensitive chaotic maps, which can ensure the security of the scheme. The final step, measurement, is built on a probabilistic model. Statistical analyses of the experiments demonstrate significant improvements in favor of the proposed approach.

  5. Quantitative analysis of image quality for acceptance and commissioning of an MRI simulator with a semiautomatic method.

    PubMed

    Chen, Xinyuan; Dai, Jianrong

    2018-05-01

    Magnetic Resonance Imaging (MRI) simulation differs from diagnostic MRI in purpose, technical requirements, and implementation. We propose a semiautomatic method of image acceptance and commissioning for the scanner, the radiofrequency (RF) coils, and the pulse sequences of an MRI simulator. The ACR MRI accreditation large phantom was used for image quality analysis with seven parameters. Standard ACR sequences with a split head coil were adopted to examine the scanner's basic performance. The performance of the simulation RF coils was measured and compared with different clinical diagnostic coils using the standard sequence. We used simulation sequences with simulation coils to test image quality and the advanced performance of the scanner. Codes and procedures were developed for semiautomatic image quality analysis. When using standard ACR sequences with a split head coil, image quality passed all ACR recommended criteria. The image intensity uniformity with a simulation RF coil decreased by about 34% compared with the eight-channel diagnostic head coil, while the other six image quality parameters were acceptable. These image quality parameters could be improved to more than 85% by built-in intensity calibration methods. In the simulation sequence tests, the contrast resolution was sensitive to the FOV and matrix settings. The geometric distortion of simulation sequences such as T1-weighted and T2-weighted images was well-controlled at the isocenter and 10 cm off-center, within a range of ±1% (2 mm). We developed a semiautomatic image quality analysis method for quantitative evaluation of images and commissioning of an MRI simulator. The baseline performance of the simulation RF coils and pulse sequences has been established for routine QA. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
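
    One of the seven ACR parameters, percent integral uniformity, illustrates what the semiautomatic analysis computes. The sketch below applies the conventional ACR formula to a single circular ROI on a synthetic phantom slice; the actual ACR procedure and the authors' code use more careful ROI placement.

      import numpy as np

      slice_img = np.random.default_rng(0).normal(1000, 20, (256, 256))   # placeholder phantom slice
      yy, xx = np.mgrid[:256, :256]
      roi = (yy - 128) ** 2 + (xx - 128) ** 2 < 90 ** 2                   # central circular ROI

      smax, smin = slice_img[roi].max(), slice_img[roi].min()
      piu = 100 * (1 - (smax - smin) / (smax + smin))   # percent integral uniformity
      print(f"PIU = {piu:.1f}% (ACR typically requires >= 87.5% at 1.5 T)")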

  6. Fractal-Based Image Compression

    DTIC Science & Technology

    1989-09-01

    [Excerpt from the report's list of figures and text:] Figures include a Mercedes-Benz symbol generated using an IFS code, ferns generated with RIFS codes, the construction of the Mercedes-Benz symbol using RIFS, and the regenerated perfect image of the Mercedes-Benz symbol using RIFS. The accompanying text notes that, quite often, this cannot be done with a reasonable number of transforms; as an example, the Mercedes-Benz symbol generated using an IFS code is illustrated.
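
    For illustration, an iterated function system can regenerate a complex image from a handful of affine maps; the sketch below renders the classic Barnsley fern by the chaos game (the report's Mercedes-Benz symbol would need its own, different set of maps).

      import numpy as np

      # each row: a, b, c, d, e, f, probability  for  (x, y) -> (a*x + b*y + e, c*x + d*y + f)
      maps = np.array([
          [ 0.00,  0.00,  0.00, 0.16, 0.00, 0.00, 0.01],
          [ 0.85,  0.04, -0.04, 0.85, 0.00, 1.60, 0.85],
          [ 0.20, -0.26,  0.23, 0.22, 0.00, 1.60, 0.07],
          [-0.15,  0.28,  0.26, 0.24, 0.00, 0.44, 0.07],
      ])
      rng = np.random.default_rng(0)
      points = np.zeros((100_000, 2))
      x = y = 0.0
      for i in range(1, len(points)):
          a, b, c, d, e, f, _ = maps[rng.choice(4, p=maps[:, 6])]
          x, y = a * x + b * y + e, c * x + d * y + f
          points[i] = (x, y)
      # points now samples the fern attractor; rasterizing it reproduces a detailed image
      # from only four affine transforms, the essence of IFS-based image compression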

  7. Run-length encoding graphic rules, biochemically editable designs and steganographical numeric data embedment for DNA-based cryptographical coding system.

    PubMed

    Kawano, Tomonori

    2013-03-01

    There have been a wide variety of approaches for handling pieces of DNA as "unplugged" tools for digital information storage and processing, including a series of studies applied to security-related areas such as DNA-based digital barcodes, watermarks and cryptography. In the present article, novel designs of artificial genes are proposed as media for storing digitally compressed image data for bio-computing purposes, whereas natural genes principally encode proteins. Furthermore, the proposed system allows cryptographical application of DNA through biochemically editable designs with capacity for steganographical embedment of numeric data. As a model case of applying the image-coding DNA technique, numerically and biochemically combined protocols are employed for ciphering given "passwords" and/or secret numbers using DNA sequences. The "passwords" of interest were decomposed into single letters and translated into font images coded on separate DNA chains, with coding regions in which the images are encoded based on the novel run-length encoding rule, and non-coding regions designed for the biochemical editing and remodeling processes that reveal the hidden orientation of the letters composing the original "passwords." The latter processes require molecular biological tools for digestion and ligation of the fragmented DNA molecules, targeting the polymerase chain reaction-engineered termini of the chains. Lastly, additional protocols for steganographical overwriting of numeric data of interest over the image-coding DNA are also discussed.
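
    The run-length encoding step can be sketched independently of the DNA mapping: the code below encodes one scanline of a binary font image as (value, run-length) pairs; the article's rule for translating such runs into nucleotide sequences is not reproduced.

      def run_length_encode(row):
          """Return (value, run_length) pairs for a sequence of 0/1 pixels."""
          runs, count = [], 1
          for prev, cur in zip(row, row[1:]):
              if cur == prev:
                  count += 1
              else:
                  runs.append((prev, count))
                  count = 1
          runs.append((row[-1], count))
          return runs

      row = [0, 0, 0, 1, 1, 1, 1, 0, 0, 1]       # one scanline of a letter glyph
      print(run_length_encode(row))               # [(0, 3), (1, 4), (0, 2), (1, 1)]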

  8. A low noise steganography method for medical images with QR encoding of patient information

    NASA Astrophysics Data System (ADS)

    Patiño-Vanegas, Alberto; Contreras-Ortiz, Sonia H.; Martinez-Santos, Juan C.

    2017-03-01

    This paper proposes an approach to facilitate the process of individualization of patients from their medical images without compromising the inherent confidentiality of medical data. The identification of a patient from a medical image is not often the goal of security methods applied to image records. Usually, any identification data are removed from shared records, and security features are applied to determine ownership. We propose a method for embedding a QR code containing information that can be used to individualize a patient. This is done so that the image to be shared does not differ significantly from the original image. The QR code is distributed in the image by changing several pixels according to a threshold value based on the average value of the adjacent pixels surrounding the point of interest. The results show that the code can be embedded and later fully recovered with minimal changes in the UIQI index - less than a 0.1% difference.

  9. Apparatus and method to achieve high-resolution microscopy with non-diffracting or refracting radiation

    DOEpatents

    Tobin, Jr., Kenneth W.; Bingham, Philip R.; Hawari, Ayman I.

    2012-11-06

    An imaging system employing a coded aperture mask having multiple pinholes is provided. The coded aperture mask is placed at a radiation source to pass the radiation through. The radiation impinges on, and passes through an object, which alters the radiation by absorption and/or scattering. Upon passing through the object, the radiation is detected at a detector plane to form an encoded image, which includes information on the absorption and/or scattering caused by the material and structural attributes of the object. The encoded image is decoded to provide a reconstructed image of the object. Because the coded aperture mask includes multiple pinholes, the radiation intensity is greater than a comparable system employing a single pinhole, thereby enabling a higher resolution. Further, the decoding of the encoded image can be performed to generate multiple images of the object at different distances from the detector plane. Methods and programs for operating the imaging system are also disclosed.
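
    A rough sketch of multi-pinhole coded-aperture imaging: the detector records the object convolved with the mask, and correlation with a decoding pattern recovers an approximate image. A random binary mask is used here; the patented aperture designs (and URA/MURA-style masks generally) have decoding patterns with much cleaner correlation properties.

      import numpy as np
      from scipy.signal import fftconvolve

      rng = np.random.default_rng(0)
      obj = np.zeros((64, 64))
      obj[28:36, 30:34] = 1.0                                   # simple test object
      mask = (rng.random((64, 64)) < 0.5).astype(float)         # random multi-pinhole mask

      encoded = fftconvolve(obj, mask, mode="same")             # detector-plane coded image
      decode = 2 * mask - 1                                     # balanced decoding pattern
      recon = fftconvolve(encoded, decode[::-1, ::-1], mode="same")   # correlation-based decoding
      recon = (recon - recon.min()) / (recon.max() - recon.min())     # normalize for display
      # with a random mask the reconstruction is only approximate; purpose-designed masks
      # give a near-ideal delta-function correlation and hence a sharper decoded image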

  10. Computational strategies for tire monitoring and analysis

    NASA Technical Reports Server (NTRS)

    Danielson, Kent T.; Noor, Ahmed K.; Green, James S.

    1995-01-01

    Computational strategies are presented for the modeling and analysis of tires in contact with pavement. A procedure is introduced for simple and accurate determination of tire cross-sectional geometric characteristics from a digitally scanned image. Three new strategies for reducing the computational effort in the finite element solution of tire-pavement contact are also presented. These strategies take advantage of the observation that footprint loads do not usually stimulate a significant tire response away from the pavement contact region. The finite element strategies differ in their level of approximation and required amount of computer resources. The effectiveness of the strategies is demonstrated by numerical examples of frictionless and frictional contact of the space shuttle Orbiter nose-gear tire. Both an in-house research code and a commercial finite element code are used in the numerical studies.

  11. Nonlinear response of lipid-shelled microbubbles to coded excitation: implications for noninvasive atherosclerosis imaging

    NASA Astrophysics Data System (ADS)

    Shekhar, Himanshu; Doyley, Marvin M.

    2013-03-01

    Nonlinear (subharmonic/harmonic) imaging with ultrasound contrast agents (UCA) could characterize the vasa vasorum, which could help assess the risk associated with atherosclerosis. However, the sensitivity and specificity of high-frequency nonlinear imaging must be improved to enable its clinical translation. The current excitation scheme employs sine-bursts — a strategy that requires high-peak pressures to produce strong nonlinear response from UCA. In this paper, chirp-coded excitation was evaluated to assess its ability to enhance the subharmonic and harmonic response of UCA. Acoustic measurements were conducted with a pair of single-element transducers at 10-MHz transmit frequencies to evaluate the subharmonic and harmonic response of Targestar-P® (Targeson Inc., San Diego, CA, USA), a commercially available phospholipid-encapsulated contrast agent. The results of this study demonstrated a 2 - 3 fold reduction in the subharmonic threshold, and a 4 - 14 dB increase in nonlinear signal-to-noise ratio, with chirp-coded excitation. Therefore, chirp-coded excitation could be well suited for improving the imaging performance of high-frequency harmonic and subharmonic imaging.
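
    A small sketch of chirp-coded excitation and spectral readout, using SciPy to generate the transmit chirp and a toy quadratic distortion to stand in for the bubble echo. Note that such a polynomial nonlinearity produces only harmonics; a genuine subharmonic at half the transmit frequency would require a bubble-dynamics model.

      import numpy as np
      from scipy.signal import chirp

      fs = 100e6                                    # 100 MHz sampling rate
      t = np.arange(0, 4e-6, 1 / fs)                # 4 microsecond burst
      tx = chirp(t, f0=8e6, t1=t[-1], f1=12e6)      # linear chirp sweeping 8-12 MHz

      echo = tx + 0.05 * tx ** 2                    # toy distortion: generates a 2nd harmonic
      spectrum = np.abs(np.fft.rfft(echo * np.hanning(len(echo))))
      freqs = np.fft.rfftfreq(len(echo), 1 / fs)

      harmonic_band = (freqs > 18e6) & (freqs < 22e6)   # 2nd-harmonic band near 20 MHz
      print("relative 2nd-harmonic energy:", spectrum[harmonic_band].sum() / spectrum.sum())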

  12. Multiple description distributed image coding with side information for mobile wireless transmission

    NASA Astrophysics Data System (ADS)

    Wu, Min; Song, Daewon; Chen, Chang Wen

    2005-03-01

    Multiple description coding (MDC) is a source coding technique that involves coding the source information into multiple descriptions and then transmitting them over different channels in a packet network or an error-prone wireless environment to achieve graceful degradation if parts of the descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero-tree image coding system for mobile wireless transmission. We provide two innovations to achieve excellent error-resilient capability. First, when MDC is applied to wavelet subband-based image coding, it is possible to introduce correlation between the descriptions in each subband. We consider using such correlation, as well as a potentially error-corrupted description, as side information in the decoding to formulate the MDC decoding as a Wyner-Ziv decoding problem. If only part of the descriptions is lost, the correlation information is still available, and the proposed Wyner-Ziv decoder can recover the description by using the correlation information and the error-corrupted description as side information. Secondly, in each description, single-bitstream wavelet zero-tree coding is very vulnerable to channel errors: the first bit error may cause the decoder to discard all subsequent bits, whether or not the subsequent bits are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with a multiple-wavelet-tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to the parent-child relationship and then code them separately by the SPIHT algorithm to form multiple bitstreams. Such decomposition is able to reduce error propagation and therefore improve the error-correcting capability of the Wyner-Ziv decoder. Experimental results show that the proposed scheme not only exhibits excellent error-resilient performance but also demonstrates graceful degradation over the packet loss rate.

  13. JPEG 2000 Encoding with Perceptual Distortion Control

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Liu, Zhen; Karam, Lina J.

    2008-01-01

    An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total number of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.

  14. FocusStack and StimServer: a new open source MATLAB toolchain for visual stimulation and analysis of two-photon calcium neuronal imaging data.

    PubMed

    Muir, Dylan R; Kampa, Björn M

    2014-01-01

    Two-photon calcium imaging of neuronal responses is an increasingly accessible technology for probing population responses in cortex at single cell resolution, and with reasonable and improving temporal resolution. However, analysis of two-photon data is usually performed using ad-hoc solutions. To date, no publicly available software exists for straightforward analysis of stimulus-triggered two-photon imaging experiments. In addition, the increasing data rates of two-photon acquisition systems imply increasing cost of computing hardware required for in-memory analysis. Here we present a Matlab toolbox, FocusStack, for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with minimal memory footprint. We also present a Matlab toolbox, StimServer, for generation and sequencing of visual stimuli, designed to be triggered over a network link from a two-photon acquisition system. FocusStack is compatible out of the box with several existing two-photon acquisition systems, and is simple to adapt to arbitrary binary file formats. Analysis tools such as stack alignment for movement correction, automated cell detection and peri-stimulus time histograms are already provided, and further tools can be easily incorporated. Both packages are available as publicly-accessible source-code repositories.

  15. FocusStack and StimServer: a new open source MATLAB toolchain for visual stimulation and analysis of two-photon calcium neuronal imaging data

    PubMed Central

    Muir, Dylan R.; Kampa, Björn M.

    2015-01-01

    Two-photon calcium imaging of neuronal responses is an increasingly accessible technology for probing population responses in cortex at single cell resolution, and with reasonable and improving temporal resolution. However, analysis of two-photon data is usually performed using ad-hoc solutions. To date, no publicly available software exists for straightforward analysis of stimulus-triggered two-photon imaging experiments. In addition, the increasing data rates of two-photon acquisition systems imply increasing cost of computing hardware required for in-memory analysis. Here we present a Matlab toolbox, FocusStack, for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with minimal memory footprint. We also present a Matlab toolbox, StimServer, for generation and sequencing of visual stimuli, designed to be triggered over a network link from a two-photon acquisition system. FocusStack is compatible out of the box with several existing two-photon acquisition systems, and is simple to adapt to arbitrary binary file formats. Analysis tools such as stack alignment for movement correction, automated cell detection and peri-stimulus time histograms are already provided, and further tools can be easily incorporated. Both packages are available as publicly-accessible source-code repositories. PMID:25653614

  16. Public sentiment and discourse about Zika virus on Instagram.

    PubMed

    Seltzer, E K; Horst-Martz, E; Lu, M; Merchant, R M

    2017-09-01

    Social media have strongly influenced the awareness and perceptions of public health emergencies, and a considerable amount of social media content is now shared through images, rather than text alone. This content can impact preparedness and response due to the popularity and real-time nature of social media platforms. We sought to explore how the image-sharing platform Instagram is used for information dissemination and conversation during the current Zika outbreak. This was a retrospective review of publicly posted images about Zika on Instagram. Using the keyword '#zika' we identified 500 images posted on Instagram from May to August 2016. Images were coded by three reviewers and contextual information was collected for each image about sentiment, image type, content, audience, geography, reliability, and engagement. Of 500 images tagged with #zika, 342 (68%) contained content actually related to Zika. Of the 342 Zika-specific images, 299 were coded as 'health' and 193 were coded 'public interest'. Some images had multiple 'health' and 'public interest' codes. Health images tagged with #zika were primarily related to transmission (43%, 129/299) and prevention (48%, 145/299). Transmission-related posts were more often about mosquito-human transmission (73%, 94/129) than human-human transmission (27%, 35/129). Mosquito-bite prevention posts outnumbered safe-sex prevention posts (84%, 122/145 vs. 16%, 23/145). Images with a target audience were primarily aimed at women (95%, 36/38). Many posts (60%, 61/101) included misleading, incomplete, or unclear information about the virus. Additionally, many images expressed fear and negative sentiment (51%, 79/156). Instagram can be used to characterize public sentiment and highlight areas of focus for public health, such as correcting misleading or incomplete information or expanding messages to reach diverse audiences. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  17. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  18. Incorporating 3-dimensional models in online articles.

    PubMed

    Cevidanes, Lucia H S; Ruellas, Antonio C O; Jomier, Julien; Nguyen, Tung; Pieper, Steve; Budin, Francois; Styner, Martin; Paniagua, Beatriz

    2015-05-01

    The aims of this article are to introduce the capability to view and interact with 3-dimensional (3D) surface models in online publications, and to describe how to prepare surface models for such online 3D visualizations. Three-dimensional image analysis methods include image acquisition, construction of surface models, registration in a common coordinate system, visualization of overlays, and quantification of changes. Cone-beam computed tomography scans were acquired as volumetric images that can be visualized as 3D projected images or used to construct polygonal meshes or surfaces of specific anatomic structures of interest. The anatomic structures of interest in the scans can be labeled with color (3D volumetric label maps), and then the scans are registered in a common coordinate system using a target region as the reference. The registered 3D volumetric label maps can be saved in .obj, .ply, .stl, or .vtk file formats and used for overlays, quantification of differences in each of the 3 planes of space, or color-coded graphic displays of 3D surface distances. All registered 3D surface models in this study were saved in .vtk file format and loaded in the Elsevier 3D viewer. In this study, we describe possible ways to visualize the surface models constructed from cone-beam computed tomography images using 2D and 3D figures. The 3D surface models are available in the article's online version for viewing and downloading using the reader's software of choice. These 3D graphic displays are represented in the print version as 2D snapshots. Overlays and color-coded distance maps can be displayed using the reader's software of choice, allowing graphic assessment of the location and direction of changes or morphologic differences relative to the structure of reference. The interpretation of 3D overlays and quantitative color-coded maps requires basic knowledge of 3D image analysis. When submitting manuscripts, authors can now upload 3D models that will allow readers to interact with or download them. Such interaction with 3D models in online articles now will give readers and authors better understanding and visualization of the results. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  19. SU-F-T-62: Three-Dimensional Dosimetric Gamma Analysis for Impacts of Tissue Inhomogeneity Using Monte Carlo Simulation in Intracavitary Brachytheray for Cervix Carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Tran Thi Thao; Nakamoto, Takahiro; Shibayama, Yusuke

    Purpose: The aim of this study was to investigate the impact of tissue inhomogeneity on dose distributions using a three-dimensional (3D) gamma analysis in cervical intracavitary brachytherapy with Monte Carlo (MC) simulations. Methods: MC simulations for comparison of dose calculations were performed in a water phantom and in a series of CT images of a cervical cancer patient (stage: Ib; age: 27) by employing the MC code Particle and Heavy Ion Transport code System (PHITS) version 2.73. The ¹⁹²Ir source was set at fifteen dwell positions, according to clinical practice, in an applicator consisting of a tandem and two ovoids. Dosimetric comparisons were performed for the dose distributions in the water phantom and the CT images using gamma index images and gamma pass rates (%). The gamma index is the minimum Euclidean distance between the two 3D spatial dose distributions of the water phantom and the CT images in the same space. The gamma pass rate (%) indicates the percentage of agreement points, i.e., points at which the two dose distributions are similar within an acceptance criterion (3 mm/3%). The volumes of physical and clinical interest for the gamma analysis were the whole calculated volume and a region receiving more than t% of the dose (close to the target), respectively. Results: The gamma pass rates were 77.1% for the whole calculated volume and 92.1% for the region within the 1% dose region. Differences of 7.7% to 22.9% between the two dose distributions in the water phantom and CT images were found around the applicator region and near the target. Conclusion: This work revealed a large difference in the dose distributions near the target in the presence of tissue inhomogeneity. Therefore, tissue inhomogeneity should be corrected for in the dose calculation for clinical treatment.
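
    The gamma criterion described above can be written as a brute-force calculation on small dose grids, as sketched below with a 3%/3 mm global criterion; production analyses use optimized implementations, and the voxel spacing and dose grids here are placeholders.

      import numpy as np

      def gamma_pass_rate(ref, ev, spacing_mm=2.0, dd=0.03, dta_mm=3.0, cutoff=0.1):
          """Global 3%/3 mm gamma pass rate between two dose grids of equal shape."""
          norm = ref.max()
          coords = np.indices(ref.shape).reshape(3, -1).T * spacing_mm   # voxel positions (mm)
          ev_flat = ev.ravel()
          passed = total = 0
          for p, dose in zip(coords, ref.ravel()):
              if dose < cutoff * norm:                                   # skip low-dose voxels
                  continue
              dist_term = ((coords - p) ** 2).sum(axis=1) / dta_mm ** 2
              dose_term = ((ev_flat - dose) / (dd * norm)) ** 2
              gamma = np.sqrt((dist_term + dose_term).min())
              passed += gamma <= 1.0
              total += 1
          return 100.0 * passed / total

      reference = np.random.default_rng(0).random((8, 8, 8))     # placeholder "water phantom" dose
      evaluated = reference + 0.02 * np.random.default_rng(1).standard_normal(reference.shape)
      print(f"gamma pass rate: {gamma_pass_rate(reference, evaluated):.1f}%")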

  20. Deep Hashing for Scalable Image Search.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2017-05-01

    In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary codes learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term into the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes with the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results compared with the state of the art.
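
    The three top-layer constraints can be written directly as loss terms. The PyTorch sketch below is an illustrative reduction of the approach (random features, arbitrary layer sizes and loss weights), not the authors' network or training procedure.

      import torch
      import torch.nn as nn

      class DeepHash(nn.Module):
          def __init__(self, in_dim=512, code_bits=48):
              super().__init__()
              self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.Tanh(),
                                       nn.Linear(256, code_bits), nn.Tanh())
          def forward(self, x):
              return self.net(x)                       # real-valued codes in (-1, 1)

      model = DeepHash()
      x = torch.randn(64, 512)                         # a batch of image features
      h = model(x)
      b = torch.sign(h)                                # binary codes in {-1, +1}

      loss_quant = (h - b).pow(2).mean()               # 1) minimize binarization loss
      loss_balance = h.mean(dim=0).pow(2).mean()       # 2) each bit should split ~50/50
      gram = (h.t() @ h) / h.size(0)
      loss_decorr = (gram - torch.eye(h.size(1))).pow(2).mean()   # 3) independent bits
      loss = loss_quant + 0.1 * loss_balance + 0.01 * loss_decorr
      loss.backward()                                  # optimize with any torch optimizer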

  1. QR code optical encryption using spatially incoherent illumination

    NASA Astrophysics Data System (ADS)

    Cheremkhin, P. A.; Krasnov, V. V.; Rodin, V. G.; Starikov, R. S.

    2017-02-01

    Optical encryption is an actively developing field of science. The majority of encryption techniques use coherent illumination and suffer from speckle noise, which severely limits their applicability. The spatially incoherent encryption technique does not have this drawback, but its effectiveness is dependent on the Fourier spectrum properties of the image to be encrypted. The application of a quick response (QR) code in the capacity of a data container solves this problem, and the embedded error correction code also enables errorless decryption. The optical encryption of digital information in the form of QR codes using spatially incoherent illumination was implemented experimentally. The encryption is based on the optical convolution of the image to be encrypted with the kinoform point spread function, which serves as an encryption key. Two liquid crystal spatial light modulators were used in the experimental setup for the QR code and the kinoform imaging, respectively. The quality of the encryption and decryption was analyzed in relation to the QR code size. Decryption was conducted digitally. The successful decryption of encrypted QR codes of up to 129 × 129 pixels was demonstrated. A comparison with the coherent QR code encryption technique showed that the proposed technique has a signal-to-noise ratio that is at least two times higher.
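
    A digital sketch of the convolution relationship underlying the scheme: the QR image is convolved with a point-spread function (a random PSF stands in for the kinoform), and decryption is done by regularized inverse filtering; the optical setup with spatial light modulators is not modelled.

      import numpy as np

      rng = np.random.default_rng(0)
      qr = (rng.random((129, 129)) > 0.5).astype(float)     # placeholder QR-code pattern
      psf = rng.random((129, 129))
      psf /= psf.sum()                                      # stand-in for the kinoform PSF

      QR, PSF = np.fft.fft2(qr), np.fft.fft2(psf)
      encrypted = np.real(np.fft.ifft2(QR * PSF))           # incoherent intensity convolution

      eps = 1e-3                                            # regularization against noise
      decrypted = np.real(np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(PSF) /
                                       (np.abs(PSF) ** 2 + eps)))
      # the QR code's built-in error correction absorbs the residual decryption noise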

  2. Development of ultra-high temperature material characterization capabilities using digital image correlation analysis

    NASA Astrophysics Data System (ADS)

    Cline, Julia Elaine

    2011-12-01

    Ultra-high temperature deformation measurements are required to characterize the thermo-mechanical response of material systems used in thermal protection systems for aerospace applications. The use of conventional surface-contacting strain measurement techniques is not practical in elevated temperature conditions. Technological advancements in digital imaging provide impetus to measure full-field displacement and determine strain fields with sub-pixel accuracy by image processing. In this work, an Instron electromechanical axial testing machine with a custom-designed high-temperature gripping mechanism is used to apply quasi-static tensile loads to graphite specimens heated to 2000°F (1093°C). Specimen heating via the Joule effect is achieved and maintained with a custom-designed temperature control system. Images are captured at monotonically increasing load levels throughout the test duration using an 18 megapixel Canon EOS Rebel T2i digital camera with a modified Schneider-Kreuznach telecentric lens and a combination of blue-light illumination and a narrow band-pass filter system. Images are processed using an open-source Matlab-based digital image correlation (DIC) code. Validation of the source code is performed using Mathematica-generated images with specified known displacement fields in order to gain confidence in accurate software tracking capabilities. Room-temperature results are compared with extensometer readings. Ultra-high temperature strain measurements for graphite are obtained at low load levels, demonstrating the potential for non-contacting digital image correlation techniques to accurately determine full-field strain measurements at ultra-high temperature. Recommendations are given to improve the experimental set-up to achieve displacement field measurements accurate to 1/10 pixel and strain field accuracy of less than 2%.
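
    The displacement-tracking core of DIC can be sketched with a phase cross-correlation between a reference and a deformed speckle subset; the sketch below uses scikit-image on a synthetic speckle pattern with an imposed sub-pixel shift, not the open-source MATLAB DIC code used in the thesis.

      import numpy as np
      from scipy.ndimage import fourier_shift, gaussian_filter
      from skimage.registration import phase_cross_correlation

      rng = np.random.default_rng(0)
      reference = gaussian_filter(rng.random((128, 128)), 1.5)          # synthetic speckle subset
      shifted = np.real(np.fft.ifft2(fourier_shift(np.fft.fft2(reference), (0.4, -0.25))))

      shift, error, _ = phase_cross_correlation(reference, shifted, upsample_factor=100)
      print("estimated displacement (rows, cols):", shift)              # ~(-0.4, 0.25) pixels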

  3. Regional and socioeconomic disparities in emergency department use of radiographic imaging for acute pediatric sinusitis.

    PubMed

    Sedaghat, Ahmad R; Cunningham, Michael J; Ishman, Stacey L

    2014-01-01

    Acute pediatric sinusitis (APS) is a common complication of pediatric upper respiratory tract infections. Children with all degrees of APS severity may present to emergency departments (EDs) for evaluation and management. This study was designed to analyze the use of imaging in APS presenting to U.S. EDs. A cross-sectional analysis of the 2008 National Emergency Department Sample database was performed. A total of 101,660 children aged ≤18 years assigned at least one ICD-9 code for APS were identified. Current procedural terminology codes for sinus plain film radiographs, computed tomography (CT), and magnetic resonance imaging identified children who underwent sinus imaging. Association of performance of sinus imaging was sought with multiple predictor variables including clinicodemographic and hospital characteristics. The use of any imaging was associated with older age (odds ratio [OR] = 1.07; p < 0.001), male gender (OR = 1.57; p < 0.001), and diagnosis of chronic rhinosinusitis (OR = 2.46; p < 0.001). Imaging was more common in metropolitan teaching (OR = 1.40; p < 0.001) and nonteaching (OR = 5.64; p < 0.001) hospitals. Markers of higher socioeconomic status--private health insurance (OR = 1.37; p < 0.001) and higher income level (OR = 1.96; p < 0.001)--were associated with greater use of imaging, especially CT scans. The use of ED imaging in APS is appropriately associated with factors known to be associated with APS complications. However, additional disparities with respect to regional and socioeconomic factors exist. Interventions to eliminate these health care disparities in use of imaging resources may lead to quality improvement in care and outcomes for APS.

  4. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing

    PubMed Central

    2017-01-01

    Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models. PMID:28742816

  5. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing.

    PubMed

    Hosoya, Haruo; Hyvärinen, Aapo

    2017-07-01

    Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.

  6. An interactive toolbox for atlas-based segmentation and coding of volumetric images

    NASA Astrophysics Data System (ADS)

    Menegaz, G.; Luti, S.; Duay, V.; Thiran, J.-Ph.

    2007-03-01

    Medical imaging poses the great challenge of having compression algorithms that are lossless for diagnostic and legal reasons and yet provide high compression rates for reduced storage and transmission time. The images usually consist of a region of interest representing the part of the body under investigation surrounded by a "background", which is often noisy and not of diagnostic interest. In this paper, we propose a ROI-based 3D coding system integrating both the segmentation and the compression tools. The ROI is extracted by an atlas-based 3D segmentation method combining active contours with information theoretic principles, and the resulting segmentation map is exploited for ROI-based coding. The system is equipped with a GUI allowing the medical doctors to supervise the segmentation process and eventually reshape the detected contours at any point. The process is initiated by the user through the selection of either one pre-defined reference image or one image of the volume to be used as the 2D "atlas". The object contour is successively propagated from one frame to the next where it is used as the initial border estimation. In this way, the entire volume is segmented based on a unique 2D atlas. The resulting 3D segmentation map is exploited for adaptive coding of the different image regions. Two coding systems were considered: the JPEG 3D standard and the 3D-SPIHT. The evaluation of the performance with respect to both segmentation and coding proved the high potential of the proposed system in providing an integrated, low-cost and computationally effective solution for CAD and PACS systems.

  7. Stand-alone front-end system for high- frequency, high-frame-rate coded excitation ultrasonic imaging.

    PubMed

    Park, Jinhyoung; Hu, Changhong; Shung, K Kirk

    2011-12-01

    A stand-alone front-end system for high-frequency coded excitation imaging was implemented to achieve a wider dynamic range. The system included an arbitrary waveform amplifier, an arbitrary waveform generator, an analog receiver, a motor position interpreter, a motor controller and power supplies. The digitized arbitrary waveforms at a sampling rate of 150 MHz could be programmed and converted to an analog signal. The pulse was subsequently amplified to excite an ultrasound transducer, and the maximum output voltage level achieved was 120 V(pp). The bandwidth of the arbitrary waveform amplifier was from 1 to 70 MHz. The noise figure of the preamplifier was less than 7.7 dB and the bandwidth was 95 MHz. Phantoms and biological tissues were imaged at a frame rate as high as 68 frames per second (fps) to evaluate the performance of the system. During the measurement, 40-MHz lithium niobate (LiNbO(3)) single-element lightweight (<0.28 g) transducers were utilized. The wire target measurement showed that the -6-dB axial resolution of a chirp-coded excitation was 50 μm and lateral resolution was 120 μm. The echo signal-to-noise ratios were found to be 54 and 65 dB for the short burst and coded excitation, respectively. The contrast resolution in a sphere phantom study was estimated to be 24 dB for the chirp-coded excitation and 15 dB for the short burst modes. In an in vivo study, zebrafish and mouse hearts were imaged. Boundaries of the zebrafish heart in the image could be differentiated because of the low-noise operation of the implemented system. In mouse heart images, valves and chambers could be readily visualized with the coded excitation.
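
    The dynamic-range advantage of coded excitation comes from transmitting a long, energy-rich waveform and restoring axial resolution by pulse compression. The sketch below illustrates the principle with a linear chirp and a matched filter; the sampling rate, bandwidth, and reflector layout are arbitrary example values, not the parameters of the system described above.

```python
import numpy as np
from scipy.signal import chirp, correlate

fs = 600e6                          # example sampling rate, Hz
t = np.arange(0, 2e-6, 1 / fs)      # 2-microsecond coded excitation
tx = chirp(t, f0=20e6, t1=t[-1], f1=60e6)   # linear FM chirp, 20-60 MHz

# Toy echo trace: two point reflectors of different strength, plus noise.
echo = np.zeros(4096)
for delay, amp in [(1000, 1.0), (1600, 0.3)]:
    echo[delay:delay + tx.size] += amp * tx
echo += 0.2 * np.random.default_rng(1).standard_normal(echo.size)

# Pulse compression: matched filtering with the transmitted chirp restores
# axial resolution while keeping the SNR gain of the long waveform.
compressed = correlate(echo, tx, mode="same")
print("strongest reflector near sample", np.argmax(np.abs(compressed)))
```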

  8. Instant Grainification: Real-Time Grain-Size Analysis from Digital Images in the Field

    NASA Astrophysics Data System (ADS)

    Rubin, D. M.; Chezar, H.

    2007-12-01

    Over the past few years, digital cameras and underwater microscopes have been developed to collect in-situ images of sand-sized bed sediment, and software has been developed to measure grain size from those digital images (Chezar and Rubin, 2004; Rubin, 2004; Rubin et al., 2006). Until now, all image processing and grain-size analysis was done back in the office where images were uploaded from cameras and processed on desktop computers. Computer hardware has become small and rugged enough to process images in the field, which for the first time allows real-time grain-size analysis of sand-sized bed sediment. We present such a system consisting of a weatherproof tablet computer, open-source image-processing software (autocorrelation code of Rubin, 2004, running under Octave and Cygwin), and a digital camera with macro lens. Chezar, H., and Rubin, D., 2004, Underwater microscope system: U.S. Patent and Trademark Office, patent number 6,680,795, January 20, 2004. Rubin, D.M., 2004, A simple autocorrelation algorithm for determining grain size from digital images of sediment: Journal of Sedimentary Research, v. 74, p. 160-165. Rubin, D.M., Chezar, H., Harney, J.N., Topping, D.J., Melis, T.S., and Sherwood, C.R., 2006, Underwater microscope for measuring spatial and temporal changes in bed-sediment grain size: USGS Open-File Report 2006-1360.
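
    In the spirit of the autocorrelation approach cited above (Rubin, 2004), coarser sediment images decorrelate more slowly with pixel offset. The sketch below computes a horizontal autocorrelation curve and reports the offset at which it first drops below a threshold as a grain-size proxy; the threshold and any mapping to physical size are assumptions that would need calibration against sieved samples, as in the published method.

```python
import numpy as np

def correlation_vs_offset(img, max_offset=30):
    """Autocorrelation of image intensity as a function of horizontal offset."""
    img = (img - img.mean()) / (img.std() + 1e-12)
    corr = []
    for d in range(1, max_offset + 1):
        a, b = img[:, :-d].ravel(), img[:, d:].ravel()
        corr.append(np.mean(a * b))
    return np.array(corr)

def grain_size_proxy(img, threshold=0.5):
    """Offset (pixels) at which correlation first falls below the threshold."""
    corr = correlation_vs_offset(img)
    below = np.nonzero(corr < threshold)[0]
    return below[0] + 1 if below.size else len(corr)
```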

  9. Voxel-Based Lesion Symptom Mapping of Coarse Coding and Suppression Deficits in Patients With Right Hemisphere Damage

    PubMed Central

    Tompkins, Connie A.; Meigh, Kimberly M.; Prat, Chantel S.

    2015-01-01

    Purpose This study examined right hemisphere (RH) neuroanatomical correlates of lexical–semantic deficits that predict narrative comprehension in adults with RH brain damage. Coarse semantic coding and suppression deficits were related to lesions by voxel-based lesion symptom mapping. Method Participants were 20 adults with RH cerebrovascular accidents. Measures of coarse coding and suppression deficits were computed from lexical decision reaction times at short (175 ms) and long (1000 ms) prime-target intervals. Lesions were drawn on magnetic resonance imaging images and through normalization were registered on an age-matched brain template. Voxel-based lesion symptom mapping analysis was applied to build a general linear model at each voxel. Z score maps were generated for each deficit, and results were interpreted using automated anatomical labeling procedures. Results A deficit in coarse semantic activation was associated with lesions to the RH posterior middle temporal gyrus, dorsolateral prefrontal cortex, and lenticular nuclei. A maintenance deficit for coarsely coded representations involved the RH temporal pole and dorsolateral prefrontal cortex more medially. Ineffective suppression implicated lesions to the RH inferior frontal gyrus and subcortical regions, as hypothesized, along with the rostral temporal pole. Conclusion Beyond their scientific implications, these lesion–deficit correspondences may help inform the clinical diagnosis and enhance decisions about candidacy for deficit-focused treatment to improve narrative comprehension in individuals with RH damage. PMID:26425785

  10. Voxel-Based Lesion Symptom Mapping of Coarse Coding and Suppression Deficits in Patients With Right Hemisphere Damage.

    PubMed

    Yang, Ying; Tompkins, Connie A; Meigh, Kimberly M; Prat, Chantel S

    2015-11-01

    This study examined right hemisphere (RH) neuroanatomical correlates of lexical-semantic deficits that predict narrative comprehension in adults with RH brain damage. Coarse semantic coding and suppression deficits were related to lesions by voxel-based lesion symptom mapping. Participants were 20 adults with RH cerebrovascular accidents. Measures of coarse coding and suppression deficits were computed from lexical decision reaction times at short (175 ms) and long (1000 ms) prime-target intervals. Lesions were drawn on magnetic resonance imaging images and through normalization were registered on an age-matched brain template. Voxel-based lesion symptom mapping analysis was applied to build a general linear model at each voxel. Z score maps were generated for each deficit, and results were interpreted using automated anatomical labeling procedures. A deficit in coarse semantic activation was associated with lesions to the RH posterior middle temporal gyrus, dorsolateral prefrontal cortex, and lenticular nuclei. A maintenance deficit for coarsely coded representations involved the RH temporal pole and dorsolateral prefrontal cortex more medially. Ineffective suppression implicated lesions to the RH inferior frontal gyrus and subcortical regions, as hypothesized, along with the rostral temporal pole. Beyond their scientific implications, these lesion-deficit correspondences may help inform the clinical diagnosis and enhance decisions about candidacy for deficit-focused treatment to improve narrative comprehension in individuals with RH damage.

  11. Glucose modulates food-related salience coding of midbrain neurons in humans.

    PubMed

    Ulrich, Martin; Endres, Felix; Kölle, Markus; Adolph, Oliver; Widenhorn-Müller, Katharina; Grön, Georg

    2016-12-01

    Although early rat studies demonstrated that administration of glucose diminishes dopaminergic midbrain activity, evidence in humans has been lacking so far. In the present functional magnetic resonance imaging study, glucose was intravenously infused in healthy human male participants while they viewed images depicting low-caloric food (LC), high-caloric food (HC), and non-food (NF) during a food/NF discrimination task. Analysis of brain activation focused on the ventral tegmental area (VTA) as the origin of the mesolimbic system involved in salience coding. Under unmodulated fasting baseline conditions, VTA activation was greater during HC compared with LC food cues. Subsequent to infusion of glucose, this difference in VTA activation as a function of caloric load leveled off and even reversed. In a control group not receiving glucose, VTA activation during HC relative to LC cues remained stable throughout the course of the experiment. Similar treatment-specific patterns of brain activation were observed for the hypothalamus. The present findings show for the first time in humans that glucose infusion modulates salience coding mediated by the VTA.

  12. DigitSeis: A New Digitization Software and its Application to the Harvard-Adam Dziewoński Observatory Collection

    NASA Astrophysics Data System (ADS)

    Bogiatzis, P.; Altoé, I. L.; Karamitrou, A.; Ishii, M.; Ishii, H.

    2015-12-01

    DigitSeis is a new open-source, interactive digitization software written in MATLAB that converts digital, raster images of analog seismograms to readily usable, discretized time series using image processing algorithms. DigitSeis automatically identifies and corrects for various geometrical distortions of seismogram images that are acquired through the original recording, storage, and scanning procedures. With human supervision, the software further identifies and classifies important features such as time marks and notes, corrects time-mark offsets from the main trace, and digitizes the combined trace with an analysis to obtain as accurate timing as possible. Although a large effort has been made to minimize the human input, DigitSeis provides interactive tools for challenging situations such as trace crossings and stains in the paper. The effectiveness of the software is demonstrated with the digitization of seismograms that are over half a century old from the Harvard-Adam Dziewoński observatory that is still in operation as a part of the Global Seismographic Network (station code HRV and network code IU). The spectral analysis of the digitized time series shows no spurious features that may be related to the occurrence of minute and hour marks. They also display signals associated with significant earthquakes, and a comparison of the spectrograms with modern recordings reveals similarities in the background noise.

  13. Evidence-Based Imaging Guidelines and Medicare Payment Policy

    PubMed Central

    Sistrom, Christopher L; McKay, Niccie L

    2008-01-01

    Objective This study examines the relationship between evidence-based appropriateness criteria for neurologic imaging procedures and Medicare payment determinations. The primary research question is whether Medicare is more likely to pay for imaging procedures as the level of appropriateness increases. Data Sources The American College of Radiology Appropriateness Criteria (ACRAC) for neurological imaging, ICD-9-CM codes, CPT codes, and payment determinations by the Medicare Part B carrier for Florida and Connecticut. Study Design Cross-sectional study of appropriateness criteria and Medicare Part B payment policy for neurological imaging. In addition to descriptive and bivariate statistics, multivariate logistic regression on payment determination (yes or no) was performed. Data Collection Methods The American College of Radiology Appropriateness Criteria (ACRAC) documents specific to neurological imaging, ICD-9-CM codes, and CPT codes were used to create 2,510 medical condition/imaging procedure combinations, with associated appropriateness scores (coded as low/middle/high). Principal Findings As the level of appropriateness increased, more medical condition/imaging procedure combinations were payable (low = 61 percent, middle = 70 percent, and high = 74 percent). Logistic regression indicated that the odds of a medical condition/imaging procedure combination with a middle level of appropriateness being payable was 48 percent higher than for an otherwise similar combination with a low appropriateness score (95 percent CI on odds ratio=1.19–1.84). The odds ratio for being payable between high and low levels of appropriateness was 2.25 (95 percent CI: 1.66–3.04). Conclusions Medicare could improve its payment determinations by taking advantage of existing clinical guidelines, appropriateness criteria, and other authoritative resources for evidence-based practice. Such an approach would give providers a financial incentive that is aligned with best-practice medicine. In particular, Medicare should review and update its payment policies to reflect current information on the appropriateness of alternative imaging procedures for the same medical condition. PMID:18454778

  14. A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging

    PubMed Central

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-01-01

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy. PMID:24129018

  15. A coded structured light system based on primary color stripe projection and monochrome imaging.

    PubMed

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-10-14

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.
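
    For reference, the time-multiplexing Gray-code strategy used as the benchmark above can be sketched in a few lines: project binary stripe patterns, one per bit, and decode each camera pixel's projector column from the observed bit sequence. The sketch below shows only the ideal, noise-free encode/decode round trip; thresholding real camera images and handling defocus are omitted.

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """Binary-reflected Gray-code stripe patterns, shape (n_bits, width)."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)
    bits = (gray[None, :] >> np.arange(n_bits - 1, -1, -1)[:, None]) & 1
    return bits.astype(np.uint8)

def decode_gray(bits):
    """bits: (n_bits, n_pixels) observed 0/1 values, MSB first."""
    gray = np.zeros(bits.shape[1], dtype=int)
    for b in bits:
        gray = (gray << 1) | b
    binary = gray.copy()              # Gray-to-binary via prefix XOR
    shift = gray >> 1
    while np.any(shift):
        binary ^= shift
        shift >>= 1
    return binary                     # projector column index per pixel

patterns = gray_code_patterns(width=1024, n_bits=10)
assert np.array_equal(decode_gray(patterns), np.arange(1024))
```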

  16. Wavelet-based image compression using shuffling and bit plane correlation

    NASA Astrophysics Data System (ADS)

    Kim, Seungjong; Jeong, Jechang

    2000-12-01

    In this paper, we propose a wavelet-based image compression method using shuffling and bit plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on the quantized coefficients, (2) choosing the arithmetic coding context according to the maximum correlation direction. The experimental results are comparable or superior to those of existing coders for some images with low correlation.
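
    The proposed shuffling and context-selection steps are not reproduced here, but the representation they operate on is easy to sketch: a wavelet transform followed by bit-plane decomposition of the quantized coefficients. The example below uses a one-level Haar transform in plain NumPy; the image content and quantization are illustrative.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: returns LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

img = np.random.default_rng(2).integers(0, 256, (64, 64)).astype(float)
coeffs = np.concatenate([c.ravel() for c in haar2d(img)])
q = np.round(coeffs).astype(np.int32)

signs = (q < 0).astype(np.uint8)                 # the sign bit plane
mag = np.abs(q)
n_planes = int(mag.max()).bit_length()
planes = [(mag >> p) & 1 for p in range(n_planes - 1, -1, -1)]  # MSB first
print("bit planes:", n_planes, "ones in MSB plane:", planes[0].mean())
```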

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceglio, N.M.; George, E.V.; Brooks, K.M.

    The first successful demonstration of high-resolution, tomographic imaging of a laboratory plasma using coded imaging techniques is reported. Zone plate coded imaging (ZPCI) has been used to image the x-ray emission from laser-compressed DT-filled microballoons. The zone plate camera viewed an x-ray spectral window extending from below 2 keV to above 6 keV. It exhibited a resolution of approximately 8 μm, a magnification factor of approximately 13, and subtended a radiation collection solid angle at the target of approximately 10^-2 sr. X-ray images using ZPCI were compared with those taken using a grazing incidence reflection x-ray microscope. The agreement was excellent. In addition, the zone plate camera produced tomographic images. The nominal tomographic resolution was approximately 75 μm. This allowed three-dimensional viewing of target emission from a single shot in planar "slices". In addition to its tomographic capability, the great advantage of the coded imaging technique lies in its applicability to hard (greater than 10 keV) x-ray and charged particle imaging. Experiments involving coded imaging of the suprathermal x-ray and high-energy alpha particle emission from laser-compressed microballoon targets are discussed.

  18. Progressive transmission of images over fading channels using rate-compatible LDPC codes.

    PubMed

    Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul

    2006-12-01

    In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.

  19. Potential end-to-end imaging information rate advantages of various alternative communication systems

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1978-01-01

    Various communication systems were considered which are required to transmit both imaging data and a typically error-sensitive class of data called general science/engineering (gse) over a Gaussian channel. The approach jointly treats the imaging and gse transmission problems, allowing comparisons of systems which include various channel coding and data compression alternatives. Actual system comparisons include an Advanced Imaging Communication System (AICS) which exhibits the rather significant potential advantages of sophisticated data compression coupled with powerful yet practical channel coding.

  20. Digital visual communications using a Perceptual Components Architecture

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1991-01-01

    The next era of space exploration will generate extraordinary volumes of image data, and management of this image data is beyond current technical capabilities. We propose a strategy for coding visual information that exploits the known properties of early human vision. This Perceptual Components Architecture codes images and image sequences in terms of discrete samples from limited bands of color, spatial frequency, orientation, and temporal frequency. This spatiotemporal pyramid offers efficiency (low bit rate), variable resolution, device independence, error-tolerance, and extensibility.

  1. Knee X-ray image analysis method for automated detection of Osteoarthritis

    PubMed Central

    Shamir, Lior; Ling, Shari M.; Scott, William W.; Bos, Angelo; Orlov, Nikita; Macura, Tomasz; Eckley, D. Mark; Ferrucci, Luigi; Goldberg, Ilya G.

    2008-01-01

    We describe a method for automated detection of radiographic Osteoarthritis (OA) in knee X-ray images. The detection is based on the Kellgren-Lawrence classification grades, which correspond to the different stages of OA severity. The classifier was built using manually classified X-rays, representing the first four KL grades (normal, doubtful, minimal and moderate). Image analysis is performed by first identifying a set of image content descriptors and image transforms that are informative for the detection of OA in the X-rays, and assigning weights to these image features using Fisher scores. Then, a simple weighted nearest neighbor rule is used in order to predict the KL grade to which a given test X-ray sample belongs. The dataset used in the experiment contained 350 X-ray images classified manually by their KL grades. Experimental results show that moderate OA (KL grade 3) and minimal OA (KL grade 2) can be differentiated from normal cases with accuracy of 91.5% and 80.4%, respectively. Doubtful OA (KL grade 1) was detected automatically with a much lower accuracy of 57%. The source code developed and used in this study is available for free download at www.openmicroscopy.org. PMID:19342330
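
    A minimal sketch of the classification rule described above, with assumed feature dimensions and synthetic data standing in for the radiographic features: each feature is weighted by its Fisher score (between-class variance over within-class variance) and a test sample is assigned the KL grade of its nearest weighted neighbor.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher score: between-class / within-class variance."""
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += Xc.shape[0] * (Xc.mean(axis=0) - overall) ** 2
        within += Xc.shape[0] * Xc.var(axis=0)
    return between / (within + 1e-12)

def predict_kl_grade(X_train, y_train, x_test, weights):
    dist = np.sqrt((((X_train - x_test) ** 2) * weights).sum(axis=1))
    return y_train[np.argmin(dist)]          # weighted nearest neighbor

rng = np.random.default_rng(3)
X, y = rng.random((350, 40)), rng.integers(0, 4, 350)   # toy feature set
w = fisher_scores(X, y)
print(predict_kl_grade(X, y, X[0], w))
```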

  2. Compound image segmentation of published biomedical figures.

    PubMed

    Li, Pengyuan; Jiang, Xiangying; Kambhamettu, Chandra; Shatkay, Hagit

    2018-04-01

    Images convey essential information in biomedical publications. As such, there is a growing interest within the bio-curation and bio-databases communities to store images within publications as evidence for biomedical processes and for experimental results. However, many of the images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into constituent panels is an essential first step toward utilizing images. In this article, we develop a new compound image segmentation system, FigSplit, which is based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. The system is publicly available for use at: https://www.eecis.udel.edu/~compbio/FigSplit. The code is available upon request (contact: shatkay@udel.edu). Supplementary data are available at Bioinformatics online.
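
    FigSplit itself is not reproduced here, but the connected-component idea it builds on can be sketched briefly: treat near-white pixels as background separators, label the remaining connected regions, and keep the sufficiently large regions as candidate panels. The threshold and minimum area below are assumed values; the quality-assessment and re-segmentation steps of the published system are omitted.

```python
import numpy as np
from scipy import ndimage

def split_panels(gray_img, white_thresh=240, min_area=2500):
    """Crop candidate panels from a compound figure given as a grayscale array."""
    content = gray_img < white_thresh          # non-background pixels
    labels, n_regions = ndimage.label(content)
    panels = []
    for sl in ndimage.find_objects(labels):
        if sl is None:
            continue
        region = gray_img[sl]                  # bounding box of one component
        if region.size >= min_area:
            panels.append(region)
    return panels
```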

  3. Parametric color coding of digital subtraction angiography.

    PubMed

    Strother, C M; Bender, F; Deuerling-Zheng, Y; Royalty, K; Pulfer, K A; Baumgart, J; Zellerhoff, M; Aagaard-Kienitz, B; Niemann, D B; Lindstrom, M L

    2010-05-01

    Color has been shown to facilitate both visual search and recognition tasks. It was our purpose to examine the impact of a color-coding algorithm on the interpretation of 2D-DSA acquisitions by experienced and inexperienced observers. Twenty-six 2D-DSA acquisitions obtained as part of routine clinical care from subjects with a variety of cerebrovascular disease processes were selected from an internal data base so as to include a variety of disease states (aneurysms, AVMs, fistulas, stenosis, occlusions, dissections, and tumors). Three experienced and 3 less experienced observers were each shown the acquisitions on a prerelease version of a commercially available double-monitor workstation (XWP, Siemens Healthcare). Acquisitions were presented first as a subtracted image series and then as a single composite color-coded image of the entire acquisition. Observers were then asked a series of questions designed to assess the value of the color-coded images for the following purposes: 1) to enhance their ability to make a diagnosis, 2) to have confidence in their diagnosis, 3) to plan a treatment, and 4) to judge the effect of a treatment. The results were analyzed by using 1-sample Wilcoxon tests. Color-coded images enhanced the ease of evaluating treatment success in >40% of cases (P < .0001). They also had a statistically significant impact on treatment planning, making planning easier in >20% of the cases (P = .0069). In >20% of the examples, color-coding made diagnosis and treatment planning easier for all readers (P < .0001). Color-coding also increased the confidence of diagnosis compared with the use of DSA alone (P = .056). The impact of this was greater for the naïve readers than for the expert readers. At no additional cost in x-ray dose or contrast medium, color-coding of DSA enhanced the conspicuity of findings on DSA images. It was particularly useful in situations in which there was a complex flow pattern and in evaluation of pre- and posttreatment acquisitions. Its full potential remains to be defined.
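
    The vendor algorithm is not described in the abstract, but a common way to build such a composite is to color-code each pixel by a temporal parameter of its contrast curve. The sketch below (an assumed scheme, illustrative only) maps per-pixel time-to-peak opacification to a color scale and modulates it by peak opacification.

```python
import numpy as np
from matplotlib import cm

def color_code_dsa(series):
    """series: (n_frames, H, W) subtracted DSA frames, contrast = high values."""
    t_peak = np.argmax(series, axis=0).astype(float)    # frame of peak opacification
    amplitude = series.max(axis=0)
    t_norm = t_peak / max(series.shape[0] - 1, 1)
    rgba = cm.jet(t_norm)                                # early vs. late arrival as hue
    weight = amplitude / (amplitude.max() + 1e-12)
    return rgba[..., :3] * weight[..., None]             # single composite color image
```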

  4. An Experimental and CFD Study of a Supersonic Coaxial Jet

    NASA Technical Reports Server (NTRS)

    Cutler, A. D.; White, J. A.

    2001-01-01

    A supersonic coaxial jet facility is designed, and experimental data suitable for the validation of CFD codes employed in the analysis of high-speed air-breathing engines are acquired. The center jet is of a light gas, the coflow jet is of air, and the mixing layer between them is compressible. The jet flow field is characterized using schlieren imaging, surveys with pitot, total temperature and gas sampling probes, and RELIEF velocimetry. VULCAN, a structured grid CFD code, is used to solve for the nozzle and jet flow, and the results are compared to the experiment for several variations of the k-omega turbulence model.

  5. Measuring single-cell gene expression dynamics in bacteria using fluorescence time-lapse microscopy

    PubMed Central

    Young, Jonathan W; Locke, James C W; Altinok, Alphan; Rosenfeld, Nitzan; Bacarian, Tigran; Swain, Peter S; Mjolsness, Eric; Elowitz, Michael B

    2014-01-01

    Quantitative single-cell time-lapse microscopy is a powerful method for analyzing gene circuit dynamics and heterogeneous cell behavior. We describe the application of this method to imaging bacteria by using an automated microscopy system. This protocol has been used to analyze sporulation and competence differentiation in Bacillus subtilis, and to quantify gene regulation and its fluctuations in individual Escherichia coli cells. The protocol involves seeding and growing bacteria on small agarose pads and imaging the resulting microcolonies. Images are then reviewed and analyzed using our laboratory's custom MATLAB analysis code, which segments and tracks cells in a frame-to-frame method. This process yields quantitative expression data on cell lineages, which can illustrate dynamic expression profiles and facilitate mathematical models of gene circuits. With fast-growing bacteria, such as E. coli or B. subtilis, image acquisition can be completed in 1 d, with an additional 1–2 d for progressing through the analysis procedure. PMID:22179594

  6. Hexagonal Uniformly Redundant Arrays (HURAs) for scintillator based coded aperture neutron imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamage, K.A.A.; Zhou, Q.

    2015-07-01

    A series of Monte Carlo simulations have been conducted, making use of the EJ-426 neutron scintillator detector, to investigate the potential of using hexagonal uniformly redundant arrays (HURAs) for scintillator based coded aperture neutron imaging. This type of scintillator material has a low sensitivity to gamma rays and is, therefore, of particular use in a system with a source that emits both neutrons and gamma rays. The simulations used an AmBe source; neutron images have been produced using different coded-aperture materials (boron-10, cadmium-113 and gadolinium-157) and location error has also been estimated. In each case the neutron image clearly shows the location of the source with a relatively small location error. Neutron images with high resolution can be easily used to identify and locate nuclear materials precisely in nuclear security and nuclear decommissioning applications.

  7. Integrated image data and medical record management for rare disease registries. A general framework and its instantiation to the German Calciphylaxis Registry.

    PubMed

    Deserno, Thomas M; Haak, Daniel; Brandenburg, Vincent; Deserno, Verena; Classen, Christoph; Specht, Paula

    2014-12-01

    Especially for investigator-initiated research at universities and academic institutions, Internet-based rare disease registries (RDR) are required that integrate electronic data capture (EDC) with automatic image analysis or manual image annotation. We propose a modular framework merging alpha-numerical and binary data capture. In concordance with the Office of Rare Diseases Research recommendations, a requirement analysis was performed based on several RDR databases currently hosted at Uniklinik RWTH Aachen, Germany. With respect to the study management tool that is already successfully operating at the Clinical Trial Center Aachen, the Google Web Toolkit was chosen with Hibernate and Gilead connecting a MySQL database management system. Image and signal data integration and processing are supported by the Apache Commons FileUpload library and ImageJ-based Java code, respectively. As a proof of concept, the framework is instantiated to the German Calciphylaxis Registry. The framework is composed of five mandatory core modules: (1) Data Core, (2) EDC, (3) Access Control, (4) Audit Trail, and (5) Terminology as well as six optional modules: (6) Binary Large Object (BLOB), (7) BLOB Analysis, (8) Standard Operation Procedure, (9) Communication, (10) Pseudonymization, and (11) Biorepository. Modules 1-7 are implemented in the German Calciphylaxis Registry. The proposed RDR framework is easily instantiated and directly integrates image management and analysis. As open source software, it may assist improved data collection and analysis of rare diseases in the near future.

  8. Streaming Multiframe Deconvolutions on GPUs

    NASA Astrophysics Data System (ADS)

    Lee, M. A.; Budavári, T.

    2015-09-01

    Atmospheric turbulence distorts all ground-based observations, which is especially detrimental to faint detections. The point spread function (PSF) defining this blur is unknown for each exposure and varies significantly over time, making image analysis difficult. Lucky imaging and traditional co-adding throw away much of the information. We developed blind deconvolution algorithms that can simultaneously obtain robust solutions for the background image and all the PSFs. This is done in a streaming setting, which makes it practical for large numbers of big images. We implemented a new tool that runs on GPUs and achieves exceptional running times that can scale to the new time-domain surveys. Our code can quickly and effectively recover high-resolution images exceeding the quality of traditional co-adds. We demonstrate the power of the method on the repeated exposures in the Sloan Digital Sky Survey's Stripe 82.
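
    The paper's streaming, GPU-based blind solver is not reproduced here; as a simplified point of reference, the sketch below shows a classic Richardson-Lucy deconvolution with a known PSF, the kind of multiplicative update that blind multiframe methods generalize. The PSF is assumed centered at the array origin, and the convolution is circular.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=30):
    """Richardson-Lucy deconvolution of a single frame with a known PSF."""
    psf = psf / psf.sum()
    H = np.fft.rfft2(psf, observed.shape)               # forward blur
    Hf = np.fft.rfft2(psf[::-1, ::-1], observed.shape)  # flipped PSF (adjoint)
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = np.fft.irfft2(np.fft.rfft2(estimate) * H, observed.shape)
        ratio = observed / (blurred + 1e-12)
        estimate *= np.fft.irfft2(np.fft.rfft2(ratio) * Hf, observed.shape)
    return estimate
```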

  9. An Implementation Of Elias Delta Code And ElGamal Algorithm In Image Compression And Security

    NASA Astrophysics Data System (ADS)

    Rachmawati, Dian; Andri Budiman, Mohammad; Saffiera, Cut Amalia

    2018-01-01

    In data transmission such as transferring an image, confidentiality, integrity, and efficiency of data storage are highly needed. To maintain the confidentiality and integrity of data, one of the techniques used is ElGamal. The strength of this algorithm lies in the difficulty of calculating discrete logs in a large prime modulus. ElGamal belongs to the class of asymmetric key algorithms and results in enlargement of the file size; therefore, data compression is required. Elias Delta Code is one of the compression algorithms that use a delta code table. The image was first compressed using the Elias Delta Code algorithm, and the result of the compression was then encrypted using the ElGamal algorithm. The primality test was implemented using the Agrawal-Biswas algorithm. The results showed that the ElGamal method could maintain the confidentiality and integrity of data with MSE and PSNR values of 0 and infinity, respectively. The Elias Delta Code method achieved an average compression ratio of 62.49% and an average space saving of 37.51%.
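
    For reference, Elias delta coding of a positive integer n writes the bit length of n in Elias gamma form, followed by the bits of n with the leading 1 dropped. A minimal encoder/decoder pair is sketched below (the ElGamal encryption step is omitted here).

```python
def elias_delta_encode(n):
    """Elias delta code of a positive integer, returned as a bit string."""
    assert n >= 1
    binary = bin(n)[2:]                    # n in binary, length N
    length_bits = bin(len(binary))[2:]     # N in binary
    prefix = "0" * (len(length_bits) - 1)  # gamma code of N = prefix + N
    return prefix + length_bits + binary[1:]

def elias_delta_decode(bits):
    """Decode a single Elias-delta-coded integer from a bit string."""
    i = 0
    while bits[i] == "0":
        i += 1
    field_len = i + 1
    n_len = int(bits[i:i + field_len], 2)  # recovered bit length of n
    start = i + field_len
    return int("1" + bits[start:start + n_len - 1], 2)

assert elias_delta_decode(elias_delta_encode(10)) == 10   # "00100010"
```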

  10. Automating cell detection and classification in human brain fluorescent microscopy images using dictionary learning and sparse coding.

    PubMed

    Alegro, Maryana; Theofilas, Panagiotis; Nguy, Austin; Castruita, Patricia A; Seeley, William; Heinsen, Helmut; Ushizima, Daniela M; Grinberg, Lea T

    2017-04-15

    Immunofluorescence (IF) plays a major role in quantifying protein expression in situ and understanding cell function. It is widely applied in assessing disease mechanisms and in drug discovery research. Automation of IF analysis can transform studies using experimental cell models. However, IF analysis of postmortem human tissue relies mostly on manual interaction, which is often low-throughput and prone to error, leading to low inter- and intra-observer reproducibility. Human postmortem brain samples challenge neuroscientists because of the high level of autofluorescence caused by accumulation of lipofuscin pigment during aging, hindering systematic analyses. We propose a method for automating cell counting and classification in IF microscopy of human postmortem brains. Our algorithm speeds up the quantification task while improving reproducibility. Dictionary learning and sparse coding allow for constructing improved cell representations using IF images. These models are input for detection and segmentation methods. Classification occurs by means of color distances between cells and a learned set. Our method successfully detected and classified cells in 49 human brain images. We evaluated our results with respect to true positive, false positive, false negative, precision, recall, false positive rate and F1 score metrics. We also measured user experience and time saved compared to manual counts. We compared our results to four open-access IF-based cell-counting tools available in the literature. Our method showed improved accuracy for all data samples. The proposed method satisfactorily detects and classifies cells from human postmortem brain IF images, with potential to be generalized for applications in other counting tasks.
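
    A hedged sketch of the patch-based front end described above (the detection, segmentation, and color-distance classification stages are omitted, and all parameters are illustrative): learn a dictionary from image patches and sparse-code new patches with scikit-learn.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

rng = np.random.default_rng(4)
image = rng.random((256, 256))                    # stand-in for an IF image
patches = extract_patches_2d(image, (8, 8), max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)                # remove per-patch mean

# Learn a dictionary of atoms from the patches.
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
D = dico.fit(X).components_

# Sparse-code new patches against the learned dictionary (OMP, 5 atoms each).
coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                    transform_n_nonzero_coefs=5)
codes = coder.transform(X[:10])
print(codes.shape, np.count_nonzero(codes, axis=1))
```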

  11. NASA Tech Briefs, March 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics covered include: Improved Instrument for Detecting Water and Ice in Soil; Real-Time Detection of Dust Devils from Pressure Readings; Determining Surface Roughness in Urban Areas Using Lidar Data; DSN Data Visualization Suite; Hamming and Accumulator Codes Concatenated with MPSK or QAM; Wide-Angle-Scanning Reflectarray Antennas Actuated by MEMS; Biasable Subharmonic Membrane Mixer for 520 to 600 GHz; Hardware Implementation of Serially Concatenated PPM Decoder; Symbolic Processing Combined with Model-Based Reasoning; Presentation Extensions of the SOAP; Spreadsheets for Analyzing and Optimizing Space Missions; Processing Ocean Images to Detect Large Drift Nets; Alternative Packaging for Back-Illuminated Imagers; Diamond Machining of an Off-Axis Biconic Aspherical Mirror; Laser Ablation Increases PEM/Catalyst Interfacial Area; Damage Detection and Self-Repair in Inflatable/Deployable Structures; Polyimide/Glass Composite High-Temperature Insulation; Nanocomposite Strain Gauges Having Small TCRs; Quick-Connect Windowed Non-Stick Penetrator Tips for Rapid Sampling; Modeling Unsteady Cavitation and Dynamic Loads in Turbopumps; Continuous-Flow System Produces Medical-Grade Water; Discrimination of Spore-Forming Bacilli Using spoIVA; nBn Infrared Detector Containing Graded Absorption Layer; Atomic References for Measuring Small Accelerations; Ultra-Broad-Band Optical Parametric Amplifier or Oscillator; Particle-Image Velocimeter Having Large Depth of Field; Enhancing SERS by Means of Supramolecular Charge Transfer; Improving 3D Wavelet-Based Compression of Hyperspectral Images; Improved Signal Chains for Readout of CMOS Imagers; SOI CMOS Imager with Suppression of Cross-Talk; Error-Rate Bounds for Coded PPM on a Poisson Channel; Biomorphic Multi-Agent Architecture for Persistent Computing; and Using Covariance Analysis to Assess Pointing Performance.

  12. Software manual for operating particle displacement tracking data acquisition and reduction system

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    1991-01-01

    The software manual is presented. The necessary steps required to record, analyze, and reduce Particle Image Velocimetry (PIV) data using the Particle Displacement Tracking (PDT) technique are described. The new PDT system is an all-electronic technique employing a CCD video camera and a large memory buffer frame-grabber board to record low velocity (less than or equal to 20 cm/s) flows. Using a simple encoding scheme, a time sequence of single-exposure images is time-coded into a single image and then processed to track particle displacements and determine 2-D velocity vectors. All the PDT data acquisition, analysis, and data reduction software is written to run on an 80386 PC.
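
    The core of particle displacement tracking can be sketched as centroid detection followed by nearest-neighbor pairing between exposures; the intensity threshold and maximum displacement below are assumed values, and the single-image time-coding of the actual PDT system is not reproduced.

```python
import numpy as np
from scipy import ndimage

def centroids(frame, thresh):
    """Centroids of bright particles in one exposure."""
    labels, n = ndimage.label(frame > thresh)
    if n == 0:
        return np.empty((0, 2))
    return np.array(ndimage.center_of_mass(frame, labels, range(1, n + 1)))

def track(frame0, frame1, thresh=0.5, max_disp=10.0):
    """Pair particles between two exposures and return displacement vectors."""
    c0, c1 = centroids(frame0, thresh), centroids(frame1, thresh)
    if len(c0) == 0 or len(c1) == 0:
        return []
    vectors = []
    for p in c0:
        d = np.linalg.norm(c1 - p, axis=1)
        j = np.argmin(d)
        if d[j] <= max_disp:                  # reject implausible matches
            vectors.append((p, c1[j] - p))    # (position, displacement)
    return vectors
```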

  13. An image based information system - Architecture for correlating satellite and topological data bases

    NASA Technical Reports Server (NTRS)

    Bryant, N. A.; Zobrist, A. L.

    1978-01-01

    The paper describes the development of an image based information system and its use to process a Landsat thematic map showing land use or land cover in conjunction with a census tract polygon file to produce a tabulation of land use acreages per census tract. The system permits the efficient cross-tabulation of two or more geo-coded data sets, thereby setting the stage for the practical implementation of models of diffusion processes or cellular transformation. Characteristics of geographic information systems are considered, and functional requirements, such as data management, geocoding, image data management, and data analysis are discussed. The system is described, and the potentialities of its use are examined.

  14. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations, (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.

  15. Multi-level discriminative dictionary learning with application to large scale image classification.

    PubMed

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of task (such as discrimination for classification task) into dictionary learning is effective for improving the accuracy. However, the traditional supervised dictionary learning methods suffer from high computation complexity when dealing with large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.

  16. Visual communication with retinex coding.

    PubMed

    Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R

    2000-04-10

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.

  17. Visual Communication with Retinex Coding

    NASA Astrophysics Data System (ADS)

    Huck, Friedrich O.; Fales, Carl L.; Davis, Richard E.; Alter-Gartenberg, Rachel

    2000-04-01

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
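
    The small-signal model described above reduces, roughly, to a difference-of-Gaussian bandpass filter followed by a locally adaptive gain control. A minimal sketch with assumed filter scales:

```python
import numpy as np
from scipy import ndimage

def retinex_like(img, sigma_center=1.0, sigma_surround=4.0, eps=1e-3):
    """DoG bandpass plus local gain control, in the spirit of retinex coding."""
    center = ndimage.gaussian_filter(img, sigma_center)
    surround = ndimage.gaussian_filter(img, sigma_surround)
    dog = center - surround                       # edge/contrast signal
    local_mean = ndimage.gaussian_filter(img, sigma_surround)
    return dog / (local_mean + eps)               # locally adaptive gain
```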

  18. Lessons Learned through the Development and Publication of AstroImageJ

    NASA Astrophysics Data System (ADS)

    Collins, Karen

    2018-01-01

    As lead author of the scientific image processing software package AstroImageJ (AIJ), I will discuss the reasoning behind why we decided to release AIJ to the public, and the lessons we learned related to the development, publication, distribution, and support of AIJ. I will also summarize the AIJ code language selection, code documentation and testing approaches, code distribution, update, and support facilities used, and the code citation and licensing decisions. Since AIJ was initially developed as part of my graduate research and was my first scientific open source software publication, many of my experiences and difficulties encountered may parallel those of others new to scientific software publication. Finally, I will discuss the benefits and disadvantages of releasing scientific software that I now recognize after having AIJ in the public domain for more than five years.

  19. Review and Implementation of the Emerging CCSDS Recommended Standard for Multispectral and Hyperspectral Lossless Image Coding

    NASA Technical Reports Server (NTRS)

    Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron

    2011-01-01

    A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, and is adapted to better overcome issues of onboard scenarios. In this paper, we present a review of the state of the art in this field, and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.
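
    The standard's actual predictor and entropy coder are not reproduced here; as a rough illustration of the adaptive linear predictive idea, the sketch below predicts each sample of a band from co-located samples in the previous few bands, adapts the weights with a sign-LMS rule, and reports the empirical entropy of the residuals as a proxy for the achievable lossless rate. The predictor order and step size are assumed values, and the loop is written for clarity rather than speed.

```python
import numpy as np

def predictive_residuals(cube, P=3, mu=1e-4):
    """cube: (bands, H, W) integer image; returns per-sample prediction residuals."""
    bands, H, W = cube.shape
    residuals = np.zeros_like(cube, dtype=np.int64)
    residuals[:P] = cube[:P]                      # first bands stored as-is
    for b in range(P, bands):
        w = np.zeros(P)                           # adaptive predictor weights
        for idx in np.ndindex(H, W):
            x = cube[b - P:b][(slice(None),) + idx].astype(float)
            e = float(cube[b][idx]) - float(np.dot(w, x))
            residuals[b][idx] = int(round(e))
            w += mu * np.sign(e) * x              # sign-LMS adaptation
    return residuals

def empirical_entropy(residuals):
    """Zeroth-order entropy of the residuals, in bits per sample."""
    _, counts = np.unique(residuals, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```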

  20. A hybrid LBG/lattice vector quantizer for high quality image coding

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, V.; Sayood, K.; Arikan, E. (Editor)

    1991-01-01

    It is well known that a vector quantizer is an efficient coder offering a good trade-off between quantization distortion and bit rate. The performance of a vector quantizer asymptotically approaches the optimum bound with increasing dimensionality. A vector quantized image suffers from the following types of degradations: (1) edge regions in the coded image contain staircase effects, (2) quasi-constant or slowly varying regions suffer from contouring effects, and (3) textured regions lose details and suffer from granular noise. All three of these degradations are due to the finite size of the code book, the distortion measures used in the design, and due to the finite training procedure involved in the construction of the code book. In this paper, we present an adaptive technique which attempts to ameliorate the edge distortion and contouring effects.
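
    The baseline LBG-style vector quantizer (without the lattice refinement proposed in the paper) can be sketched with k-means on image blocks; the block size and codebook size below are assumed example values.

```python
import numpy as np
from sklearn.cluster import KMeans

def vq_round_trip(img, block=4, codebook_size=256):
    """Quantize an image with a block-based codebook and reconstruct it."""
    H, W = img.shape
    H, W = H - H % block, W - W % block
    blocks = (img[:H, :W]
              .reshape(H // block, block, W // block, block)
              .transpose(0, 2, 1, 3)
              .reshape(-1, block * block))
    km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(blocks)
    rec = km.cluster_centers_[km.labels_]          # nearest-codeword decode
    rec = (rec.reshape(H // block, W // block, block, block)
              .transpose(0, 2, 1, 3)
              .reshape(H, W))
    rate = np.log2(codebook_size) / (block * block)  # bits per pixel
    return rec, rate
```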

  1. Distributed single source coding with side information

    NASA Astrophysics Data System (ADS)

    Vila-Forcen, Jose E.; Koval, Oleksiy; Voloshynovskiy, Sviatoslav V.

    2004-01-01

    In this paper we advocate an image compression technique within the scope of the distributed source coding framework. The novelty of the proposed approach is twofold: classical image compression is considered from the standpoint of source coding with side information and, contrary to existing scenarios where side information is given explicitly, side information is created based on a deterministic approximation of local image features. We consider an image in the transform domain as a realization of a source with a bounded codebook of symbols where each symbol represents a particular edge shape. The codebook is image independent and plays the role of an auxiliary source. Due to the partial availability of side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate a possible gain over the solutions when side information is either unavailable or available only at the decoder. Finally, we present a practical compression algorithm for passport photo images based on our concept that demonstrates superior performance in the very low bit rate regime.

  2. Motmot, an open-source toolkit for realtime video acquisition and analysis.

    PubMed

    Straw, Andrew D; Dickinson, Michael H

    2009-07-22

    Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.

  3. Image analysis method for the measurement of water saturation in a two-dimensional experimental flow tank

    NASA Astrophysics Data System (ADS)

    Belfort, Benjamin; Weill, Sylvain; Lehmann, François

    2017-07-01

    A novel, non-invasive imaging technique is proposed that determines 2D maps of water content in unsaturated porous media. This method directly relates digitally measured intensities to the water content of the porous medium. It requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling and calibration. The main advantages of this approach are that no separate calibration experiment is needed, because the calibration curve relating water content to reflected light intensity is established during the main monitoring phase of each experiment, and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm × 14 cm × 6 cm (L × W × D) is carried out to validate the methodology. The accuracy of the proposed approach is assessed using a statistical framework to perform an error analysis and numerical simulations with a state-of-the-art computational code that solves the Richards equation. Comparison of the cumulative mass leaving and entering the flow tank and of the water content maps produced by the photographic measurement technique and the numerical simulations demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Finally, the photometric procedure has been developed expressly with a view to its extension to heterogeneous media. Other processes may be investigated through different laboratory experiments, which will serve as benchmarks for the validation of numerical codes.
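
    A minimal sketch of such a processing chain (background subtraction, filtering, normalization against reference states) is given below. The linear intensity-to-saturation mapping and the parameter names are assumptions for illustration only; the paper builds its calibration curve from the monitoring phase of each experiment rather than from fixed reference images.

      import numpy as np
      from scipy.ndimage import median_filter

      def intensity_to_saturation(raw, background, dry, wet, filter_size=3):
          """Illustrative pipeline: subtract background, filter noise, and
          normalize between dry and fully saturated reference images to obtain
          a 2D water-saturation map.  The linear relation is an assumption."""
          corrected = raw.astype(float) - background
          corrected = median_filter(corrected, size=filter_size)      # noise filtering
          saturation = (corrected - dry) / (wet - dry + 1e-12)        # normalization
          return np.clip(saturation, 0.0, 1.0)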

  4. Run-length encoding graphic rules, biochemically editable designs and steganographical numeric data embedment for DNA-based cryptographical coding system

    PubMed Central

    Kawano, Tomonori

    2013-01-01

    There have been a wide variety of approaches for handling pieces of DNA as “unplugged” tools for digital information storage and processing, including a series of studies applied to the security-related area, such as DNA-based digital barcodes, watermarks and cryptography. In the present article, novel designs of artificial genes are proposed as media for storing digitally compressed image data for bio-computing purposes, whereas natural genes principally encode proteins. Furthermore, the proposed system allows cryptographical application of DNA through biochemically editable designs with capacity for steganographical numeric data embedment. As a model case of the image-coding DNA technique, numerically and biochemically combined protocols are employed for ciphering given “passwords” and/or secret numbers using DNA sequences. The “passwords” of interest were decomposed into single letters and translated into font images coded on separate DNA chains, with coding regions in which the images are encoded based on the novel run-length encoding rule, and non-coding regions designed for the biochemical editing and remodeling processes that reveal the hidden orientation of the letters composing the original “passwords.” The latter processes require molecular biological tools for digestion and ligation of the fragmented DNA molecules, targeting the polymerase chain reaction-engineered termini of the chains. Lastly, additional protocols for steganographical overwriting of numeric data of interest over the image-coding DNA are also discussed. PMID:23750303
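
    The run-length idea underlying the graphic rule can be illustrated in a few lines of Python; the exact symbol convention, delimiters, and base mapping used for the DNA designs are not reproduced here and the example glyph row is invented.

      def run_length_encode(bits):
          """Row-wise run-length encoding of a binary font image.  Illustration of
          the general RLE principle only, not the article's DNA coding rule."""
          runs = []
          current, length = bits[0], 1
          for b in bits[1:]:
              if b == current:
                  length += 1
              else:
                  runs.append((current, length))
                  current, length = b, 1
          runs.append((current, length))
          return runs

      # Example: one scan line of a 1-bit glyph image.
      print(run_length_encode([0, 0, 1, 1, 1, 0, 1, 0, 0, 0]))
      # -> [(0, 2), (1, 3), (0, 1), (1, 1), (0, 3)]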

  5. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.

    PubMed

    Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong

    Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. The traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods can achieve better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection but ignore texture details. In this paper, we proposed a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), to exploit hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which, we leverage hierarchical convolutional features to construct image pyramid representation. Second, our proposed deep network can exploit directly convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into the discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.

  6. Stimulus features coded by single neurons of a macaque body category selective patch.

    PubMed

    Popivanov, Ivo D; Schyns, Philippe G; Vogels, Rufin

    2016-04-26

    Body category-selective regions of the primate temporal cortex respond to images of bodies, but it is unclear which fragments of such images drive single neurons' responses in these regions. Here we applied the Bubbles technique to the responses of single macaque middle superior temporal sulcus (midSTS) body patch neurons to reveal the image fragments the neurons respond to. We found that local image fragments such as extremities (limbs), curved boundaries, and parts of the torso drove the large majority of neurons. Bubbles revealed the whole body in only a few neurons. Neurons coded the features in a manner that was tolerant to translation and scale changes. Most image fragments were excitatory but for a few neurons both inhibitory and excitatory fragments (opponent coding) were present in the same image. The fragments we reveal here in the body patch with Bubbles differ from those suggested in previous studies of face-selective neurons in face patches. Together, our data indicate that the majority of body patch neurons respond to local image fragments that occur frequently, but not exclusively, in bodies, with a coding that is tolerant to translation and scale. Overall, the data suggest that the body category selectivity of the midSTS body patch depends more on the feature statistics of bodies (e.g., extensions occur more frequently in bodies) than on semantics (bodies as an abstract category).

  7. Stimulus features coded by single neurons of a macaque body category selective patch

    PubMed Central

    Popivanov, Ivo D.; Schyns, Philippe G.; Vogels, Rufin

    2016-01-01

    Body category-selective regions of the primate temporal cortex respond to images of bodies, but it is unclear which fragments of such images drive single neurons’ responses in these regions. Here we applied the Bubbles technique to the responses of single macaque middle superior temporal sulcus (midSTS) body patch neurons to reveal the image fragments the neurons respond to. We found that local image fragments such as extremities (limbs), curved boundaries, and parts of the torso drove the large majority of neurons. Bubbles revealed the whole body in only a few neurons. Neurons coded the features in a manner that was tolerant to translation and scale changes. Most image fragments were excitatory but for a few neurons both inhibitory and excitatory fragments (opponent coding) were present in the same image. The fragments we reveal here in the body patch with Bubbles differ from those suggested in previous studies of face-selective neurons in face patches. Together, our data indicate that the majority of body patch neurons respond to local image fragments that occur frequently, but not exclusively, in bodies, with a coding that is tolerant to translation and scale. Overall, the data suggest that the body category selectivity of the midSTS body patch depends more on the feature statistics of bodies (e.g., extensions occur more frequently in bodies) than on semantics (bodies as an abstract category). PMID:27071095

  8. Applications of the JPEG standard in a medical environment

    NASA Astrophysics Data System (ADS)

    Wittenberg, Ulrich

    1993-10-01

    JPEG is a very versatile image coding and compression standard for single images. Medical images make higher demands on image quality and precision than the usual 'pretty pictures'. In this paper the potential applications of the various JPEG coding modes in a medical environment are evaluated. For legal reasons the lossless modes are especially interesting. The spatial modes are equally important because medical data may well exceed the maximum of 12 bit precision allowed for the DCT modes. The performance of the spatial predictors is investigated. From the user's point of view the progressive modes, which provide a fast but coarse approximation of the final image, reduce the subjective time one has to wait for it and thereby reduce the user's frustration. Even the lossy modes will find some applications, but they have to be handled with care, because repeated lossy coding and decoding leads to a degradation of the image quality. The amount of this degradation is investigated. The JPEG standard alone is not sufficient for a PACS because it does not store enough additional data, such as the creation date or details of the imaging modality. Therefore it will be an embedded coding format in standards like TIFF or ACR/NEMA. It is concluded that the JPEG standard is versatile enough to match the requirements of the medical community.

  9. Filtering, Coding, and Compression with Malvar Wavelets

    DTIC Science & Technology

    1993-12-01

    speech coding techniques being investigated by the military (38). Imagery: Space imagery often requires adaptive restoration to deblur out-of-focus...and blurred image, find an estimate of the ideal image using a priori information about the blur, noise, and the ideal image" (12). The research for...recording can be described as the original signal convolved with impulses, which appear as echoes in the seismic event. The term deconvolution indicates

  10. Neural network for image compression

    NASA Astrophysics Data System (ADS)

    Panchanathan, Sethuraman; Yeap, Tet H.; Pilache, B.

    1992-09-01

    In this paper, we propose a new scheme for image compression using neural networks. Image data compression deals with minimization of the amount of data required to represent an image while maintaining an acceptable quality. Several image compression techniques have been developed in recent years. We note that the coding performance of these techniques may be improved by employing adaptivity. Over the last few years neural networks have emerged as an effective tool for solving a wide range of problems involving adaptivity and learning. A multilayer feed-forward neural network trained using the backward error propagation algorithm is used in many applications. However, this model is not suitable for image compression because of its poor coding performance. Recently, a self-organizing feature map (SOFM) algorithm has been proposed which yields good coding performance. However, this algorithm requires a long training time because the network starts with random initial weights. In this paper we use the backward error propagation (BEP) algorithm to quickly obtain the initial weights, which are then used to speed up the training time required by the SOFM algorithm. The proposed approach (BEP-SOFM) combines the advantages of the two techniques and, hence, achieves good coding performance in a shorter training time. Our simulation results demonstrate the potential gains of the proposed technique.
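
    A sketch of the SOFM refinement stage is given below, assuming the codebook passed in has already been seeded (e.g., by a quick backward-error-propagation run, as the abstract describes); the learning-rate and neighborhood schedules and the function name are illustrative assumptions.

      import numpy as np

      def train_sofm(blocks, initial_codebook, epochs=5, lr0=0.5, sigma0=2.0):
          """1-D self-organizing feature map refinement of a VQ codebook.  The key
          idea of BEP-SOFM is that `initial_codebook` comes from a fast
          back-propagation pre-training rather than random values; the seeding
          itself is left to the caller in this sketch."""
          codebook = initial_codebook.astype(float).copy()
          n_codes = len(codebook)
          for epoch in range(epochs):
              lr = lr0 * (1.0 - epoch / epochs)                 # decaying learning rate
              sigma = max(sigma0 * (1.0 - epoch / epochs), 0.5) # shrinking neighborhood
              for x in blocks:
                  winner = np.argmin(((codebook - x) ** 2).sum(axis=1))
                  # Gaussian neighborhood around the winning code vector.
                  dist = np.abs(np.arange(n_codes) - winner)
                  h = np.exp(-(dist ** 2) / (2 * sigma ** 2))
                  codebook += lr * h[:, None] * (x - codebook)
          return codebook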

  11. Portrayal of Tanning, Clothing Fashion and Shade Use in Australian Women's Magazines, 1987-2005

    ERIC Educational Resources Information Center

    Dixon, Helen; Dobbinson, Suzanne; Wakefield, Melanie; Jamsen, Kris; McLeod, Kim

    2008-01-01

    To examine modelling of outcomes relevant to sun protection in Australian women's magazines, content analysis was performed on 538 spring and summer issues of popular women's magazines from 1987 to 2005. A total of 4949 full-colour images of Caucasian females were coded for depth of tan, extent of clothing cover, use of shade and setting. Logistic…

  12. Efficient and Robust Signal Approximations

    DTIC Science & Technology

    2009-05-01

    otherwise. Remark. Permutation matrices are both orthogonal and doubly-stochastic [62]. We will now show how to further simplify the Robust Coding... Keywords: signal processing, image compression, independent component analysis, sparse

  13. Complementary-encoding holographic associative memory using a photorefractive crystal

    NASA Astrophysics Data System (ADS)

    Yuan, ShiFu; Wu, Minxian; Yan, Yingbai; Jin, Guofan

    1996-06-01

    We present a holographic implementation of accurate associative memory with only one holographic memory system. In the implementation, the stored and test images are coded using a complementary-encoding method. The recalled complete image is also a coded image that can be decoded with a decoding mask to obtain the original image or its complement. The experiment shows that complementary encoding can efficiently increase the addressing accuracy in a simple way. As an alternative to the above complementary-encoding method, a scheme that uses a complementary area-encoding method is also proposed for the holographic implementation of gray-level image associative memory with accurate addressing.

  14. Kinetic Simulation and Energetic Neutral Atom Imaging of the Magnetosphere

    NASA Technical Reports Server (NTRS)

    Fok, Mei-Ching H.

    2011-01-01

    Advanced simulation tools and measurement techniques have been developed to study the dynamic magnetosphere and its response to drivers in the solar wind. The Comprehensive Ring Current Model (CRCM) is a kinetic code that solves the 3D distribution in space, energy and pitch-angle information of energetic ions and electrons. Energetic Neutral Atom (ENA) imagers have been carried in past and current satellite missions. Global morphology of energetic ions were revealed by the observed ENA images. We have combined simulation and ENA analysis techniques to study the development of ring current ions during magnetic storms and substorms. We identify the timing and location of particle injection and loss. We examine the evolution of ion energy and pitch-angle distribution during different phases of a storm. In this talk we will discuss the findings from our ring current studies and how our simulation and ENA analysis tools can be applied to the upcoming TRIO-CINAMA mission.

  15. Disability in physical education textbooks: an analysis of image content.

    PubMed

    Táboas-Pais, María Inés; Rey-Cao, Ana

    2012-10-01

    The aim of this paper is to show how images of disability are portrayed in physical education textbooks for secondary schools in Spain. The sample was composed of 3,316 images published in 36 textbooks by 10 publishing houses. A content analysis was carried out using a coding scheme based on categories employed in other similar studies and adapted to the requirements of this study with additional categories. The variables were camera angle, gender, type of physical activity, field of practice, space, and level. Univariate and bivariate descriptive analyses were also carried out. The Pearson chi-square statistic was used to identify associations between the variables. Results showed a noticeable imbalance between people with disabilities and people without disabilities, and women with disabilities were less frequently represented than men with disabilities. People with disabilities were depicted as participating in a very limited variety of segregated, competitive, and elite sports activities.

  16. [Construction and application of special analysis database of geoherbs based on 3S technology].

    PubMed

    Guo, Lan-ping; Huang, Lu-qi; Lv, Dong-mei; Shao, Ai-juan; Wang, Jian

    2007-09-01

    In this paper, the structure, data sources, and data codes of "the spatial analysis database of geoherbs" based on 3S technology are introduced, and the essential functions of the database, such as data management, remote sensing, spatial interpolation, spatial statistics, spatial analysis and development, are described. Finally, two examples of database usage are given: one is the classification and calculation of the NDVI index from remote sensing images of the geoherbal area of Atractylodes lancea, and the other is an adaptation analysis of A. lancea. These examples indicate that "the spatial analysis database of geoherbs" has bright prospects for the spatial analysis of geoherbs.

  17. Space-time encoding for high frame rate ultrasound imaging.

    PubMed

    Misaridis, Thanassis X; Jensen, Jørgen A

    2002-05-01

    Frame rate in ultrasound imaging can be dramatically increased by using sparse synthetic transmit aperture (STA) beamforming techniques. The two main drawbacks of the method are the low signal-to-noise ratio (SNR) and the motion artifacts that degrade the image quality. In this paper we propose a spatio-temporal encoding for STA imaging based on simultaneous transmission of two quasi-orthogonal tapered linear FM signals. The excitation signals are an up-chirp and a down-chirp with frequency division and a cross-talk of -55 dB. The received signals are first cross-correlated with the appropriate code, then spatially decoded and finally beamformed for each code, yielding two images per emission. The spatial encoding is a Hadamard encoding previously suggested by Chiao et al. [in: Proceedings of the IEEE Ultrasonics Symposium, 1997, p. 1679]. The Hadamard matrix has half the size of the transmit element groups, due to the orthogonality of the temporally encoded wavefronts. Thus, with this method, the frame rate is doubled compared to previous systems. Another advantage is the utilization of temporal codes, which are more robust to attenuation. With the proposed technique it is possible to obtain images dynamically focused in both transmit and receive with only two firings, which reduces the problem of motion artifacts. The method has been tested with extensive simulations using Field II. Resolution and SNR are compared with uncoded STA imaging and conventional phased-array imaging. The range resolution remains the same for coded STA imaging with four emissions and is slightly degraded for STA imaging with two emissions due to the -55 dB cross-talk between the signals. The additional proposed temporal encoding adds more than 15 dB to the SNR gain, yielding an SNR of the same order as in phased-array imaging.
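
    The spatial (Hadamard) part of such an encoding can be illustrated as below: all element groups fire simultaneously with +/-1 signs from a Hadamard matrix, and the individual group responses are recovered by applying the transpose. The matrix size, signal shapes, and the omission of the up/down chirp temporal codes are simplifying assumptions.

      import numpy as np
      from scipy.linalg import hadamard

      def hadamard_sta_encode_decode(group_signals):
          """Hadamard spatial encoding/decoding for synthetic transmit aperture
          imaging, sketched without the temporal chirp encoding of the paper."""
          n = len(group_signals)          # number of transmit element groups (power of 2)
          H = hadamard(n)
          encoded = H @ group_signals     # one row per encoded emission
          decoded = (H.T @ encoded) / n   # spatial decoding recovers each group
          return encoded, decoded

      # Example with 4 groups, each contributing a short received trace.
      signals = np.random.randn(4, 128)
      _, recovered = hadamard_sta_encode_decode(signals)
      assert np.allclose(recovered, signals)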

  18. Quantitative analysis of brain magnetic resonance imaging for hepatic encephalopathy

    NASA Astrophysics Data System (ADS)

    Syh, Hon-Wei; Chu, Wei-Kom; Ong, Chin-Sing

    1992-06-01

    High intensity lesions around the ventricles have recently been observed in T1-weighted brain magnetic resonance images of patients suffering from hepatic encephalopathy. The exact etiology that causes these magnetic resonance imaging (MRI) gray scale changes is not fully understood. The objective of our study was to investigate, through quantitative means, (1) the amount of change to brain white matter due to the disease process, and (2) the extent and distribution of these high intensity lesions, since it is believed that the abnormality may not be entirely limited to the white matter. Eleven patients with proven hepatic encephalopathy and three normal persons without any evidence of liver abnormality constituted our current data base. Trans-axial, sagittal, and coronal brain MRI were obtained on a 1.5 Tesla scanner. All processing was carried out on a microcomputer-based image analysis system in an off-line manner. Histograms were decomposed into regular brain tissues and lesions. Gray scale ranges coded as lesion were then mapped back to the original images to identify the distribution of the abnormality. Our results indicated that the disease process involved the pallidus, mesencephalon, and subthalamic regions.

  19. Alternatively Constrained Dictionary Learning For Image Superresolution.

    PubMed

    Lu, Xiaoqiang; Yuan, Yuan; Yan, Pingkun

    2014-03-01

    Dictionaries are crucial in sparse coding-based algorithms for image superresolution. Sparse coding is a typical unsupervised learning method used to study the relationship between the patches of high- and low-resolution images. However, most sparse coding methods for image superresolution fail to simultaneously consider the geometrical structure of the dictionary and the corresponding coefficients, which may result in noticeable superresolution reconstruction artifacts. In other words, when a low-resolution image and its corresponding high-resolution image are represented in their feature spaces, the two sets of dictionaries and the obtained coefficients have intrinsic links, which have not yet been well studied. Motivated by developments in nonlocal self-similarity and manifold learning, a novel sparse coding method is reported that preserves the geometrical structure of the dictionary and the sparse coefficients of the data. Moreover, the proposed method can preserve the incoherence of dictionary entries and provide the sparse coefficients and learned dictionary from a new perspective, which have both reconstruction and discrimination properties to enhance the learning performance. Furthermore, to utilize the proposed model more effectively for single-image superresolution, this paper also proposes a novel dictionary-pair learning method, named two-stage dictionary training. Extensive experiments are carried out on a large set of images in comparison with other popular algorithms for the same purpose, and the results clearly demonstrate the effectiveness of the proposed sparse representation model and the corresponding dictionary learning algorithm.
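
    As background, the generic sparse coding step that such methods build on can be sketched with a plain ISTA solver; the geometry-preserving regularizers and the two-stage dictionary training of the paper are not included, and all names and parameter values are illustrative.

      import numpy as np

      def ista_sparse_code(D, y, lam=0.1, iters=200):
          """Plain ISTA solver for the sparse coding problem
          min_a 0.5*||y - D a||^2 + lam*||a||_1.  A generic building block, not
          the paper's constrained formulation."""
          L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
          a = np.zeros(D.shape[1])
          for _ in range(iters):
              grad = D.T @ (D @ a - y)
              z = a - grad / L
              a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
          return a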

  20. Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Meadows, Steven

    1997-10-01

    Color image coding at low bit rates is an area of research that has only recently begun to be addressed in the literature, since the problems of storage and transmission of color images are becoming more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, namely AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24 bit color images, excellent visual quality is maintained down to approximately 0.48 bpp (0.16 bpp for each color plane or for monochrome; CR 50:1) using the RGB color space. Further tuning of the AFLC-VQ and the addition of an entropy coder module after the VQ stage result in extremely low bit rates (CR 80:1) for good quality reconstructed images. Our recent study also reveals that, for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or the HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reductions.

  1. High Order Entropy-Constrained Residual VQ for Lossless Compression of Images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen

    1995-01-01

    High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.

  2. Future trends in image coding

    NASA Astrophysics Data System (ADS)

    Habibi, Ali

    1993-01-01

    The objective of this article is to present a discussion of the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and development, the milestones in the advancement of technology, and the success of upcoming commercial products in the marketplace, which will be the main factors in setting the stage for the future of image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in the development of theory, software, and hardware, coupled with the future needs for the use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, we predict the future state of image coding. What seems certain today is the growing need for bandwidth compression. Television uses a technology which is half a century old and is ready to be replaced by high definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth-compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activity in the development of theory, software, special purpose chips and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.

  3. Geometric and radiometric preprocessing of airborne visible/infrared imaging spectrometer (AVIRIS) data in rugged terrain for quantitative data analysis

    NASA Technical Reports Server (NTRS)

    Meyer, Peter; Green, Robert O.; Staenz, Karl; Itten, Klaus I.

    1994-01-01

    A geocoding procedure for remotely sensed data of airborne systems in rugged terrain is affected by several factors: buffeting of the aircraft by turbulence, variations in ground speed, changes in altitude, attitude variations, and surface topography. The current investigation was carried out with an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) scene of central Switzerland (Rigi) from NASA's Multi Aircraft Campaign (MAC) in Europe (1991). The parametric approach reconstructs for every pixel the observation geometry based on the flight line, aircraft attitude, and surface topography. To utilize the data for analysis of materials on the surface, the AVIRIS data are corrected to apparent reflectance using algorithms based on MODTRAN (moderate resolution transfer code).

  4. Clustering and Dimensionality Reduction to Discover Interesting Patterns in Binary Data

    NASA Astrophysics Data System (ADS)

    Palumbo, Francesco; D'Enza, Alfonso Iodice

    Attention to binary data coding has increased considerably in the last decade for several reasons. The analysis of binary data characterizes several fields of application, such as market basket analysis, DNA microarray data, image mining, text mining and web-clickstream mining. The paper illustrates two different approaches exploiting a profitable combination of clustering and dimensionality reduction for the identification of non-trivial association structures in binary data. An application in the Association Rules framework supports the theory with empirical evidence.

  5. Frapid: achieving full automation of FRAP for chemical probe validation

    PubMed Central

    Yapp, Clarence; Rogers, Catherine; Savitsky, Pavel; Philpott, Martin; Müller, Susanne

    2016-01-01

    Fluorescence Recovery After Photobleaching (FRAP) is an established method for validating chemical probes against the chromatin reading bromodomains, but so far requires constant human supervision. Here, we present Frapid, an automated open source code implementation of FRAP that fully handles cell identification through fuzzy logic analysis, drug dispensing with a custom-built fluid handler, image acquisition & analysis, and reporting. We successfully tested Frapid on 3 bromodomains as well as on spindlin1 (SPIN1), a methyl lysine binder, for the first time. PMID:26977352

  6. Temporal Coding of Volumetric Imagery

    NASA Astrophysics Data System (ADS)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration of other information within that video; namely, focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions. Geometrical optics-related tradeoffs, such as the classic challenges of wide-field-of-view and high-resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x,y,z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke. Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position, and we demonstrate this technique's capability to generate arbitrary point spread functions.
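
    The CACTI forward model described above, in which the temporal dimension of the video volume is embedded into a single coded snapshot, can be sketched as follows; random binary masks stand in here for the physically translated coded aperture, and the reconstruction algorithm is omitted.

      import numpy as np

      def cacti_forward(video, masks):
          """Forward model of coded-aperture compressive temporal imaging: each
          frame of the (T, H, W) video volume is modulated by its own binary mask
          and the results are summed into one (H, W) coded snapshot."""
          assert video.shape == masks.shape
          return (video * masks).sum(axis=0)

      # Example: 8 frames multiplexed into one snapshot with random binary masks
      # (an assumption standing in for the translated physical mask).
      T, H, W = 8, 64, 64
      video = np.random.rand(T, H, W)
      masks = (np.random.rand(T, H, W) > 0.5).astype(float)
      snapshot = cacti_forward(video, masks)      # shape (64, 64)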

  7. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies coding strategies for computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies may increase sensing capacity by a factor of hundreds. The human auditory system has the astonishing ability to localize, track, and filter selected sound sources or information in a noisy environment. Accomplishing the same task with engineering efforts usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localization of multiple speakers in both stationary and dynamic auditory scenes, and distinguishes mixed conversations from independent sources with a high audio recognition rate.

  8. SETI-EC: SETI Encryption Code

    NASA Astrophysics Data System (ADS)

    Heller, René

    2018-03-01

    The SETI Encryption code, written in Python, creates a message for use in testing the decryptability of a simulated incoming interstellar message. The code uses images in a portable bit map (PBM) format, then writes the corresponding bits into the message, and finally returns both a PBM image and a text (TXT) file of the entire message. The natural constants (c, G, h) and the wavelength of the message are defined in the first few lines of the code, followed by the reading of the input files and their conversion into 757 strings of 359 bits to give one page. Each header of a page, i.e. the little-endian binary code translation of the tempo-spatial yardstick, is calculated and written on-the-fly for each page.

  9. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    PubMed

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.

  10. Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study

    PubMed Central

    Bornschein, Jörg; Henniges, Marc; Lücke, Jörg

    2013-01-01

    Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938

  11. Quality optimized medical image information hiding algorithm that employs edge detection and data coding.

    PubMed

    Al-Dmour, Hayat; Al-Ani, Ahmed

    2016-04-01

    The present work has the goal of developing a secure medical imaging information system based on a combined steganography and cryptography technique. It attempts to securely embed a patient's confidential information into his/her medical images. The proposed information security scheme conceals coded Electronic Patient Records (EPRs) into medical images in order to protect the EPRs' confidentiality without affecting the image quality and particularly the Region of Interest (ROI), which is essential for diagnosis. The secret EPR data is converted into ciphertext using a private symmetric encryption method. Since the Human Visual System (HVS) is less sensitive to alterations in sharp regions compared to uniform regions, a simple edge detection method has been introduced to identify and embed in edge pixels, which leads to an improved stego image quality. In order to increase the embedding capacity, the algorithm embeds a variable number of bits (up to 3) in edge pixels based on the strength of edges. Moreover, to increase the efficiency, two message coding mechanisms have been utilized to enhance the ±1 steganography. The first one, which is based on Hamming code, is simple and fast, while the other, known as the Syndrome Trellis Code (STC), is more sophisticated as it attempts to find a stego image that is close to the cover image through minimizing the embedding impact. The proposed steganography algorithm embeds the secret data bits into the Region of Non Interest (RONI), while the ROI, due to its importance, is preserved from modification. The experimental results demonstrate that the proposed method can embed a large amount of secret data without leaving a noticeable distortion in the output image. The effectiveness of the proposed algorithm is also proven using one of the efficient steganalysis techniques. The proposed medical imaging information system proved to be capable of concealing EPR data and producing imperceptible stego images with minimal embedding distortions compared to other existing methods. In order to refrain from introducing any modifications to the ROI, the proposed system only utilizes the RONI in embedding the EPR data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
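
    The Hamming-code based message coding mentioned above can be illustrated with the classic (7,4) matrix-embedding example, which hides 3 message bits in 7 cover bits while changing at most one of them. The edge-adaptive pixel selection and the STC variant of the paper are not reproduced, and the helper names are assumptions.

      import numpy as np

      # Parity-check matrix of the (7,4) Hamming code: column i is the binary
      # representation of i+1.
      H = np.array([[(i >> k) & 1 for i in range(1, 8)] for k in range(3)])

      def embed_hamming(cover_bits, message_bits):
          """Matrix embedding: hide 3 message bits in 7 cover LSBs while flipping
          at most one cover bit (textbook illustration only)."""
          c = np.array(cover_bits) % 2
          m = np.array(message_bits) % 2
          syndrome = H @ c % 2
          e = syndrome ^ m
          if e.any():
              pos = int(e[0] + 2 * e[1] + 4 * e[2]) - 1   # column of H equal to e
              c[pos] ^= 1
          return c

      def extract_hamming(stego_bits):
          return H @ (np.array(stego_bits) % 2) % 2

      cover = [1, 0, 1, 1, 0, 0, 1]
      msg = [1, 0, 1]
      stego = embed_hamming(cover, msg)
      assert list(extract_hamming(stego)) == msg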

  12. Sedimentology of Martian Gravels from Mardi Twilight Imaging: Techniques

    NASA Technical Reports Server (NTRS)

    Garvin, James B.; Malin, Michael C.; Minitti, M. E.

    2014-01-01

    Quantitative sedimentologic analysis of gravel surfaces dominated by pebble-sized clasts has been employed in an effort to untangle aspects of the provenance of surface sediments on Mars using Curiosity's MARDI nadir-viewing camera operated at twilight. Images have been systematically acquired since sol 310, providing a representative sample of gravel-covered surfaces since the rover departed the Shaler region. The MARDI Twilight imaging dataset offers approximately 1 millimeter spatial resolution (slightly out of focus) for patches beneath the rover that cover just under 1 m2 in area, under illumination that makes clast size and inter-clast spacing analysis relatively straightforward using semi-automated codes developed for use with nadir images. Twilight images are utilized for these analyses in order to reduce light scattering off dust deposited on the front MARDI lens element during the terminal stages of Curiosity's entry, descent and landing. Such scattering is worse when imaging bright, directly-illuminated surfaces; twilight imaging times yield diffusely-illuminated surfaces that improve the clarity of the resulting MARDI product. Twilight images are obtained between 10-30 minutes after local sunset, governed by the timing of the end of the no-heat window for the camera. The same techniques were also applied to data from terrestrial locations (the Kau Desert in Hawaii and near Askja Caldera in Iceland). Methods employed include log hyperbolic size distribution (LHD) analysis and Delaunay triangulation (DT) inter-clast spacing analysis. This work extends the initial results reported in Yingst et al., which covered the initial landing zone, to the Rapid-Transit Route (RTR) towards Mount Sharp.

  13. Efficient random access high resolution region-of-interest (ROI) image retrieval using backward coding of wavelet trees (BCWT)

    NASA Astrophysics Data System (ADS)

    Corona, Enrique; Nutter, Brian; Mitra, Sunanda; Guo, Jiangling; Karp, Tanja

    2008-03-01

    Efficient retrieval of high quality Regions-Of-Interest (ROI) from high resolution medical images is essential for reliable interpretation and accurate diagnosis. Random access to high quality ROI from codestreams is becoming an essential feature in many still image compression applications, particularly in viewing diseased areas from large medical images. This feature is easier to implement in block based codecs because of the inherent spatial independence of the code blocks. This independence implies that the decoding order of the blocks is unimportant as long as the position of each is properly identified. In contrast, wavelet-tree based codecs naturally use some interdependency that exploits the decaying spectrum model of the wavelet coefficients. Thus one must keep track of the decoding order from level to level with such codecs. We have developed an innovative multi-rate image subband coding scheme using "Backward Coding of Wavelet Trees (BCWT)" which is fast, memory efficient, and resolution scalable. It offers far less complexity than many other existing codecs, including both wavelet-tree and block based algorithms. The ROI feature in BCWT is implemented through a transcoder stage that generates a new BCWT codestream containing only the information associated with the user-defined ROI. This paper presents an efficient technique that locates a particular ROI within the BCWT coded domain and decodes it back to the spatial domain. This technique allows better access and proper identification of pathologies in high resolution images since only a small fraction of the codestream is required to be transmitted and analyzed.

  14. Multidimensional incremental parsing for universal source coding.

    PubMed

    Bae, Soo Hyun; Juang, Biing-Hwang

    2008-10-01

    A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes: maximum decimation matching, a hierarchical structure of multidimensional source coding, and dictionary augmentation. As a counterpart of the longest match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, the underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended to the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions, the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, but the images produced with the local minimax distortion have good perceptual fidelity compared with other compression algorithms. Our insights inspire future research on feature extraction from multidimensional discrete sources.
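
    For orientation, the one-dimensional incremental parsing that MDIP generalizes can be sketched as a plain LZ78-style parser; the multidimensional decimation matching and hierarchical structure of the paper are not reproduced, and the trailing-phrase handling is a simplifying assumption.

      def lz78_parse(sequence):
          """LZ78-style incremental parsing: each new phrase is the longest
          previously seen phrase plus one extra symbol, and the new phrase is then
          added to the dictionary."""
          dictionary = {(): 0}              # the empty phrase has index 0
          phrases = []
          current = ()
          for symbol in sequence:
              extended = current + (symbol,)
              if extended in dictionary:
                  current = extended        # keep extending the match
              else:
                  phrases.append((dictionary[current], symbol))
                  dictionary[extended] = len(dictionary)
                  current = ()
          if current:                       # emit any incomplete final phrase
              phrases.append((dictionary[current[:-1]], current[-1]))
          return phrases

      print(lz78_parse("abababcab"))
      # -> [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'c'), (1, 'b')]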

  15. Class of near-perfect coded apertures

    NASA Technical Reports Server (NTRS)

    Cannon, T. M.; Fenimore, E. E.

    1977-01-01

    Coded aperture imaging of gamma ray sources has long promised an improvement in the sensitivity of various detector systems. The promise has remained largely unfulfilled, however, for either one of two reasons. First, the encoding/decoding method produces artifacts, which even in the absence of quantum noise, restrict the quality of the reconstructed image. This is true of most correlation-type methods. Second, if the decoding procedure is of the deconvolution variety, small terms in the transfer function of the aperture can lead to excessive noise in the reconstructed image. It is proposed to circumvent both of these problems by use of a uniformly redundant array (URA) as the coded aperture in conjunction with a special correlation decoding method.
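
    The encode/decode structure described above can be sketched as follows. A random binary mask stands in for a true uniformly redundant array, so the reconstruction keeps some sidelobe noise that a URA would remove; the point is only to show convolution encoding followed by correlation decoding with the balanced array G = 2A - 1, and all names are illustrative.

      import numpy as np

      def encode_decode(scene, aperture):
          """Coded-aperture imaging: the detector records the scene circularly
          convolved with the binary aperture; correlation with the balanced
          decoding array G = 2A - 1 reconstructs the scene (up to sidelobe noise
          for a non-URA mask)."""
          recorded = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(aperture)))
          G = 2 * aperture - 1
          decode = np.conj(np.fft.fft2(G))
          return np.real(np.fft.ifft2(np.fft.fft2(recorded) * decode)) / aperture.sum()

      rng = np.random.default_rng(1)
      aperture = (rng.random((64, 64)) < 0.5).astype(float)
      scene = np.zeros((64, 64)); scene[20, 30] = 1.0       # single point source
      image = encode_decode(scene, aperture)
      print(np.unravel_index(image.argmax(), image.shape))  # peak recovered near (20, 30)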

  16. Hard X-ray imaging from Explorer

    NASA Technical Reports Server (NTRS)

    Grindlay, J. E.; Murray, S. S.

    1981-01-01

    Coded aperture X-ray detectors have been applied to obtain large increases in sensitivity as well as angular resolution. A hard X-ray coded aperture detector concept is described which enables very high sensitivity studies of persistent hard X-ray sources and gamma ray bursts. Coded aperture imaging is employed so that source locations accurate to approximately 2 arcmin can be derived within a 3 deg field of view. Gamma-ray bursts would be located initially to within approximately 2 deg, and X-ray/hard X-ray spectra and timing, as well as precise locations, derived for possible burst afterglow emission. It is suggested that hard X-ray imaging should be conducted from an Explorer mission where long exposure times are possible.

  17. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both of the spatial domain techniques, LTPP and BTC, provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as rate-distortion performance, computational complexity, algorithm flexibility, and controllability of the compression ratio are also discussed.
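
    Of the techniques listed, block truncation coding is simple enough to sketch directly; a minimal moment-preserving BTC pass over 4x4 blocks (the classical variant, not necessarily the exact configuration evaluated above) looks like this:

      # Minimal moment-preserving block truncation coding (BTC) on 4x4 blocks;
      # rate is about 2 bits/pixel for 8-bit data (16-bit bitmap + two levels).
      import numpy as np

      def btc_block(block):
          m = block.size
          mean, std = block.mean(), block.std()
          bitmap = block >= mean
          q = int(bitmap.sum())
          if q in (0, m):                    # flat block: keep the mean only
              return np.full_like(block, mean)
          low = mean - std * np.sqrt(q / (m - q))
          high = mean + std * np.sqrt((m - q) / q)
          return np.where(bitmap, high, low)

      def btc_image(img, bs=4):
          out = np.empty(img.shape, dtype=float)
          for i in range(0, img.shape[0], bs):
              for j in range(0, img.shape[1], bs):
                  out[i:i+bs, j:j+bs] = btc_block(img[i:i+bs, j:j+bs].astype(float))
          return out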

  18. Validation of a Monte Carlo code system for grid evaluation with interference effect on Rayleigh scattering

    NASA Astrophysics Data System (ADS)

    Zhou, Abel; White, Graeme L.; Davidson, Rob

    2018-02-01

    Anti-scatter grids are commonly used in x-ray imaging systems to reduce the scatter radiation reaching the image receptor. Anti-scatter grid performance can be simulated and validated using Monte Carlo (MC) methods. Our recently reported work modified existing MC codes, resulting in improved performance when simulating x-ray imaging. The aim of this work is to validate the transmission of x-ray photons through grids computed with the recently reported new MC codes against experimental results and results previously reported in the literature. The results of this work show that the scatter-to-primary ratio (SPR) and the transmissions of primary (Tp), scatter (Ts), and total (Tt) radiation determined using this new MC code system agree strongly with the experimental results and the results reported in the literature. Tp, Ts, Tt, and SPR determined with this new MC simulation code system are therefore valid. These results also show that the interference effect on Rayleigh scattering should not be neglected in the evaluation of either mammographic or general-purpose grids. Our new MC simulation code system has been shown to be valid and can be used for analysing and evaluating grid designs.
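
    The figures of merit named above follow directly from photon tallies with and without the grid; a small sketch of that bookkeeping (with made-up illustrative tallies, not values from the paper) is:

      # Grid figures of merit from hypothetical Monte Carlo tallies of primary and
      # scatter photons reaching the receptor, with and without the grid.
      def grid_metrics(primary_no_grid, scatter_no_grid, primary_with_grid, scatter_with_grid):
          Tp = primary_with_grid / primary_no_grid       # primary transmission
          Ts = scatter_with_grid / scatter_no_grid       # scatter transmission
          Tt = (primary_with_grid + scatter_with_grid) / (primary_no_grid + scatter_no_grid)
          SPR = scatter_with_grid / primary_with_grid    # scatter-to-primary ratio behind the grid
          return Tp, Ts, Tt, SPR

      print(grid_metrics(1.0e6, 2.5e6, 7.0e5, 2.0e5))    # illustrative numbers only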

  19. Color image analysis of contaminants and bacteria transport in porous media

    NASA Astrophysics Data System (ADS)

    Rashidi, Mehdi; Dehmeshki, Jamshid; Daemi, Mohammad F.; Cole, Larry; Dickenson, Eric

    1997-10-01

    Transport of contaminants and bacteria in aqueous, heterogeneous, saturated porous systems has been studied experimentally using a novel fluorescent microscopic imaging technique. The approach involves color visualization and quantification of bacterial and contaminant distributions within a transparent porous column. By introducing stained bacteria and an organic dye as a contaminant into the column and illuminating the porous regions with a planar laser sheet, contaminant and bacterial transport processes through the porous medium can be observed and measured microscopically. A computer-controlled color CCD camera is used to record the fluorescent images as a function of time. These images are recorded by a frame-accurate, high-resolution VCR, digitized, and then analyzed using a color image analysis code written in our laboratories, from which simultaneous concentration and velocity distributions of both contaminant and bacteria are evaluated as a function of time and pore characteristics. The approach provides a unique dynamic probe for observing these transport processes microscopically. These results are extremely valuable in in-situ bioremediation problems, since microscopic particle-contaminant-bacterium interactions are the key to understanding and optimizing these processes.

  20. LSB-based Steganography Using Reflected Gray Code for Color Quantum Images

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Lu, Aiping

    2018-02-01

    At present, classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For the existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel. Therefore, it is meaningful to study how to improve the embedding capacity of quantum image steganography. This work presents a novel LSB-based steganography scheme using reflected Gray code for color quantum images, with an embedding capacity of up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is treated as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the RGB channels of each color pixel simultaneously, using reflected Gray code to determine the embedded bits from the secret information. Because of this transformation rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences reach almost 50%. Experimental results confirm that the proposed scheme performs well and outperforms previous schemes in the literature in terms of embedding capacity.
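
    As a rough classical analogue of the embedding rule (the paper realizes it with quantum circuits), one plausible reading is sketched below: the first bit of each 4-bit segment goes to the second LSB of B, and the remaining three bits, converted to reflected Gray code, go to the LSBs of R, G, and B. The exact mapping used in the paper may differ in detail.

      # Classical, simplified analogue of the described embedding rule (assumption:
      # 3-bit remainder is Gray-coded into the R/G/B LSBs; first bit into B's 2nd LSB).
      def binary_to_gray(v):
          return v ^ (v >> 1)

      def embed_segment(pixel, segment):
          """pixel: (r, g, b) 8-bit ints; segment: 4-bit int of secret data."""
          r, g, b = pixel
          first = (segment >> 3) & 1
          rest = binary_to_gray(segment & 0b111)
          b = (b & ~0b10) | (first << 1)       # first bit -> second LSB of B
          r = (r & ~1) | ((rest >> 2) & 1)     # Gray-coded bits -> LSBs of R, G, B
          g = (g & ~1) | ((rest >> 1) & 1)
          b = (b & ~1) | (rest & 1)
          return (r, g, b)

      print(embed_segment((200, 120, 64), 0b1011))   # -> (200, 121, 66)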

  1. Optimization of wavefront coding imaging system using heuristic algorithms

    NASA Astrophysics Data System (ADS)

    González-Amador, E.; Padilla-Vivanco, A.; Toxqui-Quitl, C.; Zermeño-Loreto, O.

    2017-08-01

    Wavefront Coding (WFC) systems use an aspheric Phase Mask (PM) and digital image processing to extend the Depth of Field (EDoF) of computational imaging systems. For years, several kinds of PM have been designed to produce a nearly defocus-invariant point spread function (PSF). In this paper, the phase deviation parameter is optimized by means of genetic algorithms (GAs). The merit function minimizes the mean square error (MSE) between the diffraction-limited Modulation Transfer Function (MTF) and the MTF of the wavefront-coded system at different amounts of misfocus. WFC systems were simulated using the cubic, the trefoil, and four Zernike-polynomial phase masks. Numerical results show defocus-invariant behavior in all cases. Nevertheless, the best results are obtained with the trefoil phase mask, because its decoded image is almost free of artifacts.
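
    The merit function can be sketched numerically from the pupil function: apply a cubic phase term plus a defocus term, compute the PSF and MTF by Fourier transform, and score the MSE against the diffraction-limited MTF. The sketch below uses a brute-force sweep of the phase-deviation parameter as a stand-in for the genetic algorithm; grid size and defocus values are arbitrary.

      # MSE-between-MTFs merit for a cubic phase mask, minimized by a simple sweep
      # (a stand-in for the GA; illustrative parameters only).
      import numpy as np

      N = 128
      x = np.linspace(-1, 1, N)
      X, Y = np.meshgrid(x, x)
      pupil_mask = (X**2 + Y**2) <= 1.0

      def mtf(alpha, defocus):
          phase = alpha * (X**3 + Y**3) + defocus * (X**2 + Y**2)
          pupil = pupil_mask * np.exp(1j * phase)
          psf = np.abs(np.fft.fft2(pupil))**2
          otf = np.abs(np.fft.fft2(psf))
          return otf / otf.max()

      reference = mtf(0.0, 0.0)                 # diffraction-limited MTF
      defocus_set = [0.0, 3.0, 6.0]             # misfocus values (radians, illustrative)

      def merit(alpha):
          return np.mean([np.mean((mtf(alpha, d) - reference)**2) for d in defocus_set])

      alphas = np.linspace(0.0, 60.0, 31)
      print("best alpha:", min(alphas, key=merit))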

  2. Deep Constrained Siamese Hash Coding Network and Load-Balanced Locality-Sensitive Hashing for Near Duplicate Image Detection.

    PubMed

    Hu, Weiming; Fan, Yabo; Xing, Junliang; Sun, Liang; Cai, Zhaoquan; Maybank, Stephen

    2018-09-01

    We construct a new efficient near duplicate image detection method using a hierarchical hash code learning neural network and load-balanced locality-sensitive hashing (LSH) indexing. We propose a deep constrained siamese hash coding neural network combined with deep feature learning. Our neural network is able to extract effective features for near duplicate image detection. The extracted features are used to construct a LSH-based index. We propose a load-balanced LSH method to produce load-balanced buckets in the hashing process. The load-balanced LSH significantly reduces the query time. Based on the proposed load-balanced LSH, we design an effective and feasible algorithm for near duplicate image detection. Extensive experiments on three benchmark data sets demonstrate the effectiveness of our deep siamese hash encoding network and load-balanced LSH.
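
    For context, a plain-vanilla random-hyperplane LSH index over real-valued feature vectors is sketched below; the paper's contributions (the learned siamese hash codes and the load-balanced bucket assignment) are not reproduced here.

      # Generic random-hyperplane LSH index for feature vectors (illustrative only).
      import numpy as np
      from collections import defaultdict

      class SimpleLSH:
          def __init__(self, dim, n_bits=16, seed=0):
              self.planes = np.random.default_rng(seed).normal(size=(n_bits, dim))
              self.buckets = defaultdict(list)

          def _key(self, vec):
              return tuple((self.planes @ vec > 0).astype(int))

          def add(self, item_id, vec):
              self.buckets[self._key(vec)].append(item_id)

          def query(self, vec):
              return self.buckets.get(self._key(vec), [])

      index = SimpleLSH(dim=128)
      index.add("img_001", np.random.default_rng(1).normal(size=128))
      print(index.query(np.random.default_rng(1).normal(size=128)))   # ['img_001']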

  3. TH-AB-209-10: Breast Cancer Identification Through X-Ray Coherent Scatter Spectral Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kapadia, A; Morris, R; Albanese, K

    Purpose: We have previously described the development and testing of a coherent-scatter spectral imaging system for identification of cancer. Our prior evaluations were performed using either tissue surrogate phantoms or formalin-fixed tissue obtained from pathology. Here we present the first results from a scatter imaging study using fresh breast tumor tissues obtained through surgical excision. Methods: A coherent-scatter imaging system was built using a clinical X-ray tube, photon counting detectors, and custom-designed coded-apertures. System performance was characterized using calibration phantoms of biological materials. Fresh breast tumors were obtained from patients undergoing mastectomy and lumpectomy surgeries for breast cancer. Each specimen was vacuum-sealed, scanned using the scatter imaging system, and then sent to pathology for histological workup. Scatter images were generated separately for each tissue specimen and analyzed to identify voxels containing malignant tissue. The images were compared against histological analysis (H&E + pathologist identification of tumors) to assess the match between scatter-based and histological diagnosis. Results: In all specimens scanned, the scatter images showed the location of cancerous regions within the specimen. The detection and classification was performed through automated spectral matching without the need for manual intervention. The scatter spectra corresponding to cancer tissue were found to be in agreement with those reported in literature. Inter-patient variability was found to be within limits reported in literature. The scatter images showed agreement with pathologist-identified regions of cancer. Spatial resolution for this configuration of the scanner was determined to be 2–3 mm, and the total scan time for each specimen was under 15 minutes. Conclusion: This work demonstrates the utility of coherent scatter imaging in identifying cancer based on the scatter properties of the tissue. It presents the first results from coherent scatter imaging of fresh (unfixed) breast tissue using our coded-aperture scatter imaging approach for cancer identification.

  4. Transmission line based thermoacoustic imaging of small animals

    NASA Astrophysics Data System (ADS)

    Omar, Murad; Kellnberger, Stephan; Sergiadis, George; Razansky, Daniel; Ntziachristos, Vasilis

    2013-06-01

    We have generated high-resolution images of RF contrast in small animals using a near-field thermoacoustic system. This enables us to see some anatomical features of a mouse, such as the heart, the spine, and the body boundary.

  5. Method for Assessment of Changes in the Width of Cracks in Cement Composites with Use of Computer Image Processing and Analysis

    NASA Astrophysics Data System (ADS)

    Tomczak, Kamil; Jakubowski, Jacek; Fiołek, Przemysław

    2017-06-01

    Crack width measurement is an important element of research on the progress of self-healing in cement composites. Due to the nature of this research, the method of measuring the width of cracks and their changes over time must meet specific requirements. The article presents a novel method of measuring crack width based on images from a scanner with an optical resolution of 6400 dpi, subjected to initial image processing in the ImageJ environment and to further processing and analysis of results. After registering a series of images of the cracks taken at different times using SIFT (Scale-Invariant Feature Transform) features, a dense network of line segments is created in all images, intersecting the cracks perpendicular to their local axes. Along these line segments, brightness profiles are extracted, which are the basis for determining the crack width. The distribution and rotation of the intersection lines in a regular layout, the automation of transformations, the management of images and brightness profiles, and the data analysis used to determine crack widths and their changes over time are performed automatically by our own code in the ImageJ and VBA environments. The article describes the method, tests of its properties, and sources of measurement uncertainty. It also presents an example application of the method in research on autogenous self-healing of concrete, specifically the reduction of a sample crack width and its full closure within 28 days of the self-healing process.
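
    A hedged sketch of the width-measurement step is given below: sample a brightness profile along one of the perpendicular segments and convert the dark run to millimetres using the 6400 dpi scan resolution. The threshold choice and nearest-pixel sampling are simplifications of the paper's procedure.

      # Crack width from a brightness profile along a segment (simplified sketch).
      import numpy as np

      DPI = 6400
      MM_PER_PIXEL = 25.4 / DPI

      def crack_width_mm(image, p0, p1, n_samples=200, threshold=None):
          """Width of the dark crack crossed by the segment p0 -> p1 (row, col coords)."""
          rows = np.linspace(p0[0], p1[0], n_samples)
          cols = np.linspace(p0[1], p1[1], n_samples)
          profile = image[rows.round().astype(int), cols.round().astype(int)].astype(float)
          if threshold is None:
              threshold = 0.5 * (profile.min() + profile.max())   # mid-level threshold
          dark = profile < threshold                              # crack pixels are dark
          seg_len_px = np.hypot(p1[0] - p0[0], p1[1] - p0[1])
          return dark.sum() * (seg_len_px / n_samples) * MM_PER_PIXEL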

  6. Practical Implementation of Prestack Kirchhoff Time Migration on a General Purpose Graphics Processing Unit

    NASA Astrophysics Data System (ADS)

    Liu, Guofeng; Li, Chun

    2016-08-01

    In this study, we present a practical implementation of prestack Kirchhoff time migration (PSTM) on a general purpose graphics processing unit. First, we consider the three main optimizations of the PSTM GPU code: designing a reasonable execution configuration, using texture memory for velocity interpolation, and applying intrinsic functions in device code. This approach can achieve a speedup of nearly 45 times on an NVIDIA GTX 680 GPU compared with CPU code when a larger imaging space is used, where the PSTM output is common-reflection-point gathers stored as I[nx][ny][nh][nt] in matrix format. However, this method requires more memory, so a limited imaging space cannot fully exploit the GPU resources. To overcome this problem, we designed a PSTM scheme with multiple GPUs that images different seismic data on different GPUs according to an offset value. This process achieves the peak speedup of the GPU PSTM code and greatly increases the efficiency of the calculations without changing the imaging result.

  7. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminance correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing natural, real scenes as we see them in the everyday world is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the correlation between the two images. This is done by designing an efficient transform, inspired by the Lifting Scheme (LS), that reduces the redundancy in the stereo image pair. The novelty of our work is that the prediction step has been replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and lossy coding. Experimental results show improvements in performance and complexity compared to recently proposed methods.
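
    To fix ideas, the one-dimensional Haar lifting step below shows the predict/update structure that the stereo coder builds on; in the paper the predict step operating between the left and right images is replaced by disparity compensation plus luminance correction, which is not modelled here.

      # 1-D Haar wavelet via lifting: predict (detail) then update (approximation).
      import numpy as np

      def haar_lifting_forward(signal):
          even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
          detail = odd - even              # predict odd samples from even ones
          approx = even + detail / 2.0     # update to preserve the running mean
          return approx, detail

      def haar_lifting_inverse(approx, detail):
          even = approx - detail / 2.0
          odd = detail + even
          out = np.empty(even.size + odd.size)
          out[0::2], out[1::2] = even, odd
          return out

      x = np.array([8, 10, 9, 3, 4, 6])
      print(np.allclose(haar_lifting_inverse(*haar_lifting_forward(x)), x))   # True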

  8. Visual communication - Information and fidelity. [of images

    NASA Technical Reports Server (NTRS)

    Huck, Freidrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur; Reichenbach, Stephen E.

    1993-01-01

    This assessment of visual communication deals with image gathering, coding, and restoration as a whole rather than as separate and independent tasks. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image. Past applications of these criteria to the assessment of image coding and restoration have been limited to the link that connects the output of the image-gathering device to the input of the image-display device. By contrast, the approach presented in this paper explicitly includes the critical limiting factors that constrain image gathering and display. This extension leads to an end-to-end assessment theory of visual communication that combines optical design with digital processing.

  9. Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan

    2018-01-01

    It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different, whose relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the theoretical model proposed remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.

  10. Predictions of the spontaneous symmetry-breaking theory for visual code completeness and spatial scaling in single-cell learning rules.

    PubMed

    Webber, C J

    2001-05-01

    This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.

  11. Report of AAPM Task Group 162: Software for planar image quality metrology.

    PubMed

    Samei, Ehsan; Ikejimba, Lynda C; Harrawood, Brian P; Rong, John; Cunningham, Ian A; Flynn, Michael J

    2018-02-01

    The AAPM Task Group 162 aimed to provide a standardized approach for the assessment of image quality in planar imaging systems. This report offers a description of the approach as well as the details of the resultant software bundle to measure the detective quantum efficiency (DQE) along with its basis components and derivatives. The methodology and the associated software include the characterization of the noise power spectrum (NPS) from planar images acquired under specific acquisition conditions, the modulation transfer function (MTF) using an edge test object, the DQE, and the effective DQE (eDQE). First, a methodological framework is provided to highlight the theoretical basis of the work. Then, a step-by-step guide is included to assist in proper execution of each component of the code. Lastly, an evaluation of the method is included to validate its accuracy against model-based and experimental data. The code was built under the Macintosh OS X operating system. The software package contains all the source code needed to permit an experienced user to build the suite on a Linux or other *nix-type system. The package further includes manuals, sample images, and scripts to demonstrate use of the software for new users. The results of the code are in close alignment with theoretical expectations and published results from experimental data. The methodology and the software package offered in AAPM TG162 can be used as a baseline for characterization of the inherent image quality attributes of planar imaging systems. © 2017 American Association of Physicists in Medicine.
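
    As a rough illustration of one ingredient, the sketch below estimates a 2D noise power spectrum from mean-subtracted flat-field regions of interest; the report's detrending, windowing, and normalization details are omitted, so this is not the TG-162 implementation itself.

      # Simplified 2D NPS estimate from a stack of mean-subtracted flat-field ROIs.
      import numpy as np

      def nps_2d(rois, pixel_pitch_mm):
          """rois: array of shape (n_rois, N, N), each ROI already mean-subtracted."""
          n_rois, N, _ = rois.shape
          spectra = np.abs(np.fft.fft2(rois))**2
          nps = spectra.mean(axis=0) * pixel_pitch_mm**2 / (N * N)
          freqs = np.fft.fftshift(np.fft.fftfreq(N, d=pixel_pitch_mm))   # cycles/mm
          return np.fft.fftshift(nps), freqs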

  12. Drawing dynamical and parameters planes of iterative families and methods.

    PubMed

    Chicharro, Francisco I; Cordero, Alicia; Torregrosa, Juan R

    2013-01-01

    The complex dynamical analysis of the parametric fourth-order Kim's iterative family is carried out on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide us with excellent schemes (or dreadful ones).

  13. Parallel optical image addition and subtraction in a dynamic photorefractive memory by phase-code multiplexing

    NASA Astrophysics Data System (ADS)

    Denz, Cornelia; Dellwig, Thilo; Lembcke, Jan; Tschudi, Theo

    1996-02-01

    We propose and demonstrate experimentally a method for utilizing a dynamic phase-encoded photorefractive memory to realize parallel optical addition, subtraction, and inversion operations on stored images. The phase-encoded holographic memory is realized in photorefractive BaTiO3, storing eight images using Walsh-Hadamard binary phase codes and an incremental recording procedure. By subsampling the set of reference beams during the recall operation, the selectivity of the phase address is decreased, allowing different linear combinations of the images to be realized at the output of the memory.

  14. High-frequency Total Focusing Method (TFM) imaging in strongly attenuating materials with the decomposition of the time reversal operator associated with orthogonal coded excitations

    NASA Astrophysics Data System (ADS)

    Villaverde, Eduardo Lopez; Robert, Sébastien; Prada, Claire

    2017-02-01

    In the present work, the Total Focusing Method (TFM) is used to image defects in a High Density Polyethylene (HDPE) pipe. The viscoelastic attenuation of this material corrupts the images with high electronic noise. In order to improve the image quality, Decomposition of the Time Reversal Operator (DORT) filtering is combined with spatial Walsh-Hadamard coded transmissions before calculating the images. Experiments on a complex HDPE joint demonstrate that this method improves the signal-to-noise ratio by more than 40 dB compared with conventional TFM.

  15. Demonstration of the CDMA-mode CAOS smart camera.

    PubMed

    Riza, Nabeel A; Mazhar, Mohsin A

    2017-12-11

    Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based, spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode, with a controlled factor-of-200 optical attenuation of the scene irradiance, to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor image data is used to acquire a more robust, un-attenuated true target image of the focused zone using the time-modulated CDMA mode of the CAOS camera. Using four different bright-light test target scenes, a proof-of-concept visible-band CAOS smart camera operating in the CDMA mode is successfully demonstrated, using Walsh-design CAOS pixel codes of up to 4096 bits length at a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel, 13.68 μm on a side. The CDMA mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, unspoiled, bright, spectrally diverse targets.
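
    The CDMA-mode principle (each CAOS pixel time-modulated by its own Walsh code, the point detector summing all of them, and per-pixel irradiances recovered by correlation) can be illustrated numerically; the toy below uses 16-bit codes and ignores the DMD, ADC, and noise entirely.

      # Toy CDMA-mode encode/decode with Walsh (Hadamard) codes.
      import numpy as np

      def walsh_matrix(order):
          """Hadamard matrix of size 2**order via the Sylvester construction."""
          H = np.array([[1.0]])
          for _ in range(order):
              H = np.block([[H, H], [H, -H]])
          return H

      codes = walsh_matrix(4)                       # 16 codes of length 16
      irradiances = np.array([0.0, 3.2, 1.1, 0.0, 5.5] + [0.0] * 11)
      detector_signal = irradiances @ codes         # summed point-detector time signal
      recovered = (detector_signal @ codes.T) / codes.shape[1]   # correlation receiver
      print(np.allclose(recovered, irradiances))    # True: orthogonal codes separate pixels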

  16. Method for measuring the focal spot size of an x-ray tube using a coded aperture mask and a digital detector.

    PubMed

    Russo, Paolo; Mettivier, Giovanni

    2011-04-01

    The goal of this study is to evaluate a new method based on a coded aperture mask combined with a digital x-ray imaging detector for measurements of the focal spot sizes of diagnostic x-ray tubes. Common techniques for focal spot size measurements employ a pinhole camera, a slit camera, or a star resolution pattern. The coded aperture mask is a radiation collimator consisting of a large number of apertures disposed on a predetermined grid in an array, through which the radiation source is imaged onto a digital x-ray detector. The coded mask camera method allows one to obtain a one-shot, accurate, and direct measurement of the two dimensions of the focal spot (as with a pinhole camera) but at a low tube loading (as with a slit camera). The large number of small apertures in the coded mask operates as a "multipinhole" with greater efficiency than a single pinhole, while keeping the resolution of a single pinhole. X-ray images result from the multiplexed output on the detector image plane of such a multiple aperture array, and the image of the source is digitally reconstructed with a deconvolution algorithm. Images of the focal spot of a laboratory x-ray tube (W anode; 35-80 kVp; focal spot size of 0.04 mm) were acquired at different geometrical magnifications with two different types of digital detector (a photon counting hybrid silicon pixel detector with 0.055 mm pitch and a flat panel CMOS digital detector with 0.05 mm pitch) using a high resolution coded mask (a no-two-holes-touching modified uniformly redundant array) with 480 apertures of 0.07 mm, designed for imaging at energies below 35 keV. Measurements with a slit camera were performed for comparison. A test with a pinhole camera and with the coded mask on a computed radiography mammography unit with a 0.3 mm focal spot was also carried out. The full width at half maximum focal spot sizes were obtained from the line profiles of the decoded images, showing a focal spot of 0.120 mm x 0.105 mm at 35 kVp and M = 6.1, with a detector entrance exposure as low as 1.82 mR (0.125 mA s tube load). The slit camera indicated a focal spot of 0.112 mm x 0.104 mm at 35 kVp and M = 3.15, with an exposure at the detector of 72 mR. Focal spot measurements with the coded mask could be performed up to 80 kVp. Tolerance to angular misalignment with the reference beam of up to 7 degrees for in-plane rotations and 1 degree for out-of-plane rotations was observed. The axial distance of the focal spot from the coded mask could also be determined. It is possible to determine the beam intensity via measurement of the intensity of the decoded image of the focal spot and a calibration procedure. Coded aperture masks coupled to a digital area detector provide precise determinations of the focal spot of an x-ray tube with reduced tube loading and measurement time, together with a large tolerance in the alignment of the mask.
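
    The decoding step described above amounts to deconvolving the recorded multiplexed image with the known mask pattern. A minimal frequency-domain sketch with Wiener-style regularization is given below; it assumes a shift-invariant, periodic geometry and ignores magnification and the specific no-two-holes-touching MURA construction.

      # Wiener-regularized deconvolution of a coded-mask image (illustrative only).
      import numpy as np

      def decode_focal_spot(recorded, mask, reg=1e-3):
          """recorded: detector image; mask: aperture pattern resampled to the same grid."""
          R = np.fft.fft2(recorded)
          M = np.fft.fft2(mask, s=recorded.shape)
          estimate = np.conj(M) * R / (np.abs(M)**2 + reg)
          return np.real(np.fft.ifft2(estimate))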

  17. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  18. Chaotic Image Encryption Algorithm Based on Bit Permutation and Dynamic DNA Encoding.

    PubMed

    Zhang, Xuncai; Han, Feng; Niu, Ying

    2017-01-01

    Exploiting the sensitivity of chaos to initial conditions and its pseudorandomness, combined with the spatial configurations of the DNA molecule and its inherent, unique information-processing ability, a novel image encryption algorithm based on bit permutation and dynamic DNA encoding is proposed here. The algorithm first uses Keccak to calculate the hash value of a given DNA sequence as the initial value of a chaotic map; second, it uses a chaotic sequence to scramble the image pixel locations, and a butterfly network is used to implement the bit permutation. Then, the image is dynamically coded into a DNA matrix, and an algebraic operation is performed with the DNA sequence to realize the substitution of the pixels, which further improves the security of the encryption. Finally, the confusion and diffusion properties of the algorithm are further enhanced by the operation on the DNA sequence and the ciphertext feedback. The results of the experiments and the security analysis show that the algorithm not only has a large key space and strong sensitivity to the key but can also effectively resist attacks such as statistical analysis and exhaustive analysis.
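
    Only the pixel-scrambling stage lends itself to a short sketch: a logistic map, seeded from a hash-derived value, generates a chaotic sequence whose sort order defines a permutation of pixel positions. The bit-level butterfly permutation, DNA coding, and ciphertext feedback of the full algorithm are not reproduced here.

      # Logistic-map pixel scrambling (one stage of the described algorithm, simplified).
      import numpy as np

      def logistic_permutation(n_pixels, x0, r=3.99, burn_in=1000):
          x = x0
          for _ in range(burn_in):           # discard transient iterations
              x = r * x * (1.0 - x)
          values = np.empty(n_pixels)
          for i in range(n_pixels):
              x = r * x * (1.0 - x)
              values[i] = x
          return np.argsort(values)          # chaotic sequence -> permutation

      def scramble(img, x0):
          flat = img.reshape(-1)
          return flat[logistic_permutation(flat.size, x0)].reshape(img.shape)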

  19. Chaotic Image Encryption Algorithm Based on Bit Permutation and Dynamic DNA Encoding

    PubMed Central

    2017-01-01

    Exploiting the sensitivity of chaos to initial conditions and its pseudorandomness, combined with the spatial configurations of the DNA molecule and its inherent, unique information-processing ability, a novel image encryption algorithm based on bit permutation and dynamic DNA encoding is proposed here. The algorithm first uses Keccak to calculate the hash value of a given DNA sequence as the initial value of a chaotic map; second, it uses a chaotic sequence to scramble the image pixel locations, and a butterfly network is used to implement the bit permutation. Then, the image is dynamically coded into a DNA matrix, and an algebraic operation is performed with the DNA sequence to realize the substitution of the pixels, which further improves the security of the encryption. Finally, the confusion and diffusion properties of the algorithm are further enhanced by the operation on the DNA sequence and the ciphertext feedback. The results of the experiments and the security analysis show that the algorithm not only has a large key space and strong sensitivity to the key but can also effectively resist attacks such as statistical analysis and exhaustive analysis. PMID:28912802

  20. Optimized nonorthogonal transforms for image compression.

    PubMed

    Guleryuz, O G; Orchard, M T

    1997-01-01

    The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operations of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation are provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.
