Science.gov

Sample records for fractal image coding

  1. Partial iterated function system-based fractal image coding

    NASA Astrophysics Data System (ADS)

    Wang, Zhou; Yu, Ying Lin

    1996-06-01

    A recent trend in computer graphics and image processing has been to use iterated function systems (IFS) to generate and describe images. Barnsley et al. introduced the concept of fractal image compression, and Jacquin was the first to propose a fully automatic gray-scale still-image coding algorithm. This paper introduces a generalization of the basic IFS, leading to the concept of the partial iterated function system (PIFS). A PIFS operator is contractive under certain conditions, and when it is used to generate an image, only part of the operator is actually applied iteratively. PIFS provides a flexible way to combine fractal coding with other image coding techniques, and many specific algorithms can be derived from it. On the basis of PIFS, we implement a partial fractal block coding (PFBC) algorithm and compare it with the basic IFS-based fractal block coding algorithm. Experimental results show that coding efficiency is improved and computation time is reduced, while image fidelity degrades only slightly.
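
    The core step that PIFS generalizes is the range/domain matching of basic fractal block coding: each range block R is approximated by a spatially contracted domain block D under an affine map s·D + o. The following is a minimal sketch, not the paper's algorithm; the block sizes, exhaustive search, and the contractivity bound on s are illustrative assumptions.

```python
import numpy as np

def make_domain_pool(img, size):
    """Downsample non-overlapping 2*size domain blocks to range-block size
    by 2x2 averaging (the spatial contraction)."""
    pool, step = [], 2 * size
    for y in range(0, img.shape[0] - step + 1, step):
        for x in range(0, img.shape[1] - step + 1, step):
            d = img[y:y+step, x:x+step].astype(float)
            pool.append(0.25 * (d[0::2, 0::2] + d[1::2, 0::2] +
                                d[0::2, 1::2] + d[1::2, 1::2]))
    return pool

def encode_block(range_block, domain_pool):
    """Find the best (domain index, contrast s, offset o) for one range block
    by least squares: minimise ||s*D + o - R||^2 over the pool."""
    R = range_block.ravel().astype(float)
    best = (None, 0.0, 0.0, np.inf)
    for idx, D in enumerate(domain_pool):
        d = D.ravel()
        var = d.var()
        s = 0.0 if var == 0 else np.cov(d, R, bias=True)[0, 1] / var
        s = np.clip(s, -0.9, 0.9)          # keep the map contractive
        o = R.mean() - s * d.mean()
        err = np.sum((s * d + o - R) ** 2)
        if err < best[3]:
            best = (idx, s, o, err)
    return best
```

    Decoding would iterate the stored maps from an arbitrary starting image; contractivity (|s| < 1) guarantees convergence to the encoded attractor.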

  2. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

    Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.

  3. Fractal images induce fractal pupil dilations and constrictions.

    PubMed

    Moon, P; Muday, J; Raynor, S; Schirillo, J; Boydston, C; Fairbanks, M S; Taylor, R P

    2014-09-01

    Fractals are self-similar structures or patterns that repeat at increasingly fine magnifications. Research has revealed fractal patterns in many natural and physiological processes. This article investigates pupillary size over time to determine whether its oscillations demonstrate a fractal pattern. We predict that pupil size over time will fluctuate in a fractal manner, and that this may be due to either the fractal neuronal structure or the fractal properties of the image viewed. We present evidence that low-complexity fractal patterns underlie pupillary oscillations as subjects view spatial fractal patterns. We also present evidence implicating the importance of the autonomic nervous system in these patterns. Using the variational method of the box-counting procedure, we demonstrate that low-complexity fractal patterns are found in changes in pupil size (in millimeters) over time, and our data suggest that these pupillary oscillation patterns do not depend on the fractal properties of the image viewed. PMID:24978815

  5. The Language of Fractals.

    ERIC Educational Resources Information Center

    Jurgens, Hartmut; And Others

    1990-01-01

    The production and application of images based on fractal geometry are described. Discussed are fractal language groups, fractal image coding, and fractal dialects. Implications for these applications of geometry to mathematics education are suggested. (CW)

  6. Engineering images designed by fractal subdivision scheme.

    PubMed

    Mustafa, Ghulam; Bari, Mehwish; Jamil, Saba

    2016-01-01

    This paper is concerned with the modeling of engineering images via the fractal properties of the 6-point binary interpolating subdivision scheme. An association between the fractal behavior of the limit curve/surface and the scheme's parameter is obtained, and the relationship between the subdivision parameter and the fractal dimension of the limit fractal curve is presented. Numerical examples and visual demonstrations show that the 6-point scheme is a good choice for generating fractals to model fractal antennas, bearings, garari's, rocks, etc. PMID:27652066

  7. A Novel Fractal Coding Method Based on M-J Sets

    PubMed Central

    Sun, Yuanyuan; Xu, Rudan; Chen, Lina; Kong, Ruiqing; Hu, Xiaopeng

    2014-01-01

    In this paper, we present a novel fractal coding method with a block classification scheme based on a shared domain block pool. In our method, the domain block pool, called a dictionary, is constructed from fractal Julia sets. The image is encoded by searching the dictionary for the best-matching domain block with the same BTC (Block Truncation Coding) value. The experimental results show that the scheme is competitive in both encoding speed and reconstruction quality. Particularly for large images, the proposed method avoids the excessive growth of computational complexity seen in the traditional fractal coding algorithm. PMID:25010686
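
    The BTC value used to index the dictionary comes from classic Block Truncation Coding, which summarizes a block by a bitmap of pixels above the block mean plus two reconstruction levels preserving the block's mean and variance. A small illustrative sketch (not the authors' code):

```python
import numpy as np

def btc(block):
    """Classic Block Truncation Coding of one block: a bitmap of pixels at or
    above the block mean plus two levels preserving mean and variance."""
    b = block.astype(float)
    mu, sigma = b.mean(), b.std()
    bitmap = b >= mu
    q, m = int(bitmap.sum()), b.size     # q pixels at/above the mean
    if q in (0, m):                      # flat block: a single level suffices
        return bitmap, mu, mu
    low  = mu - sigma * np.sqrt(q / (m - q))
    high = mu + sigma * np.sqrt((m - q) / q)
    return bitmap, low, high
```

    Reconstruction simply places `high` where the bitmap is set and `low` elsewhere; two blocks with the same bitmap are natural match candidates.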

  8. Pre-Service Teachers' Concept Images on Fractal Dimension

    ERIC Educational Resources Information Center

    Karakus, Fatih

    2016-01-01

    The analysis of pre-service teachers' concept images can provide information about their mental schema of fractal dimension. There is limited research on students' understanding of fractals and fractal dimension. Therefore, this study aimed to investigate pre-service teachers' understanding of fractal dimension based on concept images. The…

  9. Fractal Movies.

    ERIC Educational Resources Information Center

    Osler, Thomas J.

    1999-01-01

    Because fractal images are by nature very complex, it can be inspiring and instructive to create the code in the classroom and watch the fractal image evolve as the user slowly changes an important parameter or zooms in and out of the image. Uses a programming language that permits the user to store and retrieve a graphics image as a disk file.…

  10. Fractal steganography using artificially generated images

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Susmilch, Johanna M.

    2008-04-01

    Steganography is the art of hiding information in a cover image so that it is not readily apparent to a third-party observer. A variety of fractal steganographic methods exist for embedding information in an image. The main contribution of previous work was to embed data by modifying an original, pre-existing image, or by embedding an encrypted datastream in an image. In this paper we propose a new fractal steganographic method: the fractal parameters of the image are altered by the steganographic data while the image is generated, so the image is generated with the data already hidden in it rather than added after the fact. We show that the input parameters of the algorithm, such as the type of fractal and the number of iterations, serve as a simple secret key for extracting the hidden information. We explain how the capacity of the image is affected by varying the parameters. We also demonstrate how standard steganographic detection algorithms perform against the generated images and analyze their capability to detect information hidden with this new technique.
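
    One way to picture the key idea: because the cover fractal is generated deterministically from secret parameters, a receiver holding the same key can regenerate the clean fractal and read the data out of the difference. The sketch below is a simplified stand-in, not the authors' method; the Julia parameters, the +1 embedding rule, and the lack of saturation handling (a real scheme must avoid overflow at 255) are all illustrative assumptions.

```python
import numpy as np

def julia_image(c, size=64, iters=32):
    """Deterministically render a grayscale Julia-set escape-time image;
    (c, size, iters) act as the shared secret key."""
    y, x = np.mgrid[-1.5:1.5:size * 1j, -1.5:1.5:size * 1j]
    z = x + 1j * y
    count = np.zeros(z.shape, dtype=np.int64)
    for _ in range(iters):
        mask = np.abs(z) <= 2
        z[mask] = z[mask] ** 2 + c       # escape-time iteration z -> z^2 + c
        count += mask
    return (count * 255 // iters).astype(np.uint8)

def embed(bits, key):
    """Regenerate the cover from the key, then add +1 to one pixel per 1-bit."""
    cover = julia_image(*key).astype(np.int16)
    stego = cover.ravel().copy()
    stego[:len(bits)] += np.asarray(bits, dtype=np.int16)
    return stego.reshape(cover.shape)

def extract(stego, key, n):
    """Regenerate the clean fractal from the key; the difference is the data."""
    cover = julia_image(*key).astype(np.int16)
    return [int(v) for v in (stego - cover).ravel()[:n]]
```

    Without the key, an attacker cannot regenerate the clean cover, so the embedded deltas are indistinguishable from rendering detail.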

  11. Pyramidal fractal dimension for high resolution images.

    PubMed

    Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2016-07-01

    Fractal analysis (FA) should be able to yield reliable and fast results for high-resolution digital images to be applicable in fields that require immediate outcomes. Triggered by an efficient implementation of FA for binary images, we present three new approaches for fractal dimension (D) estimation of images that utilize image pyramids, namely, the pyramid triangular prism, the pyramid gradient, and the pyramid differences method (PTPM, PGM, PDM). We evaluated the performance of the three new and five standard techniques when applied to images with sizes up to 8192 × 8192 pixels. By using artificial fractal images created by three different generator models as ground truth, we determined the scale ranges with minimum deviations between estimation and theory. All pyramidal methods (PM) resulted in reasonable D values for images of all generator models. Especially for images with sizes ≥1024 × 1024 pixels, the PMs are superior to the investigated standard approaches in terms of accuracy and computation time. A measure of the ability to differentiate images with different intrinsic D values showed not only that the PMs are well suited for all investigated image sizes, and preferable to standard methods especially for larger images, but also that the results of standard D estimation techniques are strongly influenced by the image size. The fastest results were obtained with the PDM and PGM, followed by the PTPM. The best-performing standard methods for absolute D values were orders of magnitude slower than the PMs. In conclusion, the new PMs yield high-quality results in short computation times and are therefore eligible methods for fast FA of high-resolution images. PMID:27475069
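
    The pyramid idea can be illustrated with a simplified estimator in the same spirit, which is not the authors' exact PTPM/PGM/PDM: average-downsample the image, track how the mean absolute neighbour difference grows with scale, and convert the log-log slope (a Hurst exponent H) into D = 3 − H for a gray-level surface.

```python
import numpy as np

def pyramid_fd(img, levels=4):
    """Sketch of a pyramid-differences-style estimator. At each pyramid level
    (scale 2**k) measure the mean absolute difference between neighbouring
    pixels, then fit log(diff) vs log(scale); the slope is a Hurst exponent H
    and D = 3 - H for a grayscale surface. Image sides must be divisible by
    2**levels."""
    img = img.astype(float)
    scales, diffs = [], []
    for k in range(levels):
        dx = np.abs(np.diff(img, axis=1)).mean()
        dy = np.abs(np.diff(img, axis=0)).mean()
        scales.append(2.0 ** k)
        diffs.append((dx + dy) / 2)
        # 2x2 average-downsample to the next pyramid level
        img = 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                      img[0::2, 1::2] + img[1::2, 1::2])
    H = np.polyfit(np.log(scales), np.log(diffs), 1)[0]
    return 3.0 - H
```

    On a smooth ramp the differences double with every level (H = 1), giving the topological surface dimension D = 2; rougher textures yield D between 2 and 3.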

  14. Fractality of pulsatile flow in speckle images

    NASA Astrophysics Data System (ADS)

    Nemati, M.; Kenjeres, S.; Urbach, H. P.; Bhattacharya, N.

    2016-05-01

    The scattering of coherent light from a system with underlying flow can be used to yield essential information about the dynamics of the process. In the case of pulsatile flow, there is a rapid change in the properties of the speckle images. This can be studied using the standard laser speckle contrast and also the fractality of the images. In this paper, we report the results of experiments performed to study pulsatile flow with speckle images under different experimental configurations, to verify the robustness of the techniques for applications. In order to study flow under various levels of complexity, the measurements were done for three in vitro phantoms and two in vivo situations. The pumping mechanisms varied, ranging from mechanical pumps to the human heart in the in vivo case. The speckle images were analyzed using the techniques of fractal dimension and speckle contrast analysis, and the results of these techniques for the various experimental scenarios were compared. The fractal dimension is a more sensitive measure for capturing the complexity of the signal, though it was observed to be extremely sensitive to the properties of the scattering medium as well, and, in comparison to speckle contrast, it cannot recover the signal for thicker diffusers.
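
    Speckle contrast analysis itself is simple to state: K = σ/μ computed over local sliding windows, with flow blurring the speckle during the exposure and driving K down. A minimal sketch (the 7×7 window size is an arbitrary choice):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(img, win=7):
    """Local speckle contrast K = sigma/mu over win x win sliding windows.
    Static, fully developed speckle gives K near 1; flow blurs the speckle
    and lowers K toward 0."""
    w = sliding_window_view(img.astype(float), (win, win))
    mu = w.mean(axis=(-1, -2))
    sd = w.std(axis=(-1, -2))
    return np.where(mu > 0, sd / np.maximum(mu, 1e-12), 0.0)
```

    Intensity of fully developed speckle is approximately exponentially distributed, whose coefficient of variation is 1, which is why K ≈ 1 signals a static scatterer.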

  15. Fractal-based image processing for mine detection

    NASA Astrophysics Data System (ADS)

    Nelson, Susan R.; Tuovila, Susan M.

    1995-06-01

    A fractal-based analysis algorithm has been developed to perform the task of automated recognition of minelike targets in side scan sonar images. Because naturally occurring surfaces, such as the sea bottom, are characterized by irregular textures, they are well suited to modeling as fractal surfaces. Manmade structures, including mines, are composed of Euclidean shapes, which makes fractal-based analysis highly appropriate for discrimination of mines from a natural background. To that end, a set of fractal features, including fractal dimension, was developed to classify image areas as minelike targets, nonmine areas, or clutter. Four different methods of fractal dimension calculation were compared, and the Weierstrass function was used to study the effect of various signal processing procedures on the fractal qualities of an image. The difference in fractal dimension between different images depends not only on the physical features extant in the images but also on the underlying statistical characteristics of the processing procedures applied to the images and on the underlying mathematical assumptions of the fractal dimension calculation methods. For the image set studied, fractal-based analysis achieved a classification rate similar to human operators, and was very successful in identifying areas of clutter. The analysis technique presented here is applicable to any type of signal that may be configured as an image, making this technique suitable for multisensor systems.
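
    The Weierstrass function mentioned above is a standard synthetic test signal for fractal analysis: continuous everywhere, nowhere differentiable for suitable parameters, and with a graph of known box dimension D = 2 + log a / log b (for 0 < a < 1, ab > 1), so estimators can be checked against ground truth. A short sketch:

```python
import numpy as np

def weierstrass(x, a=0.5, b=3.0, terms=30):
    """Partial sum of the Weierstrass function W(x) = sum_n a^n cos(b^n pi x).
    For 0 < a < 1 and ab > 1 the graph of the full series has box-counting
    dimension D = 2 + log(a)/log(b); with a=0.5, b=3 that is about 1.37."""
    n = np.arange(terms)
    return np.sum(a ** n[:, None] * np.cos(b ** n[:, None] * np.pi * x), axis=0)
```

    At x = 0 every cosine term is 1, so the partial sum approaches the geometric limit 1/(1 − a) = 2, a convenient sanity check.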

  16. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless hybrid wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) at a higher compression ratio than typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results for several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm. PMID:26890900
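
    The residual step is what makes the scheme lossless: the lossy wavelet-fractal reconstruction is subtracted from the original, and the (typically small, low-entropy) residual is Huffman-coded exactly. A toy illustration, with a short 1-D signal standing in for the image:

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code (symbol -> bitstring) for the symbols in data."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate: a single symbol
        return {next(iter(freq)): "0"}
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    nxt = len(heap)                          # unique tie-breaker counter
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, nxt, merged))
        nxt += 1
    return heap[0][2]

def encode(data, code):
    return "".join(code[s] for s in data)

def decode(bits, code):
    """Greedy prefix decoding; valid because Huffman codes are prefix-free."""
    rev, out, cur = {b: s for s, b in code.items()}, [], ""
    for bit in bits:
        cur += bit
        if cur in rev:
            out.append(rev[cur])
            cur = ""
    return out
```

    Adding the decoded residual back to the lossy reconstruction restores the original exactly, i.e. infinite PSNR.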

  18. Estimating fractal dimension of medical images

    NASA Astrophysics Data System (ADS)

    Penn, Alan I.; Loew, Murray H.

    1996-04-01

    Box counting (BC) is widely used to estimate the fractal dimension (fd) of medical images on the basis of a finite set of pixel data. The fd is then used as a feature to discriminate between healthy and unhealthy conditions. We show that BC is ineffective when used on small data sets and give examples of published studies in which researchers have obtained contradictory and flawed results by using BC to estimate the fd of data-limited medical images. We present a new method for estimating fd of data-limited medical images. In the new method, fractal interpolation functions (FIFs) are used to generate self-affine models of the underlying image; each model, upon discretization, approximates the original data points. The fd of each FIF is analytically evaluated. The mean of the fds of the FIFs is the estimate of the fd of the original data. The standard deviation of the fds of the FIFs is a confidence measure of the estimate. The goodness-of-fit of the discretized models to the original data is a measure of self-affinity of the original data. In a test case, the new method generated a stable estimate of fd of a rib edge in a standard chest x-ray; box counting failed to generate a meaningful estimate of the same image.
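
    A convenient property of affine FIFs, which the method exploits, is that their fractal dimension is available analytically: for equally spaced interpolation points on N intervals with vertical scaling factors s_i, the box dimension satisfies D = 1 + log(Σ|s_i|)/log N whenever Σ|s_i| > 1 (and the interpolation data are not collinear), otherwise D = 1. A one-function sketch:

```python
import math

def fif_box_dimension(scalings, n_intervals=None):
    """Analytic box-counting dimension of an affine fractal interpolation
    function with equally spaced interpolation points: for N intervals with
    vertical scaling factors s_i, D = 1 + log(sum |s_i|) / log(N) when
    sum |s_i| > 1 (non-collinear data); otherwise D = 1."""
    n = n_intervals or len(scalings)
    total = sum(abs(s) for s in scalings)
    return 1.0 + math.log(total) / math.log(n) if total > 1 else 1.0
```

    This closed form is what makes the FIF-based fd estimate exact per model, with the spread across models serving as the confidence measure.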

  19. Fractal image compression: A resolution independent representation for imagery

    NASA Technical Reports Server (NTRS)

    Sloan, Alan D.

    1993-01-01

    A deterministic fractal is an image which has low information content and no inherent scale. Because of their low information content, deterministic fractals can be described with small data sets. They can be displayed at high resolution since they are not bound by an inherent scale. A remarkable consequence follows: fractal images can be encoded at very high compression ratios. A fern, for example, can be encoded in less than 50 bytes and yet be displayed at increasing resolutions with additional levels of detail appearing. The Fractal Transform was discovered in 1988 by Michael F. Barnsley. It is the basis for a new image compression scheme which was initially developed by myself and Michael Barnsley at Iterated Systems. The Fractal Transform effectively solves the problem of finding a fractal which approximates a digital 'real world' image.
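
    The fern referred to is Barnsley's fern: four affine maps (28 numbers) reproduce the whole image via the chaos game, which is why the code fits in well under 50 bytes. A sketch using the standard published coefficients:

```python
import random

# The four affine maps of Barnsley's fern: 4 x 7 numbers encode the image.
MAPS = [  # (a, b, c, d, e, f, probability)
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),   # stem
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),   # successively smaller leaflets
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),   # largest left-hand leaflet
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),   # largest right-hand leaflet
]

def fern_points(n, seed=0):
    """Chaos game: repeatedly apply a randomly chosen map (weighted by its
    probability); the visited points fill in the fern attractor."""
    rng = random.Random(seed)
    x = y = 0.0
    pts = []
    for _ in range(n):
        a, b, c, d, e, f, _p = rng.choices(MAPS, weights=[m[6] for m in MAPS])[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts
```

    Plotting the points at any zoom level reveals further self-similar detail, which is the resolution independence the abstract describes.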

  20. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    1999-01-01

    Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.

  2. Fractal analysis: fractal dimension and lacunarity from MR images for differentiating the grades of glioma.

    PubMed

    Smitha, K A; Gupta, A K; Jayasree, R S

    2015-09-01

    Gliomas, heterogeneous tumors originating from glial cells, generally exhibit varied grades and are difficult to differentiate using conventional MR imaging techniques. Although this differentiation is crucial for prognosis and treatment, even advanced MR imaging techniques often fail to provide sufficient discriminative power to separate malignant from benign tumors. A powerful image processing technique applied to these imaging modalities is expected to provide better differentiation. The present study focuses on fractal analysis of fluid-attenuated inversion recovery MR images for the differentiation of glioma, using the two most important parameters of fractal analysis: fractal dimension and lacunarity. While fractal dimension assesses the complexity of a fractal object, lacunarity indicates the empty space and the degree of inhomogeneity in it. The box-counting method, with the preprocessing steps of binarization, dilation, and outlining, was used to obtain the fractal dimension and lacunarity in glioma. Statistical analyses, namely one-way analysis of variance and receiver operating characteristic (ROC) curve analysis, were used to compare the means and to find the discriminative sensitivity of the results. It was found that the lacunarity of low- and high-grade gliomas varies significantly. ROC curve analysis between low- and high-grade glioma yielded 70.3% sensitivity and 66.7% specificity for fractal dimension, and 70.3% sensitivity and 88.9% specificity for lacunarity. The study observes that fractal dimension and lacunarity increase with the grade of glioma, and that lacunarity is helpful in identifying the most malignant grades.
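
    The box-counting procedure at the heart of such analyses can be validated on a shape whose dimension is known exactly, e.g. the Sierpinski carpet with D = log 8 / log 3 ≈ 1.893. A minimal sketch for binary images, with box sizes chosen to divide the image side:

```python
import numpy as np

def sierpinski_carpet(level):
    """Binary Sierpinski carpet of side 3**level (exact D = log 8 / log 3)."""
    img = np.ones((1, 1), dtype=bool)
    for _ in range(level):
        img = np.block([[img, img, img],
                        [img, np.zeros_like(img), img],
                        [img, img, img]])
    return img

def box_count_dimension(img, sizes):
    """Classic box counting: count occupied s-by-s boxes for each box size s,
    then fit the slope of log N(s) versus log (1/s)."""
    counts = []
    for s in sizes:
        view = img.reshape(img.shape[0] // s, s, img.shape[1] // s, s)
        counts.append(view.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, float)),
                          np.log(counts), 1)
    return slope
```

    Because the carpet is exactly self-similar at powers of 3, the log-log points are perfectly collinear and the fit recovers the theoretical dimension.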

  3. An image retrieval system based on fractal dimension.

    PubMed

    Yao, Min; Yi, Wen-Sheng; Shen, Bin; Dai, Hong-Hua

    2003-01-01

    This paper presents a new kind of image retrieval system that obtains the feature vectors of images by estimating their fractal dimension, and at the same time establishes a tree-structured image database. After preprocessing and feature extraction, a given image is matched with the standard images in the database using a hierarchical method of image indexing.

  4. Fast fractal image compression with triangulation wavelets

    NASA Astrophysics Data System (ADS)

    Hebert, D. J.; Soundararajan, Ezekiel

    1998-10-01

    We address the problem of improving the performance of wavelet-based fractal image compression by applying efficient triangulation methods. We construct iterated function systems (IFS) in the tradition of Barnsley and Jacquin, using non-uniform triangular range and domain blocks instead of uniform rectangular ones. We search for matching domain blocks in the manner of Zhang and Chen, performing a fast wavelet transform on the blocks and eliminating low-resolution mismatches to gain speed. We obtain further improvements from the efficiencies of binary triangulations (including the elimination of affine and symmetry calculations and reduced parameter storage), and by pruning the binary tree before construction of the IFS. Our wavelets are triangular Haar wavelets and 'second generation' interpolation wavelets as suggested by Sweldens' recent work.

  5. Imaging through diffusive layers using speckle pattern fractal analysis and application to embedded object detection in tissues

    NASA Astrophysics Data System (ADS)

    Tremberger, George, Jr.; Flamholz, A.; Cheung, E.; Sullivan, R.; Subramaniam, R.; Schneider, P.; Brathwaite, G.; Boteju, J.; Marchese, P.; Lieberman, D.; Cheung, T.; Holden, Todd

    2007-09-01

    The absorption effect of the back surface boundary of a diffuse layer was studied via laser-generated reflection speckle patterns. The spatial speckle intensity provided by a laser beam was measured. The speckle data were analyzed in terms of fractal dimension (computed by NIH ImageJ software via the box-counting fractal method) and weak localization theory based on Mie scattering. Bar code imaging was modeled as binary absorption contrast, and scanning resolution in the millimeter range was achieved for diffusive layers up to thirty transport mean free paths thick. Samples included alumina, porous glass and chicken tissue. Computer simulation was used to study the effect of speckle spatial distribution, and observed fractal dimension differences were ascribed to variance-controlled speckle sizes. Fractal dimension suppressions were observed in samples with thicknesses around ten transport mean free paths. Computer simulation suggested a maximum fractal dimension of about 2 and that subtracting information could lower the fractal dimension. The fractal dimension was shown to be sensitive to sample thickness up to about fifteen transport mean free paths, and embedded objects that modified 20% or more of the effective thickness were shown to be detectable. The box-counting fractal method was supplemented with the Higuchi data-series fractal method, and an application to architectural distortion mammograms was demonstrated. The use of fractals in diffusive analysis would provide a simple language for a dialog between optics experts and mammography radiologists, facilitating the applications of laser diagnostics in tissues.
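
    The Higuchi method mentioned above estimates the fractal dimension of a 1-D data series from how the normalized curve length L(k) shrinks with the stride k (L(k) ∝ k^(−D)). A compact sketch of the standard algorithm:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D series. For each stride k and
    offset m, sum |x[m+ik] - x[m+(i-1)k]|, normalise to the full record
    length, and average over m; D is the slope of log L(k) vs log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)   # record-length correction
            lk.append(dist * norm / k)
        lengths.append(np.mean(lk))
    ks = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope
```

    A straight line yields D = 1 exactly, while an uncorrelated noise series approaches the upper bound D = 2, which brackets the values reported for physiological series.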

  6. Fractal dimension of cerebral surfaces using magnetic resonance images

    SciTech Connect

    Majumdar, S.; Prasad, R.R.

    1988-11-01

    The calculation of the fractal dimension of the surface bounded by the grey matter in the normal human brain using axial, sagittal, and coronal cross-sectional magnetic resonance (MR) images is presented. The fractal dimension in this case is a measure of the convolutedness of this cerebral surface. It is proposed that the fractal dimension, a feature that may be extracted from MR images, may potentially be used for image analysis, quantitative tissue characterization, and as a feature to monitor and identify cerebral abnormalities and developmental changes.

  7. Blind Detection of Region Duplication Forgery Using Fractal Coding and Feature Matching.

    PubMed

    Jenadeleh, Mohsen; Ebrahimi Moghaddam, Mohsen

    2016-05-01

    Digital image forgery detection is important because of the wide use of digital images in applications such as medical diagnosis, legal investigations, and entertainment. Copy-move forgery, in which a region of an image is duplicated elsewhere in the same image, is one of the best-known techniques. Many existing copy-move detection algorithms cannot effectively blindly detect duplicated regions produced with powerful image manipulation software such as Photoshop. In this study, a new method is proposed for blind detection of manipulations in digital images, based on modified fractal coding and feature vector matching. The proposed method not only detects typical copy-move forgery, but also finds multiple copied regions in images subjected to rotation, scaling, reflection, and mixtures of these postprocessing operations. The method is robust against tampered images undergoing attacks such as Gaussian blurring, contrast scaling, and brightness adjustment. The experimental results demonstrated the validity and efficiency of the method. PMID:27122398
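
    The block-matching idea underlying copy-move detection can be sketched in a few lines: describe every overlapping block by a feature and report blocks that occur at more than one position. Here the feature is naively the exact pixel values, which would not survive the rotation, scaling, or blurring the paper handles; robust features such as the fractal codes it proposes are what make real detectors practical.

```python
import numpy as np
from collections import defaultdict

def find_duplicated_blocks(img, block=8):
    """Naive blind copy-move check on a grayscale image: hash every
    overlapping block-by-block patch and return groups of positions whose
    pixel content is identical."""
    seen = defaultdict(list)
    h, w = img.shape
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = img[y:y+block, x:x+block].tobytes()
            seen[key].append((y, x))
    return [locs for locs in seen.values() if len(locs) > 1]
```

    A real detector would additionally require the matched pairs to share a consistent spatial offset, suppressing accidental matches in flat regions.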

  9. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.

    1998-01-01

    Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and

  10. Evaluation of Two Fractal Methods for Magnetogram Image Analysis

    NASA Technical Reports Server (NTRS)

    Stark, B.; Adams, M.; Hathaway, D. H.; Hagyard, M. J.

    1997-01-01

    Fractal and multifractal techniques have been applied to various types of solar data to study the fractal properties of sunspots as well as the distribution of photospheric magnetic fields and the role of random motions on the solar surface in this distribution. Other research includes the investigation of changes in the fractal dimension as an indicator for solar flares. Here we evaluate the efficacy of two methods for determining the fractal dimension of an image data set: the Differential Box Counting scheme and a new method, the Jaenisch scheme. To determine the sensitivity of the techniques to changes in image complexity, various types of constructed images are analyzed. In addition, we apply these methods to solar magnetogram data from Marshall Space Flight Center's vector magnetograph.
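
The Differential Box Counting (DBC) scheme evaluated above can be sketched compactly. This is a simplified rendering of the Sarkar-Chaudhuri formulation (square images, one common choice of box-height normalization), not the code used in the paper:

```python
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Differential Box Counting estimate of the fractal dimension of a
    square grayscale image with values in 0..255. For each grid size s,
    each s-by-s block contributes the number of gray-level boxes of
    height h spanned between its min and max; D is the slope of
    log N(s) versus log(1/s)."""
    M = min(img.shape)
    G = 256
    counts = []
    for s in sizes:
        h = max(1.0, s * G / M)          # box height in gray levels
        n = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = img[i:i+s, j:j+s]
                n += int(block.max() // h) - int(block.min() // h) + 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A flat image comes out with D = 2, while noisy texture pushes the estimate toward 3.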

  11. Pyramid image codes

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1990-01-01

    All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.

  12. Wavelet and fractal analysis of ground-vehicle images

    NASA Astrophysics Data System (ADS)

    Gorsich, David J.; Tolle, Charles R.; Karlsen, Robert E.; Gerhart, Grant R.

    1996-10-01

    A large number of terrain images were taken at Aberdeen Proving Ground, some containing ground vehicles. Is it possible to screen the images for possible targets in a short amount of time using the fractal dimension to detect texture variations? The fractal dimension is determined using the wavelet transform for these visual images. The vehicles are positioned within the grass and in different locations. Since it has been established that natural terrain exhibits a statistical 1/f self-similarity property and the psychophysical perception of roughness can be quantified by the same self-similarity, fractal dimension estimates should vary only at texture boundaries and breaks in the tree and grass patterns. Breaks in the patterns are found using contour plots of the dimension estimates and are considered as perceptual texture variations. Variation in the dimension estimate is considered more important than the accuracy of the actual dimension value. Accurate variation estimates are found even with low-resolution images.

  13. Person identification using fractal analysis of retina images

    NASA Astrophysics Data System (ADS)

    Ungureanu, Constantin; Corniencu, Felicia

    2004-10-01

    Biometrics is the automated recognition of a person based on physiological or behavioral characteristics. Among the features measured are retina scans, voice, and fingerprints. A retina-based biometric involves the analysis of the blood vessels situated at the back of the eye. In this paper we present a method which uses fractal analysis to characterize retina images. The fractal dimension (FD) of the retina vessels was measured for a set of 20 images, and a different value of FD was obtained for each image. This algorithm provides good accuracy, is cheap, and is easy to implement.
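
For a segmented vessel pattern, the fractal dimension can be estimated with the classic box-counting method. A minimal sketch, assuming the retina vessels have already been binarized into a boolean mask:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Box-counting fractal dimension of a binary image
    (True marks a vessel pixel): N(s) ~ s**(-D)."""
    counts = []
    for s in sizes:
        # crop to a multiple of s, then mark boxes containing any pixel
        m = mask[:mask.shape[0] // s * s, :mask.shape[1] // s * s]
        m = m.reshape(m.shape[0] // s, s, m.shape[1] // s, s).any(axis=(1, 3))
        counts.append(m.sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

A filled region gives D = 2 and a straight line gives D = 1; real vessel trees fall in between.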

  14. Liver ultrasound image classification by using fractal dimension of edge

    NASA Astrophysics Data System (ADS)

    Moldovanu, Simona; Bibicu, Dorin; Moraru, Luminita

    2012-08-01

    Medical ultrasound image edge detection is an important component of many segmentation applications, and hence it has been the subject of many studies in the literature. In this study, we have classified liver ultrasound (US) images by combining the Canny and Sobel edge detectors with fractal analysis in order to provide an indicator of the roughness of the US images. We intend to provide a rule for classifying focal liver lesions as cirrhotic liver, liver hemangioma, or healthy liver. For edge detection, the Canny and Sobel operators were used. Fractal analysis was applied for texture analysis and classification of focal liver lesions according to the fractal dimension (FD), determined using the box counting method. To assess the performance and accuracy rate of the proposed method, the contrast-to-noise ratio (CNR) is analyzed.

  15. Trabecular architecture analysis in femur radiographic images using fractals.

    PubMed

    Udhayakumar, G; Sujatha, C M; Ramakrishnan, S

    2013-04-01

    Trabecular bone is a highly complex anisotropic material that exhibits varying magnitudes of strength in compression and tension. Analysis of the trabecular architectural alterations that manifest as loss of trabecular plates and connections has been shown to yield better estimation of bone strength. In this work, an attempt has been made toward the development of an automated system for investigation of trabecular femur bone architecture using fractal analysis. Conventional radiographic femur bone images recorded using standard protocols are used in this study. The compressive and tensile regions in the images are delineated using preprocessing procedures. The delineated images are analyzed using Higuchi's fractal method to quantify pattern heterogeneity and anisotropy of trabecular bone structure. The results show that the extracted fractal features are distinct for compressive and tensile regions of normal and abnormal human femur bone. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.
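
Higuchi's method, used above on the delineated regions, estimates the fractal dimension of a one-dimensional profile (for instance, a line of pixel intensities taken across the trabecular pattern). A compact sketch:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D profile: average curve length
    L(k) at lag k scales as (1/k)**D."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ks = range(1, kmax + 1)
    L = []
    for k in ks:
        Lk = []
        for m in range(k):
            n = (N - 1 - m) // k          # number of steps at this offset
            if n < 1:
                continue
            d = np.abs(np.diff(x[m::k])).sum()
            Lk.append(d * (N - 1) / (n * k * k))   # Higuchi normalization
        L.append(np.mean(Lk))
    slope, _ = np.polyfit(np.log(1.0 / np.array(list(ks))), np.log(L), 1)
    return slope
```

A straight-line profile yields D = 1; rougher profiles approach D = 2.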

  16. Fractal image perception provides novel insights into hierarchical cognition.

    PubMed

    Martins, M J; Fischmeister, F P; Puig-Waldmüller, E; Oh, J; Geissler, A; Robinson, S; Fitch, W T; Beisteiner, R

    2014-08-01

    Hierarchical structures play a central role in many aspects of human cognition, prominently including both language and music. In this study we addressed hierarchy in the visual domain, using a novel paradigm based on fractal images. Fractals are self-similar patterns generated by repeating the same simple rule at multiple hierarchical levels. Our hypothesis was that the brain uses different resources for processing hierarchies depending on whether it applies a "fractal" or a "non-fractal" cognitive strategy. We analyzed the neural circuits activated by these complex hierarchical patterns in an event-related fMRI study of 40 healthy subjects. Brain activation was compared across three different tasks: a similarity task, and two hierarchical tasks in which subjects were asked to recognize the repetition of a rule operating transformations either within an existing hierarchical level, or generating new hierarchical levels. Similar hierarchical images were generated by both rules and target images were identical. We found that when processing visual hierarchies, engagement in both hierarchical tasks activated the visual dorsal stream (occipito-parietal cortex, intraparietal sulcus and dorsolateral prefrontal cortex). In addition, the level-generating task specifically activated circuits related to the integration of spatial and categorical information, and with the integration of items in contexts (posterior cingulate cortex, retrosplenial cortex, and medial, ventral and anterior regions of temporal cortex). These findings provide interesting new clues about the cognitive mechanisms involved in the generation of new hierarchical levels as required for fractals.

  17. Fractal image analysis - Application to the topography of Oregon and synthetic images.

    NASA Technical Reports Server (NTRS)

    Huang, Jie; Turcotte, Donald L.

    1990-01-01

    Digitized topography for the state of Oregon has been used to obtain maps of fractal dimension and roughness amplitude. The roughness amplitude correlates well with variations in relief and is a promising parameter for the quantitative classification of landforms. The spatial variations in fractal dimension are low and show no clear correlation with different tectonic settings. For Oregon the mean fractal dimension from a two-dimensional spectral analysis is D = 2.586, and for a one-dimensional spectral analysis the mean fractal dimension is D = 1.487, which is close to the Brown noise value D = 1.5. Synthetic two-dimensional images have also been generated for a range of D values. For D = 2.6, the synthetic image has a mean one-dimensional spectral fractal dimension D = 1.58, which is consistent with the results for Oregon. This approach can be easily applied to any digitized image that obeys fractal statistics.
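
The spectral estimates quoted above come from fitting the power spectrum to a power law P(f) ~ f^(-beta) and converting the exponent, with D = (5 - beta)/2 for a one-dimensional profile. A minimal sketch for a single topographic profile:

```python
import numpy as np

def spectral_fractal_dimension(profile):
    """One-dimensional spectral estimate: fit log power against
    log frequency; for P(f) ~ f**(-beta), D = (5 - beta) / 2."""
    x = np.asarray(profile, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freq = np.fft.rfftfreq(len(x))
    # exclude the zero frequency before taking logs
    slope, _ = np.polyfit(np.log(freq[1:]), np.log(power[1:]), 1)
    return (5 + slope) / 2          # the fitted slope equals -beta
```

For a Brown-noise profile (beta = 2) the estimate comes out near the D = 1.5 value cited for Oregon.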

  18. Confocal coded aperture imaging

    DOEpatents

    Tobin, Jr., Kenneth William; Thomas, Jr., Clarence E.

    2001-01-01

    A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and, reconstructing the shadow image into a 3-dimensional image of the every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.

  19. Coded source neutron imaging

    SciTech Connect

    Bingham, Philip R; Santos-Villalobos, Hector J

    2011-01-01

    Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.

  20. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Quattrochi, Dale A.; Luvall, Jeffrey C.

    1997-01-01

    Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely scale-independent. Self-similarity is a property of curves or surfaces where each part is indistinguishable from the whole. The fractal dimension D of remote sensing data yields quantitative insight on the spatial complexity and information content contained within these data. Analyses of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). The forested scene behaves as one would expect: larger pixel sizes decrease the complexity of the image as individual clumps of trees are averaged into larger blocks. The increased complexity of the agricultural image with increasing pixel size results from the loss of homogeneous groups of pixels in the large fields to mixed pixels composed of varying combinations of NDVI values that correspond to roads and vegetation. The same process occurs in the urban image to some extent, but the lack of large, homogeneous areas in the high resolution NDVI image means the initially high D value is maintained as pixel size increases. The slope of the fractal dimension-resolution relationship provides indications of how image classification or feature identification will be affected by changes in sensor resolution.

  1. Using fractal compression scheme to embed a digital signature into an image

    NASA Astrophysics Data System (ADS)

    Puate, Joan; Jordan, Frederic D.

    1997-01-01

    With the increase in the number of digital networks and recording devices, digital images, especially still images, appear to be material whose ownership is widely threatened due to the availability of simple, rapid and perfect means of duplication and distribution. It is in this context that several European projects are devoted to finding a technical solution which, as it applies to still images, introduces a code or watermark into the image data itself. This watermark should not only allow one to determine the owner of the image, but also respect the image's quality and be difficult to remove. An additional requirement is that the code should be retrievable by means of the protected information alone. In this paper, we propose a new scheme based on fractal coding and decoding. In general terms, a fractal coder exploits the spatial redundancy within the image by establishing a relationship between its different parts. We describe a way to use this relationship as a means of embedding a watermark. Tests have been performed in order to measure the robustness of the technique against JPEG conversion and low-pass filtering. In both cases, very promising results have been obtained.

  2. FIRE: fractal indexing with robust extensions for image databases.

    PubMed

    Distasi, Riccardo; Nappi, Michele; Tucci, Maurizio

    2003-01-01

    As already documented in the literature, fractal image encoding is a family of techniques that achieves a good compromise between compression and perceived quality by exploiting the self-similarities present in an image. Furthermore, because of its compactness and stability, the fractal approach can be used to produce a unique signature, thus obtaining a practical image indexing system. Since fractal-based indexing systems are able to deal with the images in compressed form, they are suitable for use with large databases. We propose a system called FIRE, which is proven to be invariant under three classes of pixel intensity transformations and under geometrical isometries such as rotations by multiples of π/2 and reflections. This property makes the system robust with respect to a large class of image transformations that can happen in practical applications: the images can be retrieved even in the presence of illumination and/or color alterations. Additionally, the experimental results show the effectiveness of FIRE in terms of both compression and retrieval accuracy.
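
The geometrical isometries FIRE is invariant under (rotations by multiples of π/2 and reflections) are the eight symmetries of the square, the same set a fractal coder searches when matching blocks. They can be enumerated directly:

```python
import numpy as np

def block_isometries(block):
    """Return the eight square isometries of a block: the four
    rotations by multiples of 90 degrees and their reflections."""
    out = []
    b = np.asarray(block)
    for k in range(4):
        r = np.rot90(b, k)
        out.extend([r, np.fliplr(r)])
    return out
```

An indexing signature invariant under this set can be obtained, for example, by taking the minimum or maximum of a feature over all eight transforms.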

  3. Fractal descriptors for discrimination of microscopy images of plant leaves

    NASA Astrophysics Data System (ADS)

    Silva, N. R.; Florindo, J. B.; Gómez, M. C.; Kolb, R. M.; Bruno, O. M.

    2014-03-01

    This study proposes the application of the fractal descriptors method to the discrimination of microscopy images of plant leaves. Fractal descriptors have been shown to be a powerful discriminative method in image analysis, mainly for the discrimination of natural objects. In fact, these descriptors express the spatial arrangement of pixels inside the texture under different scales, and such arrangements are directly related to physical properties inherent to the material depicted in the image. Here, we employ the Bouligand-Minkowski descriptors, which are obtained by the dilation of a surface mapping the gray-level texture. The classification of the microscopy images is performed by the well-known Support Vector Machine (SVM) method, and we compare the success rate with other texture analysis methods from the literature. The proposed method achieved a correctness rate of 89%, while the second best solution, the co-occurrence descriptors, yielded only 78%. This clear advantage of fractal descriptors demonstrates the potential of this approach in the analysis of plant microscopy images.

  4. Segmentation of magnetic resonance image using fractal dimension

    NASA Astrophysics Data System (ADS)

    Yau, Joseph K. K.; Wong, Sau-hoi; Chan, Kwok-Leung

    1997-04-01

    In recent years, much research has been conducted in the three-dimensional visualization of medical images. This requires a good segmentation technique. Many early works use first-order and second-order statistics. First-order statistical parameters can be calculated quickly, but their effectiveness is influenced by many factors such as illumination, contrast and random noise of the image. Second-order statistical parameters, such as spatial gray level co-occurrence matrix statistics, take longer to compute but can extract the textural information. In this investigation, two different parameters, namely the entropy and the fractal dimension, are employed to perform segmentation of magnetic resonance images of the head of a male cadaver. The entropy is calculated from the spatial gray level co-occurrence matrices. The fractal dimension is calculated by the reticular cell counting method. Several regions of the human head are chosen for analysis. They are the bone, gyrus and lobe. Results show that the parameters are able to segment different types of tissue. The entropy gives very good results but it requires very long computation time and a large amount of memory. The performance of the fractal dimension is comparable with that of the entropy. It is simple to estimate and requires less memory.

  5. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  6. A simple method for estimating the fractal dimension from digital images: The compression dimension

    NASA Astrophysics Data System (ADS)

    Chamorro-Posada, Pedro

    2016-10-01

    The fractal structure of real world objects is often analyzed using digital images. In this context, the compression fractal dimension is put forward. It provides a simple method for the direct estimation of the dimension of fractals stored as digital image files. The computational scheme can be implemented using readily available free software. Its simplicity also makes it very interesting for introductory elaborations of basic concepts of fractal geometry, complexity, and information theory. A test of the computational scheme using limited-quality images of well-defined fractal sets obtained from the Internet and free software has been performed. Also, a systematic evaluation of the proposed method using computer generated images of the Weierstrass cosine function shows an accuracy comparable to those of the methods most commonly used to estimate the dimension of fractal data sequences applied to the same test problem.
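
One way to realize the idea with readily available free software is to coarse-grain the image, compress each version with a general-purpose compressor, and fit log(compressed size) against log(1/scale). The sketch below is a rough illustration of that workflow, not the paper's exact scheme; the numeric estimate depends on the compressor and on how the bits are packed:

```python
import numpy as np
import zlib

def compression_dimension(mask, factors=(1, 3, 9, 27)):
    """Toy compression-dimension estimate for a binary fractal image:
    coarse-grain by each factor, zlib-compress the bit-packed result,
    and fit the slope of log(compressed size) vs. log(1/scale)."""
    sizes = []
    for f in factors:
        m = mask[:mask.shape[0] // f * f, :mask.shape[1] // f * f]
        m = m.reshape(m.shape[0] // f, f, m.shape[1] // f, f).any(axis=(1, 3))
        sizes.append(len(zlib.compress(np.packbits(m).tobytes())))
    slope, _ = np.polyfit(np.log(1.0 / np.array(factors)), np.log(sizes), 1)
    return slope
```

On a level-5 Sierpinski carpet this yields a dimension-like exponent; accuracy is limited because the compressor also exploits the regularity of the pattern.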

  7. Aliasing removing of hyperspectral image based on fractal structure matching

    NASA Astrophysics Data System (ADS)

    Wei, Ran; Zhang, Ye; Zhang, Junping

    2015-05-01

    Due to its richness in high frequency components, a hyperspectral image (HSI) is more sensitive to distortions such as aliasing. Many methods aiming at removing such distortion have been proposed. However, few of them are suitable for HSI, due to its characteristically low spatial resolution. Fortunately, HSI contains plentiful spectral information, which can be exploited to overcome such difficulties. Motivated by this, we propose an aliasing removing method for HSI. The major difference between the proposed and current methods is that the proposed algorithm is able to utilize fractal structure information, thus solving the dilemma originating from the low resolution of HSI. Experiments on real HSI data demonstrated subjectively and objectively that the proposed method can not only remove the annoying visual effect brought by aliasing, but also recover more high frequency components.

  8. Fractal analysis of AFM images of the surface of Bowman's membrane of the human cornea.

    PubMed

    Ţălu, Ştefan; Stach, Sebastian; Sueiras, Vivian; Ziebarth, Noël Marysa

    2015-04-01

    The objective of this study is to further investigate the ultrastructural details of the surface of Bowman's membrane of the human cornea, using atomic force microscopy (AFM) images. One representative image of Bowman's membrane of a human cornea was investigated. The three-dimensional (3-D) surface of the sample was imaged using AFM in contact mode, while the sample was completely submerged in Optisol solution. Height and deflection images were acquired at multiple scan lengths using the MFP-3D AFM system software (Asylum Research, Santa Barbara, CA), based in IGOR Pro (WaveMetrics, Lake Oswego, OR). A novel approach, based on computational algorithms for fractal analysis of surfaces applied to AFM data, was utilized to analyze the surface structure. The surfaces revealed a fractal structure at the nanometer scale. The fractal dimension, D, provided quantitative values that characterize the scale properties of the surface geometry. Detailed characterization of the surface topography was obtained using statistical parameters, in accordance with ISO 25178-2:2012. Results obtained by fractal analysis confirm the relationship between the value of the fractal dimension and the statistical surface roughness parameters. The surface structure of Bowman's membrane of the human cornea is complex. The analyzed AFM images confirm a fractal nature of the surface, which is not captured by classical statistical surface parameters. The surface fractal dimension could be useful in ophthalmology to quantify corneal architectural changes associated with different disease states, furthering our understanding of disease evolution.

  9. Fractal analysis of scatter imaging signatures to distinguish breast pathologies

    NASA Astrophysics Data System (ADS)

    Eguizabal, Alma; Laughney, Ashley M.; Krishnaswamy, Venkataramanan; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.; López-Higuera, José M.; Conde, Olga M.

    2013-02-01

    Fractal analysis combined with a label-free scattering technique is proposed for describing the pathological architecture of tumors. Clinicians and pathologists are conventionally trained to classify abnormal features such as structural irregularities or high indices of mitosis. The potential of fractal analysis lies in the fact that it is a morphometric measure of irregular structures, providing a measure of an object's complexity and self-similarity. As cancer is characterized by disorder and irregularity in tissues, this measure could be related to tumor growth. Fractal analysis has previously been applied to the understanding of the tumor vasculature network. This work addresses the feasibility of applying fractal analysis to the scattering power map (as a physical modeling) and principal components (as a statistical modeling) provided by a localized reflectance spectroscopic system. Disorder, irregularity and cell size variation in tissue samples are translated into the scattering power and principal component magnitudes, and their fractal dimension is correlated with the pathologist's assessment of the samples. The fractal dimension is computed by applying the box-counting technique. Results show that fractal analysis of ex-vivo fresh tissue samples exhibits separated ranges of fractal dimension that could help a classifier combining the fractal results with other morphological features. This contrast would help in the discrimination of tissues in the intraoperative context and may serve as a useful adjunct for surgeons.

  10. Analysis of fractal dimensions of rat bones from film and digital images

    NASA Technical Reports Server (NTRS)

    Pornprasertsuk, S.; Ludlow, J. B.; Webber, R. L.; Tyndall, D. A.; Yamauchi, M.

    2001-01-01

    OBJECTIVES: (1) To compare the effect of two different intra-oral image receptors on estimates of fractal dimension; and (2) to determine the variations in fractal dimensions between the femur, tibia and humerus of the rat and between their proximal, middle and distal regions. METHODS: The left femur, tibia and humerus from 24 4-6-month-old Sprague-Dawley rats were radiographed using intra-oral film and a charge-coupled device (CCD). Films were digitized at a pixel density comparable to the CCD using a flat-bed scanner. Square regions of interest were selected from proximal, middle, and distal regions of each bone. Fractal dimensions were estimated from the slope of regression lines fitted to plots of log power against log spatial frequency. RESULTS: The fractal dimensions estimates from digitized films were significantly greater than those produced from the CCD (P=0.0008). Estimated fractal dimensions of three types of bone were not significantly different (P=0.0544); however, the three regions of bones were significantly different (P=0.0239). The fractal dimensions estimated from radiographs of the proximal and distal regions of the bones were lower than comparable estimates obtained from the middle region. CONCLUSIONS: Different types of image receptors significantly affect estimates of fractal dimension. There was no difference in the fractal dimensions of the different bones but the three regions differed significantly.

  11. Intelligent fuzzy approach for fast fractal image compression

    NASA Astrophysics Data System (ADS)

    Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila

    2014-12-01

    Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm was proposed to reduce the MSE computation of FIC. In the first phase, ranges and domains are arranged based on an edge property. In the second, an imperialist competitive algorithm (ICA) is used according to the classified blocks. To maintain the quality of the retrieved image and accelerate the algorithm, we divided the solutions into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results thus achieved exhibit performance better than genetic algorithm (GA)-based and Full-search algorithms in terms of decreasing the number of MSE computations. The proposed algorithm reduced the number of MSE computations, running 463 times faster than the Full-search algorithm, while the retrieved image quality did not change considerably.
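
The MSE computations that dominate FIC come from the Full-search matching step: each range block is compared with every candidate domain block under a least-squares affine gray-level map. A stripped-down sketch of that single step (illustrative only; real coders also decimate the larger domain blocks and search the eight block isometries):

```python
import numpy as np

def best_domain_match(range_block, domains):
    """For one range block, return (index, MSE) of the domain block that
    best approximates it under v -> s*v + o with |s| <= 1 (contractive)."""
    r = np.asarray(range_block, dtype=float).ravel()
    best = (None, np.inf)
    for idx, d in enumerate(domains):
        v = np.asarray(d, dtype=float).ravel()
        vc, rc = v - v.mean(), r - r.mean()
        denom = vc @ vc
        s = np.clip((vc @ rc) / denom, -1.0, 1.0) if denom > 0 else 0.0
        o = r.mean() - s * v.mean()            # brightness offset
        mse = np.mean((s * v + o - r) ** 2)
        if mse < best[1]:
            best = (idx, mse)
    return best
```

With R range blocks and D domains this is O(R*D) MSE evaluations, which is exactly the cost the classification and ICA phases above try to cut.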

  12. Fractal morphology, imaging and mass spectrometry of single aerosol particles in flight (CXIDB ID 16)

    DOE Data Explorer

    Loh, N. Duane

    2012-06-20

    This deposition includes the aerosol diffraction images used for phasing, fractal morphology, and time-of-flight mass spectrometry. Files in this deposition are ordered in subdirectories that reflect the specifics.

  13. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  14. Hurst exponent for fractal characterization of LANDSAT images

    NASA Astrophysics Data System (ADS)

    Valdiviezo-N., Juan C.; Castro, Raul; Cristóbal, Gabriel; Carbone, Anna

    2014-10-01

    In this research the Hurst exponent H is used for quantifying the fractal features of LANDSAT images. The Hurst exponent is estimated by means of the Detrending Moving Average (DMA), an algorithm based on a generalized high-dimensional variance around a moving average low-pass filter. Hence, for a two-dimensional signal, the algorithm first generates an average response for different subarrays by varying the size of the moving low-pass filter. For each subarray the corresponding variance value is calculated by the difference between the original and the averaged signals. The value of the variance obtained at each subarray is then plotted on log-log axes, with the slope of the regression line corresponding to the Hurst exponent. The application of the algorithm to a set of LANDSAT imagery has allowed us to estimate the Hurst exponent of specific areas on Earth surface at subsequent time instances. According to the presented results, the value of the Hurst exponent is directly related to the changes in land use, showing a decreasing value when the area under study has been modified by natural processes or human intervention. Interestingly, natural areas presenting a gradual growth of man made activities or an increasing degree of pollution have a considerable reduction in their corresponding Hurst exponent.
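    The moving-average detrending idea can be illustrated in one dimension. This is a hedged sketch of the DMA estimator, not the paper's two-dimensional implementation, which varies the size of a moving low-pass filter over image subarrays.

    ```python
    import numpy as np

    def dma_hurst(y, windows):
        """Detrending Moving Average (DMA): the rms of the signal around a
        backward moving average scales as sigma(n) ~ n^H."""
        y = np.asarray(y, dtype=float)
        sigmas = []
        for n in windows:
            ma = np.convolve(y, np.ones(n) / n, mode="valid")  # window means
            resid = y[n - 1:] - ma                             # detrended signal
            sigmas.append(np.sqrt(np.mean(resid ** 2)))
        slope, _ = np.polyfit(np.log(windows), np.log(sigmas), 1)
        return slope

    rng = np.random.default_rng(1)
    bm = np.cumsum(rng.standard_normal(20000))   # Brownian motion, H = 0.5
    h = dma_hurst(bm, [8, 16, 32, 64, 128, 256])
    ```

    The log-log fit of sigma against window size mirrors the variance-versus-subarray-size fit described in the abstract.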

  15. Image Sequence Coding by Octrees

    NASA Astrophysics Data System (ADS)

    Leonardi, Riccardo

    1989-11-01

    This work addresses the problem of representing an image sequence as a set of octrees. The purpose is to generate a flexible data structure to model video signals, for applications such as motion estimation, video coding and/or analysis. An image sequence can be represented as a 3-dimensional causal signal, which becomes a 3-dimensional array of data when the signal has been digitized. If it is desirable to track long-term spatio-temporal correlation, a series of octree structures may be embedded in this 3D array. Each octree looks at a subset of data in the spatio-temporal space. At the lowest level (leaves of the octree), adjacent pixels of neighboring frames are captured. A combination of these is represented at the parent level of each group of 8 children. This combination may result in a more compact representation of the information in these pixels (coding application) or in a local estimate of some feature of interest (e.g., velocity, classification, object boundary). This combination can be iterated bottom-up to get a hierarchical description of the image sequence characteristics. A coding strategy using such a data structure involves the description of the octree shape, using one bit per node except for leaves located at the lowest level, and the value (or parametric model) assigned to each of these leaves. Experiments have been performed to represent Common Image Format (CIF) sequences.
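    An octree over a cubic spatio-temporal block can be sketched as the recursion below. This is an illustrative sketch, not the paper's coder: the homogeneity test and storing a leaf mean are simplifying assumptions standing in for the parametric leaf models described above.

    ```python
    import numpy as np

    def build_octree(block, tol=0.0):
        """Recursively split a cubic array (side a power of two) into eight
        octants; a homogeneous octant becomes a leaf storing its mean value."""
        if block.shape[0] == 1 or np.ptp(block) <= tol:
            return float(block.mean())          # leaf node
        h = block.shape[0] // 2
        return [build_octree(block[z:z + h, y:y + h, x:x + h], tol)
                for z in (0, h) for y in (0, h) for x in (0, h)]

    vol = np.zeros((4, 4, 4))
    vol[0, 0, 0] = 1.0       # one differing voxel forces two levels of splitting
    tree = build_octree(vol)
    ```

    Uniform regions of the sequence collapse into single leaves, which is exactly where the compact-representation gain described in the abstract comes from.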

  16. On the fractal distribution of primes and prime-indexed primes by the binary image analysis

    NASA Astrophysics Data System (ADS)

    Cattani, Carlo; Ciancio, Armando

    2016-10-01

    In this paper, the distribution of primes and prime-indexed primes (PIPs) is studied by mapping primes into a binary image which visualizes their distribution. These images show that the distribution of primes (and PIPs) is similar to a Cantor dust; moreover, the self-similarity with respect to the order of PIPs (already proven in Batchko (2014)) can be seen as an invariance of the binary images. The index of primes plays the same role as the scale for fractals, so that with respect to the index the distribution of prime-indexed primes is characterized by self-similarity, like any other fractal. In particular, in order to single out the scale dependence, the PIP fractal distribution is evaluated by limiting attention to two parameters, fractal dimension (δ) and lacunarity (λ), which are usually used to measure fractal nature. Because of the invariance of the corresponding binary plots, the fractal dimension and lacunarity of the primes distribution are invariant with respect to the index of PIPs.

  17. Image characterization by fractal descriptors in variational mode decomposition domain: Application to brain magnetic resonance

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-08-01

    The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier-spectrum-based fractal descriptors are estimated at specific scales and directions to characterize the image. The support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal and abnormal brain magnetic resonance images (MRI) affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, and also those estimated directly from the original image.

  18. Human discrimination of surface slant in fractal and related textured images.

    PubMed

    Passmore, P J; Johnston, A

    1995-01-01

    Slant-discrimination thresholds were measured for textures whose power spectra follow a power law in spatial frequency, P(f) ∝ f^(−β), i.e. log power falls off linearly with log spatial frequency. As the exponent β changes from high values to low values, the slope of the power spectrum of the image decreases. As the parameter passes through values in the fractal range, the resulting texture changes from having the appearance of a cloud-like surface through to a granite-like surface. Exponents below the fractal range produce textures that converge towards the appearance of a random grey-level noise pattern. Since fractal patterns are self-similar at a range of scales, one might think it would be difficult to recover changes in depth in fractal images; however, slant-discrimination thresholds did not differ substantially as a function of the slope of the power spectrum. Reducing the size of the viewing aperture increased thresholds significantly, suggesting that slant discrimination benefits from a global analysis. The effect of texture regularity on perceived slant was investigated using bandpassed fractal textures. As the bandwidth of a bandpass filter was reduced, the bandpassed texture was perceived to be increasingly more slanted than its fractal counterpart.

  19. Edge detection and image segmentation of space scenes using fractal analyses

    NASA Technical Reports Server (NTRS)

    Cleghorn, Timothy F.; Fuller, J. J.

    1992-01-01

    A method was developed for segmenting images of space scenes into manmade and natural components, using fractal dimensions and lacunarities. Calculations of these parameters are presented. Results are presented for a variety of aerospace images, showing that it is possible to perform edge detection of manmade objects against natural backgrounds such as those seen in an aerospace environment.

  20. A Comparison of Local Variance, Fractal Dimension, and Moran's I as Aids to Multispectral Image Classification

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    2004-01-01

    The accuracy of traditional multispectral maximum-likelihood image classification is limited by the skewed statistical distributions of reflectances from the complex heterogenous mixture of land cover types in urban areas. This work examines the utility of local variance, fractal dimension and Moran's I index of spatial autocorrelation in segmenting multispectral satellite imagery. Tools available in the Image Characterization and Modeling System (ICAMS) were used to analyze Landsat 7 imagery of Atlanta, Georgia. Although segmentation of panchromatic images is possible using indicators of spatial complexity, different land covers often yield similar values of these indices. Better results are obtained when a surface of local fractal dimension or spatial autocorrelation is combined as an additional layer in a supervised maximum-likelihood multispectral classification. The addition of fractal dimension measures is particularly effective at resolving land cover classes within urbanized areas, as compared to per-pixel spectral classification techniques.

  1. Reconstruction of coded aperture images

    NASA Technical Reports Server (NTRS)

    Bielefeld, Michael J.; Yin, Lo I.

    1987-01-01

    The balanced correlation method and the Maximum Entropy Method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a Uniformly Redundant Array (URA) system. Although the MEM method has advantages over the balanced correlation method, it is computationally time consuming because of the iterative nature of its solution. Massively parallel processing, with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM method in future coded-aperture experiments with the help of the MPP.

  2. Fractal Analysis of Laplacian Pyramidal Filters Applied to Segmentation of Soil Images

    PubMed Central

    de Castro, J.; Méndez, A.; Tarquis, A. M.

    2014-01-01

    The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters, which is uniparametrical in the case of the classical problem, when the length of the filter is 5. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges in the case of a single parameter a. These fractal filters lose less energy in every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm threshold values, and conclude that our algorithm produces reliable test results. PMID:25114957

  3. Fractal analysis of laplacian pyramidal filters applied to segmentation of soil images.

    PubMed

    de Castro, J; Ballesteros, F; Méndez, A; Tarquis, A M

    2014-01-01

    The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters, which is uniparametrical in the case of the classical problem, when the length of the filter is 5. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges in the case of a single parameter a. These fractal filters lose less energy in every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm threshold values, and conclude that our algorithm produces reliable test results. PMID:25114957

  4. Chaos and Fractals.

    ERIC Educational Resources Information Center

    Barton, Ray

    1990-01-01

    Presented is an educational game called "The Chaos Game" which produces complicated fractal images. Two basic computer programs are included. The production of fractal images by the Sierpinski gasket and the Chaos Game programs is discussed. (CW)
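    A minimal version of the Chaos Game mentioned above (a hedged sketch; the article's own programs are not reproduced here) repeatedly jumps halfway toward a randomly chosen triangle vertex, and the visited points trace out the Sierpinski gasket:

    ```python
    import random

    def chaos_game(n_points=5000, seed=0):
        """Chaos Game: iterate point -> midpoint(point, random vertex);
        the orbit settles onto the Sierpinski gasket."""
        random.seed(seed)
        vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
        x, y = 0.25, 0.25                  # arbitrary starting point
        points = []
        for _ in range(n_points):
            vx, vy = random.choice(vertices)
            x, y = (x + vx) / 2.0, (y + vy) / 2.0
            points.append((x, y))
        return points

    pts = chaos_game(5000)
    ```

    Plotting the returned points (e.g. with matplotlib) reveals the gasket; after a few burn-in iterations, the central inverted triangle stays empty.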

  5. Buried mine detection using fractal geometry analysis to the LWIR successive line scan data image

    NASA Astrophysics Data System (ADS)

    Araki, Kan

    2012-06-01

    We have engaged in research on buried mine/IED detection by a remote sensing method using an LWIR camera. An IR image of ground containing buried objects can be regarded as a superimposed pattern including thermal scattering, which may depend on the ground surface roughness, vegetation canopy, and the effect of sunlight, and radiation due to various heat interactions caused by differences in specific heat, size, and burial depth of the objects and the local temperature of their surrounding environment. In this cumbersome environment, we introduce fractal geometry for analyzing an IR image. Clutter patterns due to these complex elements oftentimes have a low-ordered fractal dimension in terms of Hausdorff dimension. On the other hand, the target patterns tend to have a higher-ordered fractal dimension in terms of information dimension. The random shuffle surrogate method or the Fourier transform surrogate method is used to evaluate fractional statistics by shuffling the time-sequence data or the phase of the spectrum. Fractal interpolation of each line scan was also applied to improve signal processing performance, in order to avoid division by zero and to enhance the information content of the data. Some results of target extraction using the relationship between low- and high-ordered fractal dimensions are presented.

  6. Fractal mapping of digitized images - Application to the topography of Arizona and comparisons with synthetic images

    NASA Technical Reports Server (NTRS)

    Huang, J.; Turcotte, D. L.

    1989-01-01

    The concept of fractal mapping is introduced and applied to digitized topography of Arizona. It is shown that fractal statistics satisfy the topography of the state to a good approximation. The fractal dimensions and roughness amplitudes from subregions are used to construct maps of these quantities. It is found that the fractal dimension of actual two-dimensional topography is not affected by adding unity to the fractal dimension of one-dimensional topographic tracks. In addition, consideration is given to the production of fractal maps from synthetically derived topography.

  7. [Influence of image process on fractal morphology characterization of NAPLs vertical fingering flow].

    PubMed

    Li, Hui-Ying; Du, Xiao-Ming; Yang, Bin; Wu, Bin; Xu, Zhu; Shi, Yi; Fang, Ji-Dun; Li, Fa-Sheng

    2013-11-01

    Dyes are frequently used to visualize fingering flow pathways, and the image processing involved plays an important role in the analysis of the results. The theory of fractal geometry is applied to give a quantitative description of the stain patterns via image analysis, which is helpful for finger characterization and prediction. This description typically involves two parameters: a mass fractal dimension (D(m)) relative to the area, and a surface fractal dimension (D(s)) relative to the perimeter. This work analyzes in detail the influence of various choices made during the thresholding step that transforms the original color images into the binary ones needed for fractal analysis. One hundred and thirty images were obtained from laboratory two-dimensional sandbox infiltration experiments with four dyed non-aqueous phase liquids. Detailed comparisons of D(m) and D(s) were made, considering a set of threshold algorithms and the filling of lakes. Results indicate that adjustments of the saturation threshold have little influence on either D(m) or D(s) in the laboratory experiments. Brightness threshold adjustments decrease D(m) by 0.02 and increase D(s) by 0.05. Filling lakes has little influence on D(m), while D(s) decreases by 0.10. Therefore D(m) is recommended for further analysis, to avoid the influence of subjective choices in the image processing.

  8. Correlated Percolation, Fractal Structures, and Scale-Invariant Distribution of Clusters in Natural Images

    PubMed Central

    Saremi, Saeed; Sejnowski, Terrence J.

    2016-01-01

    Natural images are scale invariant with structures at all length scales. We formulated a geometric view of scale invariance in natural images using percolation theory, which describes the behavior of connected clusters on graphs. We map images to the percolation model by defining clusters on a binary representation for images. We show that critical percolating structures emerge in natural images and study their scaling properties by identifying fractal dimensions and exponents for the scale-invariant distributions of clusters. This formulation leads to a method for identifying clusters in images from underlying structures as a starting point for image segmentation. PMID:26415153
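    Mapping an image to percolation-style clusters starts from connected-component labeling on a binary representation. The following 4-connected labeling sketch is an assumption-laden simplification; the paper's binarization and scaling analysis are more elaborate.

    ```python
    import numpy as np
    from collections import deque

    def cluster_sizes(binary):
        """Sizes of 4-connected foreground clusters in a binary image,
        found by breadth-first search, largest first."""
        visited = np.zeros_like(binary, dtype=bool)
        rows, cols = binary.shape
        sizes = []
        for i in range(rows):
            for j in range(cols):
                if binary[i, j] and not visited[i, j]:
                    size, queue = 0, deque([(i, j)])
                    visited[i, j] = True
                    while queue:                      # flood-fill one cluster
                        y, x = queue.popleft()
                        size += 1
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and binary[ny, nx] and not visited[ny, nx]):
                                visited[ny, nx] = True
                                queue.append((ny, nx))
                    sizes.append(size)
        return sorted(sizes, reverse=True)

    img = np.array([[1, 1, 0],
                    [0, 0, 1],
                    [1, 0, 1]])
    sizes = cluster_sizes(img)
    ```

    A histogram of such sizes over many natural images is what one would fit for the scale-invariant cluster distributions described above.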

  9. Coded aperture compressive temporal imaging.

    PubMed

    Llull, Patrick; Liao, Xuejun; Yuan, Xin; Yang, Jianbo; Kittle, David; Carin, Lawrence; Sapiro, Guillermo; Brady, David J

    2013-05-01

    We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.

  10. Hyper-Fractal Analysis v04: Implementation of a fuzzy box-counting algorithm for image analysis of artistic works

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; El-Shamali, S. A.

    2013-07-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images and 4D objects (Grossu et al. 2013 [1]). Following our attempt to investigate artistic works by fractal analysis of craquelure, we encountered important difficulties in filtering real information from noise. In this context, trying to avoid a sharp delimitation of "black" and "white" pixels, we implemented a fuzzy box-counting algorithm. Example of use: fuzzy fractal dimension of painting craquelure. Running time: to a first approximation, the algorithm is linear [2].
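    For reference, the classical (crisp) box-counting estimate that the fuzzy algorithm generalizes can be sketched as below; the fuzzy variant would replace the hard box-occupancy test with membership weights (that substitution is an assumption about the tool, whose actual code is not reproduced here).

    ```python
    import numpy as np

    def box_counting_dim(binary, sizes=(1, 2, 4, 8, 16, 32)):
        """Classical box-counting dimension: count boxes of side s containing
        at least one foreground pixel, then fit log N(s) against log s."""
        n = binary.shape[0]
        counts = []
        for s in sizes:
            trimmed = binary[:n - n % s, :n - n % s]
            blocks = trimmed.reshape(trimmed.shape[0] // s, s, -1, s)
            counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope

    full = np.ones((64, 64), dtype=bool)          # a filled plane: D = 2
    line = np.zeros((64, 64), dtype=bool)
    line[10, :] = True                            # a straight line: D = 1
    ```

    The two synthetic inputs recover their exact dimensions, which is a useful sanity check before applying the estimator to noisy craquelure images.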

  11. Fractal methods for extracting artificial objects from the unmanned aerial vehicle images

    NASA Astrophysics Data System (ADS)

    Markov, Eugene

    2016-04-01

    Unmanned aerial vehicles (UAVs) have been used increasingly in earth surface observations, with special interest in automatic modes of environmental control and recognition of artificial objects. Fractal methods for image processing detect artificial objects in digital space images well, but have not previously been applied to UAV-produced imagery. Parameters of photography, on-board equipment, and image characteristics differ considerably between spacecraft and UAVs. Therefore, methods that work properly with space images can produce different results for UAVs. In this regard, testing the applicability of fractal methods to UAV-produced images and determining the optimal range of parameters for these methods are of great interest. This research is dedicated to the solution of this problem. Specific features of earth surface images produced with UAVs are described in the context of their interpretation and recognition. Fractal image processing methods for extracting artificial objects are described. The results of applying these methods to UAV images are presented.

  12. Plant Identification Based on Leaf Midrib Cross-Section Images Using Fractal Descriptors

    PubMed Central

    da Silva, Núbia Rosa; Florindo, João Batista; Gómez, María Cecilia; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez

    2015-01-01

    The correct identification of plants is a common necessity not only for researchers but also for the lay public. Recently, computational methods have been employed to facilitate this task; however, there are few studies in view of the wide diversity of plants occurring in the world. This study proposes to analyse images obtained from cross-sections of the leaf midrib using fractal descriptors. These descriptors are obtained from the fractal dimension of the object computed over a range of scales. In this way, they provide rich information regarding the spatial distribution of the analysed structure and, as a consequence, they measure the multiscale morphology of the object of interest. In biology, such morphology is of great importance because it is related to evolutionary aspects and is successfully employed to characterize and discriminate among different biological structures. Here, the fractal descriptors are used to identify the species of plants based on the image of their leaves. A large number of samples are examined, comprising 606 leaf samples of 50 species from the Brazilian flora. The results are compared to other imaging methods in the literature and demonstrate that fractal descriptors are precise and reliable in the taxonomic process of plant species identification. PMID:26091501

  13. Subband coding for image data archiving

    NASA Technical Reports Server (NTRS)

    Glover, D.; Kwatra, S. C.

    1992-01-01

    The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.

  14. Subband coding for image data archiving

    NASA Technical Reports Server (NTRS)

    Glover, Daniel; Kwatra, S. C.

    1993-01-01

    The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.

  15. Detecting abrupt dynamic change based on changes in the fractal properties of spatial images

    NASA Astrophysics Data System (ADS)

    Liu, Qunqun; He, Wenping; Gu, Bin; Jiang, Yundi

    2016-08-01

    Many abrupt climate change events often cannot be detected in a timely manner by conventional abrupt-change detection methods until a few years after they have occurred. The reason for this lag in detection is that abundant and long-term observational data are required for accurate abrupt change detection by these methods, especially for the detection of a regime shift. So, these methods cannot help us understand and forecast the evolution of the climate system in a timely manner. Obviously, spatial images, generated by a coupled spatiotemporal dynamical model, contain more information about a dynamic system than a single time series, and we find that spatial images show fractal properties. The fractal properties of spatial images can be quantitatively characterized by the Hurst exponent, which can be estimated by two-dimensional detrended fluctuation analysis (TD-DFA). Based on this, TD-DFA is used to detect an abrupt dynamic change of a coupled spatiotemporal model. The results show that the TD-DFA method can effectively detect abrupt parameter changes in the coupled model by monitoring the changes in the fractal properties of spatial images. The present method provides a new way for abrupt dynamic change detection, which can achieve timely and efficient detection results.
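    The one-dimensional analogue of the TD-DFA procedure illustrates the fluctuation-scaling idea. This is a hedged 1-D sketch only; the two-dimensional version in the paper partitions the image into square sub-surfaces and detrends each with a bivariate fit.

    ```python
    import numpy as np

    def dfa(x, windows):
        """Detrended fluctuation analysis: integrate the series, detrend each
        window with a linear fit, and fit the scaling F(n) ~ n^alpha."""
        profile = np.cumsum(x - np.mean(x))
        flucts = []
        for n in windows:
            m = len(profile) // n
            segs = profile[:m * n].reshape(m, n)
            t = np.arange(n)
            resid = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
            flucts.append(np.sqrt(np.mean(np.square(resid))))
        slope, _ = np.polyfit(np.log(windows), np.log(flucts), 1)
        return slope

    rng = np.random.default_rng(2)
    alpha = dfa(rng.standard_normal(20000), [16, 32, 64, 128, 256])
    ```

    For uncorrelated noise the scaling exponent alpha is near 0.5; an abrupt change in the underlying dynamics shows up as a shift in the estimated exponent, which is the monitoring signal the abstract describes.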

  16. Asymmetries in the direction of saccades during perception of scenes and fractals: effects of image type and image features.

    PubMed

    Foulsham, Tom; Kingstone, Alan

    2010-04-01

    The direction in which people tend to move their eyes when inspecting images can reveal the different influences on eye guidance in scene perception, and their time course. We investigated biases in saccade direction during a memory-encoding task with natural scenes and computer-generated fractals. Images were rotated to disentangle egocentric and image-based guidance. Saccades in fractals were more likely to be horizontal, regardless of orientation. In scenes, the first saccade often moved down and subsequent eye movements were predominantly vertical, relative to the scene. These biases were modulated by the distribution of visual features (saliency and clutter) in the scene. The results suggest that image orientation, visual features and the scene frame-of-reference have a rapid effect on eye guidance.

  17. Alzheimer's Disease Detection in Brain Magnetic Resonance Images Using Multiscale Fractal Analysis

    PubMed Central

    Lahmiri, Salim; Boukadoum, Mounir

    2013-01-01

    We present a new automated system for the detection of brain magnetic resonance images (MRI) affected by Alzheimer's disease (AD). The MRI is analyzed by means of multiscale analysis (MSA) to obtain its fractals at six different scales. The extracted fractals are used as features to differentiate healthy brain MRI from those of AD by a support vector machine (SVM) classifier. The result of classifying 93 brain MRIs consisting of 51 images of healthy brains and 42 of brains affected by AD, using leave-one-out cross-validation method, yielded 99.18% ± 0.01 classification accuracy, 100% sensitivity, and 98.20% ± 0.02 specificity. These results and a processing time of 5.64 seconds indicate that the proposed approach may be an efficient diagnostic aid for radiologists in the screening for AD. PMID:24967286

  18. The influence of edge detection algorithms on the estimation of the fractal dimension of binary digital images

    NASA Astrophysics Data System (ADS)

    Ahammer, Helmut; DeVaney, Trevor T. J.

    2004-03-01

    The boundary of a fractal object, represented in a two-dimensional space, is theoretically a line with an infinitely small width. In digital images this boundary or contour is limited to the pixel resolution of the image, and the width of the line commonly depends on the edge detection algorithm used. The Minkowski dimension was evaluated by using three different edge detection algorithms (the Sobel, Roberts, and Laplace operators). These three operators were investigated because they are very widely used and because their edge detection results are very distinct with respect to line width. Very common fractals (the Sierpinski carpet and Koch islands) were investigated, as well as binary images from a cancer invasion assay taken with a confocal laser scanning microscope. The fractal dimension is directly proportional to the width of the contour line, and the fact that, in practice, the investigated objects are very often fractals only within a limited resolution range is also considered.

  19. Advanced Imaging Optics Utilizing Wavefront Coding.

    SciTech Connect

    Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen

    2015-06-01

    Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.

  20. An edge preserving differential image coding scheme

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1992-01-01

    Differential encoding techniques are fast and easy to implement. However, a major problem with the use of differential encoding for images is the rapid edge degradation encountered when using such systems. This makes differential encoding techniques of limited utility, especially when coding medical or scientific images, where edge preservation is of utmost importance. A simple, easy to implement differential image coding system with excellent edge preservation properties is presented. The coding system can be used over variable rate channels, which makes it especially attractive for use in the packet network environment.

  1. An edge preserving differential image coding scheme

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    Differential encoding techniques are fast and easy to implement. However, a major problem with the use of differential encoding for images is the rapid edge degradation encountered when using such systems. This makes differential encoding techniques of limited utility especially when coding medical or scientific images, where edge preservation is of utmost importance. We present a simple, easy to implement differential image coding system with excellent edge preservation properties. The coding system can be used over variable rate channels which makes it especially attractive for use in the packet network environment.
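    The differential principle behind both records above can be sketched with plain one-dimensional DPCM. This is an illustrative sketch only; the edge-preserving refinements of the paper are not shown. The key point is that the quantizer sits inside the prediction loop, so reconstruction errors do not accumulate:

    ```python
    import numpy as np

    def dpcm(row, q=8.0):
        """1-D DPCM: predict each sample by the previous *reconstructed* value,
        quantize the prediction residual with step q, and feed the
        reconstruction back into the predictor."""
        prev = 0.0
        codes, recon = [], []
        for s in row:
            c = int(round((s - prev) / q))   # quantized residual (the code sent)
            prev = prev + c * q              # decoder-side reconstruction
            codes.append(c)
            recon.append(prev)
        return codes, np.array(recon)

    rng = np.random.default_rng(3)
    signal = rng.integers(0, 256, 100).astype(float)
    codes, recon = dpcm(signal)
    ```

    With step size q, the per-sample reconstruction error is bounded by q/2; the edge degradation discussed in the abstracts arises when large residuals at edges exceed what the residual coder represents well.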

  2. A fractal derivative model for the characterization of anomalous diffusion in magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Liang, Yingjie; Ye, Allen Q.; Chen, Wen; Gatto, Rodolfo G.; Colon-Perez, Luis; Mareci, Thomas H.; Magin, Richard L.

    2016-10-01

    Non-Gaussian (anomalous) diffusion is widespread in biological tissues where its effects modulate chemical reactions and membrane transport. When viewed using magnetic resonance imaging (MRI), anomalous diffusion is characterized by a persistent or 'long tail' behavior in the decay of the diffusion signal. Recent MRI studies have used the fractional derivative to describe diffusion dynamics in normal and post-mortem tissue by connecting the order of the derivative with changes in tissue composition, structure and complexity. In this study we consider an alternative approach by introducing fractal time and space derivatives into Fick's second law of diffusion. This provides a more natural way to link sub-voxel tissue composition with the observed MRI diffusion signal decay following the application of a diffusion-sensitive pulse sequence. Unlike previous studies using fractional order derivatives, here the fractal derivative order is directly connected to the Hausdorff fractal dimension of the diffusion trajectory. The result is a simpler, computationally faster, and more direct way to incorporate tissue complexity and microstructure into the diffusional dynamics. Furthermore, the results are readily expressed in terms of spectral entropy, which provides a quantitative measure of the overall complexity of the heterogeneous and multi-scale structure of biological tissues. As an example, we apply this new model for the characterization of diffusion in fixed samples of the mouse brain. These results are compared with those obtained using the mono-exponential, the stretched exponential, the fractional derivative, and the diffusion kurtosis models. Overall, we find that the order of the fractal time derivative, the diffusion coefficient, and the spectral entropy are potential biomarkers to differentiate between the microstructure of white and gray matter. In addition, we note that the fractal derivative model has practical advantages over the existing models from the
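    The fractal time derivative invoked above can be written down compactly. This is a hedged sketch following the common Hausdorff-derivative definition; the paper's exact operator and parameters may differ.

    ```latex
    \frac{\partial u}{\partial t^{\alpha}}
      \;=\; \lim_{t_{1}\to t}\,\frac{u(t_{1})-u(t)}{t_{1}^{\alpha}-t^{\alpha}},
      \qquad 0<\alpha\le 1 .
    ```

    Substituting \(\tau = t^{\alpha}\) reduces this operator to an ordinary derivative in \(\tau\), so a relaxation equation \(\partial u/\partial t^{\alpha} = -\lambda u\) has the stretched-exponential solution \(u(t) = u(0)\,e^{-\lambda t^{\alpha}}\), which is one route to the persistent 'long tail' signal decay described in the abstract.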

  3. Bitplane Image Coding With Parallel Coefficient Processing.

    PubMed

    Auli-Llinas, Francesc; Enfedaque, Pablo; Moure, Juan C; Sanchez, Victor

    2016-01-01

    Image coding systems have traditionally been tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in its codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data. Most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is based mainly on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors because of their inherently sequential coding tasks. This paper presents bitplane image coding with parallel coefficient (BPC-PaCo) processing, a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been reformulated. The experimental results suggest that the penalty in coding performance of BPC-PaCo with respect to traditional strategies is almost negligible.
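
As a minimal illustration of the data layout such coders operate on, the sketch below decomposes a toy codeblock into bitplanes, most significant first. This is only the decomposition step; BPC-PaCo's reformulated scanning order, context model, and arithmetic coder are not reproduced here:

```python
import numpy as np

def bitplanes(coeffs, num_planes=8):
    """Split non-negative integer coefficients into bitplanes, most
    significant plane first -- the order in which bitplane coders scan them."""
    return [(coeffs >> p) & 1 for p in range(num_planes - 1, -1, -1)]

block = np.array([[5, 12], [0, 255]], dtype=np.uint8)   # toy "codeblock"
planes = bitplanes(block)

# Reassembling the planes recovers the block exactly.
rebuilt = sum(pl.astype(np.uint16) << p
              for pl, p in zip(planes, range(7, -1, -1)))
```

In a SIMD setting, all coefficients of one plane can in principle be examined in lockstep, which is the opportunity BPC-PaCo exploits.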

  4. Coding and transmission of subband coded images on the Internet

    NASA Astrophysics Data System (ADS)

    Wah, Benjamin W.; Su, Xiao

    2001-09-01

    Subband-coded images can be transmitted over the Internet using either the TCP or the UDP protocol. Delivery by TCP gives superior decoding quality but very long delays when the network is unreliable, whereas delivery by UDP has negligible delays but degraded quality when packets are lost. Although images are currently delivered over the Internet by TCP, in this paper we study the use of UDP to deliver multi-description reconstruction-based subband-coded images. First, in order to facilitate recovery from UDP packet losses, we propose a joint sender-receiver approach for designing optimized reconstruction-based subband transforms (ORB-ST) in multi-description coding (MDC). Second, we carefully evaluate the delay-quality trade-offs between the TCP delivery of single-description coded (SDC) images and the UDP and combined TCP/UDP delivery of MDC images. Experimental results show that our proposed ORB-ST performs well in real Internet tests, and UDP and combined TCP/UDP delivery of MDC images provide a range of attractive alternatives to TCP delivery.

  5. Beyond maximum entropy: Fractal pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, R. C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.

  6. Zone Specific Fractal Dimension of Retinal Images as Predictor of Stroke Incidence

    PubMed Central

    Kumar, Dinesh Kant; Hao, Hao; Unnikrishnan, Premith; Kawasaki, Ryo; Mitchell, Paul

    2014-01-01

    Fractal dimensions (FDs) are frequently used to summarize the complexity of the retinal vasculature. However, previous techniques were not zone specific. A new methodology to measure the FD of a specific zone in retinal images has been developed and tested as a marker for stroke prediction. Higuchi's fractal dimension was measured in the circumferential direction (FDC) with respect to the optic disk (OD), in three concentric regions between the OD boundary and 1.5 OD diameters from its margin. The significance of its association with a future stroke event was tested using the Blue Mountains Eye Study (BMES) database and compared against the spectrum fractal dimension (SFD) and the box-counting (BC) dimension. Kruskal-Wallis analysis revealed FDC as a better predictor of stroke (H = 5.80, P = 0.016, α = 0.05) compared with SFD (H = 0.51, P = 0.475, α = 0.05) and BC (H = 0.41, P = 0.520, α = 0.05), with an overall lower median value for the cases than for the control group. This work has shown a significant association between the zone-specific FDC of eye fundus images and future stroke episodes, a difference that is not significant when the other FD methods are employed. PMID:25485298
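
Higuchi's fractal dimension, the estimator at the heart of the FDC measure, is defined for a 1-D series. A compact generic sketch (the study applies it to circumferential profiles around the optic disk; the normalization here follows the usual textbook form and is for illustration only):

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Estimate Higuchi's fractal dimension of a 1-D series.
    Mean curve length L(k) is computed over k interleaved subsampled
    curves; the FD is the slope of log L(k) against log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, k_max + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            # normalization so curves at different k are comparable
            lk.append(dist * (n - 1) / ((len(idx) - 1) * k) / k)
        lengths.append(np.mean(lk))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope
```

A straight line gives an FD near 1; white noise gives an FD near 2, bracketing the values expected from physiological profiles.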

  7. Maximum constrained sparse coding for image representation

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Zhao, Danpei; Jiang, Zhiguo

    2015-12-01

    Sparse coding exhibits good performance in many computer vision applications by finding bases that capture high-level semantics of the data and learning sparse coefficients in terms of those bases. However, because the bases are non-orthogonal, sparse coding can hardly preserve the samples' similarity, which is important for discrimination. In this paper, a new image representation method called maximum constrained sparse coding (MCSC) is proposed. A sparse representation with more active coefficients carries more similarity information, and the infinity norm is added to the solution for this purpose. We solve the optimization problem by constraining the maximum of the codes and releasing the residual to other dictionary atoms. Experimental results on image clustering show that our method can preserve the similarity of adjacent samples while maintaining the sparsity of the codes.

  8. Adaptive predictive image coding using local characteristics

    NASA Astrophysics Data System (ADS)

    Hsieh, C. H.; Lu, P. C.; Liou, W. G.

    1989-12-01

    The paper presents an efficient adaptive predictive coding method using the local characteristics of images. In this method, three coding schemes, namely, mean, subsampling combined with fixed DPCM, and ADPCM/PCM, are used and one of these is chosen adaptively based on the local characteristics of images. The prediction parameters of the two-dimensional linear predictor in the ADPCM/PCM are extracted on a block by block basis. Simulation results show that the proposed method is effective in reducing the slope overload distortion and the granular noise at low bit rates, and thus it can improve the visual quality of reconstructed images.

  9. Unification of two fractal families

    NASA Astrophysics Data System (ADS)

    Liu, Ying

    1995-06-01

    Barnsley and Hurd classify fractal images into two families: iterated function system fractals (IFS fractals) and fractal transform fractals, also called local iterated function system fractals (LIFS fractals). We will call IFS fractals class 2 fractals, and LIFS fractals class 3 fractals. In this paper, we unify these two approaches plus another family of fractals, the class 5 fractals. The basic idea is as follows: a dynamical system can be represented by a digraph, and the nodes of a digraph can be divided into two parts, transient states and persistent states. For bilevel images, a persistent node is a black pixel and a transient node is a white pixel. For images with more than two gray levels, a stochastic digraph is used: a transient node is a pixel with an intensity of 0, and the intensity of a persistent node is determined by a relative frequency. In this way, the two families of fractals can be generated in a similar manner. We first present a classification of dynamical systems and introduce the transformation based on digraphs, then unify the two approaches for fractal binary images. We compare the decoding algorithms of the two families and, finally, generalize the discussion to continuous-tone images.

  10. Fractal lacunarity of trabecular bone and magnetic resonance imaging: New perspectives for osteoporotic fracture risk assessment

    PubMed Central

    Zaia, Annamaria

    2015-01-01

    Osteoporosis represents one major health condition for our growing elderly population. It accounts for severe morbidity and increased mortality in postmenopausal women and is becoming an emerging health concern even in aging men. Screening of the population at risk for bone degeneration and treatment assessment of osteoporotic patients to prevent bone fragility fractures are useful tools to improve quality of life in the elderly and to lighten the related socio-economic impact. Bone mineral density (BMD) estimated by means of dual-energy X-ray absorptiometry is normally used in clinical practice for osteoporosis diagnosis. Nevertheless, BMD alone is not a good predictor of fracture risk. From a clinical point of view, bone microarchitecture is an intriguing aspect for characterizing bone alteration patterns in aging and pathology. The spread of medical imaging techniques into clinical practice and the impressive advances in information technology, together with enhanced computational power, have promoted the proliferation of new methods to assess changes of trabecular bone architecture (TBA) during aging and osteoporosis. Magnetic resonance imaging (MRI) has recently arisen as a useful tool to measure bone structure in vivo. In particular, high-resolution MRI techniques have introduced new perspectives for TBA characterization by non-invasive, non-ionizing methods. However, texture analysis methods have not found favor with clinicians, as they produce quite a few parameters whose interpretation is difficult. The introduction into the biomedical field of paradigms such as the theory of complexity, chaos, and fractals suggests new approaches and provides innovative tools to develop computerized methods that, by producing a limited number of parameters sensitive to pathology onset and progression, would speed up their application in clinical practice. The complexity of living beings and the fractality of several physio-anatomic structures suggest

  11. A new version of Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.

    2010-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to bi-dimensional sets of points, stored in bitmap files. The application was extended to work also with comma separated values files and three-dimensional images.

    New version program summary
    Program title: Fractal Analysis v02
    Catalogue identifier: AEEG_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 9999
    No. of bytes in distributed program, including test data, etc.: 4 366 783
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 30 M
    Classification: 14
    Catalogue identifier of previous version: AEEG_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999
    Does the new version supersede the previous version?: Yes
    Nature of problem: Estimating the fractal dimension of 2D and 3D images.
    Solution method: Optimized implementation of the box-counting algorithm.
    Reasons for new version: The previous version was limited to bitmap image files. The new application was extended in order to work with objects stored in comma separated values (csv) files. The main advantages are: easier integration with other applications (csv is a widely used, simple text file format); fewer resources consumed and improved performance (only the information of interest, the "black points", is stored); higher resolution (the point coordinates are loaded into Visual Basic double variables [2]); and the possibility of storing three-dimensional objects (e.g. the 3D Sierpinski gasket). In this version the optimized box-counting algorithm [1] was extended to the three
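
As an illustration of the solution method named in the summary, a plain (unoptimized) box-counting estimator can be written in a few lines. This sketch is independent of the distributed Visual Basic code and assumes points already scaled into the unit square or cube:

```python
import numpy as np

def box_counting_dimension(points, sizes=(2, 4, 8, 16)):
    """Estimate the box-counting dimension of a point set whose
    coordinates are scaled into [0, 1)^d."""
    points = np.asarray(points, dtype=float)
    counts = []
    for s in sizes:
        # number of grid cells of side 1/s containing at least one point
        occupied = np.unique(np.floor(points * s).astype(int), axis=0)
        counts.append(len(occupied))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return slope
```

A densely sampled filled square should yield a dimension close to 2, which is a quick sanity check for any implementation.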

  12. Scaling properties and fractality in the distribution of coding segments in eukaryotic genomes revealed through a block entropy approach

    NASA Astrophysics Data System (ADS)

    Athanasopoulou, Labrini; Athanasopoulos, Stavros; Karamanos, Kostas; Almirantis, Yannis

    2010-11-01

    Statistical methods, including block entropy based approaches, have already been used in the study of long-range features of genomic sequences seen as symbol series, either considering the full alphabet of the four nucleotides or the binary purine/pyrimidine character set. Here we explore the alternation of short protein-coding segments with long noncoding spacers in entire chromosomes, focusing on the scaling properties of block entropy. In previous studies, it has been shown that the sizes of noncoding spacers follow power-law-like distributions in most chromosomes of eukaryotic organisms from distant taxa. We have developed a simple evolutionary model based on well-known molecular events (segmental duplications followed by elimination of most of the duplicated genes) which reproduces the observed linearity in log-log plots. The scaling properties of the block entropy H(n) have been studied in several works, whose findings suggest that linearity in semilogarithmic scale characterizes symbol sequences that exhibit fractal properties and long-range order; such linearity has been demonstrated for the logistic map at the Feigenbaum accumulation point. The present work starts with the observation that the block entropy of Cantor-like binary symbol series scales in a similar way. We then perform the same analysis for the full set of human chromosomes and for several chromosomes of other eukaryotes. A similar but less extended linearity in semilogarithmic scale, indicating fractality, is observed, while randomly formed surrogate sequences clearly lack this type of scaling. Genomic sequences always present entropy values much lower than their random surrogates. Symbol sequences produced by the aforementioned evolutionary model follow the scaling found in genomic sequences, thus corroborating the conjecture that “segmental duplication-gene elimination” dynamics may have contributed to the observed long-rangeness in the coding or noncoding alternation in

  13. Computing Challenges in Coded Mask Imaging

    NASA Technical Reports Server (NTRS)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to build the telescope, i.e., when wide fields of view and very good angular resolution are required at energies too high for focusing optics or too low for Compton/tracker techniques. The coded mask telescope is described and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, along with a chart of the types of position-sensitive detectors used in coded mask telescopes. Slides describe the mechanism of recovering an image from the masked pattern, including correlation with the mask pattern. The matrix approach is reviewed, and other approaches to image reconstruction are described. The presentation also reviews the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of the EXIST/HET with the SWIFT/BAT, and details of the design of the EXIST/HET.

  14. Classification of vertebral compression fractures in magnetic resonance images using spectral and fractal analysis.

    PubMed

    Azevedo-Marques, P M; Spagnoli, H F; Frighetto-Pereira, L; Menezes-Reis, R; Metzner, G A; Rangayyan, R M; Nogueira-Barbosa, M H

    2015-08-01

    Fractures with partial collapse of vertebral bodies are generically referred to as "vertebral compression fractures" or VCFs. VCFs can have different etiologies, comprising trauma, bone failure related to osteoporosis, or metastatic cancer affecting bone. VCFs related to osteoporosis (benign fractures) and to cancer (malignant fractures) are commonly found in the elderly population. In the clinical setting, the differentiation between benign and malignant fractures is complex and difficult. This paper presents a study aimed at developing a system for computer-aided diagnosis to help in the differentiation between malignant and benign VCFs in magnetic resonance imaging (MRI). We used T1-weighted MRI of the lumbar spine in the sagittal plane. Images from 47 consecutive patients (31 women, 16 men, mean age 63 years) were studied, including 19 malignant and 54 benign fractures. Spectral and fractal features were extracted from manually segmented images of 73 vertebral bodies with VCFs. The classification of malignant vs. benign VCFs was performed using the k-nearest neighbor classifier with the Euclidean distance. The results show that combinations of features derived from Fourier and wavelet transforms, together with the fractal dimension, achieved a correct classification rate of up to 94.7%, with an area under the receiver operating characteristic curve of up to 0.95. PMID:26736364
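
The classification step is a standard k-nearest-neighbor majority vote under the Euclidean distance. A generic sketch; the toy 2-D feature vectors below are hypothetical stand-ins for the actual Fourier/wavelet/fractal features:

```python
import numpy as np

def knn_classify(train_X, train_y, query, k=3):
    """Majority vote among the k training samples nearest to `query`
    under the Euclidean distance."""
    d = np.linalg.norm(train_X - query, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# Toy feature vectors: class 0 ("benign") vs class 1 ("malignant").
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
              [5.0, 5.0], [5.0, 6.0], [6.0, 5.0]])
y = np.array([0, 0, 0, 1, 1, 1])
```

With real features, leave-one-out evaluation over the 73 vertebral bodies would produce the kind of classification rate the abstract reports.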

  15. Multichannel linear predictive coding of color images

    NASA Astrophysics Data System (ADS)

    Maragos, P. A.; Mersereau, R. M.; Schafer, R. W.

    This paper reports on a preliminary study of applying single-channel (scalar) and multichannel (vector) 2-D linear prediction to color image modeling and coding. The novel idea of a multi-input, single-output 2-D ADPCM coder is also introduced. The results of this study indicate that texture information in multispectral images can be represented by linear prediction coefficients or matrices, whereas the prediction error conveys edge information. Moreover, by using single-channel edge information, the investigators obtained, from original color images of 24 bits/pixel, reconstructed images of good quality at information rates of 1 bit/pixel or less.

  16. Stereo image coding: a projection approach.

    PubMed

    Aydinoğlu, H; Hayes, M H

    1998-01-01

    Recently, due to advances in display technology, three-dimensional (3-D) imaging systems are becoming increasingly popular. One way of stimulating 3-D perception is to use stereo pairs, a pair of images of the same scene acquired from different perspectives. Since there is an inherent redundancy between the images of a stereo pair, data compression algorithms should be employed to represent stereo pairs efficiently. This paper focuses on the stereo image coding problem. We begin with a description of the problem and a survey of current stereo coding techniques. A new stereo image coding algorithm that is based on disparity compensation and subspace projection is described. This algorithm, the subspace projection technique (SPT), is a transform domain approach with a space-varying transformation matrix and may be interpreted as a spatial-transform domain representation of the stereo data. The advantage of the proposed approach is that it can locally adapt to the changes in the cross-correlation characteristics of the stereo pairs. Several design issues and implementations of the algorithm are discussed. Finally, we present empirical results suggesting that the SPT approach outperforms current stereo coding techniques. PMID:18276269
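
Disparity compensation, which the subspace projection technique builds on, amounts to a per-block horizontal search for the best match in the other view. A minimal sum-of-absolute-differences (SAD) sketch, not the paper's SPT itself:

```python
import numpy as np

def block_disparity(left, right, y, x, bs=4, max_d=8):
    """Horizontal disparity of the bs x bs block at (y, x) in the left
    image, found by minimum-SAD search along the same row of the right image."""
    ref = left[y:y + bs, x:x + bs].astype(float)
    best_d, best_sad = 0, np.inf
    for d in range(min(max_d, x) + 1):
        cand = right[y:y + bs, x - d:x - d + bs].astype(float)
        sad = np.abs(ref - cand).sum()
        if sad < best_sad:
            best_d, best_sad = d, sad
    return best_d

rng = np.random.default_rng(1)
left = rng.integers(0, 255, (16, 16))
right = np.roll(left, -3, axis=1)   # synthetic right view: uniform disparity of 3
```

The residual left after subtracting the disparity-compensated prediction is what the coder then has to represent compactly.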

  17. Evaluation of super-resolution imager with binary fractal test target

    NASA Astrophysics Data System (ADS)

    Landeau, Stéphane

    2014-10-01

    Today, a new generation of powerful non-linear image processing algorithms is used for real-time super-resolution or noise reduction. Optronic imagers with such features are becoming difficult to assess, because spatial resolution and sensitivity now depend on scene content. Many algorithms include a regularization process, which usually reduces image complexity to enhance spread edges or contours; small but important scene details can then be deleted by this kind of processing. In this paper, a binary fractal test target is presented, with a structured clutter pattern and an interesting multi-scale self-similarity property. The apparent structured clutter of this test target offers a trade-off between white noise, unlikely in real scenes, and very structured targets such as MTF targets. Together with the fractal design of the target, an assessment method has been developed to evaluate automatically the non-linear effects on the acquired and processed image. The calculated figure of merit is designed to be directly comparable to the linear Fourier MTF: the Haar wavelet elements distributed spatially and at different scales on the target play the role of sine Fourier cycles at different frequencies. The probability of correct resolution indicates the ability to read the correct Haar contrast among all Haar wavelet elements under a position constraint. For validation, two imager types were simulated: a well-sampled linear system and an under-sampled one, each coupled with super-resolution or noise reduction algorithms. The influence of the target contrast on the figures of merit is analyzed. Finally, the possible introduction of this new figure of merit into existing analytical range performance models, such as TRM4 (Fraunhofer IOSB) or NVIPM (NVESD), is discussed. Benefits and limitations of the method are also compared to the TOD (TNO) evaluation method.

  18. Coded Access Optical Sensor (CAOS) Imager

    NASA Astrophysics Data System (ADS)

    Riza, N. A.; Amin, M. J.; La Torre, J. P.

    2015-04-01

    High spatial resolution, low inter-pixel crosstalk, high signal-to-noise ratio (SNR), adequate application-dependent speed, and an economical, energy-efficient design are common goals sought after for optical image sensors. In optical microscopy, overcoming the diffraction limit in spatial resolution has been achieved using materials chemistry, optimal wavelengths, precision optics, and nanomotion mechanics for pixel-by-pixel scanning. Imagers based on pixelated imaging devices such as CCD/CMOS sensors avoid pixel-by-pixel scanning as all sensor pixels operate in parallel, but these imagers are fundamentally limited by inter-pixel crosstalk, in particular with interspersed bright and dim light zones. In this paper, we propose an agile pixel imager sensor design platform called the Coded Access Optical Sensor (CAOS) that can greatly alleviate these fundamental limitations, empowering smart optical imaging for particular environments. Specifically, this novel CAOS imager engages an application-dependent, electronically programmable agile pixel platform using hybrid space-time-frequency coded multiple access of the sampled optical irradiance map. We demonstrate the foundational working principles of the first experimental electronically programmable CAOS imager using hybrid time-frequency multiple access sampling of a known high contrast laser beam irradiance test map, with the CAOS instrument based on a Texas Instruments (TI) Digital Micromirror Device (DMD). This CAOS instrument provides imaging data that exhibits 77 dB electrical SNR, and the measured laser beam image irradiance specifications closely match (i.e., within 0.75% error) the laser manufacturer's beam image irradiance radius numbers. The proposed CAOS imager can be deployed in many scientific and non-scientific applications where pixel agility via electronic programmability can pull out desired features in an irradiance map subject to the CAOS imaging operation.

  19. Multitemporal and Multiscaled Fractal Analysis of Landsat Satellite Data Using the Image Characterization and Modeling System (ICAMS)

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Emerson, Charles W.; Lam, Nina Siu-Ngan; Laymon, Charles A.

    1997-01-01

    The Image Characterization And Modeling System (ICAMS) is a public domain software package designed to provide scientists with innovative spatial analytical tools to visualize, measure, and characterize landscape patterns so that environmental conditions or processes can be assessed and monitored more effectively. In this study ICAMS has been used to evaluate how changes in fractal dimension, as a landscape characterization index, and resolution are related to differences in Landsat images collected on different dates for the same area. Landsat Thematic Mapper (TM) data obtained in May and August 1993 over a portion of the Great Basin Desert in eastern Nevada were used for analysis. These data represent contrasting periods of peak "green-up" and "dry-down" for the study area. The TM data sets were converted into Normalized Difference Vegetation Index (NDVI) images to expedite analysis of differences in fractal dimension between the two dates. These NDVI images were also resampled from the original 30 meter pixel size to resolutions of 60, 120, 240, 480, and 960 meters, to permit an assessment of how fractal dimension varies with spatial resolution. Tests of fractal dimension for the two dates at various pixel resolutions show that the D values in the August image become increasingly complex as pixel size increases to 480 meters. The D values in the May image show an even more complex relationship to pixel size than that expressed in the August image. Fractal dimensions for a difference image computed for the May and August dates increase with pixel size up to a resolution of 120 meters, and then decline with increasing pixel size. This means that the greatest complexity in the difference images occurs around a resolution of 120 meters, which is analogous to the operational domain of changes in vegetation and snow cover that constitute differences between the two dates.
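
The NDVI conversion and the coarsening from 30 m to 60-960 m pixels can be sketched as follows. Block averaging is one common way to degrade resolution; the study's exact resampling method may differ:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    red, nir = red.astype(float), nir.astype(float)
    return (nir - red) / (nir + red + 1e-12)   # epsilon guards against 0/0

def block_resample(img, factor):
    """Coarsen a raster by averaging non-overlapping factor x factor
    blocks, e.g. 30 m pixels -> 60 m (factor=2), 120 m (factor=4), ..."""
    h, w = img.shape
    img = img[:h - h % factor, :w - w % factor]
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```

Computing the fractal dimension of each resampled NDVI image then gives the D-versus-resolution curves the study examines.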

  20. Typhoon center location algorithm based on fractal feature and gradient of infrared satellite cloud image

    NASA Astrophysics Data System (ADS)

    Zhang, Changjiang; Chen, Yuan; Lu, Juan

    2014-11-01

    An efficient algorithm for typhoon center location is proposed using fractal features and the gradient of infrared satellite cloud images. Except for dissipating typhoons, the center is generally located within the dense cloud region, which exhibits smoother texture and higher gray values than the marginal clouds. A window-analysis method is therefore used to select an appropriate cloud region: the window that maximizes the difference between the sum of the gray-gradient co-occurrence matrix and the fractal dimension is chosen as the dense cloud region. The temperature gradient near the typhoon center, excluding the eye, is small; the gradient information is therefore enhanced and computed with the Canny operator. A window is then used to traverse the dense cloud region. If a closed curve is found, its interior is taken as the typhoon center region; otherwise, the region with the most texture intersections and the highest density is taken as the center region. Finally, the geometric center of this region is reported as the typhoon center location. The method is tested on Chinese FY-2C geostationary satellite cloud images, and the results are compared with the typhoon center locations in the "tropical cyclone yearbook" compiled by the Shanghai Typhoon Institute of the China Meteorological Administration. Experimental results show that high location accuracy can be obtained.
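
The window-analysis step, choosing the sub-region that maximizes a texture score, can be sketched generically. Here mean brightness stands in for the paper's gray-gradient co-occurrence/fractal-dimension score, and the stride equals the window size for brevity:

```python
import numpy as np

def best_window(img, win, score):
    """Return the top-left corner of the win x win window that
    maximizes score(patch)."""
    best_s, best_pos = -np.inf, (0, 0)
    h, w = img.shape
    for yy in range(0, h - win + 1, win):
        for xx in range(0, w - win + 1, win):
            s = score(img[yy:yy + win, xx:xx + win])
            if s > best_s:
                best_s, best_pos = s, (yy, xx)
    return best_pos

img = np.zeros((8, 8))
img[4:8, 4:8] = 200.0   # a bright "dense cloud" patch in a toy scene
```

Substituting the paper's combined texture score for `np.mean` turns this generic search into the dense-cloud-region selection step.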

  1. Digital Image Analysis for DETCHIP® Code Determination

    PubMed Central

    Lyon, Marcus; Wilson, Mark V.; Rouhier, Kerry A.; Symonsbergen, David J.; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E.

    2013-01-01

    DETECHIP® is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP® relied on human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP®. Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop, and ImageJ. Several different techniques were used to evaluate the color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for ImageJ showed improved consistency over previous methods. PMID:25267940
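
Reading mean RGB values from a scanned spot and thresholding the change is straightforward once the image is a numeric array. A toy sketch, not the published macro; the threshold and box coordinates are hypothetical:

```python
import numpy as np

def mean_rgb(image, box):
    """Average R, G, B inside a rectangular spot of an H x W x 3 array."""
    y0, y1, x0, x1 = box
    return image[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)

def color_change_code(before, after, box, threshold=10.0):
    """1 if any channel shifted by more than `threshold`, else 0 --
    a toy version of turning an observed color change into a code digit."""
    delta = np.abs(mean_rgb(after, box) - mean_rgb(before, box))
    return int((delta > threshold).any())
```

Applying this per well over the array yields a code vector comparable to the human-read codes the paper replaces.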

  2. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

    With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are desirable, but they consume more storage space and more bandwidth when transferred over the Internet, so it is necessary to study image compression technology. At present, many image compression algorithms are applied in networks, and image compression standards have been established. This dissertation presents an analysis of the DCT. First, the principle of the DCT is described; it is central to image compression because of how widely the technique is used. Second, a deeper understanding of the DCT is developed using Matlab, covering the process of DCT-based image compression and an analysis of Huffman coding. Third, DCT-based image compression is demonstrated in Matlab, and the quality of the compressed pictures is analyzed. The DCT is, of course, not the only algorithm for image compression, and further algorithms yielding higher-quality compressed images can be expected; image compression technology will be widely used in networks and communications in the future.
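
The blockwise DCT pipeline described above can be sketched without Matlab. The block below builds an orthonormal 8x8 DCT-II basis and uses coefficient thresholding as a crude stand-in for quantization; the Huffman entropy-coding stage the abstract mentions is omitted:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (the transform used by JPEG)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def compress_block(block, keep=10):
    """Transform an 8x8 block, zero all but the `keep` largest-magnitude
    coefficients (a crude stand-in for quantization), and invert."""
    C = dct_matrix(8)
    coeffs = C @ block @ C.T
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return C.T @ coeffs @ C
```

Keeping all 64 coefficients reconstructs the block exactly; keeping only a few low-frequency ones gives the familiar quality-versus-size trade-off.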

  3. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality delivered by two popular JPEG2000 programs. The two medical image compression algorithms are both coded using JPEG2000, but they differ in interface, convenience, speed of computation, and the characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs named Apollo and JJ2000 were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at compression ratios ranging from 10:1 to 100:1, and the quality of the reconstructed images was then evaluated using five objective metrics. The Spearman rank correlation coefficients were measured under every metric for the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated under the five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression. PMID:23589187

  4. Medical image retrieval and analysis by Markov random fields and multi-scale fractal dimension.

    PubMed

    Backes, André Ricardo; Gerhardinger, Leandro Cavaleri; Batista Neto, João do Espírito Santo; Bruno, Odemir Martinez

    2015-02-01

    Many Content-based Image Retrieval (CBIR) systems and image analysis tools employ color, shape and texture (in a combined fashion or not) as attributes, or signatures, to retrieve images from databases or to perform image analysis in general. Among these attributes, texture has turned out to be the most relevant, as it allows the identification of a larger number of images of a different nature. This paper introduces a novel signature which can be used for image analysis and retrieval. It combines texture with complexity extracted from objects within the images. The approach consists of a texture segmentation step, modeled as a Markov Random Field process, followed by the estimation of the complexity of each computed region. The complexity is given by a Multi-scale Fractal Dimension. Experiments have been conducted using an MRI database in both pattern recognition and image retrieval contexts. The results show the accuracy of the proposed method in comparison with other traditional texture descriptors and also indicate how the performance changes as the level of complexity is altered.

  5. Transform coding of stereo image residuals.

    PubMed

    Moellenhoff, M S; Maier, M W

    1998-01-01

    Stereo image compression is of growing interest because of new display technologies and the needs of telepresence systems. Compared to monoscopic image compression, stereo image compression has received much less attention. A variety of algorithms have appeared in the literature that make use of the cross-view redundancy in the stereo pair. Many of these use the framework of disparity-compensated residual coding, but concentrate on the disparity compensation process rather than the post-compensation coding process. This paper studies specialized coding methods for the residual image produced by disparity compensation. The algorithms make use of theoretically expected and experimentally observed characteristics of the disparity-compensated stereo residual to select transforms and quantization methods. Performance is evaluated on mean squared error (MSE) and a stereo-unique metric based on image registration. Exploiting the directional characteristics in a discrete cosine transform (DCT) framework provides its best performance below 0.75 b/pixel for 8-b gray-scale imagery and below 2 b/pixel for 24-b color imagery. In the wavelet algorithm, roughly a 50% reduction in bit rate is possible by encoding only the vertical channel, where much of the stereo information is contained. The proposed algorithms do not incur substantial computational burden beyond that needed for any disparity-compensated residual algorithm. PMID:18276294

  6. Coded-aperture imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.

    1989-11-01

    Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.

  7. Coded-aperture imaging in nuclear medicine

    NASA Technical Reports Server (NTRS)

    Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.

    1989-01-01

    Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.

  8. MORPH-II, a software package for the analysis of scanning-electron-micrograph images for the assessment of the fractal dimension of exposed stone surfaces

    USGS Publications Warehouse

    Mossotti, Victor G.; Eldeeb, A. Raouf

    2000-01-01

    Turcotte (1997) and Barton and La Pointe (1995) have identified many potential uses for the fractal dimension in physicochemical models of surface properties. The image-analysis program described in this report is an extension of the program set MORPH-I (Mossotti and others, 1998), which provided the fractal analysis of electron-microscope images of pore profiles (Mossotti and Eldeeb, 1992). MORPH-II, an integration of the modified kernel of the program MORPH-I with image calibration and editing facilities, was designed to measure the fractal dimension of the exposed surfaces of stone specimens as imaged in cross section in an electron microscope.

  9. Hybrid subband image coding scheme using DWT, DPCM, and ADPCM

    NASA Astrophysics Data System (ADS)

    Oh, Kyung-Seak; Kim, Sung-Jin; Joo, Chang-Bok

    1998-07-01

    Subband image coding techniques have received considerable attention as powerful source coding techniques. They provide good compression results and can also be extended for progressive transmission and multiresolution analysis. In this paper, we propose a hybrid subband image coding scheme using DWT (discrete wavelet transform), DPCM (differential pulse code modulation), and ADPCM (adaptive DPCM). This scheme provides simple but significant image compression and transmission coding.
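
    As a minimal illustration of the DPCM component alone (without the wavelet subbands or the adaptive quantizer of the proposed scheme), a previous-sample predictor transmits the first sample and then only small prediction errors, which the decoder accumulates:

```python
def dpcm_encode(samples):
    """Previous-sample DPCM sketch (no quantizer, so it is lossless):
    send the first sample, then only the prediction errors."""
    prev, out = 0, []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def dpcm_decode(residuals):
    """Accumulate the prediction errors to recover the samples."""
    prev, out = 0, []
    for r in residuals:
        prev += r
        out.append(prev)
    return out

row = [100, 102, 105, 104, 104, 101]  # one row of pixel values
errors = dpcm_encode(row)
print(errors)  # [100, 2, 3, -1, 0, -3] -- small values, cheap to entropy-code
assert dpcm_decode(errors) == row
```

    The small dynamic range of the error stream is what makes it cheaper to code than the raw pixels; ADPCM adds an adaptive quantizer on top of this predictor.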

  10. Fractal Bread.

    ERIC Educational Resources Information Center

    Esbenshade, Donald H., Jr.

    1991-01-01

    Develops the idea of fractals through a laboratory activity that calculates the fractal dimension of ordinary white bread. Extends use of the fractal dimension to compare other complex structures, such as other breads and sponges. (MDH)

  11. Fast-neutron, coded-aperture imager

    NASA Astrophysics Data System (ADS)

    Woolf, Richard S.; Phlips, Bernard F.; Hutcheson, Anthony L.; Wulf, Eric A.

    2015-06-01

    This work discusses a large-scale, coded-aperture imager for fast neutrons, building off a proof-of-concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems, using coded-aperture imaging, for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2-d location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash digitizing electronics. Digital pulse shape discrimination (PSD) was performed to filter out the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with 1.8-μCi and 13-μCi 252Cf neutron/γ sources at three standoff distances of 9, 15 and 26 m (the maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fast-neutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. This shielding configuration led

  12. Diagnostics of hemangioma by the methods of correlation and fractal analysis of laser microscopic images of blood plasma

    NASA Astrophysics Data System (ADS)

    Boychuk, T. M.; Bodnar, B. M.; Vatamanesku, L. I.

    2011-09-01

    For the first time, complex correlation and fractal analysis was used to investigate microscopic images of both hemangioma tissue and liquids. A physical model is proposed for describing the formation of phase distributions of coherent radiation transformed by optically anisotropic biological structures. The phase maps of laser radiation in the boundary diffraction zone were used as the main information parameter. The results of investigating the interrelation between the correlation parameters (correlation area, asymmetry coefficient, and autocorrelation function excess) and the fractal parameter (dispersion of the logarithmic dependencies of power spectra) are presented. These parameters characterize the coordinate distributions of phase shifts in the points of laser images of histological sections of hemangioma, hemangioma blood smears, and blood plasma with vascular system pathologies. Diagnostic criteria for hemangioma onset are determined.

  13. Diagnostics of hemangioma by the methods of correlation and fractal analysis of laser microscopic images of blood plasma

    NASA Astrophysics Data System (ADS)

    Boychuk, T. M.; Bodnar, B. M.; Vatamanesku, L. I.

    2012-01-01

    For the first time, complex correlation and fractal analysis was used to investigate microscopic images of both hemangioma tissue and liquids. A physical model is proposed for describing the formation of phase distributions of coherent radiation transformed by optically anisotropic biological structures. The phase maps of laser radiation in the boundary diffraction zone were used as the main information parameter. The results of investigating the interrelation between the correlation parameters (correlation area, asymmetry coefficient, and autocorrelation function excess) and the fractal parameter (dispersion of the logarithmic dependencies of power spectra) are presented. These parameters characterize the coordinate distributions of phase shifts in the points of laser images of histological sections of hemangioma, hemangioma blood smears, and blood plasma with vascular system pathologies. Diagnostic criteria for hemangioma onset are determined.

  14. Dual-sided coded-aperture imager

    DOEpatents

    Ziock, Klaus-Peter

    2009-09-22

    In a vehicle, a single detector plane simultaneously measures radiation coming through two coded-aperture masks, one on either side of the detector. To determine which side of the vehicle a source is on, the two shadow masks are inverses of each other, i.e., one is the mask and the other is the anti-mask. All of the collected data are processed through two versions of an image reconstruction algorithm: one treats the data as if it were obtained through the mask, the other as though it were obtained through the anti-mask.

  15. Image coding using the directional ADPCM

    NASA Astrophysics Data System (ADS)

    Lien, Chen-Chang; Huang, Chang-Lin; Chang, I.-Chang

    1994-09-01

    This paper proposes a new ADPCM method for image coding, called directional ADPCM, which can remove more redundancy from image signals than conventional ADPCM. Conventional ADPCM calculates the two-dimensional prediction coefficients from the correlation functions by solving the Yule-Walker equation; in practice, the correlation functions are replaced by sample averages, so the solution is not optimal. Our directional ADPCM utilizes directional filters to obtain the energy distribution in four directions and then determines the four directional prediction coefficients. All the directional filters are designed using the singular value decomposition (SVD) method and the two-dimensional Hilbert transform technique. In the experiments, we show that the mean squared error for the directional ADPCM is less than that of the conventional ADPCM.

  16. Featured Image: Tests of an MHD Code

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-09-01

    Creating the codes that are used to numerically model astrophysical systems takes a lot of work and a lot of testing! A new, publicly available moving-mesh magnetohydrodynamics (MHD) code, DISCO, is designed to model 2D and 3D orbital fluid motion, such as that of astrophysical disks. In a recent article, DISCO creator Paul Duffell (University of California, Berkeley) presents the code and the outcomes from a series of standard tests of DISCO's stability, accuracy, and scalability. From left to right and top to bottom, the test outputs shown above are: a cylindrical Kelvin-Helmholtz flow (showing off DISCO's numerical grid in 2D), a passive scalar in a smooth vortex (can DISCO maintain contact discontinuities?), a global look at the cylindrical Kelvin-Helmholtz flow, a Jupiter-mass planet opening a gap in a viscous disk, an MHD flywheel (a test of DISCO's stability), an MHD explosion revealing shock structures, an MHD rotor (a more challenging version of the explosion), a Flock 3D MRI test (can DISCO study linear growth of the magnetorotational instability in disks?), and a nonlinear 3D MRI test. Check out the gif below for a closer look at each of these images, or follow the link to the original article to see even more! Citation: Paul C. Duffell 2016 ApJS 226 2. doi:10.3847/0067-0049/226/1/2

  17. The IRMA code for unique classification of medical images

    NASA Astrophysics Data System (ADS)

    Lehmann, Thomas M.; Schubert, Henning; Keysers, Daniel; Kohnen, Michael; Wein, Berthold B.

    2003-05-01

    Modern communication standards such as Digital Imaging and Communication in Medicine (DICOM) include non-image data for a standardized description of study, patient, or technical parameters. However, these tags are rather roughly structured, ambiguous, and often optional. In this paper, we present a mono-hierarchical multi-axial classification code for medical images and emphasize its advantages for content-based image retrieval in medical applications (IRMA). Our so-called IRMA coding system consists of four axes with three to four positions, each in {0,...,9,a,...,z}, where "0" denotes "unspecified" to determine the end of a path along an axis. In particular, the technical code (T) describes the imaging modality; the directional code (D) models body orientations; the anatomical code (A) refers to the body region examined; and the biological code (B) describes the biological system examined. Hence, the entire code results in a character string of not more than 13 characters (IRMA: TTTT - DDD - AAA - BBB). The code can be easily extended by introducing characters in certain code positions, e.g., if new modalities are introduced. In contrast to other approaches, mixtures of one- and two-literal code positions are avoided, which simplifies automatic code processing. Furthermore, the IRMA code obviates ambiguities resulting from overlapping code elements within the same level. Although this code was originally designed to be used in the IRMA project, other use of it is welcome.

  18. Coding depth perception from image defocus.

    PubMed

    Supèr, Hans; Romeo, August

    2014-12-01

    As a result of the spider experiments in Nagata et al. (2012), it was hypothesized that the depth perception mechanisms of these animals should be based on how much images are defocused. In the present paper, assuming that relative chromatic aberrations or blur radii values are known, we develop a formulation relating the values of these cues to the actual depth distance. Taking into account the form of the resulting signals, we propose the use of latency coding from a spiking neuron obeying Izhikevich's 'simple model'. If spider jumps can be viewed as approximately parabolic, some estimates allow for a sensory-motor relation between the time to the first spike and the magnitude of the initial velocity of the jump.

  19. Coherent diffractive imaging using randomly coded masks

    SciTech Connect

    Seaberg, Matthew H.; D'Aspremont, Alexandre; Turner, Joshua J.

    2015-12-07

    We experimentally demonstrate an extension to coherent diffractive imaging that encodes additional information through the use of a series of randomly coded masks, removing the need for typical object-domain constraints while guaranteeing a unique solution to the phase retrieval problem. Phase retrieval is performed using a numerical convex relaxation routine known as "PhaseCut," an iterative algorithm noted for its stability and its ability to find the global solution efficiently and robustly in the presence of noise. The experiment is performed using a laser diode at 532.2 nm, enabling rapid prototyping for future X-ray synchrotron and even free-electron laser experiments.

  20. New freeform NURBS imaging design code

    NASA Astrophysics Data System (ADS)

    Chrisp, Michael P.

    2014-12-01

    A new optical imaging design code for NURBS freeform surfaces is described, with reduced optimization cycle times due to its fast raytrace engine, and the ability to handle larger NURBS surfaces because of its improved optimization algorithms. This program, FANO (Fast Accurate NURBS Optimization), is then applied to an f/2 three mirror anastigmat design. Given the same optical design parameters, the optical system with NURBS freeform surfaces has an average r.m.s. spot size of 4 microns. This spot size is six times smaller than the conventional aspheric design, showing that the use of NURBS freeform surfaces can improve the performance of three mirror anastigmats for the next generation of smaller pixel size, larger format detector arrays.

  1. New Methods for Lossless Image Compression Using Arithmetic Coding.

    ERIC Educational Resources Information Center

    Howard, Paul G.; Vitter, Jeffrey Scott

    1992-01-01

    Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…

  2. Fractal morphology, imaging and mass spectrometry of single aerosol particles in flight.

    PubMed

    Loh, N D; Hampton, C Y; Martin, A V; Starodub, D; Sierra, R G; Barty, A; Aquila, A; Schulz, J; Lomb, L; Steinbrener, J; Shoeman, R L; Kassemeyer, S; Bostedt, C; Bozek, J; Epp, S W; Erk, B; Hartmann, R; Rolles, D; Rudenko, A; Rudek, B; Foucar, L; Kimmel, N; Weidenspointner, G; Hauser, G; Holl, P; Pedersoli, E; Liang, M; Hunter, M S; Hunter, M M; Gumprecht, L; Coppola, N; Wunderer, C; Graafsma, H; Maia, F R N C; Ekeberg, T; Hantke, M; Fleckenstein, H; Hirsemann, H; Nass, K; White, T A; Tobias, H J; Farquar, G R; Benner, W H; Hau-Riege, S P; Reich, C; Hartmann, A; Soltau, H; Marchesini, S; Bajt, S; Barthelmess, M; Bucksbaum, P; Hodgson, K O; Strüder, L; Ullrich, J; Frank, M; Schlichting, I; Chapman, H N; Bogan, M J

    2012-06-28

    The morphology of micrometre-size particulate matter is of critical importance in fields ranging from toxicology to climate science, yet these properties are surprisingly difficult to measure in the particles' native environment. Electron microscopy requires collection of particles on a substrate; visible light scattering provides insufficient resolution; and X-ray synchrotron studies have been limited to ensembles of particles. Here we demonstrate an in situ method for imaging individual sub-micrometre particles to nanometre resolution in their native environment, using intense, coherent X-ray pulses from the Linac Coherent Light Source free-electron laser. We introduced individual aerosol particles into the pulsed X-ray beam, which is sufficiently intense that diffraction from individual particles can be measured for morphological analysis. At the same time, ion fragments ejected from the beam were analysed using mass spectrometry, to determine the composition of single aerosol particles. Our results show the extent of internal dilation symmetry of individual soot particles subject to non-equilibrium aggregation, and the surprisingly large variability in their fractal dimensions. More broadly, our methods can be extended to resolve both static and dynamic morphology of general ensembles of disordered particles. Such general morphology has implications in topics such as solvent accessibilities in proteins, vibrational energy transfer by the hydrodynamic interaction of amino acids, and large-scale production of nanoscale structures by flame synthesis.

  3. Fractal morphology, imaging and mass spectrometry of single aerosol particles in flight.

    PubMed

    Loh, N D; Hampton, C Y; Martin, A V; Starodub, D; Sierra, R G; Barty, A; Aquila, A; Schulz, J; Lomb, L; Steinbrener, J; Shoeman, R L; Kassemeyer, S; Bostedt, C; Bozek, J; Epp, S W; Erk, B; Hartmann, R; Rolles, D; Rudenko, A; Rudek, B; Foucar, L; Kimmel, N; Weidenspointner, G; Hauser, G; Holl, P; Pedersoli, E; Liang, M; Hunter, M S; Hunter, M M; Gumprecht, L; Coppola, N; Wunderer, C; Graafsma, H; Maia, F R N C; Ekeberg, T; Hantke, M; Fleckenstein, H; Hirsemann, H; Nass, K; White, T A; Tobias, H J; Farquar, G R; Benner, W H; Hau-Riege, S P; Reich, C; Hartmann, A; Soltau, H; Marchesini, S; Bajt, S; Barthelmess, M; Bucksbaum, P; Hodgson, K O; Strüder, L; Ullrich, J; Frank, M; Schlichting, I; Chapman, H N; Bogan, M J

    2012-06-28

    The morphology of micrometre-size particulate matter is of critical importance in fields ranging from toxicology to climate science, yet these properties are surprisingly difficult to measure in the particles' native environment. Electron microscopy requires collection of particles on a substrate; visible light scattering provides insufficient resolution; and X-ray synchrotron studies have been limited to ensembles of particles. Here we demonstrate an in situ method for imaging individual sub-micrometre particles to nanometre resolution in their native environment, using intense, coherent X-ray pulses from the Linac Coherent Light Source free-electron laser. We introduced individual aerosol particles into the pulsed X-ray beam, which is sufficiently intense that diffraction from individual particles can be measured for morphological analysis. At the same time, ion fragments ejected from the beam were analysed using mass spectrometry, to determine the composition of single aerosol particles. Our results show the extent of internal dilation symmetry of individual soot particles subject to non-equilibrium aggregation, and the surprisingly large variability in their fractal dimensions. More broadly, our methods can be extended to resolve both static and dynamic morphology of general ensembles of disordered particles. Such general morphology has implications in topics such as solvent accessibilities in proteins, vibrational energy transfer by the hydrodynamic interaction of amino acids, and large-scale production of nanoscale structures by flame synthesis. PMID:22739316

  4. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksized transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. Which coders are selected to code any given image region is made through a threshold driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  5. PynPoint code for exoplanet imaging

    NASA Astrophysics Data System (ADS)

    Amara, A.; Quanz, S. P.; Akeret, J.

    2015-04-01

    We announce the public release of PynPoint, a Python package that we have developed for analysing exoplanet data taken with the angular differential imaging observing technique. In particular, PynPoint is designed to model the point spread function of the central star and to subtract its flux contribution to reveal nearby faint companion planets. The current version of the package does this correction by using a principal component analysis method to build a basis set for modelling the point spread function of the observations. We demonstrate the performance of the package by reanalysing publicly available data on the exoplanet β Pictoris b, which consists of close to 24,000 individual image frames. We show that PynPoint is able to analyse this typical data in roughly 1.5 min on a Mac Pro, when the number of images is reduced by co-adding in sets of 5. The main computational work, the calculation of the Singular-Value-Decomposition, parallelises well as a result of a reliance on the SciPy and NumPy packages. For this calculation the peak memory load is 6 GB, which can be run comfortably on most workstations. A simpler calculation, by co-adding over 50, takes 3 s with a peak memory usage of 600 MB. This can be performed easily on a laptop. In developing the package we have modularised the code so that we will be able to extend functionality in future releases, through the inclusion of more modules, without it affecting the users application programming interface. We distribute the PynPoint package under GPLv3 licence through the central PyPI server, and the documentation is available online (http://pynpoint.ethz.ch).
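
    The principal-component PSF modelling that the package performs can be sketched with a plain SVD. The function below is a hypothetical illustration of the idea, not PynPoint's actual API:

```python
import numpy as np

def psf_subtract(frames, n_components):
    """PCA-based PSF-subtraction sketch: build a principal-component basis
    from the mean-subtracted image stack via SVD, then remove each frame's
    projection onto that basis so residual (companion) signal remains."""
    stack = frames.reshape(len(frames), -1).astype(float)
    centered = stack - stack.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]           # PSF model basis vectors
    model = centered @ basis.T @ basis  # projection onto the PSF basis
    return (centered - model).reshape(frames.shape)

# With a basis spanning the whole stack the model absorbs everything;
# with fewer components, structure not shared across frames survives.
rng = np.random.default_rng(1)
frames = rng.normal(size=(6, 8, 8))   # stand-in for registered ADI frames
residual = psf_subtract(frames, n_components=6)
```

    Choosing `n_components` trades PSF-removal completeness against self-subtraction of the planet signal, which is the central tuning decision in this family of methods.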

  6. A new set of wavelet- and fractals-based features for Gleason grading of prostate cancer histopathology images

    NASA Astrophysics Data System (ADS)

    Mosquera Lopez, Clara; Agaian, Sos

    2013-02-01

    Prostate cancer detection and staging is an important step towards patient treatment selection. Advancements in digital pathology allow the application of new quantitative image analysis algorithms for computer-assisted diagnosis (CAD) on digitized histopathology images. In this paper, we introduce a new set of features to automatically grade pathological images using the well-known Gleason grading system. The goal of this study is to classify biopsy images belonging to Gleason patterns 3, 4, and 5 by using a combination of wavelet and fractal features. For image classification we use pairwise coupling Support Vector Machine (SVM) classifiers. The accuracy of the system, which is close to 97%, is estimated through three different cross-validation schemes. The proposed system offers the potential for automating classification of histological images and supporting prostate cancer diagnosis.

  7. Coded source neutron imaging at the PULSTAR reactor

    SciTech Connect

    Xiao, Ziyu; Mishra, Kaushal; Hawari, Ayman; Bingham, Philip R; Bilheux, Hassina Z; Tobin Jr, Kenneth William

    2011-01-01

    A neutron imaging facility is located on beam-tube No. 5 of the 1-MW PULSTAR reactor at North Carolina State University. An investigation of high resolution imaging using the coded source imaging technique has been initiated at the facility. Coded imaging uses a mosaic of pinholes to encode an aperture, thus generating an encoded image of the object at the detector. To reconstruct the image data received by the detector, the corresponding decoding patterns are used. The optimized design of the coded mask is critical for the performance of this technique and will depend on the characteristics of the imaging beam. In this work, a 34 x 38 uniformly redundant array (URA) coded aperture system is studied for application at the PULSTAR reactor neutron imaging facility. The URA pattern was fabricated on a 500 μm gadolinium sheet. Simulations and experiments with a pinhole object have been conducted using the Gd URA and the optimized beam line.
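
    The decoding step described above can be illustrated in one dimension with a small pseudonoise mask; a length-7 m-sequence stands in for the 34 x 38 URA in this hedged sketch, which is not the facility's actual reconstruction code:

```python
import numpy as np

# Length-7 m-sequence mask: 1 = open pinhole, 0 = opaque element.
mask = np.array([1, 1, 1, 0, 1, 0, 0])

def decode(counts, mask):
    """Correlation decoding for a cyclic 1-D coded aperture: weight open
    elements +1 and closed elements -1, then correlate with the detector
    counts. Pseudonoise masks have flat sidelobes, so a point source
    reconstructs to a single sharp peak at its true position."""
    weights = 2 * mask - 1
    return np.array([np.dot(np.roll(weights, s), counts)
                     for s in range(len(mask))])

# A point source at position 3 casts a cyclically shifted shadow of the mask.
counts = np.roll(mask, 3)
print(decode(counts, mask))  # [0 0 0 4 0 0 0]: peak at the source position
```

    The same correlation is applied row- and column-wise for a 2-D URA, which is why the mask pattern's autocorrelation properties dominate image quality.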

  8. Fractals in biology and medicine

    NASA Technical Reports Server (NTRS)

    Havlin, S.; Buldyrev, S. V.; Goldberger, A. L.; Mantegna, R. N.; Ossadnik, S. M.; Peng, C. K.; Simons, M.; Stanley, H. E.

    1995-01-01

    Our purpose is to describe some recent progress in applying fractal concepts to systems of relevance to biology and medicine. We review several biological systems characterized by fractal geometry, with a particular focus on the long-range power-law correlations found recently in DNA sequences containing noncoding material. Furthermore, we discuss the finding that the exponent alpha quantifying these long-range correlations ("fractal complexity") is smaller for coding than for noncoding sequences. We also discuss the application of fractal scaling analysis to the dynamics of heartbeat regulation, and report the recent finding that the normal heart is characterized by long-range "anticorrelations" which are absent in the diseased heart.

  9. A Double-Minded Fractal

    ERIC Educational Resources Information Center

    Simoson, Andrew J.

    2009-01-01

    This article presents a fun activity of generating a double-minded fractal image for a linear algebra class once the idea of rotation and scaling matrices are introduced. In particular the fractal flip-flops between two words, depending on the level at which the image is viewed. (Contains 5 figures.)

  10. Coded excitation plane wave imaging for shear wave motion detection.

    PubMed

    Song, Pengfei; Urban, Matthew W; Manduca, Armando; Greenleaf, James F; Chen, Shigao

    2015-07-01

    Plane wave imaging has greatly advanced the field of shear wave elastography thanks to its ultrafast imaging frame rate and the large field-of-view (FOV). However, plane wave imaging also has decreased penetration due to lack of transmit focusing, which makes it challenging to use plane waves for shear wave detection in deep tissues and in obese patients. This study investigated the feasibility of implementing coded excitation in plane wave imaging for shear wave detection, with the hypothesis that coded ultrasound signals can provide superior detection penetration and shear wave SNR compared with conventional ultrasound signals. Both phase encoding (Barker code) and frequency encoding (chirp code) methods were studied. A first phantom experiment showed an approximate penetration gain of 2 to 4 cm for the coded pulses. Two subsequent phantom studies showed that all coded pulses outperformed the conventional short imaging pulse by providing superior sensitivity to small motion and robustness to weak ultrasound signals. Finally, an in vivo liver case study on an obese subject (body mass index = 40) demonstrated the feasibility of using the proposed method for in vivo applications, and showed that all coded pulses could provide higher SNR shear wave signals than the conventional short pulse. These findings indicate that by using coded excitation shear wave detection, one can benefit from the ultrafast imaging frame rate and large FOV provided by plane wave imaging while preserving good penetration and shear wave signal quality, which is essential for obtaining robust shear elasticity measurements of tissue.
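
    The benefit of phase encoding can be seen from the autocorrelation of the Barker code itself. The sketch below is an illustration of pulse compression, not the paper's detection pipeline: matched filtering a Barker-13 sequence concentrates the energy of the long transmit pulse into a mainlobe 13 times the sidelobe level, which is how coded excitation gains penetration without sacrificing axial resolution.

```python
import numpy as np

# Barker-13 biphase code used for coded excitation.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# Matched filtering = correlation of the received code with itself.
compressed = np.correlate(barker13, barker13, mode="full")

print(int(compressed.max()))                         # 13 (mainlobe)
print(int(np.abs(np.delete(compressed, 12)).max()))  # 1  (peak sidelobe)
```

    The 13:1 mainlobe-to-sidelobe ratio is the theoretical limit for biphase codes of this length; chirp (frequency-encoded) pulses achieve a similar compression gain with different sidelobe behavior.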

  11. Image analysis of human corneal endothelial cells based on fractal theory

    NASA Astrophysics Data System (ADS)

    Zhang, Zhi; Luo, Qingming; Zeng, Shaoqun; Zhang, Xinyu; Huang, Dexiu; Chen, Weiguo

    1999-09-01

    A fast method is developed to quantitatively characterize the shape of human corneal endothelial cells with fractal theory, and it is applied to analyze microscopic photographs of human corneal endothelial cells. The results show that human corneal endothelial cells possess a third characterization parameter, fractal dimension, in addition to the two conventional parameters (size and shape). Compared with traditional methods, this approach offers several advantages, such as automation, speed, and parallel processing, and it can be used to analyze large numbers of endothelial cells, so the obtained values are statistically significant. It offers a new approach for the clinical diagnosis of endothelial cells.

  12. Fractal Feature Analysis of Beef Marbling Patterns

    NASA Astrophysics Data System (ADS)

    Chen, Kunjie; Qin, Chunfang

    The purpose of this study is to investigate the fractal behavior of beef marbling patterns and to explore relationships between fractal dimensions and marbling scores. The authors first extracted marbling images from beef rib-eye cross-section images using computer image processing technologies and then performed fractal analysis on these marbling images based on the pixel covering method. Finally, the box-counting fractal dimension (BFD) and informational fractal dimension (IFD) of one hundred and thirty-five beef marbling images were calculated and plotted against the beef marbling scores. The results showed that all beef marbling images exhibit fractal behavior over the limited range of scales accessible to analysis. Furthermore, their BFD and IFD are closely related to the beef marbling score, suggesting that fractal analysis can provide a potential tool for calibrating the score of beef marbling.
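    A minimal box-counting sketch, assuming the standard pixel-covering formulation (not the authors' code): the binary image is covered with boxes of side s, and the box-counting dimension is the slope of log N(s) versus log(1/s). A Sierpinski carpet is used as the test image because its dimension is known exactly: log 8 / log 3 ≈ 1.893.

```python
import math

def sierpinski_carpet(order):
    """Binary image, as a set of occupied (row, col) pixels, of side 3**order."""
    pixels = {(0, 0)}
    for _ in range(order):
        # Each occupied pixel expands to a 3x3 block with the center removed.
        pixels = {(3 * r + dr, 3 * c + dc)
                  for (r, c) in pixels
                  for dr in range(3) for dc in range(3)
                  if not (dr == 1 and dc == 1)}
    return pixels

def box_counting_dimension(pixels, sizes):
    """Least-squares slope of log N(s) against log(1/s)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(len({(r // s, c // s) for (r, c) in pixels}))
          for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

carpet = sierpinski_carpet(4)  # 81 x 81 test image
dim = box_counting_dimension(carpet, sizes=[1, 3, 9, 27])
print(round(dim, 3))           # 1.893, i.e. log 8 / log 3
```

    On a real marbling image the log-log points are only approximately collinear, so the fitted slope is reported over the limited range of scales mentioned in the abstract.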

  13. Skin cancer texture analysis of OCT images based on Haralick, fractal dimension and the complex directional field features

    NASA Astrophysics Data System (ADS)

    Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.

    2016-04-01

    Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the potential of OCT for detecting such changes using a computer texture analysis method based on Haralick texture features, fractal dimension, and the complex directional field method, applied to different tissues. These features were used to identify spatial characteristics that distinguish healthy tissue from various skin cancers in cross-section OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing; in this paper, an interval type-II fuzzy anisotropic diffusion algorithm was used for speckle noise reduction in OCT images. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity evaluated in different directions. A box-counting method is applied to compute the fractal dimension of the investigated tissues. Additionally, we used the complex directional field, calculated by the local gradient method, to increase the assessment quality of the diagnostic method. The complex directional field (like the "classical" directional field) can describe an image as a set of directions. Because malignant tissue grows anisotropically, principal grooves may be observed on dermoscopic images, implying the possible existence of principal directions in OCT images. Our results suggest that the described texture features may provide useful information to differentiate pathological from healthy tissue. The problem of distinguishing melanoma from nevi is also addressed in this work, owing to the large quantity of experimental data (143 OCT images of tumors including basal cell carcinoma (BCC), malignant melanoma (MM), and nevi). We obtained a sensitivity of about 90% and a specificity of about 85%. Further research is warranted to determine how this approach may be used to select regions of interest automatically.

  14. Practicing the Code of Ethics, finding the image of God.

    PubMed

    Hoglund, Barbara A

    2013-01-01

    The Code of Ethics for Nurses gives a professional obligation to practice in a compassionate and respectful way that is unaffected by the attributes of the patient. This article explores the concept "made in the image of God" and the complexities inherent in caring for those perceived as exhibiting distorted images of God. While the Code provides a professional standard consistent with a biblical worldview, human nature impacts the ability to consistently act congruently with the Code. Strategies and nursing interventions that support development of practice from a biblical worldview and the Code of Ethics for Nurses are presented.

  15. The fractal Menger sponge and Sierpinski carpet as models for reservoir rock/pore systems: I. Theory and image analysis of Sierpinski carpets

    SciTech Connect

    Garrison, J.R., Jr.; Pearn, W.C.; von Rosenberg, D. W.

    1992-01-01

    In this paper reservoir rock/pore systems are considered natural fractal objects and modeled as and compared to the regular fractal Menger Sponge and Sierpinski Carpet. The physical properties of a porous rock are, in part, controlled by the geometry of the pore system. The rate at which a fluid or electrical current can travel through the pore system of a rock is controlled by the path along which it must travel. This path is a subset of the overall geometry of the pore system. Reservoir rocks exhibit self-similarity over a range of length scales suggesting that fractal geometry offers a means of characterizing these complex objects. The overall geometry of a rock/pore system can be described, conveniently and concisely, in terms of effective fractal dimensions. The rock/pore system is modeled as the fractal Menger Sponge. A cross section through the rock/pore system, such as an image of a thin-section of a rock, is modeled as the fractal Sierpinski Carpet, which is equivalent to the face of the Menger Sponge.
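    The effective dimensions of the two model fractals above follow from the textbook similarity-dimension formula D = log N / log s for a self-similar set built from N copies scaled by 1/s (a standard result, not code from the paper):

```python
import math

def similarity_dimension(copies, scale):
    """D = log N / log s for N self-similar copies at scale 1/s."""
    return math.log(copies) / math.log(scale)

# Sierpinski carpet: 8 of 9 sub-squares kept, each scaled by 1/3.
print(round(similarity_dimension(8, 3), 4))   # 1.8928
# Menger sponge: 20 of 27 sub-cubes kept, each scaled by 1/3.
print(round(similarity_dimension(20, 3), 4))  # 2.7268
```

    Note the carpet dimension (≈1.8928) is exactly one less than the sponge dimension minus the extra removals would suggest only for special cases; in general the 2D section of a 3D fractal need not have dimension exactly D−1, which is why the paper treats the carpet as its own model for thin-section images.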

  16. Coded Aperture Imaging for Fluorescent X-rays-Biomedical Applications

    SciTech Connect

    Haboub, Abdel; MacDowell, Alastair; Marchesini, Stefano; Parkinson, Dilworth

    2013-06-01

    Employing a coded aperture pattern in front of a pixelated charge-coupled device (CCD) detector allows for imaging of fluorescent x-rays (6-25 keV) emitted from samples irradiated with x-rays. Coded apertures encode the angular direction of x-rays and allow for a large numerical aperture x-ray imaging system. An algorithm was developed to generate the self-supported coded aperture pattern of the No-Two-Holes-Touching (NTHT) type. The algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of modeling and confirmed by experiments. Samples were irradiated by monochromatic synchrotron x-ray radiation, and fluorescent x-rays from several different test metal samples were imaged through the newly developed coded aperture imaging system. By choice of the exciting energy, the different metals were speciated.

  17. Progressive Image Coding by Hierarchical Linear Approximation.

    ERIC Educational Resources Information Center

    Wu, Xiaolin; Fang, Yonggang

    1994-01-01

    Proposes a scheme of hierarchical piecewise linear approximation as an adaptive image pyramid. A progressive image coder comes naturally from the proposed image pyramid. The new pyramid is semantically more powerful than regular tessellation but syntactically simpler than free segmentation. This compromise between adaptability and complexity…

  18. Image compression using a novel edge-based coding algorithm

    NASA Astrophysics Data System (ADS)

    Keissarian, Farhad; Daemi, Mohammad F.

    2001-08-01

    In this paper, we present a novel edge-based coding algorithm for image compression. The proposed coding scheme is the predictive version of the original algorithm, which we presented earlier in literature. In the original version, an image is block coded according to the level of visual activity of individual blocks, following a novel edge-oriented classification stage. Each block is then represented by a set of parameters associated with the pattern appearing inside the block. The use of these parameters at the receiver reduces the cost of reconstruction significantly. In the present study, we extend and improve the performance of the existing technique by exploiting the expected spatial redundancy across the neighboring blocks. Satisfactory coded images at competitive bit rate with other block-based coding techniques have been obtained.

  19. Target Detection Using Fractal Geometry

    NASA Technical Reports Server (NTRS)

    Fuller, J. Joseph

    1991-01-01

    The concepts and theory of fractal geometry were applied to the problem of segmenting a 256 x 256 pixel image so that manmade objects could be extracted from natural backgrounds. The two most important measurements necessary to extract these manmade objects were fractal dimension and lacunarity. Provision was made to pass the manmade portion to a lookup table for subsequent identification. A computer program was written to construct cloud backgrounds of fractal dimensions which were allowed to vary between 2.2 and 2.8. Images of three model space targets were combined with these backgrounds to provide a data set for testing the validity of the approach. Once the data set was constructed, computer programs were written to extract estimates of the fractal dimension and lacunarity on 4 x 4 pixel subsets of the image. It was shown that for clouds of fractal dimension 2.7 or less, appropriate thresholding on fractal dimension and lacunarity yielded a 64 x 64 edge-detected image with all or most of the cloud background removed. These images were enhanced by an erosion and dilation to provide the final image passed to the lookup table. While the ultimate goal was to pass the final image to a neural network for identification, this work shows the applicability of fractal geometry to the problems of image segmentation, edge detection and separating a target of interest from a natural background.
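    The lacunarity measurement above can be sketched with the gliding-box estimator, one plausible reading of the measurement (the report does not give its algorithm): Λ(r) = ⟨M²⟩ / ⟨M⟩², where M is the occupied-pixel count in an r×r window slid over every position of the image. Two patterns with identical density but different gap structure then separate, which is what lets lacunarity distinguish targets from clouds of similar fractal dimension.

```python
def lacunarity(image, r):
    """Gliding-box lacunarity of a binary image (list of 0/1 rows) at box size r."""
    rows, cols = len(image), len(image[0])
    masses = [sum(image[i + di][j + dj]
                  for di in range(r) for dj in range(r))
              for i in range(rows - r + 1)
              for j in range(cols - r + 1)]
    mean = sum(masses) / len(masses)
    mean_sq = sum(m * m for m in masses) / len(masses)
    return mean_sq / (mean * mean)

# Same density (50%), different texture: the clumped pattern is "gappier",
# so its lacunarity is higher than the uniform checkerboard's.
uniform = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
clumped = [[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0]]
print(lacunarity(uniform, 2), lacunarity(clumped, 2))  # 1.0 vs ~1.67
```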

  20. The transience of virtual fractals.

    PubMed

    Taylor, R P

    2012-01-01

    Artists have a long and fruitful tradition of exploiting electronic media to convert static images into dynamic images that evolve with time. Fractal patterns serve as an example: computers allow the observer to zoom in on virtual images and so experience the endless repetition of patterns in a manner that cannot be matched using static images. This year's featured cover artist, Susan Lowedermilk, instead plans to employ persistence of human vision to bring virtual fractals to life. This will be done by incorporating her prints of fractal patterns into zoetropes and phenakistoscopes.

  1. Study of fractal dimension in chest images using normal and interstitial lung disease cases

    NASA Astrophysics Data System (ADS)

    Tucker, Douglas M.; Correa, Jose L.; Souto, Miguel; Malagari, Katerina S.

    1993-09-01

    A quantitative computerized method which provides accurate discrimination between chest radiographs with positive findings of interstitial disease patterns and normal chest radiographs may increase the efficacy of radiologic screening of the chest and the utility of digital radiographic systems. This report is a comparison of fractal dimension measured in normal chest radiographs and in radiographs with abnormal lungs having reticular, nodular, reticulonodular and linear patterns of interstitial disease. Six regions of interest (ROI's) from each of 33 normal chest radiographs and 33 radiographs with positive findings of interstitial disease were studied. Results indicate that there is a statistically significant difference between the distribution of the fractal dimension in normal radiographs and radiographs where disease is present.

  2. Quantum image Gray-code and bit-plane scrambling

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Sun, Ya-Juan; Fan, Ping

    2015-05-01

    With the rapid development of multimedia technology, image scrambling for information hiding and digital watermarking is crucial. In the field of quantum image processing, however, studies on image scrambling are still few. Existing quantum image scrambling schemes are basically position-space strategies; a quantum image scrambling scheme focused on the color space does not yet exist. Therefore, in this paper, the quantum image Gray-code and bit-plane (GB) scrambling scheme, an entirely color-space scrambling strategy, is proposed. Building on the quantum image representation NEQR, several different quantum scrambling methods using GB knowledge are designed. Not only can they change the histogram distribution of the image dramatically (some of the designed schemes can almost flatten the image histogram, enhancing the anti-attack ability of the digital image), but their cost and complexity are also very low. Simulation results likewise show good performance and indicate the particular advantages of GB scrambling in the quantum image processing field.
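    The classical Gray-code ingredient of the scheme can be illustrated without the quantum NEQR circuits (this sketch is an assumption about the color-space transform, not the paper's circuits): re-expressing each 8-bit gray value v as its Gray code v ^ (v >> 1) is a reversible permutation of the color space that redistributes information across bit planes.

```python
def to_gray(v):
    """Binary-reflected Gray code of a non-negative integer."""
    return v ^ (v >> 1)

def from_gray(g):
    """Inverse transform: recover the original value from its Gray code."""
    v = 0
    while g:
        v ^= g
        g >>= 1
    return v

# Reversible over the whole 8-bit color space:
assert all(from_gray(to_gray(v)) == v for v in range(256))

# Adjacent intensities differ in exactly one bit after Gray coding, so
# smooth image regions no longer produce near-identical high bit planes:
print(bin(to_gray(127)), bin(to_gray(128)))  # 0b1000000 vs 0b11000000
```

    The bit-plane half of the scheme then permutes or swaps the eight planes of the Gray-coded image, which is what drives the histogram flattening reported above.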

  3. Code-modulated interferometric imaging system using phased arrays

    NASA Astrophysics Data System (ADS)

    Chauhan, Vikas; Greene, Kevin; Floyd, Brian

    2016-05-01

    Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power-combine incoming signals prior to digitization, orthogonal code modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
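    The code property the abstract relies on ("the product of two orthogonal codes is a third unique and orthogonal code") can be demonstrated with Walsh-Hadamard codes, used here as a concrete stand-in for whatever code family the authors chose: in a natural-order Sylvester-Hadamard matrix, the elementwise product of rows i and j is row i XOR j.

```python
def walsh_codes(order):
    """Rows of the 2**order Sylvester-Hadamard matrix as +/-1 lists."""
    h = [[1]]
    for _ in range(order):
        # H_{2n} = [[H, H], [H, -H]]
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

codes = walsh_codes(3)  # eight length-8 codes

# Orthogonality: distinct codes have zero dot product.
assert sum(a * b for a, b in zip(codes[2], codes[5])) == 0

# Closure: code[i] * code[j] (elementwise) equals code[i ^ j], so the
# product of two modulating codes is itself a decodable orthogonal code.
prod = [a * b for a, b in zip(codes[3], codes[6])]
assert prod == codes[3 ^ 6]
print("closure holds for codes 3 and 6 -> code", 3 ^ 6)
```

    It is this closure that lets the squared, power-combined signal be demultiplexed directly into visibilities: each pairwise correlation arrives tagged with a unique third code.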

  4. Exploring Fractals.

    ERIC Educational Resources Information Center

    Dewdney, A. K.

    1991-01-01

    Explores the subject of fractal geometry focusing on the occurrence of fractal-like shapes in the natural world. Topics include iterated functions, chaos theory, the Lorenz attractor, logistic maps, the Mandelbrot set, and mini-Mandelbrot sets. Provides appropriate computer algorithms, as well as further sources of information. (JJK)

  5. Aggregating local image descriptors into compact codes.

    PubMed

    Jégou, Hervé; Perronnin, Florent; Douze, Matthijs; Sánchez, Jorge; Pérez, Patrick; Schmid, Cordelia

    2012-09-01

    This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. Searching a 100 million image data set takes about 250 ms on one processor core.

  6. Exact mapping from singular value spectrum of a class of fractal images to entanglement spectrum of one-dimensional free fermions

    SciTech Connect

    Matsueda, Hiroaki; Lee, Ching Hua

    2015-03-10

    We examine singular value spectrum of a class of two-dimensional fractal images. We find that the spectra can be mapped onto entanglement spectra of free fermions in one dimension. This exact mapping tells us that the singular value decomposition is a way of detecting a holographic relation between classical and quantum systems.

  7. Analysis of simplified-optics coma effects on spectral image inversion for a coded aperture spectral imager

    NASA Astrophysics Data System (ADS)

    Liu, Yangyang; Lv, Qunbo; Li, Weiyan; Xiangli, Bin

    2015-09-01

    Push-broom coded aperture spectral imaging (PCASI), a spectral imaging technology developed in recent years, has the advantages of high throughput, high SNR, and high stability. This coded aperture spectral imaging approach utilizes fixed code templates and a push-broom mode, which enables high-precision reconstruction of spatial and spectral information. However, during optical lens design, manufacturing, and adjustment, minor coma errors inevitably exist, and even minor coma errors can reduce image quality. In this paper, we simulated the influence of the optical system's coma error on the quality of the reconstructed image, analyzed the variation of the coded aperture under different coma conditions, and then derived an accurate curve relating image quality to optical coma for a 255×255 code template, which provides an important reference for the design and development of push-broom coded aperture spectrometers.

  8. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  9. Coded aperture imaging for fluorescent x-rays

    SciTech Connect

    Haboub, A.; MacDowell, A. A.; Marchesini, S.; Parkinson, D. Y.

    2014-06-15

    We employ a coded aperture pattern in front of a pixelated charge-coupled device detector to image fluorescent x-rays (6–25 keV) from samples irradiated with synchrotron radiation. Coded apertures encode the angular direction of x-rays and, given a known source plane, allow for a large numerical aperture x-ray imaging system. The algorithm to design and fabricate the free-standing No-Two-Holes-Touching aperture pattern was developed. The algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of a ray tracing technique and confirmed by experiments on standard samples.

  10. Subband Image Coding with Jointly Optimized Quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1995-01-01

    An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.
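    A toy two-stage residual quantizer, sketched to illustrate the "multistage" idea above (illustrative only; the codebooks and dimensions here are hypothetical, not the authors' design): each stage quantizes the error left by the previous one, so small per-stage codebooks multiply into fine effective resolution, which is what gives the complexity-performance control the abstract mentions.

```python
def nearest(codebook, x):
    """Codeword closest to x under absolute error."""
    return min(codebook, key=lambda c: abs(x - c))

def multistage_quantize(stages, x):
    """Return the per-stage codewords and the final reconstruction."""
    chosen, recon = [], 0.0
    for codebook in stages:
        c = nearest(codebook, x - recon)  # quantize the current residual
        chosen.append(c)
        recon += c
    return chosen, recon

coarse = [-2.0, 0.0, 2.0]           # stage 1: coarse codebook
fine = [-0.75, -0.25, 0.25, 0.75]   # stage 2: refines stage 1's residual
chosen, recon = multistage_quantize([coarse, fine], 1.3)
print(chosen, recon)  # [2.0, -0.75] -> reconstruction 1.25
```

    With 3 and 4 codewords per stage, the two stages span 12 effective reconstruction levels while each stage's search stays tiny; the entropy coder can then model the stage indices jointly, as the subband coder above does across stages.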

  11. Imaging The Genetic Code of a Virus

    NASA Astrophysics Data System (ADS)

    Graham, Jenna; Link, Justin

    2013-03-01

    Atomic Force Microscopy (AFM) has allowed scientists to explore physical characteristics of nano-scale materials. However, the challenges that come with such an investigation are rarely expressed. In this research project a method was developed to image the well-studied DNA of the virus lambda phage. Through testing and integrating several sample preparations described in literature, a quality image of lambda phage DNA can be obtained. In our experiment, we developed a technique using the Veeco Autoprobe CP AFM and mica substrate with an appropriate absorption buffer of HEPES and NiCl2. This presentation will focus on the development of a procedure to image lambda phage DNA at Xavier University. The John A. Hauck Foundation and Xavier University

  12. An investigation of dehazing effects on image and video coding.

    PubMed

    Gibson, Kristofor B; Võ, Dung T; Nguyen, Truong Q

    2012-02-01

    This paper investigates the effects of dehazing on image and video coding for surveillance systems. The goal is to achieve good dehazed images and videos at the receiver while sustaining low bitrates (using compression) in the transmission pipeline. First, this paper proposes a novel method for single-image dehazing, which is used for the investigation. It operates at a faster speed than current methods and can avoid halo effects by using the median operation. We then consider the dehazing effects in compression by investigating the coding artifacts and motion estimation in cases where a dehazing method is applied before or after compression. We conclude that better dehazing performance, with fewer artifacts and better coding efficiency, is achieved when dehazing is applied before compression. Simulations with Joint Photographic Experts Group (JPEG) images, in addition to subjective and objective tests with H.264 compressed sequences, validate our conclusion. PMID:21896391

  13. Fractal analysis for assessing tumour grade in microscopic images of breast tissue

    NASA Astrophysics Data System (ADS)

    Tambasco, Mauro; Costello, Meghan; Newcomb, Chris; Magliocco, Anthony M.

    2007-03-01

    In 2006, breast cancer is expected to continue as the leading form of cancer diagnosed in women, and the second leading cause of cancer mortality in this group. A method that has proven useful for guiding the choice of treatment strategy is the assessment of histological tumor grade. The grading is based upon the mitosis count, nuclear pleomorphism, and tubular formation, and is known to be subject to inter-observer variability. Since cancer grade is one of the most significant predictors of prognosis, errors in grading can affect patient management and outcome. Hence, there is a need to develop a breast cancer-grading tool that is minimally operator dependent to reduce variability associated with the current grading system, and thereby reduce uncertainty that may impact patient outcome. In this work, we explored the potential of a computer-based approach using fractal analysis as a quantitative measure of cancer grade for breast specimens. More specifically, we developed and optimized computational tools to compute the fractal dimension of low- versus high-grade breast sections and found them to be significantly different, 1.3+/-0.10 versus 1.49+/-0.10, respectively (Kolmogorov-Smirnov test, p<0.001). These results indicate that fractal dimension (a measure of morphologic complexity) may be a useful tool for demarcating low- versus high-grade cancer specimens, and has potential as an objective measure of breast cancer grade. Such prognostic value could provide more sensitive and specific information that would reduce inter-observer variability by aiding the pathologist in grading cancers.

  14. Quantum image coding with a reference-frame-independent scheme

    NASA Astrophysics Data System (ADS)

    Chapeau-Blondeau, François; Belin, Etienne

    2016-07-01

    For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.

  15. Hybrid Compton camera/coded aperture imaging system

    DOEpatents

    Mihailescu, Lucian; Vetter, Kai M.

    2012-04-10

    A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.

  16. A model of PSF estimation for coded mask infrared imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Ao; Jin, Jie; Wang, Qing; Yang, Jingyu; Sun, Yi

    2014-11-01

    The point spread function (PSF) of an imaging system with a coded mask is generally acquired by practical measurement with a calibration light source. Because the thermal radiation of coded masks is relatively severe compared with visible-light imaging systems, burying the modulation effects of the mask pattern, it is difficult to estimate and evaluate the performance of the mask pattern from measured results. To tackle this problem, a model for infrared imaging systems with masks is presented in this paper. The model is composed of two functional components: the coded mask imaging with ideal focused lenses and the imperfection imaging with practical lenses. Ignoring the thermal radiation, the system's PSF can then be represented by a convolution of the diffraction pattern of the mask with the PSF of the practical lenses. To evaluate the performance of different mask patterns, a set of criteria is designed according to different imaging and recovery methods. Furthermore, imaging results with inclined plane waves are analyzed to obtain the variation of the PSF within the field of view. The influence of mask cell size is also analyzed to control the diffraction pattern. Numerical results show that mask patterns for direct imaging systems should have more random structure, while more periodic structure is needed in systems with image reconstruction. By adjusting the combination of random and periodic arrangement, the desired diffraction pattern can be achieved.

  17. Code aperture optimization for spectrally agile compressive imaging.

    PubMed

    Arguello, Henry; Arce, Gonzalo R

    2011-11-01

    Coded aperture snapshot spectral imaging (CASSI) provides a mechanism for capturing a 3D spectral cube with a single shot 2D measurement. In many applications selective spectral imaging is sought since relevant information often lies within a subset of spectral bands. Capturing and reconstructing all the spectral bands in the observed image cube, to then throw away a large portion of this data, is inefficient. To this end, this paper extends the concept of CASSI to a system admitting multiple shot measurements, which leads not only to higher quality of reconstruction but also to spectrally selective imaging when the sequence of code aperture patterns is optimized. The aperture code optimization problem is shown to be analogous to the optimization of a constrained multichannel filter bank. The optimal code apertures allow the decomposition of the CASSI measurement into several subsets, each having information from only a few selected spectral bands. The rich theory of compressive sensing is used to effectively reconstruct the spectral bands of interest from the measurements. A number of simulations are developed to illustrate the spectral imaging characteristics attained by optimal aperture codes.

  18. Wavelet image coding with parametric thresholding: application to JPEG2000

    NASA Astrophysics Data System (ADS)

    Zaid, Azza O.; Olivier, Christian; Marmoiton, Francois

    2003-05-01

    With the increasing use of multimedia technologies, image compression requires higher performance as well as new features. To address this need in the specific area of image coding, the latest ISO/IEC image compression standard, JPEG2000, was developed. In Part II of the standard, the Wavelet Trellis Coded Quantization (WTCQ) algorithm was adopted. This quantization design has been shown to provide subjective image quality superior to other existing quantization techniques. In this paper we aim to improve the rate-distortion performance of WTCQ by incorporating a thresholding process into the JPEG2000 coding chain. The threshold decisions are derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD). The threshold value depends on the parametric model estimation of the subband wavelet coefficient distribution. Our algorithm approaches the lowest possible memory usage by using a line-based wavelet transform and a scan-based bit allocation technique. In this work, we investigate an efficient way to apply TCQ to wavelet image coding with regard to both computational complexity and compression performance. Experimental results show that the proposed algorithm performs competitively, in terms of quality, with the best coding algorithms reported in the literature.

  19. Split field coding: low complexity error-resilient entropy coding for image compression

    NASA Astrophysics Data System (ADS)

    Meany, James J.; Martens, Christopher J.

    2008-08-01

    In this paper, we describe split field coding, an approach for low complexity, error-resilient entropy coding which splits code words into two fields: a variable length prefix and a fixed length suffix. Once a prefix has been decoded correctly, then the associated fixed length suffix is error-resilient, with bit errors causing no loss of code word synchronization and only a limited amount of distortion on the decoded value. When the fixed length suffixes are segregated to a separate block, this approach becomes suitable for use with a variety of methods which provide varying protection to different portions of the bitstream, such as unequal error protection or progressive ordering schemes. Split field coding is demonstrated in the context of a wavelet-based image codec, with examples of various error resilience properties, and comparisons to the rate-distortion and computational performance of JPEG 2000.
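    The prefix/suffix split described above can be sketched with a Rice-style code, used here as an assumed concrete instance of the idea (the paper's actual code tables may differ): the variable-length prefix is the unary code of v >> k and the fixed-length suffix is the k low bits. A bit error confined to a suffix perturbs only that one decoded value and never desynchronizes code-word boundaries.

```python
def split_field_encode(v, k):
    """Split v into a variable-length unary prefix and k-bit fixed suffix."""
    prefix = "1" * (v >> k) + "0"                          # variable length
    suffix = format(v & ((1 << k) - 1), "0{}b".format(k))  # fixed length
    return prefix, suffix

def split_field_decode(prefix, suffix, k):
    """Recombine the two fields into the original value."""
    return (prefix.index("0") << k) | int(suffix, 2)

p, s = split_field_encode(45, k=3)
print(p, s)  # 111110 101
assert split_field_decode(p, s, k=3) == 45

# Flip one suffix bit: the decoded value shifts by at most 2**k - 1, and
# the prefix, hence word synchronization, is untouched.
corrupted = s[:1] + ("0" if s[1] == "1" else "1") + s[2:]
print(split_field_decode(p, corrupted, k=3))  # 47: bounded distortion
```

    Segregating all suffixes into their own block, as the paper proposes, then lets an unequal-error-protection scheme spend its parity budget on the fragile prefixes alone.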

  20. IMPROVEMENTS IN CODED APERTURE THERMAL NEUTRON IMAGING.

    SciTech Connect

    VANIER,P.E.

    2003-08-03

    A new thermal neutron imaging system has been constructed, based on a 20-cm x 17-cm He-3 position-sensitive detector with spatial resolution better than 1 mm. New compact custom-designed position-decoding electronics are employed, as well as high-precision cadmium masks with Modified Uniformly Redundant Array patterns. Fast Fourier Transform algorithms are incorporated into the deconvolution software to provide rapid conversion of shadowgrams into real images. The system demonstrates the principles for locating sources of thermal neutrons by a stand-off technique, as well as visualizing the shapes of nearby sources. The data acquisition time could potentially be reduced by two orders of magnitude by building larger detectors.
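    The FFT-based deconvolution step can be sketched in one dimension. This toy uses an arbitrary binary mask whose spectrum happens to be zero-free rather than a real MURA pattern, and assumes ideal noise-free circular convolution:

```python
import numpy as np

def shadowgram(scene, mask):
    # The detector records the scene circularly convolved with the mask.
    return np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(mask)))

def decode(shadow, mask):
    # Fast deconvolution via FFT division; stable because URA/MURA-type
    # patterns are designed so that the mask spectrum never vanishes.
    return np.real(np.fft.ifft(np.fft.fft(shadow) / np.fft.fft(mask)))
```

    For a point source, the decoded image recovers a delta at the source position, which is the stand-off localization principle described above.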

  1. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.

  2. Barker-coded excitation in ophthalmological ultrasound imaging

    PubMed Central

    Zhou, Sheng; Wang, Xiao-Chun; Yang, Jun; Ji, Jian-Jun; Wang, Yan-Qun

    2014-01-01

    High-frequency ultrasound is an attractive means to obtain fine-resolution images of biological tissues for ophthalmologic imaging. To resolve the tradeoff between axial resolution and detection depth inherent in conventional single-pulse excitation, this study develops a new method which uses 13-bit Barker-coded excitation and a mismatched filter for high-frequency ophthalmologic imaging. A novel imaging platform has been designed after trying out various encoding methods. Simulation and experimental results show that the mismatched filter achieves a much higher output main-to-side-lobe ratio, 9.7 times that of the matched filter. The coded excitation method has significant advantages over the single-pulse excitation system in terms of a lower MI, a higher resolution, and a deeper detection depth, which improve the quality of ophthalmic tissue imaging. Therefore, this method has great value in scientific applications and the medical market. PMID:25356093
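    The pulse-compression idea can be illustrated with the 13-bit Barker code and a plain matched filter (the paper's mismatched filter, which suppresses side lobes further, is not reproduced here):

```python
import numpy as np

# 13-bit Barker code: its aperiodic autocorrelation peaks at 13 while every
# side lobe has magnitude at most 1, giving a 13:1 main-to-side-lobe ratio.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

def matched_filter_output(code):
    # Matched filtering of the code against itself = its autocorrelation.
    return np.correlate(code, code, mode="full")

acf = matched_filter_output(barker13)
main_lobe = acf.max()
side_lobe = np.abs(np.delete(acf, np.argmax(acf))).max()
```

    The long coded pulse carries more energy (deeper penetration) while compression restores the axial resolution of a short pulse, which is the tradeoff resolution the abstract describes.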

  3. Imaging plasmas with coded aperture methods instead of conventional optics

    NASA Astrophysics Data System (ADS)

    Wongwaitayakornkul, Pakorn; Bellan, Paul

    2012-10-01

    The spheromak and astrophysical jet plasma at Caltech emits localized EUV and X-rays associated with magnetic reconnection. However, conventional optics does not work for EUV or X-rays due to their high energy. Coded aperture imaging is an alternative method that will work at these energies. The technique has been used in spacecraft for high-energy radiation and also in nuclear medicine. Coded aperture imaging works by having patterns of materials opaque to various wavelengths block and unblock radiation in a known pattern. The original image can be determined from a numerical procedure that inverts information from the coded shadow on the detector plane. A one-dimensional coded mask has been designed and constructed for visualization of the evolution of a 1-d cross-section image of the Caltech plasmas. The mask is constructed from Hadamard matrices. Arrays of photo-detectors will be assembled to obtain an image of the plasmas in the visible light range. The experiment will ultimately be re-configured to image X-ray and EUV radiation.
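    A minimal sketch of Hadamard-mask multiplexing and inversion for a 1-d profile, assuming an ideal noise-free detector and +/-1 masks realized in practice as complementary open/closed pairs:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 8
H = hadamard(n)
x = np.linspace(0.0, 1.0, n)    # unknown 1-d brightness profile
y = H @ x                       # one coded (multiplexed) reading per mask row
x_rec = (H.T @ y) / n           # exact inverse, since H @ H.T == n * I
```

    Each detector reading mixes light from many positions, so the multiplexed measurement gathers more photons per reading than a single pinhole while remaining exactly invertible.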

  4. Coded access optical sensor (CAOS) imager and applications

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.

    2016-04-01

    Starting in 2001, we proposed and extensively demonstrated (using a DMD: Digital Micromirror Device) an agile pixel Spatial Light Modulator (SLM)-based optical imager based on single pixel photo-detection (also called a single pixel camera) that is suited for operations with both coherent and incoherent light across broad spectral bands. This imager design operates with the agile pixels programmed in a limited-SNR, time-multiplexed staring mode, where acquisition of image irradiance (i.e., intensity) data is done one agile pixel at a time across the SLM plane where the incident image radiation is present. Motivated by modern-day advances in RF wireless, wired optical communications, and electronic signal processing technologies, and building on our prior SLM-based optical imager design, we describe, using a surprisingly simple approach, a new imager design called the Coded Access Optical Sensor (CAOS), which has the ability to alleviate some of the key fundamental limitations of prior imagers. The agile pixel in the CAOS imager can operate in different time-frequency coding modes like Frequency Division Multiple Access (FDMA), Code-Division Multiple Access (CDMA), and Time Division Multiple Access (TDMA). Data from a first CAOS camera demonstration is described along with novel designs of CAOS-based optical instruments for various applications.

  5. Lensless coded aperture imaging with separable doubly Toeplitz masks

    NASA Astrophysics Data System (ADS)

    DeWeert, Michael J.; Farm, Brian P.

    2014-05-01

    In certain imaging applications, conventional lens technology is constrained by the lack of materials which can effectively focus the radiation within reasonable weight and volume. One solution is to use coded apertures: opaque plates perforated with multiple pinhole-like openings. If the openings are arranged in an appropriate pattern, the images can be decoded, and a clear image computed. Recently, computational imaging and the search for means of producing programmable software-defined optics have revived interest in coded apertures. The former state-of-the-art masks, MURAs (Modified Uniformly Redundant Arrays), are effective for compact objects against uniform backgrounds, but have substantial drawbacks for extended scenes: 1) MURAs present an inherently ill-posed inversion problem that is unmanageable for large images, and 2) they are susceptible to diffraction: a diffracted MURA is no longer a MURA. This paper presents a new class of coded apertures, Separable Doubly-Toeplitz masks, which are efficiently decodable even for very large images, orders of magnitude faster than MURAs, and which remain decodable when diffracted. We implemented the masks using programmable spatial light modulators. Imaging experiments confirmed the effectiveness of Separable Doubly-Toeplitz masks: images collected in natural light of extended outdoor scenes are rendered clearly.
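    Why separability makes decoding cheap can be seen in a toy model. Assuming small invertible Toeplitz blur matrices A and B for the two image axes (illustrative stand-ins, not the actual masks of the paper), the 2-D inversion factors into two 1-D solves:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = np.eye(n) + np.eye(n, k=-1)   # Toeplitz code acting along image rows
B = np.eye(n) + np.eye(n, k=-2)   # Toeplitz code acting along image columns
X = rng.random((n, n))            # unknown scene
Y = A @ X @ B.T                   # separable doubly-Toeplitz measurement

# Decode with two 1-D solves instead of inverting one (n*n) x (n*n) system.
X_rec = np.linalg.solve(A, np.linalg.solve(B, Y.T).T)
```

    For an N x N image, the separable form costs O(N^3) rather than the O(N^6) of a dense joint inversion, which is the source of the orders-of-magnitude speedup claimed above.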

  6. Lossless coding using predictors and VLCs optimized for each image

    NASA Astrophysics Data System (ADS)

    Matsuda, Ichiro; Shirai, Noriyuki; Itoh, Susumu

    2003-06-01

    This paper proposes an efficient lossless coding scheme for still images. The scheme utilizes an adaptive prediction technique where a set of linear predictors are designed for a given image and an appropriate predictor is selected from the set block-by-block. The resulting prediction errors are encoded using context-adaptive variable-length codes (VLCs). Context modeling, or adaptive selection of VLCs, is carried out pel-by-pel and the VLC assigned to each context is designed on a probability distribution model of the prediction errors. In order to improve coding efficiency, a generalized Gaussian function is used as the model for each context. Moreover, not only the predictors but also the parameters of the probability distribution models are iteratively optimized for each image so that the coding rate of the prediction errors is minimized. Experimental results show that the proposed coding scheme attains comparable coding performance to the state-of-the-art TMW scheme with much lower complexity in the decoding process.

  7. Image statistics decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Pitt, G. H., III; Swanson, L.; Yuen, J. H.

    1987-01-01

    It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so its feasibility and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.

  8. Lava flows are fractals

    NASA Technical Reports Server (NTRS)

    Bruno, B. C.; Taylor, G. J.; Rowland, S. K.; Lucey, P. G.; Self, S.

    1992-01-01

    Results are presented of a preliminary investigation of the fractal nature of the plan-view shapes of lava flows in Hawaii (based on field measurements and aerial photographs), as well as in Idaho and the Galapagos Islands (using aerial photographs only). The shapes of the lava flow margins are found to be fractals: lava flow shape is scale-invariant. This observation suggests that nonlinear forces are operating in them because nonlinear systems frequently produce fractals. A'a and pahoehoe flows can be distinguished by their fractal dimensions (D). The majority of the a'a flows measured have D between 1.05 and 1.09, whereas the pahoehoe flows generally have higher D (1.14-1.23). The analysis is extended to other planetary bodies by measuring flows from orbital images of Venus, Mars, and the moon. All are fractal and have D consistent with the range of terrestrial a'a and pahoehoe values.
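    The divider (ruler) method commonly used to measure D from digitized flow margins can be sketched as follows; the `divider_dimension` helper is our own illustration, not the authors' procedure, and the sanity check uses a straight line, whose dimension should come out near 1:

```python
import numpy as np

def divider_count(points, r):
    # Walk the digitized margin with a fixed ruler of length r, counting steps.
    count, anchor = 0, points[0]
    for p in points[1:]:
        if np.linalg.norm(p - anchor) >= r:
            count += 1
            anchor = p
    return count

def divider_dimension(points, rulers):
    # D is the negative slope of log N(r) versus log r.
    counts = [divider_count(points, r) for r in rulers]
    return -np.polyfit(np.log(rulers), np.log(counts), 1)[0]

t = np.linspace(0.0, 1.0, 1001)
line = np.column_stack([t, np.zeros_like(t)])
D = divider_dimension(line, [0.01, 0.02, 0.04, 0.08])
```

    A genuinely fractal margin would yield a slope noticeably above 1, in the 1.05-1.23 range reported above for a'a and pahoehoe flows.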

  9. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos in the characteristics of their spectral bands and the shape domain of panchromatic imagery. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector for different spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band together with the immediate previous band when we apply the HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are validated on three types of HS datasets with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102

  10. Medical image registration using sparse coding of image patches.

    PubMed

    Afzali, Maryam; Ghaffari, Aboozar; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid

    2016-06-01

    Image registration is a basic task in medical image processing applications like group analysis and atlas construction. Similarity measure is a critical ingredient of image registration. Intensity distortion of medical images is not considered in most previous similarity measures. Therefore, in the presence of bias field distortions, they do not generate an acceptable registration. In this paper, we propose a sparse based similarity measure for mono-modal images that considers non-stationary intensity and spatially-varying distortions. The main idea behind this measure is that the aligned image is constructed by an analysis dictionary trained using the image patches. For this purpose, we use "Analysis K-SVD" to train the dictionary and find the sparse coefficients. We utilize image patches to construct the analysis dictionary and then we employ the proposed sparse similarity measure to find a non-rigid transformation using free form deformation (FFD). Experimental results show that the proposed approach is able to robustly register 2D and 3D images in both simulated and real cases. The proposed method outperforms other state-of-the-art similarity measures and decreases the transformation error compared to the previous methods. Even in the presence of bias field distortion, the proposed method aligns images without any preprocessing. PMID:27085311

  12. JJ1017 image examination order codes: standardized codes supplementary to DICOM for imaging modality, region, and direction

    NASA Astrophysics Data System (ADS)

    Kimura, Michio; Kuranishi, Makoto; Sukenobu, Yoshiharu; Watanabe, Hiroki; Nakajima, Takashi; Morimura, Shinya; Kabata, Shun

    2002-05-01

    The DICOM standard includes non-image information such as image study ordering data and performed procedure data, which are used for sharing information between HIS/RIS/PACS/modalities and are essential for IHE. In order to bring such parts of the DICOM standard into force in Japan, a joint committee of JIRA and JAHIS (vendor associations) established the JJ1017 management guideline. It specifies, for example, which items are legally required in Japan while remaining optional in the DICOM standard. Then, what should be used for the examination type, region, and direction codes? Our investigation revealed that the DICOM tables do not include items that are sufficiently detailed for use in Japan. This is because radiology departments (radiologists) in the US exercise greater discretion in image examination than in Japan, and the contents of orders from requesting physicians do not include the extra details used in Japan. Therefore, we have generated JJ1017 codes for these three code types based on the JJ1017 guidelines. The stem part of the JJ1017 code partially employs the DICOM codes in order to remain in line with the DICOM standard. JJ1017 codes are to be included not only in IHE-J specifications but also in Ministry recommendations for health data exchange.

  13. Implementation and operation of three fractal measurement algorithms for analysis of remote-sensing data

    NASA Technical Reports Server (NTRS)

    Jaggi, S.; Quattrochi, Dale A.; Lam, Nina S.-N.

    1993-01-01

    Fractal geometry is increasingly becoming a useful tool for modeling natural phenomena. As an alternative to Euclidean concepts, fractals allow for a more accurate representation of the nature of complexity in natural boundaries and surfaces. The purpose of this paper is to introduce and implement three algorithms in C code for deriving fractal measurements from remotely sensed data. These three methods are: the line-divider method, the variogram method, and the triangular prism method. Remote-sensing data acquired by NASA's Calibrated Airborne Multispectral Scanner (CAMS) are used to compute the fractal dimension using each of the three methods. These data were obtained at a 30 m pixel spatial resolution over a portion of western Puerto Rico in January 1990. A description of the three methods, their implementation in a PC-compatible environment, and some results of applying these algorithms to remotely sensed image data are presented.
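    Of the three algorithms, the variogram method is the most compact to sketch. For a fractal surface the variogram scales as gamma(h) ~ h^(2H) with H = 3 - D, so D = 3 - slope/2 on a log-log plot. The following is an illustrative Python reimplementation rather than the authors' C code, sanity-checked on a smooth ramp surface (a plane, so the expected D is 2):

```python
import numpy as np

def variogram_dimension(z, lags):
    # gamma(h): half the mean squared difference of surface values at
    # horizontal separation h pixels.
    gammas = [0.5 * np.mean((z[:, h:] - z[:, :-h]) ** 2) for h in lags]
    # gamma(h) ~ h^(2H), H = 3 - D  =>  D = 3 - slope/2 on log-log axes.
    slope = np.polyfit(np.log(lags), np.log(gammas), 1)[0]
    return 3.0 - slope / 2.0

x = np.linspace(0.0, 1.0, 128)
z = np.tile(x, (128, 1))          # smooth ramp surface: a plane
D = variogram_dimension(z, [1, 2, 4, 8])
```

    On real imagery such as the CAMS data, z would be the pixel radiance surface, and rougher terrain would push D above 2 toward 3.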

  14. Low bit rate coding of Earth science images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1993-01-01

    In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.

  15. Computer-Aided Image Analysis and Fractal Synthesis in the Quantitative Evaluation of Tumor Aggressiveness in Prostate Carcinomas

    PubMed Central

    Waliszewski, Przemyslaw

    2016-01-01

    The subjective evaluation of tumor aggressiveness is a cornerstone of the contemporary tumor pathology. A large intra- and interobserver variability is a known limiting factor of this approach. This fundamental weakness influences the statistical deterministic models of progression risk assessment. It is unlikely that the recent modification of tumor grading according to Gleason criteria for prostate carcinoma will cause a qualitative change and improve significantly the accuracy. The Gleason system does not allow the identification of low aggressive carcinomas by some precise criteria. The ontological dichotomy implies the application of an objective, quantitative approach for the evaluation of tumor aggressiveness as an alternative. That novel approach must be developed and validated in a manner that is independent of the results of any subjective evaluation. For example, computer-aided image analysis can provide information about geometry of the spatial distribution of cancer cell nuclei. A series of the interrelated complexity measures characterizes unequivocally the complex tumor images. Using those measures, carcinomas can be classified into the classes of equivalence and compared with each other. Furthermore, those measures define the quantitative criteria for the identification of low- and high-aggressive prostate carcinomas, the information that the subjective approach is not able to provide. The co-application of those complexity measures in cluster analysis leads to the conclusion that either the subjective or objective classification of tumor aggressiveness for prostate carcinomas should comprise at most three grades (or classes). Finally, this set of the global fractal dimensions enables a look into dynamics of the underlying cellular system of interacting cells and the reconstruction of the temporal-spatial attractor based on Takens' embedding theorem. Both computer-aided image analysis and the subsequent fractal synthesis could be performed

  16. Computer-Aided Image Analysis and Fractal Synthesis in the Quantitative Evaluation of Tumor Aggressiveness in Prostate Carcinomas.

    PubMed

    Waliszewski, Przemyslaw

    2016-01-01

    The subjective evaluation of tumor aggressiveness is a cornerstone of the contemporary tumor pathology. A large intra- and interobserver variability is a known limiting factor of this approach. This fundamental weakness influences the statistical deterministic models of progression risk assessment. It is unlikely that the recent modification of tumor grading according to Gleason criteria for prostate carcinoma will cause a qualitative change and improve significantly the accuracy. The Gleason system does not allow the identification of low aggressive carcinomas by some precise criteria. The ontological dichotomy implies the application of an objective, quantitative approach for the evaluation of tumor aggressiveness as an alternative. That novel approach must be developed and validated in a manner that is independent of the results of any subjective evaluation. For example, computer-aided image analysis can provide information about geometry of the spatial distribution of cancer cell nuclei. A series of the interrelated complexity measures characterizes unequivocally the complex tumor images. Using those measures, carcinomas can be classified into the classes of equivalence and compared with each other. Furthermore, those measures define the quantitative criteria for the identification of low- and high-aggressive prostate carcinomas, the information that the subjective approach is not able to provide. The co-application of those complexity measures in cluster analysis leads to the conclusion that either the subjective or objective classification of tumor aggressiveness for prostate carcinomas should comprise at most three grades (or classes). Finally, this set of the global fractal dimensions enables a look into dynamics of the underlying cellular system of interacting cells and the reconstruction of the temporal-spatial attractor based on Takens' embedding theorem. Both computer-aided image analysis and the subsequent fractal synthesis could be performed

  17. The optimal code searching method with an improved criterion of coded exposure for remote sensing image restoration

    NASA Astrophysics Data System (ADS)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-03-01

    Coded exposure photography makes motion de-blurring a well-posed problem. The integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is significant for coded exposure. In this paper, an improved criterion for the optimal code search is proposed by analyzing the relationship between the code length and the number of ones in the code, considering the noise effect on code selection with the affine noise model. The optimal code is then obtained with a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time consumed in searching for the optimal code decreases with the presented method. The restored image shows better subjective quality and superior objective evaluation values.
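    The spirit of such selection criteria is to keep the code's frequency spectrum away from zero so the motion blur remains invertible. The following is a hedged illustration of that idea only; the paper's exact criterion and its affine noise model are not reproduced here:

```python
import numpy as np

def spectral_floor(code):
    # Minimum DFT magnitude of the shutter code. A conventional open shutter
    # (all ones) has nulls in its spectrum, making deblurring ill-posed,
    # while a good flutter code keeps every frequency bin above zero.
    return np.abs(np.fft.fft(code)).min()

box_shutter = np.ones(8)                                  # conventional shutter
flutter_code = np.array([1, 0, 1, 1, 0, 0, 1, 1], dtype=float)
```

    Ranking candidate codes by such a spectral score (and, in the paper, by noise-aware terms) is what the genetic search optimizes.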

  18. Sparse representation-based image restoration via nonlocal supervised coding

    NASA Astrophysics Data System (ADS)

    Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng

    2016-09-01

    Sparse representation (SR) and nonlocal technique (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain faithful restoration results when they are used independently. To improve the performance, in this paper, a nonlocal supervised coding strategy-based NLT for image restoration is proposed. The novel method has three main contributions. First, to exploit the useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as the supervised weights among patches. Second, a novel objective function is proposed, which integrates the supervised weight learning and the nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergent, a numerical scheme based on iterative shrinkage thresholding is developed to solve the above underdetermined inverse problem. Extensive experiments validate the effectiveness of the proposed method.
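    The iterative shrinkage-thresholding scheme mentioned above is, in its generic form, ISTA for an l1-regularized least-squares problem. A minimal sketch (with a made-up random dictionary for illustration, not the paper's learned weights):

```python
import numpy as np

def ista(A, y, lam, step, n_iter=500):
    # Alternate a gradient step on ||y - Ax||^2 with soft-thresholding,
    # the proximal operator of the l1 penalty.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - y))
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))     # hypothetical dictionary
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]          # sparse ground-truth coefficients
y = A @ x_true                        # noiseless observations
x_hat = ista(A, y, lam=0.01, step=1.0 / np.linalg.norm(A, 2) ** 2)
```

    The step size 1/||A||_2^2 guarantees convergence of the gradient step; the shrinkage amount step*lam controls how aggressively small coefficients are zeroed out.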

  20. Lung cancer-a fractal viewpoint.

    PubMed

    Lennon, Frances E; Cianci, Gianguido C; Cipriani, Nicole A; Hensing, Thomas A; Zhang, Hannah J; Chen, Chin-Tu; Murgu, Septimiu D; Vokes, Everett E; Vannier, Michael W; Salgia, Ravi

    2015-11-01

    Fractals are mathematical constructs that show self-similarity over a range of scales and non-integer (fractal) dimensions. Owing to these properties, fractal geometry can be used to efficiently estimate the geometrical complexity, and the irregularity of shapes and patterns observed in lung tumour growth (over space or time), whereas the use of traditional Euclidean geometry in such calculations is more challenging. The application of fractal analysis in biomedical imaging and time series has shown considerable promise for measuring processes as varied as heart and respiratory rates, neuronal cell characterization, and vascular development. Despite the advantages of fractal mathematics and numerous studies demonstrating its applicability to lung cancer research, many researchers and clinicians remain unaware of its potential. Therefore, this Review aims to introduce the fundamental basis of fractals and to illustrate how analysis of fractal dimension (FD) and associated measurements, such as lacunarity (texture) can be performed. We describe the fractal nature of the lung and explain why this organ is particularly suited to fractal analysis. Studies that have used fractal analyses to quantify changes in nuclear and chromatin FD in primary and metastatic tumour cells, and clinical imaging studies that correlated changes in the FD of tumours on CT and/or PET images with tumour growth and treatment responses are reviewed. Moreover, the potential use of these techniques in the diagnosis and therapeutic management of lung cancer is discussed.

  1. Lung cancer—a fractal viewpoint

    PubMed Central

    Lennon, Frances E.; Cianci, Gianguido C.; Cipriani, Nicole A.; Hensing, Thomas A.; Zhang, Hannah J.; Chen, Chin-Tu; Murgu, Septimiu D.; Vokes, Everett E.; W. Vannier, Michael; Salgia, Ravi

    2016-01-01

    Fractals are mathematical constructs that show self-similarity over a range of scales and non-integer (fractal) dimensions. Owing to these properties, fractal geometry can be used to efficiently estimate the geometrical complexity, and the irregularity of shapes and patterns observed in lung tumour growth (over space or time), whereas the use of traditional Euclidean geometry in such calculations is more challenging. The application of fractal analysis in biomedical imaging and time series has shown considerable promise for measuring processes as varied as heart and respiratory rates, neuronal cell characterization, and vascular development. Despite the advantages of fractal mathematics and numerous studies demonstrating its applicability to lung cancer research, many researchers and clinicians remain unaware of its potential. Therefore, this Review aims to introduce the fundamental basis of fractals and to illustrate how analysis of fractal dimension (FD) and associated measurements, such as lacunarity (texture) can be performed. We describe the fractal nature of the lung and explain why this organ is particularly suited to fractal analysis. Studies that have used fractal analyses to quantify changes in nuclear and chromatin FD in primary and metastatic tumour cells, and clinical imaging studies that correlated changes in the FD of tumours on CT and/or PET images with tumour growth and treatment responses are reviewed. Moreover, the potential use of these techniques in the diagnosis and therapeutic management of lung cancer is discussed. PMID:26169924

  2. Interplay between JPEG-2000 image coding and quality estimation

    NASA Astrophysics Data System (ADS)

    Pinto, Guilherme O.; Hemami, Sheila S.

    2013-03-01

Image quality and utility estimators aspire to quantify the perceptual resemblance and the usefulness of a distorted image when compared to a reference natural image, respectively. Image coders, such as JPEG-2000, traditionally aspire to allocate the available bits to maximize the perceptual resemblance of the compressed image when compared to a reference uncompressed natural image. Specifically, this can be accomplished by allocating the available bits to minimize the overall distortion, as computed by a given quality estimator. This paper applies five image quality and utility estimators, SSIM, VIF, MSE, NICE and GMSE, within a JPEG-2000 encoder for rate-distortion optimization to obtain new insights on how to improve JPEG-2000 image coding for quality and utility applications, as well as to improve the understanding of the quality and utility estimators used in this work. This work develops a rate-allocation algorithm for arbitrary quality and utility estimators within the Post-Compression Rate-Distortion Optimization (PCRD-opt) framework in JPEG-2000 image coding. Performance of the JPEG-2000 image coder when used with a variety of utility and quality estimators is then assessed. The estimators fall into two broad classes, magnitude-dependent (MSE, GMSE and NICE) and magnitude-independent (SSIM and VIF). They further differ in their use of the low-frequency image content in computing their estimates. The impact of these computational differences is analyzed across a range of images and bit rates. In general, performance of the JPEG-2000 coder below 1.6 bits/pixel with any of these estimators is highly content dependent, with the most relevant content being the amount of texture in an image and whether the strongest gradients in an image correspond to the main contours of the scene. Above 1.6 bits/pixel, all estimators produce visually equivalent images.
As a result, the MSE estimator provides the most consistent performance across all images, while specific
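In the PCRD-opt framework the paper builds on, each code-block offers a set of truncation points, and Lagrangian optimization reduces the allocation to picking, per block, the point minimizing D + λR. A toy sketch with hypothetical (rate, distortion) candidates (not the paper's estimators or data):

```python
# Hypothetical per-block candidate truncation points: (bits, distortion).
blocks = [
    [(0, 100.0), (10, 40.0), (20, 12.0), (30, 5.0)],
    [(0, 80.0), (8, 30.0), (16, 9.0), (24, 4.0)],
]

def allocate(blocks, lam):
    """Pick, per block, the truncation point minimizing D + lam * R.
    Sweeping lam trades total rate against total distortion."""
    return [min(opts, key=lambda rd: rd[1] + lam * rd[0]) for opts in blocks]

choice = allocate(blocks, lam=1.0)
total_bits = sum(r for r, _ in choice)
total_dist = sum(d for _, d in choice)
print(total_bits, total_dist)  # → 36 21.0
```

Swapping MSE for SSIM, VIF, or any other estimator only changes how the distortion values in the candidate lists are computed; the selection rule stays the same.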

  3. Investigation into How 8th Grade Students Define Fractals

    ERIC Educational Resources Information Center

    Karakus, Fatih

    2015-01-01

    The analysis of 8th grade students' concept definitions and concept images can provide information about their mental schema of fractals. There is limited research on students' understanding and definitions of fractals. Therefore, this study aimed to investigate the elementary students' definitions of fractals based on concept image and concept…

  4. Colored coded-apertures for spectral image unmixing

    NASA Astrophysics Data System (ADS)

    Vargas, Hector M.; Arguello Fuentes, Henry

    2015-10-01

Hyperspectral remote sensing technology provides detailed spectral information from every pixel in an image. Due to the low spatial resolution of hyperspectral image sensors, and the presence of multiple materials in a scene, each pixel can contain more than one spectral signature. Therefore, endmember extraction is used to determine the pure spectral signatures of the mixed materials and their corresponding abundance maps in a remotely sensed hyperspectral scene. Advanced endmember extraction algorithms have been proposed to solve this linear problem, called spectral unmixing. However, such techniques require the acquisition of the complete hyperspectral data cube to perform the unmixing procedure. Researchers have shown that using colored coded-apertures improves the quality of reconstruction in compressive spectral imaging (CSI) systems under compressive sensing (CS) theory. This work aims at developing a compressive supervised spectral unmixing scheme to estimate the endmembers and the abundance map from compressive measurements. The compressive measurements are acquired by using colored coded-apertures in a compressive spectral imaging system. Then a numerical procedure estimates the sparse vector representation in a 3D dictionary by solving a constrained sparse optimization problem. The 3D dictionary is formed by a 2D wavelet basis and a known endmember spectral library, where the wavelet basis is used to exploit the spatial information. The colored coded-apertures are designed such that the sensing matrix satisfies the restricted isometry property with high probability. Simulations show that the proposed scheme attains comparable results to the full data cube unmixing technique, but using fewer measurements.
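With a known endmember library, the per-pixel unmixing step is a linear inverse problem. A minimal unconstrained sketch with a hypothetical two-endmember library over four bands (the paper's scheme additionally works from compressive measurements and enforces sparsity and physical constraints):

```python
import numpy as np

# Hypothetical endmember signatures (columns) over 4 spectral bands.
E = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.2, 0.7],
              [0.1, 0.9]])

# A mixed pixel: 60% of endmember 0 plus 40% of endmember 1.
pixel = E @ np.array([0.6, 0.4])

# Unconstrained least-squares abundance estimate (a sketch; real unmixing
# adds non-negativity and sum-to-one constraints on the abundances).
abundances, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(np.round(abundances, 3))  # → [0.6 0.4]
```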

  5. Inter-subject neural code converter for visual image representation.

    PubMed

    Yamada, Kentaro; Miyawaki, Yoichi; Kamitani, Yukiyasu

    2015-06-01

    Brain activity patterns differ from person to person, even for an identical stimulus. In functional brain mapping studies, it is important to align brain activity patterns between subjects for group statistical analyses. While anatomical templates are widely used for inter-subject alignment in functional magnetic resonance imaging (fMRI) studies, they are not sufficient to identify the mapping between voxel-level functional responses representing specific mental contents. Recent work has suggested that statistical learning methods could be used to transform individual brain activity patterns into a common space while preserving representational contents. Here, we propose a flexible method for functional alignment, "neural code converter," which converts one subject's brain activity pattern into another's representing the same content. The neural code converter was designed to learn statistical relationships between fMRI activity patterns of paired subjects obtained while they saw an identical series of stimuli. It predicts the signal intensity of individual voxels of one subject from a pattern of multiple voxels of the other subject. To test this method, we used fMRI activity patterns measured while subjects observed visual images consisting of random and structured patches. We show that fMRI activity patterns for visual images not used for training the converter could be predicted from those of another subject where brain activity was recorded for the same stimuli. This confirms that visual images can be accurately reconstructed from the predicted activity patterns alone. Furthermore, we show that a classifier trained only on predicted fMRI activity patterns could accurately classify measured fMRI activity patterns. These results demonstrate that the neural code converter can translate neural codes between subjects while preserving contents related to visual images. 
While this method is useful for functional alignment and decoding, it may also provide a basis for
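At its core, a converter of this kind is a learned map from one subject's voxel space to another's. A toy sketch using plain least squares on synthetic data standing in for paired fMRI patterns (the actual converter's regularization and voxel selection are not modeled here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: paired activity patterns from subjects A and B
# recorded for the same 200 stimuli (A: 30 voxels, B: 20 voxels).
X_a = rng.normal(size=(200, 30))
W_true = rng.normal(size=(30, 20))
X_b = X_a @ W_true + 0.01 * rng.normal(size=(200, 20))

# Learn a linear map predicting each of B's voxels from A's pattern.
W, *_ = np.linalg.lstsq(X_a, X_b, rcond=None)

# Convert a held-out pattern of subject A into subject B's voxel space.
new_a = rng.normal(size=(1, 30))
predicted_b = new_a @ W
print(predicted_b.shape)  # → (1, 20)
```

Decoders trained on subject B's data can then be applied directly to `predicted_b`, which is the alignment property the abstract describes.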

  6. Construction of fractal nanostructures based on Kepler-Shubnikov nets

    SciTech Connect

Ivanov, V. V.; Talanov, V. M.

    2013-05-15

    A system of information codes for deterministic fractal lattices and sets of multifractal curves is proposed. An iterative modular design was used to obtain a series of deterministic fractal lattices with generators in the form of fragments of 2D structures and a series of multifractal curves (based on some Kepler-Shubnikov nets) having Cantor set properties. The main characteristics of fractal structures and their lacunar spectra are determined. A hierarchical principle is formulated for modules of regular fractal structures.

  7. Coded aperture imaging with a HURA coded aperture and a discrete pixel detector

    NASA Astrophysics Data System (ADS)

    Byard, Kevin

An investigation into the gamma ray imaging properties of a hexagonal uniformly redundant array (HURA) coded aperture and a detector consisting of discrete pixels constituted the major research effort. Such a system offers distinct advantages for the development of advanced gamma ray astronomical telescopes in terms of the provision of high quality sky images in conjunction with an imager plane which has the capacity to reject background noise efficiently. Much of the research was performed as part of the European Space Agency (ESA) sponsored study into a prospective space astronomy mission, GRASP. The effort involved both computer simulations and a series of laboratory test images. A detailed analysis of the system point spread function (SPSF) of imaging planes which incorporate discrete pixel arrays is presented and the imaging quality quantified in terms of the signal to noise ratio (SNR). Computer simulations of weak point sources in the presence of detector background noise were also investigated. Theories developed during the study were evaluated by a series of experimental measurements with a Co-57 gamma ray point source, an Anger camera detector, and a rotating HURA mask. These tests were complemented by computer simulations designed to reproduce, as closely as possible, the experimental conditions. The 60 degree antisymmetry property of HURAs was also employed to remove noise due to detector systematic effects present in the experimental images, rendering a more realistic comparison of the laboratory tests with the computer simulations. Plateau removal and weighted deconvolution techniques were also investigated as methods for the reduction of the coding error noise associated with the gamma ray images.

  8. A new fast fractal modeling approach for the detection of microcalcifications in mammograms.

    PubMed

    Sankar, Deepa; Thomas, Tessamma

    2010-10-01

    In this paper, a novel fast method for modeling mammograms by deterministic fractal coding approach to detect the presence of microcalcifications, which are early signs of breast cancer, is presented. The modeled mammogram obtained using fractal encoding method is visually similar to the original image containing microcalcifications, and therefore, when it is taken out from the original mammogram, the presence of microcalcifications can be enhanced. The limitation of fractal image modeling is the tremendous time required for encoding. In the present work, instead of searching for a matching domain in the entire domain pool of the image, three methods based on mean and variance, dynamic range of the image blocks, and mass center features are used. This reduced the encoding time by a factor of 3, 89, and 13, respectively, in the three methods with respect to the conventional fractal image coding method with quad tree partitioning. The mammograms obtained from The Mammographic Image Analysis Society database (ground truth available) gave a total detection score of 87.6%, 87.6%, 90.5%, and 87.6%, for the conventional and the proposed three methods, respectively.
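The speedup idea, pre-filtering the domain pool by cheap block features such as mean and variance before any expensive block matching, can be sketched as follows (hypothetical pool and tolerance, not the paper's exact feature thresholds):

```python
import numpy as np

rng = np.random.default_rng(1)

def features(block):
    # Cheap per-block features: mean and variance.
    return np.array([block.mean(), block.var()])

# Hypothetical domain pool: 500 random 8x8 blocks.
domains = [rng.random((8, 8)) for _ in range(500)]
dom_feats = np.array([features(d) for d in domains])

def candidate_domains(range_block, tol=0.02):
    """Keep only domains whose mean and variance are both within tol of
    the range block's, instead of searching the whole pool."""
    dist = np.abs(dom_feats - features(range_block)).max(axis=1)
    return np.flatnonzero(dist < tol)

r = rng.random((8, 8))
idx = candidate_domains(r)
print(len(idx) < len(domains))  # → True: only a fraction survives the filter
```

The expensive affine matching then runs only over `idx`, which is how the reported 3x to 89x encoding-time reductions arise.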

  9. Investigating Lossy Image Coding Using the PLHaar Transform

    SciTech Connect

    Senecal, J G; Lindstrom, P; Duchaineau, M A; Joy, K I

    2004-11-16

    We developed the Piecewise-Linear Haar (PLHaar) transform, an integer wavelet-like transform. PLHaar does not have dynamic range expansion, i.e. it is an n-bit to n-bit transform. To our knowledge PLHaar is the only reversible n-bit to n-bit transform that is suitable for lossy and lossless coding. We are investigating PLHaar's use in lossy image coding. Preliminary results from thresholding transform coefficients show that PLHaar does not produce objectionable artifacts like prior n-bit to n-bit transforms, such as the transform of Chao et al. (CFH). Also, at lower bitrates PLHaar images have increased contrast. For a given set of CFH and PLHaar coefficients with equal entropy, the PLHaar reconstruction is more appealing, although the PSNR may be lower.
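For contrast with PLHaar, the conventional integer Haar (S-transform) lifting step is easy to state; its detail coefficient needs an extra bit of range, which is exactly the dynamic-range expansion PLHaar was designed to avoid. A sketch of the conventional step (not PLHaar itself):

```python
def haar_lift(a, b):
    """One integer Haar (S-transform) lifting step: fully reversible,
    but the detail d spans twice the input range (n+1 bits)."""
    d = b - a            # detail
    s = a + (d >> 1)     # rounded average (floor division via shift)
    return s, d

def haar_unlift(s, d):
    a = s - (d >> 1)
    b = d + a
    return a, b

pair = (200, 50)
assert haar_unlift(*haar_lift(*pair)) == pair  # perfect reconstruction
print(haar_lift(*pair))  # → (125, -150)
```

The detail value -150 does not fit in the original 8-bit range, illustrating why an n-bit to n-bit transform such as PLHaar requires a different construction.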

  10. Computational radiology and imaging with the MCNP Monte Carlo code

    SciTech Connect

    Estes, G.P.; Taylor, W.M.

    1995-05-01

MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.

  11. 157km BOTDA with pulse coding and image processing

    NASA Astrophysics Data System (ADS)

    Qian, Xianyang; Wang, Zinan; Wang, Song; Xue, Naitian; Sun, Wei; Zhang, Li; Zhang, Bin; Rao, Yunjiang

    2016-05-01

    A repeater-less Brillouin optical time-domain analyzer (BOTDA) with 157.68km sensing range is demonstrated, using the combination of random fiber laser Raman pumping and low-noise laser-diode-Raman pumping. With optical pulse coding (OPC) and Non Local Means (NLM) image processing, temperature sensing with +/-0.70°C uncertainty and 8m spatial resolution is experimentally demonstrated. The image processing approach has been proved to be compatible with OPC, and it further increases the figure-of-merit (FoM) of the system by 57%.

  12. Correlated Statistical Uncertainties in Coded-Aperture Imaging

    SciTech Connect

    Fleenor, Matthew C; Blackston, Matthew A; Ziock, Klaus-Peter

    2014-01-01

In nuclear security applications, coded-aperture imagers provide the opportunity for a wealth of information regarding the attributes of both the radioactive and non-radioactive components of the objects being imaged. However, for optimum benefit to the community, spatial attributes need to be determined in a quantitative and statistically meaningful manner. To address the deficiency of quantifiable errors in coded-aperture imaging, we present uncertainty matrices containing covariance terms between image pixels for MURA mask patterns. We calculated these correlated uncertainties as functions of variation in mask rank, mask pattern over-sampling, and whether or not anti-mask data are included. Utilizing simulated point source data, we found that correlations (and inverse correlations) arose when two or more image pixels were summed. Furthermore, we found that the presence of correlations (and their inverses) was heightened by the process of over-sampling, while correlations were suppressed by the inclusion of anti-mask data and with increased mask rank. As an application of this result, we explore how statistics-based alarming in nuclear security is impacted.
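The core point, that covariances between image pixels change the uncertainty of any summed region, reduces to the quadratic form Var(Σxᵢ) = wᵀCw. A small numeric sketch with a hypothetical 3-pixel covariance matrix:

```python
import numpy as np

# Hypothetical covariance matrix for three image pixels: the diagonal
# holds per-pixel variances, the off-diagonals the covariance terms
# of the kind the paper tabulates for MURA patterns.
cov = np.array([[4.0, 1.5, 0.0],
                [1.5, 9.0, -2.0],
                [0.0, -2.0, 1.0]])

# Variance of the summed region (pixels 0..2): w^T C w with w = ones.
w = np.ones(3)
var_sum = w @ cov @ w
var_naive = np.trace(cov)  # what you'd get by ignoring the covariances
print(var_sum, var_naive)  # → 13.0 14.0
```

Here the negative covariances slightly reduce the true uncertainty of the sum relative to the naive independent-pixel estimate; positive covariances would inflate it, which is what matters for statistics-based alarming.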

  13. A Brief Historical Introduction to Fractals and Fractal Geometry

    ERIC Educational Resources Information Center

    Debnath, Lokenath

    2006-01-01

    This paper deals with a brief historical introduction to fractals, fractal dimension and fractal geometry. Many fractals including the Cantor fractal, the Koch fractal, the Minkowski fractal, the Mandelbrot and Given fractal are described to illustrate self-similar geometrical figures. This is followed by the discovery of dynamical systems and…

  14. Statistical and fractal analysis of autofluorescent myocardium images in posthumous diagnostics of acute coronary insufficiency

    NASA Astrophysics Data System (ADS)

    Boichuk, T. M.; Bachinskiy, V. T.; Vanchuliak, O. Ya.; Minzer, O. P.; Garazdiuk, M.; Motrich, A. V.

    2014-08-01

This research presents the results of an investigation of laser polarization fluorescence of biological layers (histological sections of the myocardium). The polarized structure of autofluorescence images of biological tissue layers was detected and investigated. A model describing the formation of polarization-inhomogeneous autofluorescence images of biological optically anisotropic layers is proposed. On this basis, the method of laser autofluorescence polarimetry is justified analytically and tested experimentally. The effectiveness of this method in the postmortem diagnosis of infarction is analyzed. Objective criteria (statistical moments) for the differentiation of autofluorescence images of histological sections of the myocardium were defined. The operational characteristics (sensitivity, specificity, accuracy) of this technique were determined.

  15. Block-based conditional entropy coding for medical image compression

    NASA Astrophysics Data System (ADS)

    Bharath Kumar, Sriperumbudur V.; Nagaraj, Nithin; Mukhopadhyay, Sudipta; Xu, Xiaofeng

    2003-05-01

In this paper, we propose a block-based conditional entropy coding scheme for medical image compression using the 2-D integer Haar wavelet transform. The main motivation to pursue conditional entropy coding is that the first-order conditional entropy is always theoretically less than or equal to the first- and second-order entropies. We propose a sub-optimal scan order and an optimum block size to perform conditional entropy coding for various modalities. We also propose that a similar scheme can be used to obtain a sub-optimal scan order and an optimum block size for other wavelets. The proposed approach is motivated by a desire to perform better than JPEG2000 in terms of compression ratio. We point towards developing a block-based conditional entropy coder, which has the potential to perform better than JPEG2000. Though we do not present a coder that achieves the first-order conditional entropy, a conditional adaptive arithmetic coder would come arbitrarily close to the theoretical conditional entropy. All the results in this paper are based on medical image data sets of various bit-depths and modalities.
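The motivating inequality, H(X_t | X_{t-1}) ≤ H(X_t), is easy to verify empirically on a source with strong inter-symbol dependence. A sketch with a synthetic four-symbol stream (not the paper's medical data):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a (possibly unnormalized-shape) pmf."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
x = [0]
for _ in range(9999):
    # Repeat the previous symbol with probability 0.9 -> strong context.
    x.append(x[-1] if rng.random() < 0.9 else int(rng.integers(0, 4)))

# Empirical joint distribution of (previous symbol, current symbol).
joint = np.zeros((4, 4))
for a, b in zip(x[:-1], x[1:]):
    joint[a, b] += 1
joint /= joint.sum()

h_curr = entropy(joint.sum(axis=0))                    # H(X_t)
h_cond = entropy(joint) - entropy(joint.sum(axis=1))   # H(X_t | X_{t-1})
print(h_cond < h_curr)  # → True
```

A conditional arithmetic coder driven by the previous-symbol context would approach `h_cond` bits/symbol rather than `h_curr`, which is the gap the proposed scheme exploits.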

  16. A thermal neutron source imager using coded apertures

    SciTech Connect

    Vanier, P.E.; Forman, L.; Selcow, E.C.

    1995-08-01

To facilitate the process of re-entry vehicle on-site inspections, it would be useful to have an imaging technique which would allow the counting of deployed multiple nuclear warheads without significant disassembly of a missile's structure. Since neutrons cannot easily be shielded without massive amounts of materials, they offer a means of imaging the separate sources inside a sealed vehicle. Thermal neutrons carry no detailed spectral information, so their detection should not be as intrusive as gamma ray imaging. A prototype device for imaging at close range with thermal neutrons has been constructed using an array of ³He position-sensitive gas proportional counters combined with a uniformly redundant coded aperture array. A sealed ²⁵²Cf source surrounded by a polyethylene moderator is used as a test source. By means of slit and pinhole experiments, count rates of image-forming neutrons (those which cast a shadow of a Cd aperture on the detector) are compared with the count rates for background neutrons. The resulting ratio, which limits the available image contrast, is measured as a function of distance from the source. The envelope of performance of the instrument is defined by the contrast ratio, the angular resolution, and the total count rate as a function of distance from the source. These factors will determine whether such an instrument could be practical as a tool for treaty verification.

  17. Optical image encryption based on real-valued coding and subtracting with the help of QR code

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng

    2015-08-01

    A novel optical image encryption based on real-valued coding and subtracting is proposed with the help of quick response (QR) code. In the encryption process, the original image to be encoded is firstly transformed into the corresponding QR code, and then the corresponding QR code is encoded into two phase-only masks (POMs) by using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone collected results show that the method is feasible and has strong tolerance to noise, phase difference and ratio between intensities of the two decryption light beams.

  18. Fractal signatures in the aperiodic Fibonacci grating.

    PubMed

    Verma, Rupesh; Banerjee, Varsha; Senthilkumaran, Paramasivam

    2014-05-01

    The Fibonacci grating (FbG) is an archetypal example of aperiodicity and self-similarity. While aperiodicity distinguishes it from a fractal, self-similarity identifies it with a fractal. Our paper investigates the outcome of these complementary features on the FbG diffraction profile (FbGDP). We find that the FbGDP has unique characteristics (e.g., no reduction in intensity with increasing generations), in addition to fractal signatures (e.g., a non-integer fractal dimension). These make the Fibonacci architecture potentially useful in image forming devices and other emerging technologies. PMID:24784044

  19. Modern transform design for advanced image/video coding applications

    NASA Astrophysics Data System (ADS)

    Tran, Trac D.; Topiwala, Pankaj N.

    2008-08-01

    This paper offers an overall review of recent advances in the design of modern transforms for image and video coding applications. Transforms have been an integral part of signal coding applications from the beginning, but emphasis had been on true floating-point transforms for most of that history. Recently, with the proliferation of low-power handheld multimedia devices, a new vision of integer-only transforms that provide high performance yet very low complexity has quickly gained ascendency. We explore two key design approaches to creating integer transforms, and focus on a systematic, universal method based on decomposition into lifting steps, and use of (dyadic) rational coefficients. This method provides a wealth of solutions, many of which are already in use in leading media codecs today, such as H.264, HD Photo/JPEG XR, and scalable audio. We give early indications in this paper, and more fully elsewhere.

  20. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

    This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.

  1. Changes in fractal dimension during aggregation.

    PubMed

    Chakraborti, Rajat K; Gardner, Kevin H; Atkinson, Joseph F; Van Benschoten, John E

    2003-02-01

    Experiments were performed to evaluate temporal changes in the fractal dimension of aggregates formed during flocculation of an initially monodisperse suspension of latex microspheres. Particle size distributions and aggregate geometrical information at different mixing times were obtained using a non-intrusive optical sampling and digital image analysis technique, under variable conditions of mixing speed, coagulant (alum) dose and particle concentration. Pixel resolution required to determine aggregate size and geometric measures including the fractal dimension is discussed and a quantitative measure of accuracy is developed. The two-dimensional fractal dimension was found to range from 1.94 to 1.48, corresponding to aggregates that are either relatively compact or loosely structured, respectively. Changes in fractal dimension are explained using a conceptual model, which describes changes in fractal dimension associated with aggregate growth and changes in aggregate structure. For aggregation of an initially monodisperse suspension, the fractal dimension was found to decrease over time in the initial stages of floc formation.

  2. Block-based embedded color image and video coding

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Pearlman, William A.; Islam, Asad

    2004-01-01

Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders such as EZW, SPIHT, and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best known color coders, while the perceptual quality of its reconstructions is superior to that of SPIHT and JPEG2000. We then propose a moving-picture-based coding system called Motion-SPECK with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000, while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK, such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as high-quality digital video recording systems, Internet video, and medical imaging.

  3. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.

  4. Coded aperture imaging with self-supporting uniformly redundant arrays

    DOEpatents

    Fenimore, Edward E.

    1983-01-01

    A self-supporting uniformly redundant array pattern for coded aperture imaging. The present invention utilizes holes which are an integer times smaller in each direction than holes in conventional URA patterns. A balance correlation function is generated where holes are represented by 1's, nonholes are represented by -1's, and supporting area is represented by 0's. The self-supporting array can be used for low energy applications where substrates would greatly reduce throughput. The balance correlation response function for the self-supporting array pattern provides an accurate representation of the source of nonfocusable radiation.
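Balanced-correlation decoding can be illustrated in one dimension with a standard (non-self-supporting) URA built from quadratic residues; the self-supporting variant adds the 0-weighted support cells described above. A sketch:

```python
import numpy as np

# A length-7 cyclic URA built from the quadratic residues mod 7 ({1, 2, 4}).
mask = np.array([0, 1, 1, 0, 1, 0, 0])   # 1 = hole, 0 = opaque
decode = 2 * mask - 1                    # 1 for holes, -1 for nonholes

# A point source at position 3 casts a cyclically shifted copy of the
# mask pattern onto the detector.
detector = np.roll(mask, 3)

# Balanced-correlation reconstruction: cyclic correlation with the
# decoding array peaks at the true source position, with flat sidelobes.
image = np.array([(detector * np.roll(decode, k)).sum() for k in range(7)])
print(image.argmax())  # → 3
```

With this mask the reconstruction is 3 at the source position and -1 everywhere else, so a point source is recovered exactly up to a removable constant offset.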

  5. Microbialites on Mars: a fractal analysis of the Athena's microscopic images

    NASA Astrophysics Data System (ADS)

    Bianciardi, G.; Rizzo, V.; Cantasano, N.

    2015-10-01

The Mars Exploration Rovers investigated Martian plains where laminated sedimentary rocks are present. The Athena morphological investigation [1] showed microstructures organized in intertwined filaments of microspherules: a texture we have also found in samples of terrestrial (biogenic) stromatolites and other microbialites, and not in pseudo-abiogenic stromatolites. We performed a quantitative image analysis in order to compare 50 microbialite images with 50 rover (Opportunity and Spirit) images (approximately 30,000/30,000 microstructures). Contours were extracted and morphometric indexes obtained: geometric and algorithmic complexities, entropy, tortuosity, minimum and maximum diameters. Terrestrial and Martian textures proved to be multifractal. Mean values and confidence intervals from the Martian images overlapped perfectly with those from terrestrial samples. The probability of this occurring by chance was less than 1/2⁸ (p < 0.004). Our work shows evidence of a widespread presence of microbialites in the Martian outcroppings: i.e., the presence of unicellular life on ancient Mars, when, without any doubt, liquid water flowed on the Red Planet.

  6. Fractal Characterization of Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Qiu, Hon-Iie; Lam, Nina Siu-Ngan; Quattrochi, Dale A.; Gamon, John A.

    1999-01-01

Two Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral images selected from the Los Angeles area, one representing an urban and the other a rural landscape, were used to examine their spatial complexity across the entire spectrum of the remote sensing data. Using the ICAMS (Image Characterization And Modeling System) software, we computed the fractal dimension values via the isarithm and triangular prism methods for all 224 bands in the two AVIRIS scenes. The resultant fractal dimensions reflect changes in image complexity across the spectral range of the hyperspectral images. Both the isarithm and triangular prism methods detect unusually high D values for the spectral bands that fall within the atmospheric absorption and scattering zones, where signal-to-noise ratios are low. Fractal dimensions for the urban area resulted in higher values than for the rural landscape, and the differences between the resulting D values are more distinct in the visible bands. The triangular prism method is sensitive to a few random speckles in the images, leading to a lower dimensionality. On the contrary, the isarithm method ignores the speckles and focuses on the major variation dominating the surface, thus resulting in a higher dimension. The fractal curves plotted for the entire bandwidth range of the hyperspectral images could be used to distinguish landscape types as well as to screen noisy bands.

  7. Categorizing biomedicine images using novel image features and sparse coding representation

    PubMed Central

    2013-01-01

Background Images embedded in biomedical publications carry rich information that often concisely summarizes key hypotheses adopted, methods employed, or results obtained in a published study. Therefore, they offer valuable clues for understanding the main content of a biomedical publication. Prior studies have pointed out the potential of mining images embedded in biomedical publications for automatically understanding and retrieving such images' associated source documents. Within the broad area of biomedical image processing, categorizing biomedical images is a fundamental step for building many advanced image analysis, retrieval, and mining applications. Similar to any automatic categorization effort, discriminative image features provide the most crucial aid in the process. Method We observe that many images embedded in biomedical publications carry versatile annotation text. Based on the locations of and the spatial relationships between these text elements in an image, we propose novel image features for image categorization, which quantitatively characterize the spatial positions and distributions of text elements inside a biomedical image. We further adopt a sparse coding representation (SCR) based technique to categorize images embedded in biomedical publications by leveraging our newly proposed image features. Results We randomly selected 990 JPEG-format images for use in our experiments, of which 310 were used as training samples and the rest as testing cases. We first segmented the 310 sample images following our proposed procedure. This step produced a total of 1035 sub-images. We then manually labeled all these sub-images according to the two-level hierarchical image taxonomy proposed by [1].
Among our annotation results, 316 are microscopy images, 126 are gel electrophoresis images, 135 are line charts, 156 are bar charts, 52 are spot charts, 25 are tables, 70 are flow charts, and the remaining 155 images are

  8. Fractal vector optical fields.

    PubMed

    Pan, Yue; Gao, Xu-Zhen; Cai, Meng-Qiang; Zhang, Guan-Lin; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian

    2016-07-15

    We introduce the concept of a fractal, which provides an alternative approach for flexibly engineering optical fields and their focal fields. We propose, design, and create a new family of optical fields, fractal vector optical fields, which build a bridge between fractals and vector optical fields. The fractal vector optical fields have polarization states exhibiting fractal geometry, and may also involve the phase and/or amplitude simultaneously. The results reveal that the focal fields exhibit self-similarity, and that the hierarchy of the fractal plays the "weeding" role. The fractal can be used to engineer the focal field. PMID:27420485

  9. Magnetohydrodynamics of fractal media

    SciTech Connect

    Tarasov, Vasily E.

    2006-05-15

    The fractal distribution of charged particles is considered; an example is a set of charged particles distributed over a fractal. Fractional integrals are used to describe the fractal distribution; these integrals are considered as approximations of integrals on fractals. Typical turbulent media can have a fractal structure, and the corresponding equations should be modified to include the fractal features of the media. The magnetohydrodynamics equations for fractal media are derived from the fractional generalization of the integral Maxwell equations and the integral hydrodynamics (balance) equations. Possible equilibrium states for these equations are considered.

  10. Predictive coding of depth images across multiple views

    NASA Astrophysics Data System (ADS)

    Morvan, Yannick; Farin, Dirk; de With, Peter H. N.

    2007-02-01

    A 3D video stream is typically obtained from a set of synchronized cameras which simultaneously capture the same scene (multiview video). This technology enables applications such as free-viewpoint video, which allows the viewer to select his preferred viewpoint, or 3D TV, where the depth of the scene can be perceived using a special display. Because the user-selected view does not always correspond to a camera position, it may be necessary to synthesize a virtual camera view. To synthesize such a virtual view, we have adopted a depth image-based rendering technique that employs one depth map for each camera. Consequently, remote rendering of the 3D video requires a compression technique for texture and depth data. This paper presents a predictive-coding algorithm for the compression of depth images across multiple views. The presented algorithm provides (a) improved coding efficiency for depth images over block-based motion-compensation encoders (H.264), and (b) random access to different views for fast rendering. The proposed depth-prediction technique works by synthesizing/computing the depth of 3D points based on the reference depth image. The attractiveness of the depth-prediction algorithm is that the prediction of depth data avoids an independent transmission of depth for each view, while simplifying the view interpolation by synthesizing depth images for arbitrary viewpoints. We present experimental results for several multiview depth sequences, which show a quality improvement of up to 1.8 dB as compared to H.264 compression.
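    The depth-prediction step can be sketched for the simplest geometry, a rectified pair of views separated by a horizontal baseline, where a reference depth Z maps to a disparity d = f * B / Z. The code below is a hypothetical minimal version (the paper's renderer handles general camera poses and occlusions more carefully):

```python
import numpy as np

def predict_depth(ref_depth, f, baseline):
    """Forward-warp a reference depth map to a horizontally shifted view.
    Assumed rectified setup: disparity d = f * B / Z; a z-buffer keeps the
    nearest surface when several pixels land on the same target column."""
    h, w = ref_depth.shape
    pred = np.full((h, w), np.inf)   # inf marks holes (disoccluded pixels)
    for y in range(h):
        for x in range(w):
            z = ref_depth[y, x]
            xt = int(round(x - f * baseline / z))
            if 0 <= xt < w:
                pred[y, xt] = min(pred[y, xt], z)
    return pred

# A fronto-parallel plane at Z = 100 shifts uniformly by its 2-px disparity.
ref = np.full((4, 8), 100.0)
pred = predict_depth(ref, f=100.0, baseline=2.0)
```

    Holes left as inf are the disocclusions that the residual coding of the predicted view must then correct.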

  11. Biometric iris image acquisition system with wavefront coding technology

    NASA Astrophysics Data System (ADS)

    Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao

    2013-09-01

    Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry, and iris texture; the other depends on individual behavioral attributes, such as signature, keystroke, voice, and gait style. Among these features, iris recognition is one of the most attractive approaches due to its randomness, texture stability over a lifetime, high entropy density, and non-invasive acquisition. While the performance of iris recognition on high-quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially when the data are acquired in challenging conditions, such as long working distance, dynamic movement of subjects, and uncontrolled illumination. There are three main contributions in this paper. First, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Second, the irradiance constraints were derived from the optical conservation theorem. Through the relationship between the subject and the detector, we could estimate the limit on working distance when the camera lens and CCD sensor were known. The working distance is set to 3 m in our system, with a pupil diameter of 86 mm and a CCD irradiance of 0.3 mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization, and post-signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over the 3 m working distance with 400 mm focal length and F/6.3 optics. 
The simulation results as well as experiments validate the proposed code

  12. MORPH-I (Ver 1.0) a software package for the analysis of scanning electron micrograph (binary formatted) images for the assessment of the fractal dimension of enclosed pore surfaces

    USGS Publications Warehouse

    Mossotti, Victor G.; Eldeeb, A. Raouf; Oscarson, Robert

    1998-01-01

    MORPH-I is a set of C-language computer programs for the IBM PC and compatible microcomputers. The programs in MORPH-I are used for the fractal analysis of scanning electron microscope and electron microprobe images of pore profiles exposed in cross-section. The program isolates and traces the cross-sectional profiles of exposed pores and computes the Richardson fractal dimension for each pore. Other programs in the set provide for image calibration, display, and statistical analysis of the computed dimensions for highly complex porous materials. Requirements: IBM PC or compatible; minimum 640 K RAM; math coprocessor; SVGA graphics board providing mode 103 display.
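    The Richardson dimension computed for each traced pore profile can be sketched as a divider walk: measure the outline length with coarser and coarser rulers and fit L(s) ~ s**(1 - D). This sketch assumes the profile arrives as an ordered (N, 2) array of boundary points, which is not necessarily MORPH-I's internal representation:

```python
import numpy as np

def richardson_dimension(pts, strides=(1, 2, 4, 8)):
    """Divider (Richardson) dimension of a closed profile: subsample the
    boundary at increasing strides, sum the segment lengths, and fit the
    log-log slope; L(s) ~ s**(1 - D) gives D = 1 - slope."""
    pts = np.asarray(pts, dtype=float)
    lengths, rulers = [], []
    for k in strides:
        sub = pts[::k]
        seg = np.diff(np.vstack([sub, sub[:1]]), axis=0)  # close the loop
        step = np.linalg.norm(seg, axis=1)
        lengths.append(step.sum())
        rulers.append(step.mean())   # effective ruler length at this stride
    slope = np.polyfit(np.log(rulers), np.log(lengths), 1)[0]
    return 1.0 - slope

# A smooth circle is not fractal, so its dimension should be close to 1;
# rough pore outlines give values noticeably above 1.
theta = np.linspace(0, 2 * np.pi, 512, endpoint=False)
d_circle = richardson_dimension(np.column_stack([np.cos(theta), np.sin(theta)]))
```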

  13. Source-channel optimized trellis codes for bitonal image transmission over AWGN channels.

    PubMed

    Kroll, J M; Phamdo, N

    1999-01-01

    We consider the design of trellis codes for transmission of binary images over additive white Gaussian noise (AWGN) channels. We first model the image as a binary asymmetric Markov source (BAMS) and then design source-channel optimized (SCO) trellis codes for the BAMS and AWGN channel. The SCO codes are shown to be superior to Ungerboeck's codes by approximately 1.1 dB (64-state code, 10(-5) bit error probability). We also show that a simple "mapping conversion" method can be used to improve the performance of Ungerboeck's codes by approximately 0.4 dB (also 64-state code and 10(-5) bit error probability). We compare the proposed SCO system with a traditional tandem system consisting of a Huffman code, a convolutional code, an interleaver, and an Ungerboeck trellis code. The SCO system significantly outperforms the tandem system. Finally, using a facsimile image, we compare the image quality of an SCO code, an Ungerboeck code, and the tandem code. The SCO code yields the best reconstructed image quality at 4-5 dB channel SNR.
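    The binary asymmetric Markov source used as the image model is easy to reproduce: two states with asymmetric flip probabilities P(1|0) and P(0|1). Below is an illustrative sketch (not the authors' code) of its entropy rate and a sampler:

```python
import numpy as np

def bams_entropy_rate(p01, p10):
    """Entropy rate in bits/symbol of a binary asymmetric Markov source
    with transition probabilities P(1|0) = p01 and P(0|1) = p10."""
    def h(p):   # binary entropy function
        return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    pi0 = p10 / (p01 + p10)          # stationary probability of state 0
    return pi0 * h(p01) + (1 - pi0) * h(p10)

def simulate_bams(n, p01, p10, seed=0):
    """Draw n symbols from the source."""
    rng = np.random.default_rng(seed)
    out, s = np.empty(n, dtype=int), 0
    for i in range(n):
        s = int(rng.random() < (p01 if s == 0 else 1.0 - p10))
        out[i] = s
    return out

rate = bams_entropy_rate(0.1, 0.3)       # well below 1 bit/symbol
samples = simulate_bams(20000, 0.1, 0.3)  # stationary P(1) = 0.1/0.4 = 0.25
```

    The gap between this rate and 1 bit/symbol is the redundancy a source-channel optimized code can exploit.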

  14. The application of coded excitation technology in medical ultrasonic Doppler imaging

    NASA Astrophysics Data System (ADS)

    Li, Weifeng; Chen, Xiaodong; Bao, Jing; Yu, Daoyin

    2008-03-01

    Medical ultrasonic Doppler imaging is one of the most important domains of modern medical imaging technology. Applying coded excitation technology to a medical ultrasonic Doppler imaging system offers higher SNR and deeper penetration depth than a conventional pulse-echo imaging system; it also improves image quality and enhances sensitivity to weak signals. Furthermore, a properly chosen coded excitation benefits the received spectrum of the Doppler signal. This paper first reviews the application of coded excitation technology in medical ultrasonic Doppler imaging systems, showing its advantages and promise, and then introduces the principles and theory of coded excitation. Second, we compare several coded sequences (including chirp and pseudo-chirp signals, Barker codes, Golay complementary sequences, M-sequences, etc.). Considering mainlobe width, range sidelobe level, signal-to-noise ratio, and Doppler signal sensitivity, we chose Barker codes as the coded sequence. Finally, we designed the coded excitation circuit. The results in B-mode imaging and Doppler flow measurement matched our expectations, demonstrating the advantages of applying coded excitation technology in a digital medical ultrasonic Doppler endoscope imaging system.
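    The pulse-compression property that favors Barker codes can be checked directly: the aperiodic autocorrelation of the length-13 Barker code has a mainlobe of 13 and sidelobes no larger than 1, about a 22.3 dB peak-to-sidelobe ratio. A quick NumPy check:

```python
import numpy as np

# Length-13 Barker code (binary phase: +1 / -1)
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# Aperiodic autocorrelation of the transmitted code, i.e. the output of
# the receiver's matched filter for a point target.
acf = np.correlate(barker13, barker13, mode='full')
mainlobe = acf.max()                                    # 13
sidelobe = np.abs(np.delete(acf, len(acf) // 2)).max()  # at most 1 for Barker codes
psl_db = 20 * np.log10(mainlobe / sidelobe)             # peak-to-sidelobe ratio
```

    This low, bounded range sidelobe level is one of the selection criteria the abstract names.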

  15. Multiple-spark photography with image separation by color coding.

    PubMed

    Kent, J C

    1969-05-01

    The application of color-coded image separation to multiple-spark photography provides a simple method for recording shadow or schlieren photographs with arbitrary magnification, extremely high frame rate, short exposure time per frame, and no parallax. A color separation, multiple-spark camera is described which produces a sequence of three frames at 5 x magnification with a maximum frame rate of 10(6)/sec and an exposure time per frame of about 0.3 microsec. Standard 10.16-cm x 12.7-cm (4-in. x 5-in.) color film is used. This camera has been useful for observing liquid atomization processes and spray motion, since it enables direct measurement of droplet size, location, velocity, and deceleration. PMID:20072366

  16. Image coding using entropy-constrained residual vector quantization

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.

  17. Fractals in art and nature: why do we like them?

    NASA Astrophysics Data System (ADS)

    Spehar, Branka; Taylor, Richard P.

    2013-03-01

    Fractals have experienced considerable success in quantifying the visual complexity exhibited by many natural patterns, and continue to capture the imagination of scientists and artists alike. Fractal patterns have also been noted for their aesthetic appeal, a suggestion further reinforced by the discovery that the poured patterns of the American abstract painter Jackson Pollock are also fractal, together with the findings that many forms of art resemble natural scenes in showing scale-invariant, fractal-like properties. While some have suggested that fractal-like patterns are inherently pleasing because they resemble natural patterns and scenes, the relation between the visual characteristics of fractals and their aesthetic appeal remains unclear. Motivated by our previous findings that humans display a consistent preference for a certain range of fractal dimension across fractal images of various types, we turn to scale-specific processing of visual information to understand this relationship. Whereas our previous preference studies focused on fractal images consisting of black shapes on white backgrounds, here we extend our investigations to include grayscale images in which the intensity variations exhibit scale invariance. This scale invariance is generated using a 1/f frequency distribution and can be tuned by varying the slope of the rotationally averaged Fourier amplitude spectrum. Thresholding the intensity of these images generates black and white fractals with scaling properties equivalent to those of the original grayscale images, allowing a direct comparison of preferences for grayscale and black and white fractals. We found no significant differences in preferences between the two groups of fractals. For both sets of images, visual preference peaked for images with amplitude spectrum slopes between 1.25 and 1.5, thus confirming and extending the previously observed relationship between fractal characteristics of images and visual preference.
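    The grayscale stimuli described above are straightforward to synthesize: shape a random-phase spectrum so the amplitude falls off as 1/f**slope, invert the FFT, and threshold at the median to obtain a black-and-white fractal with the same scaling. A sketch (the slope value 1.4 sits inside the preferred 1.25-1.5 range):

```python
import numpy as np

def spectral_fractal(n, slope, seed=0):
    """Grayscale image whose amplitude spectrum falls off as 1/f**slope,
    built from a random-phase inverse FFT and normalized to [0, 1]."""
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                                   # avoid 1/0 at DC
    amp = f ** (-slope)
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    img = np.real(np.fft.ifft2(amp * np.exp(1j * phase)))
    return (img - img.min()) / (img.max() - img.min())

gray = spectral_fractal(128, 1.4)
# Thresholding at the median yields a black-and-white fractal with
# equivalent scaling, as in the study's comparison.
bw = (gray > np.median(gray)).astype(int)
```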

  18. Simulation model of the fractal patterns in ionic conducting polymer films

    NASA Astrophysics Data System (ADS)

    Amir, Shahizat; Mohamed, Nor; Hashim Ali, Siti

    2010-02-01

    Normally, polymer electrolyte membranes are prepared and studied for applications in electrochemical devices. In this work, polymer electrolyte membranes have been used as media to culture fractals. In order to simulate the growth patterns and stages of the fractals, a model based on Brownian motion theory has been identified. A computer program has been developed for the model to simulate and visualize the fractal growth. This program has been successful in simulating the growth of the fractal and in calculating the fractal dimension of each of the simulated fractal patterns. The fractal dimensions of the simulated fractals are comparable with the values obtained from the original fractals observed in the polymer electrolyte membrane. This indicates that the model developed in the present work is in acceptable agreement with the original fractals.
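    Brownian-motion growth models of this kind are, in spirit, diffusion-limited aggregation: walkers released on a circle random-walk until they touch the cluster and stick. The sketch below grows a small lattice cluster and box-counts its dimension; parameters are illustrative, not those of the paper's program:

```python
import numpy as np

def grow_dla(n_particles=200, size=101, seed=1):
    """Diffusion-limited aggregation: each walker starts just outside the
    cluster, does a Brownian (random) walk, and sticks on first contact."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=bool)
    c = size // 2
    grid[c, c] = True                # seed particle at the centre
    r_max = 1
    moves = ((0, 1), (0, -1), (1, 0), (-1, 0))
    for _ in range(n_particles):
        stuck = False
        while not stuck:
            ang = rng.uniform(0.0, 2.0 * np.pi)
            r = r_max + 5
            y, x = int(round(c + r * np.sin(ang))), int(round(c + r * np.cos(ang)))
            while True:
                dy, dx = moves[rng.integers(4)]
                ny, nx = y + dy, x + dx
                if np.hypot(ny - c, nx - c) > r_max + 10:
                    break                      # escaped: release a new walker
                if grid[ny, nx]:
                    grid[y, x] = True          # touched the cluster: stick
                    r_max = max(r_max, int(np.hypot(y - c, x - c)) + 1)
                    stuck = True
                    break
                y, x = ny, nx
    return grid

def box_count_dim(grid, sizes=(1, 2, 4, 8)):
    """Box-counting dimension: occupied-box count scales as s**(-D)."""
    counts = []
    for s in sizes:
        n = grid.shape[0] // s
        sub = grid[:n * s, :n * s].reshape(n, s, n, s)
        counts.append(sub.any(axis=(1, 3)).sum())
    return -np.polyfit(np.log(sizes), np.log(counts), 1)[0]

cluster = grow_dla()
dim = box_count_dim(cluster)
```

    Large DLA clusters typically measure D near 1.7; small clusters like this one are noisier.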

  19. Requirements for imaging vulnerable plaque in the coronary artery using a coded aperture imaging system

    NASA Astrophysics Data System (ADS)

    Tozian, Cynthia

    A coded aperture plate was employed on a conventional gamma camera for 3D single photon emission computed tomography (SPECT) imaging on small animal models. The coded aperture design was selected to improve the spatial resolution and decrease the minimum detectable activity (MDA) required to image plaque formation in the APoE (apolipoprotein E) gene deficient mouse model when compared to conventional SPECT techniques. The pattern that was tested was a no-two-holes-touching (NTHT) modified uniformly redundant array (MURA) having 1,920 pinholes. The number of pinholes combined with the thin sintered tungsten plate was designed to increase the efficiency of the imaging modality over conventional gamma camera imaging methods while improving spatial resolution and reducing noise in the image reconstruction. The MDA required to image the vulnerable plaque in a human cardiac-torso mathematical phantom was simulated with a Monte Carlo code and evaluated to determine the optimum plate thickness by a receiver operating characteristic (ROC) analysis yielding the lowest possible MDA and highest area under the curve (AUC). A partial 3D expectation maximization (EM) reconstruction was developed to improve signal-to-noise ratio (SNR), dynamic range, and spatial resolution over the linear correlation method of reconstruction. This improvement was evaluated by imaging a mini hot rod phantom, simulating the dynamic range, and by performing a bone scan of the C-57 control mouse. Results of the experimental and simulated data as well as other plate designs were analyzed for use as a small animal and potentially human cardiac imaging modality for a radiopharmaceutical developed at Bristol-Myers Squibb Medical Imaging Company, North Billerica, MA, for diagnosing vulnerable plaques. If left untreated, these plaques may rupture, causing sudden, unexpected coronary occlusion and death. 
The results of this research indicated that imaging and reconstructing with this new partial 3D algorithm improved

  20. Optimized atom position and coefficient coding for matching pursuit-based image compression.

    PubMed

    Shoa, Alireza; Shirani, Shahram

    2009-12-01

    In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance improves when correlations between atom positions and between atom coefficients are both exploited in encoding. We find the optimal tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm achieves better rate-distortion performance than JPEG 2000 at low bit rates.
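    For readers unfamiliar with the underlying encoder, matching pursuit greedily picks, at each step, the dictionary atom most correlated with the residual; the stream of (atom position, coefficient) pairs is exactly what the abstract's encoder must code. A toy sketch with an orthonormal DCT dictionary (real matching pursuit coders use overcomplete dictionaries):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Textbook matching pursuit: repeatedly subtract the best-matching
    unit-norm atom from the residual, recording (index, coefficient)."""
    residual = np.array(signal, dtype=float)
    picks = []   # the (atom position, coefficient) pairs an encoder would code
    for _ in range(n_atoms):
        inner = dictionary @ residual
        k = np.argmax(np.abs(inner))
        picks.append((k, inner[k]))
        residual -= inner[k] * dictionary[k]
    return picks, residual

# Orthonormal DCT-II dictionary: one normalized cosine atom per row.
n = 64
t = np.arange(n)
dct = np.array([np.cos(np.pi * (t + 0.5) * k / n) for k in range(n)])
dct /= np.linalg.norm(dct, axis=1, keepdims=True)

sig = 3.0 * dct[5] + 1.5 * dct[20]        # two-atom test signal
picks, res = matching_pursuit(sig, dct, 2)
```

    With two atoms coded, the toy signal is reconstructed exactly; rate-distortion behavior comes from how many pairs are kept and how positions and coefficients are entropy coded.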

  1. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 um CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm, which includes an 80 x 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
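    A Golomb-Rice coder of the kind shared with the ADC maps each non-negative residual to a unary quotient plus k remainder bits. The software sketch below shows the bitstream format only (the chip does this in hardware, and the zigzag mapping of signed residuals is our assumption for illustration):

```python
def zigzag(x):
    """Fold signed residuals to non-negative integers: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * x if x >= 0 else -2 * x - 1

def rice_encode(value, k):
    """Golomb-Rice codeword: unary quotient, '0' terminator, k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    bits = '1' * q + '0'
    if k:
        bits += format(r, '0{}b'.format(k))
    return bits

def rice_decode(bits, k):
    """Invert rice_encode for a single codeword string."""
    q = bits.index('0')              # length of the unary prefix
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

    Small residuals, which dominate after good prediction, receive the shortest codewords, which is what makes the scheme effective at the column level.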

  2. A Probabilistic Analysis of Sparse Coded Feature Pooling and Its Application for Image Retrieval

    PubMed Central

    Zhang, Yunchao; Chen, Jing; Huang, Xiujie; Wang, Yongtian

    2015-01-01

    Feature coding and pooling, as key components of image retrieval, have been widely studied over the past several years. Recently, sparse coding with max pooling has been regarded as the state of the art for image classification. However, there is no comprehensive study concerning the application of sparse coding to image retrieval. In this paper, we first analyze the effects of different sampling strategies for image retrieval, then discuss the effect of feature pooling strategies on image retrieval performance with a probabilistic explanation in the context of the sparse coding framework, and propose a modified sum pooling procedure which can improve retrieval accuracy significantly. Further, we apply the sparse coding method to aggregate multiple types of features for large-scale image retrieval. Extensive experiments on commonly used evaluation datasets demonstrate that our final compact image representation improves retrieval accuracy significantly. PMID:26132080

  3. Diffusion-Weighted Imaging with Color-Coded Images: Towards a Reduction in Reading Time While Keeping a Similar Accuracy.

    PubMed

    Campos Kitamura, Felipe; de Medeiros Alves, Srhael; Antônio Tobaru Tibana, Luis; Abdala, Nitamar

    2016-01-01

    The aim of this study was to develop a diagnostic tool capable of providing diffusion and apparent diffusion coefficient (ADC) map information in a single color-coded image and to assess the performance of color-coded images compared with their corresponding diffusion and ADC maps. The institutional review board approved this retrospective study, which sequentially enrolled 36 head MRI scans. Diffusion-weighted images (DWI) and ADC maps were compared to their corresponding color-coded images. Four raters had their interobserver agreement measured for both conventional (DWI) and color-coded images. Differences between conventional and color-coded images were also estimated for each of the 4 raters. Cohen's kappa and percent agreement were used. Also, a paired-samples t-test was used to compare reading time for rater 1. Conventional and color-coded images had substantial or almost perfect agreement for all raters. Mean reading time for rater 1 was 47.4 seconds for DWI and 27.9 seconds for color-coded images (P = .00007). These findings are important because they support the role of color-coded images as being equivalent to conventional DWI in terms of diagnostic capability. A reduction in reading time (which makes the reading easier) is also demonstrated for one rater in this study.

  4. The validity of ICD codes coupled with imaging procedure codes for identifying acute venous thromboembolism using administrative data.

    PubMed

    Alotaibi, Ghazi S; Wu, Cynthia; Senthilselvan, Ambikaipakan; McMurtry, M Sean

    2015-08-01

    The purpose of this study was to evaluate the accuracy of using a combination of International Classification of Diseases (ICD) diagnostic codes and imaging procedure codes for identifying deep vein thrombosis (DVT) and pulmonary embolism (PE) within administrative databases. Information from the Alberta Health (AH) inpatients and ambulatory care administrative databases in Alberta, Canada was obtained for subjects with a documented imaging study result performed at a large teaching hospital in Alberta to exclude venous thromboembolism (VTE) between 2000 and 2010. In 1361 randomly-selected patients, the proportion of patients correctly classified by AH administrative data, using both ICD diagnostic codes and procedure codes, was determined for DVT and PE using diagnoses documented in patient charts as the gold standard. Of the 1361 patients, 712 had suspected PE and 649 had suspected DVT. The sensitivities for identifying patients with PE or DVT using administrative data were 74.83% (95% confidence interval [CI]: 67.01-81.62) and 75.24% (95% CI: 65.86-83.14), respectively. The specificities for PE or DVT were 91.86% (95% CI: 89.29-93.98) and 95.77% (95% CI: 93.72-97.30), respectively. In conclusion, when coupled with relevant imaging codes, VTE diagnostic codes obtained from administrative data provide a relatively sensitive and very specific method to ascertain acute VTE. PMID:25834115

  5. Implementation of a foveated image coding system for image bandwidth reduction

    NASA Astrophysics Data System (ADS)

    Kortum, Philip; Geisler, Wilson S.

    1996-04-01

    We have developed a preliminary version of a foveated imaging system, implemented on a general purpose computer, which greatly reduces the transmission bandwidth of images. The system is based on the fact that the spatial resolution of the human eye is space variant, decreasing with increasing eccentricity from the point of gaze. By taking advantage of this fact, it is possible to create an image that is almost perceptually indistinguishable from a constant resolution image, but requires substantially less information to code it. This is accomplished by degrading the resolution of the image so that it matches the space-variant degradation in the resolution of the human eye. Eye movements are recorded so that the high resolution region of the image can be kept aligned with the high resolution region of the human visual system. This system has demonstrated that significant reductions in bandwidth can be achieved while still maintaining access to high detail at any point in an image. The system has been tested using 256 x 256 8-bit gray scale images with a 20-degree field-of-view and eye-movement update rates of 30 Hz (display refresh was 60 Hz). Users of the system have reported minimal perceptual artifacts at bandwidth reductions of up to 94.7% (a factor of 18.8). Bandwidth reduction factors of over 100 are expected once lossless compression techniques are added to the system.
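    The space-variant resolution at the heart of such a system is often modeled with an acuity falloff of the form e2 / (e2 + e), where e is eccentricity in degrees and e2 is the half-resolution eccentricity. The sketch below uses e2 = 2.3 deg as an assumed constant (not a value from the paper) and estimates the sampling saving for the 20-degree, 256 x 256 configuration described above:

```python
import numpy as np

def acuity_map(n=256, fov_deg=20.0, e2=2.3, gaze=(128, 128)):
    """Relative spatial resolution across the image under the common
    falloff model acuity(e) = e2 / (e2 + e), with gaze in pixels."""
    deg_per_px = fov_deg / n
    yy, xx = np.mgrid[0:n, 0:n]
    ecc = np.hypot(yy - gaze[0], xx - gaze[1]) * deg_per_px  # eccentricity, deg
    return e2 / (e2 + ecc)

res = acuity_map()
# Sampling density scales with the square of the local resolution, so the
# mean of res**2 approximates the fraction of full-resolution samples needed.
reduction = 1.0 - (res ** 2).mean()
```

    Savings in this ballpark are broadly consistent with the large bandwidth reductions reported above, though the exact figure depends on the falloff constants and the coder.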

  6. A blind dual color images watermarking based on IWT and state coding

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a state-coding based blind watermarking algorithm is proposed to embed a color image watermark into a color host image. The technique of state coding, which makes the state code of a data set equal to the hidden watermark information, is introduced. When embedding the watermark, using the Integer Wavelet Transform (IWT) and the rules of state coding, the R, G, and B components of the color image watermark are embedded into the Y, Cr, and Cb components of the color host image. Moreover, the rules of state coding are also used to extract the watermark from the watermarked image without resorting to the original watermark or original host image. Experimental results show that the proposed watermarking algorithm not only meets the demands of watermark invisibility and robustness, but also performs well compared with other methods considered in this work.

  7. Fractal-based description of natural scenes.

    PubMed

    Pentland, A P

    1984-06-01

    This paper addresses the problems of 1) representing natural shapes such as mountains, trees, and clouds, and 2) computing their description from image data. To solve these problems, we must be able to relate natural surfaces to their images; this requires a good model of natural surface shapes. Fractal functions are a good choice for modeling 3-D natural surfaces because 1) many physical processes produce a fractal surface shape, 2) fractals are widely used as a graphics tool for generating natural-looking shapes, and 3) a survey of natural imagery has shown that the 3-D fractal surface model, transformed by the image formation process, furnishes an accurate description of both textured and shaded image regions. The 3-D fractal model provides a characterization of 3-D surfaces and their images for which the appropriateness of the model is verifiable. Furthermore, this characterization is stable over transformations of scale and linear transforms of intensity. The 3-D fractal model has been successfully applied to the problems of 1) texture segmentation and classification, 2) estimation of 3-D shape information, and 3) distinguishing between perceptually ``smooth'' and perceptually ``textured'' surfaces in the scene.

  8. Fractals for physicians.

    PubMed

    Thamrin, Cindy; Stern, Georgette; Frey, Urs

    2010-06-01

    There is increasing interest in the study of fractals in medicine. In this review, we provide an overview of fractals, of techniques available to describe fractals in physiological data, and we propose some reasons why a physician might benefit from an understanding of fractals and fractal analysis, with an emphasis on paediatric respiratory medicine where possible. Among these reasons are the ubiquity of fractal organisation in nature and in the body, and how changes in this organisation over the lifespan provide insight into development and senescence. Fractal properties have also been shown to be altered in disease and even to predict the risk of worsening of disease. Finally, implications of a fractal organisation include robustness to errors during development, ability to adapt to surroundings, and the restoration of such organisation as targets for intervention and treatment.

  9. Chaos, Fractals, and Polynomials.

    ERIC Educational Resources Information Center

    Tylee, J. Louis; Tylee, Thomas B.

    1996-01-01

    Discusses chaos theory; linear algebraic equations and the numerical solution of polynomials, including the use of the Newton-Raphson technique to find polynomial roots; fractals; search region and coordinate systems; convergence; and generating color fractals on a computer. (LRW)
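    The connection the article draws between Newton-Raphson root finding and color fractals is easy to demonstrate: color each starting point of the complex plane by which root of z^3 - 1 = 0 the iteration converges to. A short sketch:

```python
import numpy as np

def newton_fractal(n=200, iters=30, tol=1e-6):
    """Basins of attraction of Newton-Raphson applied to z**3 - 1:
    each pixel is labeled 0, 1, or 2 by its nearest root, giving the
    classic three-colored polynomial fractal."""
    roots = np.array([1.0, -0.5 + 0.8660254j, -0.5 - 0.8660254j])
    y, x = np.mgrid[-1.5:1.5:n * 1j, -1.5:1.5:n * 1j]
    z = x + 1j * y
    for _ in range(iters):
        mask = np.abs(z) > tol           # guard against dividing by zero
        z[mask] -= (z[mask]**3 - 1) / (3 * z[mask]**2)
    return np.argmin(np.abs(z[..., None] - roots), axis=-1)

basins = newton_fractal()
```

    Mapping the three labels to colors reproduces the intricately interleaved basin boundaries that make these images fractal.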

  10. Mask design and fabrication in coded aperture imaging

    NASA Astrophysics Data System (ADS)

    Shutler, Paul M. E.; Springham, Stuart V.; Talebitaher, Alireza

    2013-05-01

    We introduce the new concept of a row-spaced mask, where a number of blank rows are interposed between every pair of adjacent rows of holes of a conventional cyclic difference set based coded mask. At the cost of a small loss in signal-to-noise ratio, this can substantially reduce the number of holes required to image extended sources, at the same time increasing mask strength uniformly across the aperture, as well as making the mask automatically self-supporting. We also show that the Finger and Prince construction can be used to wrap any cyclic difference set onto a two-dimensional mask, regardless of the number of its pixels. We use this construction to validate by means of numerical simulations not only the performance of row-spaced masks, but also the pixel padding technique introduced by in 't Zand. Finally, we provide a computer program CDSGEN.EXE which, on a fast modern computer and for any Singer set of practical size and open fraction, generates the corresponding pattern of holes in seconds.
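    A concrete example of a difference-set aperture is the quadratic-residue (MURA) family, for which the mask follows directly from which indices are squares modulo a prime p. The sketch below builds the standard square MURA pattern and then applies the row-spacing idea from the abstract; note that CDSGEN.EXE works with Singer sets, so this illustrates the construction style rather than that program's output:

```python
import numpy as np

def mura_mask(p):
    """Square MURA coded-aperture pattern for prime p (Gottesman-Fenimore
    construction): 1 = open hole, 0 = opaque."""
    residues = {(i * i) % p for i in range(1, p)}   # quadratic residues mod p
    c = np.array([1 if i in residues else -1 for i in range(p)])
    a = np.zeros((p, p), dtype=int)
    for i in range(p):
        for j in range(p):
            if i == 0:
                a[i, j] = 0
            elif j == 0 or c[i] * c[j] == 1:
                a[i, j] = 1
    return a

def row_spaced(mask, blank_rows=1):
    """Interpose blank rows between adjacent hole rows, in the spirit of
    the row-spaced masks described above."""
    p, q = mask.shape
    out = np.zeros((p * (1 + blank_rows), q), dtype=int)
    out[::1 + blank_rows] = mask
    return out

m = mura_mask(11)
rs = row_spaced(m)
```

    For p = 11 the pattern is 60/121, roughly 50% open; interposing one blank row between hole rows doubles the mask height while leaving the hole pattern intact.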

  11. A novel approach to correct the coded aperture misalignment for fast neutron imaging

    SciTech Connect

    Zhang, F. N.; Hu, H. S. Wang, D. M.; Jia, J.; Zhang, T. K.; Jia, Q. G.

    2015-12-15

    Aperture alignment is crucial for neutron imaging diagnostics because it has a significant impact on the coded imaging and on the understanding of the neutron source. In our previous studies on a neutron imaging system with a coded aperture for a large field of view, a “residual watermark,” certain extra information that overlies the reconstructed image and has nothing to do with the source, is discovered when peak normalization is employed in genetic algorithms (GA) to reconstruct the source image. Studies on the basic properties of the residual watermark indicate that it can characterize the coded aperture and can thus be used to determine the location of the coded aperture relative to the system axis. In this paper, we have further analyzed the essential conditions for the existence of the residual watermark and the requirements on the reconstruction algorithm for its emergence. A gamma coded imaging experiment has been performed to verify the existence of the residual watermark. Based on the residual watermark, a correction method for aperture misalignment has been studied. A multiple linear regression model of the position of the coded aperture axis, the position of the residual watermark center, and the gray barycenter of the neutron source was set up with twenty training samples. Using the regression model and verification samples, we found the position of the coded aperture axis relative to the system axis with an accuracy of approximately 20 μm. In conclusion, a novel approach has been established to correct coded aperture misalignment for fast neutron coded imaging.

  12. Wireless image transmission using turbo codes and optimal unequal error protection.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2005-11-01

    A novel image transmission scheme is proposed for the communication of set partitioning in hierarchical trees (SPIHT) image streams over wireless channels. The proposed scheme employs turbo codes and Reed-Solomon codes in order to deal effectively with burst errors. An algorithm for the optimal unequal error protection of the compressed bitstream is also proposed and applied in conjunction with an inherently more efficient technique for product code decoding. The resulting scheme is tested for the transmission of images over wireless channels. Experimental evaluation clearly demonstrates the superiority of the proposed transmission system in comparison to well-known robust coding schemes.

  13. Fractal-based wideband invisibility cloak

    NASA Astrophysics Data System (ADS)

    Cohen, Nathan; Okoro, Obinna; Earle, Dan; Salkind, Phil; Unger, Barry; Yen, Sean; McHugh, Daniel; Polterzycki, Stefan; Shelman-Cohen, A. J.

    2015-03-01

    A wideband invisibility cloak (IC) at microwave frequencies is described. Using fractal resonators in closely spaced (subwavelength) arrays arranged in a minimal number of cylindrical layers (rings), the IC demonstrates that it is physically possible to attain a 'see through' cloaking device with: (a) wideband coverage; (b) simple and attainable fabrication; (c) high-fidelity emulation of the free path; (d) minimal side scattering; (e) a near absence of shadowing in the scattering. Although not a practical device, this fractal-enabled technology demonstrator opens up new opportunities for diverted-image (DI) technology and for the use of fractals in wideband optical, infrared, and microwave applications.

  14. Distributed image coding for digital image recovery from the print-scan channel.

    PubMed

    Samadani, Ramin; Mukherjee, Debargha

    2010-03-01

    A printed digital photograph is difficult to reuse because the digital information that generated the print may no longer be available. This paper describes a method for approximating the original digital image by combining a scan of the printed photograph with digital auxiliary information kept together with the print. We formulate and solve the approximation problem using a Wyner-Ziv coding framework. During encoding, the Wyner-Ziv auxiliary information consists of a small amount of digital data composed of a number of sampled luminance pixel blocks and a number of sampled color pixel values to enable subsequent accurate registration and color-reproduction during decoding. The registration and color information is augmented by an additional amount of digital data encoded using Wyner-Ziv coding techniques that recovers residual errors and lost high spatial frequencies. The decoding process consists of scanning the printed photograph, together with a two step decoding process. The first decoding step, using the registration and color auxiliary information, generates a side-information image which registers and color corrects the scanned image. The second decoding step uses the additional Wyner-Ziv layer together with the side-information image to provide a closer approximation of the original, reducing residual errors and restoring the lost high spatial frequencies. The experimental results confirm the reduced digital storage needs when the scanned print assists in the digital reconstruction.

  15. Fractals in the Classroom

    ERIC Educational Resources Information Center

    Fraboni, Michael; Moller, Trisha

    2008-01-01

    Fractal geometry offers teachers great flexibility: It can be adapted to the level of the audience or to time constraints. Although easily explained, fractal geometry leads to rich and interesting mathematical complexities. In this article, the authors describe fractal geometry, explain the process of iteration, and provide a sample exercise.

  16. Characterizing Hyperspectral Imagery (AVIRIS) Using Fractal Technique

    NASA Technical Reports Server (NTRS)

    Qiu, Hong-Lie; Lam, Nina Siu-Ngan; Quattrochi, Dale

    1997-01-01

    With the rapid increase in hyperspectral data acquired by various experimental hyperspectral imaging sensors, it is necessary to develop efficient and innovative tools to handle and analyze these data. The objective of this study is to seek effective spatial analytical tools for summarizing the spatial patterns of hyperspectral imaging data. In this paper, we (1) examine how the fractal dimension D changes across spectral bands of hyperspectral imaging data and (2) determine the relationships between fractal dimension and image content. It has been documented that fractal dimension changes across spectral bands for Landsat-TM data and that its value (D) is largely a function of the complexity of the landscape under study. Newly available hyperspectral imaging data, such as that from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) with 224 bands, cover a wider spectral range with a much finer spectral resolution. Our preliminary results show that fractal dimension values of AVIRIS scenes from the Santa Monica Mountains in California vary between 2.25 and 2.99. However, high fractal dimension values (D > 2.8) are found only in spectral bands with a high noise level, while bands with good image quality have a fairly stable dimension value (D = 2.5 - 2.6). This suggests that D can also be used as a summary statistic to represent the image quality or content of spectral bands.
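    Fractal dimension is typically estimated by box counting: cover the image at several box sizes, count occupied boxes, and fit the log-log slope. The sketch below is a generic estimator on a binary mask; the study's estimator for intensity surfaces (which yields 2 < D < 3) differs, so this only illustrates the counting-and-slope idea.

    ```python
    import numpy as np

    def box_count_dimension(mask, sizes=(2, 4, 8, 16, 32)):
        """Estimate the box-counting dimension of a binary 2-D mask.

        For each box size s, count boxes containing at least one set pixel,
        then fit log(count) against log(1/s); the slope is the dimension."""
        counts = []
        n = mask.shape[0]
        for s in sizes:
            # Partition the image into s-by-s blocks and count non-empty ones.
            view = mask[:n - n % s, :n - n % s].reshape(n // s, s, -1, s)
            counts.append(np.count_nonzero(view.any(axis=(1, 3))))
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    img = np.zeros((128, 128), dtype=bool)
    img[64, :] = True                     # a straight line: dimension ~ 1
    line_d = box_count_dimension(img)
    full_d = box_count_dimension(np.ones((128, 128), dtype=bool))  # ~ 2
    ```

    Sanity checks on known sets (a line gives D ≈ 1, a filled square D ≈ 2) are a common way to validate such an estimator before applying it to image bands.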

  17. A comparison of the fractal and JPEG algorithms

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Shahshahani, M.

    1991-01-01

    A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root mean square criterion and a peak signal to noise criterion.

  18. Optical fractal synthesizer - Concept and experimental verification

    NASA Astrophysics Data System (ADS)

    Tanida, Jun; Uemoto, Atsushi; Ichioka, Yoshiki

    1993-02-01

    Generation of fractal images with an iterated function system (IFS) (Barnsley, 1988) that can be easily implemented using optical techniques is considered. An optical fractal synthesizer (OFS) is described which is capable of effectively computing the iterated function systems taking advantage of optical processing in data continuity and parallelism. An experimental system based on two optical subsystems for affine transformation and a TV-feedback line has been constructed to demonstrate the processing capability of the OFS.

  19. Coded illumination for motion-blur free imaging of cells on cell-phone based imaging flow cytometer

    NASA Astrophysics Data System (ADS)

    Saxena, Manish; Gorthi, Sai Siva

    2014-10-01

    Cell-phone based imaging flow cytometry can be realized by flowing cells through the microfluidic devices, and capturing their images with an optically enhanced camera of the cell-phone. Throughput in flow cytometers is usually enhanced by increasing the flow rate of cells. However, maximum frame rate of camera system limits the achievable flow rate. Beyond this, the images become highly blurred due to motion-smear. We propose to address this issue with coded illumination, which enables recovery of high-fidelity images of cells far beyond their motion-blur limit. This paper presents simulation results of deblurring the synthetically generated cell/bead images under such coded illumination.

  20. Quantitative evaluation of midpalatal suture maturation via fractal analysis

    PubMed Central

    Kwak, Kyoung Ho; Kim, Yong-Il; Kim, Yong-Deok

    2016-01-01

    Objective The purpose of this study was to determine whether the results of fractal analysis can be used as criteria for the evaluation of midpalatal suture maturation. Methods The study included 131 subjects over 18 years of age (range, 18.1–53.4 years) who underwent cone-beam computed tomography. Skeletonized images of the midpalatal suture were obtained via image processing software and used to calculate fractal dimensions. Correlations between maturation stage and fractal dimension were calculated using Spearman's correlation coefficient. Optimal fractal dimension cut-off values were determined using a receiver operating characteristic curve. Results The distribution of maturation stages of the midpalatal suture according to the cervical vertebrae maturation index was highly variable, and there was a strong negative correlation between maturation stage and fractal dimension (−0.623, p < 0.001). Fractal dimension was a statistically significant indicator of dichotomous results with regard to maturation stage (area under curve = 0.794, p < 0.001). A test in which fractal dimension was used to predict the variable that splits maturation stages into A–C and D or E yielded an optimal fractal dimension cut-off value of 1.0235. Conclusions There was a strong negative correlation between fractal dimension and midpalatal suture maturation. Fractal analysis is an objective quantitative method, and therefore we suggest that it may be useful for the evaluation of midpalatal suture maturation. PMID:27668195
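    The ROC-based cut-off selection can be illustrated with a small sketch. The score distributions below are synthetic assumptions chosen to mimic the reported negative correlation (lower fractal dimension at later maturation stages); the Youden-index rule is one common way to pick an optimal cut-off, not necessarily the one used in the study.

    ```python
    import numpy as np

    # Hypothetical fractal-dimension scores: stages D-E are assumed to have
    # LOWER fractal dimension, matching the reported negative correlation.
    rng = np.random.default_rng(1)
    fd_abc = rng.normal(1.08, 0.04, 60)   # stages A-C (assumed distribution)
    fd_de = rng.normal(0.98, 0.04, 60)    # stages D-E (assumed distribution)
    scores = np.concatenate([fd_abc, fd_de])
    is_de = np.concatenate([np.zeros(60, bool), np.ones(60, bool)])

    def optimal_cutoff(scores, positive):
        """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1,
        calling a sample positive (stage D/E) when its score <= threshold."""
        best_j, best_t = -1.0, None
        for t in np.unique(scores):
            pred = scores <= t
            sens = np.mean(pred[positive])       # true-positive rate
            spec = np.mean(~pred[~positive])     # true-negative rate
            j = sens + spec - 1.0
            if j > best_j:
                best_j, best_t = j, t
        return best_t, best_j

    cutoff, youden = optimal_cutoff(scores, is_de)
    ```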

  2. Reduction of artefacts and noise for a wavefront coding athermalized infrared imaging system

    NASA Astrophysics Data System (ADS)

    Feng, Bin; Zhang, Xiaodong; Shi, Zelin; Xu, Baoshu; Zhang, Chengshuo

    2016-07-01

    Because of obvious drawbacks, including serious artefacts and noise in the decoded image, existing wavefront coding infrared imaging systems are severely restricted in application. The proposed ultra-precision diamond machining technique manufactures an optical phase mask with a form manufacturing error of approximately 770 nm and a surface roughness value Ra of 5.44 nm. The proposed decoding method outperforms the classical Wiener filtering method on three indices: mean square error, mean structural similarity index and noise equivalent temperature difference. Based on the results mentioned above and the basic principle of the wavefront coding technique, this paper further develops a wavefront coding infrared imaging system. Experimental results prove that our wavefront coding infrared imaging system yields a decoded image with good quality over a temperature range from −40 °C to +70 °C.

  3. Large-field electron imaging and X-ray elemental mapping unveil the morphology, structure, and fractal features of a Cretaceous fossil at the centimeter scale.

    PubMed

    Oliveira, Naiara C; Silva, João H; Barros, Olga A; Pinheiro, Allysson P; Santana, William; Saraiva, Antônio A F; Ferreira, Odair P; Freire, Paulo T C; Paula, Amauri J

    2015-10-01

    Here we used a scanning electron microscopy approach that detected backscattered electrons (BSEs) and X-rays (from ionization processes) along a large-field (LF) scan, applied to a Cretaceous fossil of a shrimp (area ∼280 mm²) from the Araripe Sedimentary Basin. High-definition LF images from BSEs and X-rays were generated by assembling thousands of magnified images covering the whole area of the fossil, thus unveiling morphological and compositional aspects at length scales from micrometers to centimeters. Morphological features of the shrimp such as pleopods, pereopods, and antennae located in near-surface layers (undetected by photography techniques) were unveiled in detail in LF BSE images and in calcium and phosphorus elemental maps (mineralized as hydroxyapatite). LF elemental maps for zinc and sulfur indicated a rare fossilization event observed for the first time in fossils from the Araripe Sedimentary Basin: the mineralization of zinc sulfide interfacing with hydroxyapatite in the fossil. Finally, a dimensional analysis of the phosphorus map led to an important finding: the existence of a fractal characteristic (D = 1.63) for the hydroxyapatite-matrix interface, a result of physical-geological events occurring with spatial scale invariance on the specimen over millions of years.

  4. A coded aperture imaging system optimized for hard X-ray and gamma ray astronomy

    NASA Technical Reports Server (NTRS)

    Gehrels, N.; Cline, T. L.; Huters, A. F.; Leventhal, M.; Maccallum, C. J.; Reber, J. D.; Stang, P. D.; Teegarden, B. J.; Tueller, J.

    1985-01-01

    A coded aperture imaging system was designed for the Gamma-Ray Imaging Spectrometer (GRIS). The system is optimized for imaging 511 keV positron-annihilation photons. For a galactic-center 511-keV source strength of 0.001 photons/sq cm/s, the source location accuracy is expected to be ±0.2 deg.

  5. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

    Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. Proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, hence the design of coded apertures must take saturation into account. Saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. In this paper, the design of uniform adaptive grayscale coded apertures (UAGCA) is proposed to improve the dynamic range of the estimated spectral images by reducing saturation levels. Saturation is attenuated between snapshots using an adaptive filter that updates the entries of the grayscale coded aperture based on previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements in image reconstruction of up to 10 dB compared with uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).

  6. Multi-frame image super resolution based on sparse coding.

    PubMed

    Kato, Toshiyuki; Hino, Hideitsu; Murata, Noboru

    2015-06-01

    An image super-resolution method based on multiple observations of low-resolution images is proposed. The method relies on sub-pixel-accuracy block matching for estimating relative displacements of the observed images, and on sparse signal representation for estimating the corresponding high-resolution image, where the correspondence between high- and low-resolution images is modeled by a certain degradation process. Relative displacements of small patches of the observed low-resolution images are accurately estimated by a computationally efficient block matching method. The matching scores of the block matching are used to select a subset of low-resolution patches for reconstructing each high-resolution patch; that is, an adaptive selection of informative low-resolution images is realized. The proposed method is shown to perform comparably or superior to conventional super-resolution methods in experiments using various images.
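    Integer-pixel block matching by sum of squared differences, the starting point for the sub-pixel displacement estimation described above, can be sketched as follows (a minimal illustration, not the authors' implementation):

    ```python
    import numpy as np

    def block_match(ref, target, top, left, size=8, search=4):
        """Find the integer displacement of a reference block within a target
        image by minimizing the sum of squared differences (SSD) over a small
        search window around the block's original position."""
        block = ref[top:top + size, left:left + size]
        best, best_ssd = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + size > target.shape[0] or x + size > target.shape[1]:
                    continue  # candidate block falls outside the image
                ssd = np.sum((target[y:y + size, x:x + size] - block) ** 2)
                if ssd < best_ssd:
                    best_ssd, best = ssd, (dy, dx)
        return best, best_ssd

    rng = np.random.default_rng(2)
    ref = rng.random((32, 32))
    target = np.roll(ref, (2, -1), axis=(0, 1))   # shift down 2, left 1
    (dy, dx), score = block_match(ref, target, 12, 12)
    ```

    The minimizing SSD score is exactly the kind of matching score the abstract describes using to select informative low-resolution patches.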

  7. Edges of Saturn's rings are fractal.

    PubMed

    Li, Jun; Ostoja-Starzewski, Martin

    2015-01-01

    The images recently sent by the Cassini spacecraft mission (on the NASA website http://saturn.jpl.nasa.gov/photos/halloffame/) show the complex and beautiful rings of Saturn. Over the past few decades, various conjectures were advanced that Saturn's rings are Cantor-like sets, although no convincing fractal analysis of actual images has ever appeared. Here we focus on four images sent by the Cassini spacecraft mission (slide #42 "Mapping Clumps in Saturn's Rings", slide #54 "Scattered Sunshine", slide #66 taken two weeks before the planet's August 2009 equinox, and slide #68 showing edge waves raised by Daphnis on the Keeler Gap) and one image from the Voyager 2 mission in 1981. Using three box-counting methods, we determine the fractal dimension of the edges of the rings seen here to be consistently about 1.63-1.78. This clarifies in what sense Saturn's rings are fractal. PMID:25883885

  9. Fractal analysis of DNA sequence data

    SciTech Connect

    Berthelsen, C.L.

    1993-01-01

    DNA sequence databases are growing at an almost exponential rate. New analysis methods are needed to extract knowledge about the organization of nucleotides from this vast amount of data. Fractal analysis is a new scientific paradigm that has been used successfully in many domains, including the biological and physical sciences. Biological growth is a nonlinear dynamic process, and some have suggested that considering fractal geometry as a biological design principle may be most productive. This research is an exploratory study of the application of fractal analysis to DNA sequence data. A simple random fractal, the random walk, is used to represent DNA sequences. The fractal dimension of these walks is then estimated using the "sandbox method". Analysis of 164 human DNA sequences compared to three types of control sequences (random, base-content matched, and dimer-content matched) reveals that long-range correlations are present in DNA that are not explained by base or dimer frequencies. The study also revealed that the fractal dimension of coding sequences was significantly lower than that of sequences that were primarily noncoding, indicating the presence of longer-range correlations in functional sequences. The multifractal spectrum is used to analyze fractals that are heterogeneous and have a different fractal dimension for subsets with different scalings. The multifractal spectrum of the random walks of twelve mitochondrial genome sequences was estimated. Eight vertebrate mtDNA sequences had uniformly lower spectrum values than did four invertebrate mtDNA sequences. Thus, vertebrate mitochondria show significantly longer-range correlations than do invertebrate mitochondria. The higher multifractal spectrum values for invertebrate mitochondria suggest a more random organization of the sequences. This research also includes considerable theoretical work on the effects of finite size, embedding dimension, and scaling ranges.
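    The random-walk representation of a DNA sequence can be sketched as below. The purine/pyrimidine ±1 mapping is one common choice and is assumed here; the dissertation may use a different encoding.

    ```python
    import numpy as np

    # One common encoding (assumed here): purines (A, G) step +1,
    # pyrimidines (C, T) step -1; the walk is the cumulative sum of steps.
    STEP = {"A": 1, "G": 1, "C": -1, "T": -1}

    def dna_walk(seq):
        """Return the random-walk representation of a DNA sequence."""
        return np.cumsum([STEP[b] for b in seq])

    rng = np.random.default_rng(3)
    random_seq = "".join(rng.choice(list("ACGT"), 1000))
    walk = dna_walk(random_seq)

    # For an uncorrelated control sequence the walk's displacement grows
    # roughly as sqrt(n); long-range correlations make it grow faster.
    rms = np.sqrt(np.mean(walk.astype(float) ** 2))
    ```

    Comparing such walks for real sequences against random and composition-matched controls is the basic setup the study describes.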

  10. Fractal Analysis of DNA Sequence Data

    NASA Astrophysics Data System (ADS)

    Berthelsen, Cheryl Lynn

    DNA sequence databases are growing at an almost exponential rate. New analysis methods are needed to extract knowledge about the organization of nucleotides from this vast amount of data. Fractal analysis is a new scientific paradigm that has been used successfully in many domains including the biological and physical sciences. Biological growth is a nonlinear dynamic process and some have suggested that to consider fractal geometry as a biological design principle may be most productive. This research is an exploratory study of the application of fractal analysis to DNA sequence data. A simple random fractal, the random walk, is used to represent DNA sequences. The fractal dimension of these walks is then estimated using the "sandbox method." Analysis of 164 human DNA sequences compared to three types of control sequences (random, base-content matched, and dimer-content matched) reveals that long-range correlations are present in DNA that are not explained by base or dimer frequencies. The study also revealed that the fractal dimension of coding sequences was significantly lower than sequences that were primarily noncoding, indicating the presence of longer-range correlations in functional sequences. The multifractal spectrum is used to analyze fractals that are heterogeneous and have a different fractal dimension for subsets with different scalings. The multifractal spectrum of the random walks of twelve mitochondrial genome sequences was estimated. Eight vertebrate mtDNA sequences had uniformly lower spectra values than did four invertebrate mtDNA sequences. Thus, vertebrate mitochondria show significantly longer-range correlations than do invertebrate mitochondria. The higher multifractal spectra values for invertebrate mitochondria suggest a more random organization of the sequences. This research also includes considerable theoretical work on the effects of finite size, embedding dimension, and scaling ranges.

  11. Hands-On Fractals and the Unexpected in Mathematics

    ERIC Educational Resources Information Center

    Gluchoff, Alan

    2006-01-01

    This article describes a hands-on project in which unusual fractal images are produced using only a photocopy machine and office supplies. The resulting images are an example of the contraction mapping principle.

  12. Experimental implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples

    NASA Astrophysics Data System (ADS)

    Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2015-03-01

    A fast and accurate scatter imaging technique to differentiate cancerous and healthy breast tissue is introduced in this work. Such a technique would have wide-ranging clinical applications from intra-operative margin assessment to breast cancer screening. Coherent Scatter Computed Tomography (CSCT) has been shown to differentiate cancerous from healthy tissue, but the need to raster scan a pencil beam at a series of angles and slices in order to reconstruct 3D images makes it prohibitively time consuming. In this work we apply the coded aperture coherent scatter spectral imaging technique to reconstruct 3D images of breast tissue samples from experimental data taken without the rotation usually required in CSCT. We present our experimental implementation of coded aperture scatter imaging, the reconstructed images of the breast tissue samples and segmentations of the 3D images in order to identify the cancerous and healthy tissue inside of the samples. We find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside of them. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside of ex vivo samples within a time on the order of a minute.

  13. Exploring Fractals in the Classroom.

    ERIC Educational Resources Information Center

    Naylor, Michael

    1999-01-01

    Describes an activity involving six investigations. Introduces students to fractals, allows them to study the properties of some famous fractals, and encourages them to create their own fractal artwork. Contains 14 references. (ASK)

  14. Fractals: To Know, to Do, to Simulate.

    ERIC Educational Resources Information Center

    Talanquer, Vicente; Irazoque, Glinda

    1993-01-01

    Discusses the development of fractal theory and suggests fractal aggregates as an attractive alternative for introducing fractal concepts. Describes methods for producing metallic fractals and a computer simulation for drawing fractals. (MVL)

  15. MTF analysis for coded aperture imaging in a flat panel display

    NASA Astrophysics Data System (ADS)

    Suh, Sungjoo; Han, Jae-Joon; Park, Dusik

    2014-09-01

    In this paper, we analyze the modulation transfer function (MTF) of coded aperture imaging in a flat panel display. The flat panel display with a sensor panel forms lens-less multi-view cameras through the imaging pattern of modified uniformly redundant arrays (MURA) on the display panel. To analyze the MTF of the coded aperture imaging implemented on the display panel, we first mathematically model the encoding process of coded aperture imaging, where the projected image on the sensor panel is modeled as a convolution of the scaled object and a function of the imaging pattern. Then, the system point spread function is determined by incorporating a decoding process that depends on the pixel pitch of the display screen and the decoding function. Finally, the MTF of the system is derived from the magnitude of the Fourier transform of the determined system point spread function. To demonstrate the validity of the mathematically derived MTF, we build a coded aperture imaging system that can capture the scene in front of the display, where the system consists of a display screen and a sensor panel. Experimental results show that the derived MTF of coded aperture imaging in a flat panel display system corresponds well to the measured MTF.
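    The final step, deriving the MTF as the magnitude of the Fourier transform of the system point spread function, can be sketched generically (the Gaussian PSFs below are illustrative assumptions, not the display system's actual PSF):

    ```python
    import numpy as np

    def mtf_from_psf(psf):
        """Compute the MTF as the normalized magnitude of the 2-D Fourier
        transform of a point spread function."""
        otf = np.fft.fft2(psf)
        mtf = np.abs(otf)
        return mtf / mtf[0, 0]          # normalize to 1 at zero frequency

    def gaussian_psf(n, sigma):
        """Illustrative Gaussian PSF on an n-by-n grid, normalized to unit sum."""
        y, x = np.mgrid[:n, :n] - n // 2
        g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return g / g.sum()

    # A wider (more blurred) PSF attenuates high spatial frequencies more.
    mtf_sharp = mtf_from_psf(gaussian_psf(64, 1.0))
    mtf_blur = mtf_from_psf(gaussian_psf(64, 3.0))
    ```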

  16. Hierarchical segmentation-based image coding using hybrid quad-binary trees.

    PubMed

    Kassim, Ashraf A; Lee, Wei Siong; Zonoobi, Dornoosh

    2009-06-01

    A novel segmentation-based image approximation and coding technique is proposed. A hybrid quad-binary (QB) tree structure is utilized to efficiently model and code geometrical information within images. Compared to other tree-based representation such as wedgelets, the proposed QB-tree based method is more efficient for a wide range of contour features such as junctions, corners and ridges, especially at low bit rates.

  17. GENERATING FRACTAL PATTERNS BY USING p-CIRCLE INVERSION

    NASA Astrophysics Data System (ADS)

    Ramírez, José L.; Rubiano, Gustavo N.; Zlobec, Borut Jurčič

    2015-10-01

    In this paper, we introduce the p-circle inversion, which generalizes the classical inversion with respect to a circle (p = 2) and the taxicab inversion (p = 1). We study some basic properties and also show the inversive images of some basic curves. We apply this new transformation to well-known fractals such as the Sierpinski triangle, the Koch curve, the dragon curve, and the Fibonacci fractal, among others, obtaining new fractal patterns. Moreover, we generalize the method called circle inversion fractal by means of the p-circle inversion.
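    A minimal sketch of the p-circle inversion, assuming the natural generalization in which the image of a point lies on the same ray from the center and the product of p-norm distances equals r² (the paper's exact definition may differ in detail):

    ```python
    import numpy as np

    def p_norm(v, p):
        """p-norm of a 2-D vector."""
        return (abs(v[0])**p + abs(v[1])**p) ** (1.0 / p)

    def p_circle_inversion(x, center, r, p):
        """Invert point x with respect to the p-circle ||x - center||_p = r:
        the image lies on the same ray from the center, with
        ||x' - center||_p * ||x - center||_p = r**2.
        p = 2 recovers classical circle inversion, p = 1 the taxicab inversion."""
        v = np.asarray(x, float) - center
        d = p_norm(v, p)
        return center + (r**2 / d**2) * v

    c = np.array([0.0, 0.0])
    inside = p_circle_inversion([0.5, 0.0], c, 1.0, p=1)     # maps outside
    on_circle = p_circle_inversion([0.5, 0.5], c, 1.0, p=1)  # fixed point
    ```

    Points on the p-circle are fixed, and interior points map outside it, mirroring the behavior of classical circle inversion.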

  18. Fractal dimension analyses of lava surfaces and flow boundaries

    NASA Technical Reports Server (NTRS)

    Cleghorn, Timothy F.

    1993-01-01

    An improved method of estimating fractal surface dimensions has been developed. The accuracy of this method is illustrated using artificially generated fractal surfaces. A concept of linear dimension slightly different from the usual one is developed, allowing a direct link between it and the corresponding surface dimension estimate. These methods are applied to a series of images of lava flows representing a variety of physical and chemical conditions, including lavas from California, Idaho, and Hawaii, as well as some extraterrestrial flows. The fractal surface dimension estimates are presented, as well as the fractal line dimensions where appropriate.

  19. Automatic detection of microcalcifications with multi-fractal spectrum.

    PubMed

    Ding, Yong; Dai, Hang; Zhang, Hang

    2014-01-01

    To improve the detection of microcalcifications (MCs), this paper proposes an automatic MC detection system that makes use of the multi-fractal spectrum in digitized mammograms. The automatic detection system is based on the principle that normal tissues possess certain fractal properties that change in the presence of MCs. In this system, the multi-fractal spectrum is applied to reveal such fractal properties. By quantifying the deviations of the multi-fractal spectrums between normal tissues and MCs, the system can identify MCs altering the fractal properties and finally locate their position. The performance of the proposed system is compared with leading automatic detection systems on a mammographic image database. Experimental results demonstrate that the proposed system is statistically superior to most of the compared systems and delivers superior performance. PMID:25227013

  20. Fractals and fragmentation

    NASA Technical Reports Server (NTRS)

    Turcotte, D. L.

    1986-01-01

    The use of renormalization group techniques on fragmentation problems is examined. The equations which represent fractals and the size-frequency distributions of fragments are presented. Methods for calculating the size distributions of asteroids and meteorites are described; the frequency-mass distribution for these interplanetary objects is attributed to fragmentation. The application of two renormalization group models to fragmentation is analyzed. It is observed that the models yield fractal behavior for fragmentation; however, they produce different values for the fractal dimension. It is concluded that fragmentation is a scale-invariant process and that the fractal dimension is a measure of the fragility of the fragmented material.
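    The scale-invariant fragmentation idea can be illustrated with a renormalization model's closed form. The relation D = log(pieces · f) / log(scale) is the standard result for a model in which each block splits into equal sub-blocks, a fraction f of which fragment again; the particular numbers below are illustrative assumptions, not values from this article.

    ```python
    import numpy as np

    def fragmentation_dimension(f, pieces=8, scale=2):
        """Fractal dimension of a renormalization fragmentation model: each
        block splits into `pieces` sub-blocks smaller by `scale`, a fraction f
        of which fragment again. The surviving size-frequency distribution
        N(>r) ~ r**-D then has D = log(pieces * f) / log(scale)."""
        return np.log(pieces * f) / np.log(scale)

    # An assumed fragmentation probability around 0.71 yields D ~ 2.5, a value
    # often quoted for broken rock; f = 0.5 gives D = 2 exactly.
    d = fragmentation_dimension(0.71)
    ```

    Larger f (more fragile material) gives a larger D, which is the sense in which the fractal dimension measures fragility.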

  1. Turbo codes-based image transmission for channels with multiple types of distortion.

    PubMed

    Yao, Lei; Cao, Lei

    2008-11-01

    Product codes are generally used for progressive image transmission when random errors and packet loss (or burst errors) co-exist. However, optimal rate allocation considering both component codes gives rise to high optimization complexity. In addition, the decoding performance may degrade quickly when the channel varies beyond the design point. In this paper, we propose a new unequal error protection (UEP) scheme for progressive image transmission using rate-compatible punctured Turbo codes (RCPT) and cyclic redundancy check (CRC) codes only. By carefully interleaving each coded frame, packet loss can be converted into randomly punctured bits in a Turbo code. Therefore, error control in noisy channels with different types of errors is equivalent to dealing with random bit errors only, at reduced Turbo code rates. A genetic algorithm-based method is presented to further reduce the optimization complexity. The proposed method not only gives better performance than product codes under given channel conditions but is also more robust to channel variation. Finally, to break down the error floor of Turbo decoding, we extend the RCPT/CRC protection to a product code scheme by adding a Reed-Solomon (RS) code across the frames. The associated rate allocation is discussed and further improvement is demonstrated.
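    The interleaving idea, converting a lost packet (a burst) into isolated punctured bits, can be sketched with a simple block interleaver (an illustration of the principle, not the authors' interleaver):

    ```python
    import numpy as np

    def interleave(bits, rows, cols):
        """Write bits row-wise into a rows x cols array, read them column-wise."""
        return bits.reshape(rows, cols).T.flatten()

    def deinterleave(bits, rows, cols):
        """Inverse of interleave: write column-read data back row-wise."""
        return bits.reshape(cols, rows).T.flatten()

    rng = np.random.default_rng(4)
    frame = rng.integers(0, 2, 64)        # one coded frame of 64 bits
    tx = interleave(frame, 8, 8)

    # A lost packet erases 8 consecutive transmitted bits (a burst).
    erased = tx.astype(float)
    erased[24:32] = np.nan

    # After deinterleaving, the burst becomes isolated single-bit erasures,
    # which the decoder can treat as randomly punctured positions.
    rx = deinterleave(erased, 8, 8)
    gaps = np.diff(np.flatnonzero(np.isnan(rx)))
    ```

    In this 8 x 8 example the eight erased bits land exactly 8 positions apart in the deinterleaved frame, so no two erasures are adjacent.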

  2. Application Of Entropy-Constrained Vector Quantization To Waveform Coding Of Images

    NASA Astrophysics Data System (ADS)

    Chou, Philip A.

    1989-11-01

    An algorithm recently introduced to design vector quantizers for optimal joint performance with entropy codes is applied to waveform coding of monochrome images. Experiments show that when such entropy-constrained vector quantizers (ECVQs) are followed by optimal entropy codes, they outperform standard vector quantizers (VQs) that are also followed by optimal entropy codes, by several dB at equivalent bit rates. Two image sources are considered in these experiments: twenty-five 256 x 256 magnetic resonance (MR) brain scans produced by a General Electric Signa at Stanford University, and six 512 x 512 (luminance component) images from the standard USC image database. The MR images are blocked into 2 x 2 components, and the USC images are blocked into 4 x 4 components. Both sources are divided into training and test sequences. Under the mean squared error distortion measure, entropy-coded ECVQ shows an improvement over entropy-coded standard VQ by 3.83 dB on the MR test sequence at 1.29 bit/pixel, and by 1.70 dB on the USC test sequence at 0.40 bit/pixel. Further experiments, in which memory is added to both ECVQ and VQ systems, are in progress.
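
The entropy-constrained codeword selection can be sketched as follows: the encoder minimizes distortion plus a rate penalty, where codeword i ideally costs -log2(p_i) bits under the entropy code. The scalar codebook and the Lagrangian weight `lam` are hypothetical values chosen only to illustrate the trade-off:

```python
import math

def ecvq_encode(x, codebook, probs, lam):
    """Pick the codeword index minimizing distortion + lam * code length,
    where codeword i's ideal entropy-code length is -log2(probs[i])."""
    best, best_cost = None, float("inf")
    for i, (c, p) in enumerate(zip(codebook, probs)):
        cost = (x - c) ** 2 + lam * (-math.log2(p))
        if cost < best_cost:
            best, best_cost = i, cost
    return best

codebook = [0.0, 1.0]
probs = [0.9, 0.1]      # the rare codeword costs more bits
# With lam = 0 the encoder is a plain nearest-neighbour VQ...
print(ecvq_encode(0.6, codebook, probs, lam=0.0))  # -> 1
# ...but with a rate penalty the cheap, probable codeword wins.
print(ecvq_encode(0.6, codebook, probs, lam=0.2))  # -> 0
```

This is why ECVQ followed by an entropy code beats a distortion-only VQ at the same bit rate.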

  3. Coded excitation using periodic and unipolar M-sequences for photoacoustic imaging and flow measurement.

    PubMed

    Zhang, Haichong K; Kondo, Kengo; Yamakawa, Makoto; Shiina, Tsuyoshi

    2016-01-11

    Photoacoustic imaging is an emerging imaging technology combining optical imaging with ultrasound. Imaging of the optical absorption coefficient and flow measurement provide additional functional information compared to ultrasound. The issue with photoacoustic imaging is its low signal-to-noise ratio (SNR) due to scattering and attenuation; this is especially problematic when high pulse repetition frequency (PRF) lasers are used. In previous research, coded excitation using pseudorandom sequences has been considered as a solution to this problem. However, previously proposed temporal coding procedures using Golay codes or M-sequences are complex, and a sequence had to be sent twice to realize a bipolar sequence. Here, we propose a periodic and unipolar sequence (PUM), which is a periodic sequence derived from an m-sequence. The PUM can enhance signals without causing coding artifacts for single-wavelength excitation. In addition, it is possible to increase the temporal resolution, since the decoding start point can be set to any code in periodic irradiation, while only the first code of a sequence was available for conventional aperiodic irradiation. The SNR improvement and the increase in temporal resolution were experimentally validated through imaging evaluation and flow measurement. PMID:26832234
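
A small sketch of the m-sequence machinery underlying such codes, assuming the standard Fibonacci LFSR construction (the paper's actual PUM scheme is more involved). The sharp circular autocorrelation of the bipolar sequence is what makes decoding by correlation work:

```python
def m_sequence(nbits=3, taps=(3, 1)):
    """Generate a maximal-length (m-)sequence with a Fibonacci LFSR.
    taps are 1-indexed register positions; (3, 1) encodes x^3 + x + 1."""
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

seq = m_sequence()                  # unipolar {0, 1}, period 7
bipolar = [2 * b - 1 for b in seq]  # map to {-1, +1}
N = len(bipolar)
autocorr = [sum(bipolar[i] * bipolar[(i + k) % N] for i in range(N))
            for k in range(N)]
print(seq)       # [1, 1, 1, 0, 1, 0, 0]
print(autocorr)  # [7, -1, -1, -1, -1, -1, -1]: peak at lag 0, flat elsewhere
```

Because the autocorrelation is two-valued over a full period, a periodic transmission can be decoded starting from any code position, which is the temporal-resolution advantage the abstract describes.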

  4. Memory-efficient contour-based region-of-interest coding of arbitrarily large images

    NASA Astrophysics Data System (ADS)

    Sadaka, Nabil G.; Abousleman, Glen P.; Karam, Lina J.

    2007-04-01

    In this paper, we present a memory-efficient, contour-based, region-of-interest (ROI) algorithm designed for ultra-low-bit-rate compression of very large images. The proposed technique is integrated into a user-interactive wavelet-based image coding system in which multiple ROIs of any shape and size can be selected and coded efficiently. The coding technique compresses region-of-interest and background (non-ROI) information independently by allocating more bits to the selected targets and fewer bits to the background data. This allows the user to transmit large images at very low bandwidths with lossy/lossless ROI coding, while preserving the background content to a certain level for contextual purposes. Extremely large images (e.g., 65,000 x 65,000 pixels) with multiple large ROIs can be coded with minimal memory usage by using intelligent ROI tiling techniques. The foreground information at the encoder/decoder is independently extracted for each tile without adding extra ROI side information to the bit stream. The arbitrary ROI contour is down-sampled and differential chain coded (DCC) for efficient transmission. ROI wavelet masks for each tile are generated and processed independently to handle any size of image and any shape/size of overlapping ROIs. The resulting system dramatically reduces the data storage and transmission bandwidth requirements for large digital images with multiple ROIs.
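
The differential chain coding (DCC) step mentioned above can be illustrated with a toy contour; the 8-direction convention and helper names are assumptions, not the paper's code. Differential codes cluster around small values on smooth contours, so they entropy-code cheaply:

```python
# 8-connected chain directions: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
MOVES = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
         (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Absolute chain code of a closed contour given as lattice points."""
    code = []
    for a, b in zip(points, points[1:] + points[:1]):
        code.append(MOVES[(b[0] - a[0], b[1] - a[1])])
    return code

def differential_chain_code(code):
    """Differences between successive directions, mod 8."""
    prev = [code[-1]] + code[:-1]
    return [(c - p) % 8 for p, c in zip(prev, code)]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # unit square, counter-clockwise
cc = chain_code(square)
print(cc)                           # [0, 2, 4, 6]
print(differential_chain_code(cc))  # [2, 2, 2, 2] -- constant for a square
```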

  5. Study of Optical Properties on Fractal Aggregation Using the GMM Method by Different Cluster Parameters

    NASA Astrophysics Data System (ADS)

    Chang, Kuo-En; Lin, Tang-Huang; Lien, Wei-Hung

    2015-04-01

    Anthropogenic pollutants and smoke from biomass burning contribute significantly to global particle aggregation emissions, yet their aggregate formation and resulting ensemble optical properties are poorly understood and parameterized in climate models. Particle aggregation refers to the formation of clusters in a colloidal suspension. In clustering algorithms, many parameters, such as the fractal dimension, the number of monomers, the monomer radius, and the real and imaginary parts of the refractive index, alter the geometry and characteristics of the fractal aggregate and in turn change its ensemble optical properties. The cluster-cluster aggregation (CCA) algorithm is used to specify the geometries of soot and haze particles. In addition, the Generalized Multi-particle Mie (GMM) method is utilized to extend the Mie solution from the single-particle to the multi-particle case. The computer code for calculating the scattering by an aggregate of spheres in a fixed orientation, together with the experimental data, has been made publicly available. This study presents the optical determination of the model inputs: the monomer radius, the number of monomers per cluster, and the fractal dimension. The main aim of this study is to analyze and contrast how the aforementioned cluster-aggregation parameters produce significant differences in the optical properties computed with the GMM method. Keywords: optical properties, fractal aggregation, GMM, CCA

  6. Fractal analysis of yeast cell optical speckle

    NASA Astrophysics Data System (ADS)

    Flamholz, A.; Schneider, P. S.; Subramaniam, R.; Wong, P. K.; Lieberman, D. H.; Cheung, T. D.; Burgos, J.; Leon, K.; Romero, J.

    2006-02-01

    Steady-state laser light propagation in diffuse media such as biological cells generally provides bulk parameter information, such as the mean free path and absorption, via the transmission profile. The accompanying optical speckle can be analyzed as a random spatial data series, and its fractal dimension can be used to further classify biological media that show similar mean free path and absorption properties, such as those obtained from a single population. A population of yeast cells can be separated into different portions by centrifuge, and microscope analysis can be used to provide the population statistics. Fractal analysis of the speckle suggests that a lower fractal dimension is associated with higher cell packing density. The spatial intensity correlation revealed that higher cell packing gives rise to a higher refractive index. A calibration sample system that behaves similarly to the yeast samples in fractal dimension, spatial intensity correlation, and diffusion was selected. Porous silicate slabs with different refractive index values, controlled by water content, were used for system calibration. The fractal dimension of the porous glass, as well as of the yeast random spatial data series, was found to depend on the imaging resolution. The fractal method was also applied to fission yeast single-cell fluorescence data as well as aging yeast optical data, and consistency was demonstrated. It is concluded that fractal analysis can be a high-sensitivity tool for relative comparison of cell structure, but that additional diffusion measurements are necessary for determining the optimal image resolution. Practical application to dental plaque biofilm and cam-pill endoscope images was also demonstrated.

  7. Medical Image Compression Using a New Subband Coding Method

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug

    1995-01-01

    A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.

  8. Infrared imaging - A validation technique for computational fluid dynamics codes used in STOVL applications

    NASA Technical Reports Server (NTRS)

    Hardman, R. R.; Mahan, J. R.; Smith, M. H.; Gelhausen, P. A.; Van Dalsem, W. R.

    1991-01-01

    The need for a validation technique for computational fluid dynamics (CFD) codes in STOVL applications has led to research efforts to apply infrared thermal imaging techniques to visualize gaseous flow fields. Specifically, a heated, free-jet test facility was constructed. The gaseous flow field of the jet exhaust was characterized using an infrared imaging technique in the 2 to 5.6 micron wavelength band as well as conventional pitot tube and thermocouple methods. These infrared images are compared to computer-generated images using the equations of radiative exchange based on the temperature distribution in the jet exhaust measured with the thermocouple traverses. Temperature and velocity measurement techniques, infrared imaging, and the computer model of the infrared imaging technique are presented and discussed. From the study, it is concluded that infrared imaging techniques coupled with the radiative exchange equations applied to CFD models are a valid method to qualitatively verify CFD codes used in STOVL applications.

  9. The analysis of the influence of fractal structure of stimuli on fractal dynamics in fixational eye movements and EEG signal.

    PubMed

    Namazi, Hamidreza; Kulish, Vladimir V; Akrami, Amin

    2016-01-01

    One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of the visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to the 'complex' visual stimulus. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain as the main part of the nervous system engaged in eye movements, we also analyzed the electroencephalogram (EEG) signal recorded during fixation. We found that there is a coupling between the fractality of the image, the EEG, and the fixational eye movements. The capability observed in this research can be further investigated and applied to the treatment of different vision disorders. PMID:27217194

  12. Design of a coded aperture Compton telescope imaging system (CACTIS)

    NASA Astrophysics Data System (ADS)

    Volkovskii, Alexander; Clajus, Martin; Gottesman, Stephen R.; Malik, Hans; Schwartz, Kenneth; Tumer, Evren; Tumer, Tumay; Yin, Shi

    2010-08-01

    We have developed a prototype of a scalable high-resolution direction- and energy-sensitive gamma-ray detection system that operates in both coded aperture (CA) and Compton scatter (CS) modes to obtain optimal efficiency and angular resolution over a wide energy range. The design consists of an active coded aperture constructed from 52 individual CZT planar detectors, each measuring 3 x 3 x 6 mm³, arranged in a MURA pattern on a 10 x 10 grid, with a monolithic 20 x 20 x 5 mm³ pixelated (8 x 8) CZT array serving as the focal plane. The combined mode is achieved by using the aperture plane array both for Compton scattering of high-energy photons and as a coded mask for low-energy radiation. The prototype instrument was built using two RENA-3 test systems, one each for the aperture and the focal plane, stacked on top of each other at a distance of 130 mm. The test systems were modified to coordinate (synchronize) readout and provide coincidence information of events within a user-adjustable 40-1,280 ns window. The measured angular resolution of the device is <1 deg (17 mrad) in CA mode and is predicted to be approximately 3 deg (54 mrad) in CS mode. The energy resolution of the CZT detectors is approximately 5% FWHM at 120 keV. We will present details of the system design and initial results for the calibration and performance of the prototype.
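
The MURA mask mentioned above follows a standard quadratic-residue construction. This sketch assumes the textbook definition for a prime order p rather than the instrument's actual 52-element layout; roughly half of the elements end up open:

```python
def mura_mask(p):
    """Binary MURA mask of prime order p (1 = open element), built from
    quadratic residues mod p. Open elements number (p*p - 1) // 2."""
    qr = {(i * i) % p for i in range(1, p)}          # quadratic residues mod p
    C = [1 if i in qr else -1 for i in range(p)]
    mask = [[0] * p for _ in range(p)]
    for i in range(p):
        for j in range(p):
            if i == 0:
                mask[i][j] = 0                       # top row closed
            elif j == 0:
                mask[i][j] = 1                       # left column open
            else:
                mask[i][j] = 1 if C[i] * C[j] == 1 else 0
    return mask

m = mura_mask(5)
open_elements = sum(map(sum, m))
print(open_elements)  # 12 == (5*5 - 1) // 2
```

The near-50% open fraction is what gives coded apertures their throughput advantage over a single pinhole.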

  13. Fractal Analysis of Cervical Intraepithelial Neoplasia

    PubMed Central

    Fabrizii, Markus; Moinfar, Farid; Jelinek, Herbert F.; Karperien, Audrey; Ahammer, Helmut

    2014-01-01

    Introduction: Cervical intraepithelial neoplasias (CIN) represent precursor lesions of cervical cancer. These neoplastic lesions are traditionally subdivided into three categories, CIN 1, CIN 2, and CIN 3, using microscopic criteria. The relation between the grade of cervical intraepithelial neoplasia and its fractal dimension was investigated to establish a basis for an objective diagnosis using the proposed method. Methods: Classical evaluation of the tissue samples was performed by an experienced gynecologic pathologist. Tissue samples were scanned and saved as digital images using an Aperio scanner and software. After image segmentation, the box-counting method as well as multifractal methods were applied to determine the relation between fractal dimension and grade of CIN. A total of 46 images were used to compare the pathologist's neoplasia grades with the predicted groups obtained by fractal methods. Results: Significant or highly significant differences between all grades of CIN were found. The confusion matrix comparing the pathologist's grading with the groups predicted by fractal methods showed a match of 87.1%. Multifractal spectra were able to differentiate between normal epithelium and low-grade as well as high-grade neoplasia. Conclusion: Fractal dimension can be considered an objective parameter for grading cervical intraepithelial neoplasia. PMID:25302712
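
A minimal box-counting sketch in the spirit of the Methods section, assuming a binary image stored as nested lists; the filled square is only a sanity check (its dimension is exactly 2), not the paper's data:

```python
import math

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a square binary image by box
    counting: the slope of log N(s) versus log(1/s)."""
    n = len(img)
    xs, ys = [], []
    for s in sizes:
        count = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                # a box counts if it covers at least one foreground pixel
                if any(img[a][b] for a in range(i, i + s)
                       for b in range(j, j + s)):
                    count += 1
        xs.append(math.log(1.0 / s))
        ys.append(math.log(count))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

filled = [[1] * 64 for _ in range(64)]   # a filled square is 2-dimensional
print(round(box_counting_dimension(filled), 6))  # 2.0
```

On segmented epithelium, the estimate falls between 1 and 2, and the abstract reports that it separates the CIN grades.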

  14. Imaging flow cytometer using computation and spatially coded filter

    NASA Astrophysics Data System (ADS)

    Han, Yuanyuan; Lo, Yu-Hwa

    2016-03-01

    Flow cytometry analyzes multiple physical characteristics of a large population of single cells as they flow in a fluid stream through an excitation light beam. Flow cytometers measure fluorescence and light scattering, from which information about the biological and physical properties of individual cells is obtained. Although flow cytometers have massive statistical power due to their single-cell resolution and high throughput, they produce no information about cell morphology or the spatial resolution offered by microscopy, a much-wanted feature missing in almost all flow cytometers. In this paper, we introduce a method of spatial-temporal transformation to provide flow cytometers with cell imaging capabilities. The method uses mathematical algorithms and a specially designed spatial filter as the only hardware needed to give flow cytometers imaging capabilities. Instead of the CCDs or megapixel cameras found in conventional imaging systems, we obtain high-quality images of fast-moving cells in a flow cytometer using photomultiplier tube (PMT) detectors, thus achieving high throughput in a manner fully compatible with existing cytometers. In fact, our approach can be applied to retrofit traditional flow cytometers into imaging flow cytometers at minimal cost. To prove the concept, we demonstrate cell imaging for cells travelling at a velocity of 0.2 m/s in a microfluidic channel, corresponding to a throughput of approximately 1,000 cells per second.

  15. Image gathering and coding for digital restoration: Information efficiency and visual quality

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; John, Sarah; Mccormick, Judith A.; Narayanswamy, Ramkumar

    1989-01-01

    Image gathering and coding are commonly treated as tasks separate from each other and from the digital processing used to restore and enhance the images. The goal is to develop a method that allows us to assess quantitatively the combined performance of image gathering and coding for the digital restoration of images with high visual quality. Digital restoration is often interactive because visual quality depends on perceptual rather than mathematical considerations, and these considerations vary with the target, the application, and the observer. The approach is based on the theoretical treatment of image gathering as a communication channel (J. Opt. Soc. Am. A 2, 1644 (1985); 5, 285 (1988)). Initial results suggest that the practical upper limit of the information contained in the acquired image data ranges typically from approximately 2 to 4 binary information units (bifs) per sample, depending on the design of the image-gathering system. The associated information efficiency of the transmitted data (i.e., the ratio of information over data) ranges typically from approximately 0.3 to 0.5 bif per bit without coding to approximately 0.5 to 0.9 bif per bit with lossless predictive compression and Huffman coding. The visual quality that can be attained with interactive image restoration improves perceptibly as the available information increases to approximately 3 bifs per sample. However, the perceptual improvements that can be attained with further increases in information are very subtle and depend on the target and the desired enhancement.
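
The information-efficiency figures above compare information content to transmitted bits; a Huffman sketch (our own toy, not the paper's coder) makes the ratio concrete. For a dyadic source the average Huffman code length equals the entropy exactly, i.e. an efficiency of 1 bif per bit:

```python
import heapq
import itertools
import math

def huffman_lengths(probs):
    """Code lengths of an optimal Huffman code for the given probabilities."""
    counter = itertools.count()      # tie-breaker so the heap never compares lists
    heap = [(p, next(counter), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:            # every symbol under the merge gets one bit deeper
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next(counter), s1 + s2))
    return lengths

probs = [0.5, 0.25, 0.125, 0.125]    # dyadic probabilities
L = huffman_lengths(probs)
avg = sum(p * l for p, l in zip(probs, L))
H = -sum(p * math.log2(p) for p in probs)
print(L, avg, H)  # [1, 2, 3, 3] 1.75 1.75 -- code length meets the entropy bound
```

For non-dyadic sources the average length exceeds the entropy, and the gap is exactly the efficiency loss the abstract quantifies in bifs per bit.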

  16. Implementational Aspects of the Contourlet Filter Bank and Application in Image Coding

    NASA Astrophysics Data System (ADS)

    Nguyen, Truong T.; Liu, Yilong; Chauris, Hervé; Oraintara, Soontorn

    2008-12-01

    This paper analyzes the implementational aspects of the contourlet filter bank (or pyramidal directional filter bank, PDFB) and considers its application to image coding. First, details of the binary tree-structured directional filter bank (DFB) are presented, including a modification to minimize the phase delay factor and the steps necessary for handling rectangular images. The PDFB is viewed as an overcomplete filter bank, and the directional filters are expressed in terms of polyphase components of the pyramidal filter bank and the conventional DFB. The aliasing effect of the conventional DFB and the Laplacian pyramid on the directional filters is then considered, and conditions for reducing this effect are presented. The new filters, obtained by redesigning the PDFB to satisfy these requirements, have much better frequency responses. A hybrid multiscale filter bank, consisting of the PDFB at higher scales and the traditional maximally decimated wavelet filter bank at lower scales, is constructed to provide a sparse image representation. A novel embedded image coding system based on this image decomposition and a morphological dilation algorithm is then presented. The coding algorithm efficiently clusters the significant coefficients using progressive morphological operations. Context models for arithmetic coding are designed to exploit the intraband dependency and the correlation among neighboring directional subbands. Experimental results show that the proposed coding algorithm outperforms current state-of-the-art wavelet-based coders, such as JPEG2000, for images with directional features.

  17. A source-channel coding approach to digital image protection and self-recovery.

    PubMed

    Sarreshtedari, Saeed; Akhaee, Mohammad Ali

    2015-07-01

    Watermarking algorithms have recently been widely applied in the field of image forensics. One such forensic application is the protection of images against tampering. For this purpose, a watermarking algorithm must fulfill two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering the lost reference bits still stands. This paper aims to show that, with the tampering location known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate channel code design can protect the reference bits against tampering. In the proposed method, the total watermark bit budget is divided among three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, the erasure locations detected by the check bits help the channel erasure decoder retrieve the original source-encoded image. Experimental results show that the proposed scheme significantly outperforms recent techniques in terms of image quality for both the watermarked and the recovered image. The watermarked image quality gain is achieved by spending less of the bit budget on the watermark, while the image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes.
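
The erasure-channel view described above can be illustrated with the simplest possible channel code, a single XOR parity block; the actual scheme uses a stronger channel code, so this is only a sketch of the principle that a known erasure location makes recovery easy:

```python
def add_parity(blocks):
    """Append one XOR parity block; any single erased block is recoverable."""
    parity = 0
    for b in blocks:
        parity ^= b
    return blocks + [parity]

def recover(blocks_with_parity, erased_index):
    """Rebuild the erased block (its position is known, as with tampering
    localized by check bits) by XOR-ing all surviving blocks."""
    value = 0
    for i, b in enumerate(blocks_with_parity):
        if i != erased_index:
            value ^= b
    return value

data = [0b1010, 0b0111, 0b1100]
coded = add_parity(data)
# "Tamper" with block 1 -- its location is known, so it is an erasure.
print(recover(coded, erased_index=1))  # 7 == 0b0111, the original block
```

A Reed-Solomon code generalizes this: with r parity symbols, up to r erasures at known locations can be corrected, which is why the reference bits survive tampering.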

  18. Scene-Level Geographic Image Classification Based on a Covariance Descriptor Using Supervised Collaborative Kernel Coding.

    PubMed

    Yang, Chunwei; Liu, Huaping; Wang, Shicheng; Liao, Shouyi

    2016-01-01

    Scene-level geographic image classification has been a very challenging problem and has become a research focus in recent years. This paper develops a supervised collaborative kernel coding method based on a covariance descriptor (covd) for scene-level geographic image classification. First, the covd is introduced in the feature extraction process and is then transformed into a Euclidean feature by a supervised collaborative kernel coding model. Furthermore, we develop an iterative optimization framework to solve this model. Comprehensive evaluations on a public high-resolution aerial image dataset, together with comparisons against state-of-the-art methods, show the superiority and effectiveness of our approach.

  20. Fractal structures and processes

    SciTech Connect

    Bassingthwaighte, J.B.; Beard, D.A.; Percival, D.B.; Raymond, G.M.

    1996-06-01

    Fractals and chaos are closely related; many chaotic systems have fractal features. Fractals are self-similar or self-affine structures, which means that they look much the same when magnified or reduced in scale over a reasonably large range of scales, at least two orders of magnitude and preferably more (Mandelbrot, 1983). The methods for estimating their fractal dimensions or their Hurst coefficients, which summarize the scaling relationships and their correlation structures, are going through a rapid evolutionary phase. Fractal measures can be regarded as providing a useful statistical measure of correlated random processes. They also provide a basis for analyzing recursive processes in biology, such as the growth of arborizing networks in the circulatory system, airways, or glandular ducts. © 1996 American Institute of Physics.
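
A rescaled-range (R/S) sketch of the Hurst-coefficient estimation mentioned above; the window sizes and the white-noise test signal are our own choices, not from the record. Uncorrelated noise should give a Hurst exponent near 0.5, while persistent (correlated) processes push it toward 1:

```python
import math
import random

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    """Rescaled-range (R/S) estimate of the Hurst exponent: the slope of
    log(R/S) versus log(window size)."""
    xs, ys = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            chunk = series[start:start + n]
            mean = sum(chunk) / n
            dev = [x - mean for x in chunk]
            cum, c = [], 0.0               # cumulative deviation in the window
            for d in dev:
                c += d
                cum.append(c)
            r = max(cum) - min(cum)        # range of cumulative deviations
            s = math.sqrt(sum(d * d for d in dev) / n)
            if s > 0:
                rs_vals.append(r / s)
        xs.append(math.log(n))
        ys.append(math.log(sum(rs_vals) / len(rs_vals)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

random.seed(1)
white = [random.gauss(0, 1) for _ in range(1024)]
print(round(hurst_rs(white), 2))  # close to 0.5 for uncorrelated noise
```

Note that the classical R/S estimator is biased upward at small window sizes, one reason the abstract describes these methods as still evolving.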

  1. Fractal metrology for biogeosystems analysis

    NASA Astrophysics Data System (ADS)

    Torres-Argüelles, V.; Oleschko, K.; Tarquis, A. M.; Korvin, G.; Gaona, C.; Parrot, J.-F.; Ventura-Ramos, E.

    2010-06-01

    The solid-pore distribution pattern plays an important role in soil functioning, being related to the main physical, chemical and biological multiscale and multitemporal processes. In the present research, this pattern is extracted from the digital images of three soils (Chernozem, Solonetz and "Chocolate" Clay) and compared in terms of the roughness of the gray-intensity distribution (the measurand), quantified by several measurement techniques. Special attention was paid to the uncertainty of each of them and to the measurement function which best fits the experimental results. Some of the applied techniques are known as classical in the fractal context (box-counting, rescaled-range and wavelet analyses, etc.) while the others have been recently developed by our Group. The combination of all these techniques, coming from Fractal Geometry, Metrology, Informatics, Probability Theory and Statistics, is termed in this paper Fractal Metrology (FM). We show the usefulness of FM through a case study of soil physical and chemical degradation, applying the selected toolbox to describe and compare the main structural attributes of three porous media with contrasting structure but similar clay mineralogy dominated by montmorillonites.

  2. Fractal Metrology for biogeosystems analysis

    NASA Astrophysics Data System (ADS)

    Torres-Argüelles, V.; Oleschko, K.; Tarquis, A. M.; Korvin, G.; Gaona, C.; Parrot, J.-F.; Ventura-Ramos, E.

    2010-11-01

    The solid-pore distribution pattern plays an important role in soil functioning, being related to the main physical, chemical and biological multiscale and multitemporal processes of this complex system. In the present research, we studied the aggregation process as self-organizing and operating near a critical point. The structural pattern is extracted from the digital images of three soils (Chernozem, Solonetz and "Chocolate" Clay) and compared in terms of the roughness of the gray-intensity distribution, quantified by several measurement techniques. Special attention was paid to the uncertainty of each of them, measured in terms of the standard deviation. Some of the applied methods are known as classical in the fractal context (box-counting, rescaled-range and wavelet analyses, etc.) while the others have been recently developed by our Group. The combination of these techniques, coming from Fractal Geometry, Metrology, Informatics, Probability Theory and Statistics, is termed in this paper Fractal Metrology (FM). We show the usefulness of FM for complex systems analysis through a case study of the soil's physical and chemical degradation, applying the selected toolbox to describe and compare the structural attributes of three porous media with contrasting structure but similar clay mineralogy dominated by montmorillonites.

  3. A lossless compression method for medical image sequences using JPEG-LS and interframe coding.

    PubMed

    Miaou, Shaou-Gang; Ke, Fu-Sheng; Chen, Shu-Ching

    2009-09-01

    Hospitals and medical centers produce an enormous number of digital medical images every day, especially in the form of image sequences, which require considerable storage space. One solution is the application of lossless compression. Among the available methods, JPEG-LS has excellent coding performance. However, it compresses only a single picture with intra-coding and does not utilize the interframe correlation among pictures. Therefore, this paper proposes a method that combines JPEG-LS with interframe coding using motion vectors to enhance the compression performance over using JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video sequence, the interframe coding is activated only when the interframe correlation is high enough. With six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over using JPEG-LS and JPEG2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5%, respectively, are obtained.
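
The correlation-gated mode decision described above can be sketched as follows; the threshold value and the toy frames are illustrative assumptions, not the paper's parameters. When two frames correlate strongly, coding the small difference signal is cheaper than coding the frame from scratch:

```python
def correlation(a, b):
    """Normalized correlation coefficient between two frames (flat lists)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def choose_mode(prev, cur, threshold=0.9):
    """Interframe (difference) coding only when frames correlate strongly;
    otherwise fall back to intra coding, as in the hybrid scheme above."""
    if prev is not None and correlation(prev, cur) >= threshold:
        return "inter", [c - p for p, c in zip(prev, cur)]
    return "intra", cur

f1 = [10, 12, 14, 16, 18, 20]
f2 = [11, 13, 15, 17, 19, 21]    # f1 shifted by +1: highly correlated
mode, payload = choose_mode(f1, f2)
print(mode, payload)  # inter [1, 1, 1, 1, 1, 1]
```

The low-entropy residual is then handed to the lossless coder, which is where the reported gains over intra-only JPEG-LS come from.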

  4. Domain and range decomposition methods for coded aperture x-ray coherent scatter imaging

    NASA Astrophysics Data System (ADS)

    Odinaka, Ikenna; Kaganovsky, Yan; O'Sullivan, Joseph A.; Politte, David G.; Holmgren, Andrew D.; Greenberg, Joel A.; Carin, Lawrence; Brady, David J.

    2016-05-01

    Coded aperture X-ray coherent scatter imaging is a novel modality for ascertaining the molecular structure of an object. Measurements from different spatial locations and spectral channels in the object are multiplexed through a radiopaque material (coded aperture) onto the detectors. Iterative algorithms such as penalized expectation maximization (EM) and fully separable spectrally-grouped edge-preserving reconstruction have been proposed to recover the spatially-dependent coherent scatter spectral image from the multiplexed measurements. Such image recovery methods fall into the category of domain decomposition methods, since they recover the image one independent piece at a time. Ordered subsets has also been utilized in conjunction with penalized EM to accelerate its convergence. Ordered subsets is a range decomposition method because it uses part of the measurements at a time to recover the image. In this paper, we analyze domain and range decomposition methods as they apply to coded aperture X-ray coherent scatter imaging using a spectrally-grouped edge-preserving regularizer, and discuss the implications of the increased availability of parallel computational architectures for the choice of decomposition method. We present results of applying the decomposition methods to experimental coded aperture X-ray coherent scatter measurements. Based on the results, an underlying observation is that updating different parts of the image, or using different parts of the measurements, in parallel decreases the rate of convergence, whereas using the parts sequentially can accelerate it.

  5. Optimal pyramidal and subband decompositions for hierarchical coding of noisy and quantized images.

    PubMed

    Gerassimos Strintzis, M

    1998-01-01

    Optimal hierarchical coding is sought, for progressive or scalable image transmission, by minimizing the variance of the error difference between the original image and its lower resolution renditions. The optimal, according to the above criterion, pyramidal and subband image coders are determined for images subject to corruption by quantization or transmission noise. Given arbitrary analysis filters and assuming adequate knowledge of the noise statistics, optimal synthesis filters are found. The optimal analysis filters are subsequently determined, leading to formulas for globally optimal structures for pyramidal and subband image decompositions. Experimental results illustrate the implementation and performance of the optimal coders.

  6. Lossy compression of MERIS superspectral images with exogenous quasi optimal coding transforms

    NASA Astrophysics Data System (ADS)

    Akam Bita, Isidore Paul; Barret, Michel; Dalla Vedova, Florio; Gutzwiller, Jean-Louis

    2009-08-01

    Our research focuses on reducing the complexity of hyperspectral image codecs based on transform and/or subband coding, so that they can run on board a satellite. It is well known that the Karhunen-Loève Transform (KLT) can be sub-optimal in transform coding for non-Gaussian data. However, it is generally recommended as the best computable linear coding transform in practice. The concept and computation of optimal coding transforms (OCT), under mildly restrictive hypotheses at high bit-rates, were carried out and adapted to a compression scheme compatible with both the JPEG2000 Part 2 standard and the CCSDS recommendations for on-board satellite image compression, leading to the concept and computation of Optimal Spectral Transforms (OST). These linear transforms are optimal for reducing the spectral redundancies of multi- or hyper-spectral images when the spatial redundancies are reduced with a fixed 2-D Discrete Wavelet Transform (DWT). The drawback of the OST is its heavy computational cost. In this paper we present the coding performance of a quasi-optimal spectral transform, called exogenous OrthOST, obtained by learning an orthogonal OST on a sample of superspectral images from the spectrometer MERIS. The performance is presented in terms of bit-rate versus distortion for four distortion measures and compared to that of the KLT. We observe good performance of the exogenous OrthOST, as was the case on Hyperion hyperspectral images in previous work.
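
    The spectral decorrelation at the heart of a KLT-based coder can be sketched as follows. This is a generic KLT across bands, not the authors' OST computation: rotate each pixel's band vector into the eigenbasis of the spectral covariance, which diagonalizes it.

```python
import numpy as np

def klt(bands):
    """Karhunen-Loeve transform along the spectral dimension.

    `bands` has shape (n_bands, n_pixels). The returned components are
    decorrelated: their covariance matrix is diagonal.
    """
    centered = bands - bands.mean(axis=1, keepdims=True)
    cov = np.cov(centered)                    # n_bands x n_bands spectral covariance
    _, vecs = np.linalg.eigh(cov)             # orthonormal eigenbasis
    return vecs.T @ centered, vecs
```

    After this rotation, each component can be wavelet-transformed and quantized independently, which is the role the fixed 2-D DWT plays in the scheme above.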

  7. DCT/DST-based transform coding for intra prediction in image/video coding.

    PubMed

    Saxena, Ankur; Fernandes, Felix C

    2013-10-01

    In this paper, we present a DCT/DST-based transform scheme that applies either the conventional DCT or the type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra-prediction scheme in a codec that applies transforms separably along the horizontal and vertical directions. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of the horizontal and vertical modes, the DST is the optimal transform, with performance close to the KLT. Here, we prove that this is indeed the case for the oblique modes as well. The optimal choice between DCT and DST is based on the intra-prediction mode and requires no additional signaling or rate-distortion search. The DCT/DST scheme presented in this paper was adopted in the HEVC standardization in March 2011. Further simplifications, adopted in the HEVC standard in July 2012 chiefly to reduce implementation complexity, remove the mode dependency between DCT and DST and simply always use the DST for the 4 × 4 intra luma blocks. Simulations of the DCT/DST algorithm were conducted in the reference software of the ongoing HEVC standardization. Our results show that the DCT/DST scheme provides significant BD-rate improvement over the conventional DCT-based scheme for intra prediction in video sequences.
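
    The type-7 DST mentioned above has a simple closed form. A minimal sketch constructing the orthonormal 4-point DST-VII; its first basis vector, scaled by 128 and rounded, gives HEVC's integer approximation {29, 55, 74, 84}:

```python
import numpy as np

def dst7_matrix(n):
    """Orthonormal DST-VII matrix of size n x n.

    Rows are basis vectors whose amplitude grows away from the predicted
    boundary, matching the statistics of intra-prediction residuals.
    """
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return 2 / np.sqrt(2 * n + 1) * np.sin(
        np.pi * (2 * i + 1) * (j + 1) / (2 * n + 1))

T = dst7_matrix(4)
# np.round(128 * T[0]) recovers HEVC's integer first basis vector: 29, 55, 74, 84.
```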

  8. Hierarchical prediction and context adaptive coding for lossless color image compression.

    PubMed

    Kim, Seyun; Cho, Nam Ik

    2014-01-01

    This paper presents a new lossless color image compression algorithm based on hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, the image is first decorrelated by a reversible color transform, and then the Y component is encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for pixel prediction, whereas conventional raster-scan prediction methods use only upper and left pixels. An appropriate context model for the prediction error is also defined, and arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG2000 and JPEG-XR.
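
    The abstract does not specify which reversible color transform is used; the JPEG 2000 reversible color transform (RCT) is a representative example of lossless RGB decorrelation and is sketched here for illustration.

```python
import numpy as np

def rct_forward(r, g, b):
    """JPEG 2000 reversible color transform (integer, exactly invertible).

    A representative lossless decorrelation; the paper's exact transform
    may differ. Inputs are integer arrays (or Python ints).
    """
    y = (r + 2 * g + b) >> 2   # integer luma approximation (floor division by 4)
    u = r - g                  # chrominance differences
    v = b - g
    return y, u, v

def rct_inverse(y, u, v):
    g = y - ((u + v) >> 2)     # recovers g exactly despite the floor in y
    return u + g, g, v + g     # r, g, b
```

    The key design point is that the floor in the forward luma is exactly undone by the floor in the inverse, so the round trip is bit-exact, a precondition for lossless coding.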

  9. Coded aperture detector: an image sensor with sub 20-nm pixel resolution.

    PubMed

    Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick

    2014-08-11

    We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths. PMID:25321062
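
    The "simple convolution" reconstruction of a URA can be illustrated in one dimension. A sketch assuming a quadratic-residue URA of prime length p ≡ 3 (mod 4), not CXRO's actual 2-D array: correlating the detector signal with the bipolar decoding array 2A − 1 yields a perfect delta-function system response.

```python
import numpy as np

def legendre_ura(p):
    """1-D URA: open (1) at quadratic residues mod prime p = 3 (mod 4),
    with position 0 open by convention."""
    qr = {(k * k) % p for k in range(1, p)}
    return np.array([1] + [1 if n in qr else 0 for n in range(1, p)])

p = 11
a = legendre_ura(p)                 # aperture, (p + 1) / 2 openings
g = 2 * a - 1                       # bipolar decoding array
obj = np.zeros(p); obj[3] = 5.0     # toy 1-D scene: one point source
# Detector = cyclic convolution of scene with aperture.
det = np.array([sum(obj[m] * a[(n - m) % p] for m in range(p)) for n in range(p)])
# Reconstruction = cyclic correlation of detector with decoding array.
rec = np.array([sum(det[n] * g[(n - k) % p] for n in range(p)) for k in range(p)])
# rec equals (p + 1) / 2 times obj: the sidelobes are exactly flat (zero).
```

    The flat-sidelobe property is exactly the URA autocorrelation result cited in entry 19 below: noise in the reconstruction is uniform regardless of object structure.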

  10. Lensless coded-aperture imaging with separable Doubly-Toeplitz masks

    NASA Astrophysics Data System (ADS)

    DeWeert, Michael J.; Farm, Brian P.

    2015-02-01

    In certain imaging applications, conventional lens technology is constrained by the lack of materials which can effectively focus the radiation within a reasonable weight and volume. One solution is to use coded apertures: opaque plates perforated with multiple pinhole-like openings. If the openings are arranged in an appropriate pattern, then the images can be decoded and a clear image computed. Recently, computational imaging and the search for a means of producing programmable software-defined optics have revived interest in coded apertures. The former state-of-the-art masks, modified uniformly redundant arrays (MURAs), are effective for compact objects against uniform backgrounds, but have substantial drawbacks for extended scenes: (1) MURAs present an inherently ill-posed inversion problem that is unmanageable for large images, and (2) they are susceptible to diffraction: a diffracted MURA is no longer a MURA. We present a new class of coded apertures, separable Doubly-Toeplitz masks, which are efficiently decodable even for very large images, orders of magnitude faster than MURAs, and which remain decodable when diffracted. We implemented the masks using programmable spatial-light-modulators. Imaging experiments confirmed the effectiveness of separable Doubly-Toeplitz masks: images collected in natural light of extended outdoor scenes are rendered clearly.

  11. Image enhancement using MCNP5 code and MATLAB in neutron radiography.

    PubMed

    Tharwat, Montaser; Mohamed, Nader; Mongy, T

    2014-07-01

    This work presents a method that can be used to enhance the neutron radiography (NR) image for objects containing strongly scattering materials such as hydrogen, carbon and other light materials. The method uses the Monte Carlo code MCNP5 to simulate the NR process, obtain the flux distribution for each pixel of the image, and determine the scattered-neutron distribution that causes image blur; MATLAB is then used to subtract this scattered-neutron distribution from the initial image to improve its quality. This work was performed before the commissioning of the digital NR system in Jan. 2013. The MATLAB enhancement method is a good technique for static film-based neutron radiography, while in the neutron imaging (NI) technique, image enhancement and quantitative measurement were performed efficiently using ImageJ software. The enhanced image quality and quantitative measurements are presented in this work.

  12. Cross-indexing of binary SIFT codes for large-scale image search.

    PubMed

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost of storage. It also benefits computational efficiency, since similarity can be measured efficiently by Hamming distance. In this paper, we propose a novel flexible scale-invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. In addition, we propose a new search strategy that finds target features by cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. Experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
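
    The Hamming-distance advantage mentioned above comes down to an XOR and a popcount. A generic illustration with a hypothetical `hamming` helper, unrelated to the paper's FSB codes:

```python
def hamming(x: int, y: int) -> int:
    """Hamming distance between two binary codes packed into Python ints:
    XOR marks the differing bits, popcount counts them."""
    return bin(x ^ y).count("1")

# Two 8-bit toy codes; a 256-bit binary SIFT code compares in a few word ops.
code_a = 0b10110100
code_b = 0b10011100   # differs from code_a in bits 5 and 3, so distance 2
```

    On hardware with a popcount instruction this is a handful of cycles per 64-bit word, which is why binary codes make exhaustive search over millions of features feasible.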

  13. Fractals in microscopy.

    PubMed

    Landini, G

    2011-01-01

    Fractal geometry, developed by B. Mandelbrot, has provided new key concepts necessary to the understanding and quantification of some aspects of pattern and shape randomness, irregularity, complexity and self-similarity. In the field of microscopy, fractals have profound implications in relation to the effects of magnification and scaling on morphology and to the methodological approaches necessary to measure self-similar structures. This article reviews the fundamental concepts on which fractal geometry is based, their relevance to microscopy, and a number of technical details that can help improve the robustness of morphological analyses when applied to microscopy problems.
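
    One of the fundamental concepts such reviews cover, the box-counting dimension, can be estimated in a few lines. An illustrative sketch (not the article's protocol): count the boxes of side s that contain foreground, then fit log N against log(1/s).

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of a binary image.

    For each box size s, count boxes containing any foreground pixel;
    the dimension is the slope of log N(s) versus log(1/s).
    """
    counts = []
    for s in sizes:
        h, w = img.shape
        # Crop to a multiple of s, then tile into s x s boxes.
        boxed = img[: h - h % s, : w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxed.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1 / np.asarray(sizes, dtype=float)),
                          np.log(counts), 1)
    return slope

# Sanity check: a filled square is a trivial 2-D set, so the estimate is 2.
filled = np.ones((64, 64), dtype=bool)
```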

  14. The influence of the property of random coded patterns on fluctuation-correlation ghost imaging

    NASA Astrophysics Data System (ADS)

    Wang, Chenglong; Gong, Wenlin; Shao, Xuehui; Han, Shensheng

    2016-06-01

    According to the reconstruction feature of fluctuation-correlation ghost imaging (GI), we define a normalized characteristic matrix and investigate the influence of the properties of random coded patterns on GI based on the theory of matrix analysis. Both simulative and experimental results demonstrate that, for different random coded patterns, the quality of fluctuation-correlation GI can be predicted by parameters extracted from the normalized characteristic matrix, which suggests its potential application in the optimization of random coded patterns for GI systems.

  15. A symbol-map wavelet zero-tree image coding algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Liu, Wenyao; Peng, Xiang; Liu, Xiaoli

    2008-03-01

    An improved SPIHT image compression algorithm, called the symbol-map zero-tree coding algorithm (SMZTC), is proposed in this paper based on the wavelet transform. The SPIHT algorithm is a highly efficient wavelet-coefficient coding method with good compression performance, but it is complex and requires too much memory. The algorithm presented in this paper utilizes two small symbol maps, Mark and FC, to store the status of coefficients and zero-tree sets during the coding procedure, so as to reduce the memory requirement. By this strategy, the memory cost is reduced distinctly and the scanning speed over coefficients is improved. Comparison experiments on 512 by 512 images were performed against other zero-tree coding algorithms, such as SPIHT and the NLS method, using the biorthogonal 9/7 lifting wavelet transform for the image transform. The coding experiments show that the codec speed of this algorithm is improved significantly, while its compression ratio is nearly the same as that of the SPIHT algorithm.

  16. Fractal frontiers in cardiovascular magnetic resonance: towards clinical implementation.

    PubMed

    Captur, Gabriella; Karperien, Audrey L; Li, Chunming; Zemrak, Filip; Tobon-Gomez, Catalina; Gao, Xuexin; Bluemke, David A; Elliott, Perry M; Petersen, Steffen E; Moon, James C

    2015-09-07

    Many of the structures and parameters that are detected, measured and reported in cardiovascular magnetic resonance (CMR) have at least some properties that are fractal, meaning complex and self-similar at different scales. To date however, there has been little use of fractal geometry in CMR; by comparison, many more applications of fractal analysis have been published in MR imaging of the brain. This review explains the fundamental principles of fractal geometry, places the fractal dimension into a meaningful context within the realms of Euclidean and topological space, and defines its role in digital image processing. It summarises the basic mathematics, highlights strengths and potential limitations of its application to biomedical imaging, shows key current examples and suggests a simple route for its successful clinical implementation by the CMR community. By simplifying some of the more abstract concepts of deterministic fractals, this review invites CMR scientists (clinicians, technologists, physicists) to experiment with fractal analysis as a means of developing the next generation of intelligent quantitative cardiac imaging tools.

  17. Improving the Calibration of Image Sensors Based on IOFBs, Using Differential Gray-Code Space Encoding

    PubMed Central

    Fernández, Pedro R.; Galilea, José Luis Lázaro; Vicente, Alfredo Gardel; Muñoz, Ignacio Bravo; Cano García, Ángel E.; Vázquez, Carlos Luna

    2012-01-01

    This paper presents a fast calibration method to determine the transfer function for spatial correspondences in image transmission devices with Incoherent Optical Fiber Bundles (IOFBs), by performing a scan of the input, using differential patterns generated from a Gray code (Differential Gray-Code Space Encoding, DGSE). The results demonstrate that this technique provides a noticeable reduction in processing time and better quality of the reconstructed image compared to other, previously employed techniques, such as point or fringe scanning, or even other known space encoding techniques. PMID:23012530
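
    The reflected binary (Gray) code underlying such space-encoding patterns can be generated and inverted as follows. This is the standard Gray-code conversion, not the paper's differential (DGSE) variant, which builds patterns on top of it:

```python
def binary_to_gray(n: int) -> int:
    """Reflected binary (Gray) code: adjacent integers differ in exactly one
    bit, so a single-bit decoding error displaces a stripe boundary by at
    most one position instead of corrupting the whole coordinate."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the Gray code: binary = XOR of all right shifts of the code."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```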

  19. Coded aperture imaging - Predicted performance of uniformly redundant arrays

    NASA Technical Reports Server (NTRS)

    Fenimore, E. E.

    1978-01-01

    It is noted that uniformly redundant arrays (URAs) have autocorrelation functions with perfectly flat sidelobes. A generalized signal-to-noise equation has been developed to predict URA performance. The signal-to-noise value is formulated as a function of aperture transmission or density, the ratio of the intensity of a resolution element to the integrated source intensity, and the ratio of detector background noise to the integrated intensity. It is shown that the only two-dimensional URAs known have a transmission of one half. This is not a great limitation, because a nonoptimum transmission of one half never reduces the signal-to-noise ratio by more than 30%. The reconstructed URA image contains practically uniform noise, regardless of the object structure. The URA's improvement over the single-pinhole camera is much larger for high-intensity points than for low-intensity points.

  20. A new pad-based neutron detector for stereo coded aperture thermal neutron imaging

    NASA Astrophysics Data System (ADS)

    Dioszegi, I.; Yu, B.; Smith, G.; Schaknowski, N.; Fried, J.; Vanier, P. E.; Salwen, C.; Forman, L.

    2014-09-01

    A new coded aperture thermal neutron imager system has been developed at Brookhaven National Laboratory. The cameras use a new type of position-sensitive 3He-filled ionization chamber, in which the anode plane is composed of an array of pads with independent acquisition channels. The charge is collected on each of the individual 5×5 mm² anode pads (48×48 in total, corresponding to a 24×24 cm² sensitive area) and read out by application-specific integrated circuits (ASICs). The new design has several advantages for coded-aperture imaging applications in the field compared to the previous generation of wire-grid-based neutron detectors, among them its rugged design, lighter weight, and use of non-flammable stopping gas. The pad-based readout occurs in parallel circuits, making it capable of high count rates and suitable for data analysis and imaging on an event-by-event basis. The spatial resolution of the detector can be better than the pixel size by using a charge-sharing algorithm. In this paper we report on the development and performance of the new pad-based neutron camera, describe a charge-sharing algorithm to achieve sub-pixel spatial resolution, and present the first stereoscopic coded aperture images of thermalized neutron sources obtained with the new system.

  1. Context-Aware and Locality-Constrained Coding for Image Categorization

    PubMed Central

    2014-01-01

    Improving the coding strategy for BOF (Bag-of-Features) based feature design has drawn increasing attention in recent image categorization work. However, ambiguity in the coding procedure still impedes further development. In this paper, we introduce a context-aware and locality-constrained coding (CALC) approach that uses context information to describe objects in a discriminative way. This is achieved by learning a word-to-word co-occurrence prior and imposing context information on locality-constrained coding. First, the local context of each category is evaluated by learning a word-to-word co-occurrence matrix representing the spatial distribution of local features in a neighborhood. Then, the learned co-occurrence matrix is used to measure the context distance between local features and code words. Finally, a coding strategy that simultaneously considers locality in feature space and in context space, while introducing feature weights, is proposed. This coding strategy not only preserves semantic information in coding, but also alleviates the noise distortion of each class. Extensive experiments on several available datasets (Scene-15, Caltech101, and Caltech256) are conducted to validate the superiority of our algorithm by comparing it with baselines and recently published methods. Experimental results show that our method significantly improves on the baselines and achieves comparable and even better performance than the state of the art. PMID:24977215

  2. Foolin' with Fractals.

    ERIC Educational Resources Information Center

    Clark, Garry

    1999-01-01

    Reports on a mathematical investigation of fractals and highlights the thinking involved, problem solving strategies used, generalizing skills required, the role of technology, and the role of mathematics. (ASK)

  3. Quantum image pseudocolor coding based on the density-stratified method

    NASA Astrophysics Data System (ADS)

    Jiang, Nan; Wu, Wenya; Wang, Luo; Zhao, Na

    2015-05-01

    Pseudocolor processing is a branch of image enhancement. It dyes grayscale images with color to make them more attractive or to highlight certain parts of the images. This paper proposes a quantum image pseudocolor coding scheme based on the density-stratified method, which defines a colormap and changes density values from gray to color in parallel according to the colormap. First, two data structures, the quantum image GQIR and the quantum colormap QCR, are reviewed or proposed. Then, the quantum density-stratified algorithm is presented. Based on these, the quantum realization in the form of circuits is given. The main advantages of the quantum version of pseudocolor processing over the classical approach are that it needs less memory and can speed up the computation. Two examples help to describe the scheme further. Finally, future work is discussed.
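
    A classical analogue of density-stratified pseudocolor coding can be sketched as follows. The `pseudocolor` helper and the example colormap are hypothetical illustrations of the stratification idea; the paper's quantum circuit realization is not reproduced here.

```python
import numpy as np

def pseudocolor(gray, colormap):
    """Density-stratified pseudocolor: split the 0..255 gray range into as
    many strata as there are colormap entries, then dye every pixel of a
    stratum with that stratum's color. `colormap` is an (n, 3) RGB array."""
    colormap = np.asarray(colormap)
    n = len(colormap)
    # Integer stratum index per pixel; cast avoids uint8 overflow.
    strata = np.minimum((gray.astype(int) * n) // 256, n - 1)
    return colormap[strata]

# Illustrative 4-color map: dark blue -> green -> yellow -> red.
cmap = np.array([[0, 0, 128], [0, 255, 0], [255, 255, 0], [255, 0, 0]])
```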

  4. Color-coded LED microscopy for multi-contrast and quantitative phase-gradient imaging.

    PubMed

    Lee, Donghak; Ryu, Suho; Kim, Uihan; Jung, Daeseong; Joo, Chulmin

    2015-12-01

    We present a multi-contrast microscope based on color-coded illumination and computation. A programmable three-color light-emitting diode (LED) array illuminates a specimen, in which each color corresponds to a different illumination angle. A single color image sensor records light transmitted through the specimen, and images at each color channel are then separated and utilized to obtain bright-field, dark-field, and differential phase contrast (DPC) images simultaneously. Quantitative phase imaging is also achieved based on DPC images acquired with two different LED illumination patterns. The multi-contrast and quantitative phase imaging capabilities of our method are demonstrated by presenting images of various transparent biological samples. PMID:26713205
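
    The DPC image in such systems is conventionally the normalized difference of two complementary half-pupil illuminations; a minimal sketch of that standard formula (the paper's color-multiplexed acquisition is omitted):

```python
import numpy as np

def dpc(i_left, i_right, eps=1e-9):
    """Differential phase contrast from two complementary half-pupil
    illuminations: the normalized difference cancels common absorption
    contrast and leaves the phase gradient along the split axis."""
    i_left = np.asarray(i_left, dtype=float)
    i_right = np.asarray(i_right, dtype=float)
    return (i_left - i_right) / (i_left + i_right + eps)
```

    With the three-color LED array, the two half-pupil images come from different color channels of a single exposure, which is what makes the acquisition single-shot.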

  5. Compressive spectral polarization imaging with coded micropolarizer array

    NASA Astrophysics Data System (ADS)

    Fu, Chen; Arguello, Henry; Sadler, Brian M.; Arce, Gonzalo R.

    2015-05-01

    We present a compressive spectral polarization imager based on a prism, which is rotated to different angles as the measurement shots are taken, and a colored detector with a micropolarizer array. The prism shears the scene along one spatial axis according to its wavelength components, so the scene is projected to different locations on the detector as the measurement shots are taken. The micropolarizer array is composed of 0°, 45°, 90°, and 135° linear micropolarizers, with its pixels matched to those of the colored detector, so that the first three Stokes parameters of the scene are compressively sensed. The four-dimensional (4D) data cube is thus projected onto the two-dimensional (2D) FPA. Designed patterns for the micropolarizer array and the colored detector are applied to improve the reconstruction. The 4D spectral-polarization data cube is reconstructed from the 2D measurements via nonlinear optimization with sparsity constraints. Computer simulations are performed, and the performance of the designed patterns is compared with that of random patterns.

  6. Classification of impervious land cover using fractals

    NASA Astrophysics Data System (ADS)

    Quackenbush, Lindi J.

    Runoff from urban areas is a leading source of nonpoint source pollution in estuaries, lakes, and streams. The extent and type of impervious land cover are considered to be critical factors in evaluating runoff amounts and the potential for environmental damage. Land cover information for watershed modeling is frequently derived using remote sensing techniques, and improvements in image classification are expected to enhance the reliability of runoff models. In order to understand potential pollutant loads, there is a need to characterize impervious areas based on land use. However, distinguishing between impervious features such as roofs and roads using only spectral information is often challenging due to the similarity of construction materials. Since spectral information alone is often insufficient, spatial complexity measured using the fractal dimension was analyzed to determine its utility in performing detailed classification. The fractal dimension describes the complexity of curves and surfaces in non-integer dimensions. Statistical analysis demonstrated that fractal dimension varies between roofs, roads, and driveways. The analysis also examined the impact of scale by determining statistical differences in fractal dimension based on the size of the window considered and the ground sample distance of the pixels under consideration. The statistical differences in fractal dimension translated into minor improvements in classification accuracy when separating roofs, roads, and driveways.

  7. Rheological and fractal hydrodynamics of aerobic granules.

    PubMed

    Tijani, H I; Abdullah, N; Yuzir, A; Ujang, Zaini

    2015-06-01

    The structural and hydrodynamic features of the granules were characterized using settling experiments, predefined mathematical simulations, and ImageJ particle analyses. This study describes the rheological characterization of these biologically immobilized aggregates under non-Newtonian flows. A second-order dimensional analysis gave D2 = 1.795 for native clusters and D2 = 1.099 for dewatered clusters, and a characteristic three-dimensional fractal dimension of 2.46 indicates that these relatively porous and differentially permeable fractals have a structural configuration close to that described for a compact sphere formed via cluster-cluster aggregation. The three-dimensional fractal dimension calculated via the settling-fractal correlation U ∝ l^D, used to characterize the immobilized granules, validates the quantitative measurements describing their structural integrity and aggregate complexity. These results suggest that scaling relationships based on fractal geometry are vital for quantifying the effects of different laminar conditions on the aggregates' morphology and characteristics such as density, porosity, and projected surface area.
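
    A power-law correlation of the form U ∝ l^D is typically fitted by log-log least squares. An illustrative sketch with synthetic data (hypothetical helper, not the authors' procedure):

```python
import numpy as np

def settling_fractal_dimension(sizes, velocities):
    """Fit the exponent D in U = c * l**D: take logs so the power law
    becomes linear, then return the least-squares slope."""
    slope, _ = np.polyfit(np.log(sizes), np.log(velocities), 1)
    return slope
```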

  8. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark.

    PubMed

    Zhang, Tiankui; Hu, Huasi; Jia, Qinggang; Zhang, Fengna; Chen, Da; Li, Zhenghong; Wu, Yuelei; Liu, Zhihua; Hu, Guang; Guo, Wei

    2012-11-01

    Monte-Carlo simulation of neutron coded imaging based on an encoding aperture, for a Z-pinch of large field-of-view with 5 mm radius, has been investigated, and the coded image has been obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A "residual watermark," which emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation, owing to its amplification of statistical fluctuations, has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, and a method for identifying the equivalent radius of the aperture is provided. By using the equivalent radius, the reconstruction can be accomplished without knowing the point spread function (PSF) of the actual aperture. The reconstruction result is close to that obtained using the PSF of the actual aperture.

  10. Fractal design concepts for stretchable electronics.

    PubMed

    Fan, Jonathan A; Yeo, Woon-Hong; Su, Yewang; Hattori, Yoshiaki; Lee, Woosik; Jung, Sung-Young; Zhang, Yihui; Liu, Zhuangjian; Cheng, Huanyu; Falgout, Leo; Bajema, Mike; Coleman, Todd; Gregoire, Dan; Larsen, Ryan J; Huang, Yonggang; Rogers, John A

    2014-01-01

    Stretchable electronics provide a foundation for applications that exceed the scope of conventional wafer and circuit board technologies due to their unique capacity to integrate with soft materials and curvilinear surfaces. The range of possibilities is predicated on the development of device architectures that simultaneously offer advanced electronic function and compliant mechanics. Here we report that thin films of hard electronic materials patterned in deterministic fractal motifs and bonded to elastomers enable unusual mechanics with important implications in stretchable device design. In particular, we demonstrate the utility of Peano, Greek cross, Vicsek and other fractal constructs to yield space-filling structures of electronic materials, including monocrystalline silicon, for electrophysiological sensors, precision monitors and actuators, and radio frequency antennas. These devices support conformal mounting on the skin and have unique properties such as invisibility under magnetic resonance imaging. The results suggest that fractal-based layouts represent important strategies for hard-soft materials integration.

  11. Retinal fractals and acute lacunar stroke.

    PubMed

    Cheung, Ning; Liew, Gerald; Lindley, Richard I; Liu, Erica Y; Wang, Jie Jin; Hand, Peter; Baker, Michelle; Mitchell, Paul; Wong, Tien Y

    2010-07-01

    This study aimed to determine whether retinal fractal dimension, a quantitative measure of microvascular branching complexity and density, is associated with lacunar stroke. A total of 392 patients presenting with acute ischemic stroke had retinal fractal dimension measured from digital photographs, and lacunar infarct ascertained from brain imaging. After adjusting for age, gender, and vascular risk factors, higher retinal fractal dimension (highest vs lowest quartile and per standard deviation increase) was independently and positively associated with lacunar stroke (odds ratio [OR], 4.27; 95% confidence interval [CI], 1.49-12.17 and OR, 1.85; 95% CI, 1.20-2.84, respectively). Increased retinal microvascular complexity and density are associated with lacunar stroke.
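Retinal fractal dimension of the kind measured here is typically estimated by box counting on a segmented vessel image. The sketch below is a minimal, generic box-counting estimator (the test image is illustrative, not the study's actual pipeline):

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box counting:
    count occupied s-by-s boxes for several s, then fit the slope of
    log(count) versus log(1/s)."""
    counts = []
    for s in sizes:
        # Trim so the image tiles exactly into s x s boxes.
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square is two-dimensional.
square = np.ones((64, 64), dtype=bool)
dim = box_counting_dimension(square)
```

In practice the estimator is applied to a skeletonized vessel map, and the fitted slope lies between 1 and 2.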

  12. "The Human BRAIN & Fractal Quantum Mechanics"

    NASA Astrophysics Data System (ADS)

    Rosary-Oyong, Se, Glory

    In mtDNA, as retrieved from Iman Tuassoly et al., "Multifractal analysis of chaos game representation images of mtDNA". Enhances the price & valuetales of HE Prof. Dr.-Ing. B. J. Habibie's N-219; in J. Bacteriology, Nov. 1973 it was reported that "219 exist as separate plasmid DNA species in E. coli & Salmonella panama", related to "the brain 2 distinct molecular forms of the (Na,K)-ATPase" & "neuron maintains different concentration of ions (charged atoms)" through Rabi & Heisenberg Hamiltonians. Further, after "fractal space time are geometric analogue of relativistic quantum mechanics" [Ord], see L. Marek-Crnjac, "Chaotic fractals at the root of relativistic quantum physics", and Nottale, "Scale relativity & fractal space-time: application to quantum physics, cosmology & chaotic systems", 1995. Acknowledgements to HE Mr. H. TUK SETYOHADI, Jl. Sriwijaya Raya 3, South Jakarta, INDONESIA.

  13. Fractal design concepts for stretchable electronics

    NASA Astrophysics Data System (ADS)

    Fan, Jonathan A.; Yeo, Woon-Hong; Su, Yewang; Hattori, Yoshiaki; Lee, Woosik; Jung, Sung-Young; Zhang, Yihui; Liu, Zhuangjian; Cheng, Huanyu; Falgout, Leo; Bajema, Mike; Coleman, Todd; Gregoire, Dan; Larsen, Ryan J.; Huang, Yonggang; Rogers, John A.

    2014-02-01

    Stretchable electronics provide a foundation for applications that exceed the scope of conventional wafer and circuit board technologies due to their unique capacity to integrate with soft materials and curvilinear surfaces. The range of possibilities is predicated on the development of device architectures that simultaneously offer advanced electronic function and compliant mechanics. Here we report that thin films of hard electronic materials patterned in deterministic fractal motifs and bonded to elastomers enable unusual mechanics with important implications in stretchable device design. In particular, we demonstrate the utility of Peano, Greek cross, Vicsek and other fractal constructs to yield space-filling structures of electronic materials, including monocrystalline silicon, for electrophysiological sensors, precision monitors and actuators, and radio frequency antennas. These devices support conformal mounting on the skin and have unique properties such as invisibility under magnetic resonance imaging. The results suggest that fractal-based layouts represent important strategies for hard-soft materials integration.
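Space-filling layouts of the kind described are generated algorithmically. As a hedged illustration (not the authors' fabrication flow), the sketch below builds a Hilbert curve, a close relative of the Peano constructs mentioned, from the classic L-system rewriting rules interpreted turtle-style:

```python
def hilbert_points(order):
    """Vertices of an order-n Hilbert curve from the L-system
    A -> +BF-AFA-FB+, B -> -AF+BFB+FA- ('F' step, '+'/'-' 90-degree turns)."""
    rules = {"A": "+BF-AFA-FB+", "B": "-AF+BFB+FA-"}
    s = "A"
    for _ in range(order):
        s = "".join(rules.get(c, c) for c in s)
    x, y, dx, dy = 0, 0, 1, 0
    pts = [(x, y)]
    for c in s:
        if c == "F":            # step forward
            x, y = x + dx, y + dy
            pts.append((x, y))
        elif c == "+":          # turn left
            dx, dy = -dy, dx
        elif c == "-":          # turn right
            dx, dy = dy, -dx
    return pts

pts = hilbert_points(3)  # order 3: visits every cell of an 8x8 grid once
```

The vertex list could then be inflated into a wire trace; higher orders fill the same footprint with proportionally more conductor length.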

  14. Joint sparse coding based spatial pyramid matching for classification of color medical image.

    PubMed

    Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin

    2015-04-01

    Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification.

  15. Fractal characteristics for binary noise radar waveform

    NASA Astrophysics Data System (ADS)

    Li, Bing C.

    2016-05-01

    Noise radars have many advantages over conventional radars and have received considerable attention recently. The performance of a noise radar is determined by its waveforms, so investigating the characteristics of noise radar waveforms has significant value for evaluating noise radar performance. In this paper, we use binomial distribution theory to analyze general characteristics of binary phase coded (BPC) noise waveforms. Focusing on the aperiodic autocorrelation function, we demonstrate that the probability distributions of sidelobes for a BPC noise waveform depend on the distances of these sidelobes from the mainlobe: the closer a sidelobe is to the mainlobe, the higher the probability that it is the maximum sidelobe. We also develop a Monte Carlo framework to explore characteristics that are difficult to investigate analytically. Through Monte Carlo experiments, we reveal the fractal relationship between the code length and the maximum sidelobe value for BPC waveforms, and propose using the fractal dimension to measure noise waveform performance.
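The Monte Carlo approach described can be sketched in a few lines: draw random ±1 codes, compute the aperiodic autocorrelation, and track the peak sidelobe as a function of code length (a simplified illustration, not the paper's exact experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

def max_sidelobe(code):
    """Peak aperiodic autocorrelation sidelobe magnitude of a +/-1 code."""
    full = np.correlate(code, code, mode="full")
    sidelobes = np.delete(full, len(code) - 1)   # drop the mainlobe
    return np.abs(sidelobes).max()

# Average peak sidelobe over random BPC codes at two code lengths.
trials = 200
avg_peak = {n: np.mean([max_sidelobe(rng.choice([-1.0, 1.0], n))
                        for _ in range(trials)])
            for n in (64, 256)}
```

The absolute peak sidelobe grows with code length while the peak normalized by the mainlobe shrinks, which is the kind of scaling the paper quantifies with a fractal dimension.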

  16. Ultrasonic imaging of human tooth using chirp-coded nonlinear time reversal acoustics

    NASA Astrophysics Data System (ADS)

    Santos, Serge Dos; Domenjoud, Mathieu; Prevorovsky, Zdenek

    2010-01-01

    We report in this paper the first use of TR-NEWS, including chirp-coded excitation, applied to ultrasonic imaging of a human tooth. The feasibility of focusing ultrasound at the surface of the human tooth is demonstrated, and the potential of a new echodentography of the dentine-enamel interface using TR-NEWS is discussed.

  17. Low-memory-usage image coding with line-based wavelet transform

    NASA Astrophysics Data System (ADS)

    Ye, Linning; Guo, Jiangling; Nutter, Brian; Mitra, Sunanda

    2011-02-01

    When compared to the traditional row-column wavelet transform, the line-based wavelet transform can achieve significant memory savings. However, the design of an image codec using the line-based wavelet transform is an intricate task because of the irregular order in which the wavelet coefficients are generated. The independent block coding feature of JPEG2000 makes it work effectively with the line-based wavelet transform. However, with wavelet tree-based image codecs, such as set partitioning in hierarchical trees, the line-based wavelet transform yields little memory advantage because many wavelet coefficients must be buffered before coding starts. In this paper, the line-based wavelet transform was utilized to facilitate backward coding of wavelet trees (BCWT). Although the BCWT algorithm is a wavelet tree-based algorithm, its coding order differs from that of the traditional wavelet tree-based algorithms, which allows the proposed line-based image codec to become more memory efficient than other line-based image codecs, including line-based JPEG2000, while still offering comparable rate distortion performance and much lower system complexity.
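The memory saving of line-based transforms comes from buffering only a few rows at a time instead of the whole image. A toy sketch of the idea, using an unnormalized vertical Haar step on streamed rows (not the codec's actual filter bank):

```python
import numpy as np

def haar_rows_streaming(row_iter):
    """Consume an image one row at a time, emitting the vertical Haar
    average/detail of each row pair while holding at most one row in memory."""
    buf = None
    for row in row_iter:
        row = np.asarray(row, dtype=float)
        if buf is None:
            buf = row                       # hold the odd row
        else:
            yield (buf + row) / 2.0, (buf - row) / 2.0
            buf = None

# Four constant rows -> two (average, detail) row pairs.
rows = [np.full(4, v) for v in (1.0, 3.0, 5.0, 9.0)]
out = list(haar_rows_streaming(iter(rows)))
```

A full line-based transform chains such single-level steps across subbands; the buffering requirement stays proportional to the filter length times the image width, not the image height.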

  18. Coded aperture coherent scatter imaging for breast cancer detection: a Monte Carlo evaluation

    NASA Astrophysics Data System (ADS)

    Lakshmanan, Manu N.; Morris, Robert E.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2016-03-01

    It is known that conventional x-ray imaging provides a maximum contrast between cancerous and healthy fibroglandular breast tissues of 3% based on their linear x-ray attenuation coefficients at 17.5 keV, whereas coherent scatter signal provides a maximum contrast of 19% based on their differential coherent scatter cross sections. Therefore in order to exploit this potential contrast, we seek to evaluate the performance of a coded-aperture coherent scatter imaging system for breast cancer detection and investigate its accuracy using Monte Carlo simulations. In the simulations we modeled our experimental system, which consists of a raster-scanned pencil beam of x-rays, a bismuth-tin coded aperture mask comprised of a repeating slit pattern with 2-mm periodicity, and a linear-array of 128 detector pixels with 6.5-keV energy resolution. The breast tissue that was scanned comprised a 3-cm sample taken from a patient-based XCAT breast phantom containing a tomosynthesis-based realistic simulated lesion. The differential coherent scatter cross section was reconstructed at each pixel in the image using an iterative reconstruction algorithm. Each pixel in the reconstructed image was then classified as being either air or the type of breast tissue with which its normalized reconstructed differential coherent scatter cross section had the highest correlation coefficient. Comparison of the final tissue classification results with the ground truth image showed that the coded aperture imaging technique has a cancerous pixel detection sensitivity (correct identification of cancerous pixels), specificity (correctly ruling out healthy pixels as not being cancer) and accuracy of 92.4%, 91.9% and 92.0%, respectively. Our Monte Carlo evaluation of our experimental coded aperture coherent scatter imaging system shows that it is able to exploit the greater contrast available from coherently scattered x-rays to increase the accuracy of detecting cancerous regions within the breast.
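The per-pixel classification step, matching each reconstructed spectrum against reference cross sections by correlation coefficient, can be sketched as below. The reference spectra here are invented placeholders, not measured cross sections:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical normalized differential-cross-section templates per class.
references = {
    "air":       np.array([0.90, 0.70, 0.50, 0.30, 0.10]),
    "healthy":   np.array([0.10, 0.60, 0.90, 0.40, 0.20]),
    "cancerous": np.array([0.15, 0.30, 0.50, 0.95, 0.60]),
}

def classify(spectrum):
    """Assign the label whose reference has the highest Pearson
    correlation coefficient with the reconstructed spectrum."""
    return max(references,
               key=lambda k: np.corrcoef(spectrum, references[k])[0, 1])

# A noisy copy of the 'healthy' template should still classify correctly.
noisy = references["healthy"] + rng.normal(0.0, 0.02, 5)
label = classify(noisy)
```

Correlation against normalized templates makes the decision insensitive to the overall scale of the reconstructed cross section, which is why the abstract normalizes before matching.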

  19. The fractal structure of the mitochondrial genomes

    NASA Astrophysics Data System (ADS)

    Oiwa, Nestor N.; Glazier, James A.

    2002-08-01

    The mitochondrial DNA genome has a definite multifractal structure. We show that loops, hairpins and inverted palindromes are responsible for this self-similarity. We can thus establish a definite relation between the function of subsequences and their fractal dimension. Intriguingly, protein coding DNAs also exhibit palindromic structures, although they do not appear in the sequence of amino acids. These structures may reflect the stabilization and transcriptional control of DNA or the control of posttranscriptional editing of mRNA.
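Self-similarity in genomic sequences is often visualized with a chaos game representation (CGR), in which each successive base pulls the current point halfway toward its assigned corner of the unit square; repeats and palindromes leave characteristic patterns. A minimal sketch:

```python
def cgr_points(sequence):
    """Chaos game representation: map a DNA string to points in the unit
    square, jumping halfway toward the corner assigned to each base."""
    corners = {"A": (0.0, 0.0), "C": (0.0, 1.0),
               "G": (1.0, 1.0), "T": (1.0, 0.0)}
    x, y = 0.5, 0.5
    pts = []
    for base in sequence:
        cx, cy = corners[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        pts.append((x, y))
    return pts

pts = cgr_points("ACGTACGGTT")
```

Box counting over the resulting point cloud is one standard way to obtain the (multi)fractal dimensions discussed in the abstract.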

  20. Designing an efficient LT-code with unequal error protection for image transmission

    NASA Astrophysics Data System (ADS)

    S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.

    2015-10-01

    The use of images from earth observation satellites is spread over different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by on-board imaging devices and must be transmitted to the Earth using a communication system. Even though a high resolution image can produce a better Quality of Service, it leads to transmitters with high bit rates, which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In a remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bit. A channel code can be used to reduce the effect of this failure. In 2002, the Luby Transform code (LT-code) was introduced and was shown to be very efficient under the binary erasure channel model. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the algorithm for image compression
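The LT-code idea can be illustrated with a toy fountain encoder and peeling decoder over an erasure channel. The degree distribution below is an arbitrary stand-in for the robust soliton distribution used in practice:

```python
import random

def lt_encode(blocks, n_packets, rng):
    """Fountain encoding: each packet is the XOR of a random degree-d
    subset of source blocks (toy degree distribution)."""
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        d = min(k, rng.choice([1, 1, 2, 2, 2, 3, 4]))
        idx = set(rng.sample(range(k), d))
        val = 0
        for i in idx:
            val ^= blocks[i]
        packets.append((idx, val))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: substitute recovered blocks into the remaining
    packets and resolve any packet whose degree drops to one."""
    work = [[set(idx), val] for idx, val in packets]
    out = [None] * k
    changed = True
    while changed:
        changed = False
        for p in work:
            for i in [i for i in p[0] if out[i] is not None]:
                p[0].discard(i)          # peel off known blocks
                p[1] ^= out[i]
            if len(p[0]) == 1:
                i = next(iter(p[0]))
                if out[i] is None:
                    out[i] = p[1]
                    changed = True
    return out

# Handcrafted packets: the degree-one packet seeds the peeling cascade.
recovered = lt_decode([({0}, 5), ({0, 1}, 12), ({1, 2}, 10)], 3)  # [5, 9, 3]

# Random fountain demo: any block the decoder recovers is correct.
blocks = [7, 1, 12, 5, 9, 3]
decoded = lt_decode(lt_encode(blocks, 25, random.Random(42)), len(blocks))
```

Unequal error protection variants bias the packet-generation step so that early (more important) source blocks appear in more packets; the decoder is unchanged.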

  1. Model-based coding of facial images based on facial muscle motion through isodensity maps

    NASA Astrophysics Data System (ADS)

    So, Ikken; Nakamura, Osamu; Minami, Toshi

    1991-11-01

    A model-based coding system has come under serious consideration for the next generation of image coding schemes, aimed at greater efficiency in TV telephone and TV conference systems. In this model-based coding system, the sender's model image is transmitted and stored at the receiving side before the start of the conversation. During the conversation, feature points are extracted from the facial image of the sender and are transmitted to the receiver. The facial expression of the sender is then reconstructed from the received feature points and a wireframe model constructed at the receiving side. However, the conventional methods have the following problems: (1) Extreme changes of the gray level, such as in wrinkles caused by change of expression, cannot be reconstructed at the receiving side. (2) Extraction of stable feature points from facial images with irregular features such as spectacles or facial hair is very difficult. To cope with the first problem, a new algorithm based on isodensity lines which can represent detailed changes in expression by density correction has already been proposed and good results obtained. As for the second problem, we propose in this paper a new algorithm to reconstruct facial images by transmitting other feature points extracted from isodensity maps.

  2. Imaging x-ray sources at a finite distance in coded-mask instruments.

    PubMed

    Donnarumma, Immacolata; Pacciani, Luigi; Lapshov, Igor; Evangelista, Yuri

    2008-07-01

    We present a method for the correction of beam divergence in finite distance sources imaging through coded-mask instruments. We discuss the defocusing artifacts induced by the finite distance showing two different approaches to remove such spurious effects. We applied our method to one-dimensional (1D) coded-mask systems, although it is also applicable in two-dimensional systems. We provide a detailed mathematical description of the adopted method and of the systematics introduced in the reconstructed image (e.g., the fraction of source flux collected in the reconstructed peak counts). The accuracy of this method was tested by simulating pointlike and extended sources at a finite distance with the instrumental setup of the SuperAGILE experiment, the 1D coded-mask x-ray imager onboard the AGILE (Astro-rivelatore Gamma a Immagini Leggero) mission. We obtained reconstructed images of good quality and high source location accuracy. Finally we show the results obtained by applying this method to real data collected during the calibration campaign of SuperAGILE. Our method was demonstrated to be a powerful tool to investigate the imaging response of the experiment, particularly the absorption due to the materials intercepting the line of sight of the instrument and the conversion between detector pixel and sky direction. PMID:18594598
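The defocusing the abstract describes comes from simple projection geometry: a source at finite distance casts a magnified mask shadow, and the magnification must be undone before (or during) correlation decoding. A minimal sketch of the geometric factor (variable names are illustrative, not the SuperAGILE pipeline's):

```python
def mask_scale_at_detector(source_distance, mask_detector_gap):
    """Shadow magnification for a point source at finite distance:
    by similar triangles, a mask feature is scaled by (D + g) / D on
    the detector, where D is the source-to-mask distance and g the
    mask-to-detector gap. The factor tends to 1 as D -> infinity."""
    return (source_distance + mask_detector_gap) / source_distance

near = mask_scale_at_detector(10.0, 1.0)     # strongly diverging beam
far = mask_scale_at_detector(1.0e9, 1.0)     # effectively parallel beam
```

Correcting the shadowgram by this factor (or rescaling the decoding pattern) restores the sharp reconstruction that the standard infinite-distance assumption would otherwise give.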

  3. Imaging x-ray sources at a finite distance in coded-mask instruments

    SciTech Connect

    Donnarumma, Immacolata; Pacciani, Luigi; Lapshov, Igor; Evangelista, Yuri

    2008-07-01

    We present a method for the correction of beam divergence in finite distance sources imaging through coded-mask instruments. We discuss the defocusing artifacts induced by the finite distance showing two different approaches to remove such spurious effects. We applied our method to one-dimensional (1D) coded-mask systems, although it is also applicable in two-dimensional systems. We provide a detailed mathematical description of the adopted method and of the systematics introduced in the reconstructed image (e.g., the fraction of source flux collected in the reconstructed peak counts). The accuracy of this method was tested by simulating pointlike and extended sources at a finite distance with the instrumental setup of the SuperAGILE experiment, the 1D coded-mask x-ray imager onboard the AGILE (Astro-rivelatore Gamma a Immagini Leggero) mission. We obtained reconstructed images of good quality and high source location accuracy. Finally we show the results obtained by applying this method to real data collected during the calibration campaign of SuperAGILE. Our method was demonstrated to be a powerful tool to investigate the imaging response of the experiment, particularly the absorption due to the materials intercepting the line of sight of the instrument and the conversion between detector pixel and sky direction.

  4. Experimental design and analysis of JND test on coded image/video

    NASA Astrophysics Data System (ADS)

    Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay

    2015-09-01

    The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum detectable difference between two visual stimuli. Conducting a subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrate by exploiting special characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate perceptual differences in psychophysical experiments. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on image and video JND tests.
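A bisection procedure of the kind described can be sketched as follows, with a simulated assessor standing in for the human subject (the threshold bitrate is invented). Assuming verdicts are monotone in quality, bisection needs only about log2(N) comparisons instead of N:

```python
def find_jnd(levels, indistinguishable):
    """levels: quality levels sorted from worst to best.
    indistinguishable(level): the assessor's verdict versus the anchor.
    Returns the lowest level still judged indistinguishable, assuming
    verdicts are monotone in quality."""
    lo, hi = 0, len(levels) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if indistinguishable(levels[mid]):
            hi = mid            # mid passes; try an even lower quality
        else:
            lo = mid + 1        # mid fails; the JND lies above it
    return levels[lo]

# Simulated assessor: bitrates of 1800 kbps or more look like the anchor.
levels = [400, 800, 1200, 1600, 2000, 2400, 2800, 3200]
jnd = find_jnd(levels, lambda b: b >= 1800)
```

Here the search touches 3 of the 8 levels; a real session would substitute the assessor's binary forced-choice answers for the lambda.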

  6. Source-Search Sensitivity of a Large-Area, Coded-Aperture, Gamma-Ray Imager

    SciTech Connect

    Ziock, K P; Collins, J W; Craig, W W; Fabris, L; Lanza, R C; Gallagher, S; Horn, B P; Madden, N W; Smith, E; Woodring, M L

    2004-10-27

    We have recently completed a large-area, coded-aperture, gamma-ray imager for use in searching for radiation sources. The instrument was constructed to verify that weak point sources can be detected at considerable distances if one uses imaging to overcome fluctuations in the natural background. The instrument uses a rank-19, one-dimensional coded aperture to cast shadow patterns onto a 0.57 m² NaI(Tl) detector composed of 57 individual cubes each 10 cm on a side. These are arranged in a 19 x 3 array. The mask is composed of four-centimeter thick, one-meter high, 10-cm wide lead blocks. The instrument is mounted in the back of a small truck from which images are obtained as one drives through a region. Results of first measurements obtained with the system are presented.
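One-dimensional coded-aperture imaging of this kind can be illustrated with a rank-19 quadratic-residue mask: the detector records a circular convolution of the source with the mask pattern, and correlation decoding recovers the source on a flat pedestal. This is a simplified, noise-free sketch, not the instrument's actual processing:

```python
import numpy as np

p = 19
# Open positions of a length-19 quadratic-residue coded aperture.
mask = np.zeros(p, dtype=int)
mask[[pow(i, 2, p) for i in range(1, p)]] = 1   # 9 open elements

def shadowgram(source):
    """Detector counts: circular convolution of the source with the mask."""
    return np.array([sum(source[(k - t) % p] * mask[t] for t in range(p))
                     for k in range(p)])

def decode(shadow):
    """Correlation decoding: cross-correlate the shadowgram with the mask."""
    return np.array([sum(shadow[k] * mask[(k - j) % p] for k in range(p))
                     for j in range(p)])

source = np.zeros(p, dtype=int)
source[5] = 1                       # a single point source
img = decode(shadowgram(source))
```

Because quadratic residues modulo a prime p ≡ 3 (mod 4) form a difference set, every off-peak lag of the mask autocorrelation is the same constant (here 4), so the decoded point source sits on a perfectly flat background that a real system subtracts off.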

  7. Algorithm for image retrieval based on edge gradient orientation statistical code.

    PubMed

    Zeng, Jiexian; Zhao, Yonggang; Li, Weiye; Fu, Xiang

    2014-01-01

    The edge gradient direction of an image not only contains important shape information but is also simple and of low computational complexity. Considering that edge gradient direction histograms and the edge direction autocorrelogram are not rotation invariant, we propose an image retrieval algorithm based on an edge gradient orientation statistical code (EGOSC), which applies the statistical method used for eight-neighborhood chain-code edge directions to the edge gradient direction. First, we construct the n-direction vector and impose a maximal-summation restriction on the EGOSC to make the algorithm effectively rotation invariant. Then, we use the Euclidean distance of the edge gradient direction entropy to measure shape similarity, so that the method is not sensitive to scaling, color, or illumination changes. The experimental results and algorithm analysis demonstrate that the algorithm can be used for content-based image retrieval and achieves good retrieval results.
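As a rough illustration of how an orientation-statistics code can be made rotation invariant, the sketch below quantizes gradient orientations into bins and cyclically normalizes the code so its dominant bin lands at index 0; this is a simplification, not the paper's exact EGOSC construction:

```python
import numpy as np

def orientation_code(angles, n_bins=8):
    """Quantize edge-gradient orientations (radians) into an n-bin
    normalized statistical code."""
    bins = np.floor(angles / (2 * np.pi / n_bins)).astype(int) % n_bins
    hist = np.bincount(bins, minlength=n_bins).astype(float)
    return hist / hist.sum()

def rotation_normalize(code):
    """Cyclically shift so the dominant bin lands at index 0; an image
    rotation then only permutes identical codes (ties not handled)."""
    return np.roll(code, -int(np.argmax(code)))

angles = np.array([0.10, 0.15, 0.20, 1.70, 3.20, 4.80])
code = rotation_normalize(orientation_code(angles))
# Rotating the image by 90 degrees shifts every orientation by pi/2 ...
rotated = rotation_normalize(orientation_code((angles + np.pi / 2) % (2 * np.pi)))
# ... yet the normalized codes coincide.
```

A rotation that is a multiple of the bin width shifts the histogram cyclically, and the normalization cancels that shift; finer invariance requires finer bins or interpolation.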

  8. Coded aperture x-ray diffraction imaging with transmission computed tomography side-information

    NASA Astrophysics Data System (ADS)

    Odinaka, Ikenna; Greenberg, Joel A.; Kaganovsky, Yan; Holmgren, Andrew; Hassan, Mehadi; Politte, David G.; O'Sullivan, Joseph A.; Carin, Lawrence; Brady, David J.

    2016-03-01

    Coded aperture X-ray diffraction (coherent scatter spectral) imaging provides fast and dose-efficient measurements of the molecular structure of an object. The information provided is spatially-dependent and material-specific, and can be utilized in medical applications requiring material discrimination, such as tumor imaging. However, current coded aperture coherent scatter spectral imaging systems assume a uniformly or weakly attenuating object, and are plagued by image degradation due to non-uniform self-attenuation. We propose accounting for such non-uniformities in the self-attenuation by utilizing an X-ray computed tomography (CT) image (reconstructed attenuation map). In particular, we present an iterative algorithm for coherent scatter spectral image reconstruction, which incorporates the attenuation map at different stages, resulting in more accurate coherent scatter spectral images in comparison to their uncorrected counterparts. The algorithm is based on a spectrally grouped edge-preserving regularizer, where the neighborhood edge weights are determined by spatial distances and attenuation values.

  9. Phase Contrast Imaging with Coded Apertures Using Laboratory-Based X-ray Sources

    SciTech Connect

    Ignatyev, K.; Munro, P. R. T.; Speller, R. D.; Olivo, A.

    2011-09-09

    X-ray phase contrast imaging is a powerful technique that allows detection of changes in the phase of x-ray wavefronts as they pass through a sample. As a result, details not visible in conventional x-ray absorption imaging can be detected. Until recently the majority of applications of phase contrast imaging were at synchrotron facilities due to the availability of their high flux and coherence; however, a number of techniques have appeared recently that allow phase contrast imaging to be performed using laboratory sources. Here we describe a phase contrast imaging technique, developed at University College London, that uses two coded apertures. The x-ray beam is shaped by the pre-sample aperture, and small deviations in the x-ray propagation direction are detected with the help of the detector aperture. In contrast with other methods, it has a much more relaxed requirement for the source size (it works with source sizes up to 100 μm). A working prototype coded-aperture system has been built. An x-ray detector with directly deposited columnar CsI has been used to minimize signal spill-over into neighboring pixels. Phase contrast images obtained with the system have demonstrated its effectiveness for imaging low-absorption materials.

  10. Fractal Patterns and Chaos Games

    ERIC Educational Resources Information Center

    Devaney, Robert L.

    2004-01-01

    Teachers incorporate the chaos game and the concept of a fractal into various areas of the algebra and geometry curriculum. The chaos game approach to fractals provides teachers with an opportunity to help students comprehend the geometry of affine transformations.
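The chaos game itself is only a few lines of code: repeatedly jump halfway toward a randomly chosen vertex of a triangle, and the visited points trace out the Sierpinski gasket:

```python
import random

def chaos_game(n_points, seed=0):
    """Play the chaos game on a triangle: from the current point, jump
    halfway toward a randomly chosen vertex and record where you land."""
    rng = random.Random(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = 0.25, 0.25                # arbitrary starting point
    pts = []
    for _ in range(n_points):
        vx, vy = rng.choice(vertices)
        x, y = (x + vx) / 2.0, (y + vy) / 2.0
        pts.append((x, y))
    return pts

pts = chaos_game(5000)
```

Plotting the points (for example with a scatter plot) reveals the gasket; the halfway jump is an affine contraction, which is the connection to the affine transformations mentioned above.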

  11. Building Fractal Models with Manipulatives.

    ERIC Educational Resources Information Center

    Coes, Loring

    1993-01-01

    Uses manipulative materials to build and examine geometric models that simulate the self-similarity properties of fractals. Examples are discussed in two dimensions, three dimensions, and the fractal dimension. Discusses how models can be misleading. (Contains 10 references.) (MDH)

  12. A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging

    PubMed Central

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-01-01

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy. PMID:24129018
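The time-multiplexing Gray-code strategy used for comparison can be sketched compactly: project one binary stripe pattern per bit so that each stripe column receives a unique code word, with adjacent columns differing in exactly one bit (which limits decoding errors at stripe boundaries):

```python
def gray_code_patterns(n_bits):
    """Temporal stripe patterns: column c of pattern b is the b-th bit
    (MSB first) of the Gray code of c."""
    width = 2 ** n_bits
    gray = [c ^ (c >> 1) for c in range(width)]       # binary -> Gray
    return [[(g >> (n_bits - 1 - b)) & 1 for g in gray]
            for b in range(n_bits)]

def decode_column(bits):
    """Recover the stripe index from the bit sequence seen at one pixel."""
    g = 0
    for bit in bits:
        g = (g << 1) | bit
    c = 0
    while g:                                          # Gray -> binary
        c ^= g
        g >>= 1
    return c

patterns = gray_code_patterns(4)
codes = [[patterns[b][c] for b in range(4)] for c in range(16)]
```

Four projected patterns therefore resolve 16 stripe positions; n patterns resolve 2**n.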

  13. Fractals for Geoengineering

    NASA Astrophysics Data System (ADS)

    Oleshko, Klaudia; de Jesús Correa López, María; Romero, Alejandro; Ramírez, Victor; Pérez, Olga

    2016-04-01

    The effectiveness of the fractal toolbox in capturing the scaling (fractal) probability distributions, and simply the fractal statistics, of the main hydrocarbon reservoir attributes was highlighted by Mandelbrot (1995) and confirmed by several researchers (Zhao et al., 2015). Notwithstanding, after more than twenty years the opinion is still common that fractals are not useful for petroleum engineers, and especially for geoengineering (Corbett, 2012). In spite of this negative background, we have successfully applied fractal and multifractal techniques in our project "Petroleum Reservoir as a Fractal Reactor" (2013 to the present). The distinguishing feature of a fractal reservoir is its irregular shapes and rough pore/solid distributions (Siler, 2007), observed across a broad range of scales (from SEM to seismic). We began with a detailed analysis of the Nelson and Kibler (2003) Catalog of Porosity and Permeability, created for core plugs of siliciclastic rocks (around ten thousand data points were compared). We enriched this catalog with more than two thousand data points extracted from the last ten years of publications on PoroPerm (Corbett, 2012) in carbonate deposits, as well as with our own data from one of the PEMEX (Mexico) oil fields. Strong power-law scaling behavior was documented for the major part of these data from geological deposits of contrasting genesis. Based on these results, and taking into account the basic principles and models of the physics of fractals introduced by Per Bak and Kan Chen (1989), we developed new software (Muukíl Kaab) to process multiscale geological and geophysical information and to integrate static geological and petrophysical reservoir models into dynamic ones. A new type of fractal numerical model, with dynamical power-law relations among the shapes and sizes of mesh cells, was designed and calibrated in the studied area. Statistically sound power-law relations were established.
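Power-law scaling of the sort documented for PoroPerm data is commonly checked with a least-squares fit in log-log space. A sketch on synthetic porosity/permeability-style data (the exponent, prefactor, and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_power_law(x, y):
    """Fit y = a * x**b by least squares on (log x, log y); return (a, b)."""
    slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(intercept), slope

# Synthetic data following k = 3 * phi**2.5 with multiplicative noise.
phi = np.linspace(0.05, 0.35, 50)
k = 3.0 * phi ** 2.5 * np.exp(rng.normal(0.0, 0.05, 50))
a, b = fit_power_law(phi, k)
```

A straight line on the log-log plot (equivalently, a stable fitted exponent across subranges) is the signature of the power-law scaling referred to in the abstract; curvature indicates a breakdown of scaling.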

  14. Source-optimized irregular repeat accumulate codes with inherent unequal error protection capabilities and their application to scalable image transmission.

    PubMed

    Lan, Ching-Fu; Xiong, Zixiang; Narayanan, Krishna R

    2006-07-01

    The common practice for achieving unequal error protection (UEP) in scalable multimedia communication systems is to design rate-compatible punctured channel codes before computing the UEP rate assignments. This paper proposes a new approach to designing powerful irregular repeat accumulate (IRA) codes that are optimized for the multimedia source and to exploiting the inherent irregularity in IRA codes for UEP. Using the end-to-end distortion due to the first error bit in channel decoding as the cost function, which is readily given by the operational distortion-rate function of embedded source codes, we incorporate this cost function into the channel code design process via density evolution and obtain IRA codes that minimize the average cost function instead of the usual probability of error. Because the resulting IRA codes have inherent UEP capabilities due to irregularity, the new IRA code design effectively integrates channel code optimization and UEP rate assignments, resulting in source-optimized channel coding or joint source-channel coding. We simulate our source-optimized IRA codes for transporting SPIHT-coded images over a binary symmetric channel with crossover probability p. When p = 0.03 and the channel code length is long (e.g., with one codeword for the whole 512 x 512 image), we are able to operate at only 9.38% away from the channel capacity with code length 132380 bits, achieving the best published results in terms of average peak signal-to-noise ratio (PSNR). Compared to conventional IRA code design (that minimizes the probability of error) with the same code rate, the performance gain in average PSNR from using our proposed source-optimized IRA code design is 0.8759 dB when p = 0.1 and the code length is 12800 bits. As predicted by Shannon's separation principle, we observe that this performance gain diminishes as the code length increases. PMID:16830898

  15. Sparse coding for hyperspectral images using random dictionary and soft thresholding

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender; Iftekharuddin, Khan; Li, Jiang

    2012-05-01

    Many techniques have recently been developed for the classification of hyperspectral images (HSI), including support vector machines (SVMs), neural networks and graph-based methods. Good classification performance depends on a good feature representation of the HSI. Many feature extraction algorithms have been developed, such as principal component analysis (PCA) and independent component analysis (ICA). Sparse coding has recently shown state-of-the-art performance in many applications, including image classification. In this paper, we present a feature extraction method for HSI data motivated by a recently developed sparse coding based image representation technique. Sparse coding consists of a dictionary learning step and an encoding step. In the learning step, we compared two different methods: L1-penalized sparse coding and random selection of the dictionary. In the encoding step, we utilized a soft threshold activation function to obtain feature representations for HSI. We applied the proposed algorithm to a HSI dataset collected at the Kennedy Space Center (KSC) and compared our results with those obtained by a recently proposed method, the supervised locally linear embedding weighted k-nearest-neighbor (SLLE-WkNN) classifier. We achieved better performance on this dataset in terms of overall accuracy with a random dictionary. We conclude that this simple feature extraction framework may lead to more efficient HSI classification systems.
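    The random-dictionary encoding step can be sketched compactly. In this hedged illustration, the dictionary size, threshold, and Gaussian atoms are assumed values, not parameters from the paper; each row of X stands for one pixel's spectrum.

```python
import numpy as np

def soft_threshold_features(X, n_atoms=16, alpha=0.1, seed=0):
    """Random-dictionary sparse coding with a soft threshold.

    A minimal sketch of the encoding step described above: draw random
    unit-norm atoms, correlate each spectrum with them, and shrink
    small correlations to zero with the soft-threshold function.
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[1], n_atoms))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm random atoms
    U = X @ D                                      # correlation with each atom
    return np.sign(U) * np.maximum(np.abs(U) - alpha, 0.0)  # soft threshold
```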

  16. IR performance study of an adaptive coded aperture "diffractive imaging" system employing MEMS "eyelid shutter" technologies

    NASA Astrophysics Data System (ADS)

    Mahalanobis, A.; Reyner, C.; Patel, H.; Haberfelde, T.; Brady, David; Neifeld, Mark; Kumar, B. V. K. Vijaya; Rogers, Stanley

    2007-09-01

    Adaptive coded aperture sensing is an emerging technology enabling real-time, wide-area IR/visible sensing and imaging. Exploiting unique imaging architectures, adaptive coded aperture sensors achieve a wide field of view, near-instantaneous optical path repositioning, and high resolution while reducing the weight, power consumption and cost of air- and space-borne sensors. Such sensors may be used for military, civilian, or commercial applications in all optical bands, but there is special interest in diffraction imaging sensors for IR applications. Extension of coded apertures from the visible to the MWIR introduces the effects of diffraction and other distortions not observed in shorter-wavelength systems. A new approach is being developed under the DARPA/SPO funded LACOSTE (Large Area Coverage Optical Search-while-Track and Engage) program that addresses the effects of diffraction while gaining the benefits of coded apertures, thus providing flexibility to vary resolution, possessing sufficient light gathering power, and achieving a wide field of view (WFOV). The photonic MEMS-eyelid "sub-aperture" array technology is currently being instantiated in this DARPA program as the heart of the system, conducting the flow (heartbeat) of the incoming signal. However, packaging and scalability are critical factors for the MEMS "sub-aperture" technology, which will determine system efficacy as well as military and commercial usefulness. As larger arrays with 1,000,000+ sub-apertures are produced for this LACOSTE effort, the available degrees of freedom (DOF) will enable better spatial resolution, control and refinement of the coding for the system. Studies (SNR simulations) will be performed (based on the adaptive coded aperture algorithm implementation) to determine the efficacy of this diffractive MEMS approach and to determine the available system budget based on simulated bi-static shutter-element DOF degradation (1%, 5%, 10%, 20%, etc.) trials until the degradation level where it is

  17. Fractal dynamics of earthquakes

    SciTech Connect

    Bak, P.; Chen, K.

    1995-05-01

    Many objects in nature, from mountain landscapes to electrical breakdown and turbulence, have a self-similar fractal spatial structure. It seems obvious that to understand the origin of self-similar structures, one must understand the nature of the dynamical processes that created them: temporal and spatial properties must necessarily be completely interwoven. This is particularly true for earthquakes, which have a variety of fractal aspects. The distribution of energy released during earthquakes is given by the Gutenberg-Richter power law. The distribution of epicenters appears to be fractal with dimension D ≈ 1-1.3. The number of aftershocks decays as a function of time according to the Omori power law. There have been several attempts to explain the Gutenberg-Richter law by starting from a fractal distribution of faults or stresses. But this is a chicken-and-egg approach: to explain the Gutenberg-Richter law, one assumes the existence of another power law, the fractal distribution. The authors present results of a simple stick-slip model of earthquakes, which evolves to a self-organized critical state. Emphasis is on demonstrating that empirical power laws for earthquakes indicate that the Earth's crust is at the critical state, with no typical time, space, or energy scale. Of course the model is tremendously oversimplified; however, in analogy with equilibrium phenomena, they do not expect criticality to depend on details of the model (universality).

  18. [Recent progress of research and applications of fractal and its theories in medicine].

    PubMed

    Cai, Congbo; Wang, Ping

    2014-10-01

    Fractal, a mathematical concept, is used to describe an image with self-similarity and scale invariance. Some anatomical structures have been found to have fractal characteristics, such as the cerebral cortex surface, the retinal vessel structure, the cardiovascular network, and trabecular bone. It has been preliminarily confirmed that the three-dimensional structure of cells cultured in vitro can be significantly enhanced by a bionic fractal surface. Moreover, fractal theory in clinical research will help early diagnosis and treatment of diseases, reducing the patient's pain and suffering. The development process of diseases in the human body can be expressed by fractal-theory parameters. It is of considerable significance to retrospectively review the preparation and application of fractal surfaces and their diagnostic value in medicine. This paper reviews applications of fractals and fractal theory in medical science, based on the research achievements in our laboratory.

  19. Pond fractals in a tidal flat.

    PubMed

    Cael, B B; Lambert, Bennett; Bisson, Kelsey

    2015-11-01

    Studies over the past decade have reported power-law distributions for the areas of terrestrial lakes and Arctic melt ponds, as well as fractal relationships between their areas and coastlines. Here we report similar fractal structure of ponds in a tidal flat, thereby extending the spatial and temporal scales on which such phenomena have been observed in geophysical systems. Images taken during low tide of a tidal flat in Damariscotta, Maine, reveal a well-resolved power-law distribution of pond sizes over three orders of magnitude with a consistent fractal area-perimeter relationship. The data are consistent with the predictions of percolation theory for unscreened perimeters and scale-free cluster size distributions and are robust to alterations of the image processing procedure. The small spatial and temporal scales of these data suggest this easily observable system may serve as a useful model for investigating the evolution of pond geometries, while emphasizing the generality of fractal behavior in geophysical surfaces. PMID:26651668
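    The fractal area-perimeter relationship reported above rests on a log-log fit. As a hedged sketch (function name is illustrative; inputs are assumed to be per-pond areas and perimeters from the segmented images), the slope of log P against log A in the relation P ∝ A^(D/2) estimates D/2:

```python
import numpy as np

def area_perimeter_dimension(areas, perimeters):
    """Fit the fractal area-perimeter relation P ∝ A**(D/2).

    The slope of log P versus log A estimates D/2, so the fractal
    dimension of the pond boundaries is twice the fitted slope.
    """
    slope, _ = np.polyfit(np.log(areas), np.log(perimeters), 1)
    return 2.0 * slope
```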

  1. Increasing average power in medical ultrasonic endoscope imaging system by coded excitation

    NASA Astrophysics Data System (ADS)

    Chen, Xiaodong; Zhou, Hao; Wen, Shijie; Yu, Daoyin

    2008-12-01

    A medical ultrasonic endoscope combines electronic endoscopy with ultrasonic sensor technology. The ultrasonic endoscope sends the ultrasonic probe into the coelom through the biopsy channel of the electronic endoscope and rotates it by a micro pre-motor, which requires that the length of the ultrasonic probe be no more than 14 mm and its diameter no more than 2.2 mm. As a result, the ultrasonic excitation power is very low and it is difficult to obtain a sharp image. In order to increase the energy and SNR of the ultrasonic signal, we introduce coded excitation, widely used in radar systems, into the ultrasonic imaging system. Coded excitation uses a long coded pulse to drive the ultrasonic transducer, which increases the average transmitting power accordingly. In this paper, in order to avoid overlap between adjacent echoes, we used a four-bit Barker code to drive the ultrasonic transducer, modulated at the operating frequency of the transducer to improve the emission efficiency. The implementation of coded excitation is closely tied to the transient operating characteristic of the ultrasonic transducer. We first analyze the transient operating characteristic of the transducer excited by a shock pulse δ(t), and then the excitation pulse is generated by a dedicated ultrasonic transmitting circuit composed of an MD1211 and a TC6320. In the final part of the paper, we designed an experiment to validate the coded excitation, with the transducer operating at 5 MHz and a glass filled with ultrasonic coupling liquid as the object. Driven by an FPGA, the ultrasonic transmitting circuit outputs a four-bit Barker excitation pulse modulated at 5 MHz with ±20 V amplitude, consistent with the transient operating characteristic of the ultrasonic transducer after matching by the matching circuit. The echo reflected from the glass possesses the coded character, which is identical to the simulation result from MATLAB. Furthermore, the signal's amplitude is higher.
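    The benefit of Barker-coded excitation can be sketched numerically. This is a hedged, idealised illustration (carrier modulation, transducer response and noise are omitted): pulse compression by a matched filter concentrates the energy of the length-4 code into one peak of height 4, while all sidelobes stay at magnitude 1 or below, which is the defining property of Barker codes.

```python
import numpy as np

# Length-4 Barker code (one of the two standard variants); in the system
# above each chip would modulate one period of the 5 MHz carrier.
barker4 = np.array([1.0, 1.0, -1.0, 1.0])

# Idealised, noise-free received echo: the code embedded in silence.
echo = np.concatenate([np.zeros(8), barker4, np.zeros(8)])

# Matched filtering (pulse compression): correlate the echo with the
# known code, concentrating the long pulse's energy into a narrow peak.
compressed = np.correlate(echo, barker4, mode="same")

peak = compressed.max()                                  # coherent gain = code length
sidelobe = np.abs(compressed[compressed != peak]).max()  # bounded by 1 for Barker codes
```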

  2. In Vivo Imaging Reveals Composite Coding for Diagonal Motion in the Drosophila Visual System

    PubMed Central

    Zhou, Wei; Chang, Jin

    2016-01-01

    Understanding information coding is important for resolving the functions of visual neural circuits. The motion vision system is a classic model for studying information coding as it contains a concise and complete information-processing circuit. In Drosophila, the axon terminals of motion-detection neurons (T4 and T5) project to the lobula plate, which comprises four regions that respond to the four cardinal directions of motion. The lobula plate thus represents a topographic map on a transverse plane. This enables us to study the coding of diagonal motion by investigating its response pattern. By using in vivo two-photon calcium imaging, we found that the axon terminals of T4 and T5 cells in the lobula plate were activated during diagonal motion. Further experiments showed that the response to diagonal motion is distributed over the following two regions compared to the cardinal directions of motion—a diagonal motion selective response region and a non-selective response region—which overlap with the response regions of the two vector-correlated cardinal directions of motion. Interestingly, the sizes of the non-selective response regions are linearly correlated with the angle of the diagonal motion. These results revealed that the Drosophila visual system employs a composite coding for diagonal motion that includes both independent coding and vector decomposition coding. PMID:27695103

  3. Optimised Post-Exposure Image Sharpening Code for L3-CCD Detectors

    SciTech Connect

    Harding, Leon K.; Butler, Raymond F.; Redfern, R. Michael; Sheehan, Brendan J.; McDonald, James

    2008-02-22

    As light from celestial bodies traverses Earth's atmosphere, the wavefronts are distorted by atmospheric turbulence, thereby lowering the angular resolution of ground-based imaging. Rapid time-series imaging enables Post-Exposure Image Sharpening (PEIS) techniques, which employ shift-and-add frame registration to remove the tip-tilt component of the wavefront error, as well as telescope wobble, thus benefiting all observations. Further resolution gains are possible by selecting only frames with the best instantaneous seeing, a technique sometimes called 'Lucky Imaging'. We implemented these techniques in the 1990s with the TRIFFID imaging photon-counting camera and its associated data reduction software. The software was originally written for the time-tagged photon-list data formats recorded by detectors such as the MAMA. This paper describes our deep restructuring of the software to handle the 2-d FITS images produced by Low Light Level CCD (L3-CCD) cameras, which have sufficient time-series resolution (>30 Hz) for PEIS. As before, our code can perform straight frame co-addition, use composite reference stars, perform PEIS under several different algorithms to determine the tip/tilt shifts, store 'quality' and shift information for each frame, perform frame selection, and generate exposure maps for photometric correction. In addition, new code modules apply all 'static' calibrations (bias subtraction, dark subtraction and flat-fielding) to the frames immediately prior to the other algorithms. A unique feature of our PEIS/Lucky Imaging code is the use of bidirectional Wiener filtering. Coupled with the far higher sensitivity of the L3-CCD over the previous TRIFFID detectors, much fainter reference stars and much narrower time windows can be used.
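    The core shift-and-add step can be sketched in a few lines. This is a deliberately minimal illustration, not the paper's code: it registers each frame on its brightest pixel (a stand-in for the reference star) and uses whole-pixel wrap-around shifts, whereas a real PEIS pipeline interpolates sub-pixel shifts and tracks per-frame quality for Lucky-Imaging selection.

```python
import numpy as np

def shift_and_add(frames, ref_pos):
    """Co-add short-exposure frames after removing per-frame tip-tilt.

    Each frame's brightest pixel is assumed to be the reference star;
    its offset from ref_pos is undone with np.roll before accumulation,
    so the star's speckle pattern stacks coherently.
    """
    stack = np.zeros_like(frames[0], dtype=float)
    for frame in frames:
        peak = np.unravel_index(np.argmax(frame), frame.shape)
        shift = (ref_pos[0] - peak[0], ref_pos[1] - peak[1])
        stack += np.roll(frame, shift, axis=(0, 1))
    return stack
```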

  4. Non-uniform Neutron Source Approximation for Iterative Reconstruction of Coded Source Images

    SciTech Connect

    Gregor, Jens; Bingham, Philip R

    2016-01-01

    X-ray and neutron optics both lack ray focusing capabilities. An x-ray source can be made small and powerful enough to facilitate high-resolution imaging while providing adequate flux. This is not yet possible for neutrons. One remedy is to employ a computational imaging technique such as magnified coded source imaging. The greatest challenge associated with successful reconstruction of high-resolution images from such radiographs is to precisely model the flux distribution for complex non-uniform neutron sources. We have developed a framework based on Monte Carlo simulation and iterative reconstruction that facilitates high-resolution coded source neutron imaging. In this paper, we define a methodology to empirically measure and approximate the flux profile of a non-uniform neutron source, and we show how to incorporate the result within the forward model of an iterative reconstruction algorithm. We assess improvement in image quality by comparing reconstructions based respectively on the new empirical forward model and our previous analytic models.

  5. Image subband coding using context-based classification and adaptive quantization.

    PubMed

    Yoo, Y; Ortega, A; Yu, B

    1999-01-01

    Adaptive compression methods have been a key component of many proposed subband (or wavelet) image coding techniques. This paper deals with a particular type of adaptive subband image coding where we focus on the image coder's ability to adjust itself "on the fly" to the spatially varying statistical nature of image contents. This backward adaptation is distinguished from the more frequently used forward adaptation in that forward adaptation selects the best operating parameters from a predesigned set and thus uses a considerable amount of side information in order for the encoder and the decoder to operate with the same parameters. Specifically, we present backward adaptive quantization using a new context-based classification technique which classifies each subband coefficient based on the surrounding quantized coefficients. We couple this classification with online parametric adaptation of the quantizer applied to each class. A simple uniform threshold quantizer is employed as the baseline quantizer for which adaptation is achieved. Our subband image coder based on the proposed adaptive classification quantization idea exhibits excellent rate-distortion performance, in particular at very low rates. For popular test images, it is comparable or superior to most of the state-of-the-art coders in the literature.
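    The key point of backward adaptation is that the class of each coefficient is derived only from values the decoder already has. A hedged sketch (the causal neighbourhood and thresholds here are illustrative, not the paper's):

```python
import numpy as np

def context_class(q, i, j, thresholds=(0.5, 2.0)):
    """Classify coefficient (i, j) from surrounding quantized values.

    Only already-quantized neighbours (above and to the left in a
    raster scan) are used, so the decoder can recompute the same class
    without any side information -- the essence of backward adaptation.
    """
    neighbours = [q[i - 1, j], q[i, j - 1], q[i - 1, j - 1]]
    activity = np.mean(np.abs(neighbours))
    return int(np.searchsorted(thresholds, activity, side="right"))
```

    Each class would then drive its own online-adapted quantizer, e.g. a uniform threshold quantizer whose step size tracks that class's statistics.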

  6. Modelling of a novel x-ray phase contrast imaging technique based on coded apertures

    NASA Astrophysics Data System (ADS)

    Olivo, A.; Speller, R.

    2007-11-01

    X-ray phase contrast imaging is probably the most relevant among emerging x-ray imaging techniques, and it has the proven potential of revolutionizing the field of diagnostic radiology. Impressive images of a wide range of samples have been obtained, mostly at synchrotron radiation facilities. The necessity of relying on synchrotron radiation has prevented to a large extent a widespread diffusion of phase contrast imaging, thus precluding its transfer to clinical practice. A new technique, based on the use of coded apertures, was recently developed at UCL. This technique was demonstrated to provide intense phase contrast signals with conventional x-ray sources and detectors. Unlike other attempts at making phase contrast imaging feasible with conventional sources, the coded-aperture approach does not impose substantial limitations and/or filtering of the radiation beam, and it therefore allows, for the first time, exposures compatible with clinical practice. The technique has been thoroughly modelled, and this paper describes the technique in detail by going through the different steps of the modelling. All the main factors influencing image quality are discussed, alongside the viability of realizing a prototype suitable for clinical use. The model has been experimentally validated and a section of the paper shows the comparison between simulated and experimental results.

  7. Fractal reaction kinetics.

    PubMed

    Kopelman, R

    1988-09-23

    Classical reaction kinetics has been found to be unsatisfactory when the reactants are spatially constrained on the microscopic level by either walls, phase boundaries, or force fields. Recently discovered theories of heterogeneous reaction kinetics have dramatic consequences, such as fractal orders for elementary reactions, self-ordering and self-unmixing of reactants, and rate coefficients with temporal "memories." The new theories were needed to explain the results of experiments and supercomputer simulations of reactions that were confined to low dimensions or fractal dimensions or both. Among the practical examples of "fractal-like kinetics" are chemical reactions in pores of membranes, excitation trapping in molecular aggregates, exciton fusion in composite materials, and charge recombination in colloids and clouds.
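    The "rate coefficients with temporal memories" mentioned above have a compact standard form. As a sketch following Kopelman's usual presentation (symbols are the conventional ones, not drawn from this abstract):

```latex
% Time-dependent rate coefficient with a temporal "memory":
k(t) = k_0\, t^{-h}, \qquad 0 \le h < 1 \quad (h = 0 \text{ recovers classical kinetics}),
% so that a diffusion-limited A + A reaction obeys
\frac{d[A]}{dt} = -k_0\, t^{-h}\, [A]^2,
% which can equivalently be written with an anomalous ("fractal") reaction order
\frac{d[A]}{dt} \propto -[A]^X, \qquad X = 1 + \frac{1}{1-h} \ge 2.
```

    For h = 0 the classical second-order law is recovered; confinement to low or fractal dimensions gives h > 0 and hence the fractal orders X > 2 referred to in the abstract.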

  8. The Classification of HEp-2 Cell Patterns Using Fractal Descriptor.

    PubMed

    Xu, Rudan; Sun, Yuanyuan; Yang, Zhihao; Song, Bo; Hu, Xiaopeng

    2015-07-01

    Indirect immunofluorescence (IIF) with HEp-2 cells is considered a powerful, sensitive and comprehensive technique for analyzing antinuclear autoantibodies (ANAs). The automatic classification of HEp-2 cell images from IIF plays an important role in diagnosis. The fractal dimension can be used for image representation and for quantifying properties such as texture complexity and spatial occupation. In this study, we apply fractal theory to HEp-2 cell staining pattern classification, using a fractal descriptor for the first time in this task, together with a morphological descriptor and a pixel-difference descriptor. The method is applied to the MIVIA data set and uses a support vector machine (SVM) classifier. Experimental results show that combining the fractal descriptor with the morphological and pixel-difference descriptors makes the precision of all six patterns more stable, all above 50%, achieving 67.17% overall accuracy at best with relatively simple feature vectors.
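    A fractal texture descriptor is typically built on a fractal-dimension estimate. As a hedged sketch of the classic box-counting estimator (the paper's actual descriptor construction is richer than this): count occupied boxes N(s) at several box sizes s on a binary mask and fit log N(s) against log s; the dimension is the negated slope.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting fractal dimension of a binary mask."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # Tile the mask into s-by-s boxes and count boxes containing any pixel.
        grid = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(grid.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```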

  9. Image embedded coding with edge preservation based on local variance analysis for mobile applications

    NASA Astrophysics Data System (ADS)

    Luo, Gaoyong; Osypiw, David

    2006-02-01

    Transmitting digital images via mobile devices is often subject to bandwidth constraints that are incompatible with high data rates. Embedded coding for progressive image transmission has recently gained popularity in the image compression community. However, current progressive wavelet-based image coders tend to send information on the lowest-frequency wavelet coefficients first. At very low bit rates, compressed images are therefore dominated by low-frequency information, and the high-frequency components belonging to edges are lost, blurring the signal features. This paper presents a new image coder employing edge preservation based on local variance analysis to improve the visual appearance and recognizability of compressed images. The analysis and compression are performed by dividing an image into blocks. A fast lifting wavelet transform is developed, with the advantages of being computationally efficient and of minimizing boundary effects by changing the wavelet shape for filtering near the boundaries. A modified SPIHT algorithm, with more bits used to encode the wavelet coefficients and fewer bits transmitted in the sorting pass for performance improvement, is implemented to reduce the correlation of the coefficients at scalable bit rates. Local variance estimation and edge strength measurement can effectively determine the best bit allocation for each block to preserve local features, by assigning more bits to blocks containing more edges with higher variance and edge strength. Experimental results demonstrate that the method performs well both visually and in terms of MSE and PSNR. The proposed image coder provides a potential solution with parallel computation and low memory requirements for mobile applications.
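    The variance-driven bit allocation can be sketched simply. This is an illustrative rule in the spirit of the coder above, not its actual scheme: the paper combines variance with an edge-strength measure, whereas this sketch weights blocks by variance alone.

```python
import numpy as np

def allocate_bits(blocks, total_bits):
    """Split a bit budget across image blocks in proportion to variance.

    Blocks with more activity (texture, edges) receive proportionally
    more of the budget; a flat image falls back to an even split.
    """
    weights = np.array([b.var() for b in blocks])
    if weights.sum() == 0:
        weights = np.ones_like(weights)
    shares = weights / weights.sum()
    return np.floor(shares * total_bits).astype(int)
```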

  10. Fractal dimensions of sinkholes

    NASA Astrophysics Data System (ADS)

    Reams, Max W.

    1992-05-01

    Sinkhole perimeters are probably fractals (D = 1.209-1.558) for sinkholes with areas larger than 10,000 m², based on area-perimeter plots of digitized data from karst surfaces developed on six geologic units in the United States. The sites in Florida, Kentucky, Indiana and Missouri were studied using maps with a scale of 1:24,000. Size-number distributions of sinkhole perimeters and areas may also be fractal, although data for small sinkholes are needed for verification. Studies based on small-scale maps are needed to evaluate the number and roughness of small sinkhole populations.

  11. Viscous fingers on fractals

    NASA Astrophysics Data System (ADS)

    Meir, Yigal; Aharony, Amnon

    1989-05-01

    We investigate the problem of flow in porous media near the percolation threshold by studying the generalized model of viscous fingering (VF) on fractal structures. We obtain analytic expressions for the fractal dimensions of the resulting structures, which are in excellent agreement with existing experimental results, and exact relations for the exponent Dt, which describes the scaling with sample size of the time it takes the fluid to cross the sample, in terms of geometrical exponents for various experimental situations. Lastly, we discuss the relation between the continuous viscous fingering model and stochastic processes such as the dielectric breakdown model (DBM) and diffusion limited aggregation (DLA).

  12. Hyperspectral image classification using a spectral-spatial sparse coding model

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender; Zhou, Guoqing; Li, Jiang

    2013-10-01

    We present a sparse coding based spectral-spatial classification model for hyperspectral image (HSI) datasets. The proposed method consists of an efficient sparse coding method in which the l1/lq regularized multi-class logistic regression technique is utilized to achieve a compact representation of hyperspectral image pixels for land cover classification. We applied the proposed algorithm to a HSI dataset collected at the Kennedy Space Center and compared our algorithm to a recently proposed method, the Gaussian process maximum likelihood (GP-ML) classifier. Experimental results show that the proposed method achieves significantly better performance than the GP-ML classifier when training data is limited, and its compact pixel representation leads to more efficient HSI classification systems.

  13. Radiographic image sequence coding using adaptive finite-state vector quantization

    NASA Astrophysics Data System (ADS)

    Joo, Chang-Hee; Choi, Jong S.

    1990-11-01

    Vector quantization is an effective spatial-domain image coding technique at rates under 1.0 bits per pixel. To achieve the same quality at lower rates it is necessary to exploit spatial redundancy over a larger region of pixels than is possible with memoryless VQ. A finite-state vector quantizer can achieve the same performance as memoryless VQ at lower rates. This paper describes an adaptive finite-state vector quantization scheme for radiographic image sequence coding. A simulation experiment has been carried out with 4x4 blocks of pixels from a cardiac angiogram sequence consisting of 40 frames of 256x256 pixels each. At 0.45 bpp the resulting adaptive FSVQ encoder achieves performance comparable to earlier memoryless VQs at 0.8 bpp.
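    The memoryless-VQ baseline that the paper improves on can be sketched directly (a finite-state VQ additionally switches between smaller codebooks according to previously encoded blocks). In this hedged illustration the codebook is assumed given; real systems train it, e.g. with the LBG algorithm.

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Memoryless VQ encoding: map each block to its nearest codeword.

    Each 4x4 block, flattened to a 16-vector, is replaced by the index
    of the closest codebook entry in squared-error distance; only the
    indices need to be transmitted.
    """
    X = np.stack([b.ravel() for b in blocks])                       # (n, 16)
    d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)   # (n, K)
    return d.argmin(axis=1)                                         # codeword indices
```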

  14. Asymmetric neural coding revealed by in vivo calcium imaging in the honey bee brain

    PubMed Central

    Rigosi, Elisa; Haase, Albrecht; Rath, Lisa; Anfora, Gianfranco; Vallortigara, Giorgio; Szyszka, Paul

    2015-01-01

    Left–right asymmetries are common properties of nervous systems. Although lateralized sensory processing has been well studied, information is lacking about how asymmetries are represented at the level of neural coding. Using in vivo functional imaging, we identified a population-level left–right asymmetry in the honey bee's primary olfactory centre, the antennal lobe (AL). When both antennae were stimulated via a frontal odour source, the inter-odour distances between neural response patterns were higher in the right than in the left AL. Behavioural data correlated with the brain imaging results: bees with only their right antenna were better in discriminating a target odour in a cross-adaptation paradigm. We hypothesize that the differences in neural odour representations in the two brain sides serve to increase coding capacity by parallel processing. PMID:25673679

  15. Measurements with Pinhole and Coded Aperture Gamma-Ray Imaging Systems

    SciTech Connect

    Raffo-Caiado, Ana Claudia; Solodov, Alexander A; Abdul-Jabbar, Najeb M; Hayward, Jason P; Ziock, Klaus-Peter

    2010-01-01

    From a safeguards perspective, gamma-ray imaging has the potential to reduce the manpower and cost of effectively locating and monitoring special nuclear material. The purpose of this project was to investigate the performance of pinhole and coded aperture gamma-ray imaging systems at Oak Ridge National Laboratory (ORNL). With the aid of the European Commission Joint Research Centre (JRC), radiometric data will be combined with scans from a three-dimensional design information verification (3D-DIV) system. Measurements were performed at the ORNL Safeguards Laboratory using sources that model holdup in radiological facilities. They showed that for situations with moderate amounts of solid or dense U sources, the coded aperture was able to predict source location and geometry within ~7% of actual values, while the pinhole gave a broad representation of source distributions.

  16. Tunable wavefront coded imaging system based on detachable phase mask: Mathematical analysis, optimization and underlying applications

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Wei, Jingxuan

    2014-09-01

    The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress in Electronics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make the defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) A mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity. (2) The mathematical derivations show that the effective bandwidth of the wavefront coded imaging system is also tunable, by making each component of the detachable phase mask move asymmetrically. An improved Fisher information-based optimization procedure was also designed to ascertain the optimal mask parameters corresponding to a specific bandwidth. (3) Possible applications of the tunable bandwidth are demonstrated by simulated imaging.

  17. Applications of the COG multiparticle Monte Carlo transport code to simulated imaging of complex objects

    SciTech Connect

    Buck, R M; Hall, J M

    1999-06-01

    COG is a major multiparticle simulation code in the LLNL Monte Carlo radiation transport toolkit. It was designed to solve deep-penetration radiation shielding problems in arbitrarily complex 3D geometries, involving coupled transport of photons, neutrons, and electrons. COG was written to provide as much accuracy as the underlying cross-sections will allow, and has a number of variance-reduction features to speed computations. Recently COG has been applied to the simulation of high- resolution radiographs of complex objects and the evaluation of contraband detection schemes. In this paper we will give a brief description of the capabilities of the COG transport code and show several examples of neutron and gamma-ray imaging simulations. Keywords: Monte Carlo, radiation transport, simulated radiography, nonintrusive inspection, neutron imaging.

  18. Post-processing for JPEG 2000 image coding using recursive line filtering based on a fuzzy control model

    NASA Astrophysics Data System (ADS)

    Yao, Susu; Rahardja, Susanto; Lin, Xiao; Lim, Keng Pang; Lu, Zhongkang

    2003-06-01

    In this paper, we propose a new method for removing the coding artifacts that appear in JPEG 2000 coded images. The proposed method uses a fuzzy control model to control the weighting function for different image edges according to the gradient of pixels and membership functions. A regularized post-processing approach and a recursive line algorithm are described in this paper. Experimental results demonstrate that the proposed algorithm can significantly improve image quality in terms of objective and subjective evaluation.

  19. An adaptive source-channel coding with feedback for progressive transmission of medical images.

    PubMed

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding scheme with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated at the receiver. The overall transmitted data can be controlled by the user (clinician). In medical data transmission, it is vital to keep the distortion level under control, as in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly, since the selection of the RoI, its size, the overall code rate, and a number of test features such as the noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both the binary symmetric channel (BSC) and the Rayleigh channel. The experimental results verify the effectiveness of the design.
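    The abstract's rule of varying parity length with RoI proximity and estimated channel noise can be sketched as follows. The weighting function, normalization, and constants here are hypothetical illustrations, not the authors' actual formula:

```python
def parity_length(dist_to_roi, est_error_rate, p_max=32, p_min=4):
    # Hypothetical adaptation rule: allocate more parity bits to subblocks
    # close to the RoI and on noisier channels.
    w = 1.0 / (1.0 + dist_to_roi)        # proximity weight in (0, 1]
    n = min(1.0, est_error_rate * 10.0)  # normalized channel-noise estimate
    return round(p_min + (p_max - p_min) * max(w, n))
```

With this rule, a subblock inside the RoI on a quiet channel still gets maximal protection, while a distant subblock degrades gracefully as parity is reduced.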

  20. An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images

    PubMed Central

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding scheme with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated at the receiver. The overall transmitted data can be controlled by the user (clinician). In medical data transmission, it is vital to keep the distortion level under control, as in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly, since the selection of the RoI, its size, the overall code rate, and a number of test features such as the noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both the binary symmetric channel (BSC) and the Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770

  1. LineCast: line-based distributed coding and transmission for broadcasting satellite images.

    PubMed

    Wu, Feng; Peng, Xiulian; Xu, Jizheng

    2014-03-01

    In this paper, we propose a novel coding and transmission scheme, called LineCast, for broadcasting satellite images to a large number of receivers. The proposed LineCast matches perfectly the line-scanning cameras that are widely adopted in orbiting satellites to capture high-resolution images. On the sender side, each captured line is immediately compressed by a transform-domain scalar modulo quantization. Without syndrome coding, the transmission power is directly allocated to quantized coefficients by scaling the coefficients according to their distributions. Finally, the scaled coefficients are transmitted over a dense constellation. This line-based distributed scheme features low delay, low memory cost, and low complexity. On the receiver side, our proposed line-based prediction is used to generate side information from previously decoded lines, which fully utilizes the correlation among lines. The quantized coefficients are decoded by a linear least-squares estimator from the received data. The image line is then reconstructed by scalar modulo dequantization using the generated side information. Since there is neither syndrome coding nor channel coding, the proposed LineCast allows a large number of receivers to reach qualities matching their channel conditions. Our theoretical analysis shows that the proposed LineCast can achieve Shannon's optimum performance by using a high-dimensional modulo-lattice quantization. Experiments on satellite images demonstrate that it achieves up to 1.9-dB gain over the state-of-the-art 2D broadcasting scheme and a gain of more than 5 dB over JPEG 2000 with forward error correction.
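    The core of scalar modulo quantization with side-information-assisted dequantization can be sketched in one dimension. This is a minimal illustration of the modulo/coset idea, not the paper's transform-domain implementation; `M` is a hypothetical modulo interval, and recovery is exact whenever the prediction error is below M/2:

```python
def mod_reduce(x, M):
    # wrap x into a single modulo interval [-M/2, M/2)
    return ((x + M / 2) % M) - M / 2

def mod_decode(residue, side_info, M):
    # choose the member of the residue's coset closest to the side information
    return side_info + mod_reduce(residue - side_info, M)

M = 2.0
x = 7.3                       # a coefficient of the current line
side = 7.0                    # prediction from previously decoded lines
residue = mod_reduce(x, M)    # what the sender transmits (before power scaling)
x_hat = mod_decode(residue, side, M)
```

Because |x - side| = 0.3 < M/2, the decoder recovers x exactly even though the transmitted residue lies in a small fixed interval.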

  2. Coded aperture imaging with self-supporting uniformly redundant arrays. [Patent application

    DOEpatents

    Fenimore, E.E.

    1980-09-26

    A self-supporting uniformly redundant array pattern for coded aperture imaging. The invention utilizes holes which are an integer factor smaller in each direction than the holes in conventional URA patterns. A balanced correlation function is generated in which holes are represented by 1's, nonholes by -1's, and supporting area by 0's. The self-supporting array can be used for low-energy applications where substrates would greatly reduce throughput.
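    Balanced correlation decoding can be illustrated in one dimension with the (7, 3, 1) cyclic difference set {1, 2, 4}. This toy sketch omits the 0-weighted supporting area and uses hypothetical names; the flat -1 sidelobes with a single peak are the URA property the abstract relies on:

```python
def decode_weights(mask):
    # balanced correlation weights: hole -> +1, non-hole -> -1
    return [1 if m else -1 for m in mask]

def reconstruct(shadow, mask):
    n = len(mask)
    g = decode_weights(mask)
    # cyclic cross-correlation of the recorded shadow with the weights
    return [sum(shadow[j] * g[(j - k) % n] for j in range(n)) for k in range(n)]

# 1D URA built from the (7, 3, 1) cyclic difference set {1, 2, 4}
mask = [1 if i in (1, 2, 4) else 0 for i in range(7)]
source_pos = 3
# shadow cast by a point source: the mask pattern shifted to the source position
shadow = [mask[(j - source_pos) % 7] for j in range(7)]
image = reconstruct(shadow, mask)
```

The reconstruction peaks (value 3, the number of holes) at the true source position and is exactly -1 everywhere else.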

  3. Robust rate-control for wavelet-based image coding via conditional probability models.

    PubMed

    Gaubatz, Matthew D; Hemami, Sheila S

    2007-03-01

    Real-time rate-control for wavelet image coding requires characterization of the rate required to code quantized wavelet data. An ideal robust solution can be used with any wavelet coder and any quantization scheme. A large number of wavelet quantization schemes (perceptual and otherwise) are based on scalar dead-zone quantization of wavelet coefficients. A key to performing rate-control is, thus, fast, accurate characterization of the relationship between rate and quantization step size, the R-Q curve. A solution is presented that uses two invocations of the coder to estimate the slope of each R-Q curve via probability modeling. The method is robust to choices of probability models, quantization schemes and wavelet coders. Because of this extreme robustness to probability modeling, a fast approximation to spatially adaptive probability modeling can be used in the solution as well. With respect to achieving a target rate, the proposed approach and the associated fast approximation yield average percentage errors around 0.5% and 1.0% on images in the test set. By comparison, 2-coding-pass rho-domain modeling yields errors around 2.0%, and post-compression rate-distortion optimization yields average errors of around 1.0% at rates below 0.5 bits-per-pixel (bpp) that decrease to about 0.5% at 1.0 bpp; both methods exhibit more competitive performance on the larger images. The proposed method and fast approximation are also similar in speed to the other state-of-the-art methods. In addition to possessing speed and accuracy, the proposed method does not require any training and can maintain precise control over wavelet step sizes, which adds flexibility to a wavelet-based image-coding system.
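    The underlying R-Q problem, finding a dead-zone step size that hits a target rate, can be sketched with a first-order entropy proxy for coded rate. This toy bisection is an assumption-laden stand-in for the paper's probability-model-based slope estimation (the distribution, scale, and tolerance below are illustrative):

```python
import math
import random

def deadzone_quantize(coeffs, step):
    # int() truncates toward zero, which doubles the width of the zero bin
    return [int(c / step) for c in coeffs]

def empirical_rate(symbols):
    # first-order entropy in bits per symbol, a crude proxy for coded rate
    n = len(symbols)
    counts = {}
    for s in symbols:
        counts[s] = counts.get(s, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def step_for_rate(coeffs, target, lo=1e-3, hi=1e3):
    # rate falls monotonically as the step grows, so bisect on the step size
    for _ in range(60):
        mid = (lo + hi) / 2
        if empirical_rate(deadzone_quantize(coeffs, mid)) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

random.seed(0)
# Laplacian-like wavelet coefficients via inverse-CDF sampling
coeffs = [math.copysign(-5.0 * math.log(1 - random.random()),
                        random.random() - 0.5) for _ in range(4000)]
step = step_for_rate(coeffs, target=1.0)
achieved = empirical_rate(deadzone_quantize(coeffs, step))
```

A real coder replaces the entropy proxy with the actual coded rate; the paper's contribution is estimating the R-Q slope from probability models so that the search needs only two coding passes.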

  4. Using locality-constrained linear coding in automatic target detection of HRS images

    NASA Astrophysics Data System (ADS)

    Rezaee, M.; Mirikharaji, Z.; Zhang, Y.

    2016-04-01

    Automatic detection of targets with complicated shapes in high spatial resolution images is an ongoing challenge in remote sensing image processing. This is because most methods use spectral or texture information, which is not sufficient for detecting complex shapes. In this paper, a new detection framework, based on Spatial Pyramid Matching (SPM) and Locality-constrained Linear Coding (LLC), is proposed to solve this problem, and exemplified using airplane shapes. The process starts with partitioning the image into sub-regions and generating a unique histogram for the local features of each sub-region. Then, linear Support Vector Machines (SVMs) are used to detect objects based on a pyramid-matching kernel, which analyses the descriptors inside patches at different resolutions. In order to generate the histogram, first a point feature detector (e.g. SIFT) is applied to the patches, and then a quantization process is used to select local features. In this step, the k-means method is used in conjunction with the locality-constrained linear coding method. The LLC forces the coefficient matrix in the quantization process to be both local and sparse. As a result, the speed of the method improves around 24 times in comparison to using sparse coding for quantization. Quantitative analysis also shows improvement over using k-means alone, while the accuracy is similar to that obtained with sparse coding. Rotation and shift of the desired object have no effect on the obtained results. The speed and accuracy of this algorithm for high spatial resolution images make it suitable for real-world applications.

  5. Establishing a MOEMS process to realise microshutters for coded aperture imaging applications

    NASA Astrophysics Data System (ADS)

    McNie, Mark E.; Davies, Rhodri R.; Johnson, Ashley; Price, Nicola; Bennett, Charlotte R.; Slinger, Christopher W.; Hardy, Busbee; Hames, Greg; Monk, Demaul; Rogers, Stanley

    2011-09-01

    Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-ray or gamma-ray bands. Recently, applications have emerged in the visible and infrared bands for low-cost lens-less imaging systems, and system studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene - requiring a reconfigurable mask. Previously reported work focused on realising such a mask to operate in the mid-IR band based on polysilicon micro-optoelectro-mechanical systems (MOEMS) technology and its integration with ASIC drive electronics using a tiled approach to scale to large format masks. The MOEMS chips employ interference effects to modulate incident light - achieved by tuning a large array of asymmetric Fabry-Perot optical cavities via an applied voltage using row/column addressing. In this paper we report on establishing the manufacturing process for such MOEMS microshutter chips in a commercial MEMS foundry, MEMSCAP - including the associated challenges in moving the technology out of the development laboratory into manufacturing. Small scale (7.3 x 7.3 mm) and full size (22 x 22 mm) MOEMS chips have been produced that are equivalent to those produced at QinetiQ. Optical and electrical testing has shown that these are suitable for integration into large format reconfigurable masks for coded aperture imaging applications.

  6. JPEG image transmission over mobile network with an efficient channel coding and interleaving

    NASA Astrophysics Data System (ADS)

    El-Bendary, Mohsen A. M. Mohamed; Abou El-Azm, Atef E.; El-Fishawy, Nawal A.; Al-Hosarey, Farid Shawki M.; Eltokhy, Mostafa A. R.; Abd El-Samie, Fathi E.; Kazemian, H. B.

    2012-11-01

    This article studies the improvement of coloured JPEG image transmission over a mobile wireless personal area network through Bluetooth. It uses several types of enhanced data rate and asynchronous connectionless packets. It presents a proposed chaotic interleaving technique for improving the transmission of coloured images over burst-error environments by merging it with an error control scheme. The computational complexity of the different error control schemes used is considered. A comparison study between different image transmission scenarios is held to choose an effective technique. The simulation experiments are carried out over a correlated fading channel using the widely accepted Jakes' model. Our experiments reveal that the proposed chaotic interleaving technique enhances the quality of the received coloured image. Our simulation results show that convolutional codes with longer constraint lengths are effective if their complexity is ignored. They also reveal that the standard error control scheme of old Bluetooth versions is ineffective for coloured image transmission over a mobile Bluetooth network. Finally, the proposed scenarios combining the standard error control scheme with the chaotic interleaver perform better than the convolutional codes while reducing complexity.
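    A chaotic interleaver can be built by ranking the orbit of a chaotic map, which scatters burst errors across the stream. The logistic map below is one common choice used for illustration, not necessarily the authors' exact scheme, and the seed parameters are hypothetical:

```python
def chaotic_permutation(n, x0=0.7, r=3.99):
    # iterate the logistic map x <- r*x*(1-x); ranking the orbit values
    # by size yields a pseudo-random, key-reproducible permutation
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return sorted(range(n), key=lambda i: xs[i])

def interleave(data, perm):
    return [data[i] for i in perm]

def deinterleave(data, perm):
    out = [None] * len(perm)
    for j, i in enumerate(perm):
        out[i] = data[j]
    return out

perm = chaotic_permutation(20)
sent = interleave(list(range(20)), perm)
```

Both ends regenerate the same permutation from the shared parameters (x0, r), so no permutation table needs to be transmitted; a burst of consecutive channel errors lands on widely separated positions after deinterleaving.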

  7. Encryption of QR code and grayscale image in interference-based scheme with high quality retrieval and silhouette problem removal

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Wang, Hongjuan; Wang, Zhipeng; Gong, Qiong; Wang, Danchen

    2016-09-01

    In optical interference-based encryption (IBE) schemes, the currently available methods must employ iterative algorithms in order to encrypt two images and retrieve cross-talk-free decrypted images. In this paper, we show that this goal can be achieved via an analytical process if one of the two images is a QR code. For decryption, the QR code is decrypted in the conventional architecture and the result has a noisy appearance. Nevertheless, the robustness of the QR code against noise enables accurate acquisition of its content from the noisy retrieval, so that the primary QR code can be exactly regenerated. Thereafter, a novel optical architecture is proposed to recover the grayscale image with the aid of the QR code. In addition, the proposal totally eliminates the silhouette problem existing in previous IBE schemes, and its effectiveness and feasibility have been demonstrated by numerical simulations.
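    The analytical step at the heart of interference-based encryption is the decomposition of each complex field value into two unit-modulus (phase-only) values whose interference reproduces it. A minimal per-pixel sketch, assuming the target value c satisfies |c| <= 2:

```python
import cmath
import math

def phase_masks(c):
    # split c into two phasors with exp(i*t1) + exp(i*t2) == c:
    # exp(i(phi+d)) + exp(i(phi-d)) = 2*cos(d)*exp(i*phi), so choose
    # d = arccos(|c|/2) and phi = arg(c)
    phi = cmath.phase(c)
    delta = math.acos(abs(c) / 2)
    return phi + delta, phi - delta

t1, t2 = phase_masks(0.8 + 0.3j)
recovered = cmath.exp(1j * t1) + cmath.exp(1j * t2)
```

Applied pixel-wise to the (suitably normalized) target field, this yields the two phase-only masks directly, which is why no iteration is needed.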

  8. Analysis and coding technique based on computational intelligence methods and image-understanding architecture

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2000-05-01

    Human vision involves higher-level knowledge and top-down processes for resolving ambiguity and uncertainty in real images. Even very advanced low-level image processing offers no advantage without a highly effective knowledge-representation and reasoning system, which is the core of the image-understanding problem. Methods of image analysis and coding are directly based on methods of knowledge representation and processing. This article proposes such models and mechanisms in the form of a Spatial Turing Machine that, in place of symbols and tapes, works with hierarchical networks represented dually as discrete and continuous structures. Such networks can perform both graph and diagrammatic operations, which are the basis of intelligence. Computational intelligence methods transform continuous image information into discrete structures, making it available for analysis. The article shows that symbols naturally emerge in such networks, enabling symbolic operations. This framework naturally combines machine learning, classification, and analogy with induction, deduction, and other higher-level reasoning methods. Based on these principles, an image-understanding system handles ambiguity and uncertainty in real images more flexibly and does not require supercomputers. This opens the way to new technologies in computer vision and image databases.

  9. A novel three-dimensional image reconstruction method for near-field coded aperture single photon emission computerized tomography

    PubMed Central

    Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa

    2009-01-01

    Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered subset expectation and maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769

  10. Classical Liquids in Fractal Dimension.

    PubMed

    Heinen, Marco; Schnyder, Simon K; Brady, John F; Löwen, Hartmut

    2015-08-28

    We introduce fractal liquids by generalizing classical liquids of integer dimensions d = 1, 2, 3 to a noninteger dimension d_l. The particles composing the liquid are fractal objects and their configuration space is also fractal, with the same dimension. Realizations of our generic model system include microphase separated binary liquids in porous media, and highly branched liquid droplets confined to a fractal polymer backbone in a gel. Here, we study the thermodynamics and pair correlations of fractal liquids by computer simulation and semianalytical statistical mechanics. Our results are based on a model where fractal hard spheres move on a near-critical percolating lattice cluster. The predictions of the fractal Percus-Yevick liquid integral equation compare well with our simulation results.

  11. Epp: A C++ EGSnrc user code for x-ray imaging and scattering simulations

    SciTech Connect

    Lippuner, Jonas; Elbakri, Idris A.; Cui Congwu; Ingleby, Harry R.

    2011-03-15

    Purpose: Easy particle propagation (Epp) is a user code for the EGSnrc code package based on the C++ class library egspp. A main feature of egspp (and Epp) is the ability to use analytical objects to construct simulation geometries. The authors developed Epp to facilitate the simulation of x-ray imaging geometries, especially in the case of scatter studies. While direct use of egspp requires knowledge of C++, Epp requires no programming experience. Methods: Epp's features include calculation of dose deposited in a voxelized phantom and photon propagation to a user-defined imaging plane. Projection images of primary, single Rayleigh scattered, single Compton scattered, and multiple scattered photons may be generated. Epp input files can be nested, allowing for the construction of complex simulation geometries from more basic components. To demonstrate the imaging features of Epp, the authors simulate 38 keV x rays from a point source propagating through a water cylinder 12 cm in diameter, using both analytical and voxelized representations of the cylinder. The simulation generates projection images of primary and scattered photons at a user-defined imaging plane. The authors also simulate dose scoring in the voxelized version of the phantom in both Epp and DOSXYZnrc and examine the accuracy of Epp using the Kawrakow-Fippel test. Results: The results of the imaging simulations with Epp using voxelized and analytical descriptions of the water cylinder agree within 1%. The results of the Kawrakow-Fippel test suggest good agreement between Epp and DOSXYZnrc. Conclusions: Epp provides the user with useful features, including the ability to build complex geometries from simpler ones and the ability to generate images of scattered and primary photons. There is no inherent computational time saving arising from Epp, except for those arising from egspp's ability to use analytical representations of simulation geometries. 
Epp agrees with DOSXYZnrc in dose calculation.

  12. Biophilic fractals and the visual journey of organic screen-savers.

    PubMed

    Taylor, R P; Sprott, J C

    2008-01-01

    Computers have led to the remarkable popularity of mathematically generated fractal patterns. Fractals have also assumed a rapidly expanding role as an art form. Due to their growing impact on cultures around the world and their prevalence in nature, fractals constitute a central feature of our daily visual experiences throughout our lives. This intimate association raises a crucial question - does exposure to fractals have a positive impact on our mental and physical condition? The question also gives readers of this journal an opportunity to have some visual fun. Each year a different nonlinear-inspired artist is featured on the front cover of the journal. This year, Scott Draves's fractal artworks continue this tradition. In May 2007, we selected twenty of Draves's artworks and invited readers to vote for their favorites from this selection. The most popular images will feature on the front covers this year. In this article, we discuss fractal aesthetics and Draves's remarkable images.

  13. A Fractal Perspective on Scale in Geography

    NASA Astrophysics Data System (ADS)

    Jiang, Bin; Brandt, S.

    2016-06-01

    Scale is a fundamental concept that has attracted persistent attention in the geography literature over the past several decades. However, it creates enormous confusion and frustration, particularly in the context of geographic information science, because of scale-related issues such as image resolution and the modifiable areal unit problem (MAUP). This paper argues that the confusion and frustration arise mainly from Euclidean geometric thinking, in which locations, directions, and sizes are considered absolute, and it is time to reverse this conventional thinking. Hence, we review fractal geometry, together with its underlying way of thinking, and compare it to Euclidean geometry. Under the paradigm of Euclidean geometry, everything is measurable, no matter how big or small. However, geographic features, due to their fractal nature, are essentially unmeasurable, or their sizes depend on scale. For example, the length of a coastline, the area of a lake, and the slope of a topographic surface are all scale-dependent. Seen from the perspective of fractal geometry, many scale issues, such as the MAUP, are inevitable. They appear unsolvable, but can be dealt with. To deal effectively with scale-related issues, we introduce topological and scaling analyses based on street-related concepts such as natural streets, street blocks, and natural cities. We further contend that spatial heterogeneity, or the fractal nature of geographic features, is the first and foremost effect of two spatial properties, because it is general and universal across all scales. Keywords: Scaling, spatial heterogeneity, conundrum of length, MAUP, topological analysis

  14. A Fractal Excursion.

    ERIC Educational Resources Information Center

    Camp, Dane R.

    1991-01-01

    After introducing the two-dimensional Koch curve, which is generated by simple recursions on an equilateral triangle, the process is extended to three dimensions with simple recursions on a regular tetrahedron. Included, for both fractal sequences, are iterative formulae, illustrations of the first several iterations, and a sample PASCAL program.…
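    The abstract mentions a sample PASCAL program; an equivalent recursion for the two-dimensional Koch curve can be sketched in Python, using complex numbers to represent points in the plane:

```python
import math

def koch(p0, p1, depth):
    """Return the vertices of a Koch-curve approximation from p0 to p1."""
    if depth == 0:
        return [p0, p1]
    d = (p1 - p0) / 3
    a, b = p0 + d, p0 + 2 * d
    # apex of the equilateral bump: rotate the middle third by 60 degrees
    peak = a + d * complex(0.5, math.sqrt(3) / 2)
    pts = []
    for s, e in ((p0, a), (a, peak), (peak, b), (b, p1)):
        pts.extend(koch(s, e, depth - 1)[:-1])  # drop shared endpoints
    pts.append(p1)
    return pts

pts = koch(0 + 0j, 1 + 0j, 2)
length = sum(abs(q - p) for p, q in zip(pts, pts[1:]))
```

Each iteration replaces every segment by four segments one third as long, so the depth-n curve has 4^n segments and total length (4/3)^n times the base length, which is why the limit curve has infinite length.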

  15. Focus on Fractals.

    ERIC Educational Resources Information Center

    Marks, Tim K.

    1992-01-01

    Presents a three-lesson unit that uses fractal geometry to measure the coastline of Massachusetts. Two lessons provide hands-on activities utilizing compass and grid methods to perform the measurements and the third lesson analyzes and explains the results of the activities. (MDH)
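    The grid method used in such coastline measurements can be sketched as box counting: count the occupied cells at two ruler sizes and take the slope of log N against log (1/s) as a two-point estimate of the fractal dimension. The sanity check below uses a straight "coastline", whose dimension should be close to 1:

```python
import math

def box_count(points, s):
    # number of grid cells of side s touched by the point set
    return len({(math.floor(x / s), math.floor(y / s)) for x, y in points})

def dimension(points, s1, s2):
    # slope of log N(s) versus log(1/s) between two ruler sizes
    n1, n2 = box_count(points, s1), box_count(points, s2)
    return math.log(n2 / n1) / math.log(s1 / s2)

# straight-line "coastline" sampled densely along y = x
line = [(i / 1000, i / 1000) for i in range(1001)]
d = dimension(line, 0.1, 0.01)
```

For a ragged coastline the same two-point estimate comes out above 1, which is the effect the lessons measure with compass and grid.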

  16. Burgers Turbulence on a Fractal Fourier set

    NASA Astrophysics Data System (ADS)

    Buzzicotti, Michele; Biferale, Luca; Frisch, Uriel; Ray, Samriddhi

    2014-11-01

    We present a systematic investigation of the effects introduced by a fractal decimation in Fourier space on stochastically forced one-dimensional Burgers equations. The aim is to understand the statistical robustness of the shock singularity under different reductions of the number of the degrees of freedom. We perform a series of direct numerical simulations by using a pseudo-spectral code with resolution up to 16384 points and for various dimensions of the fractal set of Fourier modes DF <1. We present results concerning the scaling properties of statistical measures in real space and the probability distribution functions of local and non-local triads in Fourier space. Partially supported by ERC Grant No 339032.

  17. SU-C-201-03: Coded Aperture Gamma-Ray Imaging Using Pixelated Semiconductor Detectors

    SciTech Connect

    Joshi, S; Kaye, W; Jaworski, J; He, Z

    2015-06-15

    Purpose: Improved localization of gamma-ray emissions from radiotracers is essential to the progress of nuclear medicine. Polaris is a portable, room-temperature operated gamma-ray imaging spectrometer composed of two 3×3 arrays of thick CdZnTe (CZT) detectors, which detect gammas between 30keV and 3MeV with energy resolution of <1% FWHM at 662keV. Compton imaging is used to map out source distributions in 4-pi space; however, is only effective above 300keV where Compton scatter is dominant. This work extends imaging to photoelectric energies (<300keV) using coded aperture imaging (CAI), which is essential for localization of Tc-99m (140keV). Methods: CAI, similar to the pinhole camera, relies on an attenuating mask, with open/closed elements, placed between the source and position-sensitive detectors. Partial attenuation of the source results in a “shadow” or count distribution that closely matches a portion of the mask pattern. Ideally, each source direction corresponds to a unique count distribution. Using backprojection reconstruction, the source direction is determined within the field of view. The knowledge of 3D position of interaction results in improved image quality. Results: Using a single array of detectors, a coded aperture mask, and multiple Co-57 (122keV) point sources, image reconstruction is performed in real-time, on an event-by-event basis, resulting in images with an angular resolution of ∼6 degrees. Although material nonuniformities contribute to image degradation, the superposition of images from individual detectors results in improved SNR. CAI was integrated with Compton imaging for a seamless transition between energy regimes. Conclusion: For the first time, CAI has been applied to thick, 3D position sensitive CZT detectors. Real-time, combined CAI and Compton imaging is performed using two 3×3 detector arrays, resulting in a source distribution in space. This system has been commercialized by H3D, Inc. and is being acquired for

  18. Design and implementation of a Cooke triplet based wave-front coded super-resolution imaging system

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Wei, Jingxuan

    2015-09-01

    Wave-front coding is a powerful technique that could be used to extend the DOF (depth of focus) of incoherent imaging system. It is the suitably designed phase mask that makes the system defocus invariant and it is the de-convolution algorithm that generates the clear image with large DOF. Compared with the traditional imaging system, the point spread function (PSF) in wave-front coded imaging system has quite a large support size and this characteristic makes wave-front coding be capable of realizing super-resolution imaging without replacing the current sensor with one of smaller pitch size. An amplification based single image super-resolution reconstruction procedure has been specifically designed for wave-front coded imaging system and its effectiveness has been demonstrated experimentally. A Cooke Triplet based wave-front coded imaging system is established. For a focal length of 50 mm and f-number 4.5, objects within the range [5 m, ∞] could be clearly imaged, which indicates a DOF extension ratio of approximately 20. At the same time, the proposed processing procedure could produce at least 3× resolution improvement, with the quality of the reconstructed super-resolution image approaching the diffraction limit.

  19. Segmentation of MR images via discriminative dictionary learning and sparse coding: application to hippocampus labeling.

    PubMed

    Tong, Tong; Wolz, Robin; Coupé, Pierrick; Hajnal, Joseph V; Rueckert, Daniel

    2013-08-01

    We propose a novel method for the automatic segmentation of brain MRI images by using discriminative dictionary learning and sparse coding techniques. In the proposed method, dictionaries and classifiers are learned simultaneously from a set of brain atlases, which can then be used for the reconstruction and segmentation of an unseen target image. The proposed segmentation strategy is based on image reconstruction, which is in contrast to most existing atlas-based labeling approaches that rely on comparing image similarities between atlases and target images. In addition, we propose a Fixed Discriminative Dictionary Learning for Segmentation (F-DDLS) strategy, which can learn dictionaries offline and perform segmentations online, enabling a significant speed-up in the segmentation stage. The proposed method has been evaluated for the hippocampus segmentation of 80 healthy ICBM subjects and 202 ADNI images. The robustness of the proposed method, especially of our F-DDLS strategy, was validated by training and testing on different subject groups in the ADNI database. The influence of different parameters was studied and the performance of the proposed method was also compared with that of the nonlocal patch-based approach. The proposed method achieved a median Dice coefficient of 0.879 on 202 ADNI images and 0.890 on 80 ICBM subjects, which is competitive compared with state-of-the-art methods.

  20. A comparison of natural-image-based models of simple-cell coding.

    PubMed

    Willmore, B; Watters, P A; Tolhurst, D J

    2000-01-01

    Models such as that of Olshausen and Field (O&F, 1997 Vision Research 37 3311-3325) and principal components analysis (PCA) have been used to model simple-cell receptive fields, and to try to elucidate the statistical principles underlying visual coding in area V1. They connect the statistical structure of natural images with the statistical structure of the coding used in V1. The O&F model has created particular interest because the basis functions it produces resemble the receptive fields of simple cells. We evaluate these models in terms of their sparseness and dispersal, both of which have been suggested as desirable for efficient visual coding. However, both attributes have been defined ambiguously in the literature, and we have been obliged to formulate specific definitions in order to allow any comparison between models at all. We find that both attributes are strongly affected by any preprocessing (e.g. spectral pseudo-whitening or a logarithmic transformation) which is often applied to images before they are analysed by PCA or the O&F model. We also find that measures of sparseness are affected by the size of the filters--PCA filters with small receptive fields appear sparser than PCA filters with larger spatial extent. Finally, normalisation of the means and variances of filters influences measures of dispersal. It is necessary to control for all of these factors before making any comparisons between different models. Having taken these factors into account, we find that the code produced by the O&F model is somewhat sparser than the code produced by PCA. However, the difference is rather smaller than might have been expected, and a measure of dispersal is required to distinguish clearly between the two models. PMID:11144817

  1. Imaging of human tooth using ultrasound based chirp-coded nonlinear time reversal acoustics.

    PubMed

    Dos Santos, Serge; Prevorovsky, Zdenek

    2011-08-01

    Human tooth imaging sonography is investigated experimentally with an acousto-optic noncoupling set-up based on the chirp-coded nonlinear time reversal acoustic concept. The complexity of the tooth internal structure (enamel-dentine interface, cracks between internal tubules) is analyzed by adapting nonlinear elastic wave spectroscopy (NEWS) with the objective of damage tomography. Optimization of excitations using intrinsic symmetries, such as time reversal (TR) invariance, reciprocity, and correlation properties, is then proposed and implemented experimentally. The proposed medical application of this TR-NEWS approach is implemented on a third molar human tooth and constitutes an alternative to noncoupling echodentography techniques. A 10 MHz bandwidth ultrasonic instrumentation has been developed, including a laser vibrometer and a 20 MHz contact piezoelectric transducer. The calibrated chirp-coded TR-NEWS imaging of the tooth is obtained using symmetrized excitations, pre- and post-signal processing, and the highly sensitive, previously calibrated 14-bit resolution TR-NEWS instrumentation. A nonlinear signature arising from the symmetry properties is observed experimentally in the tooth using this bi-modal TR-NEWS imaging before and after the focusing induced by the time-compression process. The TR-NEWS polar B-scan of the tooth is described and suggested as a potential application for modern echodentography. It constitutes the basis of self-consistent harmonic imaging sonography for monitoring crack propagation in the dentine, which governs the structural health of the human tooth.

  2. Mobile phone imaging module with extended depth of focus based on axial irradiance equalization phase coding

    NASA Astrophysics Data System (ADS)

    Sung, Hsin-Yueh; Chen, Po-Chang; Chang, Chuan-Chung; Chang, Chir-Weei; Yang, Sidney S.; Chang, Horng

    2011-01-01

    This paper presents a mobile phone imaging module with extended depth of focus (EDoF) obtained by axial irradiance equalization (AIE) phase coding. From radiation energy transfer along the optical axis with constant irradiance, a focal-depth enhancement solution is derived. We apply the axial irradiance equalization phase coding to design a two-element 2-megapixel mobile phone lens that trades off focus-like aberrations such as field curvature, astigmatism, and longitudinal chromatic defocus. The design produces modulation transfer functions (MTF) and phase transfer functions (PTF) with substantially similar characteristics at different field and defocus positions within the Nyquist pass band. Measurement results are also presented and compared with the design results. Next, for the EDoF mobile phone camera imaging system, we present a digital decoding design method and calculate a minimum mean square error (MMSE) filter. The filter is then applied to correct the substantially similar blur across the image. Finally, the blurred and de-blurred images are demonstrated.

  3. Fractal dimension based corneal fungal infection diagnosis

    NASA Astrophysics Data System (ADS)

    Balasubramanian, Madhusudhanan; Perkins, A. Louise; Beuerman, Roger W.; Iyengar, S. Sitharama

    2006-08-01

    We present a fractal measure based pattern classification algorithm for automatic feature extraction and identification of fungus associated with an infection of the cornea of the eye. A white-light confocal microscope image of suspected fungus exhibited locally linear and branching structures. The pixel intensity variation across the width of a fungal element was gaussian. Linear features were extracted using a set of 2D directional matched gaussian-filters. Portions of fungus profiles that were not in the same focal plane appeared relatively blurred. We use gaussian filters of standard deviation slightly larger than the width of a fungus to reduce discontinuities. Cell nuclei of cornea and nerves also exhibited locally linear structure. Cell nuclei were excluded by their relatively shorter lengths. Nerves in the cornea exhibited less branching compared with the fungus. Fractal dimensions of the locally linear features were computed using a box-counting method. A set of corneal images with fungal infection was used to generate class-conditional fractal measure distributions of fungus and nerves. The a priori class-conditional densities were built using an adaptive-mixtures method to reflect the true nature of the feature distributions and improve the classification accuracy. A maximum-likelihood classifier was used to classify the linear features extracted from test corneal images as 'normal' or 'with fungal infiltrates', using the a priori fractal measure distributions. We demonstrate the algorithm on the corneal images with culture-positive fungal infiltrates. The algorithm is fully automatic and will help diagnose fungal keratitis by generating a diagnostic mask of locations of the fungal infiltrates.
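
    The box-counting estimate of fractal dimension used above can be sketched as follows; the binary test images and box sizes are illustrative assumptions:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary image: count occupied
    boxes N(s) for each box size s, then fit log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s   # trim so the mask tiles into s x s boxes
        w = (mask.shape[1] // s) * s
        tiles = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(tiles.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

filled = np.ones((64, 64), dtype=bool)   # a filled square: dimension ~ 2
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True                       # a straight line: dimension ~ 1
```

    Branching fungal structures fall between these extremes, which is what lets the classifier separate them from the less-branched nerves.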

  4. Fractal aggregates in tennis ball systems

    NASA Astrophysics Data System (ADS)

    Sabin, J.; Bandín, M.; Prieto, G.; Sarmiento, F.

    2009-09-01

    We present a new practical exercise to explain the mechanisms of aggregation of some colloids which are otherwise not easy to understand. We have used tennis balls to simulate, in a visual way, the aggregation of colloids under reaction-limited colloid aggregation (RLCA) and diffusion-limited colloid aggregation (DLCA) regimes. We have used the images of the cluster of balls, following Forrest and Witten's pioneering studies on the aggregation of smoke particles, to estimate their fractal dimension.

  5. A Hashing-Based Search Algorithm for Coding Digital Images by Vector Quantization

    NASA Astrophysics Data System (ADS)

    Chu, Chen-Chau

    1989-11-01

    This paper describes a fast algorithm to compress digital images by vector quantization. Vector quantization relies heavily on searching to build codebooks and to classify blocks of pixels into code indices. The proposed algorithm uses hashing, localized search, and multi-stage search to accelerate the searching process. The average of pixel values in a block is used as the feature for hashing and intermediate screening. Experimental results using monochrome images are presented. This algorithm compares favorably with other methods with regard to processing time, and has comparable or better mean square error measurements than some of them. The major advantages of the proposed algorithm are its speed, good quality of the reconstructed images, and flexibility.
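
    A minimal sketch of the hashing idea, using the block mean as the hash feature as the paper describes; the bucket count, search radius, and fallback full search are illustrative assumptions:

```python
import numpy as np
from collections import defaultdict

def build_table(codebook, n_buckets=16):
    """Hash each codeword into a bucket keyed by its quantized mean value
    (pixel values assumed to lie in 0..255)."""
    table = defaultdict(list)
    for i, m in enumerate(codebook.mean(axis=1)):
        table[int(m) * n_buckets // 256].append(i)
    return table

def encode_block(block, codebook, table, n_buckets=16, radius=1):
    """Search only codewords whose mean is close to the block's mean;
    fall back to a full search when the nearby buckets are empty."""
    key = int(block.mean()) * n_buckets // 256
    cand = [i for k in range(key - radius, key + radius + 1)
            for i in table.get(k, [])]
    if not cand:
        cand = list(range(len(codebook)))
    dists = ((codebook[cand] - block) ** 2).sum(axis=1)
    return cand[int(np.argmin(dists))]

rng = np.random.default_rng(1)
codebook = rng.uniform(0, 255, size=(64, 16))   # 64 codewords, 4x4 blocks
table = build_table(codebook)
```

    The intermediate screening by mean value prunes most distance computations while leaving the nearest codeword reachable in the common case.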

  6. Fractal Particles: Titan's Thermal Structure and IR Opacity

    NASA Technical Reports Server (NTRS)

    McKay, C. P.; Rannou, P.; Guez, L.; Young, E. F.; DeVincenzi, Donald (Technical Monitor)

    1998-01-01

    Titan's haze particles are the principal opacity at solar wavelengths. Most past work in modeling these particles has assumed spherical particles. However, observational evidence strongly favors fractal shapes for the haze particles. We consider the implications of fractal particles for the thermal structure and near-infrared opacity of Titan's atmosphere. We find that assuming fractal particles with optical properties based on laboratory tholin material, and with a production rate that allows a match to the geometric albedo, results in warmer tropospheric and surface temperatures compared to spherical particles. In the near infrared (1-3 microns), the predicted opacity of the fractal particles is up to a factor of two less than that of spherical particles. This has implications for the ability of Cassini to image Titan's surface at 1 micron.

  7. Joint source-channel coding with allpass filtering source shaping for image transmission over noisy channels

    NASA Astrophysics Data System (ADS)

    Cai, Jianfei; Chen, Chang W.

    2000-04-01

    In this paper, we propose a fixed-length robust joint source-channel coding (JSCC) scheme for image transmission over noisy channels. Three channel models are studied: binary symmetric channels (BSC) and additive white Gaussian noise (AWGN) channels for memoryless channels, and Gilbert-Elliott channels (GEC) for bursty channels. We derive an explicit operational rate-distortion (R-D) function, which represents an end-to-end error measurement that includes errors due to both quantization and channel noise. In particular, we are able to incorporate the channel transition probability and channel bit error rate into the R-D function in the case of bursty channels. With the operational R-D function, bits are allocated not only among different subsources, but also between source coding and channel coding so that, under a fixed transmission rate, an optimum tradeoff between source coding accuracy and channel error protection can be achieved. This JSCC scheme is also integrated with allpass filtering source shaping to further improve robustness against channel errors. Experimental results show that the proposed scheme achieves not only high PSNR performance but also excellent perceptual quality. Compared with state-of-the-art JSCC schemes, the proposed scheme outperforms most of them, especially when channel mismatch occurs.

  8. A QR Code Based Zero-Watermarking Scheme for Authentication of Medical Images in Teleradiology Cloud

    PubMed Central

    Seenivasagam, V.; Velumani, R.

    2013-01-01

    Healthcare institutions adopt cloud-based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT)-Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to patient data, encoded into a Quick Response (QR) code, serve as the watermark. In the proposed scheme, the medical image is not subjected to degradations due to watermarking. Patient authentication and authorized access to patient data are realized by combining a Secret Share with the Master Share constructed from invariant features of the medical image. Hu's invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with the Checkmark software and is found to be robust to both geometric and non-geometric attacks. PMID:23970943
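
    The rotation invariance that makes Hu's moments suitable for constructing the Master Share can be sketched with the first two invariants; this is an illustrative computation, not the paper's watermarking pipeline:

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a grayscale image."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return (((x - xc) ** p) * ((y - yc) ** q) * img).sum()

def hu_first_two(img):
    """First two Hu invariants from normalized central moments eta_pq."""
    m00 = img.sum()
    eta = lambda p, q: central_moment(img, p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

img = np.random.default_rng(2).random((8, 10))   # arbitrary grayscale patch
original = hu_first_two(img)
rotated = hu_first_two(np.rot90(img))            # same invariants expected
```

    Because the invariants survive rotation (and, with the normalization above, translation and scaling), a share built from them remains valid even when the archived image is geometrically transformed.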

  11. The fractal dimension of cell membrane correlates with its capacitance: A new fractal single-shell model

    SciTech Connect

    Wang Xujing; Becker, Frederick F.; Gascoyne, Peter R. C.

    2010-12-15

    The scale-invariant property of the cytoplasmic membrane of biological cells is examined by applying the Minkowski-Bouligand method to digitized scanning electron microscopy images of the cell surface. The membrane is found to exhibit fractal behavior, and the derived fractal dimension gives a good description of its morphological complexity. Furthermore, we found that this fractal dimension correlates well with the specific membrane dielectric capacitance derived from electrorotation measurements. Based on these findings, we propose a new fractal single-shell model to describe the dielectrics of mammalian cells, and compare it with the conventional single-shell model (SSM). We found that while both models fit the experimental data well, the new model is able to eliminate the discrepancy between the measured dielectric properties of cells and those predicted by the SSM.

  12. Prediction of image partitions using Fourier descriptors: application to segmentation-based coding schemes.

    PubMed

    Marqués, F; Llorens, B; Gasull, A

    1998-01-01

    This paper presents a prediction technique for partition sequences. It uses a region-by-region approach that consists of four steps: region parameterization, region prediction, region ordering, and partition creation. The time evolution of each region is divided into two types: regular motion and shape deformation. Both types of evolution are parameterized by means of the Fourier descriptors and they are separately predicted in the Fourier domain. The final predicted partition is built from the ordered combination of the predicted regions, using morphological tools. With this prediction technique, two different applications are addressed in the context of segmentation-based coding approaches. Noncausal partition prediction is applied to partition interpolation, and examples using complete partitions are presented. In turn, causal partition prediction is applied to partition extrapolation for coding purposes, and examples using complete partitions as well as sequences of binary images--shape information in video object planes (VOPs)--are presented. PMID:18276271
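
    The Fourier-descriptor parameterization of a closed region contour can be sketched as follows; representing the contour as complex samples and truncating to low-order coefficients is the standard construction, with the circle test shape as an illustrative assumption:

```python
import numpy as np

def truncate_descriptors(contour, keep=1):
    """Fourier descriptors of a closed contour (complex samples x + iy),
    keeping only coefficients with |frequency| <= keep."""
    F = np.fft.fft(contour)
    freqs = np.fft.fftfreq(len(contour)) * len(contour)
    F[np.abs(freqs) > keep] = 0.0
    return F

t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
circle = np.exp(1j * t)     # a unit circle needs only one harmonic
recon = np.fft.ifft(truncate_descriptors(circle, keep=1))
```

    In the prediction setting, each region's descriptors would be extrapolated (or interpolated) over time in the Fourier domain before the predicted regions are ordered and recombined into a partition.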

  13. Fractal dust grains in plasma

    SciTech Connect

    Huang, F.; Peng, R. D.; Liu, Y. H.; Chen, Z. Y.; Ye, M. F.; Wang, L.

    2012-09-15

    Fractal dust grains of different shapes are observed in a radially confined magnetized radio frequency plasma. The fractal dimensions of the dust structures in two-dimensional (2D) horizontal dust layers are calculated, and their evolution in the dust growth process is investigated. It is found that as the dust grains grow the fractal dimension of the dust structure decreases. In addition, the fractal dimension of the center region is larger than that of the entire region in the 2D dust layer. In the initial growth stage, the small dust particulates at a high number density in a 2D layer tend to fill space as a normal surface with fractal dimension D = 2. The mechanism of the formation of fractal dust grains is discussed.

  14. Fractals in geology and geophysics

    NASA Technical Reports Server (NTRS)

    Turcotte, Donald L.

    1989-01-01

    A fractal distribution is defined such that the number of objects N with a characteristic size greater than r scales as N ∝ r^(-D). The frequency-size distributions for islands, earthquakes, fragments, ore deposits, and oil fields often satisfy this relation. This application illustrates a fundamental aspect of fractal distributions: scale invariance. The requirement of an object to define a scale in photographs of many geological features is one indication of the wide applicability of scale invariance to geological problems; scale invariance can lead to fractal clustering. Geophysical spectra can also be related to fractals; these are self-affine fractals rather than self-similar fractals. Examples include the Earth's topography and geoid.
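
    A minimal sketch of how the fractal dimension D can be recovered from frequency-size data obeying N ∝ r^(-D); the synthetic data set is an illustrative assumption:

```python
import numpy as np

# synthetic frequency-size data obeying N(>r) = C * r**(-D) with D = 1.8
r = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
N = 1000.0 * r ** -1.8

# on log-log axes the relation is a straight line whose slope is -D
slope, _ = np.polyfit(np.log(r), np.log(N), 1)
D_est = -slope
```

    Real frequency-size catalogs (earthquakes, fragments, oil fields) scatter around the line, so the fitted slope is an estimate rather than an exact value.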

  15. The Calculation of Fractal Dimension in the Presence of Non-Fractal Clutter

    NASA Technical Reports Server (NTRS)

    Herren, Kenneth A.; Gregory, Don A.

    1999-01-01

    The area of information processing has grown dramatically over the last 50 years. In the areas of image processing and information storage, the technology requirements have far outpaced the ability of the community to meet demands. The need for faster recognition algorithms and more efficient storage of large quantities of data has forced the user to accept less-than-lossless retrieval of that data for analysis. In addition to clutter that is not the object of interest in the data set, throughput requirements often force the user to accept "noisy" data and to tolerate the clutter inherent in that data. It has been shown that some of this clutter, both intentional clutter (clouds, trees, etc.) and the noise introduced into the data by processing requirements, can be modeled as fractal or fractal-like. Traditional methods using Fourier deconvolution on these sources of noise in frequency space lead to loss of signal and can, in many cases, completely eliminate the target of interest. The parameters that characterize fractal-like noise (predominantly the fractal dimension) have been investigated, and a technique to reduce or eliminate noise from real scenes has been developed. Examples of clutter-reduced images are presented.

  16. Simulation of image formation in x-ray coded aperture microscopy with polycapillary optics.

    PubMed

    Korecki, P; Roszczynialski, T P; Sowa, K M

    2015-04-01

    In x-ray coded aperture microscopy with polycapillary optics (XCAMPO), the microstructure of focusing polycapillary optics is used as a coded aperture and enables depth-resolved x-ray imaging at a resolution better than the focal spot dimensions. Improvements in resolution and the development of 3D encoding procedures require a simulation model that can predict the outcome of XCAMPO experiments. In this work we introduce a model of image formation in XCAMPO which enables the calculation of XCAMPO datasets for arbitrary positions of the object relative to the focal plane, as well as the incorporation of optics imperfections. In the model, the exit surface of the optics is treated as a micro-structured x-ray source that illuminates a periodic object. This makes it possible to express the intensity of XCAMPO images as a convolution series and to perform simulations by means of fast Fourier transforms. For non-periodic objects, the model can be applied by enforcing artificial periodicity and setting the spatial period larger than the field of view. Simulations are verified by comparison with experimental data.

  17. Reduction of blocking artifact based on edge information in discrete cosine transform-coded images

    NASA Astrophysics Data System (ADS)

    Kwon, Goo-Rak; Lama, Ramesh Kumar; Pyun, Jae-Young; Lee, Sang-Woong

    2011-09-01

    We propose a new method for the reduction of blocking artifacts present in low bit-rate coded images. The algorithm performs the deblocking operation in two modes that are determined by the number of edge pixels around the block boundary, calculated by applying the Roberts edge filter. An appropriate filtering operation is performed for each mode in both the horizontal and vertical directions. First, when the mode is associated with a smooth region, a strong filtering operation is applied, because flat regions are more sensitive to the human visual system. In the second mode, an adaptive low-pass filter based on pixel behavior around the block boundaries is applied. This filter reduces the blocking artifact without introducing undesired blurring effects, while the original image edges are preserved. Although the proposed approach is simple and operates in the spatial domain, experimental results show that it improves both the subjective and objective quality of coded images with various features.
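
    A toy sketch of the mode decision: a Roberts cross gradient counts edge pixels, and a block with few edges is treated as smooth. The threshold and the whole-block edge count (rather than boundary-only pixels) are illustrative simplifications of the method described above:

```python
import numpy as np

def roberts_magnitude(img):
    """Roberts cross gradient magnitude over the valid interior."""
    g1 = img[:-1, :-1] - img[1:, 1:]
    g2 = img[:-1, 1:] - img[1:, :-1]
    return np.hypot(g1, g2)

def deblocking_mode(block, threshold=30.0, max_edges=2):
    """Choose 'smooth' (strong low-pass filtering is safe) when the block
    contains few edge pixels, 'adaptive' otherwise."""
    edges = roberts_magnitude(block.astype(float)) > threshold
    return "smooth" if edges.sum() <= max_edges else "adaptive"

flat = np.full((8, 8), 128.0)     # uniform block: strong filtering is safe
step = np.zeros((8, 8))
step[:, 4:] = 255.0               # a real edge: filter adaptively, preserve it
```

    Distinguishing the two cases is what lets the filter smooth coding artifacts in flat regions without blurring genuine image edges.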

  18. Algorithm for Image Retrieval Based on Edge Gradient Orientation Statistical Code

    PubMed Central

    Zeng, Jiexian; Zhao, Yonggang; Li, Weiye

    2014-01-01

    The image edge gradient direction not only contains important shape information, but also has a simple, low-complexity characteristic. Since edge gradient direction histograms and the edge direction autocorrelogram are not rotation invariant, we put forward an image retrieval algorithm based on an edge gradient orientation statistical code (hereinafter referred to as EGOSC), which applies the statistical treatment used for eight-neighborhood chain codes of edge direction to the statistics of the edge gradient direction. First, we construct the n-direction vector and impose a maximal-summation restriction on the EGOSC to ensure that the algorithm is effectively invariant to rotation. Then, we use the Euclidean distance of the edge gradient direction entropy to measure shape similarity, so that the method is not sensitive to scaling, color, and illumination changes. The experimental results and the algorithm analysis demonstrate that the algorithm can be used for content-based image retrieval and gives good retrieval results. PMID:24892074

  19. Langevin Equation on Fractal Curves

    NASA Astrophysics Data System (ADS)

    Satin, Seema; Gangal, A. D.

    2016-07-01

    We analyze the random motion of a particle on a fractal curve using the Langevin approach. This involves defining a new velocity in terms of the mass of the fractal curve, as defined in recent work. The geometry of the fractal curve plays an important role in this analysis. A Langevin equation with a particular model of noise is proposed and solved using techniques of the Fα-Calculus.

  20. Fractal probability laws.

    PubMed

    Eliazar, Iddo; Klafter, Joseph

    2008-06-01

    We explore six classes of fractal probability laws defined on the positive half-line: Weibull, Fréchet, Lévy, hyper Pareto, hyper beta, and hyper shot noise. Each of these classes admits a unique statistical power-law structure, and is uniquely associated with a certain operation of renormalization. All six classes turn out to be one-dimensional projections of underlying Poisson processes which, in turn, are the unique fixed points of Poissonian renormalizations. The first three classes correspond to linear Poissonian renormalizations and are intimately related to extreme value theory (Weibull, Fréchet) and to the central limit theorem (Lévy). The other three classes correspond to nonlinear Poissonian renormalizations. Pareto's law--commonly perceived as the "universal fractal probability distribution"--is merely a special case of the hyper Pareto class.

  1. Fractal rigidity in migraine

    NASA Astrophysics Data System (ADS)

    Latka, Miroslaw; Glaubic-Latka, Marta; Latka, Dariusz; West, Bruce J.

    2004-04-01

    We study the middle cerebral artery blood flow velocity (MCAfv) in humans using transcranial Doppler ultrasonography (TCD). Scaling properties of time series of the axial flow velocity averaged over a cardiac beat interval may be characterized by two exponents. The short time scaling exponent (STSE) determines the statistical properties of fluctuations of blood flow velocities in short-time intervals while the Hurst exponent describes the long-term fractal properties. In many migraineurs the value of the STSE is significantly reduced and may approach that of the Hurst exponent. This change in dynamical properties reflects the significant loss of short-term adaptability and the overall hyperexcitability of the underlying cerebral blood flow control system. We call this effect fractal rigidity.

  2. Fractal Analysis on Morphology of Laser Irradiated Vanadium Surfaces Under Different Ambient

    NASA Astrophysics Data System (ADS)

    Szkiva, Zs.; Bálint, Á. M.; Füle, M.; Nánai, L.

    2013-12-01

    Vanadium surfaces irradiated by a pulsed laser under different ambients were prepared, and their morphology was characterized by applying the fractal dimension analysis method to scanning electron microscopy (SEM) images. In the presence of different ambients, self-periodic and self-similar surface patterns (e.g. dots, islands, and pins) grew and appeared in different shapes. The fractal dimension (FD, Df) of the developed vanadium nanostructure was calculated by the fractal box-counting method (FBM) and shows a dependence on the type of ambient and on the number of laser shots.

  3. Fractal multifiber microchannel plates

    NASA Technical Reports Server (NTRS)

    Cook, Lee M.; Feller, W. B.; Kenter, Almus T.; Chappell, Jon H.

    1992-01-01

    The construction and performance of microchannel plates (MCPs) made using fractal tiling methods are reviewed. MCPs with 40 mm active areas having near-perfect channel ordering were produced. These plates demonstrated electrical performance characteristics equivalent to conventionally constructed MCPs. They are apparently the first MCPs with a sufficiently high degree of order to permit single-channel addressability. Potential applications for these devices and the prospects for further development are discussed.

  4. Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis

    PubMed Central

    Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina

    2015-01-01

    AIM To investigate and quantify changes in the branching patterns of the retinal vascular network in diabetes using the fractal analysis method. METHODS This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina, was examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. RESULTS It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using fractal geometry. The average fractal dimension D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for mild non-proliferative DR (NPDR) images (segmented and skeletonized versions), and higher than the corresponding values for moderate NPDR images (segmented and skeletonized versions). The lowest values were found for severe NPDR images (segmented and skeletonized versions). CONCLUSION The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantitatively characterized by means of the fractal dimension.
Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from

  5. Dimension of fractal basin boundaries

    SciTech Connect

    Park, B.S.

    1988-01-01

    In many dynamical systems, multiple attractors coexist for certain parameter ranges. The set of initial conditions that asymptotically approach each attractor is its basin of attraction. These basins can be intertwined on arbitrarily small scales. A basin boundary can be either smooth or fractal. Dynamical systems that have a fractal basin boundary show final-state sensitivity to the initial conditions. A measure of this sensitivity (the uncertainty exponent {alpha}) is related to the dimension of the basin boundary by d = D - {alpha}, where D is the dimension of the phase space and d is the dimension of the basin boundary. At metamorphosis values of the parameter, a smooth basin boundary may convert to a fractal one (smooth-fractal metamorphosis), or a fractal basin boundary may convert to another fractal boundary characteristically different from the previous one (fractal-fractal metamorphosis). The dimension changes continuously with the parameter except at the metamorphosis values, where the dimension of the basin boundary jumps discontinuously. We chose the Henon map and the forced damped pendulum to investigate this. Scaling of the basin volumes near the metamorphosis values of the parameter is also being studied for the Henon map. Observations are explained analytically by using a low-dimensional model map.

  6. Fractals in physiology and medicine

    NASA Technical Reports Server (NTRS)

    Goldberger, Ary L.; West, Bruce J.

    1987-01-01

    The paper demonstrates how the nonlinear concepts of fractals, as applied in physiology and medicine, can provide an insight into the organization of such complex structures as the tracheobronchial tree and heart, as well as into the dynamics of healthy physiological variability. Particular attention is given to the characteristics of computer-generated fractal lungs and heart and to fractal pathologies in these organs. It is shown that alterations in fractal scaling may underlie a number of pathophysiological disturbances, including sudden cardiac death syndromes.

  7. Investigating Fractal Geometry Using LOGO.

    ERIC Educational Resources Information Center

    Thomas, David A.

    1989-01-01

    Discusses dimensionality in Euclidean geometry. Presents methods to produce fractals using LOGO. Uses the idea of self-similarity. Included are program listings and suggested extension activities. (MVL)

  8. Fractal trace of earthworms

    NASA Astrophysics Data System (ADS)

    Burdzy, Krzysztof; Hołyst, Robert; Pruski, Łukasz

    2013-05-01

    We investigate a process of random walks of a point particle on a two-dimensional square lattice of size n×n with periodic boundary conditions. A fraction p⩽20% of the lattice is occupied by holes (p represents macroporosity). A site not occupied by a hole is occupied by an obstacle. Upon a random step of the walker, a number of obstacles, M, can be pushed aside. The system approaches equilibrium in (n ln n)^2 steps. We determine the distribution of M pushed in a single move at equilibrium. The distribution F(M) is given by M^γ, where γ = -1.18 for p = 0.1, decreasing to γ = -1.28 for p = 0.01. Irrespective of the initial distribution of holes on the lattice, the final equilibrium distribution of holes forms a fractal with fractal dimension changing from a = 1.56 for p = 0.20 to a = 1.42 for p = 0.001 (for n = 4,000). The trace of a random walker forms a distribution with expected fractal dimension 2.

  9. Darwinian Evolution and Fractals

    NASA Astrophysics Data System (ADS)

    Carr, Paul H.

    2009-05-01

    Did nature's beauty emerge by chance or was it intelligently designed? Richard Dawkins asserts that evolution is blind aimless chance. Michael Behe believes, on the contrary, that the first cell was intelligently designed. The scientific evidence is that nature's creativity arises from the interplay between chance AND design (laws). Darwin's ``On the Origin of Species,'' published 150 years ago in 1859, characterized evolution as the interplay between variations (symbolized by dice) and the natural selection law (design). This is evident in recent discoveries in DNA, Mandelbrot's Fractal Geometry of Nature, and the success of the genetic design algorithm. Algorithms for generating fractals have the same interplay between randomness and law as evolution. Fractal statistics, which are not completely random, characterize phenomena such as fluctuations in the stock market, the Nile River, rainfall, and tree rings. As chaos theorist Joseph Ford put it: God plays dice, but the dice are loaded. Thus Darwin, in discovering the evolutionary interplay between variations and natural selection, was throwing God's dice!

  10. Fully automatic catheter localization in C-arm images using ℓ1-sparse coding.

    PubMed

    Milletari, Fausto; Belagiannis, Vasileios; Navab, Nassir; Fallavollita, Pascal

    2014-01-01

    We propose a method to perform automatic detection and tracking of electrophysiology (EP) catheters in C-arm fluoroscopy sequences. Our approach does not require any initialization, is completely automatic, and can concurrently track an arbitrary number of overlapping catheters. After a pre-processing step, we employ sparse coding to first detect candidate catheter tips, and subsequently detect and track the catheters. The proposed technique is validated on 2835 C-arm images, which include 39,690 manually selected ground-truth catheter electrodes. Results demonstrated sub-millimeter detection accuracy and real-time tracking performances.

  11. The Use of Fractals for the Study of the Psychology of Perception:

    NASA Astrophysics Data System (ADS)

    Mitina, Olga V.; Abraham, Frederick David

    The present article deals with the perception of time (subjective assessment of temporal intervals), complexity, and the aesthetic attractiveness of visual objects. We conducted experimental research to construct functional relations between objective parameters of fractal complexity (fractal dimension and Lyapunov exponent) and the subjective perception of that complexity. As stimulus material we used a program based on Sprott's algorithms for the generation of fractals and the calculation of their mathematical characteristics. For the research, 20 fractals were selected with fractal dimensions varying from 0.52 to 2.36 and Lyapunov exponents from 0.01 to 0.22. We conducted two experiments: (1) A total of 20 fractals were shown to 93 participants. The fractals were displayed on a computer screen for randomly chosen time intervals ranging from 5 to 20 s. For each fractal displayed, the participant rated the complexity and attractiveness of the fractal on a ten-point scale and estimated the duration of the presentation of the stimulus. Each participant also completed several personality tests (Cattell and others). The main purpose of this experiment was the analysis of the correlation between personality characteristics and the subjective perception of complexity, attractiveness, and duration of the fractal's presentation. (2) The same 20 fractals were shown to 47 participants as they were forming on the computer screen for a fixed interval. Participants again estimated the subjective complexity and attractiveness of the fractals. The hypothesis on the applicability of the Weber-Fechner law to the perception of time, complexity, and subjective attractiveness was confirmed for measures of the dynamical properties of fractal images.

  12. Use of fluorescent proteins and color-coded imaging to visualize cancer cells with different genetic properties.

    PubMed

    Hoffman, Robert M

    2016-03-01

    Fluorescent proteins are very bright and available in spectrally distinct colors, enabling the imaging of color-coded cancer cells growing in vivo and therefore the distinction of cancer cells with different genetic properties. Non-invasive and intravital imaging of cancer cells with fluorescent proteins allows the visualization of distinct genetic variants of cancer cells down to the cellular level in vivo. Cancer cells with increased or decreased ability to metastasize can be distinguished in vivo. Gene exchange in vivo, which enables low-metastatic cancer cells to convert to high-metastatic ones, can be imaged in vivo with color coding. Cancer stem-like and non-stem cells can be distinguished in vivo by color-coded imaging. These properties also demonstrate the vast superiority of imaging cancer cells in vivo with fluorescent proteins over photon counting of luciferase-labeled cancer cells.

  13. Fractal analysis of bone structure with applications to osteoporosis and microgravity effects

    SciTech Connect

    Acharya, R.S.; Swarnarkar, V.; Krishnamurthy, R.; Hausman, E.; LeBlanc, A.; Lin, C.; Shackelford, L.

    1995-12-31

    The authors characterize the trabecular structure with the aid of fractal dimension. The authors use Alternating Sequential filters to generate a nonlinear pyramid for fractal dimension computations. The authors do not make any assumptions of the statistical distributions of the underlying fractal bone structure. The only assumption of the scheme is the rudimentary definition of self similarity. This allows them the freedom of not being constrained by statistical estimation schemes. With mathematical simulations, the authors have shown that the ASF methods outperform other existing methods for fractal dimension estimation. They have shown that the fractal dimension remains the same when computed with both the X-Ray images and the MRI images of the patella. They have shown that the fractal dimension of osteoporotic subjects is lower than that of the normal subjects. In animal models, the authors have shown that the fractal dimension of osteoporotic rats was lower than that of the normal rats. In a 17 week bedrest study, they have shown that the subject's prebedrest fractal dimension is higher than that of the postbedrest fractal dimension.

  14. Fractal analysis of Xylella fastidiosa biofilm formation

    NASA Astrophysics Data System (ADS)

    Moreau, A. L. D.; Lorite, G. S.; Rodrigues, C. M.; Souza, A. A.; Cotta, M. A.

    2009-07-01

    We have investigated the growth process of Xylella fastidiosa biofilms inoculated on glass. The size and the distance between biofilms were analyzed by optical images; a fractal analysis was carried out using scaling concepts and atomic force microscopy images. We observed that different biofilms show similar fractal characteristics, although morphological variations can be identified for different biofilm stages. Two types of structural patterns are suggested from the observed fractal dimensions Df. In the initial and final stages of biofilm formation, Df is 2.73±0.06 and 2.68±0.06, respectively, while in the maturation stage, Df=2.57±0.08. These values suggest that the biofilm growth can be understood as an Eden model in the former case, while diffusion-limited aggregation (DLA) seems to dominate the maturation stage. Changes in the correlation length parallel to the surface were also observed; these results were correlated with the biofilm matrix formation, which can hinder nutrient diffusion and thus create conditions to drive DLA growth.

  15. A novel data hiding scheme for block truncation coding compressed images using dynamic programming strategy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.

    2015-03-01

    Data hiding is a technique that embeds information into digital cover data. Work on this technique has concentrated on the uncompressed spatial domain, and it is considered more challenging to perform in the compressed domain, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicated that the proposed scheme obtains both higher hiding capacity and higher hiding efficiency than four other existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieves a bit rate as low as that of the original BTC algorithm.
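    For concreteness, here is a minimal Python sketch of the baseline being improved upon: classic BTC encoding of one block, followed by naive LSB substitution into the two quantization levels. The paper's actual contribution — the dynamic-programming search for an optimal bijective LSB mapping — is not reproduced; the sample block values are made up:

```python
import numpy as np

def btc_encode_block(block):
    """Classic BTC: threshold at the block mean; keep two levels + a bitmap."""
    m = block.mean()
    bitmap = block >= m
    hi = int(round(block[bitmap].mean())) if bitmap.any() else int(round(m))
    lo = int(round(block[~bitmap].mean())) if (~bitmap).any() else int(round(m))
    return hi, lo, bitmap

def embed_bit(level, bit):
    """Naive LSB substitution into one quantization level -- the baseline
    that the paper's dynamic-programming mapping improves upon."""
    return (level & ~1) | bit

block = np.array([[12, 40, 36, 10],
                  [14, 44, 38, 12],
                  [11, 42, 37, 13],
                  [15, 41, 39, 11]])
hi, lo, bitmap = btc_encode_block(block)
stego_hi, stego_lo = embed_bit(hi, 1), embed_bit(lo, 0)
```

The decoder reconstructs the block from (hi, lo, bitmap) as usual; the embedded bits are simply the LSBs of the two levels, so each level changes by at most 1 gray level.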

  16. Indexing the output points of an LBVQ used for image transform coding.

    PubMed

    Khataie, M; Soleymani, M R

    2000-01-01

    A new method for indexing the points of a lattice-based vector quantizer (LBVQ) used to quantize the DCT coefficients of images is presented. With this method, a large number of lattice points can be selected as codewords. As a result, the quality of the compressed data can be very high. The problem is that the large number of points results in a high bit rate. To reduce the bit rate, a shorter representation is assigned to the more frequently used lattice points. These points are grouped and a prefix code is used to index these lattice points. Our method outperforms JPEG, particularly in the case of images with high frequency components. PMID:18255454

  17. Gamma ray imaging using coded aperture masks: a computer simulation approach.

    PubMed

    Jimenez, J; Olmos, P; Pablos, J L; Perez, J M

    1991-02-10

    The gamma-ray imaging using coded aperture masks as focusing elements is an extended technique for static position sensitive detectors. Several transfer functions have been proposed to implement mathematically the set of holes in the mask, the uniformly redundant array collimator being the most popular design. A considerable amount of work has been done to improve the digital methods to deconvolve the gamma-ray image, formed at the detector plane, with this transfer function. Here we present a study of the behavior of these techniques when applied to the geometric shadows produced by a set of point emitters. Comparison of the shape of the object reconstructed from these shadows with that resulting from the analytical reconstruction is performed, defining the validity ranges of the usual algorithmic approximations reported in the literature. Finally, several improvements are discussed. PMID:20582025
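    The decoding step studied above can be imitated numerically: the shadowgram of a set of point emitters is the object circularly convolved with the mask pattern, and correlating it with a matched decoding array recovers the emitters as peaks. A hedged sketch, using a random binary mask as a stand-in for a uniformly redundant array and made-up emitter positions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
mask = (rng.random((n, n)) < 0.5).astype(float)   # 1 = hole in the coded mask
decoder = 2.0 * mask - 1.0                        # balanced decoding array

# Two point emitters; the detector plane sees the object circularly
# convolved with the mask pattern (the geometric shadow).
obj = np.zeros((n, n))
obj[10, 20] = 1.0
obj[40, 45] = 0.5
shadow = np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(mask)).real

# Decoding = circular cross-correlation of the shadowgram with the decoder.
# The mask-decoder correlation is a near-delta, so emitters reappear as peaks.
recon = np.fft.ifft2(np.fft.fft2(shadow) * np.conj(np.fft.fft2(decoder))).real
peak = np.unravel_index(recon.argmax(), recon.shape)
```

A true URA makes the mask-decoder correlation an exact delta plus a flat pedestal; the random mask used here only approximates that, which is why the background is noisy.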

  18. Visible-infrared achromatic imaging by wavefront coding with wide-angle automobile camera

    NASA Astrophysics Data System (ADS)

    Ohta, Mitsuhiko; Sakita, Koichi; Shimano, Takeshi; Sugiyama, Takashi; Shibasaki, Susumu

    2016-09-01

    We perform an experiment of achromatic imaging with wavefront coding (WFC) using a wide-angle automobile lens. Our original annular phase mask for WFC was inserted to the lens, for which the difference between the focal positions at 400 nm and at 950 nm is 0.10 mm. We acquired images of objects using a WFC camera with this lens under the conditions of visible and infrared light. As a result, the effect of the removal of the chromatic aberration of the WFC system was successfully determined. Moreover, we fabricated a demonstration set assuming the use of a night vision camera in an automobile and showed the effect of the WFC system.

  20. Sparse spike coding : applications of neuroscience to the processing of natural images

    NASA Astrophysics Data System (ADS)

    Perrinet, Laurent U.

    2008-04-01

    If modern computers are sometimes superior to cognition in specialized tasks such as playing chess or browsing a large database, they can't beat the efficiency of biological vision for such simple tasks as recognizing a relative or following an object in a complex background. We present in this paper our attempt at outlining the dynamical, parallel and event-based representation for vision in the architecture of the central nervous system. We illustrate this by showing that in a signal matching framework, a L/LN (linear/non-linear) cascade may efficiently transform a sensory signal into a neural spiking signal, and we apply this framework to a model retina. However, this code becomes redundant when using an over-complete basis, as is necessary for modeling the primary visual cortex: we therefore optimize the efficiency cost by increasing the sparseness of the code. This is implemented by propagating and canceling redundant information using lateral interactions. We compare the efficiency of this representation in terms of compression, that is, the reconstruction quality as a function of the coding length. This corresponds to a modification of the Matching Pursuit algorithm where the ArgMax function is optimized for competition, or Competition Optimized Matching Pursuit (COMP). We particularly focus on bridging neuroscience and image processing and on the advantages of such an interdisciplinary approach.
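    As a reference point for the COMP variant described above, plain Matching Pursuit — whose ArgMax selection step is what COMP modifies — can be sketched in a few lines. The dictionary and signal here are synthetic:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=20):
    """Greedy MP: repeatedly pick the unit-norm atom best correlated with
    the residual (the ArgMax step) and subtract its projection."""
    residual = np.array(signal, dtype=float)
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(n_iter):
        corr = dictionary @ residual           # correlations with each atom
        k = int(np.argmax(np.abs(corr)))       # the ArgMax selection
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[k]
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(32, 16))
D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm atoms (rows)
x = 3.0 * D[5] - 2.0 * D[17]                   # sparse synthetic signal
coeffs, residual = matching_pursuit(x, D)
```

The residual norm decreases monotonically with each pursuit step; COMP replaces the plain ArgMax with a competition optimized via lateral interactions.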

  1. Magnetic Resonance Image Phantom Code System to Calibrate in vivo Measurement Systems.

    SciTech Connect

    HICKMAN, DAVE

    1997-07-17

    Version 00 MRIPP provides relative calibration factors for the in vivo measurement of internally deposited photon emitting radionuclides within the human body. The code includes a database of human anthropometric structures (phantoms) that were constructed from whole body Magnetic Resonance Images. The database contains a large variety of human images with varying anatomical structure. Correction factors are obtained using Monte Carlo transport of photons through the voxel geometry of the phantom. Correction factors provided by MRIPP allow users of in vivo measurement systems (e.g., whole body counters) to calibrate these systems with simple sources and obtain subject specific calibrations. Note that the capability to format MRI data for use with this system is not included; therefore, one must use the phantom data included in this package. MRIPP provides a simple interface to perform Monte Carlo simulation of photon transport through the human body. MRIPP also provides anthropometric information (e.g., height, weight, etc.) for individuals used to generate the phantom database. A modified Voxel version of the Los Alamos National Laboratory MCNP4A code is used for the Monte Carlo simulation. The Voxel version Fortran patch to MCNP4 and MCNP4A (Monte Carlo N-Particle transport simulation) and the MCNP executable are included in this distribution, but the MCNP Fortran source is not included. It was distributed by RSICC as CCC-200 but is now obsoleted by the current release MCNP4B.

  2. Cellular-resolution population imaging reveals robust sparse coding in the Drosophila Mushroom Body

    PubMed Central

    Honegger, Kyle S.; Campbell, Robert A. A.; Turner, Glenn C.

    2011-01-01

    Sensory stimuli are represented in the brain by the activity of populations of neurons. In most biological systems, studying population coding is challenging since only a tiny proportion of cells can be recorded simultaneously. Here we used 2-photon imaging to record neural activity in the relatively simple Drosophila mushroom body (MB), an area involved in olfactory learning and memory. Using the highly sensitive calcium indicator, GCaMP3, we simultaneously monitored the activity of >100 MB neurons in vivo (about 5% of the total population). The MB is thought to encode odors in sparse patterns of activity, but the code has yet to be explored either on a population level or with a wide variety of stimuli. We therefore imaged responses to odors chosen to evaluate the robustness of sparse representations. Different odors activated distinct patterns of MB neurons, however we found no evidence for spatial organization of neurons by either response probability or odor tuning within the cell body layer. The degree of sparseness was consistent across a wide range of stimuli, from monomolecular odors to artificial blends and even complex natural smells. Sparseness was mainly invariant across concentrations, largely because of the influence of recent odor experience. Finally, in contrast to sensory processing in other systems, no response features distinguished natural stimuli from monomolecular odors. Our results indicate that the fundamental feature of odor processing in the MB is to create sparse stimulus representations in a format that facilitates arbitrary associations between odor and punishment or reward. PMID:21849538
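    The degree of sparseness discussed above can be quantified in several ways; one standard choice (not necessarily the exact measure used in the paper) is the Treves-Rolls population sparseness:

```python
import numpy as np

def population_sparseness(responses):
    """Treves-Rolls sparseness: (mean r)^2 / mean(r^2).
    Near 1/N when a single cell fires; 1.0 when all fire equally."""
    r = np.asarray(responses, float)
    return r.mean() ** 2 / (r ** 2).mean()

dense = np.ones(100)      # every cell responds equally
sparse = np.zeros(100)
sparse[0] = 1.0           # a single active cell
```

Here `population_sparseness(dense)` is 1.0, while the single-active-cell vector gives 1/N = 0.01, so lower values indicate sparser codes.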

  4. Relationship between Fractal Dimension and Agreeability of Facial Imagery

    NASA Astrophysics Data System (ADS)

    Oyama-Higa, Mayumi; Miao, Tiejun; Ito, Tasuo

    2007-11-01

    Why do people feel happier and better, or equivalently empathize more, with images of smiling faces than with expressionless ones? To understand the essential factors underlying the relation between imagery and feelings, we conducted an experiment in which 84 subjects were asked to rate the agreeability of expressionless and smiling facial images taken from 23 young persons of whom the subjects had no prior knowledge. Images were presented one at a time to each subject, who ranked agreeability on a scale from 1 to 10. Fractal dimensions of the facial images were obtained in order to characterize the complexity of the imagery, using two types of fractal analysis, i.e., planar and cubic analysis methods. The results show a significant difference in the fractal dimension values between expressionless faces and smiling ones. Furthermore, we found a good correlation between the degree of agreeability and fractal dimensions, implying that the fractal dimension optically obtained in relation to complexity in imagery information is useful for characterizing the psychological processes of cognition and awareness.

  5. Lossy cutset coding of bilevel images based on Markov random fields.

    PubMed

    Reyes, Matthew G; Neuhoff, David L; Pappas, Thrasyvoulos N

    2014-04-01

    An effective, low complexity method for lossy compression of scenic bilevel images, called lossy cutset coding, is proposed based on a Markov random field model. It operates by losslessly encoding pixels in a square grid of lines, which is a cutset with respect to a Markov random field model, and preserves key structural information, such as borders between black and white regions. Relying on the Markov random field model, the decoder takes a MAP approach to reconstructing the interior of each grid block from the pixels on its boundary, thereby creating a piecewise smooth image that is consistent with the encoded grid pixels. The MAP rule, which reduces to finding the block interiors with fewest black-white transitions, is directly implementable for the most commonly occurring block boundaries, thereby avoiding the need for brute force or iterative solutions. Experimental results demonstrate that the new method is computationally simple, outperforms the current lossy compression technique most suited to scenic bilevel images, and provides substantially lower rates than lossless techniques, e.g., JBIG, with little loss in perceived image quality.

  6. Image subband coding using an information-theoretic subband splitting criterion

    NASA Astrophysics Data System (ADS)

    Bayazit, Ulug; Pearlman, William A.

    1995-03-01

    It has been proved recently that for Gaussian sources with memory an ideal subband split will produce a coding gain for scalar or vector quantization of the subbands. Following the methodology of the proofs, we outline a method for successively splitting the subbands of a source, one at a time to obtain the largest coding gain. The subband with the largest theoretical rate reduction (TRR) is determined and split at each step of the decomposition process. The TRR is the difference between the rate in optimal encoding of N-tuples from a Gaussian source (or subband) and the rate for the same encoding of its subband decomposition. The TRR is a monotone increasing function of a so-called spectral flatness ratio, which involves the products of the eigenvalues of the source (subband) and subband decomposition covariance matrices of order N. These eigenvalues are estimated by the variances of the Discrete Cosine Transform, which approximates those of the optimal Karhunen-Loeve Transform. After the subband decomposition hierarchy or tree is determined through the criterion of maximal TRR, each subband is encoded with a variable rate entropy constrained vector quantizer. Optimal rate allocation to subbands is done with the BFOS algorithm which does not require any source modelling. We demonstrate the benefit of using the criterion by comparing coding results on a two-level low-pass pyramidal decomposition with coding results on a two-level decomposition obtained using the criterion. For 60 MCFD (Motion Compensated Frame Difference) frames of the Salesman sequence an average rate-distortion advantage of 0.73 dB and 0.02 bpp and for 30 FD (Frame Difference) frames of Caltrain image sequence an average rate-distortion advantage of 0.41 dB and 0.013 bpp are obtained with the optimal decomposition over low-pass pyramidal decomposition.
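    The spectral flatness ratio driving the TRR can be estimated directly from DCT coefficient variances. A sketch under simplified assumptions — an AR(1) source with correlation 0.95 standing in for "a Gaussian source with memory," and block length 8; none of these values come from the paper:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis (the abstract's stand-in for the KLT)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def spectral_flatness(variances):
    """Geometric over arithmetic mean of coefficient variances; values far
    below 1 indicate a large theoretical rate reduction from splitting."""
    v = np.asarray(variances, float)
    return np.exp(np.log(v).mean()) / v.mean()

# AR(1) source with strong memory: low flatness, hence a high coding gain.
rng = np.random.default_rng(0)
n, N = 8, 4000
stream = np.empty(n * N)
s = 0.0
for t in range(n * N):
    s = 0.95 * s + rng.normal()
    stream[t] = s
coeffs = stream.reshape(N, n) @ dct_matrix(n).T
sfr = spectral_flatness((coeffs ** 2).mean(axis=0))
```

A memoryless (white) source would give a flatness near 1 and hence no gain from splitting; the strongly correlated source here gives a ratio well below 1.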

  7. Nonlinear Spike-And-Slab Sparse Coding for Interpretable Image Encoding

    PubMed Central

    Shelton, Jacquelyn A.; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg

    2015-01-01

    Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros for e.g. the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process. PMID:25954947

  8. Robust image transmission using a new joint source channel coding algorithm and dual adaptive OFDM

    NASA Astrophysics Data System (ADS)

    Farshchian, Masoud; Cho, Sungdae; Pearlman, William A.

    2004-01-01

    In this paper we consider the problem of robust image coding and packetization for the purpose of communications over slow fading frequency selective channels and channels with a shaped spectrum like those of digital subscriber lines (DSL). Towards this end, a novel and analytically based joint source channel coding (JSCC) algorithm to assign unequal error protection is presented. Under a block budget constraint, the image bitstream is de-multiplexed into two classes with different error responses. The algorithm assigns unequal error protection (UEP) in a way to minimize the expected mean square error (MSE) at the receiver while minimizing the probability of catastrophic failure. In order to minimize the expected mean square error at the receiver, the algorithm assigns unequal protection to the value bit class (VBC) stream. In order to minimize the probability of catastrophic error, which is a characteristic of progressive image coders, the algorithm assigns more protection to the location bit class (LBC) stream than the VBC stream. Besides having the advantage of being analytical and also numerically solvable, the algorithm is based on a new formula developed to estimate the distortion rate (D-R) curve for the VBC portion of SPIHT. The major advantage of our technique is that the worst case instantaneous minimum peak signal to noise ratio (PSNR) does not differ greatly from the average MSE, while this is not the case for the optimal single stream (UEP) system. Although both the average PSNR of our method and that of the optimal single stream UEP are about the same, our scheme does not suffer erratic behavior because we have made the probability of catastrophic error arbitrarily small. The coded image is sent via orthogonal frequency division multiplexing (OFDM), which is a known and increasingly popular modulation scheme to combat ISI (Inter Symbol Interference) and impulsive noise. Using dual adaptive energy OFDM, we use the minimum energy necessary to send each bit stream at a

  9. Application of fractal dimensions to study the structure of flocs formed in lime softening process.

    PubMed

    Vahedi, Arman; Gorczyca, Beata

    2011-01-01

    The use of fractal dimensions to study the internal structure and settling of flocs formed in lime softening process was investigated. Fractal dimensions of flocs were measured directly on floc images and indirectly from their settling velocity. An optical microscope with a motorized stage was used to measure the fractal dimensions of lime softening flocs directly on their images in 2 and 3D space. The directly determined fractal dimensions of the lime softening flocs were 1.11-1.25 for floc boundary, 1.82-1.99 for cross-sectional area and 2.6-2.99 for floc volume. The fractal dimension determined indirectly from the flocs settling rates was 1.87 that was different from the 3D fractal dimension determined directly on floc images. This discrepancy is due to the following incorrect assumptions used for fractal dimensions determined from floc settling rates: linear relationship between square settling velocity and floc size (Stokes' Law), Euclidean relationship between floc size and volume, constant fractal dimensions and one primary particle size describing entire population of flocs. Floc settling model incorporating variable floc fractal dimensions as well as variable primary particle size was found to describe the settling velocity of large (>50 μm) lime softening flocs better than Stokes' Law. Settling velocities of smaller flocs (<50 μm) could still be quite well predicted by Stokes' Law. The variation of fractal dimensions with lime floc size in this study indicated that two mechanisms are involved in the formation of these flocs: cluster-cluster aggregation for small flocs (<50 μm) and diffusion-limited aggregation for large flocs (>50 μm). Therefore, the relationship between the floc fractal dimension and floc size appears to be determined by floc formation mechanisms. PMID:20937512
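    The modification to Stokes' Law described above can be written down compactly: if the floc's excess density decays with size as (d/d0)^(D-3), the settling velocity scales as d^(D-1) instead of d^2. A sketch with illustrative parameter values — the primary particle size d0, densities, and viscosity are assumptions, not the paper's fitted values; only D = 1.87 is taken from the abstract:

```python
def stokes_velocity(d, rho_p=1100.0, rho_w=1000.0, mu=1.0e-3, g=9.81):
    """Stokes settling velocity (m/s) of a solid sphere of diameter d (m)."""
    return g * (rho_p - rho_w) * d ** 2 / (18.0 * mu)

def fractal_floc_velocity(d, d0=1.0e-6, D=1.87, rho_p=1100.0, rho_w=1000.0,
                          mu=1.0e-3, g=9.81):
    """Same force balance, but the floc's excess density decays as
    (d/d0)**(D-3), so velocity grows as d**(D-1) rather than d**2."""
    excess = (rho_p - rho_w) * (d / d0) ** (D - 3.0)
    return g * excess * d ** 2 / (18.0 * mu)

v_sphere = stokes_velocity(100e-6)        # 100-micron solid sphere
v_floc = fractal_floc_velocity(100e-6)    # 100-micron fractal floc
```

Because D < 3, a large floc is mostly water and settles far more slowly than a solid sphere of the same size, which is the discrepancy the abstract reports for flocs above 50 μm.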

  10. Fractals analysis of cardiac arrhythmias.

    PubMed

    Saeed, Mohammed

    2005-09-01

    Heart rhythms are generated by complex self-regulating systems governed by the laws of chaos. Consequently, heart rhythms have fractal organization, characterized by self-similar dynamics with long-range order operating over multiple time scales. This allows for the self-organization and adaptability of heart rhythms under stress. Breakdown of this fractal organization into excessive order or uncorrelated randomness leads to a less-adaptable system, characteristic of aging and disease. With the tools of nonlinear dynamics, this fractal breakdown can be quantified with potential applications to diagnostic and prognostic clinical assessment. In this paper, I review the methodologies for fractal analysis of cardiac rhythms and the current literature on their applications in the clinical context. A brief overview of the basic mathematics of fractals is also included. Furthermore, I illustrate the usefulness of these powerful tools to clinical medicine by describing a novel noninvasive technique to monitor drug therapy in atrial fibrillation.
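    One of the standard methodologies reviewed in this literature is detrended fluctuation analysis (DFA), which quantifies fractal long-range correlations in an RR-interval series as a scaling exponent alpha. A minimal sketch, using white noise as input (for which alpha should come out near 0.5, versus near 1.0 for healthy 1/f-like heart-rate fluctuations):

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """DFA scaling exponent: slope of log F(n) versus log n.
    alpha ~ 0.5 for uncorrelated noise, ~1.0 for 1/f fluctuations."""
    x = np.asarray(x, float)
    profile = np.cumsum(x - x.mean())             # integrated series
    fluct = []
    for n in scales:
        m = len(profile) // n
        segs = profile[:m * n].reshape(m, n)
        t = np.arange(n)
        sq = 0.0
        for seg in segs:
            a, b = np.polyfit(t, seg, 1)          # linear detrend per window
            sq += ((seg - (a * t + b)) ** 2).mean()
        fluct.append(np.sqrt(sq / m))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(0)
alpha = dfa_alpha(rng.normal(size=4096))
```

Applied to RR intervals, breakdown toward alpha near 0.5 (uncorrelated) or 1.5 (overly smooth) is the loss of fractal organization the review associates with aging and disease.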

  11. Terahertz spectroscopy of plasmonic fractals.

    PubMed

    Agrawal, A; Matsui, T; Zhu, W; Nahata, A; Vardeny, Z V

    2009-03-20

    We use terahertz time-domain spectroscopy to study the transmission properties of metallic films perforated with aperture arrays having deterministic or stochastic fractal morphologies ("plasmonic fractals"), and compare them with random aperture arrays. All of the measured plasmonic fractals show transmission resonances and antiresonances at frequencies that correspond to prominent features in their structure factors in k space. However, in sharp contrast to periodic aperture arrays, the resonant transmission enhancement decreases with increasing array size. This property is explained using a density-density correlation function, and is utilized for determining the underlying fractal dimensionality, D(<2). Furthermore, a sum rule for the transmission resonances and antiresonances in plasmonic fractals relative to the transmission of the corresponding random aperture arrays is obtained, and is shown to be universal.
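    The link between transmission features and structure-factor peaks can be imitated numerically: the structure factor of an aperture pattern is the squared magnitude of its 2D Fourier transform. A sketch using a random aperture array (the paper's baseline; a fractal mask would simply replace the pattern), with an illustrative size and hole density:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
# Random aperture array: each site is a hole with probability 5%.
apertures = (rng.random((n, n)) < 0.05).astype(float)

# Structure factor S(k) = |FFT|^2 / (number of apertures), DC at the center.
S = np.abs(np.fft.fftshift(np.fft.fft2(apertures))) ** 2 / apertures.sum()
```

A periodic array would show sharp Bragg peaks in S(k) (resonant transmission); the random array is flat apart from the DC term, consistent with its featureless transmission.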

  12. Measuring border irregularities of skin lesions using fractal dimensions

    NASA Astrophysics Data System (ADS)

    Ng, Vincent T. Y.; Lee, Tim K.

    1996-09-01

    Malignant melanoma is the most common cancer in people less than 35 years of age, and incidence rates are increasing by approximately 5 percent per annum in many white populations, including British Columbia, Canada. In 1994, a clinical study was established to digitize melanocytic lesions under a controlled environment. Lesions are digitized from patients who are referred to the Colored Pigment Lesion Clinic at the University of British Columbia. In this paper, we investigate how to use fractal dimensions (FDs) to measure the irregularity of a skin lesion. In a previous project, we experimented with 6 different methods of calculating fractal dimensions on a small number of images of skin lesions, and the simple box-counting method performed the best. However, that method did not exploit the intensity information of the images. With the new set of images, digitized under the controlled environment, we utilize the differential box-counting method to exploit such information. Four FD measures, including the direct FD, the horizontal and vertical smoothing FDs, and the multifractal dimension of order two, are calculated from the original color images. In addition, these 4 FD features are repeatedly calculated for the blue band of the images. This paper reports the different features obtained through the fractal dimension calculations and compares their discriminating power in the diagnosis of skin lesion images.
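    The plain box-counting estimate mentioned above can be sketched as follows. This is a generic illustration on a binary mask, not the authors' differential box-counting implementation (which also uses pixel intensities); the box sizes are illustrative.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary image by box counting."""
    counts = []
    for s in sizes:
        # Tile the image with s-by-s boxes, cropping any remainder,
        # and count boxes that contain at least one foreground pixel.
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    # The slope of log N(s) versus log(1/s) gives the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

    As a sanity check, a filled square yields a dimension near 2 and a single straight line of pixels yields a dimension near 1.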

  13. High-performance lossless and progressive image compression based on an improved integer lifting scheme Rice coding algorithm

    NASA Astrophysics Data System (ADS)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF (2,2) wavelet lifting scheme is investigated. Our experiments show that lossless image compression performance is much better than Huffman, Zip, lossless JPEG, and RAR, and slightly better than (or equal to) the well-known SPIHT: the lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7%, and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, improving its time efficiency by 162%; the decoder is about 12.3 times faster, an improvement of about 148%. Instead of requiring the largest number of wavelet transform levels, this algorithm achieves high coding efficiency when the number of wavelet transform levels is larger than 3. For source models with distributions similar to the Laplacian, it improves coding efficiency and realizes progressive transmission coding and decoding.
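    The underlying Golomb-Rice code itself is standard: each nonnegative value is split by a power-of-two parameter 2^k into a quotient, sent in unary, and a k-bit remainder. The sketch below illustrates that baseline code only, not the paper's modified algorithm.

```python
def rice_encode(values, k):
    """Rice code: unary quotient (1s terminated by 0) plus k-bit remainder."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0]                               # unary quotient
        bits += [(r >> i) & 1 for i in reversed(range(k))]  # remainder, MSB first
    return bits

def rice_decode(bits, k, count):
    """Decode `count` values from a Rice-coded bit list."""
    values, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:       # read the unary quotient
            q += 1
            pos += 1
        pos += 1                    # skip the terminating 0
        r = 0
        for _ in range(k):          # read the k-bit remainder
            r = (r << 1) | bits[pos]
            pos += 1
        values.append((q << k) | r)
    return values
```

    The code is efficient precisely for the geometrically decaying (Laplacian-like) residual distributions mentioned in the abstract, since small values get short codewords.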

  14. Design and implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples.

    PubMed

    Lakshmanan, Manu N; Greenberg, Joel A; Samei, Ehsan; Kapadia, Anuj J

    2016-01-01

    A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice. PMID:26962543

  15. Self-organized one-atom thick fractal nanoclusters via field-induced atomic transport

    NASA Astrophysics Data System (ADS)

    Batabyal, R.; Mahato, J. C.; Das, Debolina; Roy, Anupam; Dev, B. N.

    2013-08-01

    We report on the growth of monolayer-thick fractal nanostructures of Ag on flat-top Ag islands grown on Si(111). Upon application of a voltage pulse at an edge of a flat-top Ag island from a scanning tunneling microscope tip, Ag atoms climb from the edge onto the top of the island. These atoms aggregate to form precisely one-atom-thick nanostructures of fractal nature. The fractal (Hausdorff) dimension, DH = 1.75 ± 0.05, of this nanostructure has been determined by analyzing the morphology of the growing nanocluster, imaged by scanning tunneling microscopy following the application of the voltage pulse. This value of the fractal dimension is consistent with the diffusion-limited aggregation (DLA) model. We also determined two other fractal dimensions based on the perimeter-radius-of-gyration (DP) and perimeter-area (D'P) relationships. Simulations of the DLA process with varying sticking probability lead to different cluster morphologies [P. Meakin, Phys. Rev. A 27, 1495 (1983)]; however, the value of DH is insensitive to this difference in morphology. We suggest that the morphology can be characterized by additional fractal dimension(s) DP and/or D'P, besides DH. We also show that within the DLA process DP = DH [C. Amitrano et al., Phys. Rev. A 40, 1713 (1989)] is only a special case; in general, DP and DH can be unequal. Characterization of fractal morphology is important for fractals in nanoelectronics, as fractal morphology would determine the electron transport behavior.
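    The DLA model invoked above can be sketched with a minimal on-lattice simulation: walkers are released far from a seed, random-walk until they touch the cluster, and stick. The grid size, boundary handling, and sticking probability of 1 below are illustrative assumptions, not the authors' simulation setup.

```python
import random

def dla_cluster(n_particles, size=31, seed=0):
    """Minimal on-lattice diffusion-limited aggregation (sticking probability 1)."""
    rng = random.Random(seed)
    c = size // 2
    cluster = {(c, c)}                    # seed particle at the centre
    for _ in range(n_particles):
        # Launch a random walker from a random cell on the grid boundary.
        edge = rng.randrange(size)
        x, y = rng.choice([(0, edge), (size - 1, edge),
                           (edge, 0), (edge, size - 1)])
        while True:
            # Stick as soon as any 4-neighbour belongs to the cluster.
            if any((x + dx, y + dy) in cluster
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                cluster.add((x, y))
                break
            dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            # Reflecting boundaries keep the walker on the grid.
            x = min(max(x + dx, 0), size - 1)
            y = min(max(y + dy, 0), size - 1)
    return cluster
```

    Measuring the Hausdorff dimension of large clusters grown this way gives values near the DH = 1.75 reported above for 2-D DLA.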

  16. Retinal Vascular Fractals and Cognitive Impairment

    PubMed Central

    Ong, Yi-Ting; Hilal, Saima; Cheung, Carol Yim-lui; Xu, Xin; Chen, Christopher; Venketasubramanian, Narayanaswamy; Wong, Tien Yin; Ikram, Mohammad Kamran

    2014-01-01

    Background Retinal microvascular network changes have been found in patients with age-related brain diseases such as stroke and dementia including Alzheimer's disease. We examine whether retinal microvascular network changes are also present in preclinical stages of dementia. Methods This is a cross-sectional study of 300 Chinese participants (age: ≥60 years) from the ongoing Epidemiology of Dementia in Singapore study who underwent detailed clinical examinations including retinal photography, brain imaging and neuropsychological testing. Retinal vascular parameters were assessed from optic disc-centered photographs using a semiautomated program. A comprehensive neuropsychological battery was administered, and cognitive function was summarized as composite and domain-specific Z-scores. Cognitive impairment no dementia (CIND) and dementia were diagnosed according to standard diagnostic criteria. Results Among 268 eligible nondemented participants, 78 subjects were categorized as CIND-mild and 69 as CIND-moderate. In multivariable adjusted models, reduced retinal arteriolar and venular fractal dimensions were associated with an increased risk of CIND-mild and CIND-moderate. Reduced fractal dimensions were associated with poorer cognitive performance globally and in the specific domains of verbal memory, visuoconstruction and visuomotor speed. Conclusion A sparser retinal microvascular network, represented by reduced arteriolar and venular fractal dimensions, was associated with cognitive impairment, suggesting that early microvascular damage may be present in preclinical stages of dementia. PMID:25298774

  17. Small animal imaging by single photon emission using pinhole and coded aperture collimation

    SciTech Connect

    Garibaldi, F.; Accorsi, R.; Cinti, M.N.; Colilli, S.; Cusanno, F.; De Vincentis, G.; Fortuna, A.; Girolami, B.; Giuliani, F.; Gricia, M.; Lanza, R.; Loizzo, A.; Loizzo, S.; Lucentini, M.; Majewski, S.; Santavenere, F.; Pani, R.; Pellegrini, R.; Signore, A.; Scopinaro, F.

    2005-06-01

    The aim of this paper is to investigate the basic properties and limits of small animal imaging systems based on single photon detectors. Detectors for radio imaging of small animals are challenging because of the very high spatial resolution needed, possibly coupled with high efficiency to allow dynamic studies. These performances are hardly attainable with the single photon technique because of the collimator, which limits both spatial resolution and sensitivity. In this paper we describe a simple desktop detector based on a pixellated NaI(Tl) scintillator array coupled with a pinhole collimator and a PSPMT, the Hamamatsu R2486. The limits of such systems, as well as ways to overcome them, are shown. In fact, better light sampling at the anode level would allow better pixel identification for a higher number of pixels, which is one of the parameters defining image quality; spatial resolution would also improve. The performance of this layout is compared with that of others using PSPMTs that differ from the R2486 in anode-level light sampling and in area. We show how a further step, namely the substitution of the pinhole collimator with a coded aperture, allows a great improvement in system sensitivity while maintaining very good, possibly submillimetric, spatial resolution. Calculations and simulations show that sensitivity would improve by a factor of 50.

  18. Selective error detection for error-resilient wavelet-based image coding.

    PubMed

    Karam, Lina J; Lam, Tuyet-Trang

    2007-12-01

    This paper introduces the concept of a similarity check function for error-resilient multimedia data transmission. The proposed similarity check function provides information about the effects of corrupted data on the quality of the reconstructed image. The degree of data corruption is measured by the similarity check function at the receiver, without explicit knowledge of the original source data. The design of a perceptual similarity check function is presented for wavelet-based coders such as the JPEG2000 standard, and used with a proposed "progressive similarity-based ARQ" (ProS-ARQ) scheme to significantly decrease the retransmission rate of corrupted data while maintaining very good visual quality of images transmitted over noisy channels. Simulation results with JPEG2000-coded images transmitted over the Binary Symmetric Channel, show that the proposed ProS-ARQ scheme significantly reduces the number of retransmissions as compared to conventional ARQ-based schemes. The presented results also show that, for the same number of retransmitted data packets, the proposed ProS-ARQ scheme can achieve significantly higher PSNR and better visual quality as compared to the selective-repeat ARQ scheme.

  19. Modes of Visual Recognition and Perceptually Relevant Sketch-based Coding for Images

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1991-01-01

    A review of visual recognition studies is used to define two levels of information requirements. These two levels are related to two primary subdivisions of the spatial frequency domain of images and reflect two distinctly different physical properties of arbitrary scenes. In particular, pathologies in recognition due to cerebral dysfunction point to a more complete split into two major types of processing: high spatial frequency edge-based recognition vs. low spatial frequency lightness (and color) based recognition. The former is more central and general while the latter is more specific and is necessary for certain special tasks. The two modes of recognition can also be distinguished on the basis of physical scene properties: the highly localized edges associated with reflectance and sharp topographic transitions vs. smooth topographic undulation. The extreme case of heavily abstracted images is pursued to gain an understanding of the minimal information required to support both modes of recognition. Here the intention is to define the semantic core of transmission. This central core of processing can then be fleshed out with additional image information and coding and rendering techniques.

  20. Inverted fractal analysis of TiOx thin layers grown by inverse pulsed laser deposition

    NASA Astrophysics Data System (ADS)

    Égerházi, L.; Smausz, T.; Bari, F.

    2013-08-01

    Inverted fractal analysis (IFA), a method developed for fractal analysis of scanning electron microscopy images of cauliflower-like thin films is presented through the example of layers grown by inverse pulsed laser deposition (IPLD). IFA uses the integrated fractal analysis module (FracLac) of the image processing software ImageJ, and an objective thresholding routine that preserves the characteristic features of the images, independently of their brightness and contrast. IFA revealed fD = 1.83 ± 0.01 for TiOx layers grown at 5-50 Pa background pressures. For a series of images, this result was verified by evaluating the scaling of the number of still resolved features on the film, counted manually. The value of fD not only confirms the fractal structure of TiOx IPLD thin films, but also suggests that the aggregation of plasma species in the gas atmosphere may have only limited contribution to the deposition.

  1. Fractals, Coherence and Brain Dynamics

    NASA Astrophysics Data System (ADS)

    Vitiello, Giuseppe

    2010-11-01

    I show that the self-similarity property of deterministic fractals provides a direct connection with the space of the entire analytical functions. Fractals are thus described in terms of coherent states in the Fock-Bargmann representation. Conversely, my discussion also provides insights on the geometrical properties of coherent states: it allows us to recognize, in some specific sense, fractal properties of coherent states. In particular, the relation is exhibited between fractals and q-deformed coherent states. The connection with the squeezed coherent states is also displayed. In this connection, the non-commutative geometry arising from the fractal relation with squeezed coherent states is discussed and the fractal spectral properties are identified. I also briefly discuss the description of neuro-phenomenological data in terms of squeezed coherent states provided by the dissipative model of brain and consider the fact that laboratory observations have shown evidence that self-similarity characterizes the brain background activity. This suggests that a connection can be established between brain dynamics and the fractal self-similarity properties on the basis of the relation discussed in this report between fractals and squeezed coherent states. Finally, I do not consider in this paper the so-called random fractals, namely those fractals obtained by randomization processes introduced in their iterative generation. Since self-similarity is still a characterizing property in many of such random fractals, my conjecture is that also in such cases there must exist a connection with the coherent state algebraic structure. In condensed matter physics, in many cases the generation by the microscopic dynamics of some kind of coherent states is involved in the process of the emergence of mesoscopic/macroscopic patterns. The discussion presented in this paper suggests that also fractal generation may provide an example of emergence of global features, namely long range

  2. Illumination-compensated non-contact imaging photoplethysmography via dual-mode temporally coded illumination

    NASA Astrophysics Data System (ADS)

    Amelard, Robert; Scharfenberger, Christian; Wong, Alexander; Clausi, David A.

    2015-03-01

    Non-contact camera-based imaging photoplethysmography (iPPG) is useful for measuring heart rate in conditions where contact devices are problematic due to issues such as mobility, comfort, and sanitation. Existing iPPG methods analyse the light-tissue interaction of either active or passive (ambient) illumination. Many active iPPG methods assume the incident ambient light is negligible to the active illumination, resulting in high power requirements, while many passive iPPG methods assume near-constant ambient conditions. These assumptions can only be achieved in environments with controlled illumination and thus constrain the use of such devices. To increase the number of possible applications of iPPG devices, we propose a dual-mode active iPPG system that is robust to changes in ambient illumination variations. Our system uses a temporally-coded illumination sequence that is synchronized with the camera to measure both active and ambient illumination interaction for determining heart rate. By subtracting the ambient contribution, the remaining illumination data can be attributed to the controlled illuminant. Our device comprises a camera and an LED illuminant controlled by a microcontroller. The microcontroller drives the temporal code via synchronizing the frame captures and illumination time at the hardware level. By simulating changes in ambient light conditions, experimental results show our device is able to assess heart rate accurately in challenging lighting conditions. By varying the temporal code, we demonstrate the trade-off between camera frame rate and ambient light compensation for optimal blood pulse detection.
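    The core ambient-compensation step, subtracting the ambient-only frames identified by the temporal code from the actively lit frames, can be sketched as follows. This is a simplified illustration under the paper's near-constant-ambient-per-cycle assumption; the function and variable names are hypothetical.

```python
import numpy as np

def active_component(frames, code):
    """Recover the active-illumination signal from a temporally coded
    frame sequence. code[i] == 1 marks frames captured with the LED on."""
    frames = np.asarray(frames, dtype=float)
    code = np.asarray(code, dtype=bool)
    # The mean of the ambient-only frames estimates the background light.
    ambient = frames[~code].mean(axis=0)
    # Subtracting it from the lit frames leaves the active contribution,
    # from which the blood pulse signal can then be extracted.
    return frames[code] - ambient
```

    Denser codes track faster ambient changes at the cost of fewer active frames, which is the frame-rate/compensation trade-off noted in the abstract.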

  3. Optimized and secure technique for multiplexing QR code images of single characters: application to noiseless messages retrieval

    NASA Astrophysics Data System (ADS)

    Trejos, Sorayda; Fredy Barrera, John; Torroba, Roberto

    2015-08-01

    We present for the first time an optical encrypting-decrypting protocol for recovering messages without speckle noise. This is a digital holographic technique using a 2f scheme to process QR codes entries. In the procedure, letters used to compose eventual messages are individually converted into a QR code, and then each QR code is divided into portions. Through a holographic technique, we store each processed portion. After filtering and repositioning, we add all processed data to create a single pack, thus simplifying the handling and recovery of multiple QR code images, representing the first multiplexing procedure applied to processed QR codes. All QR codes are recovered in a single step and in the same plane, showing neither cross-talk nor noise problems as in other methods. Experiments have been conducted using an interferometric configuration and comparisons between unprocessed and recovered QR codes have been performed, showing differences between them due to the involved processing. Recovered QR codes can be successfully scanned, thanks to their noise tolerance. Finally, the appropriate sequence in the scanning of the recovered QR codes brings a noiseless retrieved message. Additionally, to procure maximum security, the multiplexed pack could be multiplied by a digital diffuser as to encrypt it. The encrypted pack is easily decoded by multiplying the multiplexing with the complex conjugate of the diffuser. As it is a digital operation, no noise is added. Therefore, this technique is threefold robust, involving multiplexing, encryption, and the need of a sequence to retrieve the outcome.

  4. Fractal texture analysis of the healing process after bone loss.

    PubMed

    Borowska, Marta; Szarmach, Janusz; Oczeretko, Edward

    2015-12-01

    We present a one-year radiological assessment of the treatment effectiveness of the guided bone regeneration (GBR) method in postresectal and postcystal bone loss cases. A group of 25 patients (17 females and 8 males) who underwent root resection with cystectomy was evaluated. A combination therapy of intraosseous deficits was used, consisting of bone augmentation with xenogenic material together with covering regenerative membranes and tight wound closure. The bone regeneration process was estimated by comparing images taken on the day of the surgery and 12 months later, by means of a Kodak RVG 6100 digital radiography set. The interpretation of a radiovisiographic image depends on the evaluation ability of the eye looking at it, which leaves a large margin of uncertainty. Therefore, several texture analysis techniques were developed and applied sequentially to the radiographic images. For each method, the results were the mean over the 25 images. These methods compute the fractal dimension (D), each one having its own theoretical basis. We used five techniques for calculating the fractal dimension: the power spectral density method, the triangular prism surface area method, the blanket method, the intensity difference scaling method, and variogram analysis. Our study showed a decrease of the fractal dimension during the healing process after bone loss. We also found evidence that various methods of calculating the fractal dimension give different results. During the healing process after bone loss, the surfaces of radiographic images became smooth. The results obtained show that our findings may be of great importance for diagnostic purposes.

  5. Conditional Entropy-Constrained Residual VQ with Application to Image Coding

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1996-01-01

    This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.

  6. Laser-based volumetric flow visualization by digital color imaging of a spectrally coded volume

    NASA Astrophysics Data System (ADS)

    McGregor, T. J.; Spence, D. J.; Coutts, D. W.

    2008-01-01

    We present the framework for volumetric laser-based flow visualization instrumentation using a spectrally coded volume to achieve three-component three-dimensional particle velocimetry. By delivering light from a frequency doubled Nd:YAG laser with an optical fiber, we exploit stimulated Raman scattering within the fiber to generate a continuum spanning the visible spectrum from 500 to 850 nm. We shape and disperse the continuum light to illuminate a measurement volume of 20 × 10 × 4 mm³, in which light sheets of differing spectral properties overlap to form an unambiguous color variation along the depth direction. Using a digital color camera we obtain images of particle fields in this volume. We extract the full spatial distribution of particles with depth inferred from particle color. This paper provides a proof of principle of this instrument, examining the spatial distribution of a static field and a spray field of water droplets ejected by the nozzle of an airbrush.

  7. Laser-based volumetric flow visualization by digital color imaging of a spectrally coded volume.

    PubMed

    McGregor, T J; Spence, D J; Coutts, D W

    2008-01-01

    We present the framework for volumetric laser-based flow visualization instrumentation using a spectrally coded volume to achieve three-component three-dimensional particle velocimetry. By delivering light from a frequency doubled Nd:YAG laser with an optical fiber, we exploit stimulated Raman scattering within the fiber to generate a continuum spanning the visible spectrum from 500 to 850 nm. We shape and disperse the continuum light to illuminate a measurement volume of 20 x 10 x 4 mm(3), in which light sheets of differing spectral properties overlap to form an unambiguous color variation along the depth direction. Using a digital color camera we obtain images of particle fields in this volume. We extract the full spatial distribution of particles with depth inferred from particle color. This paper provides a proof of principle of this instrument, examining the spatial distribution of a static field and a spray field of water droplets ejected by the nozzle of an airbrush.

  8. Combinatorial and chemotopic odorant coding in the zebrafish olfactory bulb visualized by optical imaging.

    PubMed

    Friedrich, R W; Korsching, S I

    1997-05-01

    Odors are thought to be represented by a distributed code across the glomerular modules in the olfactory bulb (OB). Here, we optically imaged presynaptic activity in glomerular modules of the zebrafish OB induced by a class of natural odorants (amino acids [AAs]) after labeling of primary afferents with a calcium-sensitive dye. AAs induce complex combinatorial patterns of active glomerular modules that are unique for different stimuli and concentrations. Quantitative analysis shows that defined molecular features of stimuli are correlated with activity in spatially confined groups of glomerular modules. These results provide direct evidence that identity and concentration of odorants are encoded by glomerular activity patterns and reveal a coarse chemotopic organization of the array of glomerular modules.

  9. The use of the fractional Fourier transform with coded excitation in ultrasound imaging.

    PubMed

    Bennett, Michael J; McLaughlin, Steve; Anderson, Tom; McDicken, Norman

    2006-04-01

    Medical ultrasound systems are limited by a tradeoff between axial resolution and the maximum imaging depth which may be achieved. The technique of coded excitation has been used extensively in the fields of RADAR and SONAR for some time, but has only relatively recently been exploited in the area of medical ultrasound. This technique is attractive because it allows the relationship between the pulse length and the maximum achievable spatial resolution to be changed. The work presented here explores the possibility of using the fractional Fourier transform as an effective means of processing signals received after the transmission of linear frequency modulated chirps. Results are presented which demonstrate that this technique is able to offer spatial resolutions similar to those obtained with a single-cycle-duration signal.
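    For context, the conventional alternative to the fractional Fourier approach described above is pulse compression by matched filtering: correlating the received trace with the transmitted linear FM chirp concentrates the long pulse's energy into a sharp peak. The sketch below illustrates that baseline step only; the chirp parameters are illustrative, not the paper's.

```python
import numpy as np

def linear_chirp(f0, f1, duration, fs):
    """Linear FM chirp: instantaneous frequency sweeps f0 -> f1 over `duration` s."""
    t = np.arange(int(duration * fs)) / fs
    return np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / duration * t ** 2))

def compress_chirp(received, chirp):
    """Pulse compression: correlate the received trace with the transmitted chirp.
    The output peaks at the sample where the echo begins."""
    return np.correlate(received, chirp, mode="valid")
```

    Because the compressed peak width is set by the chirp bandwidth rather than its duration, a long coded pulse can deliver more energy (hence depth) without the resolution penalty of an equally long uncoded pulse.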

  10. Trabecular Bone Mechanical Properties and Fractal Dimension

    NASA Technical Reports Server (NTRS)

    Hogan, Harry A.

    1996-01-01

    Countermeasures for reducing bone loss and muscle atrophy due to extended exposure to the microgravity environment of space are continuing to be developed and improved. An important component of this effort is finite element modeling of the lower extremity and spinal column. These models will permit analysis and evaluation specific to each individual and thereby provide more efficient and effective exercise protocols. Inflight countermeasures and post-flight rehabilitation can then be customized and targeted on a case-by-case basis. Recent Summer Faculty Fellowship participants have focused upon finite element mesh generation, muscle force estimation, and fractal calculations of trabecular bone microstructure. Methods have been developed for generating the three-dimensional geometry of the femur from serial section magnetic resonance images (MRI). The use of MRI as an imaging modality avoids excessive exposure to radiation associated with X-ray based methods. These images can also detect trabecular bone microstructure and architecture. The goal of the current research is to determine the degree to which the fractal dimension of trabecular architecture can be used to predict the mechanical properties of trabecular bone tissue. The elastic modulus and the ultimate strength (or strain) can then be estimated from non-invasive, non-radiating imaging and incorporated into the finite element models to more accurately represent the bone tissue of each individual of interest. Trabecular bone specimens from the proximal tibia are being studied in this first phase of the work. Detailed protocols and procedures have been developed for carrying test specimens through all of the steps of a multi-faceted test program. The test program begins with MRI and X-ray imaging of the whole bones before excising a smaller workpiece from the proximal tibia region. High resolution MRI scans are then made and the piece further cut into slabs (roughly 1 cm thick). The slabs are X-rayed again

  11. Fractality and entropic scaling in the chromosomal distribution of conserved noncoding elements in the human genome.

    PubMed

    Polychronopoulos, Dimitris; Athanasopoulou, Labrini; Almirantis, Yannis

    2016-06-15

    Conserved non-coding elements (CNEs) are defined using various degrees of sequence identity and thresholds of minimal length. Their conservation frequently exceeds that observed for protein-coding sequences. We explored the chromosomal distribution of different classes of CNEs in the human genome. We employed two methodologies, the scaling of block entropy and box-counting, with the aim of assessing the fractal characteristics of different CNE datasets. Both approaches converged to the conclusion that well-developed fractality is characteristic of elements that are either extremely conserved between species or are of ancient origin, i.e. conserved between distant organisms across evolution. Given that CNEs are often clustered around genes, we verified by appropriate gene masking that fractal-like patterns emerge even when elements found in proximity or inside genes are excluded. An evolutionary scenario is proposed, involving genomic events that might account for the fractal distribution of CNEs in the human genome as indicated through numerical simulations.

  12. Exterior dimension of fat fractals

    NASA Technical Reports Server (NTRS)

    Grebogi, C.; Mcdonald, S. W.; Ott, E.; Yorke, J. A.

    1985-01-01

    Geometric scaling properties of fat fractal sets (fractals with finite volume) are discussed and characterized via the introduction of a new dimension-like quantity which is called the exterior dimension. In addition, it is shown that the exterior dimension is related to the 'uncertainty exponent' previously used in studies of fractal basin boundaries, and it is shown how this connection can be exploited to determine the exterior dimension. Three illustrative applications are described, two in nonlinear dynamics and one dealing with blood flow in the body. Possible relevance to porous materials and ballistic driven aggregation is also noted.

  13. Fractal analysis: A new remote sensing tool for lava flows

    NASA Technical Reports Server (NTRS)

    Bruno, B. C.; Taylor, G. J.; Rowland, S. K.; Lucey, P. G.; Self, S.

    1992-01-01

    Many important quantitative parameters have been developed that relate to the rheology and eruption and emplacement mechanics of lavas. This research centers on developing additional, unique parameters, namely the fractal properties of lava flows, to add to this matrix of properties. There are several methods of calculating the fractal dimension of a lava flow margin. We use the 'structured walk' or 'divider' method. In this method, we measure the length of a given lava flow margin by walking rods of different lengths along the margin. Since smaller rod lengths traverse more of the small-scale features in the flow margin, the apparent length of the flow outline increases as the length of the measuring rod decreases. By plotting the apparent length of the flow outline as a function of the length of the measuring rod on a log-log plot, fractal behavior can be determined. A linear trend on a log-log plot indicates that the data are fractal. The fractal dimension can then be calculated from the slope of the linear least squares fit line to the data. We use this 'structured walk' method to calculate the fractal dimension of many lava flows using a wide range of rod lengths, from 1/8 to 16 meters, in field studies of the Hawaiian Islands. We also use this method to calculate fractal dimensions from aerial photographs of lava flows, using rod lengths ranging from 20 meters to over 2 kilometers. Finally, we applied this method to orbital images of extraterrestrial lava flows on Venus, Mars, and the Moon, using rod lengths up to 60 kilometers.
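
    The structured-walk steps above can be sketched as follows. The margin here is a synthetic Koch curve (known dimension log 4 / log 3 ≈ 1.26) rather than a digitized flow outline, and the rod lengths are arbitrary; the dimension is recovered from the slope of the log-log fit exactly as described in the abstract.

```python
import numpy as np

def koch(points, depth):
    """Refine a polyline into a Koch curve, a convenient fractal test margin."""
    if depth == 0:
        return [np.asarray(p, float) for p in points]
    rot = np.array([[0.5, -np.sqrt(3) / 2], [np.sqrt(3) / 2, 0.5]])  # +60 degrees
    out = [np.asarray(points[0], float)]
    for p, q in zip(points[:-1], points[1:]):
        p, q = np.asarray(p, float), np.asarray(q, float)
        d = (q - p) / 3.0
        a, b = p + d, p + 2 * d
        out += [a, a + rot @ d, b, q]
    return koch(out, depth - 1)

def walk_length(curve, rod):
    """Apparent length measured by walking a rod of fixed span along the curve."""
    pos, i, steps = curve[0], 0, 0
    while True:
        j = i
        # skip vertices still inside the rod's reach
        while j + 1 < len(curve) and np.linalg.norm(curve[j + 1] - pos) < rod:
            j += 1
        if j + 1 == len(curve):
            break
        # exact point at distance `rod` on segment j -> j+1 (circle-segment intersection)
        v, w = curve[j + 1] - curve[j], curve[j] - pos
        a, b, c = v @ v, 2 * (w @ v), w @ w - rod * rod
        t = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
        pos, i, steps = curve[j] + t * v, j, steps + 1
    return steps * rod

curve = koch([(0.0, 0.0), (1.0, 0.0)], 5)
rods = [3.0**-k for k in range(2, 5)]
lengths = [walk_length(curve, r) for r in rods]
# apparent length grows as the rod shrinks: L(r) ~ r**(1 - D), so D = 1 - slope
slope, _ = np.polyfit(np.log(rods), np.log(lengths), 1)
D = 1 - slope
```

    For a straight margin the apparent length is independent of rod length (slope 0, D = 1); the more convoluted the margin, the steeper the slope and the higher the dimension.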

  14. An Evaluation of Fractal Surface Measurement Methods for Characterizing Landscape Complexity from Remote-Sensing Imagery

    NASA Technical Reports Server (NTRS)

    Lam, Nina Siu-Ngan; Qiu, Hong-Lie; Quattrochi, Dale A.; Emerson, Charles W.; Arnold, James E. (Technical Monitor)

    2001-01-01

    The rapid increase in digital data volumes from new and existing sensors creates the need for efficient analytical tools for extracting information. We developed an integrated software package called ICAMS (Image Characterization and Modeling System) to provide specialized spatial analytical functions for interpreting remote sensing data. This paper evaluates three fractal dimension measurement methods (isarithm, variogram, and triangular prism), along with the spatial autocorrelation measurement methods Moran's I and Geary's C, all of which have been implemented in ICAMS. A modified triangular prism method was proposed and implemented. Results from analyzing 25 simulated surfaces having known fractal dimensions show that both the isarithm and triangular prism methods can accurately measure a range of fractal surfaces. The triangular prism method is most accurate at estimating the fractal dimension of surfaces with higher spatial complexity, but it is sensitive to contrast stretching. The variogram method is a comparatively poor estimator for all of the surfaces, particularly those with higher fractal dimensions. Similar to the fractal techniques, the spatial autocorrelation techniques are found to be useful for measuring complex images but not images with low dimensionality. These fractal measurement methods can be applied directly to unclassified images and could serve as a tool for change detection and data mining.
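
    The classic (unmodified) triangular prism calculation can be sketched as below; this is not ICAMS itself, and the surfaces are synthetic (a smooth trigonometric surface and white noise), chosen only to show that a rougher surface yields a higher estimated dimension. All sizes and step choices are arbitrary.

```python
import numpy as np

def tri_area(p1, p2, p3):
    """Area of a 3-D triangle from its vertices."""
    return 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))

def triangular_prism_dimension(z, steps):
    """Surface fractal dimension via the triangular prism method.

    For each step size s, every s-by-s cell is covered by four triangles
    meeting at the cell-centre elevation (mean of the four corners).
    Total triangle area scales as A(s) ~ s**(2 - D).
    """
    n = z.shape[0]
    areas = []
    for s in steps:
        total = 0.0
        for i in range(0, n - s, s):
            for j in range(0, n - s, s):
                c00 = np.array([i, j, z[i, j]], float)
                c01 = np.array([i, j + s, z[i, j + s]], float)
                c10 = np.array([i + s, j, z[i + s, j]], float)
                c11 = np.array([i + s, j + s, z[i + s, j + s]], float)
                apex = np.array([i + s / 2, j + s / 2,
                                 (c00[2] + c01[2] + c10[2] + c11[2]) / 4])
                total += (tri_area(c00, c01, apex) + tri_area(c01, c11, apex)
                          + tri_area(c11, c10, apex) + tri_area(c10, c00, apex))
        areas.append(total)
    # A(s) ~ s**(2 - D)  =>  D = 2 - slope of log A vs log s
    slope, _ = np.polyfit(np.log(steps), np.log(areas), 1)
    return 2.0 - slope

rng = np.random.default_rng(0)
n = 129
smooth = np.fromfunction(lambda i, j: np.sin(i / 20.0) + np.cos(j / 20.0), (n, n))
rough = rng.standard_normal((n, n))        # uncorrelated noise: very rough surface
steps = [2, 4, 8, 16]
D_smooth = triangular_prism_dimension(smooth, steps)
D_rough = triangular_prism_dimension(rough, steps)
```

    The smooth surface gives D close to 2 (its measured area barely changes with step size), while the noise surface inflates at fine steps and produces a larger D, which is the behaviour the paper exploits to rank landscape complexity.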

  15. A Comparative Study on Diagnostic Accuracy of Colour Coded Digital Images, Direct Digital Images and Conventional Radiographs for Periapical Lesions – An In Vitro Study

    PubMed Central

    Mubeen; K.R., Vijayalakshmi; Bhuyan, Sanat Kumar; Panigrahi, Rajat G; Priyadarshini, Smita R; Misra, Satyaranjan; Singh, Chandravir

    2014-01-01

    Objectives: The identification and radiographic interpretation of periapical bone lesions is important for accurate diagnosis and treatment. The present study was undertaken to assess the feasibility and diagnostic accuracy of colour coded digital radiographs in terms of the presence and size of lesions, and to compare the diagnostic accuracy of colour coded digital images with direct digital images and conventional radiographs for assessing periapical lesions. Materials and Methods: Sixty human dry cadaver hemimandibles were obtained and periapical lesions were created in first and second premolar teeth at the junction of cancellous and cortical bone using a micromotor handpiece and carbide burs of sizes 2, 4 and 6. After each successive use of round burs, conventional, RVG, and colour coded images were taken of each specimen. All the images were evaluated by three observers. The diagnostic accuracy for each bur and image mode was calculated statistically. Results: Our results showed good interobserver (kappa > 0.61) agreement for the different radiographic techniques and for the different bur sizes. Conventional radiography outperformed digital radiography in diagnosing periapical lesions made with the size two bur. Both were equally diagnostic for lesions made with larger bur sizes. The colour coding method was the least accurate of all the techniques. Conclusion: Conventional radiography traditionally forms the backbone of the diagnosis, treatment planning and follow-up of periapical lesions. Direct digital imaging is an efficient technique in a diagnostic sense. Colour coding of digital radiographs was feasible but less accurate; however, this imaging technique, like any other, needs to be studied continuously, with emphasis on the safety of patients and the diagnostic quality of images. PMID:25584318

  16. Color-Coded Imaging of Breast Cancer Metastatic Niche Formation in Nude Mice.

    PubMed

    Suetsugu, Atsushi; Momiyama, Masashi; Hiroshima, Yukihiko; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Bouvet, Michael; Hoffman, Robert M

    2015-12-01

    We report here a color-coded imaging model in which metastatic niches of breast cancer in the lung and liver can be identified. The transgenic green fluorescent protein (GFP)-expressing nude mouse was used as the host. The GFP nude mouse expresses GFP in all organs. However, GFP expression is dim in the liver parenchymal cells. Mouse mammary tumor cells (MMT 060562) (MMT), expressing red fluorescent protein (RFP), were injected in the tail vein of GFP nude mice to produce experimental lung metastasis and in the spleen of GFP nude mice to establish a liver metastasis model. Niche formation in lung and liver metastases was observed using very high resolution imaging systems. In the lung, GFP host-mouse cells accumulated around as few as a single MMT-RFP cell. In addition, GFP host cells were observed to form circle-shaped niches in the lung even without RFP cancer cells, possibly niches in which future metastases could form. In the liver, as in the lung, GFP host cells could form circle-shaped niches. Liver and lung metastases were removed surgically and cultured in vitro. MMT-RFP cells and GFP host cells resembling cancer-associated fibroblasts (CAFs) were observed interacting, suggesting that CAFs could serve as a metastatic niche.

  17. Analytical model for effect of temperature variation on PSF consistency in wavefront coding infrared imaging system

    NASA Astrophysics Data System (ADS)

    Feng, Bin; Shi, Zelin; Zhang, Chengshuo; Xu, Baoshu; Zhang, Xiaodong

    2016-05-01

    The point spread function (PSF) inconsistency caused by temperature variation leads to artifacts in decoded images of a wavefront coding infrared imaging system. Therefore, this paper proposes an analytical model for the effect of temperature variation on PSF consistency. In the proposed model, a formula for the thermal deformation of an optical phase mask is derived. This formula indicates that a cubic optical phase mask (CPM) is still cubic after thermal deformation. A proposed equivalent cubic phase mask (E-CPM) is a virtual, room-temperature lens which characterizes the optical effect of temperature variation on the CPM. Additionally, a calculating method for PSF consistency after temperature variation is presented. Numerical simulation illustrates the validity of the proposed model and some significant conclusions are drawn. Given the form parameter, the PSF consistency achieved by a Ge-material CPM is better than that achieved by a ZnSe-material CPM. The effect of the optical phase mask on PSF inconsistency is much smaller than that of the auxiliary lens group. A large form parameter of the CPM will introduce large defocus-insensitive aberrations, which improves the PSF consistency but degrades the room-temperature MTF.

  18. Sparse coding based dense feature representation model for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender; Zhou, Guoqing; Zheng, Zezhong; Iftekharuddin, Khan; Li, Jiang

    2015-11-01

    We present a sparse coding based dense feature representation model (a preliminary version of the paper was presented at the SPIE Remote Sensing Conference, Dresden, Germany, 2013) for hyperspectral image (HSI) classification. The proposed method learns a new representation for each pixel in HSI through the following four steps: sub-band construction, dictionary learning, encoding, and feature selection. The new representation usually has a very high dimensionality requiring a large amount of computational resources. We applied the l1/lq regularized multiclass logistic regression technique to reduce the size of the new representation. We integrated the method with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit, and image fusion and recursive filtering. Experimental results show that the proposed method can achieve better overall and average classification accuracies with a much more compact representation leading to more efficient sparse models for HSI classification.
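
    Of the techniques compared above, the matching-pursuit family is the easiest to illustrate. Below is a plain orthogonal matching pursuit (OMP) encoder over a random unit-norm dictionary; it is a generic sketch, not the paper's learned dictionaries, simultaneous OMP, or l1/lq feature selection, and the dictionary size, signal, and sparsity level are arbitrary choices.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedy k-sparse code of x over dictionary D."""
    residual = x.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all selected atoms jointly (the 'orthogonal' step)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
truth = np.zeros(50)
truth[[3, 17]] = [1.5, -2.0]            # a 2-sparse ground truth
x = D @ truth
code = omp(D, x, k=3)
err = np.linalg.norm(x - D @ code) / np.linalg.norm(x)
```

    In an HSI pipeline the same encoder would run per pixel (or per sub-band block), and the resulting sparse codes, rather than the raw spectra, feed the downstream SVM classifier.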

  19. Map of fluid flow in fractal porous medium into fractal continuum flow.

    PubMed

    Balankin, Alexander S; Elizarraraz, Benjamin Espinoza

    2012-05-01

    This paper is devoted to fractal continuum hydrodynamics and its application to model fluid flows in fractally permeable reservoirs. Hydrodynamics of fractal continuum flow is developed on the basis of a self-consistent model of fractal continuum employing vector local fractional differential operators allied with the Hausdorff derivative. The generalized forms of Green-Gauss and Kelvin-Stokes theorems for fractional calculus are proved. The Hausdorff material derivative is defined and the form of Reynolds transport theorem for fractal continuum flow is obtained. The fundamental conservation laws for a fractal continuum flow are established. The Stokes law and the analog of Darcy's law for fractal continuum flow are suggested. The pressure-transient equation accounting for the fractal metric of fractal continuum flow is derived. The generalization of the pressure-transient equation accounting for the fractal topology of fractal continuum flow is proposed. The mapping of fluid flow in a fractally permeable medium into a fractal continuum flow is discussed. It is stated that the spectral dimension of the fractal continuum flow d(s) is equal to its mass fractal dimension D, even when the spectral dimension of the fractally porous or fissured medium is less than D. A comparison of the fractal continuum flow approach with other models of fluid flow in fractally permeable media and with experimental field data from reservoir tests is provided.

  20. Fractal Character of Titania Nanoparticles Formed by Laser Ablation

    SciTech Connect

    Musaev, O.; Midgley, A; Wrobel, J; Yan, J; Kruger, M

    2009-01-01

    Titania nanoparticles were fabricated by laser ablation of polycrystalline rutile in water at room temperature. The resulting nanoparticles were analyzed with x-ray diffraction, Raman spectroscopy, and transmission electron microscopy. The electron micrograph image of deposited nanoparticles demonstrates fractal properties.

  1. A Fractal Approach to Assess the Risks of Nitroamine Explosives

    NASA Astrophysics Data System (ADS)

    Song, Xiaolan; Li, Fengsheng; Wang, Yi; An, Chongwei; Wang, Jingyu; Zhang, Jinglin

    2012-01-01

    To the best of our knowledge, this work represents the first thermal conductivity theory for fractal energetic particle groups to combine fractal and hot-spot theories. We considered the influence of the fractal dimensions of particles on their thermal conductivity and even on the sensitivity of the explosive. Based on this theory, two types of nitroamine explosives (hexahydro-1,3,5-trinitro-1,3,5-triazine [RDX] and hexanitrohexaazaisowurtzitane [HNIW]) with different sizes, size distributions, and microscale morphologies were prepared using wet milling, solvent/nonsolvent, and ridding methods. The dependence of the explosive sensitivity on the fractal characteristics of the particles was investigated. Specifically, the size distributions and scanning electron microscopy (SEM) images of the samples were used to obtain the fractal dimension (D) and surface fractal dimension (Ds), respectively, by using a least-squares method and fractal image processing software (FIPS). The mechanical sensitivity and thermal stability of the samples were characterized using mechanical sensitivity tests and differential scanning calorimetry (DSC) and were further compared with previous results from an investigation of HMX (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine). The results indicate that the sensitivity of nitroamine explosives largely depends on the fractal dimensions of the particles. Specifically, the sample with a higher D value is less sensitive to impact, whereas the sample with a higher Ds value is more sensitive to friction. In addition, the sample with both higher D and Ds values has less heat release and a slower rate of thermal decomposition. All of the above observations can be attributed to altered hot-spot formation, which is controlled by heat mass and thermal conductivity and changes with increasing D and Ds values caused by changes in parameters such as fine particle content, specific surface area, porosity content

  2. Fractals and dynamics in art and design.

    PubMed

    Guastello, Stephen J

    2015-01-01

    Many styles of visual art that build on fractal imagery and chaotic dynamics in the creative process have been examined in NDPLS in recent years. This article presents a gallery of artwork turned into design that appeared in the promotional products of the Society for Chaos Theory in Psychology & Life Sciences. The gallery showcases a variety of new imaging styles, including photography, that reflect a deepening perspective on nonlinear dynamics and art. The contributing artworks in design formats combine to render the verve that transcends the boundaries between the artistic and scientific communities.

  3. Fractal Poisson processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2008-09-01

    The Central Limit Theorem (CLT) and Extreme Value Theory (EVT) study, respectively, the stochastic limit-laws of sums and maxima of sequences of independent and identically distributed (i.i.d.) random variables via an affine scaling scheme. In this research we study the stochastic limit-laws of populations of i.i.d. random variables via nonlinear scaling schemes. The stochastic population-limits obtained are fractal Poisson processes which are statistically self-similar with respect to the scaling scheme applied, and which are characterized by two elemental structures: (i) a universal power-law structure common to all limits, and independent of the scaling scheme applied; (ii) a specific structure contingent on the scaling scheme applied. The sum-projection and the maximum-projection of the population-limits obtained are generalizations of the classic CLT and EVT results - extending them from affine to general nonlinear scaling schemes.

  4. Fractality of light's darkness.

    PubMed

    O'Holleran, Kevin; Dennis, Mark R; Flossmann, Florian; Padgett, Miles J

    2008-02-01

    Natural light fields are threaded by lines of darkness. For monochromatic light, the phenomenon is familiar in laser speckle, i.e., the black points that appear in the scattered light. These black points are optical vortices that extend as lines throughout the volume of the field. We establish by numerical simulations, supported by experiments, that these vortex lines have the fractal properties of a Brownian random walk. Approximately 73% of the lines percolate through the optical beam, the remainder forming closed loops. Our statistical results are similar to those of vortices in random discrete lattice models of cosmic strings, implying that the statistics of singularities in random optical fields exhibit universal behavior. PMID:18352372

  5. Bandwidth reduction of high-frequency sonar imagery in shallow water using content-adaptive hybrid image coding

    NASA Astrophysics Data System (ADS)

    Shin, Frances B.; Kil, David H.

    1998-09-01

    One of the biggest challenges in distributed underwater mine warfare for area sanitization and safe power projection during regional conflicts is transmission of compressed raw imagery data to a central processing station via a limited bandwidth channel while preserving crucial target information for further detection and automatic target recognition processing. Moreover, operating in extremely shallow water with fluctuating channels and numerous interfering sources makes it imperative that image compression algorithms deal effectively with background nonstationarity within an image as well as content variation between images. In this paper, we present a novel approach to lossy image compression that combines image-content classification, content-adaptive bit allocation, and hybrid wavelet tree-based coding for over 100:1 bandwidth reduction with little sacrifice in signal-to-noise ratio (SNR). Our algorithm comprises (1) content-adaptive coding that takes advantage of a classify-before-coding strategy to reduce data mismatch, (2) subimage transformation for energy compaction, and (3) wavelet tree-based coding for efficient encoding of significant wavelet coefficients. Furthermore, instead of using embedded zerotree coding with scalar quantization (SQ), we investigate the use of a hybrid coding strategy that combines SQ for high-magnitude outlier transform coefficients and classified vector quantization (CVQ) for compactly clustered coefficients. This approach helps us achieve reduced distortion error and robustness while maintaining a high compression ratio. Our analysis, based on real high-frequency sonar data that exhibit severe content variability and contain both mines and mine-like clutter, indicates that we can achieve an over 100:1 compression ratio without losing crucial signal attributes.
In comparison, benchmarking of the same data set with the best still-picture compression algorithm called the set partitioning in hierarchical trees (SPIHT) reveals
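
    The hybrid SQ/VQ idea can be illustrated in miniature: coefficients above a magnitude threshold get a uniform scalar quantizer, while the compactly clustered remainder is vector-quantized with a small k-means codebook. This is a generic sketch, not the authors' codec: the Laplacian "coefficients", threshold, step size, block dimension, and codebook size are all arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans_codebook(vectors, k, iters=20):
    """Plain Lloyd iterations: a minimal VQ codebook trainer."""
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        idx = dists.argmin(axis=1)
        for c in range(k):
            members = vectors[idx == c]
            if len(members):
                codebook[c] = members.mean(axis=0)
    return codebook

def hybrid_quantize(coeffs, threshold=3.0, step=0.5, dim=4, k=16):
    """SQ for high-magnitude outlier coefficients, VQ for the clustered rest."""
    flat = coeffs.ravel().copy()
    outliers = np.abs(flat) > threshold
    # uniform scalar quantizer for the outliers (error bounded by step/2)
    flat[outliers] = np.round(flat[outliers] / step) * step
    # vector-quantize the remainder in blocks of `dim`
    rest = flat[~outliers]
    pad = (-len(rest)) % dim
    blocks = np.concatenate([rest, np.zeros(pad)]).reshape(-1, dim)
    codebook = kmeans_codebook(blocks, k)
    idx = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :],
                         axis=2).argmin(axis=1)
    flat[~outliers] = codebook[idx].ravel()[:len(rest)]
    return flat.reshape(coeffs.shape)

# synthetic 'wavelet coefficients': compact cluster near zero plus rare outliers
coeffs = rng.laplace(scale=0.7, size=(64, 64))
recon = hybrid_quantize(coeffs)
snr = 10 * np.log10(np.mean(coeffs**2) / np.mean((coeffs - recon)**2))
```

    The split matters because the rare, large coefficients carry the target information and are cheap to code precisely, while the bulk of small coefficients compress well as whole vectors against a tiny codebook.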

  6. Anomalous Diffusion in Fractal Globules

    NASA Astrophysics Data System (ADS)

    Tamm, M. V.; Nazarov, L. I.; Gavrilov, A. A.; Chertovich, A. V.

    2015-05-01

    The fractal globule state is a popular model for describing chromatin packing in eukaryotic nuclei. Here we provide a scaling theory and dissipative particle dynamics computer simulation for the thermal motion of monomers in the fractal globule state. Simulations starting from different entanglement-free initial states show good convergence which provides evidence supporting the existence of a unique metastable fractal globule state. We show monomer motion in this state to be subdiffusive, described by ⟨X²(t)⟩ ∼ t^αF with αF close to 0.4. This result is in good agreement with existing experimental data on the chromatin dynamics, which makes an additional argument in support of the fractal globule model of chromatin packing.

  7. Thermodynamics of Photons on Fractals

    SciTech Connect

    Akkermans, Eric; Dunne, Gerald V.; Teplyaev, Alexander

    2010-12-03

    A thermodynamical treatment of a massless scalar field (a photon) confined to a fractal spatial manifold leads to an equation of state relating pressure to internal energy, PV_s = U/d_s, where d_s is the spectral dimension and V_s defines the 'spectral volume'. For regular manifolds, V_s coincides with the usual geometric spatial volume, but on a fractal this is not necessarily the case. This is further evidence that on a fractal, momentum space can have a different dimension than position space. Our analysis also provides a natural definition of the vacuum (Casimir) energy of a fractal. We suggest ways that these unusual properties might be probed experimentally.

  8. Fractal analysis of Mesoamerican pyramids.

    PubMed

    Burkle-Elizondo, Gerardo; Valdez-Cepeda, Ricardo David

    2006-01-01

    A myth of ancient cultural roots was integrated into Mesoamerican cult, and its reference to architecture carried a deep religious symbolism. The pyramids form a functional part of this cosmovision, which is centered on sacralization. The architectural works were an expression of ideological necessities within a conception of harmony. The symbolism of the temple structures seems to reflect the mathematical order of the Universe. We considered two models of fractal analysis. The first includes 16 pyramids; we treated the data set as a fractal profile to estimate Df through variography (Dv), obtaining an estimated fractal dimension Dv = 1.383 +/- 0.211. In the second, we analyzed a data set of 19 pyramids and obtained an estimated fractal dimension Dv = 1.229 +/- 0.165.

  9. Fractal identification of supercell storms

    NASA Astrophysics Data System (ADS)

    Féral, Laurent; Sauvageot, Henri

    2002-07-01

    The most intense and violent form of convective storm is the supercell storm, usually associated with heavy rain, hail, and destructive gusty winds, downbursts, and tornadoes. Identifying a storm cell as a supercell storm is not easy. What is shown here, from radar data, is that when an ordinary, or multicell storm evolves towards the supercellular organization, its fractal dimension is modified. Whereas the fractal dimension of the ordinary convective storms, including multicell thunderstorms, is observed around 1.35, in agreement with previous results, the fractal dimension of supercell storms is found close to 1.07. This low value is due to the unicellular character of supercells. The present paper suggests that the fractal dimension is a parameter that should be considered to analyse the dynamical organization of a convective field and to detect and identify the supercell storms, either isolated or among a population of convective storms.

  10. Anomalous diffusion in fractal globules.

    PubMed

    Tamm, M V; Nazarov, L I; Gavrilov, A A; Chertovich, A V

    2015-05-01

    The fractal globule state is a popular model for describing chromatin packing in eukaryotic nuclei. Here we provide a scaling theory and dissipative particle dynamics computer simulation for the thermal motion of monomers in the fractal globule state. Simulations starting from different entanglement-free initial states show good convergence which provides evidence supporting the existence of a unique metastable fractal globule state. We show monomer motion in this state to be subdiffusive, described by ⟨X²(t)⟩ ∼ t^αF with αF close to 0.4. This result is in good agreement with existing experimental data on the chromatin dynamics, which makes an additional argument in support of the fractal globule model of chromatin packing. PMID:25978267

  12. Fractal electronic devices: simulation and implementation.

    PubMed

    Fairbanks, M S; McCarthy, D N; Scott, S A; Brown, S A; Taylor, R P

    2011-09-01

    Many natural structures have fractal geometries that exhibit useful functional properties. These properties, which exploit the recurrence of patterns at increasingly small scales, are often desirable in applications and, consequently, fractal geometry is increasingly employed in diverse technologies ranging from radio antennae to storm barriers. In this paper, we explore the application of fractal geometry to electrical devices. First, we lay the foundations for the implementation of fractal devices by considering diffusion-limited aggregation (DLA) of atomic clusters. Under appropriate growth conditions, atomic clusters of various elements form fractal patterns driven by DLA. We perform a fractal analysis of both simulated and physical devices to determine their spatial scaling properties and demonstrate their potential as fractal circuit elements. Finally, we simulate conduction through idealized and DLA fractal devices and show that their fractal scaling properties generate novel, nonlinear conduction properties in response to depletion by electrostatic gates. PMID:21841218

  13. Fractal electronic devices: simulation and implementation

    NASA Astrophysics Data System (ADS)

    Fairbanks, M. S.; McCarthy, D. N.; Scott, S. A.; Brown, S. A.; Taylor, R. P.

    2011-09-01

    Many natural structures have fractal geometries that exhibit useful functional properties. These properties, which exploit the recurrence of patterns at increasingly small scales, are often desirable in applications and, consequently, fractal geometry is increasingly employed in diverse technologies ranging from radio antennae to storm barriers. In this paper, we explore the application of fractal geometry to electrical devices. First, we lay the foundations for the implementation of fractal devices by considering diffusion-limited aggregation (DLA) of atomic clusters. Under appropriate growth conditions, atomic clusters of various elements form fractal patterns driven by DLA. We perform a fractal analysis of both simulated and physical devices to determine their spatial scaling properties and demonstrate their potential as fractal circuit elements. Finally, we simulate conduction through idealized and DLA fractal devices and show that their fractal scaling properties generate novel, nonlinear conduction properties in response to depletion by electrostatic gates.

  14. Fractal dynamics of bioconvective patterns

    NASA Technical Reports Server (NTRS)

    Noever, David A.

    1991-01-01

    Biologically generated cellular patterns, sometimes called bioconvective patterns, are found to cluster into aggregates which follow fractal growth dynamics akin to diffusion-limited aggregation (DLA) models. The pattern formed is self-similar with fractal dimension of 1.66 +/- 0.038. Bioconvective DLA branching results from thermal roughening which shifts the balance between ordering viscous forces and disordering cell motility and random diffusion. The phase diagram for pattern morphology includes DLA, boundary spokes, random clusters, and reverse clusters.

  15. Fractal geometry-based classification approach for the recognition of lung cancer cells

    NASA Astrophysics Data System (ADS)

    Xia, Deshen; Gao, Wenqing; Li, Hua

    1994-05-01

    This paper describes a new fractal geometry based classification approach for the recognition of lung cancer cells, for use in health screening for lung cancer. Because cancer cells grow much faster and more irregularly than normal cells do, the shape of a segmented cancer cell is very irregular and can be considered a graph without characteristic length. We use Texture Energy Intensity Rn to perform fractal preprocessing to segment the cells from the image and to calculate the fractal dimension value for extracting the fractal features, so that we obtain the shape characteristics of different cancer cells and normal cells respectively. Fractal geometry gives us a correct description of cancer-cell shapes. Through this method, good recognition of adenoma, squamous, and small cell cancer cells can be obtained.

  16. Random walk through fractal environments.

    PubMed

    Isliker, H; Vlahos, L

    2003-02-01

    We analyze random walk through fractal environments, embedded in three-dimensional, permeable space. Particles travel freely and are scattered off into random directions when they hit the fractal. The statistical distribution of the flight increments (i.e., of the displacements between two consecutive hittings) is analytically derived from a common, practical definition of fractal dimension, and it turns out to approximate a power law quite well in the case where the dimension D(F) of the fractal is less than 2; there is, though, always a finite rate of unaffected escape. Random walks through fractal sets with D(F) <= 2 can thus be considered defective Levy walks. The distribution of jump increments for D(F) > 2 decays exponentially. The diffusive behavior of the random walk is analyzed in the framework of the continuous-time random walk, which we generalize to include the case of defective distributions of walk increments. It is shown that the particles undergo anomalous, enhanced diffusion for D(F) < 2, where the diffusion is dominated by the finite escape rate. Diffusion for D(F) > 2 is normal for large times, though enhanced for small and intermediate times. In particular, it follows that fractals generated by a particular class of self-organized criticality models give rise to enhanced diffusion. The analytical results are illustrated by Monte Carlo simulations.

  17. Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-core Processors

    NASA Astrophysics Data System (ADS)

    Linderman, R.; Spetka, S.; Fitzgerald, D.; Emeny, S.

    The Physically-Constrained Iterative Deconvolution (PCID) image deblurring code is being ported to heterogeneous networks of multi-core systems, including Intel Xeons and IBM Cell Broadband Engines. This paper reports results from experiments using the JAWS supercomputer at MHPCC (60 TFLOPS of dual-dual Xeon nodes linked with Infiniband) and the Cell Cluster at AFRL in Rome, NY. The Cell Cluster has 52 TFLOPS of Playstation 3 (PS3) nodes with IBM Cell Broadband Engine multi-cores and 15 dual-quad Xeon head nodes. The interconnect fabric includes Infiniband, 10 Gigabit Ethernet and 1 Gigabit Ethernet to each of the 336 PS3s. The results compare approaches to parallelizing FFT executions across the Xeons and the Cell's Synergistic Processing Elements (SPEs) for frame-level image processing. The experiments included Intel's Performance Primitives and Math Kernel Library, FFTW3.2, and Carnegie Mellon's SPIRAL. Optimization of FFTs in the PCID code led to a decrease in relative processing time for FFTs. Profiling PCID version 6.2, about one year ago, showed that the 13 functions that accounted for the highest percentage of processing were all FFT processing functions. They accounted for over 88% of processing time in one run on Xeons. FFT optimizations led to improvement in the current PCID version 8.0. A recent profile showed that only two of the 19 functions with the highest processing time were FFT processing functions. Timing measurements showed that FFT processing for PCID version 8.0 has been reduced to less than 19% of overall processing time. We are working toward a goal of scaling to 200-400 cores per job (1-2 imagery frames/core). Running a pair of cores on each set of frames reduces latency by implementing parallel FFT processing. Our current results show scaling well out to 100 pairs of cores.
These results support the next higher level of parallelism in PCID, where groups of several hundred frames each producing one resolved image are sent to cliques of several
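The frame-level parallelism described above (independent imagery frames dispatched to separate cores, each running FFT-dominated processing) can be sketched in a few lines. This is an illustrative sketch only, not the PCID code; the trivial FFT round-trip kernel, frame size and thread pool are our assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame):
    """Per-frame kernel: forward 2-D FFT, (here trivial) filtering step,
    inverse FFT back to the image domain."""
    spectrum = np.fft.fft2(frame)
    return np.fft.ifft2(spectrum).real

def process_frames(frames, workers=4):
    """Frame-level parallelism: each imagery frame is independent, so
    frames are dispatched to a pool and transformed concurrently."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_frame, frames))
```

A thread pool merely illustrates the dispatch pattern; across the Xeon and Cell nodes of the paper, the same role would be played by processes or MPI ranks.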

  18. Integer DCT based on direct-lifting of DCT-IDCT for lossless-to-lossy image coding.

    PubMed

    Suzuki, Taizo; Ikehara, Masaaki

    2010-11-01

    A discrete cosine transform (DCT) can be easily implemented in software and hardware for the JPEG and MPEG formats. However, even though some integer DCTs (IntDCTs) for lossless-to-lossy image coding have been proposed, such transforms require redesigned devices. This paper proposes a hardware-friendly IntDCT that can be applied to both lossless and lossy coding. Our IntDCT is implemented by direct-lifting of the DCT and inverse DCT (IDCT). Consequently, any existing DCT device can be applied directly to every lifting block. Although our method requires a small side information block (SIB), it is validated by its application to lossless-to-lossy image coding.
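The key property behind lifting-based integer transforms is that a plane rotation split into lifting steps stays exactly invertible on integers, no matter how each step is rounded. The sketch below (our illustration of the mechanism, not the authors' SIB-based IntDCT) shows this for a single 2-point rotation:

```python
import math

def lift_forward(x0, x1, angle):
    """Approximate the rotation [[cos a, sin a], [-sin a, cos a]] by three
    lifting (shear) steps with rounding. Each step adds a rounded function
    of one sample to the other, so it is exactly invertible on integers."""
    p = math.tan(angle / 2)   # predict coefficient
    u = math.sin(angle)       # update coefficient
    x0 = x0 + round(p * x1)
    x1 = x1 - round(u * x0)
    x0 = x0 + round(p * x1)
    return x0, x1

def lift_inverse(y0, y1, angle):
    """Undo the lifting steps in reverse order with the same rounding."""
    p = math.tan(angle / 2)
    u = math.sin(angle)
    y0 = y0 - round(p * y1)
    y1 = y1 + round(u * y0)
    y0 = y0 - round(p * y1)
    return y0, y1
```

Because the inverse replays the identical rounded quantities in reverse, the round trip is lossless, which is exactly what lossless-to-lossy coding requires.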

  19. Automatic choroid cells segmentation and counting based on approximate convexity and concavity of chain code in fluorescence microscopic image

    NASA Astrophysics Data System (ADS)

    Lu, Weihua; Chen, Xinjian; Zhu, Weifang; Yang, Lei; Cao, Zhaoyuan; Chen, Haoyu

    2015-03-01

    In this paper, we propose a method based on the Freeman chain code to segment and count rhesus choroid-retinal vascular endothelial cells (RF/6A) automatically in fluorescence microscopy images. The proposed method consists of four main steps. First, a threshold filter and a morphological transform were applied to reduce the noise. Second, the boundary information was used to generate the Freeman chain codes. Third, the concave points were found based on the relationship between the difference of the chain code and the curvature. Finally, cell segmentation and counting were completed based on the number of concave points and the area and shape of the cells. The proposed method was tested on 100 fluorescence microscopic cell images; the average true positive rate (TPR) and false positive rate (FPR) were 98.13% and 4.47%, respectively. These preliminary results show the feasibility and efficiency of the proposed method.
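The chain-code machinery in steps two and three can be illustrated compactly. The sketch below uses an assumed convention (Freeman codes 0-7 counter-clockwise starting from east, y axis up); it derives the chain code of a closed boundary and flags concave corners from the code's first difference, which is the idea the paper builds on:

```python
# 8-neighbour offsets indexed by Freeman code 0..7 (0 = east, counter-clockwise)
OFFSETS = [(1, 0), (1, 1), (0, 1), (-1, 1),
           (-1, 0), (-1, -1), (0, -1), (1, -1)]
CODE = {off: i for i, off in enumerate(OFFSETS)}

def freeman_chain(boundary):
    """Chain code of an ordered, 8-connected, closed boundary of (x, y) points."""
    codes = []
    n = len(boundary)
    for i in range(n):
        x0, y0 = boundary[i]
        x1, y1 = boundary[(i + 1) % n]
        codes.append(CODE[(x1 - x0, y1 - y0)])
    return codes

def code_differences(codes):
    """First difference of the chain code (mod 8). For a boundary traced
    counter-clockwise, differences of 5, 6 or 7 (right turns) flag
    concave corners, which mark touching-cell junctions."""
    n = len(codes)
    return [(codes[(i + 1) % n] - codes[i]) % 8 for i in range(n)]
```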

  20. In situ X-ray beam imaging using an off-axis magnifying coded aperture camera system.

    PubMed

    Kachatkou, Anton; Kyele, Nicholas; Scott, Peter; van Silfhout, Roelof

    2013-07-01

    An imaging model and an image reconstruction algorithm for a transparent X-ray beam imaging and position measuring instrument are presented. The instrument relies on a coded aperture camera to record magnified images of the footprint of the incident beam on a thin foil placed in the beam at an oblique angle. The imaging model represents the instrument as a linear system whose impulse response takes into account the image blur owing to the finite thickness of the foil, the shape and size of the camera's aperture, and the detector's point-spread function. The image reconstruction algorithm first removes the image blur using the modelled impulse response function and then corrects for geometrical distortions caused by the foil tilt. The performance of the image reconstruction algorithm was tested in experiments at synchrotron radiation beamlines. The results show that the proposed imaging system produces images of the X-ray beam cross section with a quality comparable to that of images obtained using X-ray cameras exposed to the direct beam.

  1. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2013-08-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo were achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.
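As a rough illustration of the back-projection (BP) reconstruction that the paper accelerates on the GPU, the following naive delay-and-sum sketch maps each pixel's acoustic time of flight to a sample index per sensor. This is our simplification; the paper's optimized algorithm and parameters are not reproduced here.

```python
import numpy as np

def backproject(signals, fs, c, sensors, grid):
    """Naive delay-and-sum photoacoustic backprojection.
    signals: (n_sensors, n_samples) recorded PA signals
    fs: sampling rate [Hz]; c: speed of sound [m/s]
    sensors: (n_sensors, 2) sensor positions [m]
    grid: (n_pixels, 2) pixel positions [m]
    Each pixel accumulates, over all sensors, the sample whose time of
    flight matches the pixel-to-sensor distance."""
    n_sensors, n_samples = signals.shape
    image = np.zeros(len(grid))
    for s in range(n_sensors):
        # distance from every pixel to this sensor -> sample index
        d = np.linalg.norm(grid - sensors[s], axis=1)
        idx = np.clip(np.round(d / c * fs).astype(int), 0, n_samples - 1)
        image += signals[s, idx]
    return image / n_sensors
```

The pixel loop is embarrassingly parallel, which is why this algorithm maps so well onto a GPU.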

  2. A real-time photoacoustic and ultrasound dual-modality imaging system facilitated with GPU and code parallel optimization

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L.; Wang, Xueding; Liu, Xiaojun

    2014-03-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The backprojection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo were achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.

  3. Stand-Alone Front-End System for High-Frequency, High-Frame-Rate Coded Excitation Ultrasonic Imaging

    PubMed Central

    Park, Jinhyoung; Hu, Changhong; Shung, K. Kirk

    2012-01-01

    A stand-alone front-end system for high-frequency coded excitation imaging was implemented to achieve a wider dynamic range. The system included an arbitrary waveform amplifier, an arbitrary waveform generator, an analog receiver, a motor position interpreter, a motor controller and power supplies. The digitized arbitrary waveforms at a sampling rate of 150 MHz could be programmed and converted to an analog signal. The pulse was subsequently amplified to excite an ultrasound transducer, and the maximum output voltage level achieved was 120 Vpp. The bandwidth of the arbitrary waveform amplifier was from 1 to 70 MHz. The noise figure of the preamplifier was less than 7.7 dB and the bandwidth was 95 MHz. Phantoms and biological tissues were imaged at a frame rate as high as 68 frames per second (fps) to evaluate the performance of the system. During the measurement, 40-MHz lithium niobate (LiNbO3) single-element lightweight (<0.28 g) transducers were utilized. The wire target measurement showed that the −6-dB axial resolution of a chirp-coded excitation was 50 µm and lateral resolution was 120 µm. The echo signal-to-noise ratios were found to be 54 and 65 dB for the short burst and coded excitation, respectively. The contrast resolution in a sphere phantom study was estimated to be 24 dB for the chirp-coded excitation and 15 dB for the short burst modes. In an in vivo study, zebrafish and mouse hearts were imaged. Boundaries of the zebrafish heart in the image could be differentiated because of the low-noise operation of the implemented system. In mouse heart images, valves and chambers could be readily visualized with the coded excitation. PMID:23443698
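The SNR advantage of chirp-coded excitation reported above comes from pulse compression: a long linear FM chirp is transmitted, and the received signal is correlated with the transmitted code so the echo energy collapses into a sharp peak. A minimal sketch follows, with illustrative parameters rather than the system's actual hardware settings:

```python
import numpy as np

def chirp(f0, f1, duration, fs):
    """Linear FM chirp sweeping f0 -> f1 over `duration` seconds,
    sampled at fs Hz."""
    t = np.arange(int(duration * fs)) / fs
    k = (f1 - f0) / duration                  # sweep rate [Hz/s]
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t * t))

def matched_filter(rx, code):
    """Correlate the received signal with the transmitted code
    (pulse compression); the peak location marks the echo delay."""
    return np.correlate(rx, code, mode="valid")
```

Because the matched filter integrates over the whole code length, a low-amplitude long chirp can deliver the echo SNR of a much stronger short burst, which is the effect behind the 54 dB versus 65 dB figures quoted above.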

  4. Fractal dimension analysis of malignant and benign endobronchial ultrasound nodes

    PubMed Central

    2014-01-01

    Background Endobronchial ultrasonography (EBUS) has been applied as a routine procedure for the diagnosis of hilar and mediastinal nodes. The authors assessed the relationship between the echographic appearance of mediastinal nodes, based on endobronchial ultrasound images, and the likelihood of malignancy. Methods The images of twelve malignant and eleven benign nodes were evaluated. A preprocessing method was applied to improve the quality of the images and to enhance the details. The texture and morphology parameters analyzed were the image texture of the echographies and a fractal dimension that expresses the relationship between the area and perimeter of the structures that appear in the image and characterizes the convoluted inner structure of the hilar and mediastinal nodes. Results Processed images showed that the relationship between log perimeter and log area of hilar nodes was linear (i.e. perimeter vs. area follows a power law). Fractal dimension was lower in the malignant nodes than in the non-malignant nodes (1.47(0.09) vs. 1.53(0.10), mean(SD), Mann-Whitney U test p < 0.05). Conclusion The fractal dimension of ultrasonographic images of mediastinal nodes obtained through endobronchial ultrasound differs between malignant and non-malignant nodes. This parameter could help differentiate malignant from non-malignant mediastinal and hilar nodes. PMID:24920158
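The area-perimeter fractal dimension used in studies like this follows from the scaling P ~ A^(D/2): taking logs, the slope of log-perimeter against log-area is D/2. A minimal least-squares estimator is sketched below (our generic illustration; the paper's exact processing pipeline is not reproduced):

```python
import math

def perimeter_area_dimension(areas, perimeters):
    """Fit log P = (D/2) * log A + c by least squares and return D.
    For a family of similar structures, P ~ A**(D/2), so the slope of
    log-perimeter versus log-area is half the fractal dimension."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(p) for p in perimeters]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return 2 * slope
```

For smooth boundaries, such as a family of circles, the fit returns D = 1; increasingly convoluted boundaries push D toward 2, which is why the malignant nodes' lower D indicates a less convoluted inner structure.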

  5. Small-angle scattering from fat fractals

    NASA Astrophysics Data System (ADS)

    Anitas, Eugen M.

    2014-06-01

    A number of experimental small-angle scattering (SAS) data sets are characterized by a succession of power-law decays with arbitrarily decreasing values of the scattering exponents. To describe such data, here we develop a new theoretical model based on 3D fat fractals (sets with fractal structure but nonzero volume) and show how one can extract structural information about the underlying fractal structure. We calculate analytically the monodisperse and polydisperse SAS intensity (fractal form factor and structure factor) of a newly introduced model of fat fractals and study its properties in momentum space. The system is a 3D deterministic mass fractal built on an extension of the well-known Cantor fractal. The model allows us to explain a succession of power-law decays and, respectively, of generalized power-law decays (GPLD; a superposition of maxima and minima on a power-law decay) with arbitrarily decreasing scattering exponents in the range from zero to three. We show that, within the model, the present analysis allows us to obtain the edges of all the fractal regions in momentum space, the number of fractal iterations, and the fractal dimensions and scaling factors at each structural level in the fractal. We also apply our model to calculate an analytical expression for the radius of gyration of the fractal. The obtained quantities characterizing the fat fractal are correlated with the variation of the scaling factor with iteration number.

  6. Fractal processes in soil water retention

    SciTech Connect

    Tyler, S.W.; Wheatcraft, S.W. )

    1990-05-01

    The authors propose a physical conceptual model for soil texture and pore structure that is based on the concept of fractal geometry. The motivation for a fractal model of soil texture is that some particle size distributions in granular soils have already been shown to display self-similar scaling that is typical of fractal objects. Hence it is reasonable to expect that pore size distributions may also display fractal scaling properties. The paradigm they use for the soil pore size distribution is the Sierpinski carpet, a fractal that contains self-similar holes (or pores) over a wide range of scales. The authors evaluate the water retention properties of regular and random Sierpinski carpets and relate these properties directly to the Brooks and Corey (or Campbell) empirical water retention model. They relate the water retention curves directly to the fractal dimension of the Sierpinski carpet and show that the fractal dimension strongly controls the water retention properties of the Sierpinski carpet soil. Higher fractal dimensions are shown to mimic clay-type soils, with very slow dewatering characteristics, while relatively low fractal dimensions mimic sandy soils, with relatively rapid dewatering characteristics. Their fractal model of soil water retention removes the empirical fitting parameters from soil water retention models and provides parameters that are intrinsic to the nature of the fractal porous structure. The relative permeability functions of Burdine and Mualem are also shown to be fractal, directly from the fractal water retention results.
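The Sierpinski-carpet paradigm is easy to reproduce: each 3x3 subdivision keeps 8 of 9 sub-squares (the centre becomes a pore), so the similarity dimension is log 8 / log 3, roughly 1.89. A small generator and dimension check (our illustration, not the authors' retention model):

```python
import math

def sierpinski_carpet(level):
    """Boolean grid of side 3**level: True = solid, False = pore.
    Each iteration keeps 8 of 9 sub-squares, removing the centre."""
    size = 3 ** level
    def solid(x, y):
        while x or y:
            if x % 3 == 1 and y % 3 == 1:
                return False          # centre sub-square is a pore
            x //= 3
            y //= 3
        return True
    return [[solid(x, y) for x in range(size)] for y in range(size)]

def similarity_dimension(level):
    """D = log(N) / log(s): N solid cells over the grid's linear size s.
    For the carpet, N = 8**level and s = 3**level, so D = log 8 / log 3."""
    grid = sierpinski_carpet(level)
    n_solid = sum(row.count(True) for row in grid)
    return math.log(n_solid) / math.log(3 ** level)
```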

  7. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  8. Imaging a Population Code for Odor Identity in the Drosophila Mushroom Body

    PubMed Central

    Campbell, Robert A. A.; Honegger, Kyle S.; Qin, Hongtao; Li, Wanhe; Demir, Ebru

    2013-01-01

    The brain represents sensory information in the coordinated activity of neuronal ensembles. Although the microcircuits underlying olfactory processing are well characterized in Drosophila, no studies to date have examined the encoding of odor identity by populations of neurons and related it to the odor specificity of olfactory behavior. Here we used two-photon Ca2+ imaging to record odor-evoked responses from >100 neurons simultaneously in the Drosophila mushroom body (MB). For the first time, we demonstrate quantitatively that MB population responses contain substantial information on odor identity. Using a series of increasingly similar odor blends, we identified conditions in which odor discrimination is difficult behaviorally. We found that MB ensemble responses accounted well for olfactory acuity in this task. Kenyon cell ensembles with as few as 25 cells were sufficient to match behavioral discrimination accuracy. Using a generalization task, we demonstrated that the MB population code could predict the flies' responses to novel odors. The degree to which flies generalized a learned aversive association to unfamiliar test odors depended upon the relative similarity between the odors' evoked MB activity patterns. Discrimination and generalization place different demands on the animal, yet the flies' choices in these tasks were reliably predicted based on the amount of overlap between MB activity patterns. Therefore, these different behaviors can be understood in the context of a single physiological framework. PMID:23785169

  9. The topological insulator in a fractal space

    SciTech Connect

    Song, Zhi-Gang; Zhang, Yan-Yang; Li, Shu-Shen

    2014-06-09

    We investigate the band structures and transport properties of a two-dimensional model of a topological insulator, with a fractal edge or a fractal bulk. A fractal edge does not affect the robust transport, even when the fractal pattern reaches atomic-scale resolution, because the bulk is still well insulating against backscattering. On the other hand, a fractal bulk can support robust transport only when the fractal resolution is much larger than a critical size. Smaller resolution of the bulk fractal pattern leads to remarkable backscattering and localization, due to strong couplings of opposite edge states on narrow sub-edges, which appear almost everywhere in the fractal bulk.

  10. Analysis of fractals with combined partition

    NASA Astrophysics Data System (ADS)

    Dedovich, T. G.; Tokarev, M. V.

    2016-03-01

    The space-time properties in the general theory of relativity, as well as the discreteness and non-Archimedean property of space in the quantum theory of gravitation, are discussed. It is emphasized that the properties of bodies in non-Archimedean spaces coincide with the properties of the field of p-adic numbers and fractals. It is suggested that parton showers, used for describing interactions between particles and nuclei at high energies, have a fractal structure. A mechanism of fractal formation with combined partition is considered. The modified SePaC method is proposed for the analysis of such fractals. The BC, PaC, and SePaC methods for determining the fractal dimension and other fractal characteristics (the number of levels and the base used to form the fractal) are considered. It is found that the SePaC method has advantages for the analysis of fractals with combined partition.
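Of the methods mentioned, box counting (BC) is the simplest to sketch: cover the set with boxes of side eps, count the occupied boxes N(eps), and fit log N(eps) against log(1/eps); the slope estimates the dimension. The following is a generic illustration, not the modified SePaC method:

```python
import math

def box_count(points, eps):
    """Number of eps-sized boxes needed to cover a 2-D point set."""
    return len({(math.floor(x / eps), math.floor(y / eps)) for x, y in points})

def box_counting_dimension(points, epsilons):
    """Least-squares fit of log N(eps) against log(1/eps); the slope
    estimates the box-counting (Minkowski) dimension of the set."""
    xs = [math.log(1.0 / e) for e in epsilons]
    ys = [math.log(box_count(points, e)) for e in epsilons]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)
```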

  11. Effects of fractal gating of potassium channels on neuronal behaviours

    NASA Astrophysics Data System (ADS)

    Zhao, De-Jiang; Zeng, Shang-You; Zhang, Zheng-Zhen

    2010-10-01

    The classical model of voltage-gated ion channels assumes that ion channels switch among a small number of states without memory, according to a Markov process, but a number of experimental studies show that some ion channels exhibit significant memory effects, which can take the form of fractal kinetic rate constants. The gating character of ion channels clearly affects the generation and propagation of action potentials and, furthermore, the generation, coding and propagation of neural information. However, there is little previous research on these issues. This paper investigates the effects of fractal gating of potassium channel subunits switching from the closed state to the open state on neuronal behaviour. The results show that such fractal gating has important effects on neuronal behaviour: it increases the excitability, resting potential and spiking frequency of the neuronal membrane, and decreases the threshold voltage and threshold injected current. Fractal gating of potassium channel subunits switching from the closed state to the open state can therefore improve the sensitivity of the neuronal membrane and enlarge the encoded strength of neural information.

  12. Fractal and stochastic geometry inference for breast cancer: a case study with random fractal models and Quermass-interaction process.

    PubMed

    Hermann, Philipp; Mrkvička, Tomáš; Mattfeldt, Torsten; Minárová, Mária; Helisová, Kateřina; Nicolis, Orietta; Wartner, Fabian; Stehlík, Milan

    2015-08-15

    Fractals are models of natural processes with many applications in medicine. Recent studies in medicine show that fractals can be applied to cancer detection and the description of the pathological architecture of tumors. This fact is not surprising, as, owing to their irregular structure, cancerous cells can be interpreted as fractals. Inspired by the Sierpinski carpet, we introduce a flexible parametric model of random carpets. Randomization is introduced through binomial random variables. We provide an algorithm for estimating the parameters of the model and illustrate theoretical and practical issues in the generation of Sierpinski gaskets and Hausdorff measure calculations. Stochastic geometry models can also serve as models for binary cancer images. Recently, a Boolean model was applied to 200 images of mammary cancer tissue and 200 images of mastopathic tissue. Here, we describe the Quermass-interaction process, which can handle much more variation in the cancer data, and we apply it to the images. It was found that mastopathic tissue deviates significantly more strongly from the Quermass-interaction process, which describes interactions among particles, than mammary cancer tissue does. The Quermass-interaction process serves as a model for tissue whose structure is broken to a certain level. However, the random fractal model fits mastopathic tissue well. We provide a novel method for discriminating between mastopathic and mammary cancer tissue on the basis of a complex wavelet-based self-similarity measure, with classification rates of more than 80%. This similarity measure is related to the Hurst exponent and fractional Brownian motions. The R package FractalParameterEstimation is developed and introduced in the paper.
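The flavor of a binomially randomized Sierpinski carpet can be sketched as follows. This is a toy construction of our own, not the paper's parametric model: the centre cell of each 3x3 subdivision is always removed, and each of the 8 remaining cells survives independently with probability keep_prob, so the number kept per subdivision is a binomial random variable.

```python
import random

def random_carpet_cells(level, keep_prob, rng):
    """Cells of a randomized Sierpinski carpet after `level` subdivisions.
    The centre of each 3x3 subdivision is always a pore; each of the
    other 8 cells survives with probability keep_prob (binomial count)."""
    cells = [(0, 0)]
    for _ in range(level):
        nxt = []
        for (x, y) in cells:
            for dy in range(3):
                for dx in range(3):
                    if dx == 1 and dy == 1:
                        continue            # centre pore, always removed
                    if rng.random() < keep_prob:
                        nxt.append((3 * x + dx, 3 * y + dy))
        cells = nxt
    return cells
```

With keep_prob = 1 the construction reduces to the deterministic carpet with 8**level cells; smaller keep_prob thins the carpet and lowers its expected dimension.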

  13. Applications of Fractal Analytical Techniques in the Estimation of Operational Scale

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Quattrochi, Dale A.

    2000-01-01

    The observational scale and the resolution of remotely sensed imagery are essential considerations in the interpretation process. Many atmospheric, hydrologic, and other natural and human-influenced spatial phenomena are inherently scale dependent and are governed by different physical processes at different spatial domains. This spatial and operational heterogeneity constrains the ability to compare interpretations of phenomena and processes observed in higher spatial resolution imagery to similar interpretations obtained from lower resolution imagery. This is a particularly acute problem, since long-term global change investigations will require high spatial resolution Earth Observing System (EOS), Landsat 7, or commercial satellite data to be combined with lower resolution imagery from older sensors such as Landsat TM and MSS. Fractal analysis is a useful technique for identifying the effects of scale changes on remotely sensed imagery. The fractal dimension of an image is a non-integer value between two and three which indicates the degree of complexity in the texture and shapes depicted in the image. A true fractal surface exhibits self-similarity, a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution, and the slope of the fractal dimension-resolution relationship would be zero. Most geographical phenomena, however, are not self-similar at all scales, but they can be modeled by a stochastic fractal in which the scaling properties of the image exhibit patterns that can be described by statistics such as area-perimeter ratios and autocovariances. Stochastic fractal sets relax the self-similarity assumption and measure many scales and

  14. Embedding intensity image in grid-cross down-sampling (GCD) binary holograms based on block truncation coding

    NASA Astrophysics Data System (ADS)

    Tsang, P. W. M.; Poon, T.-C.; Jiao, A. S. M.

    2013-09-01

    Past research has demonstrated that a three-dimensional (3D) intensity image can be preserved to a reasonable extent with a binary Fresnel hologram called the grid-cross down-sampling (GCD) binary hologram, if the intensity image is first down-sampled with a grid-cross lattice prior to the generation of the hologram. It has also been shown that the binary hologram generated with such means can be embedded with a binary image without causing observable artifact on the reconstructed image. Hence, the method can be further extended to embed an intensity image by binarizing it with error diffusion. Despite the favorable findings, the visual quality of the retrieved embedded intensity image from the hologram is rather poor. In this paper, we propose a method to overcome this problem. First, we employ the block truncation coding (BTC) to convert the intensity image into a binary bit stream. Next, the binary bit stream is embedded into the GCD binary hologram. The embedded image can be recovered with a BTC decoder, as well as a noise suppression scheme if the hologram is partially damaged. Experimental results demonstrate that with our proposed method, the visual quality of the embedded intensity image is superior to that of the existing approach, and the extracted image preserves favorably even if the binary hologram is damaged and contaminated with noise.
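Classic block truncation coding, the building block of the proposed embedding scheme, reduces each pixel block to a mean, a standard deviation and a one-bit-per-pixel bitmap; the decoder then picks two output levels that preserve the block's mean and variance. A minimal sketch of generic BTC (not the hologram-embedding pipeline) follows:

```python
import math

def btc_encode_block(block):
    """Classic BTC for one block given as a flat list of pixels: keep the
    mean, the standard deviation, and a 1-bit-per-pixel bitmap
    (1 where the pixel is >= the block mean)."""
    n = len(block)
    mean = sum(block) / n
    var = sum((p - mean) ** 2 for p in block) / n
    bitmap = [1 if p >= mean else 0 for p in block]
    return mean, math.sqrt(var), bitmap

def btc_decode_block(mean, std, bitmap):
    """Reconstruct with two levels a < b chosen so that the decoded
    block preserves the encoder's mean and variance."""
    n = len(bitmap)
    q = sum(bitmap)               # pixels at or above the mean
    if q in (0, n):
        return [mean] * n         # flat block: a single level suffices
    a = mean - std * math.sqrt(q / (n - q))
    b = mean + std * math.sqrt((n - q) / q)
    return [b if bit else a for bit in bitmap]
```

The bitmap plus two scalars per block form the compact binary bit stream that, in the paper, is embedded into the GCD binary hologram.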

  15. Intensity JND comes from Poisson neural noise: implications for image coding

    NASA Astrophysics Data System (ADS)

    Allen, Jont B.

    2000-06-01

    While problems of image coding and audio coding have frequently been assumed to have similarities, specific sets of relationships have remained vague. One area where there should be a meaningful comparison is with central masking noise estimates, which define the codec's quantizer step size. In the past few years, progress has been made on this problem in the auditory domain (Allen and Neely, J. Acoust. Soc. Am., 102, 1997, 3628-46; Allen, 1999, Wiley Encyclopedia of Electrical and Electronics Engineering, Vol. 17, p. 422-437, Ed. Webster, J.G., John Wiley & Sons, Inc, NY). It is possible that some useful insights might now be obtained by comparing the auditory and visual cases. In the auditory case it has been shown, directly from psychophysical data, that below about 5 sones (a measure of loudness, a unit of psychological intensity), the loudness JND is proportional to the square root of the loudness, ΔL(L) ∝ √L(I). This is true for both wideband noise and tones, having a frequency of 250 Hz or greater. Allen and Neely interpret this to mean that the internal noise is Poisson, as would be expected from neural point process noise. It follows directly that the Weber fraction (the relative loudness JND) decreases as one over the square root of the loudness, namely ΔL/L ∝ 1/√L. Above L = 5 sones, the relative loudness JND is ΔL/L ≈ 0.03 (i.e., Ekman's law). It would be very interesting to know if this same relationship holds for the visual case between brightness β(I) and the brightness JND Δβ(I). This might be tested by measuring both the brightness JND and the brightness as a function of intensity, and transforming the intensity JND into a brightness JND, namely Δβ(I) = β(I + ΔI) - β(I) ≈
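The two auditory regimes described above can be combined into a toy model (our illustration; the constant joining the regimes at 5 sones is an assumption chosen purely for continuity): the loudness JND grows like the square root of loudness below about 5 sones, and the relative JND is roughly constant at 0.03 above.

```python
import math

def loudness_jnd(L, c=0.03 * math.sqrt(5.0)):
    """Toy loudness-JND model sketched in the abstract: below ~5 sones
    the JND grows like sqrt(L) (Poisson internal noise); above, the
    relative JND is ~0.03 (Ekman's law). The constant c is an assumed
    value chosen so the two branches meet at L = 5."""
    if L <= 0:
        raise ValueError("loudness must be positive")
    if L < 5.0:
        return c * math.sqrt(L)
    return 0.03 * L

def weber_fraction(L):
    """Relative JND: falls as 1/sqrt(L) below 5 sones, constant above."""
    return loudness_jnd(L) / L
```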

  16. A Fractal Nature for Polymerized Laminin

    PubMed Central

    Hochman-Mendez, Camila; Cantini, Marco; Moratal, David; Salmeron-Sanchez, Manuel; Coelho-Sampaio, Tatiana

    2014-01-01

    Polylaminin (polyLM) is a non-covalent acid-induced nano- and micro-structured polymer of the protein laminin displaying distinguished biological properties. Polylaminin stimulates neuritogenesis beyond the levels achieved by ordinary laminin and has been shown to promote axonal regeneration in animal models of spinal cord injury. Here we used confocal fluorescence microscopy (CFM), scanning electron microscopy (SEM) and atomic force microscopy (AFM) to characterize its three-dimensional structure. Rendering of confocal optical slices of immunostained polyLM revealed the aspect of a loose flocculated meshwork, which was homogeneously stained by the antibody. On the other hand, an ordinary matrix obtained upon adsorption of laminin at neutral pH (LM) consisted of bulky protein aggregates whose interior was not accessible to the same anti-laminin antibody. SEM and AFM analyses revealed that the seed unit of polyLM was a flat polygon formed in solution, whereas the seed structure of LM was highly heterogeneous, intercalating rod-like, spherical and thin spread lamellar deposits. As polyLM was visualized at progressively increasing magnifications, we observed that the morphology of the polymer looked alike regardless of the magnification used for the observation. A search for the Hausdorff dimension in images of the two matrices showed that polyLM, but not LM, presented fractal dimensions of 1.55, 1.62 and 1.70 after 1, 8 and 12 hours of adsorption, respectively. Data in the present work suggest that the intrinsic fractal nature of polymerized laminin can be the structural basis for the fractal-like organization of basement membranes in the neurogenic niches of the central nervous system. PMID:25296244

  17. A New Fractal Model of Chromosome and DNA Processes

    NASA Astrophysics Data System (ADS)

    Bouallegue, K.

    Dynamic chromosome structure remains unknown. Can fractals and chaos be used as new tools to model, identify and generate the structure of chromosomes? Fractals and chaos offer a rich environment for exploring and modeling the complexity of nature. In a sense, fractal geometry is used to describe, model, and analyze the complex forms found in nature. Fractals have also been widely used not only in biology but also in medicine. To this effect, a fractal is considered an object that displays self-similarity under magnification and can be constructed using a simple motif (an image repeated on ever-reduced scales). It is worth noting that identifying which model a chromosome belongs to has become a challenge. Several different models (hierarchical coiling, folded fiber, and radial loop) have been proposed for the mitotic chromosome, but none has yet yielded a dynamic model. This paper is an attempt to solve topological problems involved in modeling chromosome and DNA processes. By combining the fractal Julia process and a numerical dynamical system, we arrive at four main points. First, we have developed not only a model of the chromosome but also models of mitosis and meiosis. Equally important, we have identified the centromere position through the numerical model. More importantly, we have described the processes of cell division in both mitosis and meiosis. All in all, the results show that this work could have a strong impact on the welfare of humanity and could lead to cures for genetic diseases.
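The Julia-type iteration underlying such models is compact: iterate z <- z*z + c and test whether the orbit escapes to infinity. A generic escape-time sketch follows (illustrative only; the paper's specific combination of the Julia process with a numerical dynamical system is not reproduced here):

```python
def julia_escape_time(z, c, max_iter=100, radius=2.0):
    """Iterate z <- z*z + c; return the step at which |z| first exceeds
    the escape radius, or max_iter if the orbit stays bounded. Points
    whose orbits stay bounded form the filled Julia set of c."""
    for i in range(max_iter):
        if abs(z) > radius:
            return i
        z = z * z + c
    return max_iter
```

Coloring each starting point z by its escape time renders the familiar Julia-set images; varying c deforms the structure continuously, which is the kind of parameterized shape family such models exploit.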

  18. FracMAP: A user-interactive package for performing simulation and orientation-specific morphology analysis of fractal-like solid nano-agglomerates

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Rajan K.; Garro, Mark A.; Chancellor, Shammah; Herald, Christopher; Moosmüller, Hans

    2009-08-01

    Computer simulation techniques have found extensive use in establishing empirical relationships between three-dimensional (3d) and two-dimensional (2d) projected properties of particles produced by the process of growth through the agglomeration of smaller particles (monomers). In this paper, we describe a package, FracMAP, that has been written to simulate 3d quasi-fractal agglomerates and create their 2d pixelated projection images by restricting them to stable orientations as commonly encountered for quasi-fractal agglomerates collected on filter media for electron microscopy. Resulting 2d images are analyzed for their projected morphological properties.
    Program summary
    Program title: FracMAP
    Catalogue identifier: AEDD_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDD_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 4722
    No. of bytes in distributed program, including test data, etc.: 27 229
    Distribution format: tar.gz
    Programming language: C++
    Computer: PC
    Operating system: Windows, Linux
    RAM: 2.0 Megabytes
    Classification: 7.7
    Nature of problem: Solving for a suitable fractal agglomerate construction under constraints of typical morphological parameters.
    Solution method: Monte Carlo approximation.
    Restrictions: Problem complexity is not representative of run-time, since Monte Carlo iterations are of a constant complexity.
    Additional comments: The distribution file contains two versions of the FracMAP code, one for Windows and one for Linux.
    Running time: 1 hour for a fractal agglomerate of size 25 on a single processor.

  19. Fractal analysis of time varying data

    DOEpatents

    Vo-Dinh, Tuan; Sadana, Ajit

    2002-01-01

    Characteristics of time-varying data, such as an electrical signal, are analyzed by converting the data from the temporal domain into a spatial domain pattern. Fractal analysis is performed on the spatial domain pattern, thereby producing a fractal dimension D_F. The fractal dimension indicates the regularity of the time-varying data.
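    The patent abstract does not specify how the fractal dimension is computed. One standard way to obtain a fractal dimension directly from time-varying data, sketched here purely for illustration (Higuchi's method, not necessarily the patented procedure), is:

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Estimate a signal's fractal dimension via Higuchi's method.

    Assumes len(x) is much larger than k_max.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, k_max + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)  # subsample with stride k, offset m
            # Normalized curve length of the subsampled series (Higuchi, 1988)
            dist = np.abs(np.diff(x[idx])).sum()
            lk.append(dist * (n - 1) / ((len(idx) - 1) * k * k))
        lengths.append(np.mean(lk))
    # D_F is the slope of log L(k) against log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope
```

    A smooth, regular signal (e.g. a straight ramp) yields an estimate near 1, while highly irregular signals approach 2, matching the abstract's claim that D_F indicates regularity.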

  20. Fractal Electronic Circuits Assembled From Nanoclusters

    NASA Astrophysics Data System (ADS)

    Fairbanks, M. S.; McCarthy, D.; Taylor, R. P.; Brown, S. A.

    2009-07-01

    Many patterns in nature can be described using fractal geometry. The effect of this fractal character is an array of properties that can include high internal connectivity, high dispersivity, and enhanced surface-area-to-volume ratios. These properties are often desirable in applications and, consequently, fractal geometry is increasingly employed in technologies ranging from antennas to storm barriers. In this paper, we explore the application of fractal geometry to electrical circuits, inspired by the pervasive fractal structure of neurons in the brain. We show that, under appropriate growth conditions, nanoclusters of Sb form into islands on atomically flat substrates via a process close to diffusion-limited aggregation (DLA), establishing fractal islands that will form the basis of our fractal circuits. We perform fractal analysis of the islands to determine the spatial scaling properties (characterized by the fractal dimension, D) of the proposed circuits and demonstrate how varying growth conditions can affect D. We discuss fabrication approaches for establishing electrical contact to the fractal islands. Finally, we present fractal circuit simulations, which show that the fractal character of the circuit translates into novel, non-linear conduction properties determined by the circuit's D value.
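    Diffusion-limited aggregation, the growth process the Sb islands approximate, is simple to sketch. The minimal on-lattice simulation below (lattice size, particle count, and periodic boundary handling are illustrative simplifications, not the authors' growth model) grows a cluster by releasing random walkers that freeze on contact:

```python
import random

def dla_cluster(n_particles=100, size=61, seed=1):
    """Grow a 2-D on-lattice diffusion-limited aggregate (DLA)."""
    random.seed(seed)
    grid = [[False] * size for _ in range(size)]
    grid[size // 2][size // 2] = True  # seed particle at the center
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_particles):
        # Release a random walker on the lattice boundary
        x = random.randrange(size)
        y = 0 if random.random() < 0.5 else size - 1
        if random.random() < 0.5:
            x, y = y, x
        while True:
            dx, dy = random.choice(moves)
            nx, ny = (x + dx) % size, (y + dy) % size  # periodic boundaries
            if grid[nx][ny]:
                grid[x][y] = True  # walker touched the cluster: freeze it here
                break
            x, y = nx, ny
    return grid
```

    Clusters grown this way have a fractal dimension of roughly D ≈ 1.7 in two dimensions, which is the DLA regime the island growth in this record is compared against.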