Chaos-based encryption for fractal image coding
NASA Astrophysics Data System (ADS)
Yuen, Ching-Hung; Wong, Kwok-Wo
2012-01-01
A chaos-based cryptosystem for fractal image coding is proposed. The Rényi chaotic map is employed to determine the order in which the range blocks are processed and to generate the keystream for masking the encoded sequence. Compared with the standard approach of fractal image coding followed by the Advanced Encryption Standard, our scheme offers higher sensitivity to both plaintext and ciphertext at a comparable operating efficiency. The keystream generated by the Rényi chaotic map passes the randomness tests set by the United States National Institute of Standards and Technology, and the proposed scheme is also highly sensitive to the key.
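A minimal sketch of the masking idea, assuming the Rényi map in its usual beta-transformation form x → (βx) mod 1; the parameters, byte extraction, and XOR masking step are illustrative assumptions, not the authors' exact construction.

```python
# Sketch: chaotic keystream masking of an encoded byte sequence.
# Renyi (beta-transformation) map: x_{n+1} = (beta * x_n) mod 1.

def renyi_keystream(x0, beta, n):
    """Iterate the Renyi map and quantize each state to one byte."""
    ks, x = [], x0
    for _ in range(n):
        x = (beta * x) % 1.0
        ks.append(int(x * 256))  # take 8 bits of the state, 0..255
    return ks

def mask(data, key=(0.37, 3.7)):
    """XOR-mask a byte sequence with the chaotic keystream (self-inverse)."""
    ks = renyi_keystream(*key, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

encoded = b"fractal range-block codes"
cipher = mask(encoded)
assert mask(cipher) == encoded  # XOR masking is its own inverse
```

Because XOR masking is an involution, decryption reuses the same key; sensitivity to the key comes from the chaotic divergence of nearby initial conditions.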
NASA Technical Reports Server (NTRS)
Barnsley, Michael F.; Sloan, Alan D.
1989-01-01
Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reaching high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.
Fractal images induce fractal pupil dilations and constrictions.
Moon, P; Muday, J; Raynor, S; Schirillo, J; Boydston, C; Fairbanks, M S; Taylor, R P
2014-09-01
Fractals are self-similar structures or patterns that repeat at increasingly fine magnifications. Research has revealed fractal patterns in many natural and physiological processes. This article investigates pupillary size over time to determine whether its oscillations demonstrate a fractal pattern. We predict that pupil size over time will fluctuate in a fractal manner, which may be due either to the fractal neuronal structure or to fractal properties of the image viewed. We present evidence that low-complexity fractal patterns underlie pupillary oscillations as subjects view spatial fractal patterns. We also present evidence implicating the importance of the autonomic nervous system in these patterns. Using the variational method of the box-counting procedure, we demonstrate that low-complexity fractal patterns are found in changes in pupil size over time in millimeters (mm), and our data suggest that these pupillary oscillation patterns do not depend on the fractal properties of the image viewed. PMID:24978815
ERIC Educational Resources Information Center
Jurgens, Hartmut; And Others
1990-01-01
The production and application of images based on fractal geometry are described. Discussed are fractal language groups, fractal image coding, and fractal dialects. Implications for these applications of geometry to mathematics education are suggested. (CW)
Chang, H T; Kuo, C J
1998-03-10
An optical parallel architecture for the random-iteration algorithm to decode a fractal image by use of iterated-function system (IFS) codes is proposed. The code value is first converted into transmittance in film or a spatial light modulator in the optical part of the system. With an optical-to-electrical converter, electrical-to-optical converter, and some electronic circuits for addition and delay, we can perform the contractive affine transformation (CAT) denoted in IFS codes. In the proposed decoding architecture all CAT's generate points (image pixels) in parallel, and these points then are joined for display purposes. Therefore the decoding speed is improved greatly compared with existing serial-decoding architectures. In addition, an error and stability analysis that considers nonperfect elements is presented for the proposed optical system. Finally, simulation results are given to validate the proposed architecture. PMID:18268718
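The random-iteration algorithm that the optical architecture parallelizes can be sketched serially in a few lines. The IFS coefficients below are the classic Barnsley fern, used here as a stand-in example; each code is a contractive affine map (x, y) → (ax + by + e, cx + dy + f) chosen with probability p.

```python
import random

FERN = [
    # (a,     b,     c,     d,    e,   f,    p)
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def chaos_game(ifs, n=20000, seed=0):
    """Serially decode an IFS attractor: apply a randomly chosen
    contractive affine transformation at each step and record the point."""
    rng = random.Random(seed)
    weights = [w[6] for w in ifs]
    x, y, pts = 0.0, 0.0, []
    for _ in range(n):
        a, b, c, d, e, f, _ = rng.choices(ifs, weights)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts

pts = chaos_game(FERN)
```

In the optical architecture, every CAT generates its points in parallel and the results are joined for display, rather than iterating one point serially as above.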
A Novel Fractal Coding Method Based on M-J Sets
Sun, Yuanyuan; Xu, Rudan; Chen, Lina; Kong, Ruiqing; Hu, Xiaopeng
2014-01-01
In this paper, we present a novel fractal coding method with the block classification scheme based on a shared domain block pool. In our method, the domain block pool is called dictionary and is constructed from fractal Julia sets. The image is encoded by searching the best matching domain block with the same BTC (Block Truncation Coding) value in the dictionary. The experimental results show that the scheme is competent both in encoding speed and in reconstruction quality. Particularly for large images, the proposed method can avoid excessive growth of the computational complexity compared with the traditional fractal coding algorithm. PMID:25010686
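A small sketch of the BTC matching key described above, using the textbook Block Truncation Coding bitmap; the exact key format and dictionary layout in the paper may differ.

```python
# Sketch: BTC bitmap as a dictionary key for domain-block matching.

def btc_bitmap(block):
    """Threshold a pixel block at its mean; the bit pattern is the BTC value."""
    flat = [p for row in block for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p >= mean else 0 for p in flat)

# Only domain blocks sharing the range block's bitmap need to be searched,
# which prunes the (Julia-set derived) domain pool and speeds up encoding.
pool = {}  # bitmap -> list of candidate domain-block names (hypothetical data)
for name, blk in [("d0", [[10, 200], [12, 210]]), ("d1", [[200, 10], [210, 12]])]:
    pool.setdefault(btc_bitmap(blk), []).append(name)

range_block = [[5, 180], [9, 190]]
candidates = pool.get(btc_bitmap(range_block), [])  # only "d0" matches
```

Grouping blocks by BTC value turns a linear scan of the whole domain pool into a hash lookup plus a search over one small bucket.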
ERIC Educational Resources Information Center
Osler, Thomas J.
1999-01-01
Because fractal images are by nature very complex, it can be inspiring and instructive to create the code in the classroom and watch the fractal image evolve as the user slowly changes some important parameter or zooms in and out of the image. Uses a programming language that permits the user to store and retrieve a graphics image as a disk file.…
Multispectral image fusion based on fractal features
NASA Astrophysics Data System (ADS)
Tian, Jie; Chen, Jie; Zhang, Chunhua
2004-01-01
Imaging sensors are an indispensable part of detection and recognition systems. They are widely used in surveillance, navigation, control, and guidance. However, different imaging sensors rely on different imaging mechanisms, operate in different spectral bands, perform different functions, and have different environmental requirements. It is therefore impractical to accomplish detection or recognition with a single imaging sensor across varying circumstances, backgrounds, and targets. Multi-sensor image fusion has emerged as an important route to solving this problem and has become one of the main technical approaches for detecting and recognizing objects in images. Loss of information is unavoidable during the fusion process, so a central concern in designing fusion schemes is how to preserve useful information, and in particular the features helpful to detection. In consideration of these issues, and of the fact that most detection problems amount to distinguishing man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aimed at recognizing battlefield targets against complicated backgrounds. In this algorithm, source images are first orthogonally decomposed by wavelet transform, and fractal-based detection is then applied to each decomposed image. At this step, natural background and man-made targets are distinguished using fractal models, which imitate natural objects well. Special fusion operators are employed when fusing areas that contain man-made targets, so that useful information is preserved and target features are accentuated. The final fused image is reconstructed from the
Pyramidal fractal dimension for high resolution images
NASA Astrophysics Data System (ADS)
Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut
2016-07-01
Fractal analysis (FA) should be able to yield reliable and fast results for high-resolution digital images to be applicable in fields that require immediate outcomes. Triggered by an efficient implementation of FA for binary images, we present three new approaches for fractal dimension (D) estimation of images that utilize image pyramids, namely, the pyramid triangular prism, the pyramid gradient, and the pyramid differences method (PTPM, PGM, PDM). We evaluated the performance of the three new and five standard techniques when applied to images with sizes up to 8192 × 8192 pixels. By using artificial fractal images created by three different generator models as ground truth, we determined the scale ranges with minimum deviations between estimation and theory. All pyramidal methods (PMs) yielded reasonable D values for images of all generator models. In particular, for images with sizes ≥1024 × 1024 pixels, the PMs are superior to the investigated standard approaches in terms of accuracy and computation time. A measure of the ability to differentiate images with different intrinsic D values showed not only that the PMs are well suited for all investigated image sizes, and preferable to standard methods especially for larger images, but also that the results of standard D estimation techniques are strongly influenced by image size. The fastest results were obtained with the PDM and PGM, followed by the PTPM. The standard methods that performed best in terms of absolute D values were orders of magnitude slower than the PMs. In conclusion, the new PMs yield high-quality results in short computation times and are therefore well suited for fast FA of high-resolution images.
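A loose sketch of the general image-pyramid idea (not the authors' exact PTPM/PGM/PDM estimators): halve the image repeatedly by 2×2 averaging, record a roughness measure per level, and read D from the slope of a log-log regression. The roughness measure and the D = 3 − H formula below are illustrative assumptions for an fBm-like grayscale surface.

```python
import math

def halve(img):
    """One pyramid step: 2x2 block averaging (square image, side a power of 2)."""
    n = len(img) // 2
    return [[(img[2*i][2*j] + img[2*i][2*j+1]
              + img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 4.0
             for j in range(n)] for i in range(n)]

def roughness(img):
    """Mean absolute difference between horizontal neighbours."""
    diffs = [abs(r[j+1] - r[j]) for r in img for j in range(len(r) - 1)]
    return sum(diffs) / len(diffs)

def pyramid_fd(img, levels=4):
    xs, ys = [], []
    for lvl in range(levels):
        xs.append(math.log(2 ** lvl))            # pixel size at this level
        ys.append(math.log(roughness(img) + 1e-12))
        img = halve(img)
    # least-squares slope H of log(roughness) vs log(scale); D = 3 - H
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    h = (n * sxy - sx * sx * 0 - sx * sy / n * 0 - sx * sy) / (n * sxx - sx * sx)
    return 3.0 - h
```

For a smooth tilted plane the roughness doubles at each level, the fitted slope H is 1, and the sketch returns D = 2, the expected dimension of a plane.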
Fractality of pulsatile flow in speckle images
NASA Astrophysics Data System (ADS)
Nemati, M.; Kenjeres, S.; Urbach, H. P.; Bhattacharya, N.
2016-05-01
The scattering of coherent light from a system with underlying flow can be used to yield essential information about the dynamics of the process. In the case of pulsatile flow, there is a rapid change in the properties of the speckle images. This can be studied using the standard laser speckle contrast and also the fractality of the images. In this paper, we report the results of experiments performed to study pulsatile flow with speckle images under different experimental configurations, to verify the robustness of the techniques for applications. In order to study flow under various levels of complexity, the measurements were done for three in vitro phantoms and two in vivo situations. The pumping mechanisms varied, ranging from mechanical pumps to the human heart in the in vivo case. The speckle images were analyzed using fractal dimension and speckle contrast analysis, and the results of these techniques were compared across the various experimental scenarios. The fractal dimension is the more sensitive measure of signal complexity, though it was observed to be extremely sensitive to the properties of the scattering medium as well, and, unlike speckle contrast, it cannot recover the signal for thicker diffusers.
A Lossless hybrid wavelet-fractal compression for welding radiographic images.
Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud
2016-01-01
In this work, a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) at a higher compression ratio than typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results for several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm. PMID:26890900
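The lossless hybrid can be sketched with any lossy stage: code the residual between the original and the lossy reconstruction, and the sum restores the original exactly. Here a crude uniform quantizer stands in for the paper's wavelet-fractal stage, which is an assumption for illustration only.

```python
import heapq
from collections import Counter

def lossy(pixels, step=8):
    """Stand-in lossy codec: uniform quantization (NOT the wavelet-fractal coder)."""
    return [step * round(p / step) for p in pixels]

def huffman_code(symbols):
    """Build a Huffman codebook {symbol: bitstring} from symbol frequencies."""
    heap = [[freq, i, {sym: ""}]
            for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {next(iter(heap[0][2])): "0"}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, [f1 + f2, i2, merged])
    return heap[0][2]

pixels = [52, 55, 61, 66, 70, 61, 64, 73]      # toy scanline
approx = lossy(pixels)
residual = [p - a for p, a in zip(pixels, approx)]  # small, low-entropy values
book = huffman_code(residual)
bits = "".join(book[r] for r in residual)           # entropy-coded residual
restored = [a + r for a, r in zip(approx, residual)]
assert restored == pixels                           # lossless: infinite PSNR
```

Because the residual is exactly reversible, the reconstruction is bit-perfect regardless of how aggressive the lossy stage is; the lossy stage only shapes the residual's entropy.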
Estimating fractal dimension of medical images
NASA Astrophysics Data System (ADS)
Penn, Alan I.; Loew, Murray H.
1996-04-01
Box counting (BC) is widely used to estimate the fractal dimension (fd) of medical images on the basis of a finite set of pixel data. The fd is then used as a feature to discriminate between healthy and unhealthy conditions. We show that BC is ineffective when used on small data sets and give examples of published studies in which researchers have obtained contradictory and flawed results by using BC to estimate the fd of data-limited medical images. We present a new method for estimating fd of data-limited medical images. In the new method, fractal interpolation functions (FIFs) are used to generate self-affine models of the underlying image; each model, upon discretization, approximates the original data points. The fd of each FIF is analytically evaluated. The mean of the fds of the FIFs is the estimate of the fd of the original data. The standard deviation of the fds of the FIFs is a confidence measure of the estimate. The goodness-of-fit of the discretized models to the original data is a measure of self-affinity of the original data. In a test case, the new method generated a stable estimate of fd of a rib edge in a standard chest x-ray; box counting failed to generate a meaningful estimate of the same image.
Fractal image compression: A resolution independent representation for imagery
NASA Technical Reports Server (NTRS)
Sloan, Alan D.
1993-01-01
A deterministic fractal is an image which has low information content and no inherent scale. Because of their low information content, deterministic fractals can be described with small data sets. They can be displayed at high resolution since they are not bound by an inherent scale. A remarkable consequence follows. Fractal images can be encoded at very high compression ratios. This fern, for example, is encoded in less than 50 bytes and yet can be displayed at ever higher resolutions, with increasing levels of detail appearing. The Fractal Transform was discovered in 1988 by Michael F. Barnsley. It is the basis for a new image compression scheme which was initially developed by myself and Michael Barnsley at Iterated Systems. The Fractal Transform effectively solves the problem of finding a fractal which approximates a digital 'real world image'.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
1999-01-01
Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
Smitha, K A; Gupta, A K; Jayasree, R S
2015-09-01
Gliomas, heterogeneous tumors originating from glial cells, generally exhibit varied grades and are difficult to differentiate using conventional MR imaging techniques. Although this differentiation is crucial for disease prognosis and treatment, even advanced MR imaging techniques fail to provide the discriminative power needed to separate malignant tumors from benign ones. A powerful image processing technique applied to these imaging modalities is expected to provide better differentiation. The present study focuses on the fractal analysis of fluid-attenuated inversion recovery MR images for the differentiation of glioma. For this, we have considered the two most important parameters of fractal analysis: fractal dimension and lacunarity. While fractal dimension assesses the complexity of a fractal object, lacunarity gives an indication of the empty space and the degree of inhomogeneity in it. The box-counting method, with the preprocessing steps of binarization, dilation, and outlining, was used to obtain the fractal dimension and lacunarity in glioma. Statistical analyses, namely one-way analysis of variance and receiver operating characteristic (ROC) curve analysis, were used to compare the means and to find the discriminative sensitivity of the results. It was found that the lacunarity of low- and high-grade gliomas varies significantly. ROC curve analysis between low- and high-grade glioma yielded 70.3% sensitivity and 66.7% specificity for fractal dimension, and 70.3% sensitivity and 88.9% specificity for lacunarity. The study observes that fractal dimension and lacunarity increase with the grade of glioma and that lacunarity is helpful in identifying the most malignant grades. PMID:26305773
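Lacunarity is commonly estimated with the gliding-box algorithm: slide an r×r box over the binary image, collect the box masses, and take Λ(r) = ⟨M²⟩/⟨M⟩². This is a generic sketch of that standard estimator on a binarized image, not the paper's exact pipeline; the box size r is a tuning choice.

```python
def lacunarity(img, r=2):
    """Gliding-box lacunarity of a square binary image for box size r."""
    n = len(img)
    masses = []
    for i in range(n - r + 1):
        for j in range(n - r + 1):
            # mass = number of foreground pixels inside the r x r box
            masses.append(sum(img[i + a][j + b]
                              for a in range(r) for b in range(r)))
    m1 = sum(masses) / len(masses)
    m2 = sum(m * m for m in masses) / len(masses)
    return m2 / (m1 * m1)
```

A perfectly uniform image gives Λ = 1 (no gaps); clustered or gappy images give Λ > 1, which is why lacunarity complements fractal dimension as an inhomogeneity measure.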
Kepler mission exoplanet transit data analysis using fractal imaging
NASA Astrophysics Data System (ADS)
Dehipawala, S.; Tremberger, G.; Majid, Y.; Holden, T.; Lieberman, D.; Cheung, T.
2012-10-01
The Kepler mission is designed to survey a fist-sized patch of the sky within the Milky Way galaxy for the discovery of exoplanets, with emphasis on near Earth-size exoplanets in or near the habitable zone. The Kepler space telescope would detect the brightness fluctuation of a host star and extract periodic dimming in the lightcurve caused by exoplanets that cross in front of their host star. The photometric data of a host star could be interpreted as an image where fractal imaging would be applicable. Fractal analysis could elucidate the incomplete data limitation posed by the data integration window. The fractal dimension difference between the lower and upper halves of the image could be used to identify anomalies associated with transits and stellar activity as the buried signals are expected to be in the lower half of such an image. Using an image fractal dimension resolution of 0.04 and defining the whole image fractal dimension as the Chi-square expected value of the fractal dimension, a p-value can be computed and used to establish a numerical threshold for decision making that may be useful in further studies of lightcurves of stars with candidate exoplanets. Similar fractal dimension difference approaches would be applicable to the study of photometric time series data via the Higuchi method. The correlated randomness of the brightness data series could be used to support inferences based on image fractal dimension differences. Fractal compression techniques could be used to transform a lightcurve image, resulting in a new image with a new fractal dimension value, but this method has been found to be ineffective for images with high information capacity. The three studied criteria could be used together to further constrain the Kepler list of candidate lightcurves of stars with possible exoplanets that may be planned for ground-based telescope confirmation.
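The Higuchi method mentioned above has a standard formulation for time series: measure the normalized curve length L(k) at delays k = 1..kmax and take the fractal dimension as the slope of log L(k) versus log(1/k). This is a generic sketch of that method, not the authors' exact pipeline; kmax is a tuning choice.

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D series (e.g. a photometric lightcurve)."""
    n = len(x)
    logk, logl = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                     # one sub-series per offset m
            idx = range(m + k, n, k)
            if not idx:
                continue
            dist = sum(abs(x[i] - x[i - k]) for i in idx)
            norm = (n - 1) / (len(idx) * k)    # Higuchi length normalization
            lengths.append(dist * norm / k)
        logk.append(math.log(1.0 / k))
        logl.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of log L(k) against log(1/k) is the dimension
    nk = len(logk)
    sx, sy = sum(logk), sum(logl)
    sxx = sum(a * a for a in logk)
    sxy = sum(a * b for a, b in zip(logk, logl))
    return (nk * sxy - sx * sy) / (nk * sxx - sx * sx)
```

A smooth monotone series comes out near dimension 1, while an uncorrelated noisy series approaches 2, so the estimate tracks the correlated randomness of the brightness data.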
Fractal dimension of cerebral surfaces using magnetic resonance images
Majumdar, S.; Prasad, R.R.
1988-11-01
The calculation of the fractal dimension of the surface bounded by the grey matter in the normal human brain using axial, sagittal, and coronal cross-sectional magnetic resonance (MR) images is presented. The fractal dimension in this case is a measure of the convolutedness of this cerebral surface. It is proposed that the fractal dimension, a feature that may be extracted from MR images, may potentially be used for image analysis, quantitative tissue characterization, and as a feature to monitor and identify cerebral abnormalities and developmental changes.
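The standard way such a surface dimension is estimated from a segmented (binary) image is box counting: count the occupied boxes N(s) at several box sizes s and fit the slope of log N against log(1/s). A minimal sketch, independent of any particular MR pipeline:

```python
import math

def box_count_fd(img, sizes=(1, 2, 4, 8)):
    """Box-counting dimension of a square binary image."""
    n = len(img)
    xs, ys = [], []
    for s in sizes:
        boxes = set()
        for i in range(n):
            for j in range(n):
                if img[i][j]:
                    boxes.add((i // s, j // s))  # box containing this pixel
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    # least-squares slope of log N(s) vs log(1/s)
    k = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(a * a for a in xs)
    sxy = sum(a * b for a, b in zip(xs, ys))
    return (k * sxy - sx * sy) / (k * sxx - sx * sx)
```

A filled square comes out at dimension 2 and a single pixel row at 1; a convoluted grey-matter boundary falls between these, which is what makes the slope a measure of convolutedness.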
NASA Astrophysics Data System (ADS)
Tremberger, George, Jr.; Flamholz, A.; Cheung, E.; Sullivan, R.; Subramaniam, R.; Schneider, P.; Brathwaite, G.; Boteju, J.; Marchese, P.; Lieberman, D.; Cheung, T.; Holden, Todd
2007-09-01
The absorption effect of the back surface boundary of a diffuse layer was studied via laser-generated reflection speckle patterns. The spatial speckle intensity provided by a laser beam was measured. The speckle data were analyzed in terms of fractal dimension (computed by NIH ImageJ software via the box-counting fractal method) and weak localization theory based on Mie scattering. Bar code imaging was modeled as binary absorption contrast, and scanning resolution in the millimeter range was achieved for diffusive layers up to thirty transport mean free paths thick. Samples included alumina, porous glass and chicken tissue. Computer simulation was used to study the effect of speckle spatial distribution, and observed fractal dimension differences were ascribed to variance-controlled speckle sizes. Fractal dimension suppressions were observed in samples that had thickness dimensions around ten transport mean free paths. Computer simulation suggested a maximum fractal dimension of about 2 and that subtracting information could lower the fractal dimension. The fractal dimension was shown to be sensitive to sample thickness up to about fifteen transport mean free paths, and embedded objects which modified 20% or more of the effective thickness were shown to be detectable. The box-counting fractal method was supplemented with the Higuchi data-series fractal method, and application to architectural distortion mammograms was demonstrated. The use of fractals in diffusive analysis would provide a simple language for a dialog between optics experts and mammography radiologists, facilitating the applications of laser diagnostics in tissues.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.
1998-01-01
Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and
Analysis on correlation imaging based on fractal interpolation
NASA Astrophysics Data System (ADS)
Li, Bailing; Zhang, Wenwen; Chen, Qian; Gu, Guohua
2015-10-01
A fractal interpolation algorithm is discussed in detail, and the statistical self-similarity characteristics of the light field are analyzed for the correlation experiment. For correlation imaging experiments under low sampling frequency conditions, an image analysis approach based on the fractal interpolation algorithm is proposed. This approach aims to improve the resolution of the original image, which contains few pixels, and to sharpen its fuzzy contour features. Using this method, a new model for the light field has been established. For different moments of the intensity in the receiving plane, a local field division has also been established, and the iterated function system based on the experimental data set can then be obtained by choosing an appropriate compression ratio under a sound error estimate. On the basis of the iterated function system, an explicit fractal interpolation function expression is given in this paper. The simulation results show that the correlation image reconstructed by fractal interpolation approximates the original image well, and the number of pixels after interpolation is significantly increased. This method effectively addresses the problem of image pixel deficiency and significantly improves the outlines of objects in the image. The deviation rate is adopted as a parameter in order to evaluate the effect of the algorithm objectively. To sum up, the fractal interpolation method proposed in this paper not only preserves the overall image but also enriches the local information of the original image.
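A minimal sketch of an affine fractal interpolation function (FIF): each map w_i sends the whole data span onto the i-th subinterval while passing exactly through the data points, with a vertical scaling factor d_i (|d_i| < 1) controlling the roughness. The constant d and the sample points are illustrative assumptions; the paper derives its IFS from measured light-field data.

```python
import random

def fif_maps(pts, d=0.3):
    """Affine FIF maps w(x, y) = (a x + e, c x + d y + f), one per subinterval,
    constrained so w maps the data endpoints onto consecutive data points."""
    (x0, y0), (xN, yN) = pts[0], pts[-1]
    span = xN - x0
    maps = []
    for (xa, ya), (xb, yb) in zip(pts, pts[1:]):
        a = (xb - xa) / span
        e = (xN * xa - x0 * xb) / span
        c = (yb - ya - d * (yN - y0)) / span
        f = (xN * ya - x0 * yb - d * (xN * y0 - x0 * yN)) / span
        maps.append((a, e, c, d, f))
    return maps

def interpolate(pts, n=5000, seed=1):
    """Chaos-game rendering of the FIF attractor through the data points."""
    maps = fif_maps(pts)
    rng = random.Random(seed)
    x, y, out = pts[0][0], pts[0][1], []
    for _ in range(n):
        a, e, c, d, f = rng.choice(maps)
        x, y = a * x + e, c * x + d * y + f
        out.append((x, y))
    return out
```

The attractor interpolates the data by construction, and the rendered point set is much denser than the original samples, which is the property exploited to fill in pixel-deficient correlation images.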
Blind Detection of Region Duplication Forgery Using Fractal Coding and Feature Matching.
Jenadeleh, Mohsen; Ebrahimi Moghaddam, Mohsen
2016-05-01
Digital image forgery detection is important because of its wide use in applications such as medical diagnosis, legal investigations, and entertainment. Copy-move forgery, which duplicates a region of an image, is one of the best-known techniques. Many existing copy-move detection algorithms cannot effectively blind-detect duplicated regions that are made by powerful image manipulation software like Photoshop. In this study, a new method is proposed for the blind detection of manipulations in digital images based on modified fractal coding and feature vector matching. The proposed method not only detects typical copy-move forgery, but also finds multiple copied forgery regions in images that have been subjected to rotation, scaling, reflection, and mixtures of these postprocessing operations. The proposed method is robust against tampered images undergoing attacks such as Gaussian blurring, contrast scaling, and brightness adjustment. The experimental results demonstrated the validity and efficiency of the method. PMID:27122398
Evaluation of Two Fractal Methods for Magnetogram Image Analysis
NASA Technical Reports Server (NTRS)
Stark, B.; Adams, M.; Hathaway, D. H.; Hagyard, M. J.
1997-01-01
Fractal and multifractal techniques have been applied to various types of solar data to study the fractal properties of sunspots as well as the distribution of photospheric magnetic fields and the role of random motions on the solar surface in this distribution. Other research includes the investigation of changes in the fractal dimension as an indicator for solar flares. Here we evaluate the efficacy of two methods for determining the fractal dimension of an image data set: the Differential Box Counting scheme and a new method, the Jaenisch scheme. To determine the sensitivity of the techniques to changes in image complexity, various types of constructed images are analyzed. In addition, we apply this method to solar magnetogram data from Marshall Space Flight Center's vector magnetograph.
Liver ultrasound image classification by using fractal dimension of edge
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Bibicu, Dorin; Moraru, Luminita
2012-08-01
Edge detection in medical ultrasound images is an important component of many segmentation applications and has therefore been the subject of many studies in the literature. In this study, we classify liver ultrasound (US) images by combining the Canny and Sobel edge detectors with fractal analysis in order to provide an indicator of the roughness of the US images. We intend to provide a classification rule for focal liver lesions as cirrhotic liver, liver hemangioma, or healthy liver. The Canny and Sobel operators were used for edge detection. Fractal analysis was applied for texture analysis and classification of focal liver lesions according to the fractal dimension (FD), determined using the box-counting method. To assess the performance and accuracy of the proposed method, the contrast-to-noise ratio (CNR) is analyzed.
Wideband Fractal Antennas for Holographic Imaging and Rectenna Applications
Bunch, Kyle J.; McMakin, Douglas L.; Sheen, David M.
2008-04-18
At Pacific Northwest National Laboratory, wideband antenna arrays have been successfully used to reconstruct three-dimensional images at microwave and millimeter-wave frequencies. Applications of this technology have included portal monitoring, through-wall imaging, and weapons detection. Fractal antennas have been shown to have wideband characteristics due to their self-similar nature (that is, their geometry is replicated at different scales). They further have advantages in providing good characteristics in a compact configuration. We discuss the application of fractal antennas for holographic imaging. Simulation results will be presented. Rectennas are a specific class of antennas in which a received signal drives a nonlinear junction and is retransmitted at either a harmonic frequency or a demodulated frequency. Applications include tagging and tracking objects with a uniquely-responding antenna. It is of interest to consider fractal rectenna because the self-similarity of fractal antennas tends to make them have similar resonance behavior at multiples of the primary resonance. Thus, fractal antennas can be suited for applications in which a signal is reradiated at a harmonic frequency. Simulations will be discussed with this application in mind.
Fractal image perception provides novel insights into hierarchical cognition.
Martins, M J; Fischmeister, F P; Puig-Waldmüller, E; Oh, J; Geissler, A; Robinson, S; Fitch, W T; Beisteiner, R
2014-08-01
Hierarchical structures play a central role in many aspects of human cognition, prominently including both language and music. In this study we addressed hierarchy in the visual domain, using a novel paradigm based on fractal images. Fractals are self-similar patterns generated by repeating the same simple rule at multiple hierarchical levels. Our hypothesis was that the brain uses different resources for processing hierarchies depending on whether it applies a "fractal" or a "non-fractal" cognitive strategy. We analyzed the neural circuits activated by these complex hierarchical patterns in an event-related fMRI study of 40 healthy subjects. Brain activation was compared across three different tasks: a similarity task, and two hierarchical tasks in which subjects were asked to recognize the repetition of a rule operating transformations either within an existing hierarchical level, or generating new hierarchical levels. Similar hierarchical images were generated by both rules and target images were identical. We found that when processing visual hierarchies, engagement in both hierarchical tasks activated the visual dorsal stream (occipito-parietal cortex, intraparietal sulcus and dorsolateral prefrontal cortex). In addition, the level-generating task specifically activated circuits related to the integration of spatial and categorical information, and with the integration of items in contexts (posterior cingulate cortex, retrosplenial cortex, and medial, ventral and anterior regions of temporal cortex). These findings provide interesting new clues about the cognitive mechanisms involved in the generation of new hierarchical levels as required for fractals. PMID:24699014
Fractal image analysis - Application to the topography of Oregon and synthetic images.
NASA Technical Reports Server (NTRS)
Huang, Jie; Turcotte, Donald L.
1990-01-01
Digitized topography for the state of Oregon has been used to obtain maps of fractal dimension and roughness amplitude. The roughness amplitude correlates well with variations in relief and is a promising parameter for the quantitative classification of landforms. The spatial variations in fractal dimension are low and show no clear correlation with different tectonic settings. For Oregon the mean fractal dimension from a two-dimensional spectral analysis is D = 2.586, and for a one-dimensional spectral analysis the mean fractal dimension is D = 1.487, which is close to the Brown noise value D = 1.5. Synthetic two-dimensional images have also been generated for a range of D values. For D = 2.6, the synthetic image has a mean one-dimensional spectral fractal dimension D = 1.58, which is consistent with the results for Oregon. This approach can be easily applied to any digitized image that obeys fractal statistics.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Quattrochi, Dale A.; Luvall, Jeffrey C.
1997-01-01
Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely scale-independent. Self-similarity is a property of curves or surfaces where each part is indistinguishable from the whole. The fractal dimension D of remote sensing data yields quantitative insight on the spatial complexity and information content contained within these data. Analyses of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). The forested scene behaves as one would expect: larger pixel sizes decrease the complexity of the image as individual clumps of trees are averaged into larger blocks. The increased complexity of the agricultural image with increasing pixel size results from the loss of homogeneous groups of pixels in the large fields to mixed pixels composed of varying combinations of NDVI values that correspond to roads and vegetation. The same process occurs in the urban image to some extent, but the lack of large, homogeneous areas in the high resolution NDVI image means the initially high D value is maintained as pixel size increases. The slope of the fractal dimension-resolution relationship provides indications of how image classification or feature identification will be affected by changes in sensor resolution.
Using fractal compression scheme to embed a digital signature into an image
NASA Astrophysics Data System (ADS)
Puate, Joan; Jordan, Frederic D.
1997-01-01
With the increase in the number of digital networks and recording devices, digital images, especially still images, have become a material whose ownership is widely threatened by the availability of simple, rapid and perfect means of duplication and distribution. It is in this context that several European projects are devoted to finding a technical solution which, as it applies to still images, introduces a code or watermark into the image data itself. This watermark should not only allow one to determine the owner of the image, but also respect its quality and be difficult to remove. An additional requirement is that the code should be retrievable using only the protected information itself. In this paper, we propose a new scheme based on fractal coding and decoding. In general terms, a fractal coder exploits the spatial redundancy within the image by establishing a relationship between its different parts. We describe a way to use this relationship as a means of embedding a watermark. Tests have been performed in order to measure the robustness of the technique against JPEG conversion and low-pass filtering. In both cases, very promising results have been obtained.
Confocal coded aperture imaging
Tobin, Jr., Kenneth William; Thomas, Jr., Clarence E.
2001-01-01
A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and, reconstructing the shadow image into a 3-dimensional image of the every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.
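The reconstruction step described above, correlating the recorded shadow image with a version of the coded aperture, can be sketched numerically (an illustrative toy model: a random 50% open mask stands in for a real non-diffractive coded aperture, which in practice would use a URA/MURA pattern for flat correlation sidelobes):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32
aperture = (rng.random((N, N)) < 0.5).astype(float)  # 50% open random mask

# Two point sources; each projects a shifted copy of the mask, so the
# recorded "shadow image" is the scene circularly convolved with the mask.
scene = np.zeros((N, N))
scene[8, 8] = 1.0
scene[20, 25] = 0.5
shadow = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(aperture)))

# Decoding: cross-correlate the shadow with the aperture pattern.
decoded = np.real(np.fft.ifft2(np.fft.fft2(shadow) * np.conj(np.fft.fft2(aperture))))
decoded /= aperture.sum()
```

The decoded image is the scene convolved with the mask's autocorrelation; with a well-designed mask that autocorrelation approaches a delta function, so the point sources reappear at their true positions.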
Fractal descriptors for discrimination of microscopy images of plant leaves
NASA Astrophysics Data System (ADS)
Silva, N. R.; Florindo, J. B.; Gómez, M. C.; Kolb, R. M.; Bruno, O. M.
2014-03-01
This study proposes the application of the fractal descriptors method to the discrimination of microscopy images of plant leaves. Fractal descriptors have been demonstrated to be a powerful discriminative method in image analysis, mainly for the discrimination of natural objects. In fact, these descriptors express the spatial arrangement of pixels inside the texture under different scales, and such arrangements are directly related to physical properties inherent to the material depicted in the image. Here, we employ the Bouligand-Minkowski descriptors, which are obtained by the dilation of a surface mapping the gray-level texture. The classification of the microscopy images is performed by the well-known Support Vector Machine (SVM) method, and we compare the success rate with other texture analysis methods from the literature. The proposed method achieved a correctness rate of 89%, while the second best solution, the co-occurrence descriptors, yielded only 78%. This clear advantage demonstrates the potential of fractal descriptors in the analysis of plant microscopy images.
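The Bouligand-Minkowski descriptors mentioned above can be sketched as follows (a minimal illustration of the usual construction, assuming SciPy is available: the gray-level texture is mapped to a surface in a 3-D grid and dilated via a Euclidean distance transform; the radii and image sizes are arbitrary choices, not those of the study):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def bouligand_minkowski(img, radii=(1, 2, 3, 4, 5)):
    """Bouligand-Minkowski descriptors of a gray-level texture: the texture
    is mapped to surface points (i, j, g(i, j)) in a 3-D grid, the surface is
    dilated, and the descriptor vector is log V(r), the dilation volume."""
    h, w = img.shape
    pad = max(radii)                      # vertical margin for the dilation
    vol = np.ones((h, w, int(img.max()) + 2 * pad + 1), dtype=bool)
    vol[np.arange(h)[:, None], np.arange(w)[None, :], img + pad] = False
    dist = distance_transform_edt(vol)    # distance to the nearest surface voxel
    return np.array([np.log((dist <= r).sum()) for r in radii])

rng = np.random.default_rng(3)
smooth = np.zeros((16, 16), dtype=int)          # flat texture
rough = rng.integers(0, 16, (16, 16))           # rough texture
```

Rougher textures dilate into more volume at every radius, so the descriptor curves separate textures by their complexity; these log-volume vectors are what would be fed to the SVM.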
Bingham, Philip R; Santos-Villalobos, Hector J
2011-01-01
Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system, with equations for the limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects, followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.
Beyond maximum entropy: Fractal Pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
Multi-modal multi-fractal boundary encoding in object-based image compression
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.
2006-08-01
The compact representation of region boundary contours is key to efficient representation and compression of digital images using object-based compression (OBC). In OBC, regions are coded in terms of their texture, color, and shape. Given the appropriate representation scheme, high compression ratios (e.g., 500:1 <= CR <= 2,500:1) have been reported for selected images. Because a region boundary is often represented with more parameters than the region contents, it is crucial to maximize the boundary compression ratio by reducing these parameters. Researchers have elsewhere shown that cherished boundary encoding techniques such as chain coding, simplicial complexes, or quadtrees, to name but a few, are inadequate to support OBC within the aforementioned CR range. Several existing compression standards such as MPEG support efficient boundary representation, but do not necessarily support OBC at CR >= 500:1 . Siddiqui et al. exploited concepts from fractal geometry to encode and compress region boundaries based on fractal dimension, reporting CR = 286.6:1 in one test. However, Siddiqui's algorithm is costly and appears to contain ambiguities. In this paper, we first discuss fractal dimension and OBC compression ratio, then enhance Siddiqui's algorithm, achieving significantly higher CR for a wide variety of boundary types. In particular, our algorithm smoothes a region boundary B, then extracts its inflection or control points P, which are compactly represented. The fractal dimension D is computed locally for the detrended B. By appropriate subsampling, one efficiently segments disjoint clusters of D values subject to a preselected tolerance, thereby partitioning B into a multifractal. This is accomplished using four possible compression modes. In contrast, previous researchers have characterized boundary variance with one fractal dimension, thereby producing a monofractal. At its most complex, the compressed representation contains P, a spatial marker, and a D value
Advanced fractal approach for unsupervised classification of SAR images
NASA Astrophysics Data System (ADS)
Pant, Triloki; Singh, Dharmendra; Srivastava, Tanuja
2010-06-01
Unsupervised classification of Synthetic Aperture Radar (SAR) images is the alternative approach when no or minimal a priori information about the image is available. Therefore, an attempt has been made in the present paper to develop an unsupervised classification scheme for SAR images based on textural information. For the extraction of textural features two properties are used, viz. the fractal dimension D and Moran's I. Using these indices, an algorithm is proposed for contextual classification of SAR images. The novelty of the algorithm is that it implements the textural information available in a SAR image with the help of two texture measures, viz. D and I. For the estimation of D, the Two Dimensional Variation Method (2DVM) has been revised and implemented, and its performance is compared with another method, the Triangular Prism Surface Area Method (TPSAM). It is also necessary to check the classification accuracy for various window sizes and to optimize the window size for the best classification. This exercise has been carried out to determine the effect of window size on classification accuracy. The algorithm is applied to four SAR images of the Hardwar region, India, and the classification accuracy has been computed. A comparison of the proposed algorithm, using both fractal dimension estimation methods, with the K-Means algorithm is discussed. The maximum overall classification accuracy with K-Means comes to 53.26%, whereas the overall classification accuracy with the proposed algorithm is 66.16% for TPSAM and 61.26% for 2DVM.
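Of the two texture measures used above, Moran's I is straightforward to sketch (a minimal version with rook-adjacency binary weights on a full image; the study's window-based contextual use is not reproduced here):

```python
import numpy as np

def morans_i(img):
    """Moran's I spatial autocorrelation with rook (4-neighbour) weights:
    I = (n / W) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2."""
    x = img.astype(float) - img.mean()
    n = x.size
    # cross-products over horizontally and vertically adjacent pairs,
    # doubled because w_ij is symmetric (each pair counted both ways)
    num = 2.0 * ((x[:, :-1] * x[:, 1:]).sum() + (x[:-1, :] * x[1:, :]).sum())
    W = 2.0 * (x.shape[0] * (x.shape[1] - 1) + (x.shape[0] - 1) * x.shape[1])
    return (n / W) * num / (x * x).sum()

grad = np.tile(np.arange(16.0), (16, 1))        # smooth gradient
checker = np.indices((16, 16)).sum(axis=0) % 2  # checkerboard
```

Smooth gradients give strongly positive I (neighbours resemble each other), while a checkerboard gives I near -1; this contrast is what makes the index useful as a texture feature alongside D.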
Automatic classification for pathological prostate images based on fractal analysis.
Huang, Po-Whei; Lee, Cheng-Hsiung
2009-07-01
Accurate grading of prostatic carcinoma in pathological images is important to prognosis and treatment planning. Since human grading is always time-consuming and subjective, this paper presents a computer-aided system to automatically grade pathological images according to the Gleason grading system, which is the most widespread method for histological grading of prostate tissues. We propose two feature extraction methods based on fractal dimension to analyze variations of intensity and texture complexity in regions of interest. Each image can be classified into an appropriate grade by using Bayesian, k-NN, and support vector machine (SVM) classifiers, respectively. Leave-one-out and k-fold cross-validation procedures were used to estimate the correct classification rates (CCR). Experimental results show that 91.2%, 93.7%, and 93.7% CCR can be achieved by the Bayesian, k-NN, and SVM classifiers, respectively, for a set of 205 pathological prostate images. If our fractal-based feature set is optimized by the sequential floating forward selection method, the CCR can be improved to 94.6%, 94.2%, and 94.6%, respectively, using each of the above three classifiers. Experimental results also show that our feature set is better than the feature sets extracted from multiwavelets, Gabor filters, and gray-level co-occurrence matrix methods because it has a much smaller size and still keeps the most powerful discriminating capability in grading prostate images. PMID:19164082
Aliasing removing of hyperspectral image based on fractal structure matching
NASA Astrophysics Data System (ADS)
Wei, Ran; Zhang, Ye; Zhang, Junping
2015-05-01
Due to its richness in high-frequency components, a hyperspectral image (HSI) is more sensitive to distortions such as aliasing. Many methods aiming at removing such distortion have been proposed. However, few of them are suitable for HSI, due to its characteristically low spatial resolution. Fortunately, HSI contains plentiful spectral information, which can be exploited to overcome such difficulties. Motivated by this, we propose an aliasing removal method for HSI. The major difference between the proposed and current methods is that the proposed algorithm is able to utilize fractal structure information, thus resolving the dilemma originating from the low resolution of HSI. Experiments on real HSI data demonstrated, subjectively and objectively, that the proposed method can not only remove the annoying visual effects brought by aliasing, but also recover more high-frequency components.
Fractal analysis of scatter imaging signatures to distinguish breast pathologies
NASA Astrophysics Data System (ADS)
Eguizabal, Alma; Laughney, Ashley M.; Krishnaswamy, Venkataramanan; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.; López-Higuera, José M.; Conde, Olga M.
2013-02-01
Fractal analysis combined with a label-free scattering technique is proposed for describing the pathological architecture of tumors. Clinicians and pathologists are conventionally trained to classify abnormal features such as structural irregularities or high indices of mitosis. The potential of fractal analysis lies in its being a morphometric measure of irregular structures, providing a measure of an object's complexity and self-similarity. As cancer is characterized by disorder and irregularity in tissues, this measure could be related to tumor growth. Fractal analysis has previously been applied to the understanding of the tumor vasculature network. This work addresses the feasibility of applying fractal analysis to the scattering power map (as a physical model) and to the principal components (as a statistical model) provided by a localized reflectance spectroscopic system. Disorder, irregularity and cell size variation in tissue samples are translated into the magnitudes of the scattering power and principal components, and their fractal dimension is correlated with the pathologist's assessment of the samples. The fractal dimension is computed by applying the box-counting technique. Results show that fractal analysis of ex-vivo fresh tissue samples exhibits separated ranges of fractal dimension that could help a classifier combining the fractal results with other morphological features. This contrast trend would help in the discrimination of tissues in the intraoperative context and may serve as a useful adjunct to surgeons.
Analysis of fractal dimensions of rat bones from film and digital images
NASA Technical Reports Server (NTRS)
Pornprasertsuk, S.; Ludlow, J. B.; Webber, R. L.; Tyndall, D. A.; Yamauchi, M.
2001-01-01
OBJECTIVES: (1) To compare the effect of two different intra-oral image receptors on estimates of fractal dimension; and (2) to determine the variations in fractal dimensions between the femur, tibia and humerus of the rat and between their proximal, middle and distal regions. METHODS: The left femur, tibia and humerus from 24 4-6-month-old Sprague-Dawley rats were radiographed using intra-oral film and a charge-coupled device (CCD). Films were digitized at a pixel density comparable to the CCD using a flat-bed scanner. Square regions of interest were selected from proximal, middle, and distal regions of each bone. Fractal dimensions were estimated from the slope of regression lines fitted to plots of log power against log spatial frequency. RESULTS: The fractal dimensions estimates from digitized films were significantly greater than those produced from the CCD (P=0.0008). Estimated fractal dimensions of three types of bone were not significantly different (P=0.0544); however, the three regions of bones were significantly different (P=0.0239). The fractal dimensions estimated from radiographs of the proximal and distal regions of the bones were lower than comparable estimates obtained from the middle region. CONCLUSIONS: Different types of image receptors significantly affect estimates of fractal dimension. There was no difference in the fractal dimensions of the different bones but the three regions differed significantly.
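The spectral estimation procedure described in the Methods, fitting a regression line to log power against log spatial frequency, can be sketched for a 1-D profile (an illustration only; the study worked on 2-D regions of interest, and the relation D = (5 - beta)/2 assumed here applies to fractional Brownian profiles with power spectrum proportional to f^-beta):

```python
import numpy as np

def spectral_fd(profile):
    """Fractal dimension of a 1-D profile from the slope of a regression of
    log power against log spatial frequency, using D = (5 - beta)/2."""
    n = len(profile)
    power = np.abs(np.fft.rfft(profile - np.mean(profile))) ** 2
    freq = np.fft.rfftfreq(n)
    # exclude the DC and Nyquist bins from the fit
    slope, _ = np.polyfit(np.log(freq[1:-1]), np.log(power[1:-1]), 1)
    return (5.0 + slope) / 2.0   # the fitted slope is -beta

# spectral synthesis of a Brown-noise profile (beta = 2, so D = 1.5)
rng = np.random.default_rng(0)
n = 4096
f = np.fft.rfftfreq(n)
f[0] = 1.0                                   # avoid division by zero at DC
amp = f ** -1.0                              # |F| ~ f^(-beta/2) with beta = 2
phase = np.exp(2j * np.pi * rng.random(len(f)))
profile = np.fft.irfft(amp * phase, n)
D = spectral_fd(profile)
```

Synthesizing a profile with a known spectral exponent and recovering D is a simple sanity check for this estimator before applying it to radiographic regions of interest.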
Intelligent fuzzy approach for fast fractal image compression
NASA Astrophysics Data System (ADS)
Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila
2014-12-01
Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computations of FIC. In the first phase, range and domain blocks are arranged based on their edge property. In the second, an imperialist competitive algorithm (ICA) is used according to the classified blocks. For maintaining the quality of the retrieved image and accelerating the algorithm, we divided the solutions into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results achieved exhibit better performance than genetic algorithm (GA)-based and full-search algorithms in terms of decreasing the number of MSE computations. The proposed algorithm reduced the number of MSE computations such that it ran 463 times faster than the full-search algorithm, while the quality of the retrieved image did not change considerably.
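The MSE-heavy search that such methods try to accelerate is the core fractal-coding step: matching each range block to an affinely transformed, downsampled domain block. A brute-force sketch of that full search (a hypothetical minimal version, without the block isometries and classification used in practice):

```python
import numpy as np

def fic_full_search(img, rsize=4):
    """Exhaustive fractal-coding search: for each range block, find the
    domain block (twice the size, averaged down) and gray map s*d + o
    minimising the MSE -- the computation the paper seeks to reduce."""
    h, w = img.shape
    dsize = 2 * rsize
    # candidate domain blocks, 2x2-averaged down to range-block size
    domains = []
    for i in range(0, h - dsize + 1, dsize):
        for j in range(0, w - dsize + 1, dsize):
            d = img[i:i + dsize, j:j + dsize].reshape(rsize, 2, rsize, 2).mean((1, 3))
            domains.append(d)
    code = []
    for i in range(0, h, rsize):
        for j in range(0, w, rsize):
            r = img[i:i + rsize, j:j + rsize].astype(float)
            best = None
            for k, d in enumerate(domains):
                # least-squares contrast s and brightness o for r ~ s*d + o
                s, o = np.polyfit(d.ravel(), r.ravel(), 1)
                mse = ((s * d + o - r) ** 2).mean()
                if best is None or mse < best[0]:
                    best = (mse, k, s, o)
            code.append(best)
    return code

img = np.random.default_rng(5).integers(0, 256, (16, 16))
code = fic_full_search(img)
```

Even on this toy 16 x 16 image every range block is tested against every domain block, which is why metaheuristics such as ICA or GA are attractive for pruning the MSE evaluations.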
Loh, N. Duane
2012-06-20
This deposition includes the aerosol diffraction images used for phasing, fractal morphology, and time-of-flight mass spectrometry. Files in this deposition are ordered in subdirectories that reflect the specifics.
Targets detection in smoke-screen image sequences using fractal and rough set theory
NASA Astrophysics Data System (ADS)
Yan, Xiaoke
2015-08-01
In this paper, a new algorithm for the detection of moving targets in smoke-screen image sequences is presented, which uses rough set theory to combine three pixel properties: grey level, fractal dimension and correlation between pixels. The first step is to locate and extract regions that may contain objects in an image using a local grey-level threshold technique. Secondly, the fractal dimensions of pixels are calculated and smoke-screen discrimination is performed at different fractal dimensions. Finally, according to the temporal and spatial correlations between different frames, the singular points can be filtered out. The experimental results show that the algorithm can effectively increase the detection probability and is robust.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-08-01
The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains for characterizing biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier spectrum based fractal descriptors are estimated at specific scales and directions to characterize the image. The support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal brain magnetic resonance images (MRI) and abnormal ones affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, as well as those estimated directly from the original image.
Fractal scaling of apparent soil moisture estimated from vertical planes of Vertisol pit images
NASA Astrophysics Data System (ADS)
Cumbrera, Ramiro; Tarquis, Ana M.; Gascó, Gabriel; Millán, Humberto
2012-07-01
Image analysis could be a useful tool for investigating the spatial patterns of apparent soil moisture at multiple resolutions. The objectives of the present work were (i) to define apparent soil moisture patterns from vertical planes of Vertisol pit images and (ii) to describe the scaling of the apparent soil moisture distribution using fractal parameters. Twelve soil pits (0.70 m long × 0.60 m wide × 0.30 m deep) were excavated on a bare Mazic Pellic Vertisol. Six of them were excavated in April 2011 and six pits were established in May 2011 after 3 days of a moderate rainfall event. Digital photographs were taken of each Vertisol pit using a Kodak™ digital camera. The mean image size was 1600 × 945 pixels, with one physical pixel ≈373 μm of the photographed soil pit. Each soil image was analyzed using two fractal scaling exponents, the box counting (capacity) dimension (DBC) and the interface fractal dimension (Di), and three prefractal scaling coefficients: the total number of boxes intercepting the foreground pattern at unit scale (A), the fractal lacunarity at unit scale (Λ1) and the Shannon entropy at unit scale (S1). All the scaling parameters identified significant differences between the two sets of spatial patterns. Fractal lacunarity was the best discriminator between apparent soil moisture patterns. Soil image interpretation with fractal exponents and prefractal coefficients can be incorporated within a site-specific agriculture toolbox. While fractal exponents convey information on the space-filling characteristics of the pattern, prefractal coefficients represent the investigated soil property as seen through a higher resolution microscope. In spite of some computational and practical limitations, image analysis of apparent soil moisture patterns could be used in connection with traditional soil moisture sampling, which always renders point estimates.
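Gliding-box lacunarity, the best discriminator found above, can be sketched for a binary pattern (a minimal version; the box size and test patterns are illustrative, and the unit-scale coefficient Λ1 corresponds to r = 1):

```python
import numpy as np

def lacunarity(binary, r):
    """Gliding-box lacunarity of a binary pattern at box size r: the ratio
    of the second to the squared first moment of the box-mass distribution."""
    # box masses for every r x r window via a 2-D cumulative-sum trick
    c = np.pad(binary, ((1, 0), (1, 0))).cumsum(0).cumsum(1).astype(float)
    m = c[r:, r:] - c[:-r, r:] - c[r:, :-r] + c[:-r, :-r]
    return (m ** 2).mean() / m.mean() ** 2

rng = np.random.default_rng(2)
uniform = np.ones((64, 64), dtype=int)               # completely filled pattern
sparse = (rng.random((64, 64)) < 0.1).astype(int)    # sparse, gappy pattern
```

A completely filled pattern has lacunarity exactly 1 at every box size, while gappy patterns score higher, which is why the measure separates wet and dry moisture patterns with the same fractal dimension.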
High compression image and image sequence coding
NASA Technical Reports Server (NTRS)
Kunt, Murat
1989-01-01
The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.
Time Series Analysis OF SAR Image Fractal Maps: The Somma-Vesuvio Volcanic Complex Case Study
NASA Astrophysics Data System (ADS)
Pepe, Antonio; De Luca, Claudio; Di Martino, Gerardo; Iodice, Antonio; Manzo, Mariarosaria; Pepe, Susi; Riccio, Daniele; Ruello, Giuseppe; Sansosti, Eugenio; Zinno, Ivana
2016-04-01
The fractal dimension is a significant geophysical parameter describing natural surfaces; it represents the distribution of roughness over different spatial scales and, in the case of volcanic structures, it has been related to the specific nature of materials and to the effects of active geodynamic processes. In this work, we present an analysis of the temporal behavior of the fractal dimension estimates generated from multi-pass SAR images relevant to the Somma-Vesuvio volcanic complex (South Italy). To this aim, we consider a Cosmo-SkyMed data-set of 42 stripmap images acquired from ascending orbits between October 2009 and December 2012. Starting from these images, we generate a three-dimensional stack composed of the corresponding fractal maps (ordered according to the acquisition dates), after a proper co-registration. The time series of the pixel-by-pixel estimated fractal dimension values show that, over invariant natural areas, the fractal dimension values do not reveal significant changes; on the contrary, over urban areas, the fractal dimension correctly assumes values outside the fractality range of natural surfaces and shows strong fluctuations. As a final result of our analysis, we generate a fractal map that includes only the areas where the fractal dimension is considered reliable and stable (i.e., whose standard deviation computed over the time series is reasonably small). The so-obtained fractal dimension map is then used to identify areas that are homogeneous from a fractal viewpoint. Indeed, the analysis of this map reveals the presence of two distinctive landscape units corresponding to Mt. Vesuvio and Gran Cono. The comparison with the (simplified) geological map clearly shows the presence in these two areas of volcanic products of different ages. The presented fractal dimension map analysis demonstrates the ability to characterize the evolution degree of the monitored volcanic edifice and can be profitably extended in the future to other volcanic systems with
Statistical fractal border features for MRI breast mass images
NASA Astrophysics Data System (ADS)
Penn, Alan I.; Bolinger, Lizann; Loew, Murray H.
1998-06-01
MRI has been proposed as an alternative method to mammography for detecting and staging breast cancer. Recent studies have shown that architectural features of breast masses may be useful in improving specificity. Since fractal dimension (fd) has been correlated with roughness, and border roughness is an indicator of malignancy, the fd of the mass border is a promising architectural feature for achieving improved specificity. Previous methods of estimating the fd of the mass border have been unreliable because of limited data or overly restrictive assumptions of the fractal model. We present preliminary results of a statistical approach in which a sample space of fd estimates is generated from a family of self-affine fractal models. The fd of the mass border is then estimated from the statistics of the sample space.
Edge detection and image segmentation of space scenes using fractal analyses
NASA Technical Reports Server (NTRS)
Cleghorn, Timothy F.; Fuller, J. J.
1992-01-01
A method was developed for segmenting images of space scenes into manmade and natural components, using fractal dimensions and lacunarities. Calculations of these parameters are presented. Results are presented for a variety of aerospace images, showing that it is possible to perform edge detection of manmade objects against natural backgrounds such as those seen in an aerospace environment.
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Siu-Ngan Lam, Nina; Quattrochi, Dale A.
2004-01-01
The accuracy of traditional multispectral maximum-likelihood image classification is limited by the skewed statistical distributions of reflectances from the complex heterogeneous mixture of land cover types in urban areas. This work examines the utility of local variance, fractal dimension, and Moran's I index of spatial autocorrelation in segmenting multispectral satellite imagery. Tools available in the Image Characterization and Modeling System (ICAMS) were used to analyze Landsat 7 imagery of Atlanta, Georgia. Although segmentation of panchromatic images is possible using indicators of spatial complexity, different land covers often yield similar values of these indices. Better results are obtained when a surface of local fractal dimension or spatial autocorrelation is combined as an additional layer in a supervised maximum-likelihood multispectral classification. The addition of fractal dimension measures is particularly effective at resolving land cover classes within urbanized areas, as compared to per-pixel spectral classification techniques.
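Moran's I, one of the spatial indices mentioned above, is easy to compute directly for a raster using binary rook (4-neighbour) weights; this is a generic sketch, not the ICAMS implementation.

```python
import numpy as np

def morans_i(img):
    """Moran's I spatial autocorrelation of a 2-D raster with
    binary rook-adjacency weights (each neighbour pair counted
    in both directions)."""
    z = np.asarray(img, float)
    z = z - z.mean()                           # deviations from the mean
    num = 2.0 * (z[:, :-1] * z[:, 1:]).sum()   # horizontal pairs
    num += 2.0 * (z[:-1, :] * z[1:, :]).sum()  # vertical pairs
    w = 2 * (z[:, :-1].size + z[:-1, :].size)  # total weight sum
    n = z.size
    return (n / w) * num / (z ** 2).sum()

checker = np.indices((4, 4)).sum(axis=0) % 2   # perfectly dispersed
ramp = np.arange(16.0).reshape(4, 4)           # smoothly varying
```

A checkerboard yields -1 under this weight scheme (perfect negative autocorrelation), while the smooth ramp yields a clearly positive value.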
NASA Astrophysics Data System (ADS)
Habibi, Ali
1993-01-01
The objective of this article is to present a discussion of the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and development, the milestones in the advancement of technology, and the success of upcoming commercial products in the marketplace, which will be the main factors in shaping the future of image coding. What we propose to do, instead, is to look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in the development of theory, software, and hardware, coupled with the future needs for the use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, we predict the future state of image coding. What seems certain today is the growing need for bandwidth compression. Television uses a technology that is half a century old and is ready to be replaced by high-definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth-compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activity in the development of theory, software, special-purpose chips, and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.
Improved triangular prism methods for fractal analysis of remotely sensed images
NASA Astrophysics Data System (ADS)
Zhou, Yu; Fung, Tung; Leung, Yee
2016-05-01
Feature extraction has been a major area of research in remote sensing, and the fractal feature is a natural characterization of complex objects across scales. Extending the modified triangular prism (MTP) method, we systematically discuss three factors closely related to the estimation of the fractal dimensions of remotely sensed images: the (F1) number of steps, (F2) step size, and (F3) estimation accuracy of the facet areas of the triangular prisms. Differing from existing improved algorithms that consider these factors separately, we take all of them into account simultaneously to construct three new algorithms, namely a modification of the eight-pixel algorithm, the four-corner MTP, and the moving-average MTP. Numerical experiments based on 4000 generated images show their superior performance over existing algorithms: our algorithms not only overcome the limitation on image size suffered by existing algorithms but also obtain similar average fractal dimensions with smaller standard deviations, only 50% as large for images with high fractal dimensions. In a real-life application, our algorithms are more likely to obtain fractal dimensions within the theoretical range. Thus, the fractal nature uncovered by our algorithms is more reasonable in quantifying the complexity of remotely sensed images. Despite the similar performance of the three new algorithms, the moving-average MTP can mitigate the sensitivity of the MTP to noise and extreme values. Based on the numerical experiments and the real-life case study, we check the effect of the three factors (F1)-(F3) and demonstrate that they can be considered simultaneously to improve the performance of the MTP method.
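For reference, a minimal version of the basic (unmodified) triangular prism estimator shows the quantities that factors (F1)-(F3) refer to: the set of step sizes and the facet areas of the prisms. The step sizes and test surfaces below are illustrative, not those of the paper.

```python
import numpy as np

def triangle_area(p1, p2, p3):
    """Area of a 3-D triangle from the cross product of two edges."""
    return 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))

def triangular_prism_fd(z, steps=(1, 2, 4, 8)):
    """Basic triangular prism estimate of the fractal dimension of a
    gray-level surface z (square, side 2**m + 1): at each step size the
    prism facet areas are summed, and the dimension is 2 minus the
    slope of log(area) versus log(step)."""
    n = z.shape[0]
    log_s, log_a = [], []
    for s in steps:
        area = 0.0
        for i in range(0, n - s, s):
            for j in range(0, n - s, s):
                a = np.array([i, j, z[i, j]], float)
                b = np.array([i + s, j, z[i + s, j]], float)
                c = np.array([i + s, j + s, z[i + s, j + s]], float)
                d = np.array([i, j + s, z[i, j + s]], float)
                e = np.array([i + s / 2, j + s / 2,
                              (a[2] + b[2] + c[2] + d[2]) / 4.0])
                area += (triangle_area(a, b, e) + triangle_area(b, c, e)
                         + triangle_area(c, d, e) + triangle_area(d, a, e))
        log_s.append(np.log(s))
        log_a.append(np.log(area))
    return 2.0 - np.polyfit(log_s, log_a, 1)[0]

rng = np.random.default_rng(0)
rough = rng.random((65, 65)) * 40.0   # noisy surface: FD well above 2
flat = np.zeros((65, 65))             # plane: FD exactly 2
```

A flat plane gives a measured area that is independent of the step size, hence slope 0 and dimension exactly 2; rougher surfaces lose area as the step grows, pushing the estimate toward 3.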
ERIC Educational Resources Information Center
Barton, Ray
1990-01-01
Presented is an educational game called "The Chaos Game" which produces complicated fractal images. Two basic computer programs are included. The production of fractal images by the Sierpinski gasket and the Chaos Game programs is discussed. (CW)
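The Chaos Game itself fits in a few lines: pick a random vertex of a triangle, jump halfway toward it, and record each landing point; the cloud of points converges onto the Sierpinski gasket. This is a generic sketch (the article's own program listings are not reproduced here).

```python
import random

def chaos_game(n_points=20000, seed=1):
    """Chaos Game for the Sierpinski gasket: from the current point,
    jump halfway toward a randomly chosen vertex of the triangle."""
    random.seed(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
    x, y = 0.25, 0.25                  # arbitrary starting point
    points = []
    for _ in range(n_points):
        vx, vy = random.choice(vertices)
        x, y = (x + vx) / 2.0, (y + vy) / 2.0
        points.append((x, y))
    return points

pts = chaos_game()
# after a short transient every point lies on the gasket, so the
# central (removed) triangle of the fractal stays empty
```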
Fractal Analysis of Laplacian Pyramidal Filters Applied to Segmentation of Soil Images
de Castro, J.; Ballesteros, F.; Méndez, A.; Tarquis, A. M.
2014-01-01
The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters that is uniparametrical in the classical case, when the length of the filter is 5. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges in the case of a single parameter a. These fractal filters lose less energy in every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then to evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm threshold values, and conclude that our algorithm produces reliable test results. PMID:25114957
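A minimal Laplacian pyramid built with the standard single-parameter, length-5 Burt-Adelson generating kernel (the filter family the record refers to) can be sketched as follows; the soil-segmentation and porosity steps are omitted, and a = 0.4 is just the common default, not the paper's fitted value.

```python
import numpy as np

def kernel_1d(a=0.4):
    """Burt-Adelson 5-tap generating kernel; the single parameter a
    controls its shape (a = 0.4 is the classic Gaussian-like choice)."""
    return np.array([0.25 - a / 2, 0.25, a, 0.25, 0.25 - a / 2])

def blur(img, a=0.4):
    """Separable 5x5 filtering with reflect padding."""
    k = kernel_1d(a)
    pad = np.pad(np.asarray(img, float), 2, mode="reflect")
    tmp = np.zeros_like(pad)
    for i, w in enumerate(k):            # filter along rows
        tmp += w * np.roll(pad, i - 2, axis=0)
    out = np.zeros_like(tmp)
    for i, w in enumerate(k):            # then along columns
        out += w * np.roll(tmp, i - 2, axis=1)
    return out[2:-2, 2:-2]

def laplacian_pyramid(img, levels=3, a=0.4):
    gauss = [np.asarray(img, float)]
    for _ in range(levels):
        gauss.append(blur(gauss[-1], a)[::2, ::2])   # REDUCE
    bands = []
    for fine, coarse in zip(gauss, gauss[1:]):
        up = np.zeros_like(fine)
        up[::2, ::2] = coarse
        bands.append(fine - 4.0 * blur(up, a))       # band-pass residual
    bands.append(gauss[-1])                          # low-pass top level
    return bands

def collapse(bands, a=0.4):
    img = bands[-1]
    for band in reversed(bands[:-1]):
        up = np.zeros_like(band)
        up[::2, ::2] = img
        img = band + 4.0 * blur(up, a)               # EXPAND and add
    return img

rng = np.random.default_rng(1)
img = rng.random((32, 32))
bands = laplacian_pyramid(img)
restored = collapse(bands)
```

Because each band stores exactly what EXPAND of the next level misses, collapsing the pyramid reconstructs the image to machine precision.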
Automatic Method to Classify Images Based on Multiscale Fractal Descriptors and Paraconsistent Logic
NASA Astrophysics Data System (ADS)
Pavarino, E.; Neves, L. A.; Nascimento, M. Z.; Godoy, M. F.; Arruda, P. F.; Neto, D. S.
2015-01-01
This study presents an automatic method to classify images using fractal descriptors, such as multiscale fractal dimension and lacunarity, as decision rules. The proposed methodology is divided into three steps: quantification of the regions of interest with fractal dimension and lacunarity under a multiscale approach; definition of reference patterns, which are the limits of each studied group; and classification of each group, considering the combination of the reference patterns with signal maximization (an approach commonly considered in paraconsistent logic). The proposed method was used to classify histological prostate images, aiming at the diagnosis of prostate cancer. The accuracy levels were significant, surpassing those obtained with Support Vector Machine (SVM) and Best-First Decision Tree (BFTree) classifiers. The proposed approach allows patterns to be recognized and classified, offering the advantage of giving comprehensive results to specialists.
Local intensity adaptive image coding
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1989-01-01
The objective of preprocessing for machine vision is to extract intrinsic target properties. The most important properties ordinarily are structure and reflectance. Illumination in space, however, is a significant problem, as the extreme range of light intensity, stretching from deep shadow to highly reflective surfaces in direct sunlight, impairs the effectiveness of standard approaches to machine vision. To overcome this critical constraint, an image coding scheme is being investigated which combines local intensity adaptivity, image enhancement, and data compression. It is very effective under the highly variant illumination that can exist within a single frame or field of view, and it is very robust to noise at low illuminations. Some of the theory and salient features of the coding scheme are reviewed, its performance is characterized in a simulated space application, and the research and development activities are described.
NASA Technical Reports Server (NTRS)
Huang, J.; Turcotte, D. L.
1989-01-01
The concept of fractal mapping is introduced and applied to digitized topography of Arizona. It is shown that the topography of the state satisfies fractal statistics to a good approximation. The fractal dimensions and roughness amplitudes from subregions are used to construct maps of these quantities. It is found that the fractal dimension of actual two-dimensional topography is not obtained simply by adding unity to the fractal dimension of one-dimensional topographic tracks. In addition, consideration is given to the production of fractal maps from synthetically derived topography.
NASA Astrophysics Data System (ADS)
Bonifazi, Giuseppe
2002-05-01
Fractal geometry concerns the study of non-Euclidean geometrical figures generated by a recursive sequence of mathematical operations. These figures show self-similar features in the sense that their shape, at a certain scale, is equal to, or at least similar to, the shape of the figure at a different scale or resolution. This property of scale invariance also occurs in many natural phenomena. The basis of the method is the comparison between the space covering of one such geometrical construction, the 'Sierpinski carpet,' and the location and size distribution of the particles on the portion of surface acquired in the images. The fractal analysis method consists in studying the structure of the size distribution and the disposition modalities. Such an approach is presented in this paper with reference to the characterization of airborne dust produced in working environments, through the evaluation of particle size disposition and distribution over a surface. This behavior was in fact assumed to be strictly correlated with 1) the physical characteristics of the material surface and 2) the modalities by which the material is 'shot' onto the dust sample holder (glass support). To reach this goal, a 2D fractal extractor was used, calibrated to different area-thresholding values; as a result, the binary sample image changes, originating different fractal resolutions plotted against the residual background area (Richardson plot). Changing the lower size-thresholding value of the 2D fractal extractor algorithm means changing the starting point of the fractal measurements; in this way, possible differences among powders in the lower dimensional classes were sought. The rate at which the lowest part of the plot falls to zero residual area, together with the fractal dimensions (low and high, depending on the average material curve) and their precision (R2) on the 'zero curve' (the Richardson plot with an area-thresholding value equal to zero, i.e., the whole fractal distribution), can be used
Saremi, Saeed; Sejnowski, Terrence J
2016-05-01
Natural images are scale invariant with structures at all length scales. We formulated a geometric view of scale invariance in natural images using percolation theory, which describes the behavior of connected clusters on graphs. We map images to the percolation model by defining clusters on a binary representation for images. We show that critical percolating structures emerge in natural images and study their scaling properties by identifying fractal dimensions and exponents for the scale-invariant distributions of clusters. This formulation leads to a method for identifying clusters in images from underlying structures as a starting point for image segmentation. PMID:26415153
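The cluster-identification step can be sketched with plain connected-component labeling on a binarized image (4-connected sites, as in site percolation on the square lattice); the threshold sweep and scaling analysis are not shown.

```python
from collections import deque

import numpy as np

def label_clusters(binary):
    """Label 4-connected clusters of 1-pixels (site percolation on
    the square lattice) with a breadth-first flood fill."""
    binary = np.asarray(binary)
    labels = np.zeros(binary.shape, dtype=int)
    h, w = binary.shape
    current = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                current += 1
                labels[i, j] = current
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current

def cluster_sizes(labels, n_clusters):
    """Pixel count of each cluster, for the cluster-size distribution."""
    return np.bincount(labels.ravel(), minlength=n_clusters + 1)[1:]

binary = np.array([[1, 1, 0],
                   [0, 0, 1],
                   [1, 0, 1]])
labels, n_clusters = label_clusters(binary)
sizes = cluster_sizes(labels, n_clusters)
```

The histogram of `sizes` over many thresholds is what one would fit to extract the scale-invariant cluster distributions the abstract describes.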
Adaptive entropy coded subband coding of images.
Kim, Y H; Modestino, J W
1992-01-01
The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by use of noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive buffer instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to the corresponding ideal 2-D ECSBC system. PMID:18296138
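The subband front end that such coders build on can be illustrated with the simplest two-band filter pair (Haar); the entropy-constrained vector quantization stage of ECSBC is beyond a short sketch, and the band naming below is the usual LL/LH/HL/HH convention rather than the paper's notation.

```python
import numpy as np

def haar_analysis(x):
    """Split a 1-D signal (even length) into low- and high-pass subbands."""
    x = np.asarray(x, float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return low, high

def haar_synthesis(low, high):
    """Perfectly reconstruct the signal from its two subbands."""
    x = np.zeros(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2.0)
    x[1::2] = (low - high) / np.sqrt(2.0)
    return x

def subband2d(img):
    """One level of 2-D decomposition (rows then columns), yielding
    the four bands a subband coder would quantize separately."""
    lo, hi = zip(*[haar_analysis(r) for r in img])
    lo, hi = np.array(lo), np.array(hi)
    ll, lh = zip(*[haar_analysis(c) for c in lo.T])
    hl, hh = zip(*[haar_analysis(c) for c in hi.T])
    return (np.array(ll).T, np.array(lh).T,
            np.array(hl).T, np.array(hh).T)

x = np.arange(8.0)
rec = haar_synthesis(*haar_analysis(x))
ll, lh, hl, hh = subband2d(np.ones((4, 4)))
```

For a constant image all the energy lands in the LL band, which is why the detail bands compress so well.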
Redundancy reduction in image coding
NASA Technical Reports Server (NTRS)
Rahman, Zia-Ur; Alter-Gartenberg, Rachel; Fales, Carl L.; Huck, Friedrich O.
1993-01-01
We assess redundancy reduction in image coding in terms of the information acquired by the image-gathering process and the amount of data required to convey this information. A clear distinction is made between the theoretically minimum rate of data transmission, as measured by the entropy of the completely decorrelated data, and the actual rate of data transmission, as measured by the entropy of the encoded (incompletely decorrelated) data. It is shown that the information efficiency of the visual communication channel depends not only on the characteristics of the radiance field and the decorrelation algorithm, as is generally perceived, but also on the design of the image-gathering device, as is commonly ignored.
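The distinction drawn here between the entropy of decorrelated and raw data can be seen in a toy example: a smooth ramp has maximal zero-order entropy, while its horizontal prediction residual has almost none. This is a generic illustration, not the paper's image-gathering model.

```python
import numpy as np

def entropy_bits(values):
    """Zero-order Shannon entropy (bits/symbol) of a discrete sample."""
    _, counts = np.unique(np.asarray(values), return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

row = np.arange(256)     # smooth ramp: every gray level appears once
residual = np.diff(row)  # simple predictive decorrelation: differences
```

The ramp costs 8 bits/symbol to code directly, but its residual is a constant and costs essentially nothing, which is the redundancy a decorrelation algorithm removes.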
Fractal methods for extracting artificial objects from the unmanned aerial vehicle images
NASA Astrophysics Data System (ADS)
Markov, Eugene
2016-04-01
Unmanned aerial vehicles (UAVs) have become increasingly used in earth surface observation, with special interest in automatic modes of environmental control and recognition of artificial objects. Fractal methods for image processing detect artificial objects in digital space images well but have not previously been applied to UAV-produced imagery. Parameters of photography, on-board equipment, and image characteristics differ considerably between spacecraft and UAVs. Therefore, methods that work properly with space images can produce different results for UAVs. In this regard, testing the applicability of fractal methods to UAV-produced images and determining the optimal range of parameters for these methods are of great interest. This research is dedicated to the solution of this problem. Specific features of earth-surface images produced with UAVs are described in the context of their interpretation and recognition. Fractal image processing methods for extracting artificial objects are described. The results of applying these methods to UAV images are presented.
Plant Identification Based on Leaf Midrib Cross-Section Images Using Fractal Descriptors
da Silva, Núbia Rosa; Florindo, João Batista; Gómez, María Cecilia; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez
2015-01-01
The correct identification of plants is a common necessity not only for researchers but also for the lay public. Recently, computational methods have been employed to facilitate this task; however, there are few studies addressing the wide diversity of plants occurring in the world. This study proposes to analyse images obtained from cross-sections of the leaf midrib using fractal descriptors. These descriptors are obtained from the fractal dimension of the object computed at a range of scales. In this way, they provide rich information regarding the spatial distribution of the analysed structure and, as a consequence, they measure the multiscale morphology of the object of interest. In biology, such morphology is of great importance because it is related to evolutionary aspects and is successfully employed to characterize and discriminate among different biological structures. Here, the fractal descriptors are used to identify the species of plants based on the image of their leaves. A large number of samples is examined: 606 leaf samples of 50 species from the Brazilian flora. The results are compared to those of other imaging methods in the literature and demonstrate that fractal descriptors are precise and reliable in the taxonomic process of plant species identification. PMID:26091501
Reconstruction of coded aperture images
NASA Technical Reports Server (NTRS)
Bielefeld, Michael J.; Yin, Lo I.
1987-01-01
The balanced correlation method and the Maximum Entropy Method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a Uniformly Redundant Array (URA) system. Although the MEM method has advantages over the balanced correlation method, it is computationally time consuming because of the iterative nature of its solution. Massively Parallel Processing (MPP), with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM method in future coded-aperture experiments with the help of the MPP.
Alzheimer's Disease Detection in Brain Magnetic Resonance Images Using Multiscale Fractal Analysis
Lahmiri, Salim; Boukadoum, Mounir
2013-01-01
We present a new automated system for the detection of brain magnetic resonance images (MRI) affected by Alzheimer's disease (AD). The MRI is analyzed by means of multiscale analysis (MSA) to obtain its fractals at six different scales. The extracted fractals are used as features to differentiate healthy brain MRI from those of AD by a support vector machine (SVM) classifier. The result of classifying 93 brain MRIs consisting of 51 images of healthy brains and 42 of brains affected by AD, using leave-one-out cross-validation method, yielded 99.18% ± 0.01 classification accuracy, 100% sensitivity, and 98.20% ± 0.02 specificity. These results and a processing time of 5.64 seconds indicate that the proposed approach may be an efficient diagnostic aid for radiologists in the screening for AD. PMID:24967286
NASA Astrophysics Data System (ADS)
Ahammer, Helmut; DeVaney, Trevor T. J.
2004-03-01
The boundary of a fractal object, represented in a two-dimensional space, is theoretically a line with an infinitely small width. In digital images this boundary or contour is limited to the pixel resolution of the image, and the width of the line commonly depends on the edge-detection algorithm used. The Minkowski dimension was evaluated using three different edge-detection algorithms (the Sobel, Roberts, and Laplace operators). These three operators were investigated because they are very widely used and because their edge-detection results differ markedly in line width. Very common fractals (the Sierpinski carpet and Koch islands) were investigated, as well as binary images from a cancer invasion assay taken with a confocal laser scanning microscope. The fractal dimension is directly proportional to the width of the contour line; the fact that, in practice, the investigated objects are very often fractals only within a limited resolution range is considered as well.
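As a baseline for such experiments, the box-counting dimension of a synthetic Sierpinski carpet (one of the fractals investigated above) can be computed directly; the paper's Minkowski evaluation additionally passes the image through an edge-detection operator first, which this sketch omits.

```python
import numpy as np

def sierpinski_carpet(level):
    """Binary image of a Sierpinski carpet after `level` recursions."""
    img = np.ones((1, 1), dtype=bool)
    for _ in range(level):
        z = np.zeros_like(img)
        img = np.block([[img, img, img],
                        [img, z,   img],
                        [img, img, img]])
    return img

def box_counting_fd(binary, sizes=(1, 3, 9, 27)):
    """Box-counting dimension: slope of log N(s) versus log(1/s),
    where N(s) counts boxes of side s containing at least one pixel."""
    n = binary.shape[0]
    counts = []
    for s in sizes:
        boxes = binary.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    return np.polyfit(np.log([1.0 / s for s in sizes]), np.log(counts), 1)[0]

carpet = sierpinski_carpet(4)   # 81 x 81 image
fd = box_counting_fd(carpet)    # close to log 8 / log 3 = 1.8928...
```

On the exact carpet the counts follow a perfect power law, so the fit recovers the theoretical dimension log 8 / log 3; widening the boundary lines, as the abstract notes, biases such estimates.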
NASA Astrophysics Data System (ADS)
Liang, Yingjie; Ye, Allen Q.; Chen, Wen; Gatto, Rodolfo G.; Colon-Perez, Luis; Mareci, Thomas H.; Magin, Richard L.
2016-10-01
Non-Gaussian (anomalous) diffusion is widespread in biological tissues where its effects modulate chemical reactions and membrane transport. When viewed using magnetic resonance imaging (MRI), anomalous diffusion is characterized by a persistent or 'long tail' behavior in the decay of the diffusion signal. Recent MRI studies have used the fractional derivative to describe diffusion dynamics in normal and post-mortem tissue by connecting the order of the derivative with changes in tissue composition, structure and complexity. In this study we consider an alternative approach by introducing fractal time and space derivatives into Fick's second law of diffusion. This provides a more natural way to link sub-voxel tissue composition with the observed MRI diffusion signal decay following the application of a diffusion-sensitive pulse sequence. Unlike previous studies using fractional order derivatives, here the fractal derivative order is directly connected to the Hausdorff fractal dimension of the diffusion trajectory. The result is a simpler, computationally faster, and more direct way to incorporate tissue complexity and microstructure into the diffusional dynamics. Furthermore, the results are readily expressed in terms of spectral entropy, which provides a quantitative measure of the overall complexity of the heterogeneous and multi-scale structure of biological tissues. As an example, we apply this new model for the characterization of diffusion in fixed samples of the mouse brain. These results are compared with those obtained using the mono-exponential, the stretched exponential, the fractional derivative, and the diffusion kurtosis models. Overall, we find that the order of the fractal time derivative, the diffusion coefficient, and the spectral entropy are potential biomarkers to differentiate between the microstructure of white and gray matter. In addition, we note that the fractal derivative model has practical advantages over the existing models from the
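One common definition of the fractal (Hausdorff) time derivative, and the stretched-exponential relaxation it induces, can be sketched as follows; this illustrates the operator class only, and the study's exact space-time form may differ:

```latex
% fractal (Hausdorff) time derivative of order \alpha
\frac{\partial f}{\partial t^{\alpha}}
  = \lim_{s \to t} \frac{f(s) - f(t)}{s^{\alpha} - t^{\alpha}}

% first-order relaxation under this derivative
\frac{\partial u}{\partial t^{\alpha}} = -\lambda u
\quad \Longrightarrow \quad
u(t) = u(0)\, e^{-\lambda t^{\alpha}}
```

For α = 1 this reduces to the ordinary derivative and mono-exponential decay, which is why the mono-exponential model appears as a limiting case in the comparison above.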
Fractal analyses of osseous healing using Tuned Aperture Computed Tomography images
Seyedain, Ali; Webber, Richard L.; Nair, Umadevi P.; Piesco, Nicholas P.; Agarwal, Sudha; Mooney, Mark P.; Gröndahl, Hans-Göran
2016-01-01
The aim of this study was to evaluate osseous healing in mandibular defects using fractal analyses on conventional radiographs and tuned aperture computed tomography (TACT; OrthoTACT, Instrumentarium Imaging, Helsinki, Finland) images. Eighty test sites on the inferior margins of rabbit mandibles were subjected to lesion induction and treated with one of the following: no treatment (controls); osteoblasts only; polymer matrix only; or osteoblast-polymer matrix (OPM) combination. Images were acquired using conventional radiography and TACT, including unprocessed TACT (TACT-U) and iteratively restored TACT (TACT-IR). Healing was followed up over time and images acquired at 3, 6, 9, and 12 weeks post-surgery. Fractal dimension (FD) was computed within regions of interest in the defects using the TACT workbench. Results were analyzed for effects produced by imaging modality, treatment modality, time after surgery and lesion location. Histomorphometric data were available to assess ground truth. Significant differences (p < 0.0001) were noted based on imaging modality with TACT-IR recording the highest mean fractal dimension (MFD), followed by TACT-U and conventional images, in that order. Sites treated with OPM recorded the highest MFDs among all treatment modalities (p < 0.0001). The highest MFD based on time was recorded at 3 weeks and differed significantly with 12 weeks (p < 0.035). Correlation of FD with results of histomorphometric data was high (r = 0.79; p < 0.001). The FD computed on TACT-IR showed the highest correlation with histomorphometric data, thus establishing that TACT is a more efficient and accurate imaging modality for quantification of osseous changes within healing bony defects. PMID:11519567
Beyond maximum entropy: Fractal pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, R. C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.
Fractal evaluation of drug amorphicity from optical and scanning electron microscope images
NASA Astrophysics Data System (ADS)
Gavriloaia, Bogdan-Mihai G.; Vizireanu, Radu C.; Neamtu, Catalin I.; Gavriloaia, Gheorghe V.
2013-09-01
Amorphous materials are metastable and more reactive than crystalline ones, and have to be evaluated before pharmaceutical compound formulation. Amorphicity is interpreted as spatial chaos, and patterns of molecular aggregates of dexamethasone, D, were investigated in this paper by using the fractal dimension, FD. Images of D at three magnifications taken with an optical microscope, OM, and at eight magnifications with a scanning electron microscope, SEM, were analyzed. The average FD of the pattern irregularities was 1.538 for the OM images and about 1.692 for the SEM images. The FDs of the two kinds of images are less sensitive to the threshold level. 3D plots illustrate the dependence of FD on threshold and magnification level. As a result, the OM image at a single scale is enough to characterize the amorphicity of D.
Subband coding for image data archiving
NASA Technical Reports Server (NTRS)
Glover, Daniel; Kwatra, S. C.
1993-01-01
The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.
Subband coding for image data archiving
NASA Technical Reports Server (NTRS)
Glover, D.; Kwatra, S. C.
1992-01-01
The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.
Zone specific fractal dimension of retinal images as predictor of stroke incidence.
Aliahmad, Behzad; Kumar, Dinesh Kant; Hao, Hao; Unnikrishnan, Premith; Che Azemin, Mohd Zulfaezal; Kawasaki, Ryo; Mitchell, Paul
2014-01-01
Fractal dimensions (FDs) are frequently used for summarizing the complexity of the retinal vasculature. However, previous techniques on this topic were not zone specific. A new methodology to measure the FD of a specific zone in retinal images has been developed and tested as a marker for stroke prediction. Higuchi's fractal dimension was measured in the circumferential direction (FDC) with respect to the optic disk (OD), in three concentric regions between the OD boundary and 1.5 OD diameters from its margin. The significance of its association with future episodes of stroke was tested using the Blue Mountains Eye Study (BMES) database and compared against the spectrum fractal dimension (SFD) and box-counting (BC) dimension. Kruskal-Wallis analysis revealed FDC as a better predictor of stroke (H = 5.80, P = 0.016, α = 0.05) compared with SFD (H = 0.51, P = 0.475, α = 0.05) and BC (H = 0.41, P = 0.520, α = 0.05), with an overall lower median value for the cases compared to the control group. This work has shown that there is a significant association between the zone-specific FDC of eye fundus images and future episodes of stroke, while the difference is not significant when other FD methods are employed. PMID:25485298
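Higuchi's estimator operates on a 1-D series, such as an intensity or vessel profile sampled along a circumferential path of the kind described above; a compact version is sketched below (the zone construction around the optic disk is not shown, and `kmax` is an illustrative choice).

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension of a 1-D series: average the
    normalized curve length over the k offset sub-series for each
    lag k, then fit log L(k) against log(1/k)."""
    x = np.asarray(x, float)
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, kmax + 1):
        lk = []
        for m in range(k):                 # offset of the sub-series
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            length = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)
            lk.append(length * norm / k)
        log_k.append(np.log(1.0 / k))
        log_l.append(np.log(np.mean(lk)))
    return np.polyfit(log_k, log_l, 1)[0]

rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)   # white noise: FD near 2
line = np.arange(2000.0)            # straight line: FD exactly 1
```

The two limiting cases bracket the values reported for physiological signals: a smooth curve gives 1, maximally irregular noise gives 2.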
Discrete Cosine Transform Image Coding With Sliding Block Codes
NASA Astrophysics Data System (ADS)
Divakaran, Ajay; Pearlman, William A.
1989-11-01
A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image, followed by a search on a trellis-structured code. This code is a sliding block code that utilizes a constrained-size reproduction alphabet. The transform coding divides the image into blocks. The non-stationarity of the image is counteracted by grouping these blocks into clusters through a clustering algorithm and then encoding the clusters separately. Mandela-ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences. A separate search ensues on each of these Mandela-ordered sequences. Padding sequences are used to improve the trellis search fidelity; they absorb the error caused by building the trellis up to full size. The simulations were carried out on a 256x256 image ('LENA'). The results are comparable to those of existing schemes, and the visual quality of the image is enhanced considerably by the padding and clustering.
Compressed image transmission based on fountain codes
NASA Astrophysics Data System (ADS)
Wu, Jiaji; Wu, Xinhong; Jiao, L. C.
2011-11-01
In this paper, we propose a joint source-channel coding (JSCC) scheme for image transmission over wireless channels. In the scheme, fountain codes are integrated into bit-plane coding for channel coding. Compared to traditional erasure codes for error correction, such as Reed-Solomon codes, fountain codes are rateless and can generate sufficient symbols on the fly. Two schemes, an EEP (Equal Error Protection) scheme and a UEP (Unequal Error Protection) scheme, are described in the paper, and the UEP scheme performs better than the EEP scheme. The proposed scheme can not only adaptively adjust the length of the fountain codes according to the channel loss rate but also reconstruct the image even over a poor channel.
Landmine detection using IR image segmentation by means of fractal dimension analysis
NASA Astrophysics Data System (ADS)
Abbate, Horacio A.; Gambini, Juliana; Delrieux, Claudio; Castro, Eduardo H.
2009-05-01
This work is concerned with buried landmine detection using long-wave infrared images obtained during the heating or cooling of the soil, together with a segmentation process applied to the images. The segmentation is performed using local fractal dimension (LFD) analysis as a feature descriptor. We use two different LFD estimators: the box-counting dimension (BC) and the differential box-counting dimension (DBC). These features are computed on a per-pixel basis, and the set of features is clustered by means of the K-means method. This segmentation technique produces outstanding results at low computational cost.
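As an illustration of the BC estimator named above, here is a minimal global box-counting dimension for a binary image. The paper's detector computes a local FD per pixel over a sliding window (and also the differential box-counting variant); this sketch shows only the core counting-and-fit step under that simplification.

```python
import numpy as np

def box_counting_dimension(img):
    """Estimate the box-counting dimension of a binary 2D image:
    count occupied boxes at dyadic box sizes and fit the log-log slope."""
    img = np.asarray(img).astype(bool)
    n = min(img.shape)
    sizes, counts = [], []
    s = 1
    while s <= n // 2:
        # crop to a multiple of s, then view the image as an (s x s) block grid
        h = img.shape[0] // s * s
        w = img.shape[1] // s * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        filled = blocks.any(axis=(1, 3)).sum()  # boxes with >= 1 foreground pixel
        if filled > 0:
            sizes.append(s)
            counts.append(filled)
        s *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope  # N(s) ~ s^(-D)
```

Sanity checks: a fully filled image has dimension 2, and a single-pixel-wide line has dimension 1.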
Badea, Alexandru Florin; Lupsor Platon, Monica; Crisan, Maria; Cattani, Carlo; Badea, Iulia; Pierro, Gaetano; Sannino, Gianpaolo; Baciut, Grigore
2013-01-01
The geometry of some medical images of tissues, obtained by elastography and ultrasonography, is characterized in terms of complexity parameters such as the fractal dimension (FD). It is well known that any image contains very subtle details that are not easily detectable by the human eye. However, in many cases, such as medical imaging diagnosis, these details are very important, since they might contain hidden information about the possible existence of pathological lesions like tissue degeneration, inflammation, or tumors. Therefore, an automatic method of analysis could be an expedient tool for physicians to give a faultless diagnosis. Fractal analysis is of great importance for a quantitative evaluation of "real-time" elastography, a procedure considered operator dependent in current clinical practice. Mathematical analysis reveals significant discrepancies between normal and pathological image patterns. The main objective of our work is to demonstrate the clinical utility of this procedure on an ultrasound image corresponding to a submandibular diffuse pathology. PMID:23762183
Zaia, Annamaria
2015-01-01
Osteoporosis represents a major health condition for our growing elderly population. It accounts for severe morbidity and increased mortality in postmenopausal women, and it is becoming an emerging health concern even in aging men. Screening of the population at risk for bone degeneration, and treatment assessment of osteoporotic patients to prevent bone fragility fractures, are useful tools to improve quality of life in the elderly and to lessen the related socio-economic impact. Bone mineral density (BMD) estimated by dual-energy X-ray absorptiometry is normally used in clinical practice for osteoporosis diagnosis. Nevertheless, BMD alone is not a good predictor of fracture risk. From a clinical point of view, bone microarchitecture seems to be an intriguing aspect with which to characterize bone alteration patterns in aging and pathology. The spread of medical imaging techniques into clinical practice and the impressive advances in information technologies, together with enhanced computational power, have promoted a proliferation of new methods to assess changes of trabecular bone architecture (TBA) during aging and osteoporosis. Magnetic resonance imaging (MRI) has recently arisen as a useful tool to measure bone structure in vivo. In particular, high-resolution MRI techniques have introduced new perspectives for TBA characterization by non-invasive, non-ionizing methods. However, texture analysis methods have not found favor with clinicians, as they produce quite a few parameters whose interpretation is difficult. The introduction into the biomedical field of paradigms such as the theory of complexity, chaos, and fractals suggests new approaches and provides innovative tools to develop computerized methods that, by producing a limited number of parameters sensitive to pathology onset and progression, would speed their application into clinical practice. Complexity of living beings and fractality of several physio-anatomic structures suggest
Advanced Imaging Optics Utilizing Wavefront Coding.
Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen
2015-06-01
Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.
An edge preserving differential image coding scheme
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1992-01-01
Differential encoding techniques are fast and easy to implement. However, a major problem with the use of differential encoding for images is the rapid edge degradation encountered when using such systems. This makes differential encoding techniques of limited utility, especially when coding medical or scientific images, where edge preservation is of utmost importance. A simple, easy to implement differential image coding system with excellent edge preservation properties is presented. The coding system can be used over variable rate channels, which makes it especially attractive for use in the packet network environment.
An edge preserving differential image coding scheme
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
Differential encoding techniques are fast and easy to implement. However, a major problem with the use of differential encoding for images is the rapid edge degradation encountered when using such systems. This makes differential encoding techniques of limited utility especially when coding medical or scientific images, where edge preservation is of utmost importance. We present a simple, easy to implement differential image coding system with excellent edge preservation properties. The coding system can be used over variable rate channels which makes it especially attractive for use in the packet network environment.
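The differential encoding these two records build on can be illustrated with a minimal lossless DPCM round trip: transmit the first sample plus successive differences. This is plain first-difference coding, not the authors' edge-preserving variant, whose details the abstracts do not give.

```python
import numpy as np

def dpcm_encode(row):
    """Differential (DPCM) encoding of a 1D pixel row: the first sample
    is sent as-is, followed by the difference of each sample from its
    predecessor. Differences are typically small and cheap to entropy-code."""
    row = np.asarray(row, dtype=int)
    return row[0], np.diff(row)

def dpcm_decode(first, residuals):
    """Invert dpcm_encode by accumulating the differences."""
    return np.concatenate([[first], first + np.cumsum(residuals)])
```

The round trip is exact; the edge-degradation problem the papers address only appears once the residuals are quantized for lossy transmission.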
Bitplane Image Coding With Parallel Coefficient Processing.
Auli-Llinas, Francesc; Enfedaque, Pablo; Moure, Juan C; Sanchez, Victor
2016-01-01
Image coding systems have been traditionally tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in the codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data; most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to their inherently sequential coding tasks. This paper presents bitplane image coding with parallel coefficient (BPC-PaCo) processing, a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been re-formulated. The experimental results suggest that the penalization in coding performance of BPC-PaCo with respect to the traditional strategies is almost negligible. PMID:26441420
Coding and transmission of subband coded images on the Internet
NASA Astrophysics Data System (ADS)
Wah, Benjamin W.; Su, Xiao
2001-09-01
Subband-coded images can be transmitted over the Internet using either the TCP or the UDP protocol. Delivery by TCP gives superior decoding quality but very long delays when the network is unreliable, whereas delivery by UDP has negligible delays but degraded quality when packets are lost. Although images are currently delivered over the Internet by TCP, in this paper we study the use of UDP to deliver multi-description, reconstruction-based subband-coded images. First, to facilitate recovery from UDP packet losses, we propose a joint sender-receiver approach for designing an optimized reconstruction-based subband transform (ORB-ST) in multi-description coding (MDC). Second, we carefully evaluate the delay-quality trade-offs between TCP delivery of single-description coded (SDC) images and UDP and combined TCP/UDP delivery of MDC images. Experimental results show that the proposed ORB-ST performs well in real Internet tests, and UDP and combined TCP/UDP delivery of MDC images provide a range of attractive alternatives to TCP delivery.
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Sabyasachi; Das, Nandan K.; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.
2014-02-01
The objective of the present work is to diagnose pre-cancer by wavelet transform and multi-fractal de-trended fluctuation analysis of DIC images of normal and different grades of cancerous tissue. Our DIC imaging and fluctuation analysis methods (discrete and continuous wavelet transforms, MFDFA) confirm the ability to detect and diagnose early-stage cancer in cervical tissue.
Azevedo-Marques, P M; Spagnoli, H F; Frighetto-Pereira, L; Menezes-Reis, R; Metzner, G A; Rangayyan, R M; Nogueira-Barbosa, M H
2015-08-01
Fractures with partial collapse of vertebral bodies are generically referred to as "vertebral compression fractures" or VCFs. VCFs can have different etiologies, comprising trauma, bone failure related to osteoporosis, or metastatic cancer affecting bone. VCFs related to osteoporosis (benign fractures) and to cancer (malignant fractures) are commonly found in the elderly population. In the clinical setting, the differentiation between benign and malignant fractures is complex and difficult. This paper presents a study aimed at developing a computer-aided diagnosis system to help differentiate between malignant and benign VCFs in magnetic resonance imaging (MRI). We used T1-weighted MRI of the lumbar spine in the sagittal plane. Images from 47 consecutive patients (31 women, 16 men, mean age 63 years) were studied, including 19 malignant fractures and 54 benign fractures. Spectral and fractal features were extracted from manually segmented images of 73 vertebral bodies with VCFs. The classification of malignant vs. benign VCFs was performed using the k-nearest neighbor classifier with the Euclidean distance. The results show that combinations of features derived from Fourier and wavelet transforms, together with the fractal dimension, achieved a correct classification rate of up to 94.7%, with an area under the receiver operating characteristic curve of up to 0.95. PMID:26736364
Image content authentication based on channel coding
NASA Astrophysics Data System (ADS)
Zhang, Fan; Xu, Lei
2008-03-01
Content authentication determines whether an image has been tampered with and, if necessary, locates the malicious alterations made to the image. Authentication of a still image or a video is motivated by the recipient's interest, and its principle is that a receiver must be able to identify the source of the document reliably. Several techniques and concepts based on data hiding or steganography have been designed as means of image authentication. This paper presents a color image authentication algorithm based on convolutional coding. The high bits of the color digital image are coded with convolutional codes for tamper detection and localization, while the authentication messages are hidden in the low bits of the image to keep the authentication invisible. All communication channels are subject to errors introduced by additive Gaussian noise in their environment. Such data perturbations cannot be eliminated, but their effect can be minimized by the use of Forward Error Correction (FEC) techniques in the transmitted data stream, with decoders in the receiving system that detect and correct bits in error. The message of each pixel is convolutionally encoded, and after parity checking and block interleaving, the redundant bits are embedded in the image offset. Tampering can thus be detected and restored without access to the original image.
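A minimal convolutional encoder of the kind the authentication scheme relies on can be sketched as follows. The abstract does not specify the code, so the rate-1/2, constraint-length-3 code with generators (7, 5) in octal used here is an illustrative assumption, not the authors' choice.

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3.

    For each input bit, the bit is shifted into a 3-bit register and two
    output bits are produced: the parities of the register masked by the
    two generator polynomials (here (7, 5) in octal, an assumed choice).
    """
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111          # shift in the new bit
        out.append(bin(state & g1).count("1") % 2)  # parity under generator 1
        out.append(bin(state & g2).count("1") % 2)  # parity under generator 2
    return out
```

Each input bit yields two coded bits, giving the redundancy that a Viterbi decoder (not shown) would later exploit to detect and correct errors.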
NASA Astrophysics Data System (ADS)
Athanasopoulou, Labrini; Athanasopoulos, Stavros; Karamanos, Kostas; Almirantis, Yannis
2010-11-01
Statistical methods, including block entropy based approaches, have already been used in the study of long-range features of genomic sequences seen as symbol series, considering either the full alphabet of the four nucleotides or the binary purine/pyrimidine character set. Here we explore the alternation of short protein-coding segments with long noncoding spacers in entire chromosomes, focusing on the scaling properties of block entropy. In previous studies, it has been shown that the sizes of noncoding spacers follow power-law-like distributions in most chromosomes of eukaryotic organisms from distant taxa. We have developed a simple evolutionary model based on well-known molecular events (segmental duplications followed by elimination of most of the duplicated genes) which reproduces the observed linearity in log-log plots. The scaling properties of block entropy H(n) have been studied in several works. Their findings suggest that linearity in semilogarithmic scale characterizes symbol sequences which exhibit fractal properties and long-range order, and this linearity has been demonstrated for the logistic map at the Feigenbaum accumulation point. The present work starts with the observation that the block entropy of a Cantor-like binary symbol series scales in a similar way. Then, we perform the same analysis for the full set of human chromosomes and for several chromosomes of other eukaryotes. A similar but less extended linearity in semilogarithmic scale, indicating fractality, is observed, while randomly formed surrogate sequences clearly lack this type of scaling. Genomic sequences always present entropy values much lower than their random surrogates. Symbol sequences produced by the aforementioned evolutionary model follow the scaling found in genomic sequences, thus corroborating the conjecture that “segmental duplication-gene elimination” dynamics may have contributed to the observed long-rangeness in the coding or noncoding alternation in
Evaluation of super-resolution imager with binary fractal test target
NASA Astrophysics Data System (ADS)
Landeau, Stéphane
2014-10-01
Today, a new generation of powerful non-linear image processing algorithms is used for real-time super-resolution or noise reduction. Optronic imagers with such features are becoming difficult to assess, because spatial resolution and sensitivity now depend on scene content. Many algorithms include a regularization process, which usually reduces image complexity to enhance spread edges or contours; small but important scene details can then be deleted by this kind of processing. In this paper, a binary fractal test target is presented, with a structured clutter pattern and an interesting multi-scale self-similarity property. The apparent structured clutter of this test target offers a trade-off between white noise, unlikely in real scenes, and very structured targets such as MTF targets. Together with the fractal design of the target, an assessment method has been developed to automatically evaluate the non-linear effects on the image acquired and processed by the imager. The calculated figure of merit is designed to be directly comparable to the linear Fourier MTF: the Haar wavelet elements distributed spatially and at different scales on the target play the role of sine Fourier cycles at different frequencies. The probability of correct resolution indicates the ability to read the correct Haar contrast among all Haar wavelet elements under a position constraint. For method validation, two different imager types were simulated, a well-sampled linear system and an under-sampled one, each coupled with super-resolution or noise-reduction algorithms. The influence of the target contrast on the figures of merit is analyzed. Finally, the possible introduction of this new figure of merit into existing analytical range performance models, such as TRM4 (Fraunhofer IOSB) or NVIPM (NVESD), is discussed. Benefits and limitations of the method are also compared to the TOD (TNO) evaluation method.
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.; Emerson, Charles W.; Lam, Nina Siu-Ngan; Laymon, Charles A.
1997-01-01
The Image Characterization And Modeling System (ICAMS) is a public domain software package designed to provide scientists with innovative spatial analytical tools to visualize, measure, and characterize landscape patterns so that environmental conditions or processes can be assessed and monitored more effectively. In this study ICAMS has been used to evaluate how changes in fractal dimension, as a landscape characterization index, and resolution are related to differences in Landsat images collected at different dates for the same area. Landsat Thematic Mapper (TM) data obtained in May and August 1993 over a portion of the Great Basin Desert in eastern Nevada were used for analysis. These data represent contrasting periods of peak "green-up" and "dry-down" for the study area. The TM data sets were converted into Normalized Difference Vegetation Index (NDVI) images to expedite analysis of differences in fractal dimension between the two dates. These NDVI images were also resampled from the original 30 meter pixel size to resolutions of 60, 120, 240, 480, and 960 meters, to permit an assessment of how fractal dimension varies with spatial resolution. Tests of fractal dimension for the two dates at various pixel resolutions show that the D values in the August image become increasingly complex as pixel size increases to 480 meters. The D values in the May image show an even more complex relationship to pixel size than that expressed in the August image. Fractal dimensions for a difference image computed for the May and August dates increase with pixel size up to a resolution of 120 meters, and then decline with increasing pixel size. This means that the greatest complexity in the difference images occurs around a resolution of 120 meters, which is analogous to the operational domain of the changes in vegetation and snow cover that constitute the differences between the two dates.
NASA Astrophysics Data System (ADS)
Popescu, Dan P.; Flueraru, Costel; Mao, Youxin; Chang, Shoude; Sowa, Michael G.
2010-02-01
Two methods for analyzing OCT images of arterial tissues are tested. These methods are applied to two types of samples: segments of arteries collected from atherosclerosis-prone Watanabe heritable hyper-lipidemic rabbits and pieces of porcine left descending coronary arteries without atherosclerosis. The first method is based on finding the attenuation coefficients for the OCT signal that propagates through various regions of the tissue. The second method involves calculating the fractal dimensions of the OCT signal textures in the regions of interest identified within the acquired images; a box-counting algorithm is used for calculating the fractal dimensions. Both parameters, the attenuation coefficient and the fractal dimension, correlate very well with the anatomical features of both types of samples.
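The first method, fitting an attenuation coefficient to the depth-resolved OCT signal, can be sketched as a log-linear least-squares fit under an assumed single-exponential (single-scattering) decay model I(z) = I0·exp(-mu·z); the paper's exact fitting procedure is not given in the abstract.

```python
import numpy as np

def attenuation_coefficient(depth_mm, intensity):
    """Estimate the attenuation coefficient mu (per mm) of an OCT A-scan
    segment, assuming single-exponential decay I(z) = I0 * exp(-mu * z).
    Taking logs makes the model linear, so mu is minus the fitted slope
    of log I versus depth."""
    slope, _ = np.polyfit(depth_mm, np.log(intensity), 1)
    return -slope
```

In practice the fit would be restricted to a region of interest below the tissue surface and the raw signal would be averaged laterally to suppress speckle before fitting.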
Medical image retrieval and analysis by Markov random fields and multi-scale fractal dimension.
Backes, André Ricardo; Gerhardinger, Leandro Cavaleri; Batista Neto, João do Espírito Santo; Bruno, Odemir Martinez
2015-02-01
Many Content-based Image Retrieval (CBIR) systems and image analysis tools employ color, shape and texture (in a combined fashion or not) as attributes, or signatures, to retrieve images from databases or to perform image analysis in general. Among these attributes, texture has turned out to be the most relevant, as it allows the identification of a larger number of images of a different nature. This paper introduces a novel signature which can be used for image analysis and retrieval. It combines texture with complexity extracted from objects within the images. The approach consists of a texture segmentation step, modeled as a Markov Random Field process, followed by the estimation of the complexity of each computed region. The complexity is given by a Multi-scale Fractal Dimension. Experiments have been conducted using an MRI database in both pattern recognition and image retrieval contexts. The results show the accuracy of the proposed method in comparison with other traditional texture descriptors and also indicate how the performance changes as the level of complexity is altered. PMID:25586375
Medical image retrieval and analysis by Markov random fields and multi-scale fractal dimension
NASA Astrophysics Data System (ADS)
Backes, André Ricardo; Cavaleri Gerhardinger, Leandro; do Espírito Santo Batista Neto, João; Martinez Bruno, Odemir
2015-02-01
Many Content-based Image Retrieval (CBIR) systems and image analysis tools employ color, shape and texture (in a combined fashion or not) as attributes, or signatures, to retrieve images from databases or to perform image analysis in general. Among these attributes, texture has turned out to be the most relevant, as it allows the identification of a larger number of images of a different nature. This paper introduces a novel signature which can be used for image analysis and retrieval. It combines texture with complexity extracted from objects within the images. The approach consists of a texture segmentation step, modeled as a Markov Random Field process, followed by the estimation of the complexity of each computed region. The complexity is given by a Multi-scale Fractal Dimension. Experiments have been conducted using an MRI database in both pattern recognition and image retrieval contexts. The results show the accuracy of the proposed method in comparison with other traditional texture descriptors and also indicate how the performance changes as the level of complexity is altered.
NASA Astrophysics Data System (ADS)
Yang, Y.; Li, H. T.; Han, Y. S.; Gu, H. Y.
2015-06-01
Image segmentation is the foundation of further object-oriented image analysis, understanding, and recognition, and it is one of the key technologies in high-resolution remote sensing applications. In this paper, a new fast image segmentation algorithm for high-resolution remote sensing imagery is proposed, based on graph theory and the fractal net evolution approach (FNEA). First, the image is modelled as a weighted undirected graph, where nodes correspond to pixels and edges connect adjacent pixels. An initial object layer can be obtained efficiently from graph-based segmentation, which runs in time nearly linear in the number of image pixels. FNEA then starts from the initial object layer and pairwise-merges neighbouring objects so as to minimize the resulting summed heterogeneity. Furthermore, according to the character of different features in high-resolution remote sensing images, three different merging criteria for image objects, based on spectral and spatial information, are adopted. Finally, in a comparison with the commercial remote sensing software eCognition, the experimental results demonstrate that the efficiency of the algorithm is significantly improved while maintaining good feature boundaries.
Mossotti, Victor G.; Eldeeb, A. Raouf
2000-01-01
Turcotte, 1997, and Barton and La Pointe, 1995, have identified many potential uses for the fractal dimension in physicochemical models of surface properties. The image-analysis program described in this report is an extension of the program set MORPH-I (Mossotti and others, 1998), which provided the fractal analysis of electron-microscope images of pore profiles (Mossotti and Eldeeb, 1992). MORPH-II, an integration of the modified kernel of the program MORPH-I with image calibration and editing facilities, was designed to measure the fractal dimension of the exposed surfaces of stone specimens as imaged in cross section in an electron microscope.
Computing Challenges in Coded Mask Imaging
NASA Technical Reports Server (NTRS)
Skinner, Gerald
2009-01-01
This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to build the telescope, i.e., for wide fields of view, energies too high for focusing optics or too low for Compton/tracker techniques, combined with very good angular resolution. The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, along with a chart of the types of position-sensitive detectors used for coded mask telescopes. Slides describe the mechanism of recovering an image from the masked pattern, including correlation with the mask pattern. The matrix approach is reviewed, and other approaches to image reconstruction are described. The presentation ends with a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of EXIST/HET with SWIFT/BAT, and details of the EXIST/HET design.
ERIC Educational Resources Information Center
Esbenshade, Donald H., Jr.
1991-01-01
Develops the idea of fractals through a laboratory activity that calculates the fractal dimension of ordinary white bread. Extends use of the fractal dimension to compare other complex structures, such as other breads and sponges. (MDH)
Coded Access Optical Sensor (CAOS) Imager
NASA Astrophysics Data System (ADS)
Riza, N. A.; Amin, M. J.; La Torre, J. P.
2015-04-01
High spatial resolution, low inter-pixel crosstalk, high signal-to-noise ratio (SNR), adequate application dependent speed, economical and energy efficient design are common goals sought after for optical image sensors. In optical microscopy, overcoming the diffraction limit in spatial resolution has been achieved using materials chemistry, optimal wavelengths, precision optics and nanomotion-mechanics for pixel-by-pixel scanning. Imagers based on pixelated imaging devices such as CCD/CMOS sensors avoid pixel-by-pixel scanning as all sensor pixels operate in parallel, but these imagers are fundamentally limited by inter-pixel crosstalk, in particular with interspersed bright and dim light zones. In this paper, we propose an agile pixel imager sensor design platform called Coded Access Optical Sensor (CAOS) that can greatly alleviate the mentioned fundamental limitations, empowering smart optical imaging for particular environments. Specifically, this novel CAOS imager engages an application dependent electronically programmable agile pixel platform using hybrid space-time-frequency coded multiple-access of the sampled optical irradiance map. We demonstrate the foundational working principles of the first experimental electronically programmable CAOS imager using hybrid time-frequency multiple access sampling of a known high contrast laser beam irradiance test map, with the CAOS instrument based on a Texas Instruments (TI) Digital Micromirror Device (DMD). This CAOS instrument provides imaging data that exhibits 77 dB electrical SNR and the measured laser beam image irradiance specifications closely match (i.e., within 0.75% error) the laser manufacturer provided beam image irradiance radius numbers. The proposed CAOS imager can be deployed in many scientific and non-scientific applications where pixel agility via electronic programmability can pull out desired features in an irradiance map subject to the CAOS imaging operation.
Image coding compression based on DCT
NASA Astrophysics Data System (ADS)
Feng, Fei; Liu, Peixue; Jiang, Baohua
2012-04-01
With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are popular, but they consume more storage space and more bandwidth when transferred over the Internet, so the study of image compression technology is necessary. At present, many image compression algorithms are applied on networks, and image compression standards have been established. This paper analyzes the discrete cosine transform (DCT). First, the principle of the DCT is presented; understanding it is essential to image compression because the technique is so widely used. Second, a deeper understanding of the DCT is developed using Matlab, covering the process of DCT-based image compression and an analysis of Huffman coding. Third, DCT-based image compression is demonstrated in Matlab, together with an analysis of the quality of the compressed picture. The DCT is certainly not the only algorithm for image compression; other algorithms can produce compressed images of high quality, and image compression technology will be widely used in networks and communications in the future.
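The DCT-based pipeline the paper walks through in Matlab can be sketched equivalently in Python. The block below builds an orthonormal DCT-II matrix, transforms an 8x8 block, and keeps only the largest-magnitude coefficients as a crude stand-in for the quantization and Huffman-coding steps the paper discusses; the threshold rule here is an illustrative simplification, not the JPEG quantization tables.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: row k holds the k-th cosine basis vector."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(2)          # DC row scaling for orthonormality
    return M * np.sqrt(2 / n)

def compress_block(block, keep=10):
    """Transform a square block, zero all but the `keep` largest-magnitude
    DCT coefficients, and reconstruct (keep <= block size squared)."""
    M = dct_matrix(block.shape[0])
    coeffs = M @ block @ M.T         # 2D DCT: transform rows then columns
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0
    return M.T @ coeffs @ M          # inverse 2D DCT
```

Because the transform matrix is orthonormal, keeping all 64 coefficients of an 8x8 block reconstructs it exactly; compression comes from discarding the small high-frequency coefficients.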
JPEG2000 still image coding quality.
Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei
2013-10-01
This work compares the image quality produced by two popular JPEG2000 programs. Both medical image compression implementations are based on JPEG2000, but they differ in interface, convenience, speed of computation, and in options that affect the encoder, quantization, tiling, etc. Image quality and compression ratio are also affected by the modality and by the implementation. Do they provide the same quality? The quality of compressed medical images from the two programs, Apollo and JJ2000, was evaluated extensively using objective metrics. The algorithms were applied to three medical image modalities at compression ratios ranging from 10:1 to 100:1, and the quality of the reconstructed images was then evaluated using five objective metrics. The Spearman rank correlation coefficients were measured under every metric for the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated under the five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo implementations is statistically equivalent for medical image compression. PMID:23589187
Coded-aperture imaging in nuclear medicine
NASA Technical Reports Server (NTRS)
Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.
1989-01-01
Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.
Transform coding of stereo image residuals.
Moellenhoff, M S; Maier, M W
1998-01-01
Stereo image compression is of growing interest because of new display technologies and the needs of telepresence systems. Compared to monoscopic image compression, stereo image compression has received much less attention. A variety of algorithms have appeared in the literature that make use of the cross-view redundancy in the stereo pair. Many of these use the framework of disparity-compensated residual coding, but concentrate on the disparity compensation process rather than the post-compensation coding process. This paper studies specialized coding methods for the residual image produced by disparity compensation. The algorithms make use of theoretically expected and experimentally observed characteristics of the disparity-compensated stereo residual to select transforms and quantization methods. Performance is evaluated using mean squared error (MSE) and a stereo-unique metric based on image registration. Exploiting the directional characteristics in a discrete cosine transform (DCT) framework provides its best performance below 0.75 b/pixel for 8-b gray-scale imagery and below 2 b/pixel for 24-b color imagery. In the wavelet algorithm, roughly a 50% reduction in bit rate is possible by encoding only the vertical channel, where much of the stereo information is contained. The proposed algorithms do not incur substantial computational burden beyond that needed for any disparity-compensated residual algorithm. PMID:18276294
A Germanium-Based, Coded Aperture Imager
Ziock, K P; Madden, N; Hull, E; William, C; Lavietes, T; Cork, C
2001-10-31
We describe a coded-aperture based gamma-ray imager that uses a unique hybrid germanium detector system. A planar germanium strip detector, eleven millimeters thick, is followed by a coaxial detector. The 19 x 19 strip detector (2 mm pitch) is used to determine the location and energy of low-energy events. The locations of high-energy events are determined from the location of the Compton scatter in the planar detector, and the energy is determined from the sum of the coaxial and planar energies. With this geometry, we obtain useful quantum efficiency in a position-sensitive mode out to 500 keV. The detector is used with a 19 x 17 URA coded aperture to obtain spectrally resolved images in the gamma-ray band. We discuss the performance of the planar detector and the hybrid system, and present images taken of laboratory sources.
Finding maximum JPEG image block code size
NASA Astrophysics Data System (ADS)
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is given as an upper bound on the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraint is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a buffer of minimum 346 bits and maximum 433 bits is sufficient to hold the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
NASA Astrophysics Data System (ADS)
Boychuk, T. M.; Bodnar, B. M.; Vatamanesku, L. I.
2012-01-01
For the first time, complex correlation and fractal analysis was used to investigate microscopic images of both hemangioma tissues and liquids. A physical model is proposed to describe the formation of the phase distributions of coherent radiation transformed by optically anisotropic biological structures. The phase maps of laser radiation in the boundary diffraction zone were used as the main information parameter. The results of investigating the interrelation between the correlation parameters (correlation area, asymmetry coefficient and autocorrelation function excess) and the fractal parameter (dispersion of logarithmic dependencies of power spectra) are presented. They characterize the coordinate distributions of phase shifts in the points of laser images of histological sections of hemangioma, hemangioma blood smears and blood plasma with vascular system pathologies. The diagnostic criteria of hemangioma nascency are determined.
Scalable image coding for interactive image communication over networks
NASA Astrophysics Data System (ADS)
Yoon, Sung H.; Lee, Ji H.; Alexander, Winser E.
2000-12-01
This paper presents a new, scalable coding technique that can be used in interactive image/video communications over the Internet. The proposed technique generates a fully embedded bit stream that provides scalability with high quality for the whole image, and it can be used to implement region-based coding as well. The embedded bit stream comprises a basic layer and many enhancement layers, which add refinement to the quality of the image reconstructed from the basic layer. The proposed coding technique uses multiple quantizers with thresholds (QT) for layering and creates a bit plane for each layer. The bit plane is then partitioned into sets of small areas to be coded independently. Run-length and entropy coding are applied to each of the sets to provide scalability for the entire image, resulting in high picture quality in the user-specified region of interest (ROI). We tested this technique by applying it to various test images, and the results consistently show a high level of performance.
Coded source imaging simulation with visible light
NASA Astrophysics Data System (ADS)
Wang, Sheng; Zou, Yubin; Zhang, Xueshuang; Lu, Yuanrong; Guo, Zhiyu
2011-09-01
A coded source can increase the neutron flux while maintaining a high L/D ratio, which may benefit a neutron imaging system with a low-yield neutron source. Visible-light CSI experiments were carried out to test the physical design and reconstruction algorithm. We used a non-mosaic Modified Uniformly Redundant Array (MURA) mask to project the shadow of black/white samples on a screen, and a cooled CCD camera recorded the image on the screen. Different mask sizes and amplification factors were tested. Correlation, Wiener filter deconvolution and Richardson-Lucy maximum likelihood iteration algorithms were employed to reconstruct the object image from the original projection. The results show that CSI can benefit low-flux neutron imaging with high background noise.
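A hedged sketch of the mask-shadow and correlation reconstruction described above, following the published MURA recipe for a small prime p = 13 (our own choice; the Wiener and Richardson-Lucy variants are omitted):

```python
import numpy as np

# Build a p x p MURA mask (p prime, p % 4 == 1) from quadratic residues.
p = 13
qr = {(i * i) % p for i in range(1, p)}           # quadratic residues mod p
C = np.array([1 if i in qr else -1 for i in range(p)])

A = np.zeros((p, p), dtype=int)                   # the MURA mask (1 = open)
for i in range(1, p):
    A[i, 0] = 1
    for j in range(1, p):
        A[i, j] = 1 if C[i] * C[j] == 1 else 0

G = 2 * A - 1                                     # balanced decoding array
G[0, 0] = 1

# Simulate a point source: the detector sees a shifted mask shadow
# (periodic convolution), then decode by periodic correlation with G.
obj = np.zeros((p, p)); obj[3, 7] = 1.0           # hypothetical source at (3, 7)
det = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(A)))
rec = np.real(np.fft.ifft2(np.fft.fft2(det) * np.conj(np.fft.fft2(G))))

peak = np.unravel_index(np.argmax(rec), rec.shape)  # source position recovered
```

The MURA/decoding pair has a delta-like periodic cross-correlation, so the reconstruction peak lands on the source position; extended objects decode as superpositions of such peaks.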
Complementary lattice arrays for coded aperture imaging
NASA Astrophysics Data System (ADS)
Ding, Jie; Noshad, Mohammad; Tarokh, Vahid
2016-05-01
In this work, we consider complementary lattice arrays in order to enable a broader range of designs for coded aperture imaging systems. We provide a general framework and methods that generate richer and more flexible designs than existing ones. Besides this, we review and interpret the state-of-the-art uniformly redundant arrays (URA) designs, broaden the related concepts, and further propose some new design methods.
NASA Astrophysics Data System (ADS)
Jia, Peng; Cai, Dongmei; Wang, Dong
2014-11-01
A parallel blind deconvolution algorithm is presented. The algorithm contains the constraints of the point spread function (PSF) derived from the physical process of the imaging. Additionally, in order to obtain an effective restored image, the fractal energy ratio is used as an evaluation criterion to estimate the quality of the image. This algorithm is fine-grained parallelized to increase the calculation speed. Results of numerical experiments and real experiments indicate that this algorithm is effective.
A formal link of anticipatory mental imaging with fractal features of biological time
NASA Astrophysics Data System (ADS)
Bounias, Michel; Bonaly, André
2001-06-01
Previous works have supported the proposition that biological organisms are endowed with perceptive functions based on fixed points in mental chaining sequences (Bounias and Bonaly, 1997). Former conjectures proposed that memory could be fractal (Dubois, 1990; Bonaly, 1989, 1994) and that biological time, standing at the intersection of the arrows of past events and of future events, exhibits some similarity with the construction of a Koch-like structure (Bonaly, 2000). A formal examination of the biological system of perception now shows that the perception of time occurs at the intersection of two consecutive fixed-point sequences. Therefore, time-flow is mapped by sequences of fixed points, each of which is the convergence value of a sequence of neuronal configurations. Since the latter are indexed by the ordered sequences of closed Poincaré sections determining the physical arrow of time (Bonaly and Bounias, 1995), there exists a surjective Lipschitz-Hölder mapping of physical time onto system-perceived time. The succession of consecutive fixed points of the perceptive sequence in turn constitutes a sequence whose properties account for the apparent continuity of time-perception, while fulfilling the basic nonlinearity of time as a general parameter. A generator polygon is shown to be constituted by four sides: (i) the interval between two consecutive bursts of perceptions provides the base, with a projection paralleling the arrow of physical time; (ii) the top is constituted by the sequence of repeated fixed points accounting for the mental image obtained from the first burst, and further maintained up to the formation of the next fixed point; (iii) the first lateral side is the difference (Lk-1 to Lk) between the measure (L) of neuronal chains leading to the first image a(uk); and (iv) the second lateral side is the difference of measure between Lk and Lk+1. The equation of the system is therefore of the incursive type, and this
NASA Astrophysics Data System (ADS)
Xie, Xufen; Wang, Hongyuan; Zhang, Wei
2015-11-01
This paper deals with the estimation of the in-orbit modulation transfer function (MTF) from a remote sensing image sequence, which is often difficult to measure because of a lack of suitable target images. A model is constructed which combines a fractal Brownian motion model, describing the stochastic fractal characteristics of natural images, with an inverse Fourier transform of an ideal remote sensing image amplitude spectrum. The model is used to decouple the blurring effect from an ideal natural image. Then, a model of statistical MTF estimation is built from the standard deviation of the image sequence amplitude spectrum. Furthermore, the model parameters are estimated under the ergodicity assumption for a remote sensing image sequence. Finally, the results of the statistical MTF estimation method are given and verified. The experimental results demonstrate that the method is practical and effective, and the relative deviation at the Nyquist frequency between the edge method and the method in this paper is less than 5.74%. The MTF estimation method is applicable to remote sensing image sequences and is not restricted by characteristic targets in the images.
Fast-neutron, coded-aperture imager
NASA Astrophysics Data System (ADS)
Woolf, Richard S.; Phlips, Bernard F.; Hutcheson, Anthony L.; Wulf, Eric A.
2015-06-01
This work discusses a large-scale, coded-aperture imager for fast neutrons, building on a proof-of-concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems, using coded-aperture imaging, for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2-D location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash digitizing electronics. Digital pulse shape discrimination (PSD) was performed to filter the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with a 1.8-μCi and a 13-μCi 252Cf neutron/γ source at three standoff distances of 9, 15 and 26 m (the maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fast-neutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. This shielding configuration led
NASA Astrophysics Data System (ADS)
Feng, Bin; Zhang, Chengshuo; Xu, Baoshu; Shi, Zelin
2015-11-01
Artefacts and noise degrade the decoded image of a wavefront coding infrared imaging system, which usually results in the decoded image being inferior to the in-focus infrared image of a conventional infrared imaging system. A previous letter showed that the decoded image fell behind the in-focus infrared image. For comparison, a bar-target experiment at a temperature of 20°C and two groups of outdoor experiments at temperatures of 28°C and 70°C were conducted. Experimental results show that a wavefront coding infrared imaging system can produce a decoded image that closely approximates its corresponding in-focus infrared image.
Dual-sided coded-aperture imager
Ziock, Klaus-Peter
2009-09-22
In a vehicle, a single detector plane simultaneously measures radiation coming through two coded-aperture masks, one on either side of the detector. To determine which side of the vehicle a source is on, the two shadow masks are inverses of each other, i.e., one is a mask and the other is the anti-mask. All of the collected data is processed through two versions of an image reconstruction algorithm: one treats the data as if it were obtained through the mask, the other as though the data were obtained through the anti-mask.
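A 1-D toy of the mask/anti-mask idea (our own construction, not the patented system): decode the same detector data with both decoding arrays and pick the side whose reconstruction peaks more strongly. The quadratic-residue mask is our choice because its periodic autocorrelation is flat off-peak.

```python
import numpy as np

n = 31                                       # prime length, n % 4 == 3
qr = {(i * i) % n for i in range(1, n)}
mask = np.array([1 if i in qr else 0 for i in range(n)])  # side-A mask
anti = 1 - mask                              # side-B mask is the complement

def decode(det, m):
    g = 2 * m - 1                            # balanced decoding array
    return np.real(np.fft.ifft(np.fft.fft(det) * np.conj(np.fft.fft(g))))

src = 11                                     # hypothetical source position
det = np.roll(mask, src)                     # source behind the mask side

rec_mask = decode(det, mask)                 # treat data as seen through mask
rec_anti = decode(det, anti)                 # ... and through the anti-mask

# The reconstruction with the stronger peak identifies the source side.
side = "mask" if rec_mask.max() > rec_anti.max() else "anti-mask"
```

For a source behind the anti-mask, the roles reverse: `rec_anti` carries the sharp peak and `rec_mask` the inverted one, so the same two-decode comparison resolves the side.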
Modulated lapped transforms in image coding
NASA Astrophysics Data System (ADS)
de Queiroz, Ricardo L.; Rao, K. R.
1994-05-01
The class of modulated lapped transforms (MLT) with extended overlap is investigated for image coding. The finite-length-signal implementation using symmetric extensions is introduced, and human visual sensitivity arrays are computed. Theoretical comparisons with other popular transforms are carried out and simulations are made using intraframe coders. Emphasis is given to transmission over packet networks assuming high rates of data loss. The MLT with overlap factor 2 is shown to be superior in all our tests, with bonus features such as greater robustness against block losses.
NASA Astrophysics Data System (ADS)
Mosquera Lopez, Clara; Agaian, Sos
2013-02-01
Prostate cancer detection and staging is an important step towards patient treatment selection. Advancements in digital pathology allow the application of new quantitative image analysis algorithms for computer-assisted diagnosis (CAD) on digitized histopathology images. In this paper, we introduce a new set of features to automatically grade pathological images using the well-known Gleason grading system. The goal of this study is to classify biopsy images belonging to Gleason patterns 3, 4, and 5 by using a combination of wavelet and fractal features. For image classification we use pairwise coupling Support Vector Machine (SVM) classifiers. The accuracy of the system, which is close to 97%, is estimated through three different cross-validation schemes. The proposed system offers the potential for automating classification of histological images and supporting prostate cancer diagnosis.
FRACTAL DIMENSION OF GALAXY ISOPHOTES
Thanki, Sandip; Rhee, George; Lepp, Stephen E-mail: grhee@physics.unlv.edu
2009-09-15
In this paper we investigate the use of the fractal dimension of galaxy isophotes in galaxy classification. We have applied two different methods for determining fractal dimensions to the isophotes of elliptical and spiral galaxies derived from CCD images. We conclude that fractal dimension alone is not a reliable tool but that, combined with other parameters in a neural net algorithm, the fractal dimension could be of use. In particular, we have used three parameters to segregate the ellipticals and lenticulars from the spiral galaxies in our sample. These three parameters are the correlation fractal dimension D_corr, the difference between the correlation fractal dimension and the capacity fractal dimension D_corr - D_cap, and, thirdly, the B - V color of the galaxy.
Fractals in biology and medicine
NASA Technical Reports Server (NTRS)
Havlin, S.; Buldyrev, S. V.; Goldberger, A. L.; Mantegna, R. N.; Ossadnik, S. M.; Peng, C. K.; Simons, M.; Stanley, H. E.
1995-01-01
Our purpose is to describe some recent progress in applying fractal concepts to systems of relevance to biology and medicine. We review several biological systems characterized by fractal geometry, with a particular focus on the long-range power-law correlations found recently in DNA sequences containing noncoding material. Furthermore, we discuss the finding that the exponent alpha quantifying these long-range correlations ("fractal complexity") is smaller for coding than for noncoding sequences. We also discuss the application of fractal scaling analysis to the dynamics of heartbeat regulation, and report the recent finding that the normal heart is characterized by long-range "anticorrelations" which are absent in the diseased heart.
ERIC Educational Resources Information Center
Simoson, Andrew J.
2009-01-01
This article presents a fun activity of generating a double-minded fractal image for a linear algebra class once the idea of rotation and scaling matrices are introduced. In particular the fractal flip-flops between two words, depending on the level at which the image is viewed. (Contains 5 figures.)
Real-time pseudocolor coding thermal ghost imaging.
Duan, Deyang; Xia, Yunjie
2014-01-01
In this work, a color ghost image of a black-and-white object is obtained by a real-time pseudocolor coding technique that includes equal spatial frequency pseudocolor coding and equal density pseudocolor coding. This method makes the black-and-white ghost image more conducive to observation. Furthermore, since the ghost imaging comes from the intensity cross-correlations of the two beams, ghost imaging with the real-time pseudocolor coding technique is better than classical optical imaging with the same technique in overcoming the effects of light interference. PMID:24561954
The IRMA code for unique classification of medical images
NASA Astrophysics Data System (ADS)
Lehmann, Thomas M.; Schubert, Henning; Keysers, Daniel; Kohnen, Michael; Wein, Berthold B.
2003-05-01
Modern communication standards such as Digital Imaging and Communications in Medicine (DICOM) include non-image data for a standardized description of study, patient, or technical parameters. However, these tags are rather roughly structured, ambiguous, and often optional. In this paper, we present a mono-hierarchical multi-axial classification code for medical images and emphasize its advantages for content-based image retrieval in medical applications (IRMA). Our so-called IRMA coding system consists of four axes with three to four positions, each in {0,...,9,a,...,z}, where "0" denotes "unspecified" and determines the end of a path along an axis. In particular, the technical code (T) describes the imaging modality; the directional code (D) models body orientations; the anatomical code (A) refers to the body region examined; and the biological code (B) describes the biological system examined. Hence, the entire code results in a character string of not more than 13 characters (IRMA: TTTT - DDD - AAA - BBB). The code can be easily extended by introducing characters in certain code positions, e.g., if new modalities are introduced. In contrast to other approaches, mixtures of one- and two-literal code positions are avoided, which simplifies automatic code processing. Furthermore, the IRMA code obviates ambiguities resulting from overlapping code elements within the same level. Although this code was originally designed to be used in the IRMA project, other uses of it are welcome.
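The fixed TTTT-DDD-AAA-BBB layout lends itself to a simple validator. The helper below is hypothetical (not part of the IRMA project) and the example code string is invented for illustration; it enforces the stated character set and the rule that "0" terminates a path along an axis.

```python
import re

# Four axes: T (4 positions), D, A, B (3 positions each), chars in {0-9, a-z}.
IRMA_RE = re.compile(r"^([0-9a-z]{4})-([0-9a-z]{3})-([0-9a-z]{3})-([0-9a-z]{3})$")

def parse_irma(code):
    """Split a TTTT-DDD-AAA-BBB string into axes T, D, A, B; raise on bad format."""
    m = IRMA_RE.match(code)
    if m is None:
        raise ValueError(f"not a valid IRMA code: {code!r}")
    axes = dict(zip("TDAB", m.groups()))
    for name, ax in axes.items():
        # "0" means "unspecified" and ends the path: once a 0 appears on an
        # axis, only 0s may follow it.
        zero = ax.find("0")
        if zero != -1 and set(ax[zero:]) != {"0"}:
            raise ValueError(f"axis {name} continues after '0': {ax!r}")
    return axes
```

For example, `parse_irma("1121-127-720-500")` (an invented code) returns the four axis strings, while a code that resumes after a "0" on any axis is rejected.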
Coherent diffractive imaging using randomly coded masks
Seaberg, Matthew H.; D'Aspremont, Alexandre; Turner, Joshua J.
2015-12-07
We experimentally demonstrate an extension to coherent diffractive imaging that encodes additional information through the use of a series of randomly coded masks, removing the need for typical object-domain constraints while guaranteeing a unique solution to the phase retrieval problem. Phase retrieval is performed using a numerical convex relaxation routine known as “PhaseCut,” an iterative algorithm known for its stability and for its ability to find the global solution, which can be found efficiently and which is robust to noise. The experiment is performed using a laser diode at 532.2 nm, enabling rapid prototyping for future X-ray synchrotron and even free electron laser experiments.
New freeform NURBS imaging design code
NASA Astrophysics Data System (ADS)
Chrisp, Michael P.
2014-12-01
A new optical imaging design code for NURBS freeform surfaces is described, with reduced optimization cycle times due to its fast raytrace engine, and the ability to handle larger NURBS surfaces because of its improved optimization algorithms. This program, FANO (Fast Accurate NURBS Optimization), is then applied to an f/2 three mirror anastigmat design. Given the same optical design parameters, the optical system with NURBS freeform surfaces has an average r.m.s. spot size of 4 microns. This spot size is six times smaller than the conventional aspheric design, showing that the use of NURBS freeform surfaces can improve the performance of three mirror anastigmats for the next generation of smaller pixel size, larger format detector arrays.
Image analysis of human corneal endothelial cells based on fractal theory
NASA Astrophysics Data System (ADS)
Zhang, Zhi; Luo, Qingming; Zeng, Shaoqun; Zhang, Xinyu; Huang, Dexiu; Chen, Weiguo
1999-09-01
A fast method is developed to quantitatively characterize the shape of human corneal endothelial cells with fractal theory, and it is applied to analyze microscopic photographs of human corneal endothelial cells. The results show that human corneal endothelial cells possess a third characterization parameter, the fractal dimension, in addition to the two conventional characterization parameters (size and shape). Compared with the traditional method, this method has many advantages, such as automation, speed and parallel processing, and it can be used to analyze large numbers of endothelial cells; the obtained values are statistically significant, offering a new approach for the clinical diagnosis of endothelial cells.
Design of Pel Adaptive DPCM coding based upon image partition
NASA Astrophysics Data System (ADS)
Saitoh, T.; Harashima, H.; Miyakawa, H.
1982-01-01
A Pel Adaptive DPCM coding system based on image partition is developed which possesses coding characteristics superior to those of the Block Adaptive DPCM coding system. This method uses multiple DPCM coding loops and nonhierarchical cluster analysis. It is found that the coding performance of the Pel Adaptive DPCM coding method differs depending on the subject image. The Pel Adaptive DPCM designed using these methods is shown to yield a maximum performance advantage of 2.9 dB for the Girl and Couple images and 1.5 dB for the Aerial image, although no advantage was obtained for the Moon image. These results show an improvement over the optimally designed Block Adaptive DPCM coding method proposed by Saito et al. (1981).
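The basic loop on which such adaptive schemes build is a single DPCM predictor-quantizer pair. Below is a minimal single-loop sketch (previous-pel predictor, uniform quantizer; the multi-loop selection and cluster analysis are omitted, and the step size is an arbitrary assumption):

```python
# Minimal 1-D DPCM: the encoder quantizes the prediction error and tracks
# the decoder's reconstruction so quantization error does not accumulate.
def dpcm_encode(samples, step=4):
    residuals, pred = [], 0.0
    for x in samples:
        e = x - pred                      # prediction error (previous pel)
        q = int(round(e / step))          # quantized residual (transmitted)
        residuals.append(q)
        pred = pred + q * step            # decoder-tracking reconstruction
    return residuals

def dpcm_decode(residuals, step=4):
    out, pred = [], 0.0
    for q in residuals:
        pred = pred + q * step
        out.append(pred)
    return out
```

Because the encoder predicts from the decoder's reconstruction rather than the original samples, the per-sample reconstruction error stays bounded by half the quantizer step.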
New Methods for Lossless Image Compression Using Arithmetic Coding.
ERIC Educational Resources Information Center
Howard, Paul G.; Vitter, Jeffrey Scott
1992-01-01
Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…
Fractal Feature Analysis Of Beef Marblingpatterns
NASA Astrophysics Data System (ADS)
Chen, Kunjie; Qin, Chunfang
The purpose of this study is to investigate the fractal behavior of beef marbling patterns and to explore relationships between fractal dimensions and marbling scores. The authors first extracted marbling images from beef rib-eye cross-section images using computer image processing and then performed fractal analysis on these marbling images based on the pixel-covering method. Finally, the box-counting fractal dimension (BFD) and informational fractal dimension (IFD) of one hundred and thirty-five beef marbling images were calculated and plotted against the beef marbling scores. The results showed that all beef marbling images exhibit fractal behavior over the limited range of scales accessible to analysis. Furthermore, their BFD and IFD are closely related to the beef marbling score, suggesting that fractal analysis can provide a potential tool to calibrate the score of beef marbling.
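The two estimates can be sketched compactly (our own implementation of the pixel-covering idea, not the authors' code; the input is assumed to be a binary marbling image and the box sizes are our choice):

```python
import numpy as np

def fractal_dimensions(img, sizes=(2, 4, 8, 16, 32)):
    """Box-counting (BFD) and informational (IFD) dimension of a binary image."""
    counts, entropies = [], []
    for s in sizes:
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        # Sum pixel mass inside each s x s box
        boxes = img[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
        occupied = boxes > 0
        counts.append(occupied.sum())                  # N(s): occupied boxes
        p = boxes[occupied] / boxes[occupied].sum()    # mass fraction per box
        entropies.append(-(p * np.log(p)).sum())       # H(s)
    logs = np.log(1.0 / np.array(sizes))
    bfd = np.polyfit(logs, np.log(counts), 1)[0]       # slope: N(s) ~ s^-D
    ifd = np.polyfit(logs, entropies, 1)[0]            # slope of H vs log(1/s)
    return bfd, ifd
```

For a completely filled image both slopes come out as 2, the Euclidean dimension of the plane; marbling patterns would fall between 1 and 2.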
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The coders selected to code any given image region are chosen through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
PynPoint code for exoplanet imaging
NASA Astrophysics Data System (ADS)
Amara, A.; Quanz, S. P.; Akeret, J.
2015-04-01
We announce the public release of PynPoint, a Python package that we have developed for analysing exoplanet data taken with the angular differential imaging observing technique. In particular, PynPoint is designed to model the point spread function of the central star and to subtract its flux contribution to reveal nearby faint companion planets. The current version of the package does this correction by using a principal component analysis method to build a basis set for modelling the point spread function of the observations. We demonstrate the performance of the package by reanalysing publicly available data on the exoplanet β Pictoris b, which consist of close to 24,000 individual image frames. We show that PynPoint is able to analyse these typical data in roughly 1.5 min on a Mac Pro when the number of images is reduced by co-adding in sets of 5. The main computational work, the calculation of the singular value decomposition, parallelises well as a result of its reliance on the SciPy and NumPy packages. For this calculation the peak memory load is 6 GB, which most workstations can accommodate comfortably. A simpler calculation, co-adding over 50, takes 3 s with a peak memory usage of 600 MB and can be performed easily on a laptop. In developing the package we have modularised the code so that we will be able to extend functionality in future releases, through the inclusion of more modules, without affecting the user's application programming interface. We distribute the PynPoint package under the GPLv3 licence through the central PyPI server, and the documentation is available online (http://pynpoint.ethz.ch).
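The core PSF-modelling step, principal component analysis via a singular value decomposition of the mean-subtracted image stack, can be sketched as follows; the toy frame stack and component count are assumptions for illustration and do not reproduce PynPoint's actual API:

```python
import numpy as np

def pca_psf_subtract(frames, n_components):
    """Subtract a PCA model of the stellar PSF from each frame.

    frames: (n_frames, n_pixels) array of flattened images.
    Builds a basis from the leading right-singular vectors of the
    mean-subtracted stack and removes each frame's projection onto
    that basis, leaving the residuals where companions would appear.
    """
    centered = frames - frames.mean(axis=0)
    # Rows of vt span the principal PSF modes of the stack.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    model = centered @ basis.T @ basis   # projection onto the PSF subspace
    return centered - model

# Toy stack: 20 frames dominated by one common PSF pattern plus faint noise.
rng = np.random.default_rng(0)
psf = np.outer(np.hanning(16), np.hanning(16)).ravel()
frames = np.array([10 * psf + 0.01 * rng.standard_normal(psf.size)
                   for _ in range(20)])
residuals = pca_psf_subtract(frames, n_components=1)
```

After subtraction the residual energy is a small fraction of the original stack energy, which is the point of the PSF model.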
Garrison, J.R., Jr.; Pearn, W.C.; von Rosenberg, D. W.
1992-01-01
In this paper reservoir rock/pore systems are considered natural fractal objects and modeled as and compared to the regular fractal Menger Sponge and Sierpinski Carpet. The physical properties of a porous rock are, in part, controlled by the geometry of the pore system. The rate at which a fluid or electrical current can travel through the pore system of a rock is controlled by the path along which it must travel. This path is a subset of the overall geometry of the pore system. Reservoir rocks exhibit self-similarity over a range of length scales suggesting that fractal geometry offers a means of characterizing these complex objects. The overall geometry of a rock/pore system can be described, conveniently and concisely, in terms of effective fractal dimensions. The rock/pore system is modeled as the fractal Menger Sponge. A cross section through the rock/pore system, such as an image of a thin-section of a rock, is modeled as the fractal Sierpinski Carpet, which is equivalent to the face of the Menger Sponge.
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.
2016-04-01
Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the ability of OCT to detect such changes using a computer texture analysis method based on Haralick texture features, fractal dimension, and the complex directional field method applied to different tissues. These features were used to identify spatial characteristics that differentiate healthy tissue from various skin cancers in cross-section OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing; here, an interval type-II fuzzy anisotropic diffusion algorithm was used for speckle noise reduction. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity, evaluated in different directions. A box-counting method is applied to compute the fractal dimension of the investigated tissues. Additionally, we used the complex directional field, calculated by the local gradient methodology, to increase the assessment quality of the diagnostic method. The complex directional field (like the "classical" directional field) describes an image as a set of directions. Because malignant tissue grows anisotropically, principal grooves may be observed on dermoscopic images, suggesting the possible existence of principal directions in OCT images. Our results suggest that the described texture features may provide useful information for differentiating pathological from healthy tissue. The problem of distinguishing melanoma from nevi is addressed in this work owing to the large quantity of experimental data (143 OCT images, including tumors such as basal cell carcinoma (BCC), malignant melanoma (MM), and nevi). We obtained a sensitivity of about 90% and a specificity of about 85%. Further research is warranted to determine how this approach may be used to select the regions of interest automatically.
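Two of the Haralick features listed above, contrast and energy, can be computed from a normalised gray-level co-occurrence matrix (GLCM). This minimal NumPy sketch assumes a single pixel offset and toy images rather than OCT B-scans:

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()   # normalise to joint probabilities

def haralick_contrast_energy(p):
    """Two classic Haralick features from a normalised GLCM p."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    return contrast, energy

# A constant image has zero contrast and maximal energy.
flat = np.zeros((8, 8), dtype=int)
c_flat, e_flat = haralick_contrast_energy(glcm(flat, levels=4))

# A checkerboard of levels 0 and 3 has high contrast horizontally.
check = np.indices((8, 8)).sum(axis=0) % 2 * 3
c_check, _ = haralick_contrast_energy(glcm(check, levels=4))
```

Evaluating the same features over several offsets (directions), as the abstract describes, amounts to repeating the GLCM computation with different (dx, dy).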
Coded source neutron imaging at the PULSTAR reactor
Xiao, Ziyu; Mishra, Kaushal; Hawari, Ayman; Bingham, Philip R; Bilheux, Hassina Z; Tobin Jr, Kenneth William
2011-01-01
A neutron imaging facility is located on beam-tube No. 5 of the 1-MW PULSTAR reactor at North Carolina State University. An investigation of high-resolution imaging using the coded source imaging technique has been initiated at the facility. Coded imaging uses a mosaic of pinholes to encode an aperture, thus generating an encoded image of the object at the detector. To reconstruct the image data received by the detector, the corresponding decoding patterns are used. The optimized design of the coded mask is critical for the performance of this technique and will depend on the characteristics of the imaging beam. In this work, a 34 x 38 uniformly redundant array (URA) coded aperture system is studied for application at the PULSTAR reactor neutron imaging facility. The URA pattern was fabricated on a 500 μm gadolinium sheet. Simulations and experiments with a pinhole object have been conducted using the Gd URA and the optimized beam line.
Wavelet based hierarchical coding scheme for radar image compression
NASA Astrophysics Data System (ADS)
Sheng, Wen; Jiao, Xiaoli; He, Jifeng
2007-12-01
This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized as a raster-scanned image according to the radar's pulse repetition frequency. After reorganization, the reformed image is decomposed into blocks of different frequency bands by a 2-D wavelet transform; each block is quantized and coded with a Huffman coding scheme. A demonstration system is developed, showing that under real-time processing requirements the compression ratio can be very high, with no significant loss of target signal in the restored radar image.
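The sub-band decomposition step can be illustrated with a single-level 2-D Haar transform, a simpler wavelet than the scheme necessarily uses; the flat test image is purely illustrative:

```python
import numpy as np

def haar2d(block):
    """One level of a 2-D Haar wavelet transform.

    Splits the block into approximation (LL) and detail (LH, HL, HH)
    sub-bands, mirroring the band decomposition applied before per-band
    quantisation and Huffman coding.
    """
    a = block.astype(float)
    # Transform rows: averages (low-pass) and differences (high-pass).
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    # Transform columns of each half.
    ll = (lo[0::2] + lo[1::2]) / 2
    lh = (lo[0::2] - lo[1::2]) / 2
    hl = (hi[0::2] + hi[1::2]) / 2
    hh = (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

# A flat image concentrates all energy in the LL band; the detail
# bands are zero and therefore compress to almost nothing.
img = np.full((8, 8), 10.0)
ll, lh, hl, hh = haar2d(img)
```

Each sub-band would then be quantized and entropy coded separately, with coarser quantization affordable in the high-frequency bands.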
Target Detection Using Fractal Geometry
NASA Technical Reports Server (NTRS)
Fuller, J. Joseph
1991-01-01
The concepts and theory of fractal geometry were applied to the problem of segmenting a 256 x 256 pixel image so that manmade objects could be extracted from natural backgrounds. The two most important measurements necessary to extract these manmade objects were fractal dimension and lacunarity. Provision was made to pass the manmade portion to a lookup table for subsequent identification. A computer program was written to construct cloud backgrounds of fractal dimensions which were allowed to vary between 2.2 and 2.8. Images of three model space targets were combined with these backgrounds to provide a data set for testing the validity of the approach. Once the data set was constructed, computer programs were written to extract estimates of the fractal dimension and lacunarity on 4 x 4 pixel subsets of the image. It was shown that for clouds of fractal dimension 2.7 or less, appropriate thresholding on fractal dimension and lacunarity yielded a 64 x 64 edge-detected image with all or most of the cloud background removed. These images were enhanced by an erosion and dilation to provide the final image passed to the lookup table. While the ultimate goal was to pass the final image to a neural network for identification, this work shows the applicability of fractal geometry to the problems of image segmentation, edge detection and separating a target of interest from a natural background.
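Lacunarity, the second measurement used above, can be estimated with the standard gliding-box algorithm; this NumPy sketch uses toy 16×16 textures rather than the 4×4 subsets of the cloud images:

```python
import numpy as np

def lacunarity(binary_image, box_size):
    """Gliding-box lacunarity: second moment of box 'mass' divided by
    the squared first moment, computed over all box positions."""
    h, w = binary_image.shape
    masses = np.array([
        binary_image[y:y + box_size, x:x + box_size].sum()
        for y in range(h - box_size + 1)
        for x in range(w - box_size + 1)
    ], dtype=float)
    m = masses.mean()
    return masses.var() / m ** 2 + 1.0

# A uniform texture is translation invariant, so its lacunarity is exactly 1;
# a sparse, gappy pattern (e.g. an isolated manmade object) scores higher.
uniform = np.ones((16, 16), dtype=int)
sparse = np.zeros((16, 16), dtype=int)
sparse[2:4, 2:4] = 1
lac_uniform = lacunarity(uniform, 4)
lac_sparse = lacunarity(sparse, 4)
```

Thresholding jointly on fractal dimension and lacunarity, as the paper does, exploits the fact that manmade objects tend to differ from cloud backgrounds in both measures.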
Advanced technology development for image gathering, coding, and processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1990-01-01
Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.
Multiview image and depth map coding for holographic TV system
NASA Astrophysics Data System (ADS)
Senoh, Takanori; Wakunami, Koki; Ichihashi, Yasuyuki; Sasaki, Hisayuki; Oi, Ryutaro; Yamamoto, Kenji
2014-11-01
A holographic TV system based on multiview image and depth map coding and the analysis of coding noise effects in reconstructed images is proposed. A major problem for holographic TV systems is the huge amount of data that must be transmitted. It has been shown that this problem can be solved by capturing a three-dimensional scene with multiview cameras, deriving depth maps from multiview images or directly capturing them, encoding and transmitting the multiview images and depth maps, and generating holograms at the receiver side. This method shows the same subjective image quality as hologram data transmission with about 1/97000 of the data rate. Speckle noise, which masks coding noise when the coded bit rate is not extremely low, is shown to be the main determinant of reconstructed holographic image quality.
Texture Analysis In Cytology Using Fractals
NASA Astrophysics Data System (ADS)
Basu, Santanu; Barba, Joseph; Chan, K. S.
1990-01-01
We present some preliminary results of a study aimed at assessing the actual effectiveness of fractal theory in the area of medical image analysis for texture description. The specific goal of this research is to utilize "fractal dimension" to discriminate between normal and cancerous human cells. In particular, we have considered four types of cells, namely breast, bronchial, ovarian and uterine. A method based on fractional Brownian motion theory is employed to compute the "fractal dimension" of cells. Experiments with real images reveal that the range of scales over which the cells exhibit fractal properties can be used as the discriminatory feature to identify cancerous cells.
Matsueda, Hiroaki; Lee, Ching Hua
2015-03-10
We examine the singular value spectra of a class of two-dimensional fractal images. We find that the spectra can be mapped onto the entanglement spectra of free fermions in one dimension. This exact mapping tells us that the singular value decomposition is a way of detecting a holographic relation between classical and quantum systems.
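The object under study, the singular value spectrum, and its entanglement-entropy analogue can be sketched directly; the normalisation pᵢ = sᵢ²/Σⱼsⱼ² is a common convention assumed here, not a detail taken from the paper:

```python
import numpy as np

def singular_value_entropy(image):
    """Singular value spectrum of an image and the entanglement-entropy
    analogue -sum p log p, with p_i = s_i^2 / sum_j s_j^2."""
    s = np.linalg.svd(np.asarray(image, dtype=float), compute_uv=False)
    p = s ** 2 / np.sum(s ** 2)
    p = p[p > 1e-15]
    return s, -np.sum(p * np.log(p))

# A rank-1 image carries all of its weight in a single singular value,
# so the entropy analogue vanishes; richer images spread the spectrum.
rank1 = np.outer(np.arange(1, 9), np.arange(1, 9))
s, entropy = singular_value_entropy(rank1)
```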
Coded source neutron imaging with a MURA mask
NASA Astrophysics Data System (ADS)
Zou, Y. B.; Schillinger, B.; Wang, S.; Zhang, X. S.; Guo, Z. Y.; Lu, Y. R.
2011-09-01
In coded source neutron imaging, the single aperture commonly used in neutron radiography is replaced with a coded mask. Using a coded source can improve the neutron flux at the sample plane when a very high L/D ratio is needed; coded source imaging is thus a possible way to reduce the exposure time required to obtain a neutron image with a very high L/D ratio. A 17×17 modified uniformly redundant array coded source was tested in this work. There are 144 holes of 0.8 mm diameter in the coded source. The neutron flux from the coded source is as high as that from a single 9.6 mm aperture, while its effective L/D is the same as that of a 0.8 mm aperture. The Richardson-Lucy maximum-likelihood algorithm was used for image reconstruction. Compared to an in-line phase-contrast neutron image taken with a 1 mm aperture, the coded source takes much less time to produce an image of similar quality.
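The Richardson-Lucy iteration used for reconstruction can be sketched in one dimension; the 3-tap kernel below stands in for the actual coded-source system response and is purely illustrative:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=200):
    """Richardson-Lucy maximum-likelihood deconvolution (1-D sketch).

    Iteratively refines an estimate so that, when blurred by the psf,
    it matches the observed data under a Poisson noise model.
    """
    psf_mirror = psf[::-1]
    estimate = np.full(observed.shape, observed.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur two point sources with a small symmetric kernel, then deconvolve.
truth = np.zeros(64)
truth[20], truth[40] = 5.0, 3.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(truth, psf, mode="same")
recovered = richardson_lucy(observed, psf)
```

In the coded-aperture case the forward operator is the mask pattern rather than a small blur kernel, but the multiplicative update has the same structure.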
Image coding approach based on multiscale matching pursuits operation
NASA Astrophysics Data System (ADS)
Li, Hui; Wolff, Ingo
1998-12-01
A new image coding technique based on the Multiscale Matching Pursuits (MMP) approach is presented. Using a pre-defined dictionary set, which consists of a limited number of elements, the MMP approach can decompose/encode images on different image scales and reconstruct/decode the image with the same dictionary. The MMP approach can be used to represent image texture at different scales as well as the whole image. Instead of a pixel-based image representation, the MMP method represents the image texture as indices into a dictionary and can thereby encode the image with a low data volume. Based on the MMP operation, the image content can be coded in order from global to local detail.
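The greedy matching-pursuit decomposition at the heart of MMP can be sketched as follows; the orthonormal toy dictionary is an assumption for illustration (real MMP dictionaries are overcomplete and multiscale):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit over a dictionary of unit-norm atoms (rows).

    At each step the atom most correlated with the residual is chosen;
    the signal is then represented by (index, coefficient) pairs rather
    than raw samples, which is what makes the coding compact.
    """
    residual = signal.astype(float).copy()
    code = []   # (atom index, coefficient) pairs
    for _ in range(n_atoms):
        correlations = dictionary @ residual
        k = int(np.argmax(np.abs(correlations)))
        coeff = correlations[k]
        residual -= coeff * dictionary[k]
        code.append((k, coeff))
    return code, residual

# Dictionary of 4 orthonormal atoms; the signal uses exactly two of them.
dictionary = np.eye(4)
signal = np.array([0.0, 3.0, 0.0, -2.0])
code, residual = matching_pursuit(signal, dictionary, n_atoms=2)
```

The decoder reconstructs the signal by summing coeff × dictionary[index] over the transmitted pairs, using the same pre-defined dictionary.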
Fractal analysis for assessing tumour grade in microscopic images of breast tissue
NASA Astrophysics Data System (ADS)
Tambasco, Mauro; Costello, Meghan; Newcomb, Chris; Magliocco, Anthony M.
2007-03-01
In 2006, breast cancer is expected to continue as the leading form of cancer diagnosed in women, and the second leading cause of cancer mortality in this group. A method that has proven useful for guiding the choice of treatment strategy is the assessment of histological tumor grade. The grading is based upon the mitosis count, nuclear pleomorphism, and tubular formation, and is known to be subject to inter-observer variability. Since cancer grade is one of the most significant predictors of prognosis, errors in grading can affect patient management and outcome. Hence, there is a need to develop a breast cancer-grading tool that is minimally operator dependent to reduce variability associated with the current grading system, and thereby reduce uncertainty that may impact patient outcome. In this work, we explored the potential of a computer-based approach using fractal analysis as a quantitative measure of cancer grade for breast specimens. More specifically, we developed and optimized computational tools to compute the fractal dimension of low- versus high-grade breast sections and found them to be significantly different, 1.3±0.10 versus 1.49±0.10, respectively (Kolmogorov-Smirnov test, p<0.001). These results indicate that fractal dimension (a measure of morphologic complexity) may be a useful tool for demarcating low- versus high-grade cancer specimens, and has potential as an objective measure of breast cancer grade. Such prognostic value could provide more sensitive and specific information that would reduce inter-observer variability by aiding the pathologist in grading cancers.
Coded Aperture Imaging for Fluorescent X-rays-Biomedical Applications
Haboub, Abdel; MacDowell, Alastair; Marchesini, Stefano; Parkinson, Dilworth
2013-06-01
Employing a coded aperture pattern in front of a pixelated charge-coupled device (CCD) detector allows for imaging of fluorescent x-rays (6-25 keV) emitted from samples irradiated with x-rays. Coded apertures encode the angular direction of x-rays and allow for a large numerical-aperture x-ray imaging system. An algorithm to generate the self-supported No-Two-Holes-Touching (NTHT) coded aperture pattern was developed. The algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of modeling and confirmed by experiments. Samples were irradiated by monochromatic synchrotron x-ray radiation, and fluorescent x-rays from several different test metal samples were imaged through the newly developed coded aperture imaging system. By choice of the exciting energy, the different metals were speciated.
Progressive Image Coding by Hierarchical Linear Approximation.
ERIC Educational Resources Information Center
Wu, Xiaolin; Fang, Yonggang
1994-01-01
Proposes a scheme of hierarchical piecewise linear approximation as an adaptive image pyramid. A progressive image coder comes naturally from the proposed image pyramid. The new pyramid is semantically more powerful than regular tessellation but syntactically simpler than free segmentation. This compromise between adaptability and complexity…
NASA Technical Reports Server (NTRS)
Bruno, B. C.; Taylor, G. J.; Rowland, S. K.; Lucey, P. G.; Self, S.
1992-01-01
Results are presented of a preliminary investigation of the fractal nature of the plan-view shapes of lava flows in Hawaii (based on field measurements and aerial photographs), as well as in Idaho and the Galapagos Islands (using aerial photographs only). The shapes of the lava flow margins are found to be fractals: lava flow shape is scale-invariant. This observation suggests that nonlinear forces are operating in them because nonlinear systems frequently produce fractals. A'a and pahoehoe flows can be distinguished by their fractal dimensions (D). The majority of the a'a flows measured have D between 1.05 and 1.09, whereas the pahoehoe flows generally have higher D (1.14-1.23). The analysis is extended to other planetary bodies by measuring flows from orbital images of Venus, Mars, and the Moon. All are fractal and have D consistent with the range of terrestrial a'a and pahoehoe values.
Perceptually lossless coding of digital monochrome ultrasound images
NASA Astrophysics Data System (ADS)
Wu, David; Tan, Damian M.; Griffiths, Tania; Wu, Hong Ren
2005-07-01
A preliminary investigation of encoding monochrome ultrasound images with a novel perceptually lossless coder is presented. Based on the JPEG 2000 coding framework, the proposed coder employs a vision model to identify and remove visually insignificant/irrelevant information. Current simulation results have shown coding performance gains over the JPEG compliant LOCO lossless and JPEG 2000 lossless coders without any perceivable distortion.
Spatially-varying IIR filter banks for image coding
NASA Technical Reports Server (NTRS)
Chung, Wilson C.; Smith, Mark J. T.
1992-01-01
This paper reports on the application of spatially variant infinite impulse response (IIR) filter banks to subband image coding. The new filter bank is based on computationally efficient recursive polyphase decompositions that dynamically change in response to the input signal. In the absence of quantization, reconstruction can be made exact. However, by proper choice of an adaptation scheme, we show that subband image coding based on time varying filter banks can yield improvement over the use of conventional filter banks.
Collaborative Image Coding and Transmission over Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Wu, Min; Chen, Chang Wen
2006-12-01
Imaging sensors are able to provide intuitive visual information for quick recognition and decision-making, but they usually generate vast amounts of data. Therefore, processing and coding of image data collected in a sensor network for the purpose of energy-efficient transmission poses a significant technical challenge. In particular, multiple sensors may be collecting similar visual information simultaneously. We propose in this paper a novel collaborative image coding and transmission scheme to minimize the energy for data transmission. First, we apply a shape matching method to coarsely register images to find the maximal overlap, exploiting the spatial correlation between images acquired from neighboring sensors. For a given image sequence, we transmit the background image only once. A lightweight and efficient background subtraction method is employed to detect targets. Only the target regions and their spatial locations are transmitted to the monitoring center. The whole image can then be reconstructed by fusing the background and the target images as well as their spatial locations. Experimental results show that the energy for image transmission can indeed be greatly reduced with collaborative image coding and transmission.
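The background-subtraction-and-region-transmission idea can be sketched in a few lines; the threshold and toy frames are assumptions for illustration, not the paper's method details:

```python
import numpy as np

def extract_target_region(frame, background, threshold):
    """Lightweight background subtraction: keep only the pixels that
    differ from the shared background, plus their bounding-box location.

    Only this region and its location need be transmitted; the monitoring
    centre reconstructs the frame by pasting the region onto the background.
    """
    mask = np.abs(frame.astype(int) - background.astype(int)) > threshold
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None   # nothing to transmit for this frame
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return (y0, x0), frame[y0:y1, x0:x1]

background = np.zeros((16, 16), dtype=np.uint8)
frame = background.copy()
frame[5:8, 9:12] = 200          # a small "target"
location, region = extract_target_region(frame, background, threshold=10)

# Reconstruction at the monitoring centre:
recon = background.copy()
y0, x0 = location
recon[y0:y0 + region.shape[0], x0:x0 + region.shape[1]] = region
```

Here only 9 target pixels plus a location are sent instead of 256 pixels, which is the energy saving the scheme exploits.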
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
Print protection using high-frequency fractal noise
NASA Astrophysics Data System (ADS)
Mahmoud, Khaled W.; Blackledge, Jonathon M.; Datta, Sekharjit; Flint, James A.
2004-06-01
All digital images are band-limited to a degree that is determined by the spatial extent of the point spread function, the bandwidth of the image being determined by the optical transfer function. In the printing industry, the limit is determined by the resolution of the printed material. By band-limiting the digital image in such a way that the printed document maintains its fidelity, it is possible to use the out-of-band frequency space to introduce low-amplitude coded data that remains hidden in the image. In this way, a covert signature can be embedded into an image to provide a digital watermark that is sensitive to reproduction. In this paper, high-frequency fractal noise is used as the low-amplitude signal. A statistically robust solution to the authentication of printed material using high-frequency fractal noise is proposed, based on cross-entropy metrics that provide a statistical confidence test. The fractal watermark is based on the application of self-affine fields, which is suitable for documents containing a high degree of texture. In principle, this new approach will allow batch tracking to be performed using coded data embedded into the high-frequency components of the image, whose statistical characteristics depend on the printer/scanner technology. The details of this method as well as experimental results are presented.
Coded Excitation Plane Wave Imaging for Shear Wave Motion Detection
Song, Pengfei; Urban, Matthew W.; Manduca, Armando; Greenleaf, James F.; Chen, Shigao
2015-01-01
Plane wave imaging has greatly advanced the field of shear wave elastography thanks to its ultrafast imaging frame rate and the large field-of-view (FOV). However, plane wave imaging also has decreased penetration due to lack of transmit focusing, which makes it challenging to use plane waves for shear wave detection in deep tissues and in obese patients. This study investigated the feasibility of implementing coded excitation in plane wave imaging for shear wave detection, with the hypothesis that coded ultrasound signals can provide superior detection penetration and shear wave signal-to-noise-ratio (SNR) compared to conventional ultrasound signals. Both phase encoding (Barker code) and frequency encoding (chirp code) methods were studied. A first phantom experiment showed an approximate penetration gain of 2-4 cm for the coded pulses. Two subsequent phantom studies showed that all coded pulses outperformed the conventional short imaging pulse by providing superior sensitivity to small motion and robustness to weak ultrasound signals. Finally, an in vivo liver case study on an obese subject (Body Mass Index = 40) demonstrated the feasibility of using the proposed method for in vivo applications, and showed that all coded pulses could provide higher SNR shear wave signals than the conventional short pulse. These findings indicate that by using coded excitation shear wave detection, one can benefit from the ultrafast imaging frame rate and large FOV provided by plane wave imaging while preserving good penetration and shear wave signal quality, which is essential for obtaining robust shear elasticity measurements of tissue. PMID:26168181
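Phase encoding with a Barker code can be illustrated by matched filtering a simulated echo; the 13-bit Barker sequence is standard, while the scatterer position and noise-free channel are toy assumptions:

```python
import numpy as np

# 13-bit Barker code: autocorrelation peak of 13 with sidelobes of |1| or 0,
# which is why it is attractive for phase-encoded pulse compression.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# Simulated echo: the coded pulse returned from a scatterer at sample 20.
received = np.zeros(64)
received[20:33] = barker13

# Pulse compression = matched filtering (cross-correlation with the code).
# The long coded pulse collapses into a sharp, high-amplitude peak,
# recovering axial resolution while the transmitted energy stays high.
compressed = np.correlate(received, barker13, mode="same")
```

The SNR gain over a single-cycle pulse comes from the 13× longer transmit event compressing into one sample-wide peak, which is the penetration benefit the study measures.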
NASA Astrophysics Data System (ADS)
Martin-Fernandez, Marcos; Alberola-Lopez, Carlos; Guerrero-Rodriguez, David; Ruiz-Alzola, Juan
2000-12-01
In this paper we propose a novel lossless coding scheme for medical images that allows the final user to switch between a lossy and a lossless mode. This is done by means of a progressive reconstruction philosophy (which can be interrupted at will), so we believe that our scheme offers a way to trade off between the accuracy needed for medical diagnosis and the information reduction needed for storage and transmission. We combine vector quantization, run-length, bit-plane, and entropy coding. Specifically, the first step is a vector quantization procedure; the centroid codes are Huffman-coded making use of a set of probabilities that are calculated in the learning phase. The image is reconstructed at the coder in order to obtain the error image; this second image is divided into bit planes, which are then run-length and Huffman coded. A second statistical analysis is performed during the learning phase to obtain the parameters needed in this final stage. Our coder is currently trained for hand radiographs and fetal echographies. We compare our results for these two types of images to classical results on bit-plane coding and the JPEG standard. Our coder turns out to outperform both of them.
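The bit-plane plus run-length stage applied to the error image can be sketched as follows; the final Huffman stage is omitted, and the toy residual image is an assumption for illustration:

```python
import numpy as np

def bit_planes(image, bits=8):
    """Split an unsigned-integer image into its binary bit planes."""
    return [(image >> b) & 1 for b in range(bits)]

def run_length_encode(plane):
    """Run-length code one bit plane as (value, run) pairs."""
    flat = np.asarray(plane).ravel()
    runs, start = [], 0
    for i in range(1, flat.size + 1):
        if i == flat.size or flat[i] != flat[start]:
            runs.append((int(flat[start]), i - start))
            start = i
    return runs

# The VQ error image is typically near zero, so its significant bit
# planes collapse into a single long run that codes very cheaply.
error = np.zeros((4, 4), dtype=np.uint8)
error[1, 2] = 3   # one small residual value
planes = bit_planes(error)
rle_top = run_length_encode(planes[7])   # most significant plane
rle_low = run_length_encode(planes[0])   # least significant plane
```

In the full scheme the (value, run) symbols would then be Huffman coded with probabilities estimated in the learning phase.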
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Sabyasachi; Das, Nandan K.; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.
2014-05-01
DIC (differential interference contrast) images of cervical pre-cancer tissues are taken from the epithelium region, and wavelet transforms and multi-fractal analysis are applied to them. A discrete wavelet transform (DWT) with Daubechies basis functions is used to identify fluctuations over polynomial trends for clear characterization and differentiation of tissues. A systematic investigation of the denoised images is carried out with the continuous Morlet wavelet. The scalogram reveals the changes in coefficient peak values from grade-I to grade-III. Wavelet normalized energy plots are computed in order to show the differences in periodicity among different grades of cancerous tissues. Using multi-fractal de-trended fluctuation analysis (MFDFA), it is observed that the values of the Hurst exponent and the width of the singularity spectrum decrease as cancer progresses from grade-I to grade-III tissue.
Code-modulated interferometric imaging system using phased arrays
NASA Astrophysics Data System (ADS)
Chauhan, Vikas; Greene, Kevin; Floyd, Brian
2016-05-01
Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power-combine incoming signals prior to digitization, orthogonal code modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
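The code-modulation and demultiplexing principle can be sketched with ±1 orthogonal codes; Hadamard rows stand in here for the paper's two-bit phase-shifter codes, which are not reproduced:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

# Hypothetical 4-element array with 8-chip orthogonal codes per element.
rng = np.random.default_rng(1)
n_elements, n_chips = 4, 8
codes = hadamard(n_chips)[1:1 + n_elements]   # skip the all-ones row
signals = rng.standard_normal(n_elements)     # per-element samples

# Modulate each element by its code, then power-combine into the single
# shared hardware path (one channel instead of one per element).
combined = codes.T @ signals                  # shape: (n_chips,)

# Demultiplex: correlating with code i isolates element i's contribution,
# because the codes are mutually orthogonal over the chip period.
recovered = codes @ combined / n_chips
```

Recovering visibilities (pairwise correlations) works analogously: after squaring the combined signal, the cross term between elements i and j rides on the product code cᵢcⱼ, which the abstract notes is itself orthogonal to the others by construction.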
NASA Technical Reports Server (NTRS)
Jaggi, S.; Quattrochi, Dale A.; Lam, Nina S.-N.
1993-01-01
Fractal geometry is increasingly becoming a useful tool for modeling natural phenomena. As an alternative to Euclidean concepts, fractals allow for a more accurate representation of the nature of complexity in natural boundaries and surfaces. The purpose of this paper is to introduce and implement three algorithms in C code for deriving fractal measurements from remotely sensed data. These three methods are: the line-divider method, the variogram method, and the triangular prism method. Remote-sensing data acquired by NASA's Calibrated Airborne Multispectral Scanner (CAMS) are used to compute the fractal dimension using each of the three methods. These data were obtained at a 30 m pixel spatial resolution over a portion of western Puerto Rico in January 1990. A description of the three methods, their implementation in a PC-compatible environment, and some results of applying these algorithms to remotely sensed image data are presented.
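Of the three methods, the variogram method is easily sketched for a one-dimensional transect; Python is used here rather than the paper's C, and the Brownian-motion test profile (H = 0.5, hence D = 1.5) is illustrative:

```python
import numpy as np

def variogram_dimension(profile, max_lag=16):
    """Variogram-method fractal dimension of a 1-D transect.

    The variogram gamma(h), the mean squared increment at lag h, scales
    as h^(2H) for a self-affine profile; the fractal dimension of the
    transect is then D = 2 - H (3 - H for a surface).
    """
    lags = np.arange(1, max_lag + 1)
    gamma = [np.mean((profile[h:] - profile[:-h]) ** 2) for h in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
    hurst = slope / 2
    return 2 - hurst

# Ordinary Brownian motion has H = 0.5, hence D = 1.5.
rng = np.random.default_rng(42)
profile = np.cumsum(rng.standard_normal(100_000))
d = variogram_dimension(profile)
```

On image data the same estimator is applied to pixel-value increments, treating the band as a self-affine surface.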
JPEG 2000 coding of image data over adaptive refinement grids
NASA Astrophysics Data System (ADS)
Gamito, Manuel N.; Dias, Miguel S.
2003-06-01
An extension of the JPEG 2000 standard is presented for non-conventional images resulting from an adaptive subdivision process. Samples, generated through adaptive subdivision, can have different sizes, depending on the amount of subdivision that was locally introduced in each region of the image. The subdivision principle allows each individual sample to be recursively subdivided into sets of four progressively smaller samples. Image datasets generated through adaptive subdivision find application in computational physics, where simulations of natural processes are often performed over adaptive grids. It is also found that compression gains can be achieved for non-natural imagery, like text or graphics, if they first undergo an adaptive subdivision process. The representation of adaptive subdivision images is performed by first coding the subdivision structure into the JPEG 2000 bitstream, in a lossless manner, followed by the quantized, entropy-coded transform coefficients. Due to the irregular distribution of sample sizes across the image, the wavelet transform must be applied on irregular image subsets that are nested across all the resolution levels. Using the conventional JPEG 2000 coding standard, adaptive subdivision images would first have to be upsampled to the smallest sample size in order to attain a uniform resolution. The proposed method for coding adaptive subdivision images is shown to perform better than conventional JPEG 2000 for medium to high bitrates.
Adaptive coded aperture imaging: progress and potential future applications
NASA Astrophysics Data System (ADS)
Gottesman, Stephen R.; Isser, Abraham; Gigioli, George W., Jr.
2011-09-01
Interest in Adaptive Coded Aperture Imaging (ACAI) continues to grow as the optical and systems engineering community becomes increasingly aware of ACAI's potential benefits in the design and performance of both imaging and non-imaging systems, such as good angular resolution (IFOV), wide distortion-free field of view (FOV), excellent image quality, and lightweight construction. In this presentation we first review the accomplishments made over the past five years, then expand on previously published work to show how replacement of conventional imaging optics with coded apertures can lead to a reduction in system size and weight. We also present a trade-space analysis of key design parameters of coded apertures and review potential applications as replacements for traditional imaging optics. Results will be presented, based on last year's investigation into the trade space of IFOV, resolution, effective focal length, and wavelength of incident radiation for coded aperture architectures. Finally we discuss the potential application of coded apertures for replacing the objective lenses of night vision goggles (NVGs).
Coded aperture imaging for fluorescent x-rays
Haboub, A.; MacDowell, A. A.; Marchesini, S.; Parkinson, D. Y.
2014-06-15
We employ a coded aperture pattern in front of a pixelated charge-coupled device detector to image fluorescent x-rays (6–25 keV) from samples irradiated with synchrotron radiation. Coded apertures encode the angular direction of x-rays and, given a known source plane, allow for a large-numerical-aperture x-ray imaging system. An algorithm was developed to design and fabricate the free-standing No-Two-Holes-Touching aperture pattern. The algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of a ray-tracing technique and confirmed by experiments on standard samples.
Imaging The Genetic Code of a Virus
NASA Astrophysics Data System (ADS)
Graham, Jenna; Link, Justin
2013-03-01
Atomic Force Microscopy (AFM) has allowed scientists to explore physical characteristics of nano-scale materials. However, the challenges that come with such an investigation are rarely expressed. In this research project, a method was developed to image the well-studied DNA of the virus lambda phage. By testing and integrating several sample preparations described in the literature, a quality image of lambda phage DNA can be obtained. In our experiment, we developed a technique using the Veeco Autoprobe CP AFM and a mica substrate with an appropriate absorption buffer of HEPES and NiCl2. This presentation will focus on the development of a procedure to image lambda phage DNA at Xavier University. This work was supported by the John A. Hauck Foundation and Xavier University.
Waliszewski, Przemyslaw
2016-01-01
The subjective evaluation of tumor aggressiveness is a cornerstone of the contemporary tumor pathology. A large intra- and interobserver variability is a known limiting factor of this approach. This fundamental weakness influences the statistical deterministic models of progression risk assessment. It is unlikely that the recent modification of tumor grading according to Gleason criteria for prostate carcinoma will cause a qualitative change and significantly improve the accuracy. The Gleason system does not allow the identification of low-aggressive carcinomas by some precise criteria. The ontological dichotomy implies the application of an objective, quantitative approach for the evaluation of tumor aggressiveness as an alternative. That novel approach must be developed and validated in a manner that is independent of the results of any subjective evaluation. For example, computer-aided image analysis can provide information about the geometry of the spatial distribution of cancer cell nuclei. A series of interrelated complexity measures characterizes unequivocally the complex tumor images. Using those measures, carcinomas can be classified into classes of equivalence and compared with each other. Furthermore, those measures define the quantitative criteria for the identification of low- and high-aggressive prostate carcinomas, information that the subjective approach is not able to provide. The co-application of those complexity measures in cluster analysis leads to the conclusion that either the subjective or objective classification of tumor aggressiveness for prostate carcinomas should comprise at most three grades (or classes). Finally, this set of global fractal dimensions enables a look into the dynamics of the underlying cellular system of interacting cells and the reconstruction of the temporal-spatial attractor based on Takens' embedding theorem. Both computer-aided image analysis and the subsequent fractal synthesis could be performed
Hybrid Compton camera/coded aperture imaging system
Mihailescu, Lucian; Vetter, Kai M.
2012-04-10
A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.
Coded aperture design in mismatched compressive spectral imaging.
Galvis, Laura; Arguello, Henry; Arce, Gonzalo R
2015-11-20
Compressive spectral imaging (CSI) senses a scene by using two-dimensional coded projections such that the number of measurements is far less than that used in spectral scanning-type instruments. An architecture that efficiently implements CSI is the coded aperture snapshot spectral imager (CASSI). A physical limitation of the CASSI is the system resolution, which is determined by the lowest resolution element used in the detector and the coded aperture. Although the final resolution of the system is usually given by the detector, in the CASSI, for instance, the use of a low resolution coded aperture implemented using a digital micromirror device (DMD), which induces the grouping of pixels in superpixels in the detector, is decisive to the final resolution. The mismatch occurs by the differences in the pitch size of the DMD mirrors and focal plane array (FPA) pixels. A traditional solution to this mismatch consists of grouping several pixels in square features, which subutilizes the DMD and the detector resolution and, therefore, reduces the spatial and spectral resolution of the reconstructed spectral images. This paper presents a model for CASSI which admits the mismatch and permits exploiting the maximum resolution of the coding element and the FPA sensor. A super-resolution algorithm and a synthetic coded aperture are developed in order to solve the mismatch. The mathematical models are verified using a real implementation of CASSI. The results of the experiments show a significant gain in spatial and spectral imaging quality over the traditional grouping pixel technique. PMID:26836551
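The pitch mismatch described above can be illustrated with a toy 1-D model: each DMD mirror centre is assigned to the FPA pixel it projects onto. The pitch values below are illustrative, not those of an actual CASSI testbed:

```python
import numpy as np

def mirror_to_pixel(n_mirrors, p_dmd, p_fpa):
    """Index of the FPA pixel that each DMD mirror centre projects onto (1-D)."""
    centers = (np.arange(n_mirrors) + 0.5) * p_dmd
    return (centers // p_fpa).astype(int)

# Matched pitches: every FPA pixel sees exactly two mirrors -- a clean superpixel.
matched = mirror_to_pixel(8, p_dmd=5.4, p_fpa=10.8)
# Mismatched pitches: the mirror-to-pixel grouping becomes irregular.
mismatched = mirror_to_pixel(8, p_dmd=5.4, p_fpa=7.4)
print(matched.tolist())      # [0, 0, 1, 1, 2, 2, 3, 3]
print(mismatched.tolist())   # [0, 1, 1, 2, 3, 4, 4, 5]
```

The traditional remedy forces the regular grouping of the first case (subutilizing the DMD); the paper's model instead admits the irregular second case directly.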
Quantum image coding with a reference-frame-independent scheme
NASA Astrophysics Data System (ADS)
Chapeau-Blondeau, François; Belin, Etienne
2016-07-01
For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.
Lung cancer-a fractal viewpoint.
Lennon, Frances E; Cianci, Gianguido C; Cipriani, Nicole A; Hensing, Thomas A; Zhang, Hannah J; Chen, Chin-Tu; Murgu, Septimiu D; Vokes, Everett E; Vannier, Michael W; Salgia, Ravi
2015-11-01
Fractals are mathematical constructs that show self-similarity over a range of scales and non-integer (fractal) dimensions. Owing to these properties, fractal geometry can be used to efficiently estimate the geometrical complexity, and the irregularity of shapes and patterns observed in lung tumour growth (over space or time), whereas the use of traditional Euclidean geometry in such calculations is more challenging. The application of fractal analysis in biomedical imaging and time series has shown considerable promise for measuring processes as varied as heart and respiratory rates, neuronal cell characterization, and vascular development. Despite the advantages of fractal mathematics and numerous studies demonstrating its applicability to lung cancer research, many researchers and clinicians remain unaware of its potential. Therefore, this Review aims to introduce the fundamental basis of fractals and to illustrate how analysis of fractal dimension (FD) and associated measurements, such as lacunarity (texture) can be performed. We describe the fractal nature of the lung and explain why this organ is particularly suited to fractal analysis. Studies that have used fractal analyses to quantify changes in nuclear and chromatin FD in primary and metastatic tumour cells, and clinical imaging studies that correlated changes in the FD of tumours on CT and/or PET images with tumour growth and treatment responses are reviewed. Moreover, the potential use of these techniques in the diagnosis and therapeutic management of lung cancer are discussed. PMID:26169924
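As a hedged illustration of how a box-counting estimate of fractal dimension works (on a synthetic Sierpinski carpet, not on clinical images): count occupied boxes at shrinking scales and fit the slope of log N(s) against log(1/s).

```python
import numpy as np

def box_count(mask, size):
    """Number of size x size boxes that contain at least one occupied pixel."""
    h, w = mask.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if mask[i:i + size, j:j + size].any():
                count += 1
    return count

pattern = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=bool)
carpet = np.ones((1, 1), dtype=bool)
for _ in range(5):
    carpet = np.kron(carpet, pattern)      # 243 x 243 Sierpinski carpet

sizes = [1, 3, 9, 27, 81]
counts = [box_count(carpet, s) for s in sizes]
slope = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]
print(counts)            # [32768, 4096, 512, 64, 8]
print(round(slope, 3))   # 1.893, i.e. log 8 / log 3
```

For this exactly self-similar set the fitted slope matches the theoretical dimension; for tumour images the fit is over an empirical scaling range.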
Combined-transform coding scheme for medical images
NASA Astrophysics Data System (ADS)
Zhang, Ya-Qin; Loew, Murray H.; Pickholtz, Raymond L.
1991-06-01
Transform coding has been used successfully for radiological image compression in the picture archival and communication system (PACS) and other applications. However, it suffers from the artifact known as the 'blocking effect,' caused by the division into subblocks, which is very undesirable in the clinical environment. In this paper, we propose a combined-transform coding (CTC) scheme to reduce this effect and achieve better subjective performance. In the combined-transform coding scheme, we first divide the image into two sets that have different correlation properties, namely the upper image set (UIS) and the lower image set (LIS). The UIS contains the most significant information and more correlation, and the LIS contains the less significant information. The UIS is compressed noiselessly without division into blocks, and the LIS is coded by conventional block transform coding. Since the correlation in the UIS is largely reduced (without distortion), the inter-block correlation, and hence the 'blocking effect,' is significantly reduced. This paper first describes the proposed CTC scheme and investigates its information-theoretic properties. Then, computer simulation results for a class of AP-view chest x-ray images are presented. A comparison between the CTC scheme and the conventional Discrete Cosine Transform (DCT) and Discrete Walsh-Hadamard Transform (DWHT) is made to demonstrate the performance improvement of the proposed scheme. The advantages of the proposed CTC scheme also include (1) no ringing effect, because no error propagates across the boundary, (2) no additional computation, and (3) the ability to hold distortion below a certain threshold. In addition, we found that the idea of combined coding can also be used in noiseless coding, and a slight improvement in compression performance can be achieved if used properly. Finally, we point out that this scheme has its advantages in medical image transmission over a noisy channel or the packet-switched network in case of
Split field coding: low complexity error-resilient entropy coding for image compression
NASA Astrophysics Data System (ADS)
Meany, James J.; Martens, Christopher J.
2008-08-01
In this paper, we describe split field coding, an approach for low complexity, error-resilient entropy coding which splits code words into two fields: a variable length prefix and a fixed length suffix. Once a prefix has been decoded correctly, then the associated fixed length suffix is error-resilient, with bit errors causing no loss of code word synchronization and only a limited amount of distortion on the decoded value. When the fixed length suffixes are segregated to a separate block, this approach becomes suitable for use with a variety of methods which provide varying protection to different portions of the bitstream, such as unequal error protection or progressive ordering schemes. Split field coding is demonstrated in the context of a wavelet-based image codec, with examples of various error resilience properties, and comparisons to the rate-distortion and computational performance of JPEG 2000.
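A minimal sketch of the prefix/suffix split, assuming a unary prefix that encodes the magnitude category (the paper's actual prefix code tables may differ): the variable-length prefix must decode intact, after which the fixed-length suffix bits are error-resilient because a flipped suffix bit perturbs only one value without losing code word synchronization.

```python
def encode(v):
    """Split a non-negative value into a unary prefix and a fixed-length suffix."""
    k = v.bit_length()                         # magnitude category
    prefix = "1" * k + "0"                     # variable-length part
    suffix = format(v, "b")[1:]                # k-1 fixed bits (leading 1 implicit)
    return prefix, suffix

def decode(bits):
    vals, i = [], 0
    while i < len(bits):
        k = 0
        while bits[i] == "1":                  # read the unary prefix
            k, i = k + 1, i + 1
        i += 1                                 # skip the terminating 0
        if k == 0:
            vals.append(0)
            continue
        v = (1 << (k - 1)) | int(bits[i:i + k - 1] or "0", 2)
        vals.append(v)
        i += k - 1                             # fixed-length suffix consumed
    return vals

stream = "".join(p + s for p, s in map(encode, [0, 5, 12]))
print(decode(stream))                          # [0, 5, 12]
```

Segregating the suffix bits into a separate block, as the paper proposes, lets an unequal-error-protection scheme spend parity bits only on the fragile prefixes.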
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
Orourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
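The constrained optimization can be illustrated with a toy 1-D version: a gradient step on a roughness penalty followed by projection of each sample back into its quantization cell. The MRF model and transform domain of the paper are simplified away; the numbers are illustrative only.

```python
import numpy as np

def restore(q, delta, iters=200, step=0.2):
    """Smooth a dequantized signal while keeping each sample in its quantizer cell."""
    x = q.astype(float).copy()
    lo, hi = q - delta / 2, q + delta / 2
    for _ in range(iters):
        g = np.zeros_like(x)
        g[1:-1] = 2 * x[1:-1] - x[:-2] - x[2:]   # gradient of the roughness penalty
        x = np.clip(x - step * g, lo, hi)        # project back into the cells
    return x

q = np.array([0.0, 0.0, 0.0, 4.0, 4.0, 4.0])     # "blocky" dequantized signal
x = restore(q, delta=2.0)
print(np.round(x, 3).tolist())   # [0.0, 0.5, 1.0, 3.0, 3.5, 4.0]
```

The hard edge softens into a ramp, yet every sample remains consistent with the compressed data, which is the essence of the convex constrained formulation.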
Barker-coded excitation in ophthalmological ultrasound imaging
Zhou, Sheng; Wang, Xiao-Chun; Yang, Jun; Ji, Jian-Jun; Wang, Yan-Qun
2014-01-01
High-frequency ultrasound is an attractive means to obtain fine-resolution images of biological tissues for ophthalmologic imaging. To resolve the tradeoff between axial resolution and detection depth that exists in conventional single-pulse excitation, this study develops a new method which uses 13-bit Barker-coded excitation and a mismatched filter for high-frequency ophthalmologic imaging. A novel imaging platform was designed after trying out various encoding methods. The simulation and experimental results show that the mismatched filter achieves a much higher output main-lobe-to-side-lobe ratio, 9.7 times that of the matched filter. The coded excitation method has significant advantages over the single-pulse excitation system in terms of a lower mechanical index (MI), a higher resolution, and a deeper detection depth, which improve the quality of ophthalmic tissue imaging. Therefore, this method has great value in scientific applications and in the medical market. PMID:25356093
Investigation into How 8th Grade Students Define Fractals
ERIC Educational Resources Information Center
Karakus, Fatih
2015-01-01
The analysis of 8th grade students' concept definitions and concept images can provide information about their mental schema of fractals. There is limited research on students' understanding and definitions of fractals. Therefore, this study aimed to investigate the elementary students' definitions of fractals based on concept image and concept…
Image coding by way of wavelets
NASA Technical Reports Server (NTRS)
Shahshahani, M.
1993-01-01
The application of two wavelet transforms to image compression is discussed. It is noted that the Haar transform, with proper bit allocation, has performance that is visually superior to an algorithm based on a Daubechies filter and to the discrete cosine transform based Joint Photographic Experts Group (JPEG) algorithm at compression ratios exceeding 20:1. In terms of the root-mean-square error, the performance of the Haar transform method is basically comparable to that of the JPEG algorithm. The implementation of the Haar transform can be achieved in integer arithmetic, making it very suitable for applications requiring real-time performance.
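One standard way to realize the Haar transform in integer arithmetic is the lifting form sometimes called the S-transform; the sketch below is illustrative and not necessarily the report's exact implementation. Each pair of samples maps losslessly to a floor-average and a difference.

```python
def haar_fwd(a, b):
    """One integer Haar step: floor average s and difference d, losslessly invertible."""
    d = a - b
    s = b + (d >> 1)           # equals floor((a + b) / 2)
    return s, d

def haar_inv(s, d):
    b = s - (d >> 1)
    a = b + d
    return a, b

row = [9, 7, 3, 5, 6, 6, 2, 8]
pairs = [haar_fwd(row[i], row[i + 1]) for i in range(0, len(row), 2)]
print(pairs)                   # [(8, 2), (4, -2), (6, 0), (5, -6)]
rec = [x for s, d in pairs for x in haar_inv(s, d)]
print(rec == row)              # True: exact integer reconstruction
```

Because every operation is an integer add, subtract, or shift, the transform suits real-time hardware and guarantees bit-exact inversion.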
Multiple wavelet-tree-based image coding and robust transmission
NASA Astrophysics Data System (ADS)
Cao, Lei; Chen, Chang Wen
2004-10-01
In this paper, we present techniques based on multiple wavelet-tree coding for robust image transmission. The algorithm of set partitioning in hierarchical trees (SPIHT) is a state-of-the-art technique for image compression. This variable length coding (VLC) technique, however, is extremely sensitive to channel errors. To improve the error resilience capability while keeping the high source coding efficiency of VLC, we propose to encode each wavelet tree or group of wavelet trees using the SPIHT algorithm independently. Instead of encoding the entire image as one bitstream, multiple bitstreams are generated. Therefore, error propagation is limited within each individual bitstream. Two methods based on subsampling and human visual sensitivity are proposed to group the wavelet trees. The multiple bitstreams are further protected by rate compatible punctured convolutional (RCPC) codes. Unequal error protection is provided for both different bitstreams and different bit segments inside each bitstream. We also investigate the improvement of error resilience through error resilient entropy coding (EREC) and wavelet tree coding when channels are slightly corruptive. A simple post-processing technique is also proposed to alleviate the effect of residual errors. We demonstrate through simulations that systems with these techniques can achieve much better performance than systems transmitting a single bitstream in noisy environments.
Distributed wavefront coding for wide angle imaging system
NASA Astrophysics Data System (ADS)
Larivière-Bastien, Martin; Zhang, Hu; Thibault, Simon
2011-10-01
The emerging paradigm of imaging systems, known as wavefront coding, which employs joint optimization of both the optical system and the digital post-processing system, has not only increased the degrees of design freedom but also brought several significant system-level benefits. The effectiveness of wavefront coding has been demonstrated by several proof-of-concept systems in the reduction of focus-related aberrations and extension of depth of focus. While previous research on wavefront coding was mainly targeted at imaging systems having a small or modest field of view (FOV), we present a preliminary study on wavefront coding applied to panoramic optical systems. Unlike traditional wavefront coding systems, which only require the constancy of the modulation transfer function (MTF) over an extended focus range, wavefront-coded panoramic systems particularly emphasize the mitigation of significant off-axis aberrations such as field curvature, coma, and astigmatism. The restrictions of using a traditional generalized cubic polynomial pupil phase mask for wide angle systems are studied in this paper. It is shown that a traditional approach can be used when the variation of the off-axis aberrations remains modest. Consequently, we propose to study how a distributed wavefront coding approach, where two surfaces are used for encoding the wavefront, can be applied to wide angle lenses. A few cases designed using Zemax are presented and discussed.
Image statistics decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Pitt, G. H., III; Swanson, L.; Yuen, J. H.
1987-01-01
It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so the feasibility of it and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.
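The exploited property, that adjacent pixels have similar grey levels, can be quantified by comparing the empirical entropy of raw samples with that of neighbour differences. The scan line below is synthetic, not Voyager data:

```python
import numpy as np

def entropy(values):
    """Empirical entropy in bits per sample of a sequence of integer values."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)
row = np.cumsum(rng.integers(-2, 3, size=4096)) + 128   # smooth synthetic scan line
diffs = np.diff(row)
print(entropy(row) > entropy(diffs))    # True: differences need far fewer bits
```

It is this gap between the raw-value and difference statistics that a decoder can exploit as side information when resolving near-ties between candidate code paths.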
Medical image registration using sparse coding of image patches.
Afzali, Maryam; Ghaffari, Aboozar; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid
2016-06-01
Image registration is a basic task in medical image processing applications like group analysis and atlas construction. Similarity measure is a critical ingredient of image registration. Intensity distortion of medical images is not considered in most previous similarity measures. Therefore, in the presence of bias field distortions, they do not generate an acceptable registration. In this paper, we propose a sparse based similarity measure for mono-modal images that considers non-stationary intensity and spatially-varying distortions. The main idea behind this measure is that the aligned image is constructed by an analysis dictionary trained using the image patches. For this purpose, we use "Analysis K-SVD" to train the dictionary and find the sparse coefficients. We utilize image patches to construct the analysis dictionary and then we employ the proposed sparse similarity measure to find a non-rigid transformation using free form deformation (FFD). Experimental results show that the proposed approach is able to robustly register 2D and 3D images in both simulated and real cases. The proposed method outperforms other state-of-the-art similarity measures and decreases the transformation error compared to the previous methods. Even in the presence of bias field distortion, the proposed method aligns images without any preprocessing. PMID:27085311
Construction of fractal nanostructures based on Kepler-Shubnikov nets
NASA Astrophysics Data System (ADS)
Ivanov, V. V.; Talanov, V. M.
2013-05-01
A system of information codes for deterministic fractal lattices and sets of multifractal curves is proposed. An iterative modular design was used to obtain a series of deterministic fractal lattices with generators in the form of fragments of 2D structures and a series of multifractal curves (based on some Kepler-Shubnikov nets) having Cantor set properties. The main characteristics of fractal structures and their lacunar spectra are determined. A hierarchical principle is formulated for modules of regular fractal structures.
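Iterative modular design can be illustrated in miniature by substituting a generator fragment into itself; below, the middle-thirds Cantor set is built as a 0/1 occupancy string, a toy stand-in for the paper's 2D generators:

```python
def iterate(generator, level):
    """Substitute the generator into itself `level` times, starting from '1'."""
    s = "1"
    for _ in range(level):
        s = "".join(generator if c == "1" else "0" * len(generator) for c in s)
    return s

print(iterate("101", 3))   # 101000101000000000101000101
```

After each iteration the occupied fraction shrinks by 2/3, so the occupied cells number 2^level out of 3^level, giving the Cantor dimension log 2 / log 3 ≈ 0.631.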
A Brief Historical Introduction to Fractals and Fractal Geometry
ERIC Educational Resources Information Center
Debnath, Lokenath
2006-01-01
This paper deals with a brief historical introduction to fractals, fractal dimension and fractal geometry. Many fractals including the Cantor fractal, the Koch fractal, the Minkowski fractal, the Mandelbrot and Given fractal are described to illustrate self-similar geometrical figures. This is followed by the discovery of dynamical systems and…
NASA Astrophysics Data System (ADS)
Kimura, Michio; Kuranishi, Makoto; Sukenobu, Yoshiharu; Watanabe, Hiroki; Nakajima, Takashi; Morimura, Shinya; Kabata, Shun
2002-05-01
The DICOM standard includes non-image information such as image study ordering data and performed procedure data, which are used for sharing information among HIS/RIS/PACS/modalities and are essential for IHE. In order to bring such parts of the DICOM standard into force in Japan, a joint committee of JIRA and JAHIS (vendor associations) established the JJ1017 management guideline. It specifies, for example, which items are legally required in Japan while remaining optional in the DICOM standard. Then, what should be used for the examination-type, regional, and directional codes? Our investigation revealed that the DICOM tables do not include items that are sufficiently detailed for use in Japan. This is because radiology departments (radiologists) in the US exercise greater discretion in image examination than in Japan, and the contents of orders from requesting physicians do not include the extra details used in Japan. Therefore, we have generated the JJ1017 codes for these three code types based on the JJ1017 guideline. The stem part of the JJ1017 code partially employs the DICOM codes in order to remain in line with the DICOM standard. The JJ1017 codes are to be included not only in the IHE-J specifications but also in Ministry recommendations on health data exchange.
Coding of images by methods of a spline interpolation
NASA Astrophysics Data System (ADS)
Kozhemyako, Vladimir P.; Maidanuik, V. P.; Etokov, I. A.; Zhukov, Konstantin M.; Jorban, Saleh R.
2000-06-01
Image coding methods that contain an interpolation stage usually use linear methods for forming the components. However, given the huge increase in computer speed and in hardware integration density, more complicated interpolation methods, in particular spline interpolation, are of special interest. A spline interpolation is an approximation performed by a spline, a curve consisting of polynomial segments, for which cubic parabolas are usually used. In this article, the image is analyzed with a 5 x 5 aperture, yielding a decimated low-frequency component of the image: one base sample per 5 x 5 fragment. The discarded source samples are restored by spline interpolation; the samples of the high-frequency image component are then formed by subtracting the low-frequency component from the initial image and quantizing the result. At the final stage, Huffman coding is performed to remove statistical redundancy. An extensive set of experiments with various images showed that the compression factor lies in the range of 10-70, which for the majority of test images exceeds the compression factor of JPEG-standard applications at the same image quality. The research shows that spline approximation improves the restored image quality and the compression factor compared with linear interpolation. Encoding program modules have been implemented for BMP-format files on the Windows and MS-DOS platforms.
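A sketch of the pipeline in 1-D with hypothetical parameters; `np.interp` (linear) stands in for the cubic-spline interpolation of the article, and the Huffman stage is omitted:

```python
import numpy as np

def encode(signal, step=5, q=4):
    """Keep one sample per block, quantize the residual against the interpolant."""
    knots = signal[::step]                               # low-frequency component
    xs = np.arange(len(signal))
    low = np.interp(xs, xs[::step], knots)
    residual = np.round((signal - low) / q).astype(int)  # quantized high band
    return knots, residual          # in practice both would then be Huffman-coded

def decode(knots, residual, step=5, q=4):
    xs = np.arange(len(residual))
    low = np.interp(xs, xs[::step], knots)
    return low + residual * q

rng = np.random.default_rng(0)
sig = np.cumsum(rng.standard_normal(50))         # smooth-ish test signal
rec = decode(*encode(sig))
print(float(np.max(np.abs(rec - sig))) <= 2.0)   # True: error bounded by q/2
```

Swapping a cubic spline in for `np.interp` shrinks the residual on smooth images, which is exactly where the article's compression gain over linear interpolation comes from.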
Investigation of the image coding method for three-dimensional range-gated imaging
NASA Astrophysics Data System (ADS)
Laurenzis, Martin; Bacher, Emmanuel; Schertzer, Stéphane; Christnacher, Frank
2011-11-01
In this publication we investigate the image coding method for 3D range-gated imaging. This method is based on multiple exposure of range-gated images to enable a coding of ranges in a limited number of images. For instance, it is possible to enlarge the depth mapping range by a factor of 12 by the utilization of 3 images and specific 12T image coding sequences. Further, in this paper we present a node-model to determine the coding sequences and to dramatically reduce the time of calculation of the number of possible sequences. Finally, we demonstrate and discuss the application of 12T sequences with different clock periods T = 200 ns to 400 ns.
Low bit rate coding of Earth science images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1993-01-01
In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable-rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.
Compressing industrial computed tomography images by means of contour coding
NASA Astrophysics Data System (ADS)
Jiang, Haina; Zeng, Li
2013-10-01
An improved method for compressing industrial computed tomography (CT) images is presented. To achieve higher resolution and precision, the amount of industrial CT data has become larger and larger. Considering that industrial CT images are approximately piecewise constant, we develop a compression method based on contour coding. The traditional contour-based method for compressing gray images usually needs two steps, contour extraction first and then compression, which hurts compression efficiency. We therefore merge the Freeman encoding idea into an improved method for two-dimensional contour extraction (2-D-IMCE) to improve compression efficiency. By exploiting continuity and logical linking, preliminary contour codes are obtained directly and simultaneously with the contour extraction. In this way, the two steps of the traditional contour-based compression method are merged into one. Finally, Huffman coding is employed to further losslessly compress the preliminary contour codes. Experimental results show that this method obtains a good compression ratio while keeping satisfactory quality in the compressed images.
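A minimal illustration of the Freeman chain-code idea that the method builds on (the 2-D-IMCE extractor itself is not reproduced here): an ordered contour is stored as a start point plus a sequence of 8-neighbour direction codes, which is what a later Huffman stage would compress.

```python
# 8-neighbour Freeman directions: code index -> (dx, dy)
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def freeman_chain(points):
    """Chain-code an ordered contour as a list of direction codes 0-7."""
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        code.append(DIRS.index((x1 - x0, y1 - y0)))
    return code

def decode_chain(start, code):
    """Recover the contour points from the start point and the chain code."""
    pts = [start]
    for c in code:
        dx, dy = DIRS[c]
        x, y = pts[-1]
        pts.append((x + dx, y + dy))
    return pts

# a tiny closed square contour
contour = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
chain = freeman_chain(contour)
```

Each step costs 3 bits before entropy coding, versus two full coordinates per point in the raw representation.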
Digital Image Analysis for DETECHIP(®) Code Determination.
Lyon, Marcus; Wilson, Mark V; Rouhier, Kerry A; Symonsbergen, David J; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E; Sikich, Sharmin M; Jackson, Abby
2012-08-01
DETECHIP(®) is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP(®) used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP(®). Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods. PMID:25267940
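The RGB measurement step can be sketched without ImageJ; `mean_rgb` below is a hypothetical helper operating on a nested-list image, not the authors' macro:

```python
def mean_rgb(image, box):
    """Average (R, G, B) over a rectangular region of a row-major pixel grid.

    `image` is a list of rows of (r, g, b) tuples;
    `box` is (x0, y0, x1, y1) with exclusive upper bounds.
    """
    x0, y0, x1, y1 = box
    pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

# a 2x2 toy "scanned well" region
well = [[(200, 40, 40), (210, 50, 30)],
        [(190, 60, 50), (200, 50, 40)]]
avg = mean_rgb(well, (0, 0, 2, 2))
```

Averaging over a region of interest rather than reading single pixels is what makes scanner-based codes reproducible against sensor noise.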
NASA Astrophysics Data System (ADS)
Boichuk, T. M.; Bachinskiy, V. T.; Vanchuliak, O. Ya.; Minzer, O. P.; Garazdiuk, M.; Motrich, A. V.
2014-08-01
This research presents the results of an investigation of laser polarization fluorescence of biological layers (histological sections of the myocardium). The polarization structure of autofluorescence images of biological tissue layers was detected and investigated. A model describing the formation of polarization-inhomogeneous autofluorescence images of optically anisotropic biological layers is proposed. On this basis, the method of laser autofluorescence polarimetry is justified analytically and tested experimentally. The effectiveness of this method in the postmortem diagnosis of infarction is analyzed. Objective criteria (statistical moments) for the differentiation of autofluorescence images of histological sections of the myocardium were defined, and the operational characteristics (sensitivity, specificity, accuracy) of the technique were determined.
NASA Astrophysics Data System (ADS)
Wuorinen, Charles
2015-03-01
Any of the arts may produce exemplars that have fractal characteristics. There may be fractal painting, fractal poetry, and the like. But these will always be specific instances, not necessarily displaying intrinsic properties of the art-medium itself. Only music, I believe, of all the arts possesses an intrinsically fractal character, so that its very nature is fractally determined. Thus, it is reasonable to assert that any instance of music is fractal...
Skeleton-based morphological coding of binary images.
Kresch, R; Malah, D
1998-01-01
This paper presents new properties of the discrete morphological skeleton representation of binary images, along with a novel coding scheme for lossless binary image compression that is based on these properties. Following a short review of the theoretical background, two sets of new properties of the discrete morphological skeleton representation of binary images are proved. The first one leads to the conclusion that only the radii of skeleton points belonging to a subset of the ultimate erosions are needed for perfect reconstruction. This corresponds to a lossless sampling of the quench function. The second set of new properties is related to deterministic prediction of skeletonal information in a progressive transmission scheme. Based on the new properties, a novel coding scheme for binary images is presented. The proposed scheme is suitable for progressive transmission and fast implementation. Computer simulations, also presented, show that the proposed coding scheme substantially improves the results obtained by previous skeleton-based coders, and performs better than classical coders, including run-length/Huffman, quadtree, and chain coders. For facsimile images, its performance can be placed between the modified read (MR) method (K=4) and modified modified read (MMR) method. PMID:18276206
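The skeleton decomposition and its exact-reconstruction property that the coder relies on can be illustrated with a small sketch (Lantuejoul's formula with a 3x3 cross structuring element; this is generic discrete morphology, not the paper's coder):

```python
CROSS = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # 3x3 cross structuring element

def erode(X, B):
    """Points whose whole B-neighbourhood lies inside X."""
    return {p for p in X if all((p[0] + dx, p[1] + dy) in X for dx, dy in B)}

def dilate(X, B):
    """Union of B translated to every point of X."""
    return {(x + dx, y + dy) for x, y in X for dx, dy in B}

def skeleton_subsets(X):
    """Lantuejoul decomposition: S_n = E_n minus open(E_n), E_n the n-fold erosion."""
    subsets = []
    E = set(X)
    while E:
        opened = dilate(erode(E, CROSS), CROSS)   # morphological opening of E_n
        subsets.append(E - opened)
        E = erode(E, CROSS)
    return subsets

def rebuild(subsets):
    """Exact reconstruction: X is the union of each S_n dilated n times."""
    out = set()
    for n, S in enumerate(subsets):
        D = set(S)
        for _ in range(n):
            D = dilate(D, CROSS)
        out |= D
    return out

square = {(x, y) for x in range(5) for y in range(5)}
parts = skeleton_subsets(square)   # skeleton points grouped by radius n
```

The coder transmits only the skeleton points and their radii; the reconstruction union above is what guarantees the compression is lossless.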
Fractal signatures in the aperiodic Fibonacci grating.
Verma, Rupesh; Banerjee, Varsha; Senthilkumaran, Paramasivam
2014-05-01
The Fibonacci grating (FbG) is an archetypal example of aperiodicity and self-similarity. While aperiodicity distinguishes it from a fractal, self-similarity identifies it with a fractal. Our paper investigates the outcome of these complementary features on the FbG diffraction profile (FbGDP). We find that the FbGDP has unique characteristics (e.g., no reduction in intensity with increasing generations), in addition to fractal signatures (e.g., a non-integer fractal dimension). These make the Fibonacci architecture potentially useful in image forming devices and other emerging technologies. PMID:24784044
Interplay between JPEG-2000 image coding and quality estimation
NASA Astrophysics Data System (ADS)
Pinto, Guilherme O.; Hemami, Sheila S.
2013-03-01
Image quality and utility estimators aspire to quantify, respectively, the perceptual resemblance and the usefulness of a distorted image when compared to a reference natural image. Image coders, such as JPEG-2000, traditionally aspire to allocate the available bits to maximize the perceptual resemblance of the compressed image when compared to a reference uncompressed natural image. Specifically, this can be accomplished by allocating the available bits to minimize the overall distortion, as computed by a given quality estimator. This paper applies five image quality and utility estimators, SSIM, VIF, MSE, NICE and GMSE, within a JPEG-2000 encoder for rate-distortion optimization, to obtain new insights on how to improve JPEG-2000 image coding for quality and utility applications, as well as to improve the understanding of the quality and utility estimators used in this work. This work develops a rate-allocation algorithm for arbitrary quality and utility estimators within the Post-Compression Rate-Distortion Optimization (PCRD-opt) framework in JPEG-2000 image coding. Performance of the JPEG-2000 image coder when used with a variety of utility and quality estimators is then assessed. The estimators fall into two broad classes, magnitude-dependent (MSE, GMSE and NICE) and magnitude-independent (SSIM and VIF). They further differ in their use of the low-frequency image content in computing their estimates. The impact of these computational differences is analyzed across a range of images and bit rates. In general, performance of the JPEG-2000 coder below 1.6 bits/pixel with any of these estimators is highly content dependent, with the most relevant content being the amount of texture in an image and whether the strongest gradients in an image correspond to the main contours of the scene. Above 1.6 bits/pixel, all estimators produce visually equivalent images. As a result, the MSE estimator provides the most consistent performance across all images, while specific
Image Coding By Vector Quantization In A Transformed Domain
NASA Astrophysics Data System (ADS)
Labit, C.; Marescq, J. P.
1986-05-01
TV images are coded using vector quantization in a transformed domain. The method exploits the spatial redundancy of small 4x4 blocks of pixels: first, a DCT (or Hadamard) transform is performed on these blocks. A classification algorithm ranks them into classes based on visual and transform properties. For each class, the high-energy coefficients are retained, and a codebook is built by vector quantization for the remaining AC part of the transformed blocks. The codewords are referenced by an index. Each block is then coded by specifying its DC coefficient and the associated index.
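The transform step can be sketched directly from the DCT-II definition (an illustrative orthonormal 4x4 DCT, not the original codec; the classification and codebook stages are omitted):

```python
import math

N = 4  # block size

def dct2(block):
    """2-D DCT-II of an N x N block, orthonormal scaling."""
    def a(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2*x + 1) * u * math.pi / (2*N))
                          * math.cos((2*y + 1) * v * math.pi / (2*N)))
            out[u][v] = a(u) * a(v) * s
    return out

flat = [[1.0] * N for _ in range(N)]   # a constant block
coeffs = dct2(flat)
# coeffs[0][0] is the DC coefficient; the rest form the AC part
```

For a constant block all the energy lands in the DC coefficient, which is why the codec transmits DC separately and vector-quantizes only the AC remainder.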
Colored coded-apertures for spectral image unmixing
NASA Astrophysics Data System (ADS)
Vargas, Hector M.; Arguello Fuentes, Henry
2015-10-01
Hyperspectral remote sensing technology provides detailed spectral information from every pixel in an image. Due to the low spatial resolution of hyperspectral image sensors, and the presence of multiple materials in a scene, each pixel can contain more than one spectral signature. Therefore, endmember extraction is used to determine the pure spectral signatures of the mixed materials and their corresponding abundance maps in a remotely sensed hyperspectral scene. Advanced endmember extraction algorithms have been proposed to solve this linear problem, called spectral unmixing. However, such techniques require the acquisition of the complete hyperspectral data cube to perform the unmixing procedure. Researchers have shown that using colored coded-apertures improves the quality of reconstruction in compressive spectral imaging (CSI) systems under compressive sensing (CS) theory. This work aims at developing a compressive supervised spectral unmixing scheme to estimate the endmembers and the abundance map from compressive measurements. The compressive measurements are acquired by using colored coded-apertures in a compressive spectral imaging system. Then a numerical procedure estimates the sparse vector representation in a 3D dictionary by solving a constrained sparse optimization problem. The 3D dictionary is formed by a 2-D wavelet basis and a known endmember spectral library, where the wavelet basis is used to exploit the spatial information. The colored coded-apertures are designed such that the sensing matrix satisfies the restricted isometry property with high probability. Simulations show that the proposed scheme attains comparable results to the full data cube unmixing technique, but using fewer measurements.
Depth-controlled 3D TV image coding
NASA Astrophysics Data System (ADS)
Chiari, Armando; Ciciani, Bruno; Romero, Milton; Rossi, Ricardo
1998-04-01
Conventional 3D-TV codecs processing one down-compatible (either left or right) channel may optionally include extraction of the disparity field associated with the stereo pairs to support coding of the complementary channel. A two-fold improvement over such approaches is proposed in this paper by exploiting 3D features retained in the stereo pairs to reduce the redundancies in both channels, according to their visual sensitivity. Through an a priori analysis of the disparity field, our coding scheme separates a region of interest from the foreground/background of the reproduced volume in order to code them selectively based on their visual relevance. The region of interest is identified here as the one in focus for the shooting device. By suitably scaling the DCT coefficients in such a way that precision is reduced for the image blocks lying in less relevant areas, our approach aims at reducing the signal energy in the background/foreground patterns while retaining finer detail in the more relevant image portions. From an implementation point of view, it is worth noting that the proposed system keeps its surplus processing power on the encoder side only. Simulation results show improvements such as better image quality at a given transmission bit rate, or graceful quality degradation of the reconstructed images with decreasing data rates.
High transparency coded apertures in planar nuclear medicine imaging.
Starfield, David M; Rubin, David M; Marwala, Tshilidzi
2007-01-01
Coded apertures provide an alternative to the collimators of nuclear medicine imaging, and advances in the field have lessened the artifacts that are associated with the near-field geometry. Thickness of the aperture material, however, results in a decoded image with thickness artifacts, and constrains both image resolution and the available manufacturing techniques. Thus in theory, thin apertures are clearly desirable, but high transparency leads to a loss of contrast in the recorded data. Coupled with the quantization effects of detectors, this leads to significant noise in the decoded image. This noise must be dependent on the bit-depth of the gamma camera. If there are a sufficient number of measurable values, high transparency need not adversely affect the signal-to-noise ratio. This novel hypothesis is tested by means of a ray-tracing computer simulator. The simulation results presented in the paper show that replacing a highly opaque coded aperture with a highly transparent aperture, simulated with an 8-bit gamma camera, worsens the root-mean-square error measurement. However, when simulated with a 16-bit gamma camera, a highly transparent coded aperture significantly reduces both thickness artifacts and the root-mean-square error measurement. PMID:18002997
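The claimed interplay between aperture transparency and detector bit depth can be sketched with a toy model (the 90% transparency, the contrast-compression recording model, and the rescaling decode step are assumptions for illustration, not the paper's simulator):

```python
import math
import random

def quantize(x, bits):
    """Round x in [0, 1] onto a uniform grid with 2**bits levels."""
    levels = 2 ** bits - 1
    return round(x * levels) / levels

def decoded_rmse(bits, truth, transparency):
    """RMSE after recording through a transparent aperture and quantizing."""
    err2 = 0.0
    for x in truth:
        # high transparency squeezes the recorded contrast into a narrow band
        recorded = transparency + (1 - transparency) * x
        decoded = (quantize(recorded, bits) - transparency) / (1 - transparency)
        err2 += (decoded - x) ** 2
    return math.sqrt(err2 / len(truth))

random.seed(0)
scene = [random.random() for _ in range(2000)]
rmse8 = decoded_rmse(8, scene, 0.9)    # 8-bit camera, 90%-transparent aperture
rmse16 = decoded_rmse(16, scene, 0.9)  # 16-bit camera, same aperture
```

Rescaling amplifies the quantization step by 1/(1 - transparency), so the 8-bit error is substantial while the 16-bit error is negligible, mirroring the paper's conclusion.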
Microbialites on Mars: a fractal analysis of the Athena's microscopic images
NASA Astrophysics Data System (ADS)
Bianciardi, G.; Rizzo, V.; Cantasano, N.
2015-10-01
The Mars Exploration Rovers investigated Martian plains where laminated sedimentary rocks are present. The Athena morphological investigation [1] showed microstructures organized in intertwined filaments of microspherules: a texture we have also found in samples of terrestrial (biogenic) stromatolites and other microbialites, but not in pseudo-abiogenic stromatolites. We performed a quantitative image analysis in order to compare 50 microbialite images with 50 rover (Opportunity and Spirit) images (approximately 30,000 microstructures in each set). Contours were extracted and morphometric indexes obtained: geometric and algorithmic complexity, entropy, tortuosity, minimum and maximum diameters. The terrestrial and Martian textures proved to be multifractal. Mean values and confidence intervals from the Martian images overlapped perfectly with those from the terrestrial samples; the probability of this occurring by chance was less than 1/2^8, p<0.004. Our work shows evidence of a widespread presence of microbialites in the Martian outcrops: i.e., the presence of unicellular life on ancient Mars, when, without any doubt, liquid water flowed on the Red Planet.
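A box-counting estimate of fractal dimension, the kind of morphometric index used in analyses like this, can be sketched as follows (illustrative only; the paper's multifractal analysis is more elaborate):

```python
import math

def box_count_dimension(points, sizes):
    """Estimate fractal dimension as the slope of log N(s) versus log (1/s)."""
    xs, ys = [], []
    for s in sizes:
        # count occupied boxes of side s
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    # least-squares slope of the log-log plot
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

line = [(i, i) for i in range(1024)]                    # a 1-D structure
d_line = box_count_dimension(line, [2, 4, 8, 16, 32])
plane = [(i, j) for i in range(64) for j in range(64)]  # a filled 2-D patch
d_plane = box_count_dimension(plane, [2, 4, 8, 16])
```

A contour with dimension strictly between these two extremes, stable across scale ranges, is the kind of signature compared between the terrestrial and Martian image sets.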
Fractal Characterization of Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Qiu, Hon-Iie; Lam, Nina Siu-Ngan; Quattrochi, Dale A.; Gamon, John A.
1999-01-01
Two Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral images selected from the Los Angeles area, one representing an urban and the other a rural landscape, were used to examine spatial complexity across the entire spectrum of the remote sensing data. Using the ICAMS (Image Characterization And Modeling System) software, we computed fractal dimension values via the isarithm and triangular prism methods for all 224 bands in the two AVIRIS scenes. The resultant fractal dimensions reflect changes in image complexity across the spectral range of the hyperspectral images. Both the isarithm and triangular prism methods detect unusually high D values in the spectral bands that fall within the atmospheric absorption and scattering zones, where signal-to-noise ratios are low. Fractal dimensions for the urban area were higher than for the rural landscape, and the differences between the resulting D values are more distinct in the visible bands. The triangular prism method is sensitive to a few random speckles in the images, leading to a lower dimensionality. On the contrary, the isarithm method ignores the speckles and focuses on the major variation dominating the surface, resulting in a higher dimension. The fractal curves plotted for the entire bandwidth range of the hyperspectral images could thus be used to distinguish landscape types as well as to screen noisy bands.
Coded aperture imaging with a HURA coded aperture and a discrete pixel detector
NASA Astrophysics Data System (ADS)
Byard, Kevin
An investigation into the gamma-ray imaging properties of a hexagonal uniformly redundant array (HURA) coded aperture and a detector consisting of discrete pixels constituted the major research effort. Such a system offers distinct advantages for the development of advanced gamma-ray astronomical telescopes in terms of the provision of high-quality sky images in conjunction with an imager plane that has the capacity to reject background noise efficiently. Much of the research was performed as part of the European Space Agency (ESA) sponsored study into a prospective space astronomy mission, GRASP. The effort involved both computer simulations and a series of laboratory test images. A detailed analysis of the system point spread function (SPSF) of imaging planes which incorporate discrete pixel arrays is presented, and the imaging quality is quantified in terms of the signal-to-noise ratio (SNR). Computer simulations of weak point sources in the presence of detector background noise were also investigated. Theories developed during the study were evaluated by a series of experimental measurements with a Co-57 gamma-ray point source, an Anger camera detector, and a rotating HURA mask. These tests were complemented by computer simulations designed to reproduce the experimental conditions as closely as possible. The 60-degree antisymmetry property of HURAs was also employed to remove noise due to detector systematic effects present in the experimental images, rendering a more realistic comparison of the laboratory tests with the computer simulations. Plateau removal and weighted deconvolution techniques were also investigated as methods for reducing the coding-error noise associated with the gamma-ray images.
Investigating Lossy Image Coding Using the PLHaar Transform
Senecal, J G; Lindstrom, P; Duchaineau, M A; Joy, K I
2004-11-16
We developed the Piecewise-Linear Haar (PLHaar) transform, an integer wavelet-like transform. PLHaar does not have dynamic range expansion, i.e. it is an n-bit to n-bit transform. To our knowledge PLHaar is the only reversible n-bit to n-bit transform that is suitable for lossy and lossless coding. We are investigating PLHaar's use in lossy image coding. Preliminary results from thresholding transform coefficients show that PLHaar does not produce objectionable artifacts like prior n-bit to n-bit transforms, such as the transform of Chao et al. (CFH). Also, at lower bitrates PLHaar images have increased contrast. For a given set of CFH and PLHaar coefficients with equal entropy, the PLHaar reconstruction is more appealing, although the PSNR may be lower.
Preconditioning for multiplexed imaging with spatially coded PSFs.
Horisaki, Ryoichi; Tanida, Jun
2011-06-20
We propose a preconditioning method to improve the convergence of iterative reconstruction algorithms in multiplexed imaging based on convolution-based compressive sensing with spatially coded point spread functions (PSFs). The system matrix is converted to improve the condition number with a preconditioner matrix. The preconditioner matrix is calculated by Tikhonov regularization in the frequency domain. The method was demonstrated with simulations and an experiment involving a range detection system with a grating based on the multiplexed imaging framework. The results of the demonstrations showed improved reconstruction fidelity by using the proposed preconditioning method. PMID:21716495
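The frequency-domain Tikhonov idea can be sketched in one dimension (a toy circular-convolution system, not the authors' multiplexed-imaging matrix; the PSF taps and the regularization weight are assumptions for illustration):

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * math.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def circ_conv(x, h):
    """Circular convolution: the measurement model y = h * x."""
    n = len(x)
    return [sum(x[k] * h[(i - k) % n] for k in range(n)) for i in range(n)]

def tikhonov_inverse(h, lam):
    """Per-frequency regularized inverse: G = conj(H) / (|H|^2 + lam)."""
    return [Hj.conjugate() / (abs(Hj) ** 2 + lam) for Hj in dft(h)]

def deconvolve(y, G):
    Y = dft(y)
    return [z.real for z in idft([g * yj for g, yj in zip(G, Y)])]

n = 16
x = [0.0] * n
x[3], x[9] = 1.0, 0.5                  # a sparse "scene"
h = [0.0] * n
h[0], h[1], h[-1] = 0.6, 0.25, 0.15    # asymmetric PSF, no spectral zeros
y = circ_conv(x, h)                    # multiplexed measurement
x_hat = deconvolve(y, tikhonov_inverse(h, 1e-9))
```

Because the PSF spectrum is bounded away from zero, a tiny regularizer suffices here; in the paper the same construction serves as a preconditioner that flattens the system's condition number before iterative reconstruction.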
Computational radiology and imaging with the MCNP Monte Carlo code
Estes, G.P.; Taylor, W.M.
1995-05-01
MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP`s current uses and capabilities, as well as envisioned improvements that would further enhance MCNP role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.
157km BOTDA with pulse coding and image processing
NASA Astrophysics Data System (ADS)
Qian, Xianyang; Wang, Zinan; Wang, Song; Xue, Naitian; Sun, Wei; Zhang, Li; Zhang, Bin; Rao, Yunjiang
2016-05-01
A repeater-less Brillouin optical time-domain analyzer (BOTDA) with 157.68km sensing range is demonstrated, using the combination of random fiber laser Raman pumping and low-noise laser-diode-Raman pumping. With optical pulse coding (OPC) and Non Local Means (NLM) image processing, temperature sensing with +/-0.70°C uncertainty and 8m spatial resolution is experimentally demonstrated. The image processing approach has been proved to be compatible with OPC, and it further increases the figure-of-merit (FoM) of the system by 57%.
Correlated statistical uncertainties in coded-aperture imaging
NASA Astrophysics Data System (ADS)
Fleenor, Matthew C.; Blackston, Matthew A.; Ziock, Klaus P.
2015-06-01
In nuclear security applications, coded-aperture imagers can provide a wealth of information regarding the attributes of both the radioactive and nonradioactive components of the objects being imaged. However, for optimum benefit to the community, spatial attributes need to be determined in a quantitative and statistically meaningful manner. To address a deficiency of quantifiable errors in coded-aperture imaging, we present uncertainty matrices containing covariance terms between image pixels for MURA mask patterns. We calculated these correlated uncertainties as functions of variation in mask rank, mask pattern over-sampling, and whether or not anti-mask data are included. Utilizing simulated point source data, we found that correlations arose when two or more image pixels were summed. Furthermore, we found that the presence of correlations was heightened by the process of over-sampling, while correlations were suppressed by the inclusion of anti-mask data and with increased mask rank. As an application of this result, we explored how statistics-based alarming is impacted in a radiological search scenario.
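A MURA pattern of prime rank can be generated from quadratic residues (the standard Gottesman-Fenimore construction, assumed here to match the mask family whose uncertainties are analyzed):

```python
def mura(p):
    """MURA mask of prime rank p: 1 = open cell, 0 = opaque cell."""
    qr = {(k * k) % p for k in range(1, p)}        # quadratic residues mod p
    C = [1 if q in qr else -1 for q in range(p)]   # Legendre-symbol-like sequence
    A = [[0] * p for _ in range(p)]
    for i in range(p):
        for j in range(p):
            if i == 0:
                A[i][j] = 0                        # first row closed
            elif j == 0:
                A[i][j] = 1                        # first column open
            elif C[i] * C[j] == 1:
                A[i][j] = 1
            else:
                A[i][j] = 0
    return A

mask = mura(5)   # smallest non-trivial example; rank 5
```

Roughly half the cells are open, which gives the flat sidelobes that make the correlation-based decoding (and hence the pixel-covariance analysis above) tractable.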
Techniques for region coding in object-based image compression
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.
2004-01-01
Object-based compression (OBC) is an emerging technology that combines region segmentation and coding to produce a compact representation of a digital image or video sequence. Previous research has focused on a variety of segmentation and representation techniques for regions that comprise an image. The author has previously suggested [1] partitioning of the OBC problem into three steps: (1) region segmentation, (2) region boundary extraction and compression, and (3) region contents compression. A companion paper [2] surveys implementationally feasible techniques for boundary compression. In this paper, we analyze several strategies for region contents compression, including lossless compression, lossy VPIC, EPIC, and EBLAST compression, wavelet-based coding (e.g., JPEG-2000), as well as texture matching approaches. This paper is part of a larger study that seeks to develop highly efficient compression algorithms for still and video imagery, which would eventually support automated object recognition (AOR) and semantic lookup of images in large databases or high-volume OBC-format datastreams. Example applications include querying journalistic archives, scientific or medical imaging, surveillance image processing and target tracking, as well as compression of video for transmission over the Internet. Analysis emphasizes time and space complexity, as well as sources of reconstruction error in decompressed imagery.
Correlated Statistical Uncertainties in Coded-Aperture Imaging
Fleenor, Matthew C; Blackston, Matthew A; Ziock, Klaus-Peter
2014-01-01
In nuclear security applications, coded-aperture imagers provide the opportunity for a wealth of information regarding the attributes of both the radioactive and non-radioactive components of the objects being imaged. However, for optimum benefit to the community, spatial attributes need to be determined in a quantitative and statistically meaningful manner. To address the deficiency of quantifiable errors in coded-aperture imaging, we present uncertainty matrices containing covariance terms between image pixels for MURA mask patterns. We calculated these correlated uncertainties as functions of variation in mask rank, mask pattern over-sampling, and whether or not anti-mask data are included. Utilizing simulated point source data, we found that correlations (and inverse correlations) arose when two or more image pixels were summed. Furthermore, we found that the presence of correlations (and their inverses) was heightened by the process of over-sampling, while correlations were suppressed by the inclusion of anti-mask data and with increased mask rank. As an application of this result, we explore how statistics-based alarming in nuclear security is impacted.
Optimization of a coded aperture coherent scatter spectral imaging system for medical imaging
NASA Astrophysics Data System (ADS)
Greenberg, Joel A.; Lakshmanan, Manu N.; Brady, David J.; Kapadia, Anuj J.
2015-03-01
Coherent scatter X-ray imaging is a technique that provides spatially-resolved information about the molecular structure of the material under investigation, yielding material-specific contrast that can aid medical diagnosis and inform treatment. In this study, we demonstrate a coherent-scatter imaging approach based on the use of coded apertures (known as coded aperture coherent scatter spectral imaging [1, 2]) that enables fast, dose-efficient, high-resolution scatter imaging of biologically-relevant materials. Specifically, we discuss how to optimize a coded aperture coherent scatter imaging system for a particular set of objects and materials, describe and characterize our experimental system, and use the system to demonstrate automated material detection in biological tissue.
Hyperspectral pixel classification from coded-aperture compressive imaging
NASA Astrophysics Data System (ADS)
Ramirez, Ana; Arce, Gonzalo R.; Sadler, Brian M.
2012-06-01
This paper describes a new approach and its associated theoretical performance guarantees for supervised hyperspectral image classification from compressive measurements obtained by a Coded Aperture Snapshot Spectral Imaging System (CASSI). In one snapshot, the two-dimensional focal plane array (FPA) in the CASSI system captures the coded and spectrally dispersed source field of a three-dimensional data cube. Multiple snapshots are used to construct a set of compressive spectral measurements. The proposed approach is based on the concept that each pixel in the hyper-spectral image lies in a low-dimensional subspace obtained from the training samples, and thus it can be represented as a sparse linear combination of vectors in the given subspace. The sparse vector representing the test pixel is then recovered from the set of compressive spectral measurements and it is used to determine the class label of the test pixel. The theoretical performance bounds of the classifier exploit the distance preservation condition satisfied by the multiple shot CASSI system and depend on the number of measurements collected, code aperture pattern, and similarity between spectral signatures in the dictionary. Simulation experiments illustrate the performance of the proposed classification approach.
Quantization table design revisited for image/video coding.
Yang, En-Hui; Sun, Chang; Meng, Jin
2014-11-01
Quantization table design is revisited for image/video coding where soft decision quantization (SDQ) is considered. Unlike conventional approaches, where quantization table design is bundled with a specific encoding method, we assume optimal SDQ encoding and design a quantization table for the purpose of reconstruction. Under this assumption, we model transform coefficients across different frequencies as independently distributed random sources and apply the Shannon lower bound to approximate the rate distortion function of each source. We then show that a quantization table can be optimized in a way that the resulting distortion complies with certain behavior. Guided by this new design principle, we propose an efficient statistical-model-based algorithm using the Laplacian model to design quantization tables for DCT-based image coding. When applied to standard JPEG encoding, it provides more than 1.5-dB performance gain in PSNR, with almost no extra burden on complexity. Compared with the state-of-the-art JPEG quantization table optimizer, the proposed algorithm offers an average 0.5-dB gain in PSNR with computational complexity reduced by a factor of more than 2000 when SDQ is OFF, and a 0.2-dB performance gain or more with 85% of the complexity reduced when SDQ is ON. Significant compression performance improvement is also seen when the algorithm is applied to other image coding systems proposed in the literature. PMID:25248184
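The Shannon-lower-bound approximation for a Laplacian transform source can be sketched as follows (the per-frequency scales and the flat distortion target are hypothetical; the paper's table optimizer builds on bounds of this form, not on this exact snippet):

```python
import math

def laplacian_entropy_bits(b):
    """Differential entropy in bits of a Laplacian source with scale b."""
    return math.log2(2 * b * math.e)

def slb_rate(b, D):
    """Shannon lower bound on rate (bits) at MSE distortion D, clipped at 0."""
    return max(0.0,
               laplacian_entropy_bits(b)
               - 0.5 * math.log2(2 * math.pi * math.e * D))

# hypothetical Laplacian scales for DCT frequencies, DC-like to high-frequency
scales = [8.0, 4.0, 2.0, 1.0, 0.5]
target_D = 0.5                      # a flat per-coefficient distortion target
rates = [slb_rate(b, target_D) for b in scales]
```

Low-energy frequencies hit the zero-rate clip first, which is the behavior a designed quantization table encodes by assigning them coarser steps.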
Multiscale differential fractal feature with application to target detection
NASA Astrophysics Data System (ADS)
Shi, Zelin; Wei, Ying; Huang, Shabai
2004-07-01
A multiscale differential fractal feature of an image is proposed, together with a method for detecting small targets in complex natural clutter. Because the fractal features of man-made objects vary much more strongly with scale than those of natural backgrounds, fractal features computed at multiple scales offer an advantage over standard fractal dimensions for distinguishing man-made targets from natural clutter. Multiscale differential fractal dimensions are derived from a typical fractal model, and the standard covering-blanket method is improved and used to estimate fractal dimensions at multiple scales. The multiscale differential fractal feature is defined as the variation of the fractal dimension between two scales within a reasonable scale range. It makes the fractal signature of man-made objects stand out from natural clutter much better than the fractal dimension estimated by the standard covering-blanket method. Meanwhile, computation and storage are reduced greatly, to 4/M and 2/M of those of the standard covering-blanket method, respectively (M is the scale). A local gray-level histogram statistical method is then used for target detection in the multiscale differential fractal feature image. Experimental results indicate that the method is suitable for both land and sea backgrounds, works with both infrared and TV images, and can correctly detect small targets from a single frame. The method is fast and easy to implement.
Magnetohydrodynamics of fractal media
Tarasov, Vasily E.
2006-05-15
The fractal distribution of charged particles is considered. An example is a set of charged particles distributed over a fractal. Fractional integrals are used to describe the fractal distribution; these integrals are considered as approximations of integrals on fractals. Typical turbulent media could have a fractal structure, and the corresponding equations should be changed to include the fractal features of the media. The magnetohydrodynamics equations for fractal media are derived from the fractional generalization of the integral Maxwell equations and the integral hydrodynamics (balance) equations. Possible equilibrium states for these equations are considered.
Fractal vector optical fields.
Pan, Yue; Gao, Xu-Zhen; Cai, Meng-Qiang; Zhang, Guan-Lin; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian
2016-07-15
We introduce the concept of a fractal, which provides an alternative approach for flexibly engineering the optical fields and their focal fields. We propose, design, and create a new family of optical fields-fractal vector optical fields, which build a bridge between the fractal and vector optical fields. The fractal vector optical fields have polarization states exhibiting fractal geometry, and may also involve the phase and/or amplitude simultaneously. The results reveal that the focal fields exhibit self-similarity, and the hierarchy of the fractal has the "weeding" role. The fractal can be used to engineer the focal field. PMID:27420485
Fractal Structure in Human Cerebellum Measured by MRI
NASA Astrophysics Data System (ADS)
Zhang, Luduan; Yue, Guang; Brown, Robert; Liu, Jingzhi
2003-10-01
Fractal dimension has been used to quantify the structures of a wide range of objects in biology and medicine. We measured fractal dimension of human cerebellum (CB) in magnetic resonance images of 24 healthy young subjects (12 men, 12 women). CB images were resampled to a series of image sets with different three-dimensional resolutions. At each resolution, the skeleton of the CB white matter was obtained and the number of pixels belonging to the skeleton was determined. Fractal dimension of the CB skeleton was calculated using the box-counting method. The results indicated that the CB skeleton is a highly fractal structure, with a fractal dimension of 2.57+/-0.01. No significant difference in the CB fractal dimension was observed between men and women. Fractal dimension may serve as a quantitative index for structural complexity of the CB at its developmental, degenerative, or evolutionary stages.
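A minimal 2-D analogue of the box-counting procedure used in the study (the paper works on 3-D skeletons) can be sketched as follows; box sizes and inputs are illustrative:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16)):
    """Box-counting dimension of a square binary mask: fit the slope of
    log N(s) against log(1/s), where N(s) is the number of s-by-s boxes
    containing at least one foreground pixel."""
    n = mask.shape[0]
    counts = []
    for s in sizes:
        m = n - n % s                      # crop so boxes tile evenly
        boxes = mask[:m, :m].reshape(m // s, s, m // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A filled square yields a dimension near 2 and a straight line near 1, bracketing the 2.57 reported for the 3-D cerebellar skeleton.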
Wavelength and code-division multiplexing in diffuse optical imaging
NASA Astrophysics Data System (ADS)
Ascari, Luca; Berrettini, Gianluca; Iannaccone, Sandro; Giacalone, Matteo; Contini, Davide; Spinelli, Lorenzo; Trivella, Maria Giovanna; L'Abbate, Antonio; Potí, Luca
2011-02-01
We recently applied time domain near infrared diffuse optical spectroscopy (TD-NIRS) to monitor hemodynamics of the cardiac wall (oxy- and deoxyhemoglobin concentration, saturation, oedema) on anesthetized swine models. Published results prove that the NIRS signal can provide information on myocardial hemodynamic parameters not obtainable with conventional diagnostic clinical tools [1]. Nevertheless, the high cost of equipment, acquisition length, and sensitivity to ambient light are factors limiting its clinical adoption. This paper introduces a novel approach, based on the use of wavelength and code division multiplexing, applicable to TD-NIRS as well as diffuse optical imaging systems (both topography and tomography). The approach, called WS-CDM (wavelength and space code division multiplexing), essentially consists of a double-stage intensity modulation of multiwavelength CW laser sources using orthogonal codes and their parallel correlation-based decoding after propagation in the tissue; it promises better signal-to-noise ratio (SNR), higher acquisition speed, robustness to ambient light, and lower costs compared to both conventional systems and the more recent spread-spectrum approach based on single modulation with pseudo-random bit sequences (PRBS) [2]. Parallel acquisition of several wavelengths and from several locations is achievable. TD-NIRS experimental results guided Matlab-based simulations aimed at correlating different coding sequences, code lengths, and spectrum spreading factors with the WS-CDM performance on such tissues (achievable SNR, acquisition and reconstruction speed, robustness to channel inequalization, ...). Simulation results and preliminary experimental validation confirm the significant improvements that WS-CDM could bring to diffuse optical imaging (not limited to cardiac functional imaging).
Coded-aperture Raman imaging for standoff explosive detection
NASA Astrophysics Data System (ADS)
McCain, Scott T.; Guenther, B. D.; Brady, David J.; Krishnamurthy, Kalyani; Willett, Rebecca
2012-06-01
This paper describes the design of a deep-UV Raman imaging spectrometer operating with an excitation wavelength of 228 nm. The designed system will provide the ability to detect explosives (both traditional military explosives and home-made explosives) from standoff distances of 1-10 meters with an interrogation area of 1 mm x 1 mm to 200 mm x 200 mm. This excitation wavelength provides resonant enhancement of many common explosives, no background fluorescence, and an enhanced cross-section due to the inverse wavelength scaling of Raman scattering. A coded-aperture spectrograph combined with compressive imaging algorithms will allow for wide-area interrogation with fast acquisition rates. Coded-aperture spectral imaging exploits the compressibility of hyperspectral data-cubes to greatly reduce the amount of acquired data needed to interrogate an area. The resultant systems are able to cover wider areas much faster than traditional push-broom and tunable filter systems. The full system design will be presented along with initial data from the instrument. Estimates for area scanning rates and chemical sensitivity will be presented. The system components include a solid-state deep-UV laser operating at 228 nm, a spectrograph consisting of well-corrected refractive imaging optics and a reflective grating, an intensified solar-blind CCD camera, and a high-efficiency collection optic.
Sparse-Coding-Based Computed Tomography Image Reconstruction
Yoon, Gang-Joon
2013-01-01
Computed tomography (CT) is a popular type of medical imaging that generates images of the internal structure of an object based on projection scans of the object from several angles. There are numerous methods to reconstruct the original shape of the target object from scans, but they are still dependent on the number of angles and iterations. To overcome the drawbacks of iterative reconstruction approaches such as the algebraic reconstruction technique (ART), while keeping the recovery only slightly affected by random noise (a small ℓ2-norm error) and by a limited number of projection scans (a small ℓ1-norm error), we propose a medical image reconstruction methodology based on the properties of sparse coding. Sparse coding is a very powerful matrix factorization method in which each pixel is represented as a linear combination of a small number of basis vectors. PMID:23576898
Two-Sided Coded Aperture Imaging Without a Detector Plane
Ziock, Klaus-Peter; Cunningham, Mark F; Fabris, Lorenzo
2009-01-01
We introduce a novel design for a two-sided, coded-aperture, gamma-ray imager suitable for use in standoff detection of orphan radioactive sources. The design is an extension of an active-mask imager that would have three active planes of detector material, a central plane acting as the detector for two (active) coded-aperture mask planes, one on either side of the detector plane. In the new design the central plane is removed and the mask on the left (right) serves as the detector plane for the mask on the right (left). This design reduces the size, mass, complexity, and cost of the overall instrument. In addition, if one has fully position-sensitive detectors, then one can use the two planes as a classic Compton camera. This enhances the instrument's sensitivity at higher energies where the coded-aperture efficiency is decreased by mask penetration. A plausible design for the system is found and explored with Monte Carlo simulations.
Block-based conditional entropy coding for medical image compression
NASA Astrophysics Data System (ADS)
Bharath Kumar, Sriperumbudur V.; Nagaraj, Nithin; Mukhopadhyay, Sudipta; Xu, Xiaofeng
2003-05-01
In this paper, we propose a block-based conditional entropy coding scheme for medical image compression using the 2-D integer Haar wavelet transform. The main motivation to pursue conditional entropy coding is that the first-order conditional entropy is theoretically always lower than the first- and second-order entropies. We propose a sub-optimal scan order and an optimum block size to perform conditional entropy coding for various modalities. We also propose that a similar scheme can be used to obtain a sub-optimal scan order and an optimum block size for other wavelets. The proposed approach is motivated by a desire to perform better than JPEG2000 in terms of compression ratio. We hint towards developing a block-based conditional entropy coder, which has the potential to perform better than JPEG2000. Though we do not present a coder that achieves the first-order conditional entropy, a conditional adaptive arithmetic coder would come arbitrarily close to the theoretical conditional entropy. All the results in this paper are based on medical image data sets of various bit-depths and modalities.
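The claim that the first-order conditional entropy cannot exceed the first-order entropy is easy to check empirically. This sketch (a textbook estimate, not the paper's coder) conditions each pixel on its left neighbour:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector (zeros ignored)."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def cond_entropy_left(img):
    """First-order entropy H(X) of pixels and conditional entropy H(X|Y)
    given the left neighbour Y, estimated from one 8-bit image."""
    x = img[:, 1:].ravel().astype(np.intp)
    y = img[:, :-1].ravel().astype(np.intp)
    joint = np.zeros((256, 256))
    np.add.at(joint, (y, x), 1.0)          # joint histogram of (Y, X) pairs
    joint /= joint.sum()
    h_x = entropy(joint.sum(axis=0))       # marginal over X
    h_x_given_y = entropy(joint.ravel()) - entropy(joint.sum(axis=1))
    return h_x, h_x_given_y
```

For an image with constant rows, H(X|Y) drops to zero while H(X) stays at the full alphabet entropy, which is the gap the proposed coder tries to exploit.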
Iterative image coding with overcomplete complex wavelet transforms
NASA Astrophysics Data System (ADS)
Kingsbury, Nick G.; Reeves, Tanya
2003-06-01
Overcomplete transforms, such as the Dual-Tree Complex Wavelet Transform, can offer more flexible signal representations than critically-sampled transforms such as the Discrete Wavelet Transform. However, the process of selecting the optimal set of coefficients to code is much more difficult because many different sets of transform coefficients can represent the same decoded image. We show that large numbers of transform coefficients can be set to zero without much reconstruction quality loss by forcing compensatory changes in the remaining coefficients. We develop a system for achieving these coding aims of coefficient elimination and compensation, based on iterative projection of signals between the image domain and transform domain with a non-linear process (e.g., centre-clipping or quantization) applied in the transform domain. The convergence properties of such non-linear feedback loops are discussed and several types of non-linearity are proposed and analyzed. The compression performance of the overcomplete scheme is compared with that of the standard Discrete Wavelet Transform, both objectively and subjectively, and is found to offer advantages of up to 0.65 dB in PSNR and significant reduction in visibility of some types of coding artifacts.
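The elimination-and-compensation loop can be sketched with a small stand-in for the Dual-Tree Complex Wavelet Transform: a one-level undecimated Haar tight frame (redundancy 2) in 1-D, with hard-thresholding as the non-linearity. All names and parameters are illustrative, not the paper's system:

```python
import numpy as np

def analyze(x):
    """One-level undecimated Haar analysis (a tight frame, redundancy 2)."""
    lo = (x + np.roll(x, 1)) / 2.0
    hi = (x - np.roll(x, 1)) / 2.0
    return lo, hi

def synthesize(lo, hi):
    """Adjoint synthesis; synthesize(*analyze(x)) reproduces x exactly."""
    return (lo + np.roll(lo, -1) + hi - np.roll(hi, -1)) / 2.0

def iterative_coeff_selection(x, thresh, iters=30):
    """Iterate between signal and (overcomplete) transform domain,
    hard-thresholding detail coefficients, so surviving coefficients
    compensate for the eliminated ones."""
    y = x.copy()
    for _ in range(iters):
        lo, hi = analyze(y)
        hi[np.abs(hi) < thresh] = 0.0      # eliminate small coefficients
        y = synthesize(lo, hi)             # project back to the signal domain
    return y, lo, hi
```

Because the frame is redundant, re-analysis after synthesis changes the coefficients, so the loop is genuinely iterative, which is the point of the paper's feedback scheme.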
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted via a memoryless noisy channel. An iterative algorithm is presented for the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed for a channel without errors.
Mossotti, Victor G.; Eldeeb, A. Raouf; Oscarson, Robert
1998-01-01
MORPH-I is a set of C-language computer programs for the IBM PC and compatible microcomputers. The programs in MORPH-I are used for the fractal analysis of scanning electron microscope and electron microprobe images of pore profiles exposed in cross-section. The program isolates and traces the cross-sectional profiles of exposed pores and computes the Richardson fractal dimension for each pore. Other programs in the set provide for image calibration, display, and statistical analysis of the computed dimensions for highly complex porous materials. Requirements: IBM PC or compatible; minimum 640 K RAM; math coprocessor; SVGA graphics board providing mode 103 display.
Optical image encryption based on real-valued coding and subtracting with the help of QR code
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng
2015-08-01
A novel optical image encryption scheme based on real-valued coding and subtracting is proposed with the help of a quick response (QR) code. In the encryption process, the original image to be encoded is first transformed into the corresponding QR code, and then the QR code is encoded into two phase-only masks (POMs) by using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone-collected results show that the method is feasible and has strong tolerance to noise, phase difference, and the ratio between the intensities of the two decryption light beams.
Some Important Observations Concerning Human Visual Image Coding
NASA Astrophysics Data System (ADS)
Overington, Ian
1986-05-01
During some 20 years of research into thresholds of visual performance, we have had to explore deeply the developing knowledge in physiology, neurophysiology and, to a lesser extent, the anatomy of primate vision. Over the last few years, as interest in computer vision has grown, it has become clear to us that a number of aspects of image processing and coding by human vision are very simple yet powerful, but appear to have been largely overlooked or misrepresented in the classical computer vision literature. The paper discusses some important aspects of early visual processing. It then endeavours to demonstrate some of the simple yet powerful coding procedures which we believe are, or may be, used by human vision and which may be applied directly to computer vision.
An adaptive algorithm for motion compensated color image coding
NASA Technical Reports Server (NTRS)
Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming
1987-01-01
This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.
Fractals in art and nature: why do we like them?
NASA Astrophysics Data System (ADS)
Spehar, Branka; Taylor, Richard P.
2013-03-01
Fractals have experienced considerable success in quantifying the visual complexity exhibited by many natural patterns, and continue to capture the imagination of scientists and artists alike. Fractal patterns have also been noted for their aesthetic appeal, a suggestion further reinforced by the discovery that the poured patterns of the American abstract painter Jackson Pollock are also fractal, together with the findings that many forms of art resemble natural scenes in showing scale-invariant, fractal-like properties. While some have suggested that fractal-like patterns are inherently pleasing because they resemble natural patterns and scenes, the relation between the visual characteristics of fractals and their aesthetic appeal remains unclear. Motivated by our previous findings that humans display a consistent preference for a certain range of fractal dimension across fractal images of various types, we turn to scale-specific processing of visual information to understand this relationship. Whereas our previous preference studies focused on fractal images consisting of black shapes on white backgrounds, here we extend our investigations to include grayscale images in which the intensity variations exhibit scale invariance. This scale invariance is generated using a 1/f frequency distribution and can be tuned by varying the slope of the rotationally averaged Fourier amplitude spectrum. Thresholding the intensity of these images generates black and white fractals with equivalent scaling properties to the original grayscale images, allowing a direct comparison of preferences for grayscale and black and white fractals. We found no significant differences in preferences between the two groups of fractals. For both sets of images, the visual preference peaked for images with amplitude spectrum slopes from 1.25 to 1.5, thus confirming and extending the previously observed relationship between fractal characteristics of images and visual preference.
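Grayscale images with a tunable 1/f amplitude-spectrum slope, of the kind described above, can be synthesized by shaping random phases in the Fourier domain and thresholded to obtain the black-and-white counterparts. The size, slope, and seed below are illustrative, not the study's settings:

```python
import numpy as np

def spectral_fractal(n=128, slope=1.5, seed=0):
    """Grayscale scale-invariant noise with a 1/f**slope amplitude spectrum,
    built from random phases (imaginary residue discarded), scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fy, fx)
    f[0, 0] = 1.0                              # avoid division by zero at DC
    amp = f ** -slope
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    img = np.real(np.fft.ifft2(amp * phase))
    return (img - img.min()) / (img.max() - img.min())

gray = spectral_fractal()
bw = gray > np.median(gray)                    # threshold -> black/white fractal
```

Thresholding at the median, as here, splits the image into equal black and white areas while preserving the scaling of the intensity pattern.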
HD Photo: a new image coding technology for digital photography
NASA Astrophysics Data System (ADS)
Srinivasan, Sridhar; Tu, Chengjie; Regunathan, Shankar L.; Sullivan, Gary J.
2007-09-01
This paper introduces the HD Photo coding technology developed by Microsoft Corporation. The storage format for this technology is now under consideration in the ITU-T/ISO/IEC JPEG committee as a candidate for standardization under the name JPEG XR. The technology was developed to address end-to-end digital imaging application requirements, particularly including the needs of digital photography. HD Photo includes features such as good compression capability, high dynamic range support, high image quality capability, lossless coding support, full-format 4:4:4 color sampling, simple thumbnail extraction, embedded bitstream scalability of resolution and fidelity, and degradation-free compressed domain support of key manipulations such as cropping, flipping and rotation. HD Photo has been designed to optimize image quality and compression efficiency while also enabling low-complexity encoding and decoding implementations. To ensure low complexity for implementations, the design features have been incorporated in a way that not only minimizes the computational requirements of the individual components (including consideration of such aspects as memory footprint, cache effects, and parallelization opportunities) but results in a self-consistent design that maximizes the commonality of functional processing components.
Block-based embedded color image and video coding
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Pearlman, William A.; Islam, Asad
2004-01-01
Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders such as EZW, SPIHT, and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best-known color coders, while the perceptual quality of reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving-picture-based coding system called Motion-SPECK with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000 while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK, such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as High Quality Digital Video Recording Systems, Internet Video and Medical Imaging.
Simulation model of the fractal patterns in ionic conducting polymer films
NASA Astrophysics Data System (ADS)
Amir, Shahizat; Mohamed, Nor; Hashim Ali, Siti
2010-02-01
Normally, polymer electrolyte membranes are prepared and studied for applications in electrochemical devices. In this work, polymer electrolyte membranes have been used as the media to culture fractals. In order to simulate the growth patterns and stages of the fractals, a model has been identified based on Brownian motion theory. A computer code has been developed for the model to simulate and visualize the fractal growth. This program has been successful in simulating the growth of the fractals and in calculating the fractal dimension of each of the simulated fractal patterns. The fractal dimensions of the simulated fractals are comparable with the values obtained for the original fractals observed in the polymer electrolyte membrane, indicating that the model developed in the present work conforms acceptably with the original fractals.
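A minimal sketch of the Brownian-motion growth model described above, in the form of on-lattice diffusion-limited aggregation with illustrative lattice size and particle count rather than the authors' settings:

```python
import numpy as np

def dla_growth(particles=120, seed=1):
    """Diffusion-limited aggregation: Brownian walkers launched just outside
    the cluster stick on contact, growing a branched fractal pattern."""
    rng = np.random.default_rng(seed)
    n = 81
    c = n // 2
    grid = np.zeros((n, n), dtype=bool)
    grid[c, c] = True                          # seed particle at the centre
    r_max = 1                                  # current cluster radius
    steps = ((-1, 0), (1, 0), (0, -1), (0, 1))
    for _ in range(particles):
        while True:
            theta = 2.0 * np.pi * rng.random()
            y = c + int(round((r_max + 2) * np.sin(theta)))
            x = c + int(round((r_max + 2) * np.cos(theta)))
            while 0 < y < n - 1 and 0 < x < n - 1 and np.hypot(y - c, x - c) < r_max + 8:
                if grid[y - 1:y + 2, x - 1:x + 2].any():   # touch -> stick
                    grid[y, x] = True
                    r_max = max(r_max, int(np.ceil(np.hypot(y - c, x - c))))
                    break
                dy, dx = steps[rng.integers(4)]
                y += dy
                x += dx
            else:
                continue                       # walker strayed: relaunch it
            break
    return grid
```

The resulting boolean grid can be fed to a box-counting routine to estimate the fractal dimension, mirroring the comparison made in the paper.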
Coded-aperture Compton camera for gamma-ray imaging
NASA Astrophysics Data System (ADS)
Farber, Aaron M.
This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.
Coded aperture imaging with self-supporting uniformly redundant arrays
Fenimore, Edward E.
1983-01-01
A self-supporting uniformly redundant array pattern for coded aperture imaging. The present invention utilizes holes which are an integer times smaller in each direction than holes in conventional URA patterns. A balanced correlation function is generated where holes are represented by 1's, nonholes are represented by -1's, and supporting area is represented by 0's. The self-supporting array can be used for low-energy applications where substrates would greatly reduce throughput. The balanced correlation response function for the self-supporting array pattern provides an accurate representation of the source of nonfocusable radiation.
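The balanced-correlation idea can be sketched in one dimension with a quadratic-residue URA (p ≡ 3 mod 4): correlating the shadowgram of a point source with the 1/-1 decoding array returns a delta function with zero sidelobes. This is a simplified 1-D analogue, not the patented self-supporting pattern:

```python
import numpy as np

def ura_1d(p=19):
    """1-D uniformly redundant array from quadratic residues (p = 3 mod 4):
    a[i] = 1 for i = 0 or i a quadratic residue mod p, else 0."""
    qr = {(i * i) % p for i in range(1, p)}
    a = np.array([1] + [1 if i in qr else 0 for i in range(1, p)])
    g = 2 * a - 1            # balanced decoding array: hole -> +1, nonhole -> -1
    return a, g

def encode_decode(scene, a, g):
    """Shadowgram = cyclic convolution of scene with aperture;
    reconstruction = cyclic correlation of shadowgram with decoder."""
    p = len(a)
    shadow = np.array([sum(scene[(i - j) % p] * a[j] for j in range(p)) for i in range(p)])
    recon = np.array([sum(shadow[(i + j) % p] * g[j] for j in range(p)) for i in range(p)])
    return recon
```

For p = 19, a point source reconstructs to a single spike of height (p + 1)/2 = 10 with exactly zero sidelobes, which is the defining property of the balanced correlation.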
Recursive time-varying filter banks for subband image coding
NASA Technical Reports Server (NTRS)
Smith, Mark J. T.; Chung, Wilson C.
1992-01-01
Filter banks and wavelet decompositions that employ recursive filters have been considered previously and are recognized for their efficiency in partitioning the frequency spectrum. This paper presents an analysis of a new infinite impulse response (IIR) filter bank in which these computationally efficient filters may be changed adaptively in response to the input. The filter bank is presented and discussed in the context of finite-support signals with the intended application in subband image coding. In the absence of quantization errors, exact reconstruction can be achieved and by the proper choice of an adaptation scheme, it is shown that IIR time-varying filter banks can yield improvement over conventional ones.
Phase transfer function based method to alleviate image artifacts in wavefront coding imaging system
NASA Astrophysics Data System (ADS)
Mo, Xutao; Wang, Jinjiang
2013-09-01
The wavefront coding technique can extend the depth of field (DOF) of an incoherent imaging system. Several rectangular separable phase masks (cubic, exponential, logarithmic, sinusoidal, rational, etc.) have been proposed and discussed, because they can extend the DOF up to ten times that of an ordinary imaging system. However, the restored images are damaged by artifacts, which usually come from the difference between the non-linear phase transfer function (PTF) used in the image restoration filter and the PTF of the real imaging condition. In order to alleviate the image artifacts in imaging systems with wavefront coding, an optimization model based on the PTF is proposed to make the PTF invariant to defocus. Thereafter, an image restoration filter based on the average PTF over the designed depth of field is introduced along with the PTF-based optimization. The combination of the proposed optimization and image restoration can alleviate the artifacts, as confirmed by an imaging simulation of a spoke target. The cubic phase mask (CPM) and exponential phase mask (EPM) are discussed as examples.
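The defocus insensitivity that motivates the cubic phase mask can be checked with a 1-D pupil-function toy model (normalized units; the mask strength and defocus values are illustrative, not the paper's design):

```python
import numpy as np

def otf_1d(alpha, defocus, n=256):
    """OTF of a 1-D pupil exp(j(alpha*x^3 + defocus*x^2)); the angle of the
    OTF is the phase transfer function (PTF), its magnitude the MTF."""
    x = np.linspace(-1.0, 1.0, n)
    pupil = np.exp(1j * (alpha * x**3 + defocus * x**2))
    psf = np.abs(np.fft.fft(pupil, 8 * n))**2      # incoherent PSF
    otf = np.fft.fft(psf)
    return otf / otf[0]

def defocus_sensitivity(alpha, band=200):
    """Mean low-frequency MTF change between in-focus and defocused states."""
    m0 = np.abs(otf_1d(alpha, 0.0))[:band]
    m1 = np.abs(otf_1d(alpha, 10.0))[:band]
    return float(np.mean(np.abs(m0 - m1)))
```

A strong cubic term makes the MTF (and, to a lesser extent, the PTF) far less sensitive to defocus than the uncoded pupil, which is exactly the residual PTF variation the paper's optimization targets.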
Categorizing biomedicine images using novel image features and sparse coding representation
2013-01-01
Background Images embedded in biomedical publications carry rich information that often concisely summarizes key hypotheses adopted, methods employed, or results obtained in a published study. Therefore, they offer valuable clues for understanding the main content of a biomedical publication. Prior studies have pointed out the potential of mining images embedded in biomedical publications for automatically understanding and retrieving such images' associated source documents. Within the broad area of biomedical image processing, categorizing biomedical images is a fundamental step for building many advanced image analysis, retrieval, and mining applications. Similar to any automatic categorization effort, discriminative image features can provide the most crucial aid in the process. Method We observe that many images embedded in biomedical publications carry versatile annotation text. Based on the locations of and the spatial relationships between these text elements in an image, we propose some novel image features for image categorization, which quantitatively characterize the spatial positions and distributions of text elements inside a biomedical image. We further adopt a sparse coding representation (SCR) based technique to categorize images embedded in biomedical publications by leveraging our newly proposed image features. Results We randomly selected 990 images in JPG format for use in our experiments, where 310 images were used as training samples and the rest were used as the testing cases. We first segmented the 310 sample images following our proposed procedure. This step produced a total of 1035 sub-images. We then manually labeled all these sub-images according to the two-level hierarchical image taxonomy proposed by [1]. Among our annotation results, 316 are microscopy images, 126 are gel electrophoresis images, 135 are line charts, 156 are bar charts, 52 are spot charts, 25 are tables, 70 are flow charts, and the remaining 155 images are
A segmentation-based lossless image coding method for high-resolution medical image compression.
Shen, L; Rangayyan, R M
1997-06-01
Lossless compression techniques are essential in the archival and communication of medical images. In this paper, a new segmentation-based lossless image coding (SLIC) method is proposed, which is based on a simple but efficient region-growing procedure. The embedded region-growing procedure produces an adaptive scanning pattern for the image with the help of a discontinuity index map that requires very few bits. Along with this scanning pattern, an error image data part with a very small dynamic range is generated. Both the error image data and the discontinuity index map data parts are then encoded by the Joint Bi-level Image Experts Group (JBIG) method. The SLIC method resulted in, on average, lossless compression to about 1.6 b/pixel from 8 b, and to about 2.9 b/pixel from 10 b, with a database of ten high-resolution digitized chest and breast images. In comparison with direct coding by JBIG, Joint Photographic Experts Group (JPEG), hierarchical interpolation (HINT), and two-dimensional Burg prediction plus Huffman error coding methods, the SLIC method performed better by 4% to 28% on the database used. PMID:9184892
Chaos, Fractals, and Polynomials.
ERIC Educational Resources Information Center
Tylee, J. Louis; Tylee, Thomas B.
1996-01-01
Discusses chaos theory; linear algebraic equations and the numerical solution of polynomials, including the use of the Newton-Raphson technique to find polynomial roots; fractals; search region and coordinate systems; convergence; and generating color fractals on a computer. (LRW)
Predictive coding of depth images across multiple views
NASA Astrophysics Data System (ADS)
Morvan, Yannick; Farin, Dirk; de With, Peter H. N.
2007-02-01
A 3D video stream is typically obtained from a set of synchronized cameras that simultaneously capture the same scene (multiview video). This technology enables applications such as free-viewpoint video, which allows the viewer to select a preferred viewpoint, or 3D TV, where the depth of the scene can be perceived using a special display. Because the user-selected view does not always correspond to a camera position, it may be necessary to synthesize a virtual camera view. To synthesize such a virtual view, we have adopted a depth image-based rendering technique that employs one depth map for each camera. Consequently, remote rendering of the 3D video requires a compression technique for both texture and depth data. This paper presents a predictive-coding algorithm for the compression of depth images across multiple views. The presented algorithm provides (a) improved coding efficiency for depth images over block-based motion-compensation encoders (H.264), and (b) random access to different views for fast rendering. The proposed depth-prediction technique works by synthesizing/computing the depth of 3D points based on the reference depth image. The attractiveness of the depth-prediction algorithm is that it avoids an independent transmission of depth for each view, while simplifying view interpolation by synthesizing depth images for arbitrary viewpoints. We present experimental results for several multiview depth sequences, showing a quality improvement of up to 1.8 dB compared with H.264 compression.
Biometric iris image acquisition system with wavefront coding technology
NASA Astrophysics Data System (ADS)
Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao
2013-09-01
Biometric signatures for identity recognition have been practiced for centuries. Broadly, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry, and iris texture; the other depends on individual behavioral attributes, such as signature, keystroke, voice, and gait style. Among these features, iris recognition is one of the most attractive approaches due to its inherent randomness, texture stability over a lifetime, high entropy density, and non-invasive acquisition. While the performance of iris recognition on high-quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially when the data are acquired in challenging conditions, such as long working distance, dynamic movement of subjects, and uncontrolled illumination. There are three main contributions in this paper. First, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Second, the irradiance constraints were derived from the optical conservation theorem. Through the relationship between the subject and the detector, we could estimate the limit on working distance when the camera lens and CCD sensor were known. The working distance is set to 3 m in our system, with a pupil diameter of 86 mm and CCD irradiance of 0.3 mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization, and post signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over a 3 m working distance with 400 mm focal length and aperture F/6.3 optics. The simulation result as well as the experiment validates the proposed code
ERIC Educational Resources Information Center
Fraboni, Michael; Moller, Trisha
2008-01-01
Fractal geometry offers teachers great flexibility: It can be adapted to the level of the audience or to time constraints. Although easily explained, fractal geometry leads to rich and interesting mathematical complexities. In this article, the authors describe fractal geometry, explain the process of iteration, and provide a sample exercise.…
Imaging in scattering media using correlation image sensors and sparse convolutional coding.
Heide, Felix; Xiao, Lei; Kolb, Andreas; Hullin, Matthias B; Heidrich, Wolfgang
2014-10-20
Correlation image sensors have recently become popular low-cost devices for time-of-flight, or range, cameras. They usually operate under the assumption of a single light path contributing to each pixel. We show that a more thorough analysis of the data from correlation sensors can be used to analyze light transport in much more complex environments, including applications for imaging through scattering and turbid media. The key to our method is a new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors. This approach is enabled by an analysis of sparsity in complex transient images, and by the derivation of a new physically motivated model for transient images with drastically improved sparsity. PMID:25401666
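As a minimal stand-in for the paper's convolutional sparse coding, plain matching pursuit over a toy dictionary illustrates the greedy sparse-recovery idea: repeatedly pick the atom most correlated with the residual. The dictionary and signal below are made up for illustration; the paper's convolutional, physically-motivated formulation is substantially more elaborate.

```python
def matching_pursuit(D, x, k):
    """Greedily pick k atoms from a unit-norm dictionary D to approximate x;
    returns a list of (atom_index, coefficient) pairs."""
    residual = list(x)
    picks = []
    for _ in range(k):
        # Score each atom by its inner product with the current residual.
        scores = [sum(d_i * r_i for d_i, r_i in zip(d, residual)) for d in D]
        j = max(range(len(D)), key=lambda i: abs(scores[i]))
        picks.append((j, scores[j]))
        # Subtract the selected atom's contribution from the residual.
        residual = [r - scores[j] * d for r, d in zip(residual, D[j])]
    return picks

# Orthonormal toy dictionary: two unit atoms in R^2.
D = [[1.0, 0.0], [0.0, 1.0]]
picks = matching_pursuit(D, [3.0, 0.5], 2)
```

With an orthonormal dictionary the greedy picks recover the exact coefficients; real transient dictionaries are overcomplete, which is where the sparsity model earns its keep.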
Hexagonal uniformly redundant arrays for coded-aperture imaging
NASA Technical Reports Server (NTRS)
Finger, M. H.; Prince, T. A.
1985-01-01
Uniformly redundant arrays (URAs) are used in coded-aperture imaging, a technique for forming images without mirrors or lenses. URAs constructed on hexagonal lattices are outlined. Details are presented for the construction of a special class of URAs, the skew-Hadamard URAs, which have the following properties: (1) they are nearly half open and half closed; (2) they are antisymmetric upon rotation by 180 deg, except for the central cell and its repetitions. Some of the skew-Hadamard URAs constructed on a hexagonal lattice have additional symmetries. These special URAs, which have a hexagonal unit pattern and are antisymmetric upon rotation by 60 deg, are called hexagonal uniformly redundant arrays (HURAs). The HURAs are particularly suited to gamma-ray imaging in high-background situations, where the best sensitivity is obtained with a half-open, half-closed mask. The hexagonal symmetry of an HURA is also more appropriate for a round position-sensitive detector or a close-packed array of detectors than a rectangular symmetry.
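The flat-sidelobe autocorrelation that makes URAs decodable can be seen in one dimension with a quadratic-residue (Legendre) mask, which for a prime p with p mod 4 = 3 forms a difference set: nearly half the cells are open, the zero-lag autocorrelation equals the number of open cells, and every nonzero lag gives the same constant. This is only a flat 1-D sketch of the principle, not the paper's hexagonal skew-Hadamard construction; function names are illustrative.

```python
def quadratic_residue_ura(p):
    """Binary mask of length p: cell i is open (1) iff i is a nonzero
    quadratic residue mod p. Requires p prime with p % 4 == 3."""
    residues = {(i * i) % p for i in range(1, p)}
    return [1 if i in residues else 0 for i in range(p)]

def periodic_autocorrelation(mask, tau):
    """Periodic autocorrelation of the 0/1 mask at lag tau."""
    n = len(mask)
    return sum(mask[i] * mask[(i + tau) % n] for i in range(n))

mask = quadratic_residue_ura(11)                 # 5 of 11 cells open
peak = periodic_autocorrelation(mask, 0)         # = number of open cells
sidelobes = {periodic_autocorrelation(mask, t) for t in range(1, 11)}
# All nonzero lags give the same value, so correlation decoding of the
# shadowgram reproduces the source with a flat background.
```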
Fractal-based wideband invisibility cloak
NASA Astrophysics Data System (ADS)
Cohen, Nathan; Okoro, Obinna; Earle, Dan; Salkind, Phil; Unger, Barry; Yen, Sean; McHugh, Daniel; Polterzycki, Stefan; Shelman-Cohen, A. J.
2015-03-01
A wideband invisibility cloak (IC) at microwave frequencies is described. Using fractal resonators in closely spaced (sub-wavelength) arrays as a minimal number of cylindrical layers (rings), the IC demonstrates that it is physically possible to attain a `see through' cloaking device with: (a) wideband coverage; (b) simple and attainable fabrication; (c) high-fidelity emulation of the free path; (d) minimal side scattering; (e) a near absence of shadowing in the scattering. Although not a practical device, this fractal-enabled technology demonstrator opens up new opportunities for diverted-image (DI) technology and the use of fractals in wideband optical, infrared, and microwave applications.
Characterizing Hyperspectral Imagery (AVIRIS) Using Fractal Technique
NASA Technical Reports Server (NTRS)
Qiu, Hong-Lie; Lam, Nina Siu-Ngan; Quattrochi, Dale
1997-01-01
With the rapid increase in hyperspectral data acquired by various experimental hyperspectral imaging sensors, it is necessary to develop efficient and innovative tools to handle and analyze these data. The objective of this study is to seek effective spatial analytical tools for summarizing the spatial patterns of hyperspectral imaging data. In this paper, we (1) examine how the fractal dimension D changes across spectral bands of hyperspectral imaging data and (2) determine the relationships between fractal dimension and image content. It has been documented that fractal dimension changes across spectral bands for Landsat-TM data and that its value (D) is largely a function of the complexity of the landscape under study. The newly available hyperspectral imaging data, such as those from the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) with its 224 bands, cover a wider spectral range with a much finer spectral resolution. Our preliminary result shows that fractal dimension values of AVIRIS scenes from the Santa Monica Mountains in California vary between 2.25 and 2.99. However, high fractal dimension values (D > 2.8) are found only in spectral bands with a high noise level, while bands with good image quality have a fairly stable dimension value (D = 2.5 - 2.6). This suggests that D can also be used as a summary statistic to represent the image quality or content of spectral bands.
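The counting idea behind such estimates can be sketched with plain dyadic box counting on a binary image: count occupied boxes N(s) at each box size s and fit the slope of log N against log(1/s). This is a minimal sketch under our own assumptions (a square image whose side is a power of two, illustrative names); estimators for grayscale band surfaces, like those used for AVIRIS, differ in detail.

```python
import numpy as np

def box_count_dimension(img):
    """Estimate fractal dimension of a square boolean image by fitting
    log N(s) versus log(1/s) over dyadic box sizes s."""
    n = img.shape[0]
    sizes, counts = [], []
    s = n // 2
    while s >= 1:
        # Tile the image with s-by-s boxes and count boxes that contain
        # at least one foreground pixel.
        blocks = img.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(blocks.sum())
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(np.array(counts, dtype=float)), 1)
    return slope

# Sanity check: a completely filled square should come out near D = 2.
img = np.ones((64, 64), dtype=bool)
d = box_count_dimension(img)
```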
Adaptation of a neutron diffraction detector to coded aperture imaging
Vanier, P.E.; Forman, L.
1997-02-01
A coded aperture neutron imaging system developed at Brookhaven National Laboratory (BNL) has demonstrated that it is possible to record not only a flux of thermal neutrons at some position, but also the directions from which they came. This realization of an idea that defied the conventional wisdom has provided a device never before available to the nuclear physics community. A number of potential applications have been explored, including (1) counting warheads on a bus or in a storage area, (2) investigating inhomogeneities in drums of Pu-containing waste to facilitate non-destructive assays, (3) monitoring of vaults containing accountable materials, (4) detection of buried land mines, and (5) locating solid deposits of nuclear material held up in gaseous diffusion plants.
Image coding using entropy-constrained residual vector quantization
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
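The residual-quantization structure itself can be sketched in a few lines: each stage quantizes whatever error the previous stages left behind, and the decoder sums the selected codewords. The tiny hand-made codebooks below are purely illustrative; the paper's codebooks are trained, and its entropy constraint and optimality conditions are not modeled here.

```python
def nearest(codebook, v):
    """Index of the codeword closest to v in squared Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], v)))

def rvq_encode(stages, v):
    """Encode v as one index per stage; each stage quantizes the residual
    left by the stages before it."""
    indices, residual = [], list(v)
    for codebook in stages:
        i = nearest(codebook, residual)
        indices.append(i)
        residual = [r - c for r, c in zip(residual, codebook[i])]
    return indices

def rvq_decode(stages, indices):
    """Reconstruction is the sum of the selected stage codewords."""
    out = [0.0] * len(stages[0][0])
    for codebook, i in zip(stages, indices):
        out = [o + c for o, c in zip(out, codebook[i])]
    return out

stage1 = [[0.0, 0.0], [10.0, 10.0]]                          # coarse stage
stage2 = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]    # refinement
idx = rvq_encode([stage1, stage2], [11.0, 10.2])
rec = rvq_decode([stage1, stage2], idx)
```

Because each stage only has to cover the residual of the previous one, the per-stage codebooks stay small, which is the source of RVQ's memory and computation advantage over a single monolithic codebook.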
Spatial super-resolution in code aperture spectral imaging
NASA Astrophysics Data System (ADS)
Arguello, Henry; Rueda, Hoover F.; Arce, Gonzalo R.
2012-06-01
The Coded Aperture Snapshot Spectral Imaging (CASSI) system senses the spectral information of a scene using the underlying concepts of compressive sensing (CS). The random projections in CASSI are localized such that each measurement contains spectral information only from a small spatial region of the data cube. The goal of this paper is to translate high-resolution hyperspectral scenes into compressed signals measured by a low-resolution detector. Spatial super-resolution is attained as an inverse problem from a set of low-resolution coded measurements. The proposed system not only offers significant savings in size, weight, and power, but also in cost, as low-resolution detectors can be used. The proposed system can be efficiently exploited in the IR region, where the cost of detectors increases rapidly with resolution. Simulations of the proposed system show an improvement of up to 4 dB in PSNR. Results also show that the PSNR of the reconstructed data cubes approaches that attained with high-resolution detectors, at the cost of using additional measurements.
NASA Astrophysics Data System (ADS)
Tozian, Cynthia
A coded aperture plate was employed on a conventional gamma camera for 3D single photon emission computed tomography (SPECT) imaging of small animal models. The coded aperture design was selected to improve the spatial resolution and decrease the minimum detectable activity (MDA) required to image plaque formation in the APoE (apolipoprotein E) gene-deficient mouse model when compared to conventional SPECT techniques. The pattern tested was a no-two-holes-touching (NTHT) modified uniformly redundant array (MURA) having 1,920 pinholes. The number of pinholes combined with the thin sintered tungsten plate was designed to increase the efficiency of the imaging modality over conventional gamma camera imaging methods while improving spatial resolution and reducing noise in the image reconstruction. The MDA required to image the vulnerable plaque in a human cardiac-torso mathematical phantom was simulated with a Monte Carlo code and evaluated to determine the optimum plate thickness by a receiver operating characteristic (ROC) analysis yielding the lowest possible MDA and the highest area under the curve (AUC). A partial 3D expectation maximization (EM) reconstruction was developed to improve the signal-to-noise ratio (SNR), dynamic range, and spatial resolution over the linear correlation method of reconstruction. This improvement was evaluated by imaging a mini hot-rod phantom, simulating the dynamic range, and performing a bone scan of the C-57 control mouse. Results of the experimental and simulated data as well as other plate designs were analyzed for use as a small animal and potentially human cardiac imaging modality for a radiopharmaceutical developed at Bristol-Myers Squibb Medical Imaging Company, North Billerica, MA, for diagnosing vulnerable plaques. If left untreated, these plaques may rupture, causing sudden, unexpected coronary occlusion and death. The results of this research indicated that imaging and reconstructing with this new partial 3D algorithm improved
Multiplexing of encrypted data using fractal masks.
Barrera, John F; Tebaldi, Myrian; Amaya, Dafne; Furlan, Walter D; Monsoriu, Juan A; Bolognini, Néstor; Torroba, Roberto
2012-07-15
In this Letter, we present to the best of our knowledge a new all-optical technique for multiple-image encryption and multiplexing, based on fractal encrypting masks. The optical architecture is a joint transform correlator. The multiplexed encrypted data are stored in a photorefractive crystal. The fractal parameters of the key can be easily tuned to lead to a multiplexing operation without cross talk effects. Experimental results that support the potential of the method are presented. PMID:22825170