Chaos-based encryption for fractal image coding
NASA Astrophysics Data System (ADS)
Yuen, Ching-Hung; Wong, Kwok-Wo
2012-01-01
A chaos-based cryptosystem for fractal image coding is proposed. The Rényi chaotic map is employed to determine the order of processing the range blocks and to generate the keystream for masking the encoded sequence. Compared with the standard approach of fractal image coding followed by the Advanced Encryption Standard, our scheme offers a higher sensitivity to both plaintext and ciphertext at a comparable operating efficiency. The keystream generated by the Rényi chaotic map passes the randomness tests specified by the United States National Institute of Standards and Technology, and the proposed scheme is highly sensitive to the key.
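The masking step described above can be sketched in a few lines. This is a minimal illustration only: the logistic map stands in for the paper's Rényi chaotic map, and the seed, parameter, and burn-in values are arbitrary assumptions rather than the authors' choices.

```python
import numpy as np

def logistic_keystream(x0, r, n, burn_in=100):
    """Generate n keystream bytes from a chaotic map.

    The paper uses the Renyi chaotic map; the logistic map here is a
    stand-in to illustrate the masking idea, not the authors' map.
    """
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256   # quantize the state to one byte
    return out

def mask(data, key=(0.3141592, 3.99)):
    """XOR-mask an encoded byte sequence; applying it twice recovers it."""
    ks = logistic_keystream(key[0], key[1], len(data))
    return bytes(b ^ int(k) for b, k in zip(data, ks))

encoded = b"fractal code words"
cipher = mask(encoded)
assert mask(cipher) == encoded        # XOR masking is an involution
```

Because the keystream depends on the full trajectory of the map, a tiny change in the key `(x0, r)` yields an entirely different masking sequence, which is the source of the key sensitivity claimed above.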
NASA Technical Reports Server (NTRS)
Barnsley, Michael F.; Sloan, Alan D.
1989-01-01
Fractals are geometric or data structures which do not simplify under magnification. Fractal image compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reaching high compression ratios for large data streams related to images. The high compression ratios are attained at the cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.
Fractal images induce fractal pupil dilations and constrictions.
Moon, P; Muday, J; Raynor, S; Schirillo, J; Boydston, C; Fairbanks, M S; Taylor, R P
2014-09-01
Fractals are self-similar structures or patterns that repeat at increasingly fine magnifications. Research has revealed fractal patterns in many natural and physiological processes. This article investigates pupillary size over time to determine whether its oscillations demonstrate a fractal pattern. We predict that pupil size over time will fluctuate in a fractal manner, which may be due to either the fractal neuronal structure or the fractal properties of the image viewed. We present evidence that low-complexity fractal patterns underlie pupillary oscillations as subjects view spatial fractal patterns. We also present evidence implicating the importance of the autonomic nervous system in these patterns. Using the variational method of the box-counting procedure, we demonstrate that low-complexity fractal patterns are found in changes in pupil size over time, measured in millimeters (mm), and our data suggest that these pupillary oscillation patterns do not depend on the fractal properties of the image viewed. PMID:24978815
ERIC Educational Resources Information Center
Jurgens, Hartmut; And Others
1990-01-01
The production and application of images based on fractal geometry are described. Discussed are fractal language groups, fractal image coding, and fractal dialects. Implications for these applications of geometry to mathematics education are suggested. (CW)
Chang, H T; Kuo, C J
1998-03-10
An optical parallel architecture for the random-iteration algorithm to decode a fractal image by use of iterated-function system (IFS) codes is proposed. The code value is first converted into transmittance in film or a spatial light modulator in the optical part of the system. With an optical-to-electrical converter, electrical-to-optical converter, and some electronic circuits for addition and delay, we can perform the contractive affine transformation (CAT) denoted in the IFS codes. In the proposed decoding architecture all CATs generate points (image pixels) in parallel, and these points are then joined for display purposes. Therefore the decoding speed is improved greatly compared with existing serial-decoding architectures. In addition, an error and stability analysis that considers nonperfect elements is presented for the proposed optical system. Finally, simulation results are given to validate the proposed architecture. PMID:18268718
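The random-iteration (chaos game) decoding that this optical architecture parallelizes can be sketched in software. The IFS coefficients below are the widely published Barnsley fern codes, used only as a stock example; the serial point-by-point loop is exactly what the optical system replaces with parallel CATs.

```python
import random

# IFS codes for the Barnsley fern: (a, b, c, d, e, f, probability).
# Each row is one contractive affine transform (x,y) -> (ax+by+e, cx+dy+f).
FERN = [
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def chaos_game(ifs, n_points, seed=0):
    """Decode an IFS by the random-iteration (chaos game) algorithm:
    repeatedly apply a randomly chosen CAT and record the orbit."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n_points):
        r = rng.random()
        acc = 0.0
        for a, b, c, d, e, f, p in ifs:
            acc += p
            if r <= acc:
                x, y = a * x + b * y + e, c * x + d * y + f
                break
        pts.append((x, y))
    return pts

pts = chaos_game(FERN, 20000)
xs = [p[0] for p in pts]
ys = [p[1] for p in pts]
# The attractor of this IFS lies roughly in [-2.2, 2.7] x [0, 10].
```

The orbit converges onto the attractor regardless of the starting point, which is why small errors in the codes perturb the decoded image only slightly.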
A Novel Fractal Coding Method Based on M-J Sets
Sun, Yuanyuan; Xu, Rudan; Chen, Lina; Kong, Ruiqing; Hu, Xiaopeng
2014-01-01
In this paper, we present a novel fractal coding method with the block classification scheme based on a shared domain block pool. In our method, the domain block pool is called dictionary and is constructed from fractal Julia sets. The image is encoded by searching the best matching domain block with the same BTC (Block Truncation Coding) value in the dictionary. The experimental results show that the scheme is competent both in encoding speed and in reconstruction quality. Particularly for large images, the proposed method can avoid excessive growth of the computational complexity compared with the traditional fractal coding algorithm. PMID:25010686
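For reference, a plain BTC encoder for one block looks like the sketch below. The paper matches domain blocks by their BTC value; this sketch only shows how the BTC bit-plane and the two moment-preserving reconstruction levels are computed, with an arbitrary example block.

```python
import numpy as np

def btc_code(block):
    """Block Truncation Coding of one block.

    BTC keeps the block mean and standard deviation: pixels above the
    mean map to level b, the rest to level a, chosen so that the first
    two sample moments of the block are preserved.
    """
    mu = block.mean()
    sigma = block.std()
    bitmap = block > mu
    q = int(bitmap.sum())             # number of '1' pixels
    m = block.size
    if q in (0, m):                   # flat block: both levels equal mean
        return bitmap, mu, mu
    a = mu - sigma * np.sqrt(q / (m - q))
    b = mu + sigma * np.sqrt((m - q) / q)
    return bitmap, a, b

block = np.array([[2., 9.], [8., 1.]])
bitmap, a, b = btc_code(block)
recon = np.where(bitmap, b, a)
assert abs(recon.mean() - block.mean()) < 1e-9   # first moment preserved
```

Two blocks with the same bit-plane and levels are "BTC-equal", which is the kind of inexpensive signature a dictionary lookup can index on.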
ERIC Educational Resources Information Center
Osler, Thomas J.
1999-01-01
Because fractal images are by nature very complex, it can be inspiring and instructive to create the code in the classroom and watch the fractal image evolve as the user slowly changes some important parameter or zooms in and out of the image. Uses a programming language that permits the user to store and retrieve a graphics image as a disk file.…
Multispectral image fusion based on fractal features
NASA Astrophysics Data System (ADS)
Tian, Jie; Chen, Jie; Zhang, Chunhua
2004-01-01
Imaging sensors are an indispensable part of detection and recognition systems, widely used in surveillance, navigation, control, and guidance. However, different imaging sensors rely on different imaging mechanisms, operate within different ranges of the spectrum, perform different functions, and have different operating requirements. It is therefore impractical to accomplish detection or recognition with a single imaging sensor under varying circumstances, backgrounds, and targets. Multi-sensor image fusion has emerged as an important route to solving this problem, and image fusion has become one of the main technical approaches to detecting and recognizing objects in images. Because some loss of information is unavoidable during the fusion process, a central concern of image fusion is how to preserve useful information to the greatest extent possible; that is, fusion schemes should be designed to avoid the loss of useful information and to preserve the features helpful to detection. In consideration of these issues, and of the fact that most detection problems amount to distinguishing man-made objects from natural backgrounds, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aimed at recognizing battlefield targets against complicated backgrounds. In this algorithm, the source images are first orthogonally decomposed according to wavelet transform theory, and fractal-based detection is then applied to each decomposed image. At this step, natural background and man-made targets are distinguished by means of fractal models that imitate natural objects well. Special fusion operators are employed when fusing areas that contain man-made targets, so that useful information is preserved and target features are accentuated. The final fused image is reconstructed from the…
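The decompose-fuse-reconstruct pipeline can be illustrated with a single-level Haar transform. This is a generic sketch: the max-magnitude detail rule stands in for the paper's special fractal-guided fusion operators, and the Haar basis stands in for whatever wavelet the authors actually used.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet decomposition (orthonormal)."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 2
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 2
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 2
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 2
    return a, h, v, d

def ihaar2d(a, h, v, d):
    """Exact inverse of haar2d."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = (a + h + v + d) / 2
    out[0::2, 1::2] = (a - h + v - d) / 2
    out[1::2, 0::2] = (a + h - v - d) / 2
    out[1::2, 1::2] = (a - h - v + d) / 2
    return out

def fuse(img1, img2):
    """Average the approximation bands, keep the larger-magnitude detail
    coefficients -- a generic wavelet fusion rule, not the paper's
    fractal-guided operators."""
    A1, H1, V1, D1 = haar2d(img1)
    A2, H2, V2, D2 = haar2d(img2)
    pick = lambda c1, c2: np.where(np.abs(c1) >= np.abs(c2), c1, c2)
    return ihaar2d((A1 + A2) / 2, pick(H1, H2), pick(V1, V2), pick(D1, D2))
```

Keeping the strongest detail coefficients is what lets edges from either source survive into the fused image; the paper replaces this generic rule with operators applied only in regions flagged as man-made by the fractal models.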
Pyramidal fractal dimension for high resolution images
NASA Astrophysics Data System (ADS)
Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut
2016-07-01
Fractal analysis (FA) should be able to yield reliable and fast results for high-resolution digital images to be applicable in fields that require immediate outcomes. Triggered by an efficient implementation of FA for binary images, we present three new approaches for fractal dimension (D) estimation of images that utilize image pyramids, namely, the pyramid triangular prism, the pyramid gradient, and the pyramid differences method (PTPM, PGM, PDM). We evaluated the performance of the three new and five standard techniques when applied to images with sizes up to 8192 × 8192 pixels. By using artificial fractal images created by three different generator models as ground truth, we determined the scale ranges with minimum deviations between estimation and theory. All pyramidal methods (PM) resulted in reasonable D values for images of all generator models. Especially for images with sizes ≥ 1024 × 1024 pixels, the PMs are superior to the investigated standard approaches in terms of accuracy and computation time. A measure of the ability to differentiate images with different intrinsic D values showed not only that the PMs are well suited for all investigated image sizes, and preferable to standard methods especially for larger images, but also that the results of standard D estimation techniques are strongly influenced by image size. The fastest results were obtained with the PDM and PGM, followed by the PTPM. The standard methods that performed best in terms of absolute D values were orders of magnitude slower than the PMs. In conclusion, the new PMs yield high-quality results in short computation times and are therefore eligible methods for fast FA of high-resolution images.
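The baseline the pyramid methods are compared against is box counting, which fits the slope of log N(s) versus log(1/s). A minimal sketch for binary images (a standard technique, not the authors' pyramidal implementation), validated on a Sierpinski carpet whose theoretical dimension is log 8 / log 3 ≈ 1.893:

```python
import numpy as np

def box_count_dimension(img, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box counting:
    the least-squares slope of log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        h, w = img.shape
        # count boxes of side s containing at least one foreground pixel
        boxes = img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def carpet(level):
    """Sierpinski carpet as a boolean image (theoretical D = log8/log3)."""
    img = np.ones((1, 1), dtype=bool)
    for _ in range(level):
        z = np.zeros_like(img)
        img = np.block([[img, img, img], [img, z, img], [img, img, img]])
    return img

d = box_count_dimension(carpet(5), sizes=(1, 3, 9, 27, 81, 243))
assert abs(d - np.log(8) / np.log(3)) < 0.05
```

Every box size requires a pass over the image, which is why naive box counting scales poorly to 8192 × 8192 inputs and why pyramid-based reformulations pay off.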
Fractality of pulsatile flow in speckle images
NASA Astrophysics Data System (ADS)
Nemati, M.; Kenjeres, S.; Urbach, H. P.; Bhattacharya, N.
2016-05-01
The scattering of coherent light from a system with underlying flow can be used to yield essential information about the dynamics of the process. In the case of pulsatile flow, there is a rapid change in the properties of the speckle images. This can be studied using the standard laser speckle contrast and also the fractality of the images. In this paper, we report the results of experiments performed to study pulsatile flow with speckle images under different experimental configurations, to verify the robustness of the techniques for applications. In order to study flow under various levels of complexity, the measurements were done for three in vitro phantoms and two in vivo situations. The pumping mechanisms varied, ranging from mechanical pumps to the human heart in the in vivo case. The speckle images were analyzed using the techniques of fractal dimension and speckle contrast analysis, and the results of these techniques for the various experimental scenarios were compared. The fractal dimension is the more sensitive measure for capturing the complexity of the signal, though it was observed to be extremely sensitive to the properties of the scattering medium as well, and, unlike speckle contrast, it cannot recover the signal for thicker diffusers.
A Lossless hybrid wavelet-fractal compression for welding radiographic images.
Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud
2016-01-01
In this work a lossless hybrid wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that the proposed scheme achieves an infinite peak signal-to-noise ratio (PSNR) at a higher compression ratio than a typical lossless method. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm. PMID:26890900
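The lossy-plus-residual construction is what guarantees the infinite PSNR: any lossy coder becomes lossless once the residual is stored alongside it (entropy-coded with Huffman in the paper, kept raw here). A sketch with a trivial quantizer standing in for the wavelet-fractal coder:

```python
import numpy as np

def lossless_encode(img, lossy_codec):
    """Lossless scheme built from any lossy coder plus a residual.
    The paper Huffman-codes the residual; here it is kept raw."""
    approx = lossy_codec(img)          # lossy reconstruction
    residual = img.astype(np.int16) - approx.astype(np.int16)
    return approx, residual

def lossless_decode(approx, residual):
    """Adding the residual back recovers the original exactly."""
    return (approx.astype(np.int16) + residual).astype(np.uint8)

# Stand-in lossy codec: quantize to 16 grey levels. This is NOT the
# authors' wavelet-fractal coder, just enough to show exact recovery.
quantize = lambda im: (im // 16) * 16

img = (np.arange(64, dtype=np.uint8).reshape(8, 8) * 3) % 256
approx, residual = lossless_encode(img, quantize)
assert np.array_equal(lossless_decode(approx, residual), img)  # infinite PSNR
```

The better the lossy stage, the smaller and more compressible the residual, which is why pairing a strong wavelet-fractal coder with Huffman coding of the residual can beat Huffman coding of the raw image.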
Estimating fractal dimension of medical images
NASA Astrophysics Data System (ADS)
Penn, Alan I.; Loew, Murray H.
1996-04-01
Box counting (BC) is widely used to estimate the fractal dimension (fd) of medical images on the basis of a finite set of pixel data. The fd is then used as a feature to discriminate between healthy and unhealthy conditions. We show that BC is ineffective when used on small data sets and give examples of published studies in which researchers have obtained contradictory and flawed results by using BC to estimate the fd of data-limited medical images. We present a new method for estimating fd of data-limited medical images. In the new method, fractal interpolation functions (FIFs) are used to generate self-affine models of the underlying image; each model, upon discretization, approximates the original data points. The fd of each FIF is analytically evaluated. The mean of the fds of the FIFs is the estimate of the fd of the original data. The standard deviation of the fds of the FIFs is a confidence measure of the estimate. The goodness-of-fit of the discretized models to the original data is a measure of self-affinity of the original data. In a test case, the new method generated a stable estimate of fd of a rib edge in a standard chest x-ray; box counting failed to generate a meaningful estimate of the same image.
Fractal image compression: A resolution independent representation for imagery
NASA Technical Reports Server (NTRS)
Sloan, Alan D.
1993-01-01
A deterministic fractal is an image which has low information content and no inherent scale. Because of their low information content, deterministic fractals can be described with small data sets. They can be displayed at high resolution since they are not bound by an inherent scale. A remarkable consequence follows: fractal images can be encoded at very high compression ratios. A fern, for example, can be encoded in less than 50 bytes and yet displayed at increasing resolutions with additional levels of detail appearing. The Fractal Transform was discovered in 1988 by Michael F. Barnsley. It is the basis for a new image compression scheme which was initially developed by myself and Michael Barnsley at Iterated Systems. The Fractal Transform effectively solves the problem of finding a fractal which approximates a digital 'real world image'.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
1999-01-01
Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada, taken four months apart, shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high-elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
Smitha, K A; Gupta, A K; Jayasree, R S
2015-09-01
Gliomas, heterogeneous tumors originating from glial cells, generally exhibit varied grades and are difficult to differentiate using conventional MR imaging techniques. Although this differentiation is crucial for disease prognosis and treatment, even advanced MR imaging techniques fail to provide sufficient discriminative power to separate malignant tumors from benign ones. A powerful image processing technique applied to these imaging modalities is expected to provide better differentiation. The present study focuses on fractal analysis of fluid-attenuated inversion recovery MR images for the differentiation of glioma. For this, we considered the two most important parameters of fractal analysis: fractal dimension and lacunarity. While the fractal dimension assesses the complexity of a fractal object, lacunarity indicates the empty space and the degree of inhomogeneity within it. The box counting method, with the preprocessing steps of binarization, dilation, and outlining, was used to obtain the fractal dimension and lacunarity in glioma. Statistical analyses such as one-way analysis of variance and receiver operating characteristic (ROC) curve analysis were used to compare the means and to determine the discriminative sensitivity of the results. It was found that the lacunarity of low- and high-grade gliomas varies significantly. ROC curve analysis between low- and high-grade glioma yielded 70.3% sensitivity and 66.7% specificity for fractal dimension, and 70.3% sensitivity and 88.9% specificity for lacunarity. The study observes that fractal dimension and lacunarity increase with the grade of glioma and that lacunarity is especially helpful in identifying the most malignant grades. PMID:26305773
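Lacunarity is commonly computed with the gliding-box method; the compact sketch below illustrates the quantity on toy binary patterns (the study's preprocessing of binarization, dilation, and outlining is omitted):

```python
import numpy as np

def lacunarity(img, box):
    """Gliding-box lacunarity of a binary image at one box size:
    Lambda = var/mean**2 + 1 of the box-mass distribution."""
    h, w = img.shape
    masses = [
        img[i:i + box, j:j + box].sum()
        for i in range(h - box + 1)
        for j in range(w - box + 1)
    ]
    m = np.asarray(masses, dtype=float)
    return m.var() / m.mean() ** 2 + 1.0

# A clumped pattern is more lacunar (gappier) than a uniform one.
uniform = np.indices((16, 16)).sum(axis=0) % 2    # checkerboard
clumped = np.zeros((16, 16), dtype=int)
clumped[:8, :8] = 1                                # one solid clump
assert lacunarity(clumped, 4) > lacunarity(uniform, 4)
```

A translation-invariant texture like the checkerboard gives the minimum value of 1 (every box holds the same mass), while clumped, gappy structure inflates the variance of box masses, which is exactly the inhomogeneity the study exploits to grade gliomas.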
Kepler mission exoplanet transit data analysis using fractal imaging
NASA Astrophysics Data System (ADS)
Dehipawala, S.; Tremberger, G.; Majid, Y.; Holden, T.; Lieberman, D.; Cheung, T.
2012-10-01
The Kepler mission is designed to survey a fist-sized patch of the sky within the Milky Way galaxy for the discovery of exoplanets, with emphasis on near Earth-size exoplanets in or near the habitable zone. The Kepler space telescope would detect the brightness fluctuation of a host star and extract periodic dimming in the lightcurve caused by exoplanets that cross in front of their host star. The photometric data of a host star could be interpreted as an image where fractal imaging would be applicable. Fractal analysis could elucidate the incomplete data limitation posed by the data integration window. The fractal dimension difference between the lower and upper halves of the image could be used to identify anomalies associated with transits and stellar activity as the buried signals are expected to be in the lower half of such an image. Using an image fractal dimension resolution of 0.04 and defining the whole image fractal dimension as the Chi-square expected value of the fractal dimension, a p-value can be computed and used to establish a numerical threshold for decision making that may be useful in further studies of lightcurves of stars with candidate exoplanets. Similar fractal dimension difference approaches would be applicable to the study of photometric time series data via the Higuchi method. The correlated randomness of the brightness data series could be used to support inferences based on image fractal dimension differences. Fractal compression techniques could be used to transform a lightcurve image, resulting in a new image with a new fractal dimension value, but this method has been found to be ineffective for images with high information capacity. The three studied criteria could be used together to further constrain the Kepler list of candidate lightcurves of stars with possible exoplanets that may be planned for ground-based telescope confirmation.
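The Higuchi method mentioned above estimates the fractal dimension of a time series from the scaling of curve length with the subsampling interval k. A standard sketch (the kmax parameter and the test series are arbitrary choices, not Kepler data):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D series: the slope of
    log L(k) versus log(1/k), where L(k) is the mean normalized
    length of the k-subsampled curves."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):                       # k phase-shifted subsamples
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            # normalize for subsample length, then divide by k again
            Lk.append(dist * (n - 1) / ((len(idx) - 1) * k) / k)
        lengths.append(np.mean(Lk))
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)),
                          np.log(lengths), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.standard_normal(2000)        # white noise: D near 2
line = np.linspace(0.0, 1.0, 2000)       # smooth curve: D near 1
assert higuchi_fd(white) > 1.8
assert higuchi_fd(line) < 1.1
```

Applied to a lightcurve, a dimension near 1 indicates a smooth, correlated signal while values approaching 2 indicate noise-like behavior, so shifts in the Higuchi dimension can flag transit-like structure buried in the photometry.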
Fractal dimension of cerebral surfaces using magnetic resonance images
Majumdar, S.; Prasad, R.R.
1988-11-01
The calculation of the fractal dimension of the surface bounded by the grey matter in the normal human brain using axial, sagittal, and coronal cross-sectional magnetic resonance (MR) images is presented. The fractal dimension in this case is a measure of the convolutedness of this cerebral surface. It is proposed that the fractal dimension, a feature that may be extracted from MR images, may potentially be used for image analysis, quantitative tissue characterization, and as a feature to monitor and identify cerebral abnormalities and developmental changes.
NASA Astrophysics Data System (ADS)
Tremberger, George, Jr.; Flamholz, A.; Cheung, E.; Sullivan, R.; Subramaniam, R.; Schneider, P.; Brathwaite, G.; Boteju, J.; Marchese, P.; Lieberman, D.; Cheung, T.; Holden, Todd
2007-09-01
The absorption effect of the back surface boundary of a diffuse layer was studied via laser-generated reflection speckle patterns. The spatial speckle intensity provided by a laser beam was measured. The speckle data were analyzed in terms of fractal dimension (computed by NIH ImageJ software via the box counting fractal method) and weak localization theory based on Mie scattering. Bar code imaging was modeled as binary absorption contrast, and scanning resolution in the millimeter range was achieved for diffusive layers up to thirty transport mean free paths thick. Samples included alumina, porous glass and chicken tissue. Computer simulation was used to study the effect of the speckle spatial distribution, and observed fractal dimension differences were ascribed to variance-controlled speckle sizes. Fractal dimension suppressions were observed in samples with thicknesses around ten transport mean free paths. Computer simulation suggested a maximum fractal dimension of about 2 and that subtracting information could lower the fractal dimension. The fractal dimension was shown to be sensitive to sample thickness up to about fifteen transport mean free paths, and embedded objects which modified 20% or more of the effective thickness were shown to be detectable. The box counting fractal method was supplemented with the Higuchi data series fractal method, and application to architectural distortion mammograms was demonstrated. The use of fractals in diffusive analysis would provide a simple language for a dialog between optics experts and mammography radiologists, facilitating the application of laser diagnostics in tissues.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.
1998-01-01
Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and
Analysis on correlation imaging based on fractal interpolation
NASA Astrophysics Data System (ADS)
Li, Bailing; Zhang, Wenwen; Chen, Qian; Gu, Guohua
2015-10-01
A fractal interpolation algorithm is discussed in detail, and the statistical self-similarity characteristics of the light field are analyzed in a correlation experiment. For correlation imaging experiments under the condition of a low sampling frequency, an image analysis approach based on the fractal interpolation algorithm is proposed. This approach aims to improve the resolution of the original image, which contains few pixels, and to sharpen its fuzzy contour features. Using this method, a new model for the light field is established. For the case of different moments of the intensity in the receiving plane, a local field division is also established, and the iterated function system based on the experimental data set can then be obtained by choosing an appropriate compression ratio under a scientific error estimate. On the basis of the iterated function system, an explicit fractal interpolation function expression is given in this paper. The simulation results show that the correlation image reconstructed by fractal interpolation approximates the original image well, and the number of pixels after interpolation is significantly increased. This method effectively addresses the difficulty of image pixel deficiency and significantly improves the outlines of objects in the image. The deviation rate is adopted as a parameter to evaluate the effect of the algorithm objectively. To sum up, the fractal interpolation method proposed in this paper not only preserves the overall image but also increases the local information of the original image.
Blind Detection of Region Duplication Forgery Using Fractal Coding and Feature Matching.
Jenadeleh, Mohsen; Ebrahimi Moghaddam, Mohsen
2016-05-01
Digital image forgery detection is important because of the wide use of digital images in applications such as medical diagnosis, legal investigations, and entertainment. Copy-move forgery is one of the best-known techniques, in which a region of an image is duplicated elsewhere within the same image. Many existing copy-move detection algorithms cannot effectively perform blind detection of duplicated regions that are made by powerful image manipulation software like Photoshop. In this study, a new method is proposed for blind detection of manipulations in digital images based on modified fractal coding and feature vector matching. The proposed method not only detects typical copy-move forgery, but also finds multiple copied forgery regions in images that have been subjected to rotation, scaling, reflection, and mixtures of these postprocessing operations. The proposed method is robust against tampered images undergoing attacks such as Gaussian blurring, contrast scaling, and brightness adjustment. The experimental results demonstrated the validity and efficiency of the method. PMID:27122398
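The block-matching core of copy-move detection can be sketched with exact matching. The paper's contribution is precisely to replace raw pixel blocks with robust fractal-coding feature vectors so that rotated, scaled, or blurred copies still match, which this naive exact-match sketch cannot do; all block sizes and thresholds below are illustrative.

```python
import numpy as np
from collections import defaultdict

def detect_duplicates(img, block=8, stride=4, min_offset=16):
    """Naive exact copy-move detector: hash every block and report pairs
    of identical blocks separated by at least min_offset pixels."""
    seen = defaultdict(list)
    h, w = img.shape
    for i in range(0, h - block + 1, stride):
        for j in range(0, w - block + 1, stride):
            key = img[i:i + block, j:j + block].tobytes()
            seen[key].append((i, j))
    pairs = []
    for locs in seen.values():
        for (i1, j1) in locs:
            for (i2, j2) in locs:
                if abs(i1 - i2) + abs(j1 - j2) >= min_offset:
                    pairs.append(((i1, j1), (i2, j2)))
    return pairs

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
img[40:48, 40:48] = img[8:16, 8:16]        # forged duplicate region
assert ((8, 8), (40, 40)) in detect_duplicates(img)
```

Swapping the raw-byte key for a quantized, transform-invariant feature vector (as the fractal-coding approach does) turns this exact lookup into the robust matching the paper evaluates.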
Evaluation of Two Fractal Methods for Magnetogram Image Analysis
NASA Technical Reports Server (NTRS)
Stark, B.; Adams, M.; Hathaway, D. H.; Hagyard, M. J.
1997-01-01
Fractal and multifractal techniques have been applied to various types of solar data to study the fractal properties of sunspots as well as the distribution of photospheric magnetic fields and the role of random motions on the solar surface in this distribution. Other research includes the investigation of changes in the fractal dimension as an indicator for solar flares. Here we evaluate the efficacy of two methods for determining the fractal dimension of an image data set: the Differential Box Counting scheme and a new method, the Jaenisch scheme. To determine the sensitivity of the techniques to changes in image complexity, various types of constructed images are analyzed. In addition, we apply these methods to solar magnetogram data from Marshall Space Flight Center's vector magnetograph.
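The Differential Box Counting scheme evaluated here treats a grey-level image as a surface and stacks boxes of height s·G/M over each s × s cell. A sketch using a common floor-based variant of Sarkar and Chaudhuri's formulation (the scale choices are illustrative):

```python
import numpy as np

def dbc_dimension(img, sizes=(2, 4, 8, 16)):
    """Differential box counting for a grey-level image: per s x s cell,
    count the boxes spanned between the cell's min and max grey levels,
    then fit the slope of log N(s) versus log(1/s)."""
    M = img.shape[0]                  # assume a square image of side M
    G = int(img.max()) + 1            # number of grey levels
    counts = []
    for s in sizes:
        hprime = s * G / M            # box height in grey-level units
        total = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                cell = img[i:i + s, j:j + s]
                total += int(cell.max() // hprime) - int(cell.min() // hprime) + 1
        counts.append(total)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
rough = rng.integers(0, 256, (64, 64))     # noisy surface: D well above 2
smooth = np.full((64, 64), 128)            # flat surface: D equals 2
assert dbc_dimension(rough) > dbc_dimension(smooth)
assert 1.9 < dbc_dimension(smooth) < 2.1
```

A flat magnetogram region spans exactly one box per cell at every scale, giving a dimension of 2, while rough field structure inflates the count; this spread between 2 and 3 is what makes D usable as an image-complexity indicator.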
Liver ultrasound image classification by using fractal dimension of edge
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Bibicu, Dorin; Moraru, Luminita
2012-08-01
Medical ultrasound image edge detection is an important component of many segmentation applications, and hence it has been the subject of many studies in the literature. In this study, we have classified liver ultrasound (US) images by combining the Canny and Sobel edge detectors with fractal analysis in order to provide an indicator of US image roughness. We intend to provide a classification rule for focal liver lesions as cirrhotic liver, liver hemangioma, or healthy liver. The Canny and Sobel operators were used for edge detection. Fractal analysis was applied for texture analysis and classification of focal liver lesions according to the fractal dimension (FD), determined using the box counting method. To assess the performance and accuracy rate of the proposed method, the contrast-to-noise ratio (CNR) is analyzed.
Wideband Fractal Antennas for Holographic Imaging and Rectenna Applications
Bunch, Kyle J.; McMakin, Douglas L.; Sheen, David M.
2008-04-18
At Pacific Northwest National Laboratory, wideband antenna arrays have been successfully used to reconstruct three-dimensional images at microwave and millimeter-wave frequencies. Applications of this technology have included portal monitoring, through-wall imaging, and weapons detection. Fractal antennas have been shown to have wideband characteristics due to their self-similar nature (that is, their geometry is replicated at different scales). They further have advantages in providing good characteristics in a compact configuration. We discuss the application of fractal antennas for holographic imaging. Simulation results will be presented. Rectennas are a specific class of antennas in which a received signal drives a nonlinear junction and is retransmitted at either a harmonic frequency or a demodulated frequency. Applications include tagging and tracking objects with a uniquely-responding antenna. It is of interest to consider fractal rectenna because the self-similarity of fractal antennas tends to make them have similar resonance behavior at multiples of the primary resonance. Thus, fractal antennas can be suited for applications in which a signal is reradiated at a harmonic frequency. Simulations will be discussed with this application in mind.
Fractal image perception provides novel insights into hierarchical cognition.
Martins, M J; Fischmeister, F P; Puig-Waldmüller, E; Oh, J; Geissler, A; Robinson, S; Fitch, W T; Beisteiner, R
2014-08-01
Hierarchical structures play a central role in many aspects of human cognition, prominently including both language and music. In this study we addressed hierarchy in the visual domain, using a novel paradigm based on fractal images. Fractals are self-similar patterns generated by repeating the same simple rule at multiple hierarchical levels. Our hypothesis was that the brain uses different resources for processing hierarchies depending on whether it applies a "fractal" or a "non-fractal" cognitive strategy. We analyzed the neural circuits activated by these complex hierarchical patterns in an event-related fMRI study of 40 healthy subjects. Brain activation was compared across three different tasks: a similarity task, and two hierarchical tasks in which subjects were asked to recognize the repetition of a rule operating transformations either within an existing hierarchical level, or generating new hierarchical levels. Similar hierarchical images were generated by both rules and target images were identical. We found that when processing visual hierarchies, engagement in both hierarchical tasks activated the visual dorsal stream (occipito-parietal cortex, intraparietal sulcus and dorsolateral prefrontal cortex). In addition, the level-generating task specifically activated circuits related to the integration of spatial and categorical information, and with the integration of items in contexts (posterior cingulate cortex, retrosplenial cortex, and medial, ventral and anterior regions of temporal cortex). These findings provide interesting new clues about the cognitive mechanisms involved in the generation of new hierarchical levels as required for fractals. PMID:24699014
Fractal image analysis - Application to the topography of Oregon and synthetic images.
NASA Technical Reports Server (NTRS)
Huang, Jie; Turcotte, Donald L.
1990-01-01
Digitized topography for the state of Oregon has been used to obtain maps of fractal dimension and roughness amplitude. The roughness amplitude correlates well with variations in relief and is a promising parameter for the quantitative classification of landforms. The spatial variations in fractal dimension are low and show no clear correlation with different tectonic settings. For Oregon the mean fractal dimension from a two-dimensional spectral analysis is D = 2.586, and for a one-dimensional spectral analysis the mean fractal dimension is D = 1.487, which is close to the Brown noise value D = 1.5. Synthetic two-dimensional images have also been generated for a range of D values. For D = 2.6, the synthetic image has a mean one-dimensional spectral fractal dimension D = 1.58, which is consistent with the results for Oregon. This approach can be easily applied to any digitized image that obeys fractal statistics.
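The spectral route to D can be illustrated with a toy one-dimensional profile. The sketch below assumes the fractional-Brownian relation D = (5 - beta)/2 between the fractal dimension of a profile and the slope beta of its power spectrum P(f) ~ f^(-beta); the slow pure-Python DFT and the random-walk test signal are illustrative choices, not the paper's data or code:

```python
import cmath
import math
import random

def periodogram(x):
    """Squared DFT magnitudes at frequencies k = 1 .. n/2 - 1 (slow O(n^2) DFT)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(1, n // 2)]

def spectral_dimension(x):
    """Fit P(f) ~ f^(-beta) by least squares in log-log, return D = (5 - beta)/2."""
    p = periodogram(x)
    xs = [math.log(k + 1) for k in range(len(p))]
    ys = [math.log(v) for v in p]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    beta = -(sum((a - mx) * (b - my) for a, b in zip(xs, ys)) /
             sum((a - mx) ** 2 for a in xs))
    return (5.0 - beta) / 2.0

# Brown noise (a random walk) has beta near 2, hence D near 1.5,
# matching the one-dimensional Oregon result quoted above.
random.seed(0)
walk, s = [], 0.0
for _ in range(512):
    s += random.gauss(0, 1)
    walk.append(s)
d = spectral_dimension(walk)
```

The estimate scatters around 1.5 because individual periodogram ordinates are noisy; real analyses average or taper before fitting.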
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Quattrochi, Dale A.; Luvall, Jeffrey C.
1997-01-01
Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely scale-independent. Self-similarity is a property of curves or surfaces where each part is indistinguishable from the whole. The fractal dimension D of remote sensing data yields quantitative insight on the spatial complexity and information content contained within these data. Analyses of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). The forested scene behaves as one would expect: larger pixel sizes decrease the complexity of the image as individual clumps of trees are averaged into larger blocks. The increased complexity of the agricultural image with increasing pixel size results from the loss of homogeneous groups of pixels in the large fields to mixed pixels composed of varying combinations of NDVI values that correspond to roads and vegetation. The same process occurs in the urban image to some extent, but the lack of large, homogeneous areas in the high resolution NDVI image means the initially high D value is maintained as pixel size increases. The slope of the fractal dimension-resolution relationship provides indications of how image classification or feature identification will be affected by changes in sensor resolution.
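The resolution dependence described above can be reproduced by aggregating pixels, averaging k x k blocks to simulate coarser sensors, and re-running a fractal estimator at each step. A minimal sketch of the aggregation step (the NDVI-like values below are made up for illustration):

```python
def aggregate(img, k):
    """Average k x k blocks of pixels, simulating a coarser sensor."""
    h, w = len(img) // k, len(img[0]) // k
    return [[sum(img[y * k + i][x * k + j]
                 for i in range(k) for j in range(k)) / (k * k)
             for x in range(w)] for y in range(h)]

# An 8x8 stand-in for an NDVI scene, coarsened to double the pixel size.
ndvi = [[(3 * x + 5 * y) % 7 for x in range(8)] for y in range(8)]
coarse = aggregate(ndvi, 2)   # 4x4 image of 2x2 block means
```

Computing the fractal dimension of each aggregated image and plotting it against pixel size yields the dimension-resolution relationship whose slope the authors analyze.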
Confocal coded aperture imaging
Tobin, Jr., Kenneth William; Thomas, Jr., Clarence E.
2001-01-01
A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and, reconstructing the shadow image into a 3-dimensional image of the every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.
Using fractal compression scheme to embed a digital signature into an image
NASA Astrophysics Data System (ADS)
Puate, Joan; Jordan, Frederic D.
1997-01-01
With the increase in the number of digital networks and recording devices, digital images, especially still images, appear to be a material whose ownership is widely threatened due to the availability of simple, rapid and perfect means of duplication and distribution. It is in this context that several European projects are devoted to finding a technical solution which, as it applies to still images, introduces a code or watermark into the image data itself. This watermark should not only allow one to determine the owner of the image, but also respect its quality and be difficult to revoke. An additional requirement is that the code should be retrievable only by means of the protected information. In this paper, we propose a new scheme based on fractal coding and decoding. In general terms, a fractal coder exploits the spatial redundancy within the image by establishing a relationship between its different parts. We describe a way to use this relationship as a means of embedding a watermark. Tests have been performed in order to measure the robustness of the technique against JPEG conversion and low-pass filtering. In both cases, very promising results have been obtained.
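The relationship a fractal coder establishes between image parts, and how a bit can ride on it, can be sketched as follows: each range block is matched to its best-fitting downsampled domain block, and the watermark bit restricts the search to one of two domain pools, so a decoder can recover the bit by checking which pool the stored indices fall in. The block sizes, the parity-based pools, and the omission of contrast/brightness fitting and block isometries are all simplifying assumptions of this sketch, not the authors' scheme:

```python
def blocks(img, size, step):
    """Top-left corners of all size x size blocks on a `step` grid."""
    h, w = len(img), len(img[0])
    return [(y, x) for y in range(0, h - size + 1, step)
                   for x in range(0, w - size + 1, step)]

def downsample(img, y, x, size):
    """Shrink a (2*size x 2*size) domain block by averaging 2x2 neighbourhoods."""
    return [[(img[y + 2*i][x + 2*j] + img[y + 2*i + 1][x + 2*j] +
              img[y + 2*i][x + 2*j + 1] + img[y + 2*i + 1][x + 2*j + 1]) / 4.0
             for j in range(size)] for i in range(size)]

def mse(a, b):
    n = len(a) * len(a[0])
    return sum((a[i][j] - b[i][j]) ** 2
               for i in range(len(a)) for j in range(len(a[0]))) / n

def encode_with_bit(img, bit, rsize=4):
    """Match each range block against domains whose index parity equals `bit`."""
    domains = blocks(img, 2 * rsize, 2 * rsize)
    pool = [(k, d) for k, d in enumerate(domains) if k % 2 == bit]
    code = []
    for ry, rx in blocks(img, rsize, rsize):
        r = [[img[ry + i][rx + j] for j in range(rsize)] for i in range(rsize)]
        k, _ = min(pool, key=lambda kd: mse(r, downsample(img, kd[1][0],
                                                          kd[1][1], rsize)))
        code.append(k)
    return code

img = [[(7 * i + 13 * j) % 256 for j in range(16)] for i in range(16)]
code = encode_with_bit(img, 0)   # every stored domain index has parity 0
```

Because the restriction only narrows the search, image quality degrades gracefully, which is the property that makes fractal coding attractive as a watermark carrier.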
Fractal descriptors for discrimination of microscopy images of plant leaves
NASA Astrophysics Data System (ADS)
Silva, N. R.; Florindo, J. B.; Gómez, M. C.; Kolb, R. M.; Bruno, O. M.
2014-03-01
This study proposes the application of the fractal descriptors method to the discrimination of microscopy images of plant leaves. Fractal descriptors have been demonstrated to be a powerful discriminative method in image analysis, mainly for the discrimination of natural objects. In fact, these descriptors express the spatial arrangement of pixels inside the texture under different scales, and such arrangements are directly related to physical properties inherent to the material depicted in the image. Here, we employ the Bouligand-Minkowski descriptors. These are obtained by the dilation of a surface mapping the gray-level texture. The classification of the microscopy images is performed by the well-known Support Vector Machine (SVM) method and we compare the success rate with other texture analysis methods from the literature. The proposed method achieved a correctness rate of 89%, while the second best solution, the co-occurrence descriptors, yielded only 78%. This clear advantage of fractal descriptors demonstrates the potential of such an approach in the analysis of plant microscopy images.
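A toy version of the Bouligand-Minkowski construction makes the dilation idea concrete: the gray-level texture is lifted to a 3-D point cloud (x, y, gray), the cloud is dilated by spheres of growing radius r, and the dilation volumes V(r) form the descriptor vector fed to the classifier. The brute-force voxel count below is an illustrative assumption, far slower than the distance-transform implementations used in practice:

```python
def minkowski_descriptors(img, radii=(1, 2, 3)):
    """Dilation volumes V(r) of the (x, y, gray) point cloud, by brute force."""
    pts = [(x, y, img[y][x]) for y in range(len(img))
                             for x in range(len(img[0]))]
    rmax = max(radii)
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    zs = [p[2] for p in pts]
    desc = []
    for r in radii:
        vol = 0
        for x in range(min(xs) - rmax, max(xs) + rmax + 1):
            for y in range(min(ys) - rmax, max(ys) + rmax + 1):
                for z in range(min(zs) - rmax, max(zs) + rmax + 1):
                    # A voxel belongs to the dilation if any surface point
                    # lies within Euclidean distance r of it.
                    if any((x - a) ** 2 + (y - b) ** 2 + (z - c) ** 2 <= r * r
                           for a, b, c in pts):
                        vol += 1
        desc.append(vol)
    return desc

# A small synthetic gray-level patch; the descriptor is the V(r) sequence.
patch = [[(x * y) % 5 for x in range(6)] for y in range(6)]
d = minkowski_descriptors(patch)
```

How fast V(r) grows with r encodes the roughness of the texture surface, which is what the SVM then discriminates on.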
Bingham, Philip R; Santos-Villalobos, Hector J
2011-01-01
Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects, followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.
Beyond maximum entropy: Fractal Pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
Multi-modal multi-fractal boundary encoding in object-based image compression
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.
2006-08-01
The compact representation of region boundary contours is key to efficient representation and compression of digital images using object-based compression (OBC). In OBC, regions are coded in terms of their texture, color, and shape. Given the appropriate representation scheme, high compression ratios (e.g., 500:1 <= CR <= 2,500:1) have been reported for selected images. Because a region boundary is often represented with more parameters than the region contents, it is crucial to maximize the boundary compression ratio by reducing these parameters. Researchers have elsewhere shown that cherished boundary encoding techniques such as chain coding, simplicial complexes, or quadtrees, to name but a few, are inadequate to support OBC within the aforementioned CR range. Several existing compression standards such as MPEG support efficient boundary representation, but do not necessarily support OBC at CR >= 500:1 . Siddiqui et al. exploited concepts from fractal geometry to encode and compress region boundaries based on fractal dimension, reporting CR = 286.6:1 in one test. However, Siddiqui's algorithm is costly and appears to contain ambiguities. In this paper, we first discuss fractal dimension and OBC compression ratio, then enhance Siddiqui's algorithm, achieving significantly higher CR for a wide variety of boundary types. In particular, our algorithm smoothes a region boundary B, then extracts its inflection or control points P, which are compactly represented. The fractal dimension D is computed locally for the detrended B. By appropriate subsampling, one efficiently segments disjoint clusters of D values subject to a preselected tolerance, thereby partitioning B into a multifractal. This is accomplished using four possible compression modes. In contrast, previous researchers have characterized boundary variance with one fractal dimension, thereby producing a monofractal. At its most complex, the compressed representation contains P, a spatial marker, and a D value
Advanced fractal approach for unsupervised classification of SAR images
NASA Astrophysics Data System (ADS)
Pant, Triloki; Singh, Dharmendra; Srivastava, Tanuja
2010-06-01
Unsupervised classification of Synthetic Aperture Radar (SAR) images is the alternative approach when no or minimal a priori information about the image is available. Therefore, an attempt has been made in the present paper to develop an unsupervised classification scheme for SAR images based on textural information. For the extraction of textural features two properties are used, viz. the fractal dimension D and Moran's I. Using these indices an algorithm is proposed for contextual classification of SAR images. The novelty of the algorithm is that it implements the textural information available in a SAR image with the help of two texture measures, viz. D and I. For estimation of D, the Two Dimensional Variation Method (2DVM) has been revised and implemented, and its performance is compared with another method, i.e., the Triangular Prism Surface Area Method (TPSAM). It is also necessary to check the classification accuracy for various window sizes and optimize the window size for best classification. This exercise has been carried out to determine the effect of window size on classification accuracy. The algorithm is applied on four SAR images of the Hardwar region, India and the classification accuracy has been computed. A comparison of the proposed algorithm using both fractal dimension estimation methods with the K-Means algorithm is discussed. The maximum overall classification accuracy with K-Means comes out to 53.26%, whereas the overall classification accuracy with the proposed algorithm is 66.16% for TPSAM and 61.26% for 2DVM.
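Of the two texture measures, Moran's I is the simpler to state. A minimal sketch with rook (4-neighbour) contiguity weights, which is one common choice and an assumption here rather than the paper's exact weighting:

```python
def morans_i(img):
    """Moran's I spatial autocorrelation with rook contiguity weights."""
    h, w = len(img), len(img[0])
    n = h * w
    mean = sum(map(sum, img)) / n
    num = wsum = 0.0
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    num += (img[y][x] - mean) * (img[ny][nx] - mean)
                    wsum += 1
    den = sum((img[y][x] - mean) ** 2 for y in range(h) for x in range(w))
    return (n / wsum) * (num / den)

# A checkerboard is the extreme of negative spatial autocorrelation.
board = [[(x + y) % 2 for x in range(8)] for y in range(8)]
i_neg = morans_i(board)   # -1.0 for this pattern
```

Smooth, clumped SAR textures push I toward +1, while speckle-like alternation pushes it negative, which is what makes I useful as a texture feature alongside D.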
Automatic classification for pathological prostate images based on fractal analysis.
Huang, Po-Whei; Lee, Cheng-Hsiung
2009-07-01
Accurate grading of prostatic carcinoma in pathological images is important to prognosis and treatment planning. Since human grading is always time-consuming and subjective, this paper presents a computer-aided system to automatically grade pathological images according to the Gleason grading system, which is the most widespread method for histological grading of prostate tissues. We proposed two feature extraction methods based on fractal dimension to analyze variations of intensity and texture complexity in regions of interest. Each image can be classified into an appropriate grade by using Bayesian, k-NN, and support vector machine (SVM) classifiers, respectively. Leave-one-out and k-fold cross-validation procedures were used to estimate the correct classification rates (CCR). Experimental results show that 91.2%, 93.7%, and 93.7% CCR can be achieved by Bayesian, k-NN, and SVM classifiers, respectively, for a set of 205 pathological prostate images. If our fractal-based feature set is optimized by the sequential floating forward selection method, the CCR can be raised to 94.6%, 94.2%, and 94.6%, respectively, using each of the above three classifiers. Experimental results also show that our feature set is better than the feature sets extracted from multiwavelets, Gabor filters, and gray-level co-occurrence matrix methods because it has a much smaller size and still keeps the most powerful discriminating capability in grading prostate images. PMID:19164082
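The validation protocol is easy to mock up. Below, leave-one-out CCR is computed with a 1-NN classifier standing in for the paper's Bayesian/k-NN/SVM classifiers, and the two-component feature vectors are made-up placeholders for the fractal features:

```python
def one_nn(train, query):
    """Label of the nearest training feature vector (squared Euclidean)."""
    return min(train, key=lambda fx: sum((a - b) ** 2
               for a, b in zip(fx[0], query)))[1]

def loo_ccr(data):
    """Leave-one-out correct classification rate for (features, label) pairs."""
    hits = sum(1 for i, (x, y) in enumerate(data)
               if one_nn(data[:i] + data[i + 1:], x) == y)
    return hits / len(data)

# Hypothetical fractal-feature vectors for two Gleason grades.
data = [([0.10, 0.20], 'g3'), ([0.15, 0.22], 'g3'),
        ([0.90, 0.80], 'g5'), ([0.85, 0.78], 'g5')]
ccr = loo_ccr(data)   # 1.0 on this toy data
```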
Aliasing removing of hyperspectral image based on fractal structure matching
NASA Astrophysics Data System (ADS)
Wei, Ran; Zhang, Ye; Zhang, Junping
2015-05-01
Due to its richness in high frequency components, a hyperspectral image (HSI) is more sensitive to distortions such as aliasing. Many methods aiming at removing such distortion have been proposed. However, few of them are suitable for HSI, due to its low spatial resolution. Fortunately, HSI contains plentiful spectral information, which can be exploited to overcome such difficulties. Motivated by this, we propose an aliasing removing method for HSI. The major difference between the proposed and current methods is that the proposed algorithm is able to utilize fractal structure information, and thus the dilemma originating from the low resolution of HSI is resolved. Experiments on real HSI data demonstrated, both subjectively and objectively, that the proposed method can not only remove the annoying visual effects brought by aliasing, but also recover more high frequency components.
Fractal analysis of scatter imaging signatures to distinguish breast pathologies
NASA Astrophysics Data System (ADS)
Eguizabal, Alma; Laughney, Ashley M.; Krishnaswamy, Venkataramanan; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.; López-Higuera, José M.; Conde, Olga M.
2013-02-01
Fractal analysis combined with a label-free scattering technique is proposed for describing the pathological architecture of tumors. Clinicians and pathologists are conventionally trained to classify abnormal features such as structural irregularities or high indices of mitosis. The potential of fractal analysis lies in its being a morphometric measure of irregular structures, providing a measure of an object's complexity and self-similarity. As cancer is characterized by disorder and irregularity in tissues, this measure could be related to tumor growth. Fractal analysis has been explored in the understanding of the tumor vasculature network. This work addresses the feasibility of applying fractal analysis to the scattering power map (as a physical modeling) and principal components (as a statistical modeling) provided by a localized reflectance spectroscopic system. Disorder, irregularity and cell size variation in tissue samples are translated into the scattering power and principal component magnitudes, whose fractal dimension is correlated with the pathologist's assessment of the samples. The fractal dimension is computed applying the box-counting technique. Results show that fractal analysis of ex-vivo fresh tissue samples exhibits separated ranges of fractal dimension that could help a classifier that combines the fractal results with other morphological features. This contrast trend would help in the discrimination of tissues in the intraoperative context and may serve as a useful adjunct to surgeons.
Analysis of fractal dimensions of rat bones from film and digital images
NASA Technical Reports Server (NTRS)
Pornprasertsuk, S.; Ludlow, J. B.; Webber, R. L.; Tyndall, D. A.; Yamauchi, M.
2001-01-01
OBJECTIVES: (1) To compare the effect of two different intra-oral image receptors on estimates of fractal dimension; and (2) to determine the variations in fractal dimensions between the femur, tibia and humerus of the rat and between their proximal, middle and distal regions. METHODS: The left femur, tibia and humerus from 24 4-6-month-old Sprague-Dawley rats were radiographed using intra-oral film and a charge-coupled device (CCD). Films were digitized at a pixel density comparable to the CCD using a flat-bed scanner. Square regions of interest were selected from proximal, middle, and distal regions of each bone. Fractal dimensions were estimated from the slope of regression lines fitted to plots of log power against log spatial frequency. RESULTS: The fractal dimensions estimates from digitized films were significantly greater than those produced from the CCD (P=0.0008). Estimated fractal dimensions of three types of bone were not significantly different (P=0.0544); however, the three regions of bones were significantly different (P=0.0239). The fractal dimensions estimated from radiographs of the proximal and distal regions of the bones were lower than comparable estimates obtained from the middle region. CONCLUSIONS: Different types of image receptors significantly affect estimates of fractal dimension. There was no difference in the fractal dimensions of the different bones but the three regions differed significantly.
Intelligent fuzzy approach for fast fractal image compression
NASA Astrophysics Data System (ADS)
Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila
2014-12-01
Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computation of FIC. In the first phase, range and domain blocks are arranged based on their edge property. In the second, an imperialist competitive algorithm (ICA) is used according to the classified blocks. For maintaining the quality of the retrieved image and accelerating algorithm operation, we divided the solutions into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results thus achieved exhibit performance better than genetic algorithm (GA)-based and Full-search algorithms in terms of decreasing the number of MSE computations. The proposed algorithm reduced the number of MSE computations, running 463 times faster than the Full-search algorithm, while the quality of the retrieved image did not change considerably.
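The source of the savings can be made concrete: a full search compares every range block against every domain block, while a classified search only compares blocks within the same class. Here a toy variance threshold splits blocks into "flat" and "edge"; the paper's actual edge-property classification and the ICA search are not reproduced:

```python
def variance(block):
    m = sum(block) / len(block)
    return sum((v - m) ** 2 for v in block) / len(block)

def search_counts(ranges, domains, thresh=1.0):
    """Number of MSE evaluations for full search vs class-restricted search."""
    full = len(ranges) * len(domains)
    is_edge = lambda b: variance(b) > thresh
    dom_cls = [is_edge(d) for d in domains]
    classified = sum(sum(1 for c in dom_cls if c == is_edge(r))
                     for r in ranges)
    return full, classified

# Two range blocks (one flat, one edgy) against three domain blocks.
ranges = [[0, 0, 0, 0], [0, 9, 0, 9]]
domains = [[1, 1, 1, 1], [5, 0, 5, 0], [2, 2, 2, 2]]
full, classified = search_counts(ranges, domains)   # 6 vs 3 comparisons
```

The ratio between the two counts is what metaheuristics like ICA then push further by searching each class pool selectively instead of exhaustively.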
Loh, N. Duane
2012-06-20
This deposition includes the aerosol diffraction images used for phasing, fractal morphology, and time-of-flight mass spectrometry. Files in this deposition are ordered in subdirectories that reflect the specifics.
Targets detection in smoke-screen image sequences using fractal and rough set theory
NASA Astrophysics Data System (ADS)
Yan, Xiaoke
2015-08-01
In this paper, a new algorithm for the detection of moving targets in smoke-screen image sequences is presented, which combines three pixel properties, grey level, fractal dimension and the correlation between pixels, using rough set theory. The first step is to locate and extract regions that may contain objects in an image by a local grey-threshold technique. Secondly, the fractal dimensions of the pixels are calculated, and smoke-screen regions are separated according to their fractal dimensions. Finally, according to temporal and spatial correlations between different frames, the singular points can be filtered out. The experimental results show that the algorithm can effectively increase the detection probability and is robust.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-08-01
The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier-spectrum-based fractal descriptors are estimated at specific scales and directions to characterize the image. The support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal brain magnetic resonance images (MRI) and abnormal ones affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, and also those directly estimated from the original image.
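The DWT leg of the comparison is easy to make concrete with one level of the 2-D Haar transform, the simplest discrete wavelet; the fractal descriptors would then be estimated per sub-band (that step, and the EMD/VMD decompositions, are omitted from this sketch):

```python
def haar2d(img):
    """One level of the 2-D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]
    LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]
    HH = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a, b = img[2 * y][2 * x], img[2 * y][2 * x + 1]
            c, d = img[2 * y + 1][2 * x], img[2 * y + 1][2 * x + 1]
            LL[y][x] = (a + b + c + d) / 4.0   # local average (approximation)
            LH[y][x] = (a - b + c - d) / 4.0   # horizontal detail
            HL[y][x] = (a + b - c - d) / 4.0   # vertical detail
            HH[y][x] = (a - b - c + d) / 4.0   # diagonal detail
    return LL, LH, HL, HH

# A constant image has all its energy in the approximation band.
LL, LH, HL, HH = haar2d([[4.0] * 4 for _ in range(4)])
```

Recursing on LL produces the multi-resolution pyramid at whose scales the descriptors are computed.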
Fractal scaling of apparent soil moisture estimated from vertical planes of Vertisol pit images
NASA Astrophysics Data System (ADS)
Cumbrera, Ramiro; Tarquis, Ana M.; Gascó, Gabriel; Millán, Humberto
2012-07-01
Image analysis could be a useful tool for investigating the spatial patterns of apparent soil moisture at multiple resolutions. The objectives of the present work were (i) to define apparent soil moisture patterns from vertical planes of Vertisol pit images and (ii) to describe the scaling of apparent soil moisture distribution using fractal parameters. Twelve soil pits (0.70 m long × 0.60 m wide × 0.30 m deep) were excavated on a bare Mazic Pellic Vertisol. Six of them were excavated in April/2011 and six pits were established in May/2011 after 3 days of a moderate rainfall event. Digital photographs were taken from each Vertisol pit using a Kodak™ digital camera. The mean image size was 1600 × 945 pixels with one physical pixel ≈373 μm of the photographed soil pit. Each soil image was analyzed using two fractal scaling exponents, box counting (capacity) dimension (DBC) and interface fractal dimension (Di), and three prefractal scaling coefficients, the total number of boxes intercepting the foreground pattern at a unit scale (A), fractal lacunarity at the unit scale (Λ1) and Shannon entropy at the unit scale (S1). All the scaling parameters identified significant differences between both sets of spatial patterns. Fractal lacunarity was the best discriminator between apparent soil moisture patterns. Soil image interpretation with fractal exponents and prefractal coefficients can be incorporated within a site-specific agriculture toolbox. While fractal exponents convey information on space filling characteristics of the pattern, prefractal coefficients represent the investigated soil property as seen through a higher resolution microscope. In spite of some computational and practical limitations, image analysis of apparent soil moisture patterns could be used in connection with traditional soil moisture sampling, which always renders punctual estimates.
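Of the prefractal coefficients, lacunarity is the one least often spelled out. A gliding-box sketch for a binary pattern, using the common normalisation Λ = ⟨M²⟩/⟨M⟩² over box masses M (one convention among several, and an assumption here), under which a gap-free pattern scores exactly 1:

```python
def lacunarity(img, s):
    """Gliding-box lacunarity <M^2>/<M>^2 of box masses at scale s."""
    h, w = len(img), len(img[0])
    masses = [sum(img[y + i][x + j] for i in range(s) for j in range(s))
              for y in range(h - s + 1) for x in range(w - s + 1)]
    m1 = sum(masses) / len(masses)
    m2 = sum(m * m for m in masses) / len(masses)
    return m2 / (m1 * m1)

# A gap-free pattern scores 1; a sparse pattern of density 1/16 scores 16
# at the unit scale.
uniform = [[1] * 8 for _ in range(8)]
sparse = [[1 if (x % 4 == 0 and y % 4 == 0) else 0 for x in range(8)]
          for y in range(8)]
```

High lacunarity thus flags gappy, clustered moisture patterns, which is consistent with its role here as the best discriminator between the two pit sets.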
High compression image and image sequence coding
NASA Technical Reports Server (NTRS)
Kunt, Murat
1989-01-01
The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.
Time Series Analysis OF SAR Image Fractal Maps: The Somma-Vesuvio Volcanic Complex Case Study
NASA Astrophysics Data System (ADS)
Pepe, Antonio; De Luca, Claudio; Di Martino, Gerardo; Iodice, Antonio; Manzo, Mariarosaria; Pepe, Susi; Riccio, Daniele; Ruello, Giuseppe; Sansosti, Eugenio; Zinno, Ivana
2016-04-01
The fractal dimension is a significant geophysical parameter describing natural surfaces, representing the distribution of roughness over different spatial scales; in the case of volcanic structures, it has been related to the specific nature of materials and to the effects of active geodynamic processes. In this work, we present the analysis of the temporal behavior of the fractal dimension estimates generated from multi-pass SAR images relevant to the Somma-Vesuvio volcanic complex (South Italy). To this aim, we consider a Cosmo-SkyMed data-set of 42 stripmap images acquired from ascending orbits between October 2009 and December 2012. Starting from these images, we generate a three-dimensional stack composed of the corresponding fractal maps (ordered according to the acquisition dates), after a proper co-registration. The time-series of the pixel-by-pixel estimated fractal dimension values show that, over invariant natural areas, the fractal dimension values do not reveal significant changes; on the contrary, over urban areas, it correctly assumes values outside the fractality range of natural surfaces and shows strong fluctuations. As a final result of our analysis, we generate a fractal map that includes only the areas where the fractal dimension is considered reliable and stable (i.e., whose standard deviation computed over the time series is reasonably small). The so-obtained fractal dimension map is then used to identify areas that are homogeneous from a fractal viewpoint. Indeed, the analysis of this map reveals the presence of two distinctive landscape units corresponding to the Mt. Vesuvio and Gran Cono. The comparison with the (simplified) geological map clearly shows the presence in these two areas of volcanic products of different age. The presented fractal dimension map analysis demonstrates the ability to get a picture of the evolution degree of the monitored volcanic edifice and can be profitably extended in the future to other volcanic systems with
Statistical fractal border features for MRI breast mass images
NASA Astrophysics Data System (ADS)
Penn, Alan I.; Bolinger, Lizann; Loew, Murray H.
1998-06-01
MRI has been proposed as an alternative to mammography for detecting and staging breast cancer. Recent studies have shown that architectural features of breast masses may be useful in improving specificity. Since fractal dimension (fd) has been correlated with roughness, and border roughness is an indicator of malignancy, the fd of the mass border is a promising architectural feature for achieving improved specificity. Previous methods of estimating the fd of the mass border have been unreliable because of limited data or overly restrictive assumptions in the fractal model. We present preliminary results of a statistical approach in which a sample space of fd estimates is generated from a family of self-affine fractal models. The fd of the mass border is then estimated from the statistics of the sample space.
Edge detection and image segmentation of space scenes using fractal analyses
NASA Technical Reports Server (NTRS)
Cleghorn, Timothy F.; Fuller, J. J.
1992-01-01
A method was developed for segmenting images of space scenes into manmade and natural components, using fractal dimensions and lacunarities. Calculations of these parameters are presented. Results are presented for a variety of aerospace images, showing that it is possible to perform edge detection of manmade objects against natural backgrounds such as those seen in an aerospace environment.
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
2004-01-01
The accuracy of traditional multispectral maximum-likelihood image classification is limited by the skewed statistical distributions of reflectances from the complex heterogeneous mixture of land cover types in urban areas. This work examines the utility of local variance, fractal dimension, and Moran's I index of spatial autocorrelation in segmenting multispectral satellite imagery. Tools available in the Image Characterization and Modeling System (ICAMS) were used to analyze Landsat 7 imagery of Atlanta, Georgia. Although segmentation of panchromatic images is possible using indicators of spatial complexity, different land covers often yield similar values of these indices. Better results are obtained when a surface of local fractal dimension or spatial autocorrelation is combined as an additional layer in a supervised maximum-likelihood multispectral classification. The addition of fractal dimension measures is particularly effective at resolving land cover classes within urbanized areas, as compared to per-pixel spectral classification techniques.
NASA Astrophysics Data System (ADS)
Habibi, Ali
1993-01-01
The objective of this article is to present a discussion of the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and development, the milestones in the advancement of technology, and the success of upcoming commercial products in the marketplace, which will be the main factors in setting the future stage for image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in the development of theory, software, and hardware, coupled with the future needs for the use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, we predict the future state of image coding. What seems certain today is the growing need for bandwidth compression. Television uses a technology that is half a century old and is ready to be replaced by high-definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth-compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activity in the development of theory, software, special-purpose chips, and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.
Improved triangular prism methods for fractal analysis of remotely sensed images
NASA Astrophysics Data System (ADS)
Zhou, Yu; Fung, Tung; Leung, Yee
2016-05-01
Feature extraction has been a major area of research in remote sensing, and fractal features are a natural characterization of complex objects across scales. Building on the modified triangular prism (MTP) method, we systematically discuss three factors closely related to the estimation of the fractal dimensions of remotely sensed images, namely the (F1) number of steps, (F2) step size, and (F3) estimation accuracy of the facets' areas of the triangular prisms. Differing from existing improved algorithms that consider these factors separately, we take all factors into account simultaneously to construct three new algorithms, namely the eight-pixel, the four-corner, and the moving-average modifications of the MTP. Numerical experiments based on 4000 generated images show their superior performance over existing algorithms: our algorithms not only overcome the limitation on image size suffered by existing algorithms but also obtain similar average fractal dimensions with smaller standard deviations, only 50% as large for images with high fractal dimensions. In a real-life application, our algorithms are more likely to obtain fractal dimensions within the theoretical range. Thus, the fractal nature uncovered by our algorithms is more reasonable in quantifying the complexity of remotely sensed images. Despite the similar performance of these three new algorithms, the moving-average MTP can mitigate the sensitivity of the MTP to noise and extreme values. Based on the numerical and real-life case studies, we check the effect of the three factors (F1)-(F3) and demonstrate that they can be considered simultaneously to improve the performance of the MTP method.
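The classic triangular-prism idea that the abstract builds on can be sketched as follows. This is a minimal, assumed implementation of the original method (the paper's eight-pixel, four-corner, and moving-average variants modify the corner and center computations); `triangular_prism_dimension` is a hypothetical name:

```python
import math

def _tri_area(p, q, r):
    # area of a 3-D triangle via the cross-product norm
    u = [q[k] - p[k] for k in range(3)]
    v = [r[k] - p[k] for k in range(3)]
    c = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    return 0.5 * math.sqrt(sum(x * x for x in c))

def triangular_prism_dimension(z, steps=(1, 2, 4, 8)):
    """Triangular prism estimate of the fractal dimension of a gray-level
    surface z (an NxN grid of heights, ideally N = 2**k + 1).
    For each step size s, the four corners of every s x s cell and the
    averaged center height define four triangles whose total area A(s)
    is regressed against s: D = 2 - slope of log A vs. log s."""
    n = len(z)
    xs, ys = [], []
    for s in steps:
        area = 0.0
        for i in range(0, n - s, s):
            for j in range(0, n - s, s):
                a = (i, j, z[i][j]); b = (i, j + s, z[i][j + s])
                c = (i + s, j + s, z[i + s][j + s]); d = (i + s, j, z[i + s][j])
                e = (i + s / 2, j + s / 2, (a[2] + b[2] + c[2] + d[2]) / 4.0)
                area += (_tri_area(a, b, e) + _tri_area(b, c, e) +
                         _tri_area(c, d, e) + _tri_area(d, a, e))
        xs.append(math.log(s)); ys.append(math.log(area))
    mx = sum(xs) / len(xs); my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 2.0 - slope

flat = [[0.0] * 65 for _ in range(65)]
print(triangular_prism_dimension(flat))  # a flat surface has dimension 2.0
```

A rougher surface loses less apparent area as the step size grows, giving a negative slope and hence a dimension above 2, which is the behavior the (F1)-(F3) factors control.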
ERIC Educational Resources Information Center
Barton, Ray
1990-01-01
Presented is an educational game called "The Chaos Game" which produces complicated fractal images. Two basic computer programs are included. The production of fractal images by the Sierpinski gasket and the Chaos Game programs is discussed. (CW)
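The Chaos Game itself is short enough to sketch. The article's own programs are in BASIC and are not reproduced here; the following Python version is only an illustrative reconstruction of the game's rule:

```python
import random

def chaos_game(n_points=20000, seed=1):
    """Play the Chaos Game: start anywhere, repeatedly jump halfway toward a
    randomly chosen vertex of a triangle, and record the visited points.
    The points trace out the Sierpinski gasket."""
    rng = random.Random(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = rng.random(), rng.random()
    points = []
    for _ in range(n_points):
        vx, vy = rng.choice(vertices)
        x, y = (x + vx) / 2.0, (y + vy) / 2.0
        points.append((x, y))
    return points

pts = chaos_game()
# After a short transient, no point lands inside the central "hole" of the
# gasket; here a small box well inside that hole is checked.
center_hits = sum(1 for x, y in pts[100:]
                  if 0.45 < x < 0.55 and 0.2 < y < 0.3)
print(center_hits)  # 0
```

The empty central triangle emerges because each jump contracts distances by half, so every orbit converges onto the gasket, never into its holes.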
Automatic Method to Classify Images Based on Multiscale Fractal Descriptors and Paraconsistent Logic
NASA Astrophysics Data System (ADS)
Pavarino, E.; Neves, L. A.; Nascimento, M. Z.; Godoy, M. F.; Arruda, P. F.; Neto, D. S.
2015-01-01
This study presents an automatic method to classify images using fractal descriptors, such as multiscale fractal dimension and lacunarity, as decision rules. The proposed methodology was divided into three steps: quantification of the regions of interest with fractal dimension and lacunarity, both computed under a multiscale approach; definition of reference patterns, which are the limits of each studied group; and classification of each group, considering the combination of the reference patterns with signal maximization (an approach commonly considered in paraconsistent logic). The proposed method was used to classify histological prostate images, aiming at the diagnosis of prostate cancer. The accuracy levels were significant, surpassing those obtained with Support Vector Machine (SVM) and Best-First Decision Tree (BFTree) classifiers. The proposed approach allows patterns to be recognized and classified, offering the advantage of giving comprehensive results to the specialists.
Fractal Analysis of Laplacian Pyramidal Filters Applied to Segmentation of Soil Images
de Castro, J.; Ballesteros, F.; Méndez, A.; Tarquis, A. M.
2014-01-01
The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters which is uniparametrical in the classical case, when the length of the filter is 5. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges of the single parameter a. These fractal filters lose less energy in every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm threshold values, and conclude that our algorithm produces reliable test results. PMID:25114957
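The one-parameter, length-5 filter family referred to here is presumably the Burt-Adelson generating kernel; under that assumption, a minimal 1-D REDUCE step of such a pyramid can be sketched as:

```python
def burt_kernel(a):
    """The one-parameter 5-tap generating kernel of the Burt-Adelson
    Laplacian pyramid; a = 0.4 is approximately Gaussian-shaped."""
    return [0.25 - a / 2, 0.25, a, 0.25, 0.25 - a / 2]

def reduce_1d(signal, a=0.4):
    """One REDUCE step: filter with the 5-tap kernel, downsample by 2.
    Edges are handled by clamping indices (one of several common choices)."""
    w = burt_kernel(a)
    n = len(signal)
    out = []
    for i in range(0, n, 2):
        acc = 0.0
        for k in range(-2, 3):
            acc += w[k + 2] * signal[min(max(i + k, 0), n - 1)]
        out.append(acc)
    return out

# The kernel sums to 1, so a constant signal is preserved at every level.
level = [5.0] * 16
for _ in range(3):
    level = reduce_1d(level)
print(len(level), [round(v, 6) for v in level])  # 2 [5.0, 5.0]
```

A Laplacian level is then the difference between a level and the expanded version of the next coarser one; the paper's "fractal" range of the parameter a concerns how much energy such differences retain.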
Local intensity adaptive image coding
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1989-01-01
The objective of preprocessing for machine vision is to extract intrinsic target properties. The most important properties ordinarily are structure and reflectance. Illumination in space, however, is a significant problem as the extreme range of light intensity, stretching from deep shadow to highly reflective surfaces in direct sunlight, impairs the effectiveness of standard approaches to machine vision. To overcome this critical constraint, an image coding scheme is being investigated which combines local intensity adaptivity, image enhancement, and data compression. It is very effective under the highly variant illumination that can exist within a single frame or field of view, and it is very robust to noise at low illuminations. Some of the theory and salient features of the coding scheme are reviewed. Its performance is characterized in a simulated space application, and the research and development activities are described.
NASA Technical Reports Server (NTRS)
Huang, J.; Turcotte, D. L.
1989-01-01
The concept of fractal mapping is introduced and applied to digitized topography of Arizona. It is shown that fractal statistics describe the topography of the state to a good approximation. The fractal dimensions and roughness amplitudes from subregions are used to construct maps of these quantities. It is found that the fractal dimension of actual two-dimensional topography is not simply obtained by adding unity to the fractal dimension of one-dimensional topographic tracks. In addition, consideration is given to the production of fractal maps from synthetically derived topography.
NASA Astrophysics Data System (ADS)
Bonifazi, Giuseppe
2002-05-01
Fractal geometry concerns the study of non-Euclidean geometrical figures generated by a recursive sequence of mathematical operations. These figures show self-similar features in the sense that their shape at a certain scale is equal to, or at least 'similar' to, the shape of the figure at a different scale or resolution. This property of scale invariance often occurs in natural phenomena as well. The basis of the method is the comparison of the space covering of one such geometrical construction, the Sierpinski carpet, with the location and size distribution of the particles in the portion of surface captured in the images. The fractal analysis method consists in studying the structure of the size distribution and the disposition modalities. Such an approach is presented in this paper with reference to the characterization of airborne dust produced in a working environment, through the evaluation of particle size disposition and distribution over a surface. This behavior was assumed to be strictly correlated with 1) the physical characteristics of the material surface and 2) the modalities by which the material is 'shot' onto the dust sample holder (a glass support). To this end, a 2D fractal extractor was used, calibrated to different area-thresholding values; as the resulting binary sample image changes, different fractal resolutions originate, plotted against the residual background area (Richardson plot). Changing the lower size-thresholding value of the 2D fractal extractor algorithm means changing the starting point of the fractal measurements; in this way, possible differences among powders in the lower dimensional classes were sought. The rate at which the lowest part of the plot drops to zero residual area, together with the fractal dimensions (low and high, depending on the average material curve) and their precision (R2) for the 'zero curve' (the Richardson plot with area-thresholding value equal to zero, i.e., the whole fractal distribution), can be used for this characterization.
Saremi, Saeed; Sejnowski, Terrence J.
2016-01-01
Natural images are scale invariant with structures at all length scales. We formulated a geometric view of scale invariance in natural images using percolation theory, which describes the behavior of connected clusters on graphs. We map images to the percolation model by defining clusters on a binary representation for images. We show that critical percolating structures emerge in natural images and study their scaling properties by identifying fractal dimensions and exponents for the scale-invariant distributions of clusters. This formulation leads to a method for identifying clusters in images from underlying structures as a starting point for image segmentation. PMID:26415153
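The clustering step described above, defining connected clusters on a binarized image, can be sketched with ordinary connected-component labeling; this is an assumed stand-in for the first stage of the authors' percolation analysis, not their full scaling study:

```python
from collections import deque

def cluster_sizes(binary):
    """Label 4-connected clusters of 1-pixels (site clusters on the grid)
    via breadth-first search and return their sizes, largest first."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for i in range(h):
        for j in range(w):
            if binary[i][j] and not seen[i][j]:
                size, queue = 0, deque([(i, j)])
                seen[i][j] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                sizes.append(size)
    return sorted(sizes, reverse=True)

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1],
       [1, 0, 1, 1]]
print(cluster_sizes(img))  # [4, 3, 1]
```

Repeating this over a sweep of binarization thresholds and histogramming the sizes yields the cluster-size distributions whose power-law tails the paper analyzes.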
Adaptive entropy coded subband coding of images.
Kim, Y H; Modestino, J W
1992-01-01
The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ (entropy-constrained predictive vector quantization) over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive-buffer-instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels, which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to that of the corresponding ideal 2-D ECSBC system. PMID:18296138
Redundancy reduction in image coding
NASA Technical Reports Server (NTRS)
Rahman, Zia-Ur; Alter-Gartenberg, Rachel; Fales, Carl L.; Huck, Friedrich O.
1993-01-01
We assess redundancy reduction in image coding in terms of the information acquired by the image-gathering process and the amount of data required to convey this information. A clear distinction is made between the theoretically minimum rate of data transmission, as measured by the entropy of the completely decorrelated data, and the actual rate of data transmission, as measured by the entropy of the encoded (incompletely decorrelated) data. It is shown that the information efficiency of the visual communication channel depends not only on the characteristics of the radiance field and the decorrelation algorithm, as is generally perceived, but also on the design of the image-gathering device, as is commonly ignored.
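The gap between the entropy of raw and decorrelated data can be made concrete with a toy example; the difference operator below is a generic stand-in for the decorrelation algorithms the abstract discusses:

```python
import math
from collections import Counter

def entropy(symbols):
    """First-order (memoryless) entropy of a symbol sequence, in bits/symbol.
    This lower-bounds the rate of any code that treats symbols independently."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A smooth ramp has high raw entropy, but its first differences (a crude
# "decorrelated" representation) concentrate on a few values, so the bound
# on the coded data rate drops sharply.
raw = [i // 4 for i in range(256)]                      # slowly varying signal
diff = [raw[0]] + [raw[i] - raw[i - 1] for i in range(1, 256)]
print(entropy(raw) > entropy(diff))  # True
```

Here the raw ramp needs 6 bits/symbol while the differences need under 1, which is the sense in which "incompletely decorrelated" data leaves redundancy on the table.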
Fractal methods for extracting artificial objects from the unmanned aerial vehicle images
NASA Astrophysics Data System (ADS)
Markov, Eugene
2016-04-01
Unmanned aerial vehicles (UAVs) have become increasingly used in earth surface observations, with special interest placed on automatic modes of environmental control and recognition of artificial objects. Fractal methods for image processing detect artificial objects in digital space images well but have not previously been applied to UAV-produced imagery. Parameters of photography, on-board equipment, and image characteristics differ considerably between spacecraft and UAVs. Therefore, methods that work properly with space images can produce different results for UAVs. In this regard, testing the applicability of fractal methods to UAV-produced images and determining the optimal range of parameters for these methods are of great interest. This research is dedicated to the solution of this problem. Specific features of the earth's surface images produced with UAVs are described in the context of their interpretation and recognition. Fractal image processing methods for extracting artificial objects are described, and the results of applying these methods to UAV images are presented.
Plant Identification Based on Leaf Midrib Cross-Section Images Using Fractal Descriptors
da Silva, Núbia Rosa; Florindo, João Batista; Gómez, María Cecilia; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez
2015-01-01
The correct identification of plants is a common necessity not only for researchers but also for the lay public. Recently, computational methods have been employed to facilitate this task; however, there are few studies in view of the wide diversity of plants occurring in the world. This study proposes to analyse images obtained from cross-sections of the leaf midrib using fractal descriptors. These descriptors are obtained from the fractal dimension of the object computed at a range of scales. In this way, they provide rich information regarding the spatial distribution of the analysed structure and, as a consequence, they measure the multiscale morphology of the object of interest. In Biology, such morphology is of great importance because it is related to evolutionary aspects and is successfully employed to characterize and discriminate among different biological structures. Here, the fractal descriptors are used to identify the species of plants based on the image of their leaves. A large number of samples were examined: 606 leaf samples of 50 species from the Brazilian flora. The results are compared to other imaging methods in the literature and demonstrate that fractal descriptors are precise and reliable in the taxonomic process of plant species identification. PMID:26091501
Reconstruction of coded aperture images
NASA Technical Reports Server (NTRS)
Bielefeld, Michael J.; Yin, Lo I.
1987-01-01
The balanced correlation method and the Maximum Entropy Method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a Uniformly Redundant Array (URA) system. Although the MEM has advantages over the balanced correlation method, it is computationally time consuming because of the iterative nature of its solution. The Massively Parallel Processor (MPP), with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM in future coded-aperture experiments with the help of the MPP.
Alzheimer's Disease Detection in Brain Magnetic Resonance Images Using Multiscale Fractal Analysis
Lahmiri, Salim; Boukadoum, Mounir
2013-01-01
We present a new automated system for the detection of brain magnetic resonance images (MRI) affected by Alzheimer's disease (AD). The MRI is analyzed by means of multiscale analysis (MSA) to obtain its fractals at six different scales. The extracted fractals are used as features to differentiate healthy brain MRI from those of AD by a support vector machine (SVM) classifier. The result of classifying 93 brain MRIs consisting of 51 images of healthy brains and 42 of brains affected by AD, using leave-one-out cross-validation method, yielded 99.18% ± 0.01 classification accuracy, 100% sensitivity, and 98.20% ± 0.02 specificity. These results and a processing time of 5.64 seconds indicate that the proposed approach may be an efficient diagnostic aid for radiologists in the screening for AD. PMID:24967286
NASA Astrophysics Data System (ADS)
Ahammer, Helmut; DeVaney, Trevor T. J.
2004-03-01
The boundary of a fractal object, represented in a two-dimensional space, is theoretically a line with an infinitely small width. In digital images this boundary or contour is limited to the pixel resolution of the image, and the width of the line commonly depends on the edge detection algorithm used. The Minkowski dimension was evaluated using three different edge detection algorithms (the Sobel, Roberts, and Laplace operators). These three operators were investigated because they are very widely used and because their edge detection results differ markedly in line width. Very common fractals (the Sierpinski carpet and Koch islands) were investigated, as well as binary images from a cancer invasion assay taken with a confocal laser scanning microscope. The fractal dimension is directly proportional to the width of the contour line, and the fact that, in practice, the investigated objects are often fractals only within a limited resolution range is considered as well.
NASA Astrophysics Data System (ADS)
Liang, Yingjie; Ye, Allen Q.; Chen, Wen; Gatto, Rodolfo G.; Colon-Perez, Luis; Mareci, Thomas H.; Magin, Richard L.
2016-10-01
Non-Gaussian (anomalous) diffusion is widespread in biological tissues, where its effects modulate chemical reactions and membrane transport. When viewed using magnetic resonance imaging (MRI), anomalous diffusion is characterized by a persistent or 'long tail' behavior in the decay of the diffusion signal. Recent MRI studies have used the fractional derivative to describe diffusion dynamics in normal and post-mortem tissue by connecting the order of the derivative with changes in tissue composition, structure and complexity. In this study we consider an alternative approach by introducing fractal time and space derivatives into Fick's second law of diffusion. This provides a more natural way to link sub-voxel tissue composition with the observed MRI diffusion signal decay following the application of a diffusion-sensitive pulse sequence. Unlike previous studies using fractional order derivatives, here the fractal derivative order is directly connected to the Hausdorff fractal dimension of the diffusion trajectory. The result is a simpler, computationally faster, and more direct way to incorporate tissue complexity and microstructure into the diffusional dynamics. Furthermore, the results are readily expressed in terms of spectral entropy, which provides a quantitative measure of the overall complexity of the heterogeneous and multi-scale structure of biological tissues. As an example, we apply this new model to the characterization of diffusion in fixed samples of the mouse brain. These results are compared with those obtained using the mono-exponential, the stretched exponential, the fractional derivative, and the diffusion kurtosis models. Overall, we find that the order of the fractal time derivative, the diffusion coefficient, and the spectral entropy are potential biomarkers to differentiate between the microstructure of white and gray matter. In addition, we note that the fractal derivative model has practical advantages over the existing models.
Fractal analyses of osseous healing using Tuned Aperture Computed Tomography images
Seyedain, Ali; Webber, Richard L.; Nair, Umadevi P.; Piesco, Nicholas P.; Agarwal, Sudha; Mooney, Mark P.; Gröndahl, Hans-Göran
2016-01-01
The aim of this study was to evaluate osseous healing in mandibular defects using fractal analyses on conventional radiographs and tuned aperture computed tomography (TACT; OrthoTACT, Instrumentarium Imaging, Helsinki, Finland) images. Eighty test sites on the inferior margins of rabbit mandibles were subjected to lesion induction and treated with one of the following: no treatment (controls); osteoblasts only; polymer matrix only; or an osteoblast-polymer matrix (OPM) combination. Images were acquired using conventional radiography and TACT, including unprocessed TACT (TACT-U) and iteratively restored TACT (TACT-IR). Healing was followed up over time, with images acquired at 3, 6, 9, and 12 weeks post-surgery. Fractal dimension (FD) was computed within regions of interest in the defects using the TACT workbench. Results were analyzed for effects produced by imaging modality, treatment modality, time after surgery, and lesion location. Histomorphometric data were available to assess ground truth. Significant differences (p < 0.0001) were noted based on imaging modality, with TACT-IR recording the highest mean fractal dimension (MFD), followed by TACT-U and conventional images, in that order. Sites treated with OPM recorded the highest MFDs among all treatment modalities (p < 0.0001). The highest MFD based on time was recorded at 3 weeks and differed significantly from that at 12 weeks (p < 0.035). Correlation of FD with the histomorphometric data was high (r = 0.79; p < 0.001). The FD computed on TACT-IR showed the highest correlation with the histomorphometric data, thus establishing that TACT is a more efficient and accurate imaging modality for quantification of osseous changes within healing bony defects. PMID:11519567
Beyond maximum entropy: Fractal pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, R. C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.
Fractal evaluation of drug amorphicity from optical and scanning electron microscope images
NASA Astrophysics Data System (ADS)
Gavriloaia, Bogdan-Mihai G.; Vizireanu, Radu C.; Neamtu, Catalin I.; Gavriloaia, Gheorghe V.
2013-09-01
Amorphous materials are metastable and more reactive than crystalline ones, and have to be evaluated before pharmaceutical compound formulation. Amorphicity is interpreted as spatial chaos, and patterns of molecular aggregates of dexamethasone (D) were investigated in this paper using the fractal dimension (FD). Images of D at three magnifications taken with an optical microscope (OM), and at eight magnifications with a scanning electron microscope (SEM), were analyzed. The average FD of pattern irregularities was 1.538 for the OM images and about 1.692 for the SEM images. The FDs of the two kinds of images are less sensitive to the threshold level. 3D plots illustrate the dependence of FD on threshold and magnification level. As a result, an OM image at a single scale is enough to characterize the amorphicity of D.
Subband coding for image data archiving
NASA Technical Reports Server (NTRS)
Glover, Daniel; Kwatra, S. C.
1993-01-01
The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.
Subband coding for image data archiving
NASA Technical Reports Server (NTRS)
Glover, D.; Kwatra, S. C.
1992-01-01
The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.
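As an illustration of the subband idea in the two records above, here is a one-level 2-D split into four subbands using Haar-like sums and differences; it is losslessly invertible, but it is only a generic sketch, not the filter banks used in the papers:

```python
def haar_subbands(img):
    """One-level 2-D subband split of an even-sized integer image into the
    LL, LH, HL, HH quadrants, using Haar-like sums and differences."""
    h, w = len(img), len(img[0])
    # horizontal pass: lows (a+b) in the left half, highs (a-b) in the right
    rows = [[img[i][2 * j] + img[i][2 * j + 1] for j in range(w // 2)] +
            [img[i][2 * j] - img[i][2 * j + 1] for j in range(w // 2)]
            for i in range(h)]
    # vertical pass: lows in the top half, highs in the bottom
    out = [[0] * w for _ in range(h)]
    for j in range(w):
        for i in range(h // 2):
            out[i][j] = rows[2 * i][j] + rows[2 * i + 1][j]
            out[h // 2 + i][j] = rows[2 * i][j] - rows[2 * i + 1][j]
    return out

def haar_inverse(coef):
    """Exact inverse of haar_subbands: since s = a+b and d = a-b always share
    parity, a = (s+d)//2 and b = (s-d)//2 recover the integers losslessly."""
    h, w = len(coef), len(coef[0])
    rows = [[0] * w for _ in range(h)]
    for j in range(w):
        for i in range(h // 2):
            s, d = coef[i][j], coef[h // 2 + i][j]
            rows[2 * i][j], rows[2 * i + 1][j] = (s + d) // 2, (s - d) // 2
    img = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w // 2):
            s, d = rows[i][j], rows[i][w // 2 + j]
            img[i][2 * j], img[i][2 * j + 1] = (s + d) // 2, (s - d) // 2
    return img

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(haar_inverse(haar_subbands(img)) == img)  # True
```

The LL quadrant is a coarse thumbnail (useful for the browsing and progressive-resolution advantages the abstracts mention), and repeating the split on LL builds a multi-level subband decomposition.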
Zone Specific Fractal Dimension of Retinal Images as Predictor of Stroke Incidence
Aliahmad, Behzad; Kumar, Dinesh Kant; Hao, Hao; Unnikrishnan, Premith; Che Azemin, Mohd Zulfaezal; Kawasaki, Ryo; Mitchell, Paul
2014-01-01
Fractal dimensions (FDs) are frequently used to summarize the complexity of the retinal vasculature. However, previous techniques on this topic were not zone specific. A new methodology to measure the FD of a specific zone in retinal images has been developed and tested as a marker for stroke prediction. Higuchi's fractal dimension was measured in the circumferential direction (FDC) with respect to the optic disk (OD), in three concentric regions between the OD boundary and 1.5 OD diameters from its margin. The significance of its association with a future stroke event was tested using the Blue Mountains Eye Study (BMES) database and compared against the spectrum fractal dimension (SFD) and the box-counting (BC) dimension. Kruskal-Wallis analysis revealed FDC as a better predictor of stroke (H = 5.80, P = 0.016, α = 0.05) compared with SFD (H = 0.51, P = 0.475, α = 0.05) and BC (H = 0.41, P = 0.520, α = 0.05), with an overall lower median value for the cases than for the control group. This work has shown that there is a significant association between the zone-specific FDC of eye fundus images and a future stroke event, while this difference is not significant when other FD methods are employed. PMID:25485298
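Higuchi's estimator underlying the FDC measure can be sketched as follows; this is a generic implementation of the method applied to any 1-D series (e.g., samples taken along a circumferential path around the optic disk), not the authors' zone-specific pipeline:

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension of a 1-D series.
    For each lag k, the series is decimated into k sub-series; their mean
    normalized curve length L(k) scales as k**(-D), so D is the slope of
    log L(k) versus log(1/k)."""
    n = len(x)
    pts_log = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            sub = x[m::k]                      # decimated sub-series
            if len(sub) < 2:
                continue
            length = sum(abs(sub[i] - sub[i - 1]) for i in range(1, len(sub)))
            length *= (n - 1) / ((len(sub) - 1) * k * k)  # Higuchi normalization
            lengths.append(length)
        pts_log.append((math.log(1.0 / k),
                        math.log(sum(lengths) / len(lengths))))
    mx = sum(a for a, _ in pts_log) / len(pts_log)
    my = sum(b for _, b in pts_log) / len(pts_log)
    return (sum((a - mx) * (b - my) for a, b in pts_log)
            / sum((a - mx) ** 2 for a, _ in pts_log))

# A straight line is one-dimensional:
print(round(higuchi_fd(list(range(100))), 3))  # 1.0
```

Smooth signals score near 1 and highly irregular ones approach 2, which is why the measure summarizes the complexity of a vessel profile along the circumferential path.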
Discrete Cosine Transform Image Coding With Sliding Block Codes
NASA Astrophysics Data System (ADS)
Divakaran, Ajay; Pearlman, William A.
1989-11-01
A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image, followed by a search on a trellis-structured code. This code is a sliding block code that utilizes a constrained-size reproduction alphabet. The transform coding divides the image into blocks. The non-stationarity of the image is counteracted by grouping these blocks into clusters through a clustering algorithm and then encoding the clusters separately. Mandela-ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences, and a separate search ensues on each of these sequences. Padding sequences are used to improve the trellis search fidelity; they absorb the error caused by the building up of the trellis to full size. Simulations were carried out on a 256x256 image ('LENA'). The results are comparable to those of existing schemes, and the visual quality of the image is enhanced considerably by the padding and clustering.
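The first stage described above, the two-dimensional DCT, can be illustrated with a minimal separable implementation (illustrative only; the paper's trellis search, clustering, and padding stages are not shown):

```python
import math

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block, computed separably:
    a 1-D DCT over the rows, then over the columns."""
    n = len(block)

    def dct1(v):
        out = []
        for u in range(n):
            s = sum(v[i] * math.cos(math.pi * (i + 0.5) * u / n)
                    for i in range(n))
            scale = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
            out.append(scale * s)
        return out

    rows = [dct1(r) for r in block]                          # transform rows
    cols = [dct1([rows[i][j] for i in range(n)]) for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]

# a flat block concentrates all energy in the DC coefficient
flat = [[3.0] * 4 for _ in range(4)]
coeffs = dct2(flat)
print(round(coeffs[0][0], 6))  # → 12.0 (DC term: 3 * 4)
```

Energy compaction of this kind is what makes the subsequent coefficient grouping and trellis search worthwhile.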
Compressed image transmission based on fountain codes
NASA Astrophysics Data System (ADS)
Wu, Jiaji; Wu, Xinhong; Jiao, L. C.
2011-11-01
In this paper, we propose a joint source-channel coding (JSCC) scheme for image transmission over wireless channels. In the scheme, fountain codes are integrated into bit-plane coding for channel coding. Compared to traditional erasure codes for error correction, such as Reed-Solomon codes, fountain codes are rateless and can generate sufficient symbols on the fly. Two schemes are described: equal error protection (EEP) and unequal error protection (UEP), of which the UEP scheme performs better. The proposed scheme can not only adaptively adjust the length of the fountain code according to the channel loss rate, but also reconstruct the image even over a bad channel.
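The rateless property can be illustrated with a toy sketch: each coded symbol is the XOR of a random subset of message bits, and the receiver decodes by Gaussian elimination over GF(2), succeeding once enough symbols arrive regardless of which ones were lost. Real fountain codes (e.g., LT or Raptor codes) use carefully designed degree distributions and cheaper peeling decoders; everything below is a simplified illustration, not the paper's construction.

```python
import random

def fountain_encode(message_bits, n_symbols, seed=1):
    """Rateless toy encoder: each output symbol is (mask, value) where
    value XORs the message bits selected by the random nonempty mask."""
    rng = random.Random(seed)
    k = len(message_bits)
    symbols = []
    for _ in range(n_symbols):
        mask = rng.randrange(1, 1 << k)        # random nonempty subset
        value = 0
        for i in range(k):
            if mask >> i & 1:
                value ^= message_bits[i]
        symbols.append((mask, value))
    return symbols

def fountain_decode(symbols, k):
    """Gaussian elimination over GF(2); returns None if the received
    symbols do not yet determine all k message bits."""
    pivots = {}                                 # pivot bit -> (mask, value)
    for mask, value in symbols:
        while mask:
            p = mask.bit_length() - 1
            if p not in pivots:
                pivots[p] = (mask, value)
                break
            pmask, pval = pivots[p]
            mask ^= pmask                       # eliminate the pivot bit
            value ^= pval
    if len(pivots) < k:
        return None
    bits = [0] * k
    for p in sorted(pivots):                    # back-substitution
        mask, value = pivots[p]
        for j in range(p):
            if mask >> j & 1:
                value ^= bits[j]
        bits[p] = value
    return bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
received = fountain_encode(msg, 3 * len(msg))   # any 3k symbols will do
print(fountain_decode(received, len(msg)) == msg)
```

The sender never needs to know which packets were erased; it simply keeps generating symbols until the receiver has enough.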
Landmine detection using IR image segmentation by means of fractal dimension analysis
NASA Astrophysics Data System (ADS)
Abbate, Horacio A.; Gambini, Juliana; Delrieux, Claudio; Castro, Eduardo H.
2009-05-01
This work is concerned with the detection of buried landmines from long-wave infrared images obtained during the heating or cooling of the soil, followed by a segmentation of those images. The segmentation is performed using a local fractal dimension (LFD) analysis as a feature descriptor. We use two different LFD estimators: the box-counting dimension (BC) and the differential box-counting dimension (DBC). These features are computed on a per-pixel basis, and the set of features is clustered by means of the K-means method. This segmentation technique produces outstanding results, with low computational cost.
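The box-counting estimator used as one of the LFD features can be sketched for a binary image as follows (illustrative; the paper computes it locally, per pixel, over windows rather than globally):

```python
import math

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Box-counting dimension of a binary image (list of 0/1 rows):
    count occupied s-by-s boxes for each size s, then fit the slope of
    log N(s) against log(1/s)."""
    h, w = len(img), len(img[0])
    xs, ys = [], []
    for s in sizes:
        count = 0
        for by in range(0, h, s):
            for bx in range(0, w, s):
                if any(img[y][x]
                       for y in range(by, min(by + s, h))
                       for x in range(bx, min(bx + s, w))):
                    count += 1
        xs.append(math.log(1.0 / s))
        ys.append(math.log(count))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

# sanity check: a completely filled square is two-dimensional
filled = [[1] * 32 for _ in range(32)]
print(round(box_counting_dimension(filled), 2))  # → 2.0
```

A thin curve in the same image would instead produce an estimate near 1, which is what makes the dimension useful as a texture feature.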
Badea, Alexandru Florin; Lupsor Platon, Monica; Crisan, Maria; Cattani, Carlo; Badea, Iulia; Pierro, Gaetano; Sannino, Gianpaolo; Baciut, Grigore
2013-01-01
The geometry of some medical images of tissues, obtained by elastography and ultrasonography, is characterized in terms of complexity parameters such as the fractal dimension (FD). It is well known that any image contains very subtle details that are not easily detectable by the human eye. However, in many cases, such as medical imaging diagnosis, these details are very important, since they might contain hidden information about the possible existence of pathological lesions such as tissue degeneration, inflammation, or tumors. Therefore, an automatic method of analysis could be an expedient tool for physicians, helping them reach a correct diagnosis. Fractal analysis is of great importance for a quantitative evaluation of "real-time" elastography, a procedure considered operator dependent in current clinical practice. Mathematical analysis reveals significant discrepancies between normal and pathological image patterns. The main objective of our work is to demonstrate the clinical utility of this procedure on an ultrasound image corresponding to a submandibular diffuse pathology. PMID:23762183
Zaia, Annamaria
2015-01-01
Osteoporosis represents a major health condition for our growing elderly population. It accounts for severe morbidity and increased mortality in postmenopausal women, and it is becoming an emerging health concern even in aging men. Screening of the population at risk for bone degeneration, and treatment assessment of osteoporotic patients to prevent bone fragility fractures, are useful tools to improve quality of life in the elderly and to lighten the related socio-economic impact. Bone mineral density (BMD), estimated by means of dual-energy X-ray absorptiometry, is normally used in clinical practice for osteoporosis diagnosis. Nevertheless, BMD alone is not a good predictor of fracture risk. From a clinical point of view, bone microarchitecture seems to be an intriguing aspect for characterizing bone alteration patterns in aging and pathology. The spread of medical imaging techniques into clinical practice and the impressive advances in information technologies, together with enhanced computational power, have promoted the proliferation of new methods to assess changes of trabecular bone architecture (TBA) during aging and osteoporosis. Magnetic resonance imaging (MRI) has recently arisen as a useful tool to measure bone structure in vivo. In particular, high-resolution MRI techniques have introduced new perspectives for TBA characterization by non-invasive, non-ionizing methods. However, texture analysis methods have not found favor with clinicians, as they produce quite a few parameters whose interpretation is difficult. The introduction into the biomedical field of paradigms such as the theory of complexity, chaos, and fractals suggests new approaches and provides innovative tools to develop computerized methods that, by producing a limited number of parameters sensitive to pathology onset and progression, would speed up their application in clinical practice. Complexity of living beings and fractality of several physio-anatomic structures suggest
Advanced Imaging Optics Utilizing Wavefront Coding.
Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen
2015-06-01
Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.
An edge preserving differential image coding scheme
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1992-01-01
Differential encoding techniques are fast and easy to implement. However, a major problem with the use of differential encoding for images is the rapid edge degradation encountered when using such systems. This makes differential encoding techniques of limited utility, especially when coding medical or scientific images, where edge preservation is of utmost importance. A simple, easy to implement differential image coding system with excellent edge preservation properties is presented. The coding system can be used over variable rate channels, which makes it especially attractive for use in the packet network environment.
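Basic differential encoding (DPCM) with closed-loop prediction, the family of techniques this scheme builds on, can be sketched as follows. The previous-sample predictor and uniform quantizer below are illustrative assumptions; the paper's edge-preserving mechanism is not modeled.

```python
def dpcm_encode(pixels, step):
    """Closed-loop DPCM: predict each pixel from the previous
    reconstructed value and quantize the prediction residual."""
    indices, recon = [], []
    prev = 0
    for p in pixels:
        residual = p - prev
        q = round(residual / step)      # uniform quantizer index
        indices.append(q)
        prev = prev + q * step          # mirror the decoder's state
        recon.append(prev)
    return indices, recon

def dpcm_decode(indices, step):
    out, prev = [], 0
    for q in indices:
        prev = prev + q * step
        out.append(prev)
    return out

pixels = [10, 12, 15, 40, 41, 39, 90, 92]
idx, _ = dpcm_encode(pixels, step=4)
recon = dpcm_decode(idx, step=4)
# closed-loop prediction keeps the error within half a quantizer step
print(max(abs(a - b) for a, b in zip(pixels, recon)))  # → 2
```

The sharp jumps in the sample data (15 to 40, 39 to 90) are where a fixed-rate differential coder without such feedback would smear edges, which is the failure mode the paper addresses.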
An edge preserving differential image coding scheme
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
Differential encoding techniques are fast and easy to implement. However, a major problem with the use of differential encoding for images is the rapid edge degradation encountered when using such systems. This makes differential encoding techniques of limited utility, especially when coding medical or scientific images, where edge preservation is of utmost importance. We present a simple, easy to implement differential image coding system with excellent edge preservation properties. The coding system can be used over variable rate channels, which makes it especially attractive for use in the packet network environment.
Bitplane Image Coding With Parallel Coefficient Processing.
Auli-Llinas, Francesc; Enfedaque, Pablo; Moure, Juan C; Sanchez, Victor
2016-01-01
Image coding systems have been traditionally tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in its codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data, but most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to their inherently sequential coding task. This paper presents bitplane image coding with parallel coefficient (BPC-PaCo) processing, a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been reformulated. The experimental results suggest that the penalty in coding performance of BPC-PaCo with respect to the traditional strategies is almost negligible. PMID:26441420
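Bitplane decomposition itself, the data layout such coders scan, can be sketched as follows (illustrative; BPC-PaCo's context formation, probability model, and arithmetic coder are not shown):

```python
def to_bitplanes(values, nbits=8):
    """Split 8-bit samples into bitplanes, most significant first.
    Plane b holds bit (nbits-1-b) of every sample."""
    return [[(v >> (nbits - 1 - b)) & 1 for v in values]
            for b in range(nbits)]

def from_bitplanes(planes):
    """Reassemble samples from MSB-first bitplanes."""
    nbits = len(planes)
    return [sum(bit << (nbits - 1 - b) for b, bit in enumerate(col))
            for col in zip(*planes)]

samples = [0, 37, 128, 255, 64]
planes = to_bitplanes(samples)
# coding the planes in MSB-to-LSB order yields an embedded bitstream:
# truncating after the first few planes gives a coarse reconstruction
print(from_bitplanes(planes))  # → [0, 37, 128, 255, 64]
```

Because every sample contributes one bit to each plane, a plane is a natural unit for the lockstep, coefficient-parallel processing the paper proposes.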
Coding and transmission of subband coded images on the Internet
NASA Astrophysics Data System (ADS)
Wah, Benjamin W.; Su, Xiao
2001-09-01
Subband-coded images can be transmitted in the Internet using either the TCP or the UDP protocol. Delivery by TCP gives superior decoding quality but with very long delays when the network is unreliable, whereas delivery by UDP has negligible delays but with degraded quality when packets are lost. Although images are currently delivered over the Internet by TCP, we study in this paper the use of UDP to deliver multi-description reconstruction-based subband-coded images. First, in order to facilitate recovery from UDP packet losses, we propose a joint sender-receiver approach for designing an optimized reconstruction-based subband transform (ORB-ST) in multi-description coding (MDC). Second, we carefully evaluate the delay-quality trade-offs between the TCP delivery of single-description coded (SDC) images and the UDP and combined TCP/UDP delivery of MDC images. Experimental results show that our proposed ORB-ST performs well in real Internet tests, and that UDP and combined TCP/UDP delivery of MDC images provide a range of attractive alternatives to TCP delivery.
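A minimal illustration of the multi-description idea: split the samples into two polyphase descriptions and, when one description's packets are lost, estimate its samples from the surviving neighbours. This even/odd split and averaging reconstruction is a toy stand-in, not the paper's optimized ORB-ST design.

```python
def split_descriptions(samples):
    """Polyphase MDC sketch: description 0 holds even-indexed samples,
    description 1 holds odd-indexed samples."""
    return samples[0::2], samples[1::2]

def reconstruct(d0, d1, total_len):
    """Merge both descriptions; if one arrives as None (all its UDP
    packets lost), estimate its samples by averaging the neighbours."""
    out = [0.0] * total_len
    if d0 is not None:
        out[0::2] = d0
    if d1 is not None:
        out[1::2] = d1
    missing = 1 if d1 is None else (0 if d0 is None else None)
    if missing is not None:
        for i in range(missing, total_len, 2):
            left = out[i - 1] if i > 0 else out[i + 1]
            right = out[i + 1] if i + 1 < total_len else out[i - 1]
            out[i] = (left + right) / 2.0
    return out

signal = [2.0 * i for i in range(10)]        # a smooth ramp
d0, d1 = split_descriptions(signal)
full = reconstruct(d0, d1, len(signal))      # both received: exact
half = reconstruct(d0, None, len(signal))    # one description lost
print(max(abs(a - b) for a, b in zip(signal, half)))  # → 2.0
```

Each description alone yields a usable, lower-quality image, which is exactly the property that makes UDP delivery without retransmission viable.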
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Sabyasachi; Das, Nandan K.; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.
2014-02-01
The objective of the present work is to diagnose pre-cancer by wavelet transforms and multifractal detrended fluctuation analysis (MFDFA) of DIC images of normal tissue and different grades of cancerous tissue. Our DIC imaging and fluctuation analysis methods (discrete and continuous wavelet transforms, MFDFA) confirm the ability to diagnose and detect early-stage cancer in cervical tissue.
Azevedo-Marques, P M; Spagnoli, H F; Frighetto-Pereira, L; Menezes-Reis, R; Metzner, G A; Rangayyan, R M; Nogueira-Barbosa, M H
2015-08-01
Fractures with partial collapse of vertebral bodies are generically referred to as "vertebral compression fractures" or VCFs. VCFs can have different etiologies, including trauma, bone failure related to osteoporosis, or metastatic cancer affecting bone. VCFs related to osteoporosis (benign fractures) and to cancer (malignant fractures) are commonly found in the elderly population. In the clinical setting, the differentiation between benign and malignant fractures is complex and difficult. This paper presents a study aimed at developing a system for computer-aided diagnosis to help in the differentiation between malignant and benign VCFs in magnetic resonance imaging (MRI). We used T1-weighted MRI of the lumbar spine in the sagittal plane. Images from 47 consecutive patients (31 women, 16 men, mean age 63 years) were studied, including 19 malignant fractures and 54 benign fractures. Spectral and fractal features were extracted from manually segmented images of 73 vertebral bodies with VCFs. The classification of malignant vs. benign VCFs was performed using the k-nearest neighbor classifier with the Euclidean distance. The results show that combinations of features derived from Fourier and wavelet transforms, together with the fractal dimension, were able to achieve correct classification rates of up to 94.7%, with an area under the receiver operating characteristic curve of up to 0.95. PMID:26736364
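The classification stage, k-nearest neighbors with Euclidean distance, can be sketched as follows; the toy 2-D features below stand in for the paper's spectral and fractal features and are purely illustrative.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """k-nearest-neighbour classification with Euclidean distance.
    train is a list of (feature_vector, label) pairs."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])  # majority vote
    return votes.most_common(1)[0][0]

# hypothetical 2-D feature space standing in for (spectral, fractal)
train = [((1.0, 1.0), "benign"), ((1.2, 0.9), "benign"),
         ((0.9, 1.1), "benign"), ((3.0, 3.2), "malignant"),
         ((3.1, 2.9), "malignant")]
print(knn_classify(train, (1.1, 1.0)))  # → benign
```

With only two classes and odd k, the majority vote is always decisive, which is one reason k-NN is a common baseline for such small clinical datasets.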
Image content authentication based on channel coding
NASA Astrophysics Data System (ADS)
Zhang, Fan; Xu, Lei
2008-03-01
Content authentication determines whether an image has been tampered with and, if necessary, locates malicious alterations made to the image. Authentication of a still image or a video is motivated by the recipient's interest, and its principle is that a receiver must be able to identify the source of the document reliably. Several techniques and concepts based on data hiding or steganography have been designed as means for image authentication. This paper presents a color image authentication algorithm based on convolutional coding. The high bits of the color digital image are coded by convolutional codes for tamper detection and localization, while the authentication messages are hidden in the low bits of the image in order to keep the authentication invisible. All communication channels are subject to errors introduced by additive Gaussian noise in their environment. Such data perturbations cannot be eliminated, but their effect can be minimized by the use of forward error correction (FEC) techniques in the transmitted data stream and by decoders in the receiving system that detect and correct bits in error. The message of each pixel is convolutionally encoded; after parity check and block interleaving, the redundant bits are embedded in the image offset. Tampering can be detected and restored without accessing the original image.
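A minimal convolutional encoder of the kind such a scheme relies on can be sketched as follows. The rate-1/2, constraint-length-3 code with generators 7 and 5 (octal) is the textbook choice and an assumption here; the paper does not specify its code parameters.

```python
def parity(x):
    """Parity (XOR of all bits) of an integer."""
    return bin(x).count("1") & 1

def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3: each input
    bit produces two output bits from the tapped shift register."""
    reg = 0
    out = []
    for b in bits:
        reg = ((reg << 1) | b) & 0b111   # shift the new bit in
        out.append(parity(reg & g1))     # first parity stream
        out.append(parity(reg & g2))     # second parity stream
    return out

print(conv_encode([1, 0, 1, 1]))  # → [1, 1, 1, 0, 0, 0, 0, 1]
```

Because every input bit influences several output bits, a Viterbi decoder at the receiver can both detect and localize bit errors, which is what the authentication scheme exploits for tamper localization.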
NASA Astrophysics Data System (ADS)
Athanasopoulou, Labrini; Athanasopoulos, Stavros; Karamanos, Kostas; Almirantis, Yannis
2010-11-01
Statistical methods, including block entropy based approaches, have already been used in the study of long-range features of genomic sequences seen as symbol series, either considering the full alphabet of the four nucleotides or the binary purine or pyrimidine character set. Here we explore the alternation of short protein-coding segments with long noncoding spacers in entire chromosomes, focusing on the scaling properties of block entropy. In previous studies, it has been shown that the sizes of noncoding spacers follow power-law-like distributions in most chromosomes of eukaryotic organisms from distant taxa. We have developed a simple evolutionary model based on well-known molecular events (segmental duplications followed by elimination of most of the duplicated genes) which reproduces the observed linearity in log-log plots. The scaling properties of block entropy H(n) have been studied in several works. Their findings suggest that linearity in semilogarithmic scale characterizes symbol sequences which exhibit fractal properties and long-range order, while this linearity has been shown in the case of the logistic map at the Feigenbaum accumulation point. The present work starts with the observation that the block entropy of the Cantor-like binary symbol series scales in a similar way. Then, we perform the same analysis for the full set of human chromosomes and for several chromosomes of other eukaryotes. A similar but less extended linearity in semilogarithmic scale, indicating fractality, is observed, while randomly formed surrogate sequences clearly lack this type of scaling. Genomic sequences always present entropy values much lower than their random surrogates. Symbol sequences produced by the aforementioned evolutionary model follow the scaling found in genomic sequences, thus corroborating the conjecture that “segmental duplication-gene elimination” dynamics may have contributed to the observed long rangeness in the coding or noncoding alternation in
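The sliding-window block entropy H(n) whose scaling is studied above can be computed as follows (illustrative; the symbol sequences here are toy stand-ins for the binary coding/noncoding series of the paper):

```python
import math
from collections import Counter

def block_entropy(seq, n):
    """Shannon entropy (bits) of the distribution of length-n blocks,
    counted with a sliding window over a symbol string."""
    counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    total = sum(counts.values())
    return sum(-(c / total) * math.log2(c / total)
               for c in counts.values())

print(block_entropy("0" * 40, 3))              # → 0.0 (no uncertainty)
alternating = "01" * 20
print(round(block_entropy(alternating, 3), 2))  # → 1.0 (two equiprobable blocks)
```

For a random binary sequence H(n) grows linearly in n, whereas the fractal, long-range-ordered sequences discussed above show the much slower, logarithm-like growth that appears linear on a semilogarithmic plot.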
Evaluation of super-resolution imager with binary fractal test target
NASA Astrophysics Data System (ADS)
Landeau, Stéphane
2014-10-01
Today, new generations of powerful non-linear image processing are used for real-time super-resolution or noise reduction. Optronic imagers with such features are becoming difficult to assess, because spatial resolution and sensitivity now depend on scene content. Many algorithms include a regularization process, which usually reduces image complexity to enhance spread edges or contours; small but important scene details can then be deleted by this kind of processing. In this paper, a binary fractal test target is presented, with a structured clutter pattern and an interesting multi-scale self-similarity property. The apparent structured clutter of this test target offers a trade-off between white noise, unlikely in real scenes, and very structured targets such as MTF targets. Together with the fractal design of the target, an assessment method has been developed to automatically evaluate the non-linear effects on the image acquired and processed by the imager. The calculated figure of merit is designed to be directly comparable to the linear Fourier MTF: the Haar wavelet elements distributed spatially and at different scales on the target are treated as analogues of the sine Fourier cycles at different frequencies. The probability of correct resolution indicates the ability to read the correct Haar contrast among all Haar wavelet elements, with a position constraint. To validate the method, two imager types were simulated: a well-sampled linear system and an under-sampled one, each coupled with super-resolution or noise-reduction algorithms. The influence of the target contrast on the figures of merit is analyzed. Finally, the possible introduction of this new figure of merit into existing analytical range performance models, such as TRM4 (Fraunhofer IOSB) or NVIPM (NVESD), is discussed. Benefits and limitations of the method are also compared to the TOD (TNO) evaluation method.
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.; Emerson, Charles W.; Lam, Nina Siu-Ngan; Laymon, Charles A.
1997-01-01
The Image Characterization And Modeling System (ICAMS) is a public domain software package designed to provide scientists with innovative spatial analytical tools to visualize, measure, and characterize landscape patterns, so that environmental conditions or processes can be assessed and monitored more effectively. In this study, ICAMS has been used to evaluate how changes in fractal dimension, as a landscape characterization index, and resolution are related to differences in Landsat images collected at different dates for the same area. Landsat Thematic Mapper (TM) data obtained in May and August 1993 over a portion of the Great Basin Desert in eastern Nevada were used for analysis. These data represent contrasting periods of peak "green-up" and "dry-down" for the study area. The TM data sets were converted into Normalized Difference Vegetation Index (NDVI) images to expedite analysis of differences in fractal dimension between the two dates. These NDVI images were also resampled to resolutions of 60, 120, 240, 480, and 960 meters from the original 30-meter pixel size, to permit an assessment of how fractal dimension varies with spatial resolution. Tests of fractal dimension for the two dates at various pixel resolutions show that the D values in the August image become increasingly complex as pixel size increases to 480 meters. The D values in the May image show an even more complex relationship to pixel size than that expressed in the August image. The fractal dimension for a difference image computed for the May and August dates increases with pixel size up to a resolution of 120 meters, and then declines with increasing pixel size. This means that the greatest complexity in the difference images occurs around a resolution of 120 meters, which is analogous to the operational domain of changes in vegetation and snow cover that constitute the differences between the two dates.
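The two preprocessing steps described, NDVI conversion and coarsening to larger pixel sizes, can be sketched as follows. The 4x4 toy bands are illustrative; the study worked with 30 m TM pixels aggregated to 60-960 m.

```python
def ndvi(nir, red):
    """Per-pixel Normalized Difference Vegetation Index:
    (NIR - Red) / (NIR + Red)."""
    return [[(n - r) / (n + r) if (n + r) else 0.0
             for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir, red)]

def block_mean_resample(img, factor):
    """Aggregate pixels into factor-by-factor block means, mimicking
    the coarsening from 30 m to 60, 120, ... 960 m resolution."""
    h, w = len(img), len(img[0])
    return [[sum(img[y][x]
                 for y in range(by, by + factor)
                 for x in range(bx, bx + factor)) / factor ** 2
             for bx in range(0, w, factor)]
            for by in range(0, h, factor)]

nir = [[0.6, 0.6, 0.2, 0.2],
       [0.6, 0.6, 0.2, 0.2],
       [0.5, 0.5, 0.3, 0.3],
       [0.5, 0.5, 0.3, 0.3]]
red = [[0.1, 0.1, 0.2, 0.2],
       [0.1, 0.1, 0.2, 0.2],
       [0.1, 0.1, 0.1, 0.1],
       [0.1, 0.1, 0.1, 0.1]]
v = ndvi(nir, red)
coarse = block_mean_resample(v, 2)      # halve the resolution
print(round(coarse[0][0], 3))  # → 0.714
```

The fractal dimension of the NDVI surface is then estimated at each such resolution, which is how the study traces the dependence of D on pixel size.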
NASA Astrophysics Data System (ADS)
Popescu, Dan P.; Flueraru, Costel; Mao, Youxin; Chang, Shoude; Sowa, Michael G.
2010-02-01
Two methods for analyzing OCT images of arterial tissues are tested. These methods are applied to two types of samples: segments of arteries collected from atherosclerosis-prone Watanabe heritable hyperlipidemic rabbits and pieces of porcine left descending coronary arteries without atherosclerosis. The first method is based on finding the attenuation coefficients for the OCT signal that propagates through various regions of the tissue. The second method involves calculating the fractal dimensions of the OCT signal textures in the regions of interest identified within the acquired images; a box-counting algorithm is used for calculating the fractal dimensions. Both parameters, the attenuation coefficient and the fractal dimension, correlate very well with the anatomical features of both types of samples.
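The first method, extracting an attenuation coefficient from the OCT depth signal, can be illustrated with a log-linear fit under a single-scattering model I(z) = I0·exp(-mu·z). The model and the synthetic A-scan below are assumptions for demonstration; the paper's exact fitting procedure is not reproduced here.

```python
import math

def fit_attenuation(depths, intensities):
    """Estimate the attenuation coefficient mu from depth samples,
    assuming I(z) = I0 * exp(-mu * z): a least-squares straight-line
    fit of log I against depth z has slope -mu."""
    logs = [math.log(i) for i in intensities]
    n = len(depths)
    mz = sum(depths) / n
    ml = sum(logs) / n
    num = sum((z - mz) * (l - ml) for z, l in zip(depths, logs))
    den = sum((z - mz) ** 2 for z in depths)
    return -num / den

# synthetic noise-free A-scan with mu = 2.5 mm^-1
depths = [0.1 * i for i in range(20)]          # depth in mm
scan = [math.exp(-2.5 * z) for z in depths]
print(round(fit_attenuation(depths, scan), 3))  # → 2.5
```

On real A-scans the fit is applied region by region, so that lipid-rich and fibrous tissue can be distinguished by their different mu values.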
Medical image retrieval and analysis by Markov random fields and multi-scale fractal dimension.
Backes, André Ricardo; Gerhardinger, Leandro Cavaleri; Batista Neto, João do Espírito Santo; Bruno, Odemir Martinez
2015-02-01
Many Content-based Image Retrieval (CBIR) systems and image analysis tools employ color, shape and texture (in a combined fashion or not) as attributes, or signatures, to retrieve images from databases or to perform image analysis in general. Among these attributes, texture has turned out to be the most relevant, as it allows the identification of a larger number of images of a different nature. This paper introduces a novel signature which can be used for image analysis and retrieval. It combines texture with complexity extracted from objects within the images. The approach consists of a texture segmentation step, modeled as a Markov Random Field process, followed by the estimation of the complexity of each computed region. The complexity is given by a Multi-scale Fractal Dimension. Experiments have been conducted using an MRI database in both pattern recognition and image retrieval contexts. The results show the accuracy of the proposed method in comparison with other traditional texture descriptors and also indicate how the performance changes as the level of complexity is altered. PMID:25586375
Medical image retrieval and analysis by Markov random fields and multi-scale fractal dimension
NASA Astrophysics Data System (ADS)
Backes, André Ricardo; Cavaleri Gerhardinger, Leandro; do Espírito Santo Batista Neto, João; Martinez Bruno, Odemir
2015-02-01
Many Content-based Image Retrieval (CBIR) systems and image analysis tools employ color, shape and texture (in a combined fashion or not) as attributes, or signatures, to retrieve images from databases or to perform image analysis in general. Among these attributes, texture has turned out to be the most relevant, as it allows the identification of a larger number of images of a different nature. This paper introduces a novel signature which can be used for image analysis and retrieval. It combines texture with complexity extracted from objects within the images. The approach consists of a texture segmentation step, modeled as a Markov Random Field process, followed by the estimation of the complexity of each computed region. The complexity is given by a Multi-scale Fractal Dimension. Experiments have been conducted using an MRI database in both pattern recognition and image retrieval contexts. The results show the accuracy of the proposed method in comparison with other traditional texture descriptors and also indicate how the performance changes as the level of complexity is altered.
NASA Astrophysics Data System (ADS)
Yang, Y.; Li, H. T.; Han, Y. S.; Gu, H. Y.
2015-06-01
Image segmentation is the foundation of further object-oriented image analysis, understanding and recognition. It is one of the key technologies in high resolution remote sensing applications. In this paper, a new fast image segmentation algorithm for high resolution remote sensing imagery is proposed, based on graph theory and the fractal net evolution approach (FNEA). First, the image is modelled as a weighted undirected graph, where nodes correspond to pixels and edges connect adjacent pixels. An initial object layer can be obtained efficiently from graph-based segmentation, which runs in time nearly linear in the number of image pixels. FNEA then starts from the initial object layer and pairwise-merges neighbouring objects with the aim of minimizing the resulting summed heterogeneity. Furthermore, according to the character of different features in high resolution remote sensing images, three different merging criteria for image objects, based on spectral and spatial information, are adopted. Finally, in a comparison with the commercial remote sensing software eCognition, the experimental results demonstrate that the efficiency of the algorithm is significantly improved, while the result maintains good feature boundaries.
Mossotti, Victor G.; Eldeeb, A. Raouf
2000-01-01
Turcotte (1997) and Barton and La Pointe (1995) have identified many potential uses for the fractal dimension in physicochemical models of surface properties. The image-analysis program described in this report is an extension of the program set MORPH-I (Mossotti and others, 1998), which provided the fractal analysis of electron-microscope images of pore profiles (Mossotti and Eldeeb, 1992). MORPH-II, an integration of the modified kernel of MORPH-I with image calibration and editing facilities, was designed to measure the fractal dimension of the exposed surfaces of stone specimens as imaged in cross section in an electron microscope.
Computing Challenges in Coded Mask Imaging
NASA Technical Reports Server (NTRS)
Skinner, Gerald
2009-01-01
This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to build the telescope (i.e., for wide fields of view, energies too high for focusing optics or too low for Compton/tracker techniques, combined with very good angular resolution). The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, along with a chart of the types of position-sensitive detectors used in coded mask telescopes. Slides describe the mechanism of recovering an image from the masked pattern, including correlation with the mask pattern. The matrix approach is reviewed, and other approaches to image reconstruction are described. The presentation also reviews the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of EXIST/HET with SWIFT/BAT, and details of the EXIST/HET design.
ERIC Educational Resources Information Center
Esbenshade, Donald H., Jr.
1991-01-01
Develops the idea of fractals through a laboratory activity that calculates the fractal dimension of ordinary white bread. Extends use of the fractal dimension to compare other complex structures as other breads and sponges. (MDH)
Coded Access Optical Sensor (CAOS) Imager
NASA Astrophysics Data System (ADS)
Riza, N. A.; Amin, M. J.; La Torre, J. P.
2015-04-01
High spatial resolution, low inter-pixel crosstalk, high signal-to-noise ratio (SNR), adequate application dependent speed, economical and energy efficient design are common goals sought after for optical image sensors. In optical microscopy, overcoming the diffraction limit in spatial resolution has been achieved using materials chemistry, optimal wavelengths, precision optics and nanomotion-mechanics for pixel-by-pixel scanning. Imagers based on pixelated imaging devices such as CCD/CMOS sensors avoid pixel-by-pixel scanning as all sensor pixels operate in parallel, but these imagers are fundamentally limited by inter-pixel crosstalk, in particular with interspersed bright and dim light zones. In this paper, we propose an agile pixel imager sensor design platform called Coded Access Optical Sensor (CAOS) that can greatly alleviate the mentioned fundamental limitations, empowering smart optical imaging for particular environments. Specifically, this novel CAOS imager engages an application dependent electronically programmable agile pixel platform using hybrid space-time-frequency coded multiple-access of the sampled optical irradiance map. We demonstrate the foundational working principles of the first experimental electronically programmable CAOS imager using hybrid time-frequency multiple access sampling of a known high contrast laser beam irradiance test map, with the CAOS instrument based on a Texas Instruments (TI) Digital Micromirror Device (DMD). This CAOS instrument provides imaging data that exhibits 77 dB electrical SNR and the measured laser beam image irradiance specifications closely match (i.e., within 0.75% error) the laser manufacturer provided beam image irradiance radius numbers. The proposed CAOS imager can be deployed in many scientific and non-scientific applications where pixel agility via electronic programmability can pull out desired features in an irradiance map subject to the CAOS imaging operation.
Image coding compression based on DCT
NASA Astrophysics Data System (ADS)
Feng, Fei; Liu, Peixue; Jiang, Baohua
2012-04-01
With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are appreciated, but they occupy more storage space and consume more bandwidth when transferred over the Internet. It is therefore necessary to study image compression technology. At present, many image compression algorithms are applied in networks, and image compression standards have been established. This dissertation presents an analysis of the DCT. First, the principle of the DCT is presented, a technology widely used to realize image compression. Second, a deeper understanding of the DCT is developed using Matlab, covering the process of image compression based on the DCT and an analysis of Huffman coding. Third, image compression based on the DCT is demonstrated in Matlab, with an analysis of the quality of the compressed picture. The DCT is certainly not the only algorithm for image compression; algorithms producing higher-quality compressed images will surely appear, and image compression technology will be widely used in networks and communications in the future.
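The Huffman coding step mentioned in the analysis can be sketched as follows (an illustrative Python version, independent of the Matlab workflow described above):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table {symbol: bitstring} for `data` by
    repeatedly merging the two lowest-weight subtrees."""
    freq = Counter(data)
    # heap entries: [weight, [symbol, code], [symbol, code], ...]
    heap = [[w, [sym, ""]] for sym, w in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]     # left branch prefixes a 0
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]     # right branch prefixes a 1
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

def huffman_encode(data, codes):
    return "".join(codes[s] for s in data)

def huffman_decode(bitstring, codes):
    rev = {v: k for k, v in codes.items()}
    out, cur = [], ""
    for b in bitstring:                 # prefix-free: greedy match works
        cur += b
        if cur in rev:
            out.append(rev[cur])
            cur = ""
    return "".join(out)

text = "aaaabbbcc d"
codes = huffman_codes(text)
bits = huffman_encode(text, codes)
print(huffman_decode(bits, codes) == text)  # → True
```

Frequent symbols receive shorter codewords, so the 11-symbol example compresses to 24 bits instead of the 33 a fixed 3-bit code would need.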
JPEG2000 still image coding quality.
Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei
2013-10-01
This work compares the image quality delivered by two popular JPEG2000 programs. Two medical image compression implementations are both coded using JPEG2000, but they differ in interface, convenience, computation speed, and in characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The quality of compressed medical images from two image compression programs, Apollo and JJ2000, was evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at compression ratios ranging from 10:1 to 100:1. The quality of the reconstructed images was then evaluated using five objective metrics, and the Spearman rank correlation coefficients were measured under every metric for the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression. PMID:23589187
Coded-aperture imaging in nuclear medicine
NASA Astrophysics Data System (ADS)
Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.
1989-11-01
Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.
Transform coding of stereo image residuals.
Moellenhoff, M S; Maier, M W
1998-01-01
Stereo image compression is of growing interest because of new display technologies and the needs of telepresence systems. Compared to monoscopic image compression, stereo image compression has received much less attention. A variety of algorithms have appeared in the literature that make use of the cross-view redundancy in the stereo pair. Many of these use the framework of disparity-compensated residual coding, but concentrate on the disparity compensation process rather than the post-compensation coding process. This paper studies specialized coding methods for the residual image produced by disparity compensation. The algorithms make use of theoretically expected and experimentally observed characteristics of the disparity-compensated stereo residual to select transforms and quantization methods. Performance is evaluated on mean squared error (MSE) and a stereo-unique metric based on image registration. Exploiting the directional characteristics in a discrete cosine transform (DCT) framework gives the best performance below 0.75 b/pixel for 8-b gray-scale imagery and below 2 b/pixel for 24-b color imagery. In the wavelet algorithm, roughly a 50% reduction in bit rate is possible by encoding only the vertical channel, where much of the stereo information is contained. The proposed algorithms do not incur substantial computational burden beyond that needed for any disparity-compensated residual algorithm. PMID:18276294
A Germanium-Based, Coded Aperture Imager
Ziock, K P; Madden, N; Hull, E; William, C; Lavietes, T; Cork, C
2001-10-31
We describe a coded-aperture based, gamma-ray imager that uses a unique hybrid germanium detector system. A planar germanium strip detector, eleven millimeters thick, is followed by a coaxial detector. The 19 x 19 strip detector (2 mm pitch) determines the location and energy of low-energy events. The location of a high-energy event is determined from the location of its Compton scatter in the planar detector, and its energy from the sum of the coaxial and planar energies. With this geometry, we obtain useful quantum efficiency in a position-sensitive mode out to 500 keV. The detector is used with a 19 x 17 URA coded aperture to obtain spectrally resolved images in the gamma-ray band. We discuss the performance of the planar detector and the hybrid system, and present images taken of laboratory sources.
Finding maximum JPEG image block code size
NASA Astrophysics Data System (ADS)
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding that aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is given as an upper bound on the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraint is based on some interesting properties of the Huffman code table, and it is used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits of space is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
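The run-level pair sequences being searched over can be made concrete with a short sketch of JPEG-style AC symbol formation, including the (15, 0) zero-run-length (ZRL) marker for runs longer than 15 and the (0, 0) end-of-block (EOB) marker. This is a generic illustration of the symbol structure, not the authors' branch-and-bound code.

```python
def ac_run_level_pairs(ac):
    # Convert a zigzag-ordered list of 63 AC coefficients into JPEG-style
    # (zero-run, level) symbols: runs are capped at 15, a 16-zero run is
    # emitted as the ZRL symbol (15, 0), and trailing zeros collapse into
    # the EOB symbol (0, 0).
    pairs, run = [], 0
    for c in ac:
        if c == 0:
            run += 1
            continue
        while run > 15:            # emit ZRL for each full 16-zero run
            pairs.append((15, 0))
            run -= 16
        pairs.append((run, c))
        run = 0
    pairs.append((0, 0))           # end-of-block
    return pairs

pairs = ac_run_level_pairs([0] * 5 + [3] + [0] * 20 + [-1] + [0] * 36)
# → [(5, 3), (15, 0), (4, -1), (0, 0)]
```

Each symbol is then Huffman-coded together with the amplitude bits of its level, which is what makes the worst-case buffer size a nontrivial search problem.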
NASA Astrophysics Data System (ADS)
Boychuk, T. M.; Bodnar, B. M.; Vatamanesku, L. I.
2012-01-01
For the first time, complex correlation and fractal analysis was used to investigate microscopic images of both tissues and hemangioma liquids. A physical model is proposed for describing the formation of the phase distributions of coherent radiation transformed by optically anisotropic biological structures. The phase maps of laser radiation in the boundary diffraction zone are used as the main information parameter. The results of investigating the interrelation between the correlation parameters (correlation area, asymmetry coefficient, and autocorrelation function excess) and the fractal parameter (dispersion of the logarithmic dependencies of the power spectra) are presented. These parameters characterize the coordinate distributions of phase shifts in the points of laser images of histological sections of hemangioma, hemangioma blood smears, and blood plasma with vascular system pathologies. Diagnostic criteria for hemangioma nascency are determined.
Scalable image coding for interactive image communication over networks
NASA Astrophysics Data System (ADS)
Yoon, Sung H.; Lee, Ji H.; Alexander, Winser E.
2000-12-01
This paper presents a new, scalable coding technique that can be used in interactive image/video communications over the Internet. The proposed technique generates a fully embedded bit stream that provides scalability with high quality for the whole image, and it can be used to implement region-based coding as well. The embedded bit stream comprises a basic layer and many enhancement layers; the enhancement layers add refinement to the quality of the image reconstructed from the basic layer. The proposed coding technique uses multiple quantizers with thresholds (QT) for layering and creates a bit plane for each layer. Each bit plane is then partitioned into sets of small areas that are coded independently. Run-length and entropy coding are applied to each of the sets to provide scalability for the entire image, resulting in high picture quality in the user-specified region of interest (ROI). We tested this technique by applying it to various test images, and the results consistently show a high level of performance.
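The base-plus-enhancement layering can be sketched with a generic embedded quantizer: each layer records where the residual still exceeds a threshold, refines the running reconstruction there, and halves the threshold. The threshold-halving scheme below is an assumed, textbook-style model for illustration, not the authors' QT design.

```python
import numpy as np

def layered_refine(coeffs, n_layers):
    # Base layer plus enhancement layers: each pass moves the running
    # reconstruction by +/- t wherever the residual still reaches the
    # threshold t, records that significance map as one bit plane, then
    # halves t. Decoding any prefix of the layers yields a coarser image.
    t = np.max(np.abs(coeffs)) / 2.0
    recon = np.zeros_like(coeffs, dtype=float)
    layers = []
    for _ in range(n_layers):
        significant = np.abs(coeffs - recon) >= t
        recon = recon + np.sign(coeffs - recon) * t * significant
        layers.append(significant)   # one bit plane per layer
        t /= 2.0
    return layers, recon

coeffs = np.array([0.9, -0.3, 0.05, 0.0])
layers, recon = layered_refine(coeffs, n_layers=6)
```

After n layers the reconstruction error of every coefficient is bounded by max|coeffs| / 2**n, which is the scalability property: more layers, finer quality.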
Coded source imaging simulation with visible light
NASA Astrophysics Data System (ADS)
Wang, Sheng; Zou, Yubin; Zhang, Xueshuang; Lu, Yuanrong; Guo, Zhiyu
2011-09-01
A coded source can increase the neutron flux while maintaining a high L/D ratio, which may benefit a neutron imaging system with a low-yield neutron source. Visible-light coded source imaging (CSI) experiments were carried out to test the physical design and reconstruction algorithms. We used a non-mosaic Modified Uniformly Redundant Array (MURA) mask to project the shadow of black/white samples onto a screen, and a cooled CCD camera to record the image on the screen. Different mask sizes and amplification factors were tested. Correlation, Wiener-filter deconvolution, and Richardson-Lucy maximum-likelihood iteration algorithms were employed to reconstruct the object image from the original projection. The results show that CSI can benefit low-flux neutron imaging with high background noise.
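The encode/decode cycle behind coded source (and coded aperture) imaging can be illustrated in one dimension with a quadratic-residue array: the detector records the cyclic convolution of the source with the mask, and correlating with the balanced decoding array recovers the source. This is a hedged sketch of the correlation reconstruction step only; the paper's non-mosaic MURA mask and 2-D geometry are simplified away.

```python
import numpy as np

def legendre_mask(p):
    # 1-D URA-style mask from quadratic residues mod a prime p with
    # p % 4 == 3: open element (1) where i is a nonzero residue.
    residues = {(i * i) % p for i in range(1, p)}
    return np.array([1 if i in residues else 0 for i in range(p)])

def encode(source, mask):
    # Shadowgram: cyclic convolution of the source with the mask.
    n = len(mask)
    return np.array([sum(source[(k - i) % n] * mask[i] for i in range(n))
                     for k in range(n)])

def decode(shadow, mask):
    # Correlate with the balanced (+1/-1) decoding array; the mask's
    # near-delta cyclic correlation concentrates the source into a peak.
    g = 2 * mask - 1
    n = len(mask)
    return np.array([sum(shadow[(k + i) % n] * g[i] for i in range(n))
                     for k in range(n)])

p = 19                                   # prime, 19 % 4 == 3
mask = legendre_mask(p)
source = np.zeros(p)
source[7] = 5.0                          # point source at position 7
recon = decode(encode(source, mask), mask)
```

The reconstruction peaks at the true source position, with a flat sidelobe floor; Wiener or Richardson-Lucy deconvolution, as in the paper, replaces the plain correlation step when noise modeling is needed.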
Complementary lattice arrays for coded aperture imaging
NASA Astrophysics Data System (ADS)
Ding, Jie; Noshad, Mohammad; Tarokh, Vahid
2016-05-01
In this work, we consider complementary lattice arrays in order to enable a broader range of designs for coded aperture imaging systems. We provide a general framework and methods that generate richer and more flexible designs than existing ones. Besides this, we review and interpret the state-of-the-art uniformly redundant arrays (URA) designs, broaden the related concepts, and further propose some new design methods.
NASA Astrophysics Data System (ADS)
Jia, Peng; Cai, Dongmei; Wang, Dong
2014-11-01
A parallel blind deconvolution algorithm is presented. The algorithm incorporates constraints on the point spread function (PSF) derived from the physical imaging process. Additionally, in order to obtain an effective restored image, the fractal energy ratio is used as an evaluation criterion to estimate image quality. The algorithm is parallelized at fine granularity to increase calculation speed. Results of numerical and real experiments indicate that the algorithm is effective.
A formal link of anticipatory mental imaging with fractal features of biological time
NASA Astrophysics Data System (ADS)
Bounias, Michel; Bonaly, André
2001-06-01
Previous work has supported the proposition that biological organisms are endowed with perceptive functions based on fixed points in mental chaining sequences (Bounias and Bonaly, 1997). Earlier conjectures proposed that memory could be fractal (Dubois, 1990; Bonaly, 1989, 1994) and that biological time, standing at the intersection of the arrows of past and future events, exhibits some similarity with the construction of a Koch-like structure (Bonaly, 2000). A formal examination of the biological system of perception now shows that the perception of time occurs at the intersection of two consecutive fixed-point sequences. Time-flow is therefore mapped by sequences of fixed points, each of which is the convergence value of a sequence of neuronal configurations. Since the latter are indexed by the ordered sequence of closed Poincaré sections determining the physical arrow of time (Bonaly and Bounias, 1995), there exists a surjective Lipschitz-Hölder mapping of physical time onto system-perceived time. The succession of consecutive fixed points of the perceptive sequence in turn constitutes a sequence whose properties account for the apparent continuity of time-perception, while also fulfilling the basic nonlinearity of time as a general parameter. A generator polygon is shown to be constituted by four sides: (i) the interval between two consecutive bursts of perception provides the base, with a projection paralleling the arrow of physical time; (ii) the top is constituted by the sequence of repeated fixed points accounting for the mental image formed from the first burst and maintained up to the formation of the next fixed point; (iii) the first lateral side is the difference (Lk-1 to Lk) between the measures (L) of the neuronal chains leading to the first image a(uk); and (iv) the second lateral side is the difference of measure between Lk and Lk+1. The equation of the system is therefore of the incursive type, and this
NASA Astrophysics Data System (ADS)
Xie, Xufen; Wang, Hongyuan; Zhang, Wei
2015-11-01
This paper deals with the estimation of the in-orbit modulation transfer function (MTF) from a remote sensing image sequence, which is often difficult to measure because of a lack of suitable target images. A model is constructed which combines a fractal Brownian motion model, describing the stochastic fractal characteristics of natural images, with an inverse Fourier transform of an ideal remote sensing image amplitude spectrum. The model is used to decouple the blurring effect from the ideal natural image. A statistical MTF estimation model is then built from the standard deviation of the image sequence amplitude spectra, and the model parameters are estimated under an ergodicity assumption on the remote sensing image sequence. Finally, the results of the statistical MTF estimation method are given and verified. The experimental results demonstrate that the method is practical and effective; the relative deviation at the Nyquist frequency between the edge method and the proposed method is less than 5.74%. The MTF estimation method is applicable to remote sensing image sequences and is not restricted by characteristic targets in the images.
Fast-neutron, coded-aperture imager
NASA Astrophysics Data System (ADS)
Woolf, Richard S.; Phlips, Bernard F.; Hutcheson, Anthony L.; Wulf, Eric A.
2015-06-01
This work discusses a large-scale, coded-aperture imager for fast neutrons, building off a proof-of-concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems using coded-aperture imaging for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2-D location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash digitizing electronics, and digital pulse shape discrimination (PSD) was performed to filter the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with 1.8-μCi and 13-μCi 252Cf neutron/γ sources at three standoff distances of 9, 15 and 26 m (the maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fast-neutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. This shielding configuration led
NASA Astrophysics Data System (ADS)
Feng, Bin; Zhang, Chengshuo; Xu, Baoshu; Shi, Zelin
2015-11-01
Artefacts and noise degrade the decoded image of a wavefront coding infrared imaging system, usually leaving the decoded image inferior to the in-focus image of a conventional infrared imaging system; a previous letter showed that the decoded image fell behind the in-focus infrared image. For comparison, a bar-target experiment at a temperature of 20°C and two groups of outdoor experiments at temperatures of 28°C and 70°C were conducted. The experimental results prove that a wavefront coding infrared imaging system can produce a decoded image closely approximating its corresponding in-focus infrared image.
Dual-sided coded-aperture imager
Ziock, Klaus-Peter
2009-09-22
In a vehicle, a single detector plane simultaneously measures radiation arriving through two coded-aperture masks, one on either side of the detector. To determine which side of the vehicle a source is on, the two shadow masks are inverses of each other, i.e., one is the mask and the other is the anti-mask. All of the collected data is processed through two versions of an image reconstruction algorithm: one treats the data as if it were obtained through the mask, the other as though it were obtained through the anti-mask.
Modulated lapped transforms in image coding
NASA Astrophysics Data System (ADS)
de Queiroz, Ricardo L.; Rao, K. R.
1994-05-01
The class of modulated lapped transforms (MLTs) with extended overlap is investigated for image coding. A finite-length-signal implementation using symmetric extensions is introduced, and human visual sensitivity arrays are computed. Theoretical comparisons with other popular transforms are carried out, and simulations are performed using intraframe coders. Emphasis is given to transmission over packet networks with high rates of data loss. The MLT with overlap factor 2 is shown to be superior in all our tests, with bonus features such as greater robustness against block losses.
NASA Astrophysics Data System (ADS)
Mosquera Lopez, Clara; Agaian, Sos
2013-02-01
Prostate cancer detection and staging are important steps towards patient treatment selection. Advancements in digital pathology allow the application of new quantitative image analysis algorithms for computer-assisted diagnosis (CAD) on digitized histopathology images. In this paper, we introduce a new set of features to automatically grade pathological images using the well-known Gleason grading system. The goal of this study is to classify biopsy images belonging to Gleason patterns 3, 4, and 5 by using a combination of wavelet and fractal features. For image classification we use pairwise coupling Support Vector Machine (SVM) classifiers. The accuracy of the system, which is close to 97%, is estimated through three different cross-validation schemes. The proposed system offers the potential for automating the classification of histological images and supporting prostate cancer diagnosis.
FRACTAL DIMENSION OF GALAXY ISOPHOTES
Thanki, Sandip; Rhee, George; Lepp, Stephen (E-mail: grhee@physics.unlv.edu)
2009-09-15
In this paper we investigate the use of the fractal dimension of galaxy isophotes in galaxy classification. We have applied two different methods for determining fractal dimensions to the isophotes of elliptical and spiral galaxies derived from CCD images. We conclude that the fractal dimension alone is not a reliable tool, but that combined with other parameters in a neural net algorithm the fractal dimension could be of use. In particular, we have used three parameters to segregate the ellipticals and lenticulars from the spiral galaxies in our sample. These three parameters are the correlation fractal dimension D_corr; the difference between the correlation fractal dimension and the capacity fractal dimension, D_corr - D_cap; and the B - V color of the galaxy.
Fractals in biology and medicine
NASA Technical Reports Server (NTRS)
Havlin, S.; Buldyrev, S. V.; Goldberger, A. L.; Mantegna, R. N.; Ossadnik, S. M.; Peng, C. K.; Simons, M.; Stanley, H. E.
1995-01-01
Our purpose is to describe some recent progress in applying fractal concepts to systems of relevance to biology and medicine. We review several biological systems characterized by fractal geometry, with a particular focus on the long-range power-law correlations found recently in DNA sequences containing noncoding material. Furthermore, we discuss the finding that the exponent alpha quantifying these long-range correlations ("fractal complexity") is smaller for coding than for noncoding sequences. We also discuss the application of fractal scaling analysis to the dynamics of heartbeat regulation, and report the recent finding that the normal heart is characterized by long-range "anticorrelations" which are absent in the diseased heart.
ERIC Educational Resources Information Center
Simoson, Andrew J.
2009-01-01
This article presents a fun activity of generating a double-minded fractal image for a linear algebra class once the ideas of rotation and scaling matrices are introduced. In particular, the fractal flip-flops between two words, depending on the level at which the image is viewed. (Contains 5 figures.)
Real-time pseudocolor coding thermal ghost imaging.
Duan, Deyang; Xia, Yunjie
2014-01-01
In this work, a color ghost image of a black-and-white object is obtained by a real-time pseudocolor coding technique that includes equal spatial frequency pseudocolor coding and equal density pseudocolor coding. This method makes the black-and-white ghost image more conducive to observation. Furthermore, since the ghost imaging comes from the intensity cross-correlations of the two beams, ghost imaging with the real-time pseudocolor coding technique is better than classical optical imaging with the same technique in overcoming the effects of light interference. PMID:24561954
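Equal-density pseudocolor coding can be sketched as splitting the grayscale histogram into equal-population intensity bins and painting each bin with a distinct color. The palette and bin count below are arbitrary illustrative choices, not the authors' implementation, which applies the coding within the ghost-imaging correlation measurement itself.

```python
import numpy as np

def equal_density_pseudocolor(img, palette):
    # Split the grayscale histogram into len(palette) bins holding
    # (roughly) equal pixel counts, then map each bin to one RGB color.
    levels = np.quantile(img, np.linspace(0, 1, len(palette) + 1)[1:-1])
    idx = np.searchsorted(levels, img)      # bin index per pixel
    return np.asarray(palette)[idx]

palette = [(0, 0, 255), (0, 255, 0), (255, 0, 0)]   # blue -> green -> red
gray = np.arange(9, dtype=float).reshape(3, 3) / 8.0
rgb = equal_density_pseudocolor(gray, palette)
```

Equal spatial frequency pseudocolor coding would instead assign colors in the transform domain; the histogram-based variant above is the simpler of the two to illustrate.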
The IRMA code for unique classification of medical images
NASA Astrophysics Data System (ADS)
Lehmann, Thomas M.; Schubert, Henning; Keysers, Daniel; Kohnen, Michael; Wein, Berthold B.
2003-05-01
Modern communication standards such as Digital Imaging and Communications in Medicine (DICOM) include non-image data for a standardized description of study, patient, or technical parameters. However, these tags are rather roughly structured, ambiguous, and often optional. In this paper, we present a mono-hierarchical multi-axial classification code for medical images and emphasize its advantages for content-based image retrieval in medical applications (IRMA). Our so-called IRMA coding system consists of four axes with three to four positions each, each position in {0,...,9,a,...,z}, where "0" denotes "unspecified" and determines the end of a path along an axis. In particular, the technical code (T) describes the imaging modality; the directional code (D) models body orientations; the anatomical code (A) refers to the body region examined; and the biological code (B) describes the biological system examined. Hence, the entire code results in a character string of not more than 13 characters (IRMA: TTTT - DDD - AAA - BBB). The code can easily be extended by introducing characters in certain code positions, e.g., if new modalities are introduced. In contrast to other approaches, mixtures of one- and two-literal code positions are avoided, which simplifies automatic code processing. Furthermore, the IRMA code avoids ambiguities resulting from overlapping code elements within the same level. Although this code was originally designed to be used in the IRMA project, other uses of it are welcome.
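The fixed TTTT-DDD-AAA-BBB layout makes such codes easy to assemble and validate mechanically. The helper below is a hypothetical illustration (the function name and the trailing-"0" padding convention are assumptions drawn from the abstract's description of "0" as the unspecified end-of-path marker), not part of the IRMA project.

```python
import re

# Hypothetical helper, not from the IRMA project: assemble and validate
# the 13 code characters of an IRMA code in four hyphen-separated axes,
# padding unspecified trailing positions with "0".
IRMA_RE = re.compile(r"^[0-9a-z]{4}-[0-9a-z]{3}-[0-9a-z]{3}-[0-9a-z]{3}$")

def make_irma_code(t, d, a, b):
    # Right-pad each axis with "0" (unspecified) to its fixed width.
    code = f"{t:0<4}-{d:0<3}-{a:0<3}-{b:0<3}"
    if not IRMA_RE.match(code):
        raise ValueError(f"invalid IRMA code: {code!r}")
    return code

code = make_irma_code("11", "1", "62", "1")
```

Because every axis has a fixed width and a single-character alphabet, a regular expression suffices for validation, which is exactly the processing simplicity the abstract attributes to avoiding mixed one- and two-literal positions.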
New freeform NURBS imaging design code
NASA Astrophysics Data System (ADS)
Chrisp, Michael P.
2014-12-01
A new optical imaging design code for NURBS freeform surfaces is described, with reduced optimization cycle times due to its fast raytrace engine, and the ability to handle larger NURBS surfaces because of its improved optimization algorithms. This program, FANO (Fast Accurate NURBS Optimization), is then applied to an f/2 three mirror anastigmat design. Given the same optical design parameters, the optical system with NURBS freeform surfaces has an average r.m.s. spot size of 4 microns. This spot size is six times smaller than the conventional aspheric design, showing that the use of NURBS freeform surfaces can improve the performance of three mirror anastigmats for the next generation of smaller pixel size, larger format detector arrays.
Coherent diffractive imaging using randomly coded masks
Seaberg, Matthew H.; D'Aspremont, Alexandre; Turner, Joshua J.
2015-12-07
We experimentally demonstrate an extension to coherent diffractive imaging that encodes additional information through the use of a series of randomly coded masks, removing the need for typical object-domain constraints while guaranteeing a unique solution to the phase retrieval problem. Phase retrieval is performed using a numerical convex relaxation routine known as “PhaseCut,” an iterative algorithm known for its stability and for its ability to find the global solution, which can be found efficiently and which is robust to noise. The experiment is performed using a laser diode at 532.2 nm, enabling rapid prototyping for future X-ray synchrotron and even free electron laser experiments.
Design of Pel Adaptive DPCM coding based upon image partition
NASA Astrophysics Data System (ADS)
Saitoh, T.; Harashima, H.; Miyakawa, H.
1982-01-01
A Pel Adaptive DPCM coding system based on image partition is developed which possesses coding characteristics superior to those of the Block Adaptive DPCM coding system. The method uses multiple DPCM coding loops and nonhierarchical cluster analysis. The coding performance of the Pel Adaptive DPCM coding method is found to differ depending on the subject image. The Pel Adaptive DPCM designed using these methods is shown to yield a maximum performance advantage of 2.9 dB for the Girl and Couple images and 1.5 dB for the Aerial image, although no advantage was obtained for the Moon image. These results show an improvement over the optimally designed Block Adaptive DPCM coding method proposed by Saito et al. (1981).
Image analysis of human corneal endothelial cells based on fractal theory
NASA Astrophysics Data System (ADS)
Zhang, Zhi; Luo, Qingming; Zeng, Shaoqun; Zhang, Xinyu; Huang, Dexiu; Chen, Weiguo
1999-09-01
A fast method is developed to quantitatively characterize the shape of human corneal endothelial cells using fractal theory, and it is applied to analyze microscopic photographs of human corneal endothelial cells. The results show that human corneal endothelial cells possess a third characterization parameter, the fractal dimension, in addition to the two traditional parameters (size and shape). Compared with traditional methods, this method has many advantages, such as automation, speed, and parallel processing, and it can be used to analyze large numbers of endothelial cells so that the obtained values are statistically significant. It offers a new approach for the clinical diagnosis of endothelial cells.
New Methods for Lossless Image Compression Using Arithmetic Coding.
ERIC Educational Resources Information Center
Howard, Paul G.; Vitter, Jeffrey Scott
1992-01-01
Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…
Fractal Feature Analysis of Beef Marbling Patterns
NASA Astrophysics Data System (ADS)
Chen, Kunjie; Qin, Chunfang
The purpose of this study is to investigate the fractal behavior of beef marbling patterns and to explore relationships between fractal dimensions and marbling scores. The authors first extracted marbling images from beef rib-eye cross-section images using computer image processing technologies and then performed fractal analysis on these marbling images based on the pixel covering method. Finally, the box-counting fractal dimension (BFD) and informational fractal dimension (IFD) of one hundred and thirty-five beef marbling images were calculated and plotted against the beef marbling scores. The results showed that all beef marbling images exhibit fractal behavior over the limited range of scales accessible to analysis. Furthermore, their BFD and IFD are closely related to the beef marbling score, suggesting that fractal analysis can provide a potential tool to calibrate the score of beef marbling.
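A minimal box-counting (BFD) estimator in the spirit of the pixel covering method can be sketched as follows. The grid sizes and the least-squares fit are generic choices for illustration, not the authors' exact procedure.

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    # Count boxes of side s containing at least one foreground pixel,
    # then fit log N(s) against log(1/s); the slope estimates the BFD.
    counts = []
    for s in sizes:
        h, w = img.shape
        view = img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(int(view.any(axis=(1, 3)).sum()))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

filled = np.ones((64, 64), dtype=bool)   # a filled square has dimension 2
dim = box_counting_dimension(filled)
```

Applied to a binarized marbling image instead of the filled square, the same slope would fall strictly between 1 and 2, which is the range the marbling-score correlation exploits.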
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized, variable-block-size transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass, low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
PynPoint code for exoplanet imaging
NASA Astrophysics Data System (ADS)
Amara, A.; Quanz, S. P.; Akeret, J.
2015-04-01
We announce the public release of PynPoint, a Python package that we have developed for analysing exoplanet data taken with the angular differential imaging observing technique. In particular, PynPoint is designed to model the point spread function of the central star and to subtract its flux contribution to reveal nearby faint companion planets. The current version of the package performs this correction by using a principal component analysis method to build a basis set for modelling the point spread function of the observations. We demonstrate the performance of the package by reanalysing publicly available data on the exoplanet β Pictoris b, which consist of close to 24,000 individual image frames. We show that PynPoint is able to analyse this typical dataset in roughly 1.5 min on a Mac Pro when the number of images is reduced by co-adding in sets of 5. The main computational work, the calculation of the singular value decomposition, parallelises well as a result of a reliance on the SciPy and NumPy packages. For this calculation the peak memory load is 6 GB, which can be handled comfortably on most workstations. A simpler calculation, co-adding over 50, takes 3 s with a peak memory usage of 600 MB; this can be performed easily on a laptop. In developing the package we have modularised the code so that we will be able to extend functionality in future releases, through the inclusion of more modules, without affecting the user's application programming interface. We distribute the PynPoint package under the GPLv3 licence through the central PyPI server, and the documentation is available online (http://pynpoint.ethz.ch).
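A minimal sketch of PCA-based PSF subtraction of the kind PynPoint performs. This is not the PynPoint API; the function name, mode count, and toy data are illustrative:

```python
import numpy as np

def psf_subtract(frames, n_modes=5):
    """Toy PCA-based PSF subtraction.

    frames : (n_frames, h, w) stack of images.
    Returns residuals after removing the first n_modes principal components.
    """
    n, h, w = frames.shape
    X = frames.reshape(n, h * w)
    X = X - X.mean(axis=0)            # centre each pixel over the stack
    # SVD gives the principal components as the rows of Vt.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_modes]              # (n_modes, h*w) PSF model basis
    model = X @ basis.T @ basis       # projection onto the PCA basis
    return (X - model).reshape(n, h, w)

# A stack dominated by one shared stellar PSF is removed almost entirely.
psf = np.outer(np.hanning(16), np.hanning(16))
stack = np.array([psf * (1 + 0.1 * i) for i in range(10)])
residual = psf_subtract(stack, n_modes=2)
```

In real ADI data the companion signal rotates with the field while the stellar PSF does not, which is what lets the residual retain the planet.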
Garrison, J.R., Jr.; Pearn, W.C.; von Rosenberg, D. W.
1992-01-01
In this paper reservoir rock/pore systems are considered natural fractal objects and are modeled as, and compared to, the regular fractal Menger sponge and Sierpinski carpet. The physical properties of a porous rock are, in part, controlled by the geometry of the pore system. The rate at which a fluid or electrical current can travel through the pore system of a rock is controlled by the path along which it must travel. This path is a subset of the overall geometry of the pore system. Reservoir rocks exhibit self-similarity over a range of length scales, suggesting that fractal geometry offers a means of characterizing these complex objects. The overall geometry of a rock/pore system can be described, conveniently and concisely, in terms of effective fractal dimensions. The rock/pore system is modeled as the fractal Menger sponge. A cross section through the rock/pore system, such as an image of a thin section of a rock, is modeled as the fractal Sierpinski carpet, which is equivalent to the face of the Menger sponge.
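For exactly self-similar sets such as these two, the fractal dimension follows directly from the similarity relation D = log N / log s, where N copies survive at scale 1/s. A brief check of the two model geometries:

```python
import math

def similarity_dimension(n_copies, scale_factor):
    """D = log N / log s for an exactly self-similar fractal."""
    return math.log(n_copies) / math.log(scale_factor)

# Sierpinski carpet: 8 sub-squares kept out of 9, each 1/3 the size.
carpet = similarity_dimension(8, 3)      # ~1.893
# Menger sponge: 20 sub-cubes kept out of 27, each 1/3 the size.
sponge = similarity_dimension(20, 3)     # ~2.727
```

Effective dimensions measured on real rocks then sit below these ideal values, reflecting how much of the solid (or pore) volume is actually removed.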
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.
2016-04-01
Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the ability of OCT to detect such changes using a computer texture analysis method based on Haralick texture features, fractal dimension, and the complex directional field method applied to different tissues. These features were used to identify spatial characteristics that differentiate healthy tissue from various skin cancers in cross-section OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing; here, an interval type-II fuzzy anisotropic diffusion algorithm was used for speckle noise reduction in OCT images. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity evaluated in different directions. A box-counting method is applied to compute the fractal dimension of the investigated tissues. Additionally, we used the complex directional field calculated by the local gradient methodology to improve the assessment quality of the diagnostic method. The complex directional field (as well as the "classical" directional field) can help describe an image as a set of directions. Because malignant tissue grows anisotropically, principal grooves may be observed on dermoscopic images, implying the possible existence of principal directions on OCT images. Our results suggest that the described texture features may provide useful information to differentiate pathological from healthy tissue. The problem of distinguishing melanoma from nevi is addressed in this work thanks to the large quantity of experimental data (143 OCT images covering tumors such as basal cell carcinoma (BCC), malignant melanoma (MM), and nevi). We obtained a sensitivity of about 90% and a specificity of about 85%. Further research is warranted to determine how this approach may be used to select the regions of interest automatically.
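The four Haralick features named above are all derived from a grey-level co-occurrence matrix (GLCM). A minimal NumPy sketch, with the function name, quantisation level, and test image chosen for illustration rather than taken from the authors' pipeline:

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix and four Haralick features.

    img    : 2-D array of ints in [0, levels).
    dx, dy : pixel offset defining the co-occurrence direction.
    """
    glcm = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()                    # joint probability
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1 + np.abs(i - j)))
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum(p * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(p * (j - mu_j) ** 2))
    correlation = np.sum(p * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j)
    return {"contrast": contrast, "energy": energy,
            "homogeneity": homogeneity, "correlation": correlation}

# Vertical stripes: horizontal neighbours always differ by one level,
# so contrast is 1 and the levels are perfectly anti-correlated.
stripes = np.tile(np.array([0, 1]), (4, 4))
feats = glcm_features(stripes, levels=2)
```

Evaluating the GLCM at several (dx, dy) offsets gives the "different directions" the abstract mentions.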
Coded source neutron imaging at the PULSTAR reactor
Xiao, Ziyu; Mishra, Kaushal; Hawari, Ayman; Bingham, Philip R; Bilheux, Hassina Z; Tobin Jr, Kenneth William
2011-01-01
A neutron imaging facility is located on beam-tube No. 5 of the 1-MW PULSTAR reactor at North Carolina State University. An investigation of high resolution imaging using the coded source imaging technique has been initiated at the facility. Coded imaging uses a mosaic of pinholes to encode an aperture, thus generating an encoded image of the object at the detector. To reconstruct the image data received by the detector, the corresponding decoding patterns are used. The optimized design of the coded mask is critical for the performance of this technique and depends on the characteristics of the imaging beam. In this work, a 34 x 38 uniformly redundant array (URA) coded aperture system is studied for application at the PULSTAR reactor neutron imaging facility. The URA pattern was fabricated on a 500 μm gadolinium sheet. Simulations and experiments with a pinhole object have been conducted using the Gd URA and the optimized beam line.
Wavelet based hierarchical coding scheme for radar image compression
NASA Astrophysics Data System (ADS)
Sheng, Wen; Jiao, Xiaoli; He, Jifeng
2007-12-01
This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized as a raster-scanned image according to the radar's pulse repetition frequency. After reorganization, the reformed image is decomposed into blocks of different frequency bands by 2-D wavelet transformation; each block is quantized and coded by a Huffman coding scheme. A demonstration system is developed, showing that under the requirement of real-time processing the compression ratio can be very high, with no significant loss of target signal in the restored radar image.
Target Detection Using Fractal Geometry
NASA Technical Reports Server (NTRS)
Fuller, J. Joseph
1991-01-01
The concepts and theory of fractal geometry were applied to the problem of segmenting a 256 x 256 pixel image so that manmade objects could be extracted from natural backgrounds. The two most important measurements necessary to extract these manmade objects were fractal dimension and lacunarity. Provision was made to pass the manmade portion to a lookup table for subsequent identification. A computer program was written to construct cloud backgrounds of fractal dimensions which were allowed to vary between 2.2 and 2.8. Images of three model space targets were combined with these backgrounds to provide a data set for testing the validity of the approach. Once the data set was constructed, computer programs were written to extract estimates of the fractal dimension and lacunarity on 4 x 4 pixel subsets of the image. It was shown that for clouds of fractal dimension 2.7 or less, appropriate thresholding on fractal dimension and lacunarity yielded a 64 x 64 edge-detected image with all or most of the cloud background removed. These images were enhanced by an erosion and dilation to provide the final image passed to the lookup table. While the ultimate goal was to pass the final image to a neural network for identification, this work shows the applicability of fractal geometry to the problems of image segmentation, edge detection and separating a target of interest from a natural background.
Advanced technology development for image gathering, coding, and processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1990-01-01
Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.
Multiview image and depth map coding for holographic TV system
NASA Astrophysics Data System (ADS)
Senoh, Takanori; Wakunami, Koki; Ichihashi, Yasuyuki; Sasaki, Hisayuki; Oi, Ryutaro; Yamamoto, Kenji
2014-11-01
A holographic TV system based on multiview image and depth map coding and the analysis of coding noise effects in reconstructed images is proposed. A major problem for holographic TV systems is the huge amount of data that must be transmitted. It has been shown that this problem can be solved by capturing a three-dimensional scene with multiview cameras, deriving depth maps from multiview images or directly capturing them, encoding and transmitting the multiview images and depth maps, and generating holograms at the receiver side. This method shows the same subjective image quality as hologram data transmission with about 1/97000 of the data rate. Speckle noise, which masks coding noise when the coded bit rate is not extremely low, is shown to be the main determinant of reconstructed holographic image quality.
Texture Analysis In Cytology Using Fractals
NASA Astrophysics Data System (ADS)
Basu, Santanu; Barba, Joseph; Chan, K. S.
1990-01-01
We present some preliminary results of a study aimed at assessing the actual effectiveness of fractal theory for texture description in medical image analysis. The specific goal of this research is to utilize the "fractal dimension" to discriminate between normal and cancerous human cells. In particular, we have considered four types of cells, namely breast, bronchial, ovarian, and uterine. A method based on fractional Brownian motion theory is employed to compute the "fractal dimension" of cells. Experiments with real images reveal that the range of scales over which the cells exhibit fractal properties can be used as the discriminatory feature to identify cancerous cells.
Matsueda, Hiroaki; Lee, Ching Hua
2015-03-10
We examine the singular value spectrum of a class of two-dimensional fractal images. We find that the spectra can be mapped onto entanglement spectra of free fermions in one dimension. This exact mapping tells us that the singular value decomposition is a way of detecting a holographic relation between classical and quantum systems.
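A small sketch of computing the singular value spectrum of an image matrix, the object being mapped in this work. The normalisation convention below (unit sum of squared singular values, as for a quantum state) is an assumption for illustration:

```python
import numpy as np

def singular_value_spectrum(img):
    """Normalised singular values of an image matrix, largest first.

    With lambda_n normalised so that sum(lambda_n**2) = 1, an
    entanglement-spectrum-style quantity is xi_n = -2 log lambda_n.
    """
    s = np.linalg.svd(np.asarray(img, dtype=float), compute_uv=False)
    return s / np.sqrt(np.sum(s ** 2))

# A rank-1 (separable) image concentrates all weight in one singular value.
rank1 = np.outer(np.arange(1, 9), np.arange(1, 9))
lam = singular_value_spectrum(rank1)
```

Fractal images, by contrast, spread weight across many singular values, and it is the shape of that decay that carries the physics.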
Coded source neutron imaging with a MURA mask
NASA Astrophysics Data System (ADS)
Zou, Y. B.; Schillinger, B.; Wang, S.; Zhang, X. S.; Guo, Z. Y.; Lu, Y. R.
2011-09-01
In coded source neutron imaging the single aperture commonly used in neutron radiography is replaced with a coded mask. Using a coded source can improve the neutron flux at the sample plane when a very high L/D ratio is needed; coded source imaging is thus a possible way to reduce the exposure time required to obtain a neutron image with a very high L/D ratio. A 17×17 modified uniformly redundant array coded source was tested in this work. There are 144 holes of 0.8 mm diameter on the coded source. The neutron flux from the coded source is as high as from a single 9.6 mm aperture, while its effective L/D is the same as in the case of a 0.8 mm aperture. The Richardson-Lucy maximum likelihood algorithm was used for image reconstruction. Compared to an in-line phase contrast neutron image taken with a 1 mm aperture, it takes much less time for the coded source to obtain an image of similar quality.
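The Richardson-Lucy algorithm mentioned above iterates multiplicative corrections of the estimate by the back-projected ratio of observed data to the re-blurred estimate. A 1-D toy sketch, not the authors' reconstruction code (in their setting the kernel is the coded-source pattern rather than this assumed 3-tap blur):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=100):
    """Richardson-Lucy deconvolution for a small 1-D kernel.

    observed : blurred, nonnegative 1-D signal.
    psf      : 1-D point-spread function that sums to 1.
    """
    def conv(x, k):
        return np.convolve(x, k, mode="same")
    psf_flip = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = conv(estimate, psf)
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * conv(ratio, psf_flip)  # multiplicative update
    return estimate

# Recover a point source blurred by a 3-tap PSF.
psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(21)
truth[10] = 1.0
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```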
Image coding approach based on multiscale matching pursuits operation
NASA Astrophysics Data System (ADS)
Li, Hui; Wolff, Ingo
1998-12-01
A new image coding technique based on the Multiscale Matching Pursuits (MMP) approach is presented. Using a pre-defined dictionary set, which consists of a limited number of elements, the MMP approach can decompose/encode images at different image scales and reconstruct/decode the image with the same dictionary. The MMP approach can be used to represent image texture at different scales as well as the whole image. Instead of a pixel-based image representation, the MMP method represents the image texture as an index into a dictionary and can thereby encode the image with a low data volume. Based on the MMP operation, the image content can be coded in order from global to local and detail.
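Plain matching pursuit, the greedy core of the MMP approach, can be sketched as follows; this is illustrative only, and MMP's multiscale dictionary handling is not shown:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=3):
    """Greedy matching pursuit: repeatedly pick the dictionary atom with
    the largest inner product against the current residual.

    dictionary : (n_total, n) array of unit-norm atoms (rows).
    Returns (indices, coefficients, residual).
    """
    residual = signal.astype(float).copy()
    idx, coef = [], []
    for _ in range(n_atoms):
        scores = dictionary @ residual
        best = int(np.argmax(np.abs(scores)))
        idx.append(best)
        coef.append(scores[best])
        residual = residual - scores[best] * dictionary[best]
    return idx, coef, residual

# With an orthonormal dictionary, a 2-term signal is recovered exactly,
# and (index, coefficient) pairs are all a decoder needs.
D = np.eye(4)
sig = 3.0 * D[1] + 0.5 * D[3]
idx, coef, res = matching_pursuit(sig, D, n_atoms=2)
```

Transmitting only the chosen indices and coefficients, rather than pixels, is what yields the low data volume the abstract describes.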
Fractal analysis for assessing tumour grade in microscopic images of breast tissue
NASA Astrophysics Data System (ADS)
Tambasco, Mauro; Costello, Meghan; Newcomb, Chris; Magliocco, Anthony M.
2007-03-01
In 2006, breast cancer is expected to continue as the leading form of cancer diagnosed in women, and the second leading cause of cancer mortality in this group. A method that has proven useful for guiding the choice of treatment strategy is the assessment of histological tumor grade. The grading is based upon the mitosis count, nuclear pleomorphism, and tubular formation, and is known to be subject to inter-observer variability. Since cancer grade is one of the most significant predictors of prognosis, errors in grading can affect patient management and outcome. Hence, there is a need to develop a breast cancer grading tool that is minimally operator dependent to reduce variability associated with the current grading system, and thereby reduce uncertainty that may impact patient outcome. In this work, we explored the potential of a computer-based approach using fractal analysis as a quantitative measure of cancer grade for breast specimens. More specifically, we developed and optimized computational tools to compute the fractal dimension of low- versus high-grade breast sections and found them to be significantly different, 1.3 ± 0.10 versus 1.49 ± 0.10, respectively (Kolmogorov-Smirnov test, p < 0.001). These results indicate that fractal dimension (a measure of morphologic complexity) may be a useful tool for demarcating low- versus high-grade cancer specimens, and has potential as an objective measure of breast cancer grade. Such prognostic value could provide more sensitive and specific information that would reduce inter-observer variability by aiding the pathologist in grading cancers.
Coded Aperture Imaging for Fluorescent X-rays-Biomedical Applications
Haboub, Abdel; MacDowell, Alastair; Marchesini, Stefano; Parkinson, Dilworth
2013-06-01
Employing a coded aperture pattern in front of a pixelated charge-coupled device (CCD) detector allows for imaging of fluorescent x-rays (6-25 keV) emitted from samples irradiated with x-rays. Coded apertures encode the angular direction of x-rays and allow for a large numerical aperture x-ray imaging system. The algorithm to develop the self-supported coded aperture pattern of the No-Two-Holes-Touching (NTHT) type was developed. The algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of modeling and confirmed by experiments. Samples were irradiated by monochromatic synchrotron x-ray radiation, and fluorescent x-rays from several different test metal samples were imaged through the newly developed coded aperture imaging system. By choice of the exciting energy, the different metals were speciated.
Progressive Image Coding by Hierarchical Linear Approximation.
ERIC Educational Resources Information Center
Wu, Xiaolin; Fang, Yonggang
1994-01-01
Proposes a scheme of hierarchical piecewise linear approximation as an adaptive image pyramid. A progressive image coder comes naturally from the proposed image pyramid. The new pyramid is semantically more powerful than regular tessellation but syntactically simpler than free segmentation. This compromise between adaptability and complexity…
Perceptually lossless coding of digital monochrome ultrasound images
NASA Astrophysics Data System (ADS)
Wu, David; Tan, Damian M.; Griffiths, Tania; Wu, Hong Ren
2005-07-01
A preliminary investigation of encoding monochrome ultrasound images with a novel perceptually lossless coder is presented. Based on the JPEG 2000 coding framework, the proposed coder employs a vision model to identify and remove visually insignificant/irrelevant information. Current simulation results have shown coding performance gains over the JPEG compliant LOCO lossless and JPEG 2000 lossless coders without any perceivable distortion.
NASA Technical Reports Server (NTRS)
Bruno, B. C.; Taylor, G. J.; Rowland, S. K.; Lucey, P. G.; Self, S.
1992-01-01
Results are presented of a preliminary investigation of the fractal nature of the plan-view shapes of lava flows in Hawaii (based on field measurements and aerial photographs), as well as in Idaho and the Galapagos Islands (using aerial photographs only). The shapes of the lava flow margins are found to be fractals: lava flow shape is scale-invariant. This observation suggests that nonlinear forces are operating in them because nonlinear systems frequently produce fractals. A'a and pahoehoe flows can be distinguished by their fractal dimensions (D). The majority of the a'a flows measured have D between 1.05 and 1.09, whereas the pahoehoe flows generally have higher D (1.14-1.23). The analysis is extended to other planetary bodies by measuring flows from orbital images of Venus, Mars, and the Moon. All are fractal and have D consistent with the range of terrestrial a'a and pahoehoe values.
Spatially-varying IIR filter banks for image coding
NASA Technical Reports Server (NTRS)
Chung, Wilson C.; Smith, Mark J. T.
1992-01-01
This paper reports on the application of spatially variant infinite impulse response (IIR) filter banks to subband image coding. The new filter bank is based on computationally efficient recursive polyphase decompositions that dynamically change in response to the input signal. In the absence of quantization, reconstruction can be made exact. However, by proper choice of an adaptation scheme, we show that subband image coding based on time varying filter banks can yield improvement over the use of conventional filter banks.
Collaborative Image Coding and Transmission over Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Wu, Min; Chen, Chang Wen
2006-12-01
Imaging sensors are able to provide intuitive visual information for quick recognition and decision-making. However, imaging sensors usually generate vast amounts of data. Therefore, processing and coding of image data collected in a sensor network for the purpose of energy-efficient transmission poses a significant technical challenge. In particular, multiple sensors may be collecting similar visual information simultaneously. We propose in this paper a novel collaborative image coding and transmission scheme to minimize the energy for data transmission. First, we apply a shape-matching method to coarsely register images to find the maximal overlap and exploit the spatial correlation between images acquired from neighboring sensors. For a given image sequence, we transmit the background image only once. A lightweight and efficient background subtraction method is employed to detect targets. Only the target regions and their spatial locations are transmitted to the monitoring center. The whole image can then be reconstructed by fusing the background and the target images as well as their spatial locations. Experimental results show that the energy for image transmission can indeed be greatly reduced with collaborative image coding and transmission.
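A toy version of the background-subtraction step, keeping only the target region and its location for transmission. The function name and threshold are illustrative, not the paper's method:

```python
import numpy as np

def extract_target(frame, background, thresh=20):
    """Lightweight background subtraction: return the location and pixels
    of the region that differs from the known background.

    A sensor would transmit only (location, patch) instead of the frame.
    """
    diff = np.abs(frame.astype(int) - background.astype(int)) > thresh
    ys, xs = np.nonzero(diff)
    if ys.size == 0:
        return None                      # nothing to transmit this frame
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    return (y0, x0), frame[y0:y1, x0:x1].copy()

background = np.zeros((16, 16), dtype=np.uint8)
frame = background.copy()
frame[5:8, 6:10] = 200                   # a target enters the scene
loc, patch = extract_target(frame, background)
```

The monitoring center reconstructs the frame by pasting the patch back into its stored copy of the background at the transmitted location.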
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
Print protection using high-frequency fractal noise
NASA Astrophysics Data System (ADS)
Mahmoud, Khaled W.; Blackledge, Jonathon M.; Datta, Sekharjit; Flint, James A.
2004-06-01
All digital images are band-limited to a degree that is determined by the spatial extent of the point spread function, the bandwidth of the image being determined by the optical transfer function. In the printing industry, the limit is determined by the resolution of the printed material. By band-limiting the digital image in such a way that the printed document maintains its fidelity, it is possible to use the out-of-band frequency space to introduce low-amplitude coded data that remains hidden in the image. In this way, a covert signature can be embedded into an image to provide a digital watermark which is sensitive to reproduction. In this paper high-frequency fractal noise is used as the low-amplitude signal. A statistically robust solution to the authentication of printed material using high-frequency fractal noise is proposed here, based on cross-entropy metrics to provide a statistical confidence test. The fractal watermark is based on the application of self-affine fields, which is suitable for documents containing a high degree of texture. In principle, this new approach will allow batch tracking to be performed using coded data that has been embedded into the high-frequency components of the image, whose statistical characteristics are dependent on the printer/scanner technology. The details of this method as well as experimental results are presented.
NASA Astrophysics Data System (ADS)
Martin-Fernandez, Marcos; Alberola-Lopez, Carlos; Guerrero-Rodriguez, David; Ruiz-Alzola, Juan
2000-12-01
In this paper we propose a novel lossless coding scheme for medical images that allows the final user to switch between a lossy and a lossless mode. This is done by means of a progressive reconstruction philosophy (which can be interrupted at will), so we believe that our scheme gives a way to trade off between the accuracy needed for medical diagnosis and the information reduction needed for storage and transmission. We combine vector quantization, run-length bit-plane coding, and entropy coding. Specifically, the first step is a vector quantization procedure; the centroid codes are Huffman-coded making use of a set of probabilities that are calculated in the learning phase. The image is reconstructed at the coder in order to obtain the error image; this second image is divided into bit planes, which are then run-length and Huffman coded. A second statistical analysis is performed during the learning phase to obtain the parameters needed in this final stage. Our coder is currently trained for hand radiographs and fetal echographies. We compare our results for these two types of images to classical results on bit-plane coding and the JPEG standard. Our coder turns out to outperform both of them.
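The bit-plane decomposition and run-length stage applied to the error image can be sketched as follows; this is illustrative (the Huffman stage and the VQ learning phase are omitted, and the run-length convention is an assumption):

```python
import numpy as np

def bit_planes(img, n_bits=8):
    """Split an 8-bit image into binary bit planes, MSB first."""
    return [(img >> b) & 1 for b in range(n_bits - 1, -1, -1)]

def run_length_encode(plane):
    """Run lengths of a flattened binary plane.

    Convention: the first run counts zeros (and may have length 0),
    so the decoder knows which symbol each run belongs to.
    """
    runs, current, count = [], 0, 0
    for b in plane.ravel():
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return runs

img = np.array([[0, 0, 255, 255],
                [0, 0, 255, 255]], dtype=np.uint8)
planes = bit_planes(img)
runs = run_length_encode(planes[0])   # MSB plane: 0,0,1,1,0,0,1,1
```

The run lengths would then be Huffman-coded with probabilities estimated in the learning phase.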
Coded Excitation Plane Wave Imaging for Shear Wave Motion Detection
Song, Pengfei; Urban, Matthew W.; Manduca, Armando; Greenleaf, James F.; Chen, Shigao
2015-01-01
Plane wave imaging has greatly advanced the field of shear wave elastography thanks to its ultrafast imaging frame rate and the large field-of-view (FOV). However, plane wave imaging also has decreased penetration due to lack of transmit focusing, which makes it challenging to use plane waves for shear wave detection in deep tissues and in obese patients. This study investigated the feasibility of implementing coded excitation in plane wave imaging for shear wave detection, with the hypothesis that coded ultrasound signals can provide superior detection penetration and shear wave signal-to-noise-ratio (SNR) compared to conventional ultrasound signals. Both phase encoding (Barker code) and frequency encoding (chirp code) methods were studied. A first phantom experiment showed an approximate penetration gain of 2-4 cm for the coded pulses. Two subsequent phantom studies showed that all coded pulses outperformed the conventional short imaging pulse by providing superior sensitivity to small motion and robustness to weak ultrasound signals. Finally, an in vivo liver case study on an obese subject (Body Mass Index = 40) demonstrated the feasibility of using the proposed method for in vivo applications, and showed that all coded pulses could provide higher SNR shear wave signals than the conventional short pulse. These findings indicate that by using coded excitation shear wave detection, one can benefit from the ultrafast imaging frame rate and large FOV provided by plane wave imaging while preserving good penetration and shear wave signal quality, which is essential for obtaining robust shear elasticity measurements of tissue. PMID:26168181
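The pulse-compression idea behind phase-coded excitation can be illustrated with the Barker-13 code named above: the matched-filter output has a peak equal to the code length (the SNR gain) and sidelobes of magnitude at most one. A minimal sketch, separate from the authors' ultrasound implementation:

```python
import numpy as np

# Barker-13 phase code: +/-1 chips modulating the transmit pulse.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# Matched filtering (correlation with the code) compresses the long coded
# pulse back into a sharp peak, recovering axial resolution.
compressed = np.correlate(barker13, barker13, mode="full")

peak = compressed.max()                       # 13: the full code energy
sidelobe = np.abs(np.delete(compressed, compressed.argmax())).max()
```

Chirp (frequency-encoded) excitation follows the same transmit-long, compress-on-receive principle with a swept-frequency pulse instead of phase chips.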
Code-modulated interferometric imaging system using phased arrays
NASA Astrophysics Data System (ADS)
Chauhan, Vikas; Greene, Kevin; Floyd, Brian
2016-05-01
Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which can allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power-combine incoming signals prior to digitization, orthogonal code-modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
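The code property the abstract relies on, the element-wise product of two orthogonal codes being a third orthogonal code, holds for Sylvester-construction Walsh-Hadamard codes, as the sketch below shows. This is illustrative; whether these are the codes used in the actual imager is an assumption:

```python
import numpy as np

def walsh_codes(n):
    """Sylvester-construction Hadamard (Walsh) codes of length 2**n."""
    H = np.array([[1.0]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

H = walsh_codes(2)            # four orthogonal +/-1 codes of length 4

# Orthogonality: distinct codes have zero inner product.
gram = H @ H.T                # equals 4 * identity

# Closure: the element-wise product of codes i and j is code (i XOR j).
# This is what lets the squaring operation map a product of two coded
# signals onto a third known code, so visibilities demultiplex directly.
prod = H[1] * H[2]            # equals H[3]
```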
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Sabyasachi; Das, Nandan K.; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.
2014-05-01
DIC (differential interference contrast) images of cervical pre-cancer tissues are taken from the epithelium region, to which wavelet transform and multi-fractal analysis are applied. A discrete wavelet transform (DWT) with the Daubechies basis is performed to identify fluctuations over polynomial trends for clear characterization and differentiation of tissues. A systematic investigation of the denoised images is carried out with the continuous Morlet wavelet. The scalogram reveals changes in coefficient peak values from grade-I to grade-III. Wavelet normalized energy plots are computed in order to show the difference in periodicity among different grades of cancerous tissues. Using multi-fractal detrended fluctuation analysis (MFDFA), it is observed that the values of the Hurst exponent and the width of the singularity spectrum decrease as cancer progresses from grade-I to grade-III tissue.
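The q = 2 (monofractal) special case of MFDFA reduces to detrended fluctuation analysis, which already yields the Hurst exponent reported above. A compact sketch, illustrative rather than the authors' code, with window scales chosen arbitrarily:

```python
import numpy as np

def dfa_hurst(signal, scales=(4, 8, 16, 32, 64)):
    """Monofractal DFA estimate of the Hurst exponent (MFDFA with q = 2).

    Detrends the cumulative profile in windows of each scale with a linear
    fit, then regresses log fluctuation against log scale.
    """
    profile = np.cumsum(signal - np.mean(signal))
    flucts = []
    for s in scales:
        f2 = []
        for w in range(len(profile) // s):
            seg = profile[w * s:(w + 1) * s]
            x = np.arange(s)
            trend = np.polyval(np.polyfit(x, seg, 1), x)
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    h, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return h

rng = np.random.default_rng(1)
h_noise = dfa_hurst(rng.standard_normal(4096))   # white noise: H near 0.5
```

Full MFDFA repeats the fluctuation average for a range of moments q, and the spread of the resulting exponents gives the singularity-spectrum width the abstract tracks.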
NASA Technical Reports Server (NTRS)
Jaggi, S.; Quattrochi, Dale A.; Lam, Nina S.-N.
1993-01-01
Fractal geometry is increasingly becoming a useful tool for modeling natural phenomena. As an alternative to Euclidean concepts, fractals allow for a more accurate representation of the nature of complexity in natural boundaries and surfaces. The purpose of this paper is to introduce and implement three algorithms in C code for deriving fractal measurements from remotely sensed data. These three methods are: the line-divider method, the variogram method, and the triangular prism method. Remote-sensing data acquired by NASA's Calibrated Airborne Multispectral Scanner (CAMS) are used to compute the fractal dimension using each of the three methods. These data were obtained at a 30 m pixel spatial resolution over a portion of western Puerto Rico in January 1990. A description of the three methods, their implementation in a PC-compatible environment, and some results of applying these algorithms to remotely sensed image data are presented.
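The first of the three methods, the line-divider (ruler) method, can be sketched as follows. The paper's implementations are in C; this Python version, its function name, and its step sizes are illustrative:

```python
import numpy as np

def divider_dimension(points, steps=(1, 2, 4, 8)):
    """Divider (ruler) estimate of the fractal dimension of a curve.

    points : (n, 2) array of ordered curve vertices.
    For each step k, walk the curve taking every k-th vertex and sum the
    chord lengths; D = 1 - slope of log(length) vs. log(step).
    """
    lengths = []
    for k in steps:
        sub = points[::k]
        seg = np.diff(sub, axis=0)
        lengths.append(np.sum(np.hypot(seg[:, 0], seg[:, 1])))
    slope, _ = np.polyfit(np.log(steps), np.log(lengths), 1)
    return 1.0 - slope

# A straight line measures the same length at every ruler size, so D = 1;
# a fractal boundary loses length as the ruler grows, giving D > 1.
t = np.linspace(0, 1, 257)
line = np.column_stack([t, 2 * t])
d_line = divider_dimension(line)
```

The variogram and triangular prism methods extend the same log-log regression idea from boundaries to surfaces.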
JPEG 2000 coding of image data over adaptive refinement grids
NASA Astrophysics Data System (ADS)
Gamito, Manuel N.; Dias, Miguel S.
2003-06-01
An extension of the JPEG 2000 standard is presented for non-conventional images resulting from an adaptive subdivision process. Samples generated through adaptive subdivision can have different sizes, depending on the amount of subdivision that was locally introduced in each region of the image. The subdivision principle allows each individual sample to be recursively subdivided into sets of four progressively smaller samples. Image datasets generated through adaptive subdivision find application in computational physics, where simulations of natural processes are often performed over adaptive grids. It is also found that compression gains can be achieved for non-natural imagery, like text or graphics, if they first undergo an adaptive subdivision process. The representation of adaptive subdivision images is performed by first coding the subdivision structure into the JPEG 2000 bitstream, in a lossless manner, followed by the quantized and entropy-coded transform coefficients. Due to the irregular distribution of sample sizes across the image, the wavelet transform must be applied on irregular image subsets that are nested across all the resolution levels. Using the conventional JPEG 2000 coding standard, adaptive subdivision images would first have to be upsampled to the smallest sample size in order to attain a uniform resolution. The proposed method for coding adaptive subdivision images is shown to perform better than conventional JPEG 2000 for medium to high bitrates.
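The recursive four-way subdivision itself is a quadtree split driven by some local criterion. A minimal sketch using block variance as the splitting test, which is an assumption; the actual criterion comes from the simulation or image content:

```python
import numpy as np

def quadtree(img, max_var=10.0, min_size=2):
    """Adaptive subdivision: split a square block into four children until
    its variance falls below max_var or the block reaches min_size.

    Returns a list of (y, x, size) leaf blocks covering the image.
    """
    leaves = []
    def split(y, x, size):
        block = img[y:y + size, x:x + size]
        if size <= min_size or block.var() <= max_var:
            leaves.append((y, x, size))   # a sample of this size
            return
        h = size // 2
        for dy in (0, h):
            for dx in (0, h):
                split(y + dy, x + dx, h)  # four smaller samples
    split(0, 0, img.shape[0])
    return leaves

# Flat background with one busy corner: only the corner gets refined,
# so the leaves (the variable-size samples) still tile the whole image.
img = np.zeros((8, 8))
img[:4, :4] = np.arange(16).reshape(4, 4) * 10
blocks = quadtree(img)
```

It is this leaf structure that the proposed coder writes losslessly into the bitstream before the coefficients.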
Adaptive coded aperture imaging: progress and potential future applications
NASA Astrophysics Data System (ADS)
Gottesman, Stephen R.; Isser, Abraham; Gigioli, George W., Jr.
2011-09-01
Interest in Adaptive Coded Aperture Imaging (ACAI) continues to grow as the optical and systems engineering community becomes increasingly aware of ACAI's potential benefits in the design and performance of both imaging and non-imaging systems, such as good angular resolution (IFOV), wide distortion-free field of view (FOV), excellent image quality, and lightweight construction. In this presentation we first review the accomplishments made over the past five years, then expand on previously published work to show how replacement of conventional imaging optics with coded apertures can lead to a reduction in system size and weight. We also present a trade space analysis of key design parameters of coded apertures and review potential applications as replacements for traditional imaging optics. Results are presented based on last year's investigation into the trade space of IFOV, resolution, effective focal length, and wavelength of incident radiation for coded aperture architectures. Finally we discuss the potential application of coded apertures for replacing objective lenses of night vision goggles (NVGs).
Coded aperture imaging for fluorescent x-rays
Haboub, A.; MacDowell, A. A.; Marchesini, S.; Parkinson, D. Y.
2014-06-15
We employ a coded aperture pattern in front of a pixelated charge-coupled device (CCD) detector to image fluorescent x-rays (6–25 keV) from samples irradiated with synchrotron radiation. Coded apertures encode the angular direction of x-rays and, given a known source plane, allow for a large-numerical-aperture x-ray imaging system. An algorithm was developed to design and fabricate the free-standing No-Two-Holes-Touching aperture pattern. The algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of a ray-tracing technique and confirmed by experiments on standard samples.
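The angular encoding and correlation decoding can be illustrated in one dimension, with a maximal-length (m-)sequence standing in for the paper's No-Two-Holes-Touching pattern; the mask, shift, and sizes below are illustrative only.

```python
import numpy as np

# 15-bit maximal-length sequence (LFSR for x^4 + x + 1): its cyclic
# autocorrelation is two-valued, so correlation decoding is unambiguous.
state = [1, 0, 0, 0]
bits = []
for _ in range(15):
    bits.append(state[-1])
    state = [state[-1] ^ state[0]] + state[:-1]
mask = np.array(bits)                        # 1 = open hole, 0 = opaque

shift = 5                                    # unknown source direction

# The detector records the mask's shadow displaced by the source direction.
recorded = np.roll(mask, shift)

# Decoding: correlate the recording against every cyclic shift of the
# known mask; the score peaks at the true displacement.
scores = [int(np.dot(recorded, np.roll(mask, s))) for s in range(15)]
estimated_shift = int(np.argmax(scores))
```

For the m-sequence the off-peak correlation is flat, so the peak identifies the source direction exactly; real two-dimensional patterns apply the same principle per source point.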
Imaging The Genetic Code of a Virus
NASA Astrophysics Data System (ADS)
Graham, Jenna; Link, Justin
2013-03-01
Atomic Force Microscopy (AFM) has allowed scientists to explore physical characteristics of nano-scale materials. However, the challenges that come with such an investigation are rarely expressed. In this research project, a method was developed to image the well-studied DNA of the virus lambda phage. Through testing and integrating several sample preparations described in the literature, a quality image of lambda phage DNA can be obtained. In our experiment, we developed a technique using the Veeco Autoprobe CP AFM and a mica substrate with an appropriate adsorption buffer of HEPES and NiCl2. This presentation will focus on the development of a procedure to image lambda phage DNA at Xavier University. Supported by the John A. Hauck Foundation and Xavier University.
Waliszewski, Przemyslaw
2016-01-01
The subjective evaluation of tumor aggressiveness is a cornerstone of contemporary tumor pathology. A large intra- and interobserver variability is a known limiting factor of this approach. This fundamental weakness influences the statistical deterministic models of progression risk assessment. It is unlikely that the recent modification of tumor grading according to Gleason criteria for prostate carcinoma will cause a qualitative change and significantly improve accuracy. The Gleason system does not allow the identification of low-aggressive carcinomas by precise criteria. This ontological dichotomy implies the application of an objective, quantitative approach to the evaluation of tumor aggressiveness as an alternative. That novel approach must be developed and validated in a manner that is independent of the results of any subjective evaluation. For example, computer-aided image analysis can provide information about the geometry of the spatial distribution of cancer cell nuclei. A series of interrelated complexity measures characterizes the complex tumor images unequivocally. Using those measures, carcinomas can be classified into equivalence classes and compared with each other. Furthermore, those measures define quantitative criteria for the identification of low- and high-aggressive prostate carcinomas, information that the subjective approach is not able to provide. The co-application of those complexity measures in cluster analysis leads to the conclusion that either the subjective or the objective classification of tumor aggressiveness for prostate carcinomas should comprise at most three grades (or classes). Finally, this set of global fractal dimensions enables a look into the dynamics of the underlying cellular system of interacting cells and the reconstruction of the temporal-spatial attractor based on Takens' embedding theorem. Both computer-aided image analysis and the subsequent fractal synthesis could be performed
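A minimal sketch of the box-counting estimate that underlies such global fractal dimensions (the grid sizes and the two sanity-check images are our own; the variational box-counting procedure used in the pupillometry study above is more elaborate):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of a binary image: count the
    occupied s-by-s boxes for several s, then fit the slope of
    log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        boxes = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled region is 2-dimensional, a line 1-dimensional;
# irregular structures such as nuclear distributions fall in between.
plane = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
```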
Quantum image coding with a reference-frame-independent scheme
NASA Astrophysics Data System (ADS)
Chapeau-Blondeau, François; Belin, Etienne
2016-07-01
For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.
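The "bit-flip noise increasing with the misalignment" for the direct one-qubit-per-pixel coding follows from elementary measurement statistics: reading a qubit prepared as |0⟩ or |1⟩ in a basis rotated by θ flips the bit with probability sin²(θ/2). A short numerical check of that dependence (the entangled-pair scheme, which removes it, is not modeled here):

```python
import numpy as np

def flip_probability(theta):
    """Chance that a pixel sent as |0> or |1> is read incorrectly when
    the receiver measures in a basis rotated by `theta` relative to the
    emitter's frame: p = sin^2(theta / 2)."""
    return np.sin(theta / 2.0) ** 2

# Error grows monotonically with misalignment: zero for aligned frames,
# up to certain misreading at theta = pi.
angles = np.linspace(0.0, np.pi, 7)
errors = flip_probability(angles)
```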
Coded aperture design in mismatched compressive spectral imaging.
Galvis, Laura; Arguello, Henry; Arce, Gonzalo R
2015-11-20
Compressive spectral imaging (CSI) senses a scene by using two-dimensional coded projections such that the number of measurements is far less than that used in spectral scanning-type instruments. An architecture that efficiently implements CSI is the coded aperture snapshot spectral imager (CASSI). A physical limitation of the CASSI is the system resolution, which is determined by the lowest-resolution element used in the detector and the coded aperture. Although the final resolution of the system is usually given by the detector, in the CASSI the use of a low-resolution coded aperture implemented using a digital micromirror device (DMD), which induces the grouping of pixels into superpixels in the detector, is decisive for the final resolution. The mismatch arises from the difference in pitch between the DMD mirrors and the focal plane array (FPA) pixels. A traditional solution to this mismatch consists of grouping several pixels into square features, which underutilizes the DMD and detector resolution and, therefore, reduces the spatial and spectral resolution of the reconstructed spectral images. This paper presents a model for CASSI that admits the mismatch and permits exploiting the maximum resolution of the coding element and the FPA sensor. A super-resolution algorithm and a synthetic coded aperture are developed in order to solve the mismatch. The mathematical models are verified using a real implementation of CASSI. The results of the experiments show a significant gain in spatial and spectral imaging quality over the traditional pixel grouping technique. PMID:26836551
Hybrid Compton camera/coded aperture imaging system
Mihailescu, Lucian; Vetter, Kai M.
2012-04-10
A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.
Combined-transform coding scheme for medical images
NASA Astrophysics Data System (ADS)
Zhang, Ya-Qin; Loew, Murray H.; Pickholtz, Raymond L.
1991-06-01
Transform coding has been used successfully for radiological image compression in the picture archival and communication system (PACS) and other applications. However, it suffers from the artifact known as the 'blocking effect' due to division into subblocks, which is very undesirable in the clinical environment. In this paper, we propose a combined-transform coding (CTC) scheme to reduce this effect and achieve better subjective performance. In the combined-transform coding scheme, we first divide the image into two sets that have different correlation properties, namely the upper image set (UIS) and lower image set (LIS). The UIS contains the most significant information and more correlation, and the LIS contains the less significant information. The UIS is compressed losslessly without dividing into blocks, and the LIS is coded by conventional block transform coding. Since the correlation in the UIS is largely reduced (without distortion), the inter-block correlation, and hence the 'blocking effect,' is significantly reduced. This paper first describes the proposed CTC scheme and investigates its information-theoretic properties. Then, computer simulation results for a class of AP view chest x-ray images are presented. A comparison between the CTC scheme and the conventional discrete cosine transform (DCT) and discrete Walsh-Hadamard transform (DWHT) is made to demonstrate the performance improvement of the proposed scheme. The advantages of the proposed CTC scheme also include (1) no ringing effect due to no error propagation across the boundary, (2) no additional computation, and (3) the ability to hold distortion below a certain threshold. In addition, we found that the idea of combined coding can also be used in lossless coding, and a slight improvement in compression performance can be achieved if used properly. Finally, we point out that this scheme has its advantages in medical image transmission over a noisy channel or the packet-switched network in case of
Lung cancer-a fractal viewpoint.
Lennon, Frances E; Cianci, Gianguido C; Cipriani, Nicole A; Hensing, Thomas A; Zhang, Hannah J; Chen, Chin-Tu; Murgu, Septimiu D; Vokes, Everett E; Vannier, Michael W; Salgia, Ravi
2015-11-01
Fractals are mathematical constructs that show self-similarity over a range of scales and non-integer (fractal) dimensions. Owing to these properties, fractal geometry can be used to efficiently estimate the geometrical complexity, and the irregularity of shapes and patterns observed in lung tumour growth (over space or time), whereas the use of traditional Euclidean geometry in such calculations is more challenging. The application of fractal analysis in biomedical imaging and time series has shown considerable promise for measuring processes as varied as heart and respiratory rates, neuronal cell characterization, and vascular development. Despite the advantages of fractal mathematics and numerous studies demonstrating its applicability to lung cancer research, many researchers and clinicians remain unaware of its potential. Therefore, this Review aims to introduce the fundamental basis of fractals and to illustrate how analysis of fractal dimension (FD) and associated measurements, such as lacunarity (texture) can be performed. We describe the fractal nature of the lung and explain why this organ is particularly suited to fractal analysis. Studies that have used fractal analyses to quantify changes in nuclear and chromatin FD in primary and metastatic tumour cells, and clinical imaging studies that correlated changes in the FD of tumours on CT and/or PET images with tumour growth and treatment responses are reviewed. Moreover, the potential use of these techniques in the diagnosis and therapeutic management of lung cancer are discussed. PMID:26169924
Split field coding: low complexity error-resilient entropy coding for image compression
NASA Astrophysics Data System (ADS)
Meany, James J.; Martens, Christopher J.
2008-08-01
In this paper, we describe split field coding, an approach for low complexity, error-resilient entropy coding which splits code words into two fields: a variable length prefix and a fixed length suffix. Once a prefix has been decoded correctly, then the associated fixed length suffix is error-resilient, with bit errors causing no loss of code word synchronization and only a limited amount of distortion on the decoded value. When the fixed length suffixes are segregated to a separate block, this approach becomes suitable for use with a variety of methods which provide varying protection to different portions of the bitstream, such as unequal error protection or progressive ordering schemes. Split field coding is demonstrated in the context of a wavelet-based image codec, with examples of various error resilience properties, and comparisons to the rate-distortion and computational performance of JPEG 2000.
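The prefix/suffix split can be illustrated with an Exp-Golomb-style code (a sketch of the general idea, not the paper's exact code tables): the variable-length unary prefix selects a length class, and the fixed-length suffix selects the value within it, so a corrupted suffix bit perturbs the decoded value boundedly without losing code word synchronization.

```python
def split_field_encode(value):
    """Exp-Golomb-style split field code (illustrative, not the paper's
    tables): unary prefix = length class, fixed-length suffix = offset
    within the class."""
    n = value + 1
    k = n.bit_length() - 1                    # suffix length in bits
    prefix = "0" * k + "1"                    # variable-length field
    suffix = format(n - (1 << k), "0{}b".format(k)) if k else ""
    return prefix, suffix

def split_field_decode(prefix, suffix):
    k = prefix.index("1")                     # class recovered from prefix
    return (1 << k) + (int(suffix, 2) if suffix else 0) - 1
```

For example, the value 12 encodes as prefix "0001" with suffix "101"; flipping the last suffix bit decodes to 11, an error confined within the class rather than propagating through the bitstream.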
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
Orourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
Barker-coded excitation in ophthalmological ultrasound imaging
Zhou, Sheng; Wang, Xiao-Chun; Yang, Jun; Ji, Jian-Jun; Wang, Yan-Qun
2014-01-01
High-frequency ultrasound is an attractive means to obtain fine-resolution images of biological tissues for ophthalmologic imaging. To overcome the tradeoff between axial resolution and detection depth inherent in conventional single-pulse excitation, this study develops a new method which uses 13-bit Barker-coded excitation and a mismatched filter for high-frequency ophthalmologic imaging. A novel imaging platform was designed after trying out various encoding methods. The simulation and experimental results show that the mismatched filter achieves an output main-lobe-to-side-lobe ratio 9.7 times that of the matched filter. The coded excitation method has significant advantages over the single-pulse excitation system in terms of a lower mechanical index (MI), a higher resolution, and a deeper detection depth, which improve the quality of ophthalmic tissue imaging. Therefore, this method has great value for scientific applications and the medical market. PMID:25356093
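The matched-filter baseline the mismatched filter is compared against can be reproduced in a few lines: pulse compression of the 13-bit Barker code yields a main lobe of 13 with side lobes of magnitude 1 (the 9.7x improvement of the paper's mismatched filter is not modeled here):

```python
import numpy as np

# 13-bit Barker code: the longest known binary sequence whose aperiodic
# autocorrelation side lobes all have magnitude at most 1.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# Pulse compression by matched filtering = correlation with the code.
autocorr = np.correlate(barker13, barker13, mode="full")
peak = int(autocorr.max())                            # main lobe
side_lobe = int(np.abs(np.delete(autocorr, len(autocorr) // 2)).max())
```

The 13:1 main-to-side-lobe ratio (about 22 dB) is what allows a long, low-amplitude transmit burst to be compressed back to fine axial resolution.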
Image coding by way of wavelets
NASA Technical Reports Server (NTRS)
Shahshahani, M.
1993-01-01
The application of two wavelet transforms to image compression is discussed. It is noted that the Haar transform, with proper bit allocation, has performance that is visually superior to an algorithm based on a Daubechies filter and to the discrete cosine transform based Joint Photographic Experts Group (JPEG) algorithm at compression ratios exceeding 20:1. In terms of the root-mean-square error, the performance of the Haar transform method is basically comparable to that of the JPEG algorithm. The implementation of the Haar transform can be achieved in integer arithmetic, making it very suitable for applications requiring real-time performance.
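The integer-arithmetic property highlighted above can be shown with the reversible S-transform variant of the Haar transform (one 1-D level; the codec's bit allocation is not modeled):

```python
import numpy as np

def haar_forward(x):
    """One level of the integer Haar (S-) transform on an even-length
    signal: integer averages and differences, invertible exactly in
    integer arithmetic -- no floating point needed."""
    a, b = x[0::2].astype(int), x[1::2].astype(int)
    low = (a + b) >> 1           # coarse approximation
    high = a - b                 # detail
    return low, high

def haar_inverse(low, high):
    a = low + ((high + 1) >> 1)  # exact reconstruction of even samples
    b = a - high                 # ...and odd samples
    out = np.empty(2 * len(low), dtype=int)
    out[0::2], out[1::2] = a, b
    return out
```

Because every operation is an integer add, subtract, or shift, the transform is fast and bit-exact, which is what makes it suitable for real-time applications.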
Investigation into How 8th Grade Students Define Fractals
ERIC Educational Resources Information Center
Karakus, Fatih
2015-01-01
The analysis of 8th grade students' concept definitions and concept images can provide information about their mental schema of fractals. There is limited research on students' understanding and definitions of fractals. Therefore, this study aimed to investigate the elementary students' definitions of fractals based on concept image and concept…
Multiple wavelet-tree-based image coding and robust transmission
NASA Astrophysics Data System (ADS)
Cao, Lei; Chen, Chang Wen
2004-10-01
In this paper, we present techniques based on multiple wavelet-tree coding for robust image transmission. The algorithm of set partitioning in hierarchical trees (SPIHT) is a state-of-the-art technique for image compression. This variable length coding (VLC) technique, however, is extremely sensitive to channel errors. To improve the error resilience capability and at the same time keep the high source coding efficiency of VLC, we propose to encode each wavelet tree or group of wavelet trees using the SPIHT algorithm independently. Instead of encoding the entire image as one bitstream, multiple bitstreams are generated. Therefore, error propagation is limited to within an individual bitstream. Two methods, based on subsampling and on human visual sensitivity, are proposed to group the wavelet trees. The multiple bitstreams are further protected by rate compatible punctured convolutional (RCPC) codes. Unequal error protection is provided both for different bitstreams and for different bit segments inside each bitstream. We also investigate the improvement of error resilience through error resilient entropy coding (EREC) and wavelet tree coding when channel corruption is slight. A simple post-processing technique is also proposed to alleviate the effect of residual errors. We demonstrate through simulations that systems with these techniques can achieve much better performance than systems transmitting a single bitstream in noisy environments.
Distributed wavefront coding for wide angle imaging system
NASA Astrophysics Data System (ADS)
Larivière-Bastien, Martin; Zhang, Hu; Thibault, Simon
2011-10-01
The emerging paradigm of imaging systems, known as wavefront coding, which employs joint optimization of both the optical system and the digital post-processing system, has not only increased the degrees of design freedom but also brought several significant system-level benefits. The effectiveness of wavefront coding has been demonstrated by several proof-of-concept systems in the reduction of focus-related aberrations and extension of depth of focus. While previous research on wavefront coding was mainly targeted at imaging systems having a small or modest field of view (FOV), we present a preliminary study on wavefront coding applied to panoramic optical systems. Unlike traditional wavefront coding systems, which only require the constancy of the modulation transfer function (MTF) over an extended focus range, wavefront-coded panoramic systems particularly emphasize the mitigation of significant off-axis aberrations such as field curvature, coma, and astigmatism. The restrictions of using a traditional generalized cubic polynomial pupil phase mask for wide angle systems are studied in this paper. It is shown that a traditional approach can be used when the variation of the off-axis aberrations remains modest. Consequently, we propose to study how a distributed wavefront coding approach, where two surfaces are used for encoding the wavefront, can be applied to wide angle lenses. A few cases designed using Zemax are presented and discussed.
Image statistics decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Pitt, G. H., III; Swanson, L.; Yuen, J. H.
1987-01-01
It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so the feasibility of it and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.
Medical image registration using sparse coding of image patches.
Afzali, Maryam; Ghaffari, Aboozar; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid
2016-06-01
Image registration is a basic task in medical image processing applications like group analysis and atlas construction. Similarity measure is a critical ingredient of image registration. Intensity distortion of medical images is not considered in most previous similarity measures. Therefore, in the presence of bias field distortions, they do not generate an acceptable registration. In this paper, we propose a sparse based similarity measure for mono-modal images that considers non-stationary intensity and spatially-varying distortions. The main idea behind this measure is that the aligned image is constructed by an analysis dictionary trained using the image patches. For this purpose, we use "Analysis K-SVD" to train the dictionary and find the sparse coefficients. We utilize image patches to construct the analysis dictionary and then we employ the proposed sparse similarity measure to find a non-rigid transformation using free form deformation (FFD). Experimental results show that the proposed approach is able to robustly register 2D and 3D images in both simulated and real cases. The proposed method outperforms other state-of-the-art similarity measures and decreases the transformation error compared to the previous methods. Even in the presence of bias field distortion, the proposed method aligns images without any preprocessing. PMID:27085311
Construction of fractal nanostructures based on Kepler-Shubnikov nets
NASA Astrophysics Data System (ADS)
Ivanov, V. V.; Talanov, V. M.
2013-05-01
A system of information codes for deterministic fractal lattices and sets of multifractal curves is proposed. An iterative modular design was used to obtain a series of deterministic fractal lattices with generators in the form of fragments of 2D structures and a series of multifractal curves (based on some Kepler-Shubnikov nets) having Cantor set properties. The main characteristics of fractal structures and their lacunar spectra are determined. A hierarchical principle is formulated for modules of regular fractal structures.
A Brief Historical Introduction to Fractals and Fractal Geometry
ERIC Educational Resources Information Center
Debnath, Lokenath
2006-01-01
This paper deals with a brief historical introduction to fractals, fractal dimension and fractal geometry. Many fractals including the Cantor fractal, the Koch fractal, the Minkowski fractal, the Mandelbrot and Given fractal are described to illustrate self-similar geometrical figures. This is followed by the discovery of dynamical systems and…
NASA Astrophysics Data System (ADS)
Kimura, Michio; Kuranishi, Makoto; Sukenobu, Yoshiharu; Watanabe, Hiroki; Nakajima, Takashi; Morimura, Shinya; Kabata, Shun
2002-05-01
The DICOM standard includes non-image information such as imaging study ordering data and performed procedure data, which are used for sharing information among HIS/RIS/PACS/modalities and are essential for IHE. In order to bring such parts of the DICOM standard into force in Japan, a joint committee of JIRA and JAHIS (vendor associations) established the JJ1017 management guideline. It specifies, for example, which items are legally required in Japan while remaining optional in the DICOM standard. Then, what should be used for the examination type, regional, and directional codes? Our investigation revealed that the DICOM tables do not include items that are sufficiently detailed for use in Japan. This is because radiology departments (radiologists) in the US exercise greater discretion in image examination than in Japan, and the contents of orders from requesting physicians do not include the extra details used in Japan. Therefore, we have generated JJ1017 codes for these three code sets based on the JJ1017 guideline. The stem part of the JJ1017 code partially employs the DICOM codes in order to remain in line with the DICOM standard. The JJ1017 codes are to be included not only in the IHE-J specifications but also in Ministry recommendations on health data exchange.
Coding of images by methods of a spline interpolation
NASA Astrophysics Data System (ADS)
Kozhemyako, Vladimir P.; Maidanuik, V. P.; Etokov, I. A.; Zhukov, Konstantin M.; Jorban, Saleh R.
2000-06-01
When interpolation methods are used in image coding, linear methods of component forming are usually applied. However, given the enormous increase in computing speed and hardware integration density, more sophisticated interpolation methods, in particular spline interpolation, become of special interest. A spline interpolation is an approximation performed by a spline consisting of polynomial segments, usually cubic parabolas. In this article, the image is analyzed with a 5 x 5 aperture, rejecting all but one base sample per 5 x 5 fragment to form the low-frequency image component. The omitted source samples are restored by spline interpolation; the high-frequency image component is then formed by subtracting the low-frequency component from the initial image and quantizing the result. At the final stage, Huffman coding is performed to remove statistical redundancy. Extensive experiments with various images showed that compression factors in the range of 10-70 can be achieved, which for the majority of test images exceeds the compression factor of JPEG standard applications at the same image quality. The research shows that spline approximation improves both restored image quality and compression factor compared with linear interpolation. Encoding program modules were developed for BMP-format files on the Windows and MS-DOS platforms.
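A 1-D sketch of this pipeline, with np.interp as a linear stand-in for the article's cubic-spline reconstruction (the signal, block size, and names are illustrative):

```python
import numpy as np

# Smooth test signal; one base sample is kept per 5-sample block.
x = np.sin(np.linspace(0.0, 4.0 * np.pi, 101))
base_idx = np.arange(0, 101, 5)

# Low-frequency component restored from the sparse base samples
# (linear here; the article argues a cubic spline does better).
low = np.interp(np.arange(101), base_idx, x[base_idx])

# High-frequency component = what the interpolation misses. It is
# small, so after quantization the Huffman stage spends few bits on it.
residual = x - low
```

The base samples are reproduced exactly and the residual stays far below the signal amplitude, which is the source of the compression gain.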
Investigation of the image coding method for three-dimensional range-gated imaging
NASA Astrophysics Data System (ADS)
Laurenzis, Martin; Bacher, Emmanuel; Schertzer, Stéphane; Christnacher, Frank
2011-11-01
In this publication we investigate an image coding method for 3D range-gated imaging. The method is based on multiple exposures of range-gated images, enabling the coding of many range gates in a limited number of images. For instance, the depth mapping range can be enlarged by a factor of 12 by using 3 images and specific 12T image coding sequences. Further, we present a node model that determines the coding sequences and dramatically reduces the time needed to calculate the number of possible sequences. Finally, we demonstrate and discuss the application of 12T sequences with clock periods from T = 200 ns to 400 ns.
Low bit rate coding of Earth science images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1993-01-01
In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on recent advances in the area of variable-rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.
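The residual (multistage) vector quantization idea can be sketched as follows: each stage quantizes the error left by the previous stages, so distortion drops stage by stage. This is a minimal numpy illustration with a toy k-means trainer and Gaussian stand-in data; the paper's variable-rate RVQ and its sub-band framework are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest(codebook, vecs):
    """Index of the nearest codeword (Euclidean) for each vector."""
    d = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

def train_stage(vecs, k, iters=10):
    """Tiny k-means, a stand-in for proper LBG codebook design."""
    cb = vecs[rng.choice(len(vecs), k, replace=False)].copy()
    for _ in range(iters):
        idx = nearest(cb, vecs)
        for j in range(k):
            sel = vecs[idx == j]
            if len(sel):
                cb[j] = sel.mean(0)
    return cb

# residual VQ: each stage quantizes what the previous stages left behind
blocks = rng.normal(size=(2000, 16))      # stand-in for 4x4 sub-band blocks
stages, residual, errs = [], blocks.copy(), []
for _ in range(3):
    cb = train_stage(residual, k=32)
    residual = residual - cb[nearest(cb, residual)]
    stages.append(cb)
    errs.append(float((residual ** 2).mean()))

# the encoding of a block is the list of per-stage indices;
# decoding sums the indexed codeword from every stage
print(errs)   # distortion after each stage: decreasing
```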
Compressing industrial computed tomography images by means of contour coding
NASA Astrophysics Data System (ADS)
Jiang, Haina; Zeng, Li
2013-10-01
An improved method for compressing industrial computed tomography (CT) images is presented. With higher resolution and precision, the volume of industrial CT data has grown larger and larger. Considering that industrial CT images are approximately piecewise constant, we develop a compression method based on contour coding. The traditional contour-based method for compressing gray images usually needs two steps, contour extraction followed by compression, which hurts compression efficiency. We therefore merge the Freeman encoding idea into an improved method for two-dimensional contour extraction (2-D-IMCE) to improve compression efficiency. By exploiting continuity and logical linking, preliminary contour codes are obtained directly and simultaneously with contour extraction, reducing the two steps of the traditional contour-based method to one. Finally, Huffman coding is employed to further losslessly compress the preliminary contour codes. Experimental results show that this method obtains a good compression ratio while keeping satisfactory quality in the compressed images.
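The Freeman encoding idea mentioned above represents a contour as a start point plus one 3-bit direction symbol per step, instead of two coordinates per point; a Huffman coder then exploits the skewed symbol statistics. A minimal sketch (the 2-D-IMCE extraction itself is not reproduced here):

```python
# 8-directional Freeman moves: 0=E, 1=NE, 2=N, ... as (row, col) offsets
MOVES = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_encode(points):
    """Freeman chain code: start point + one 3-bit symbol per contour step."""
    codes = []
    for (r0, c0), (r1, c1) in zip(points, points[1:]):
        codes.append(MOVES.index((r1 - r0, c1 - c0)))
    return points[0], codes

def chain_decode(start, codes):
    pts = [start]
    for s in codes:
        dr, dc = MOVES[s]
        r, c = pts[-1]
        pts.append((r + dr, c + dc))
    return pts

# a small closed 8-connected contour: the boundary of a 3x3 square
contour = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2),
           (2, 1), (2, 0), (1, 0), (0, 0)]
start, codes = chain_encode(contour)
print(codes)   # 3 bits per step instead of two coordinates per point
```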
Digital Image Analysis for DETECHIP(®) Code Determination.
Lyon, Marcus; Wilson, Mark V; Rouhier, Kerry A; Symonsbergen, David J; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E; Sikich, Sharmin M; Jackson, Abby
2012-08-01
DETECHIP(®) is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP(®) used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP(®). Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop, and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods. PMID:25267940
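The core measurement, averaging RGB values over a well region and scoring the change against a control, can be sketched in a few lines. The region geometry, tolerance, and scoring rule below are invented for illustration and are not the paper's macro.

```python
import numpy as np

def spot_rgb(img, top, left, size):
    """Mean R, G, B over a square well region of an RGB image array."""
    patch = img[top:top + size, left:left + size, :]
    return patch.reshape(-1, 3).mean(axis=0)

def code_digit(sample_rgb, control_rgb, tol=10.0):
    """1 if any channel shifted by more than `tol` counts, else 0
    (a simplified stand-in for the paper's color-change scoring)."""
    return int(np.any(np.abs(sample_rgb - control_rgb) > tol))

# synthetic stand-in for a scanned plate: control well vs reacted well
img = np.full((64, 128, 3), 200.0)
img[:, 64:, 0] -= 40                     # reacted well: red channel drops
control = spot_rgb(img, 16, 16, 32)
sample = spot_rgb(img, 16, 80, 32)
print(code_digit(sample, control))       # -> 1
```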
NASA Astrophysics Data System (ADS)
Boichuk, T. M.; Bachinskiy, V. T.; Vanchuliak, O. Ya.; Minzer, O. P.; Garazdiuk, M.; Motrich, A. V.
2014-08-01
This research presents the results of an investigation of laser-polarization fluorescence of biological layers (histological sections of the myocardium). The polarization structure of autofluorescence images of biological tissue layers was detected and investigated. A model is proposed that describes the formation of polarization-inhomogeneous autofluorescence images of optically anisotropic biological layers. On this basis, the method of laser autofluorescence polarimetry is justified analytically and tested experimentally, and its effectiveness in the postmortem diagnosis of infarction is analyzed. Objective criteria (statistical moments) for differentiating autofluorescence images of histological sections of the myocardium were defined, and the operational characteristics (sensitivity, specificity, accuracy) of the technique were determined.
NASA Astrophysics Data System (ADS)
Wuorinen, Charles
2015-03-01
Any of the arts may produce exemplars that have fractal characteristics. There may be fractal painting, fractal poetry, and the like. But these will always be specific instances, not necessarily displaying intrinsic properties of the art-medium itself. Only music, I believe, of all the arts possesses an intrinsically fractal character, so that its very nature is fractally determined. Thus, it is reasonable to assert that any instance of music is fractal...
Skeleton-based morphological coding of binary images.
Kresch, R; Malah, D
1998-01-01
This paper presents new properties of the discrete morphological skeleton representation of binary images, along with a novel coding scheme for lossless binary image compression that is based on these properties. Following a short review of the theoretical background, two sets of new properties of the discrete morphological skeleton representation of binary images are proved. The first one leads to the conclusion that only the radii of skeleton points belonging to a subset of the ultimate erosions are needed for perfect reconstruction. This corresponds to a lossless sampling of the quench function. The second set of new properties is related to deterministic prediction of skeletal information in a progressive transmission scheme. Based on the new properties, a novel coding scheme for binary images is presented. The proposed scheme is suitable for progressive transmission and fast implementation. Computer simulations, also presented, show that the proposed coding scheme substantially improves the results obtained by previous skeleton-based coders, and performs better than classical coders, including run-length/Huffman, quadtree, and chain coders. For facsimile images, its performance can be placed between the modified read (MR) method (K=4) and modified modified read (MMR) method. PMID:18276206
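The underlying skeleton representation is the classical Lantuejoul decomposition: the k-th skeleton subset is the k-fold erosion of the image minus its opening, and the image is recovered exactly as the union of each subset dilated k times. A self-contained numpy sketch with a 3x3 structuring element (not the paper's coder, just the representation it builds on):

```python
import numpy as np

SE = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]  # 3x3 square element

def shift(img, dr, dc):
    """Translate a boolean image, padding with background (False)."""
    out = np.zeros_like(img)
    h, w = img.shape
    out[max(dr, 0):h + min(dr, 0), max(dc, 0):w + min(dc, 0)] = \
        img[max(-dr, 0):h + min(-dr, 0), max(-dc, 0):w + min(-dc, 0)]
    return out

def erode(img):
    acc = np.ones_like(img)
    for dr, dc in SE:
        acc &= shift(img, dr, dc)
    return acc

def dilate(img):
    acc = np.zeros_like(img)
    for dr, dc in SE:
        acc |= shift(img, dr, dc)
    return acc

def skeleton_subsets(X):
    """Lantuejoul: S_k = (X eroded k times) minus its opening."""
    subsets, Xk = [], X.copy()
    while Xk.any():
        subsets.append(Xk & ~dilate(erode(Xk)))   # opening = dilation of erosion
        Xk = erode(Xk)
    return subsets

def reconstruct(subsets):
    """Perfect reconstruction: X = union over k of S_k dilated k times."""
    Y = np.zeros_like(subsets[0])
    for k, Sk in enumerate(subsets):
        Dk = Sk.copy()
        for _ in range(k):
            Dk = dilate(Dk)
        Y |= Dk
    return Y

X = np.zeros((20, 24), bool)
X[4:16, 5:19] = True                      # a filled rectangle
parts = skeleton_subsets(X)
print(len(parts), (reconstruct(parts) == X).all())
```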
Fractal signatures in the aperiodic Fibonacci grating.
Verma, Rupesh; Banerjee, Varsha; Senthilkumaran, Paramasivam
2014-05-01
The Fibonacci grating (FbG) is an archetypal example of aperiodicity and self-similarity. While aperiodicity distinguishes it from a fractal, self-similarity identifies it with a fractal. Our paper investigates the outcome of these complementary features on the FbG diffraction profile (FbGDP). We find that the FbGDP has unique characteristics (e.g., no reduction in intensity with increasing generations), in addition to fractal signatures (e.g., a non-integer fractal dimension). These make the Fibonacci architecture potentially useful in image forming devices and other emerging technologies. PMID:24784044
Interplay between JPEG-2000 image coding and quality estimation
NASA Astrophysics Data System (ADS)
Pinto, Guilherme O.; Hemami, Sheila S.
2013-03-01
Image quality and utility estimators aspire to quantify the perceptual resemblance and the usefulness of a distorted image when compared to a reference natural image, respectively. Image-coders, such as JPEG-2000, traditionally aspire to allocate the available bits to maximize the perceptual resemblance of the compressed image when compared to a reference uncompressed natural image. Specifically, this can be accomplished by allocating the available bits to minimize the overall distortion, as computed by a given quality estimator. This paper applies five image quality and utility estimators, SSIM, VIF, MSE, NICE and GMSE, within a JPEG-2000 encoder for rate-distortion optimization to obtain new insights on how to improve JPEG-2000 image coding for quality and utility applications, as well as to improve the understanding about the quality and utility estimators used in this work. This work develops a rate-allocation algorithm for arbitrary quality and utility estimators within the Post-Compression Rate-Distortion Optimization (PCRD-opt) framework in JPEG-2000 image coding. Performance of the JPEG-2000 image coder when used with a variety of utility and quality estimators is then assessed. The estimators fall into two broad classes, magnitude-dependent (MSE, GMSE and NICE) and magnitude-independent (SSIM and VIF). They further differ on their use of the low-frequency image content in computing their estimates. The impact of these computational differences is analyzed across a range of images and bit rates. In general, performance of the JPEG-2000 coder below 1.6 bits/pixel with any of these estimators is highly content dependent, with the most relevant content being the amount of texture in an image and whether the strongest gradients in an image correspond to the main contours of the scene. Above 1.6 bits/pixel, all estimators produce visually equivalent images. As a result, the MSE estimator provides the most consistent performance across all images, while specific
Image Coding By Vector Quantization In A Transformed Domain
NASA Astrophysics Data System (ADS)
Labit, C.; Marescq, J. P.
1986-05-01
TV images are coded using vector quantization in a transformed domain. The method exploits the spatial redundancy of small 4x4 blocks of pixels: first, a DCT (or Hadamard) transform is performed on these blocks. A classification algorithm ranks them into classes based on visual and transform properties. For each class, the high-energy coefficients are retained, and a codebook is built by vector quantization for the remaining AC part of the transformed blocks. The codewords are referenced by an index. Each block is then coded by specifying its DC coefficient and the associated index.
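A minimal sketch of this transform-plus-VQ pipeline: an orthonormal 4x4 DCT, a single AC codebook (the paper trains one per block class; classification is omitted here), and an encoder whose output is just a DC coefficient and a codeword index. The codebook is a slice of random training vectors, purely for illustration.

```python
import numpy as np

N = 4
# orthonormal DCT-II matrix: C[k, n]
C = np.array([[np.sqrt((1 if k == 0 else 2) / N) *
               np.cos(np.pi * (2 * n + 1) * k / (2 * N))
               for n in range(N)] for k in range(N)])

def dct2(b):
    return C @ b @ C.T        # 2-D DCT of a 4x4 block

def idct2(y):
    return C.T @ y @ C        # inverse transform

rng = np.random.default_rng(1)
train = rng.normal(size=(500, N, N))
# 64 AC codewords; a real coder would train these per class (e.g. LBG)
codebook = np.array([dct2(b).ravel()[1:] for b in train[:64]])

def encode(block):
    y = dct2(block).ravel()
    idx = int(((codebook - y[1:]) ** 2).sum(1).argmin())
    return y[0], idx          # DC coefficient + codebook index

def decode(dc, idx):
    y = np.concatenate(([dc], codebook[idx]))
    return idct2(y.reshape(N, N))

block = rng.normal(size=(N, N))
dc, idx = encode(block)
print(float(np.abs(decode(dc, idx) - block).mean()))
```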
Colored coded-apertures for spectral image unmixing
NASA Astrophysics Data System (ADS)
Vargas, Hector M.; Arguello Fuentes, Henry
2015-10-01
Hyperspectral remote sensing technology provides detailed spectral information from every pixel in an image. Due to the low spatial resolution of hyperspectral image sensors, and the presence of multiple materials in a scene, each pixel can contain more than one spectral signature. Therefore, endmember extraction is used to determine the pure spectral signature of the mixed materials and its corresponding abundance map in a remotely sensed hyperspectral scene. Advanced endmember extraction algorithms have been proposed to solve this linear problem, called spectral unmixing. However, such techniques require the acquisition of the complete hyperspectral data cube to perform the unmixing procedure. Researchers have shown that using colored coded-apertures improves the quality of reconstruction in compressive spectral imaging (CSI) systems under compressive sensing (CS) theory. This work aims at developing a compressive supervised spectral unmixing scheme to estimate the endmembers and the abundance map from compressive measurements. The compressive measurements are acquired by using colored coded-apertures in a compressive spectral imaging system. Then a numerical procedure estimates the sparse vector representation in a 3D dictionary by solving a constrained sparse optimization problem. The 3D dictionary is formed by a 2-D wavelet basis and a known endmember spectral library, where the wavelet basis is used to exploit the spatial information. The colored coded-apertures are designed such that the sensing matrix satisfies the restricted isometry property with high probability. Simulations show that the proposed scheme attains comparable results to the full data cube unmixing technique, but using fewer measurements.
High transparency coded apertures in planar nuclear medicine imaging.
Starfield, David M; Rubin, David M; Marwala, Tshilidzi
2007-01-01
Coded apertures provide an alternative to the collimators of nuclear medicine imaging, and advances in the field have lessened the artifacts that are associated with the near-field geometry. Thickness of the aperture material, however, results in a decoded image with thickness artifacts, and constrains both image resolution and the available manufacturing techniques. Thus in theory, thin apertures are clearly desirable, but high transparency leads to a loss of contrast in the recorded data. Coupled with the quantization effects of detectors, this leads to significant noise in the decoded image. This noise must be dependent on the bit-depth of the gamma camera. If there are a sufficient number of measurable values, high transparency need not adversely affect the signal-to-noise ratio. This novel hypothesis is tested by means of a ray-tracing computer simulator. The simulation results presented in the paper show that replacing a highly opaque coded aperture with a highly transparent aperture, simulated with an 8-bit gamma camera, worsens the root-mean-square error measurement. However, when simulated with a 16-bit gamma camera, a highly transparent coded aperture significantly reduces both thickness artifacts and the root-mean-square error measurement. PMID:18002997
Depth-controlled 3D TV image coding
NASA Astrophysics Data System (ADS)
Chiari, Armando; Ciciani, Bruno; Romero, Milton; Rossi, Ricardo
1998-04-01
Conventional 3D-TV codecs processing one down-compatible (either left or right) channel may optionally include the extraction of the disparity field associated with the stereo pairs to support the coding of the complementary channel. A two-fold improvement over such approaches is proposed in this paper by exploiting 3D features retained in the stereo pairs to reduce the redundancies in both channels according to their visual sensitivity. Through an a-priori disparity field analysis, our coding scheme separates a region of interest from the foreground/background of the reproduced volume space in order to code them selectively based on their visual relevance. The region of interest is identified as the one in focus for the shooting device. By suitably scaling the DCT coefficients so that precision is reduced for image blocks lying in less relevant areas, our approach reduces the signal energy in the background/foreground patterns while retaining finer detail in the more relevant image portions. From an implementation point of view, it is worth noting that the proposed system keeps its surplus processing power on the encoder side only. Simulation results show improvements such as better image quality for a given transmission bit rate, or graceful quality degradation of the reconstructed images with decreasing data rates.
Microbialites on Mars: a fractal analysis of the Athena's microscopic images
NASA Astrophysics Data System (ADS)
Bianciardi, G.; Rizzo, V.; Cantasano, N.
2015-10-01
The Mars Exploration Rovers investigated Martian plains where laminated sedimentary rocks are present. The Athena morphological investigation [1] showed microstructures organized in intertwined filaments of microspherules: a texture we have also found in samples of terrestrial (biogenic) stromatolites and other microbialites, and not in pseudo-abiogenic stromatolites. We performed a quantitative image analysis in order to compare 50 microbialite images with 50 rover (Opportunity and Spirit) images (approximately 30,000 microstructures in each set). Contours were extracted and morphometric indexes obtained: geometric and algorithmic complexities, entropy, tortuosity, minimum and maximum diameters. Terrestrial and Martian textures proved to be multifractal. Mean values and confidence intervals from the Martian images overlapped perfectly with those from terrestrial samples. The probability of this occurring by chance is less than 1/2^8 (p<0.004). Our work shows evidence of a widespread presence of microbialites in the Martian outcroppings, i.e., the presence of unicellular life on ancient Mars, when, without any doubt, liquid water flowed on the Red Planet.
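Analyses of this kind rest on estimating a fractal dimension from extracted structures. A standard box-counting estimator, a simpler cousin of the multifractal indexes used in the paper, can be sketched as follows (the mask and box sizes are illustrative):

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8)):
    """Fractal dimension: slope of log(box count) vs log(1/box size)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        m = mask[:h - h % s, :w - w % s]          # trim so s-boxes tile exactly
        boxes = m.reshape(m.shape[0] // s, s,
                          m.shape[1] // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())                # occupied boxes at this scale
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# sanity checks: a filled square is 2-dimensional, a line is 1-dimensional
mask = np.zeros((64, 64), bool)
mask[8:56, 8:56] = True
print(box_count_dimension(mask))   # ~2.0
```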
Fractal Characterization of Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Qiu, Hon-Iie; Lam, Nina Siu-Ngan; Quattrochi, Dale A.; Gamon, John A.
1999-01-01
Two Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral images selected from the Los Angeles area, one representing an urban and the other a rural landscape, were used to examine spatial complexity across the entire spectrum of the remote sensing data. Using the ICAMS (Image Characterization And Modeling System) software, we computed fractal dimension values via the isarithm and triangular prism methods for all 224 bands in the two AVIRIS scenes. The resultant fractal dimensions reflect changes in image complexity across the spectral range of the hyperspectral images. Both the isarithm and triangular prism methods detect unusually high D values in the spectral bands that fall within the atmospheric absorption and scattering zones, where signal-to-noise ratios are low. Fractal dimensions for the urban area were higher than for the rural landscape, and the differences between the resulting D values are more distinct in the visible bands. The triangular prism method is sensitive to a few random speckles in the images, leading to a lower dimensionality. On the contrary, the isarithm method ignores the speckles and focuses on the major variation dominating the surface, thus resulting in a higher dimension. The fractal curves plotted across the entire bandwidth of the hyperspectral images could be used to distinguish landscape types as well as to screen noisy bands.
Coded aperture imaging with a HURA coded aperture and a discrete pixel detector
NASA Astrophysics Data System (ADS)
Byard, Kevin
An investigation into the gamma ray imaging properties of a hexagonal uniformly redundant array (HURA) coded aperture and a detector consisting of discrete pixels constituted the major research effort. Such a system offers distinct advantages for the development of advanced gamma ray astronomical telescopes in terms of the provision of high quality sky images in conjunction with an imager plane which has the capacity to reject background noise efficiently. Much of the research was performed as part of the European Space Agency (ESA) sponsored study into a prospective space astronomy mission, GRASP. The effort involved both computer simulations and a series of laboratory test images. A detailed analysis of the system point spread function (SPSF) of imaging planes which incorporate discrete pixel arrays is presented and the imaging quality quantified in terms of the signal to noise ratio (SNR). Computer simulations of weak point sources in the presence of detector background noise were also investigated. Theories developed during the study were evaluated by a series of experimental measurements with a Co-57 gamma ray point source, an Anger camera detector, and a rotating HURA mask. These tests were complemented by computer simulations designed to reproduce, as closely as possible, the experimental conditions. The 60 degree antisymmetry property of HURAs was also employed to remove noise due to detector systematic effects present in the experimental images, and rendered a more realistic comparison of the laboratory tests with the computer simulations. Plateau removal and weighted deconvolution techniques were also investigated as methods for the reduction of the coding error noise associated with the gamma ray images.
Investigating Lossy Image Coding Using the PLHaar Transform
Senecal, J G; Lindstrom, P; Duchaineau, M A; Joy, K I
2004-11-16
We developed the Piecewise-Linear Haar (PLHaar) transform, an integer wavelet-like transform. PLHaar does not have dynamic range expansion, i.e. it is an n-bit to n-bit transform. To our knowledge PLHaar is the only reversible n-bit to n-bit transform that is suitable for lossy and lossless coding. We are investigating PLHaar's use in lossy image coding. Preliminary results from thresholding transform coefficients show that PLHaar does not produce objectionable artifacts like prior n-bit to n-bit transforms, such as the transform of Chao et al. (CFH). Also, at lower bitrates PLHaar images have increased contrast. For a given set of CFH and PLHaar coefficients with equal entropy, the PLHaar reconstruction is more appealing, although the PSNR may be lower.
Preconditioning for multiplexed imaging with spatially coded PSFs.
Horisaki, Ryoichi; Tanida, Jun
2011-06-20
We propose a preconditioning method to improve the convergence of iterative reconstruction algorithms in multiplexed imaging based on convolution-based compressive sensing with spatially coded point spread functions (PSFs). The system matrix is converted with a preconditioner matrix to improve its condition number. The preconditioner matrix is calculated by Tikhonov regularization in the frequency domain. The method was demonstrated with simulations and an experiment involving a range detection system with a grating, based on the multiplexed imaging framework. The results of the demonstrations showed improved reconstruction fidelity when the proposed preconditioning method was used. PMID:21716495
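The frequency-domain Tikhonov preconditioner can be illustrated in 1-D with a circular shift-invariant model: for a multiplexing PSF with transfer function H, the preconditioner is conj(H)/(|H|^2 + lambda), which flattens the effective spectrum to |H|^2/(|H|^2 + lambda) and so improves conditioning. The scene, PSF, and lambda below are invented for the sketch; the paper's grating-based system is more elaborate.

```python
import numpy as np

n = 256
x = np.zeros(n)
x[[40, 90, 91, 200]] = [1.0, 0.8, 0.8, 1.2]      # sparse-ish scene

# coded PSF: a few shifted impulses (several copies of the scene overlap)
h = np.zeros(n)
h[[0, 17, 38]] = 1.0
H = np.fft.fft(h)
y = np.real(np.fft.ifft(np.fft.fft(x) * H))       # circular-convolution data

# Tikhonov-regularized frequency-domain preconditioner
lam = 1e-2
P = np.conj(H) / (np.abs(H) ** 2 + lam)
x0 = np.real(np.fft.ifft(np.fft.fft(y) * P))      # preconditioned estimate

print(np.linalg.norm(x0 - x), np.linalg.norm(y - x))
```

In an iterative scheme, x0 would serve as a well-conditioned starting point rather than the final answer.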
Computational radiology and imaging with the MCNP Monte Carlo code
Estes, G.P.; Taylor, W.M.
1995-05-01
MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g., SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.
157km BOTDA with pulse coding and image processing
NASA Astrophysics Data System (ADS)
Qian, Xianyang; Wang, Zinan; Wang, Song; Xue, Naitian; Sun, Wei; Zhang, Li; Zhang, Bin; Rao, Yunjiang
2016-05-01
A repeater-less Brillouin optical time-domain analyzer (BOTDA) with 157.68km sensing range is demonstrated, using the combination of random fiber laser Raman pumping and low-noise laser-diode-Raman pumping. With optical pulse coding (OPC) and Non Local Means (NLM) image processing, temperature sensing with +/-0.70°C uncertainty and 8m spatial resolution is experimentally demonstrated. The image processing approach has been proved to be compatible with OPC, and it further increases the figure-of-merit (FoM) of the system by 57%.
Correlated Statistical Uncertainties in Coded-Aperture Imaging
Fleenor, Matthew C; Blackston, Matthew A; Ziock, Klaus-Peter
2014-01-01
In nuclear security applications, coded-aperture imagers provide the opportunity for a wealth of information regarding the attributes of both the radioactive and non-radioactive components of the objects being imaged. However, for optimum benefit to the community, spatial attributes need to be determined in a quantitative and statistically meaningful manner. To address the deficiency of quantifiable errors in coded-aperture imaging, we present uncertainty matrices containing covariance terms between image pixels for MURA mask patterns. We calculated these correlated uncertainties as functions of variation in mask rank, mask pattern over-sampling, and whether or not anti-mask data are included. Utilizing simulated point source data, we found that correlations (and inverse correlations) arose when two or more image pixels were summed. Furthermore, we found that the presence of correlations (and their inverses) was heightened by the process of over-sampling, while correlations were suppressed by the inclusion of anti-mask data and with increased mask rank. As an application of this result, we explore how statistics-based alarming in nuclear security is impacted.
Correlated statistical uncertainties in coded-aperture imaging
NASA Astrophysics Data System (ADS)
Fleenor, Matthew C.; Blackston, Matthew A.; Ziock, Klaus P.
2015-06-01
In nuclear security applications, coded-aperture imagers can provide a wealth of information regarding the attributes of both the radioactive and nonradioactive components of the objects being imaged. However, for optimum benefit to the community, spatial attributes need to be determined in a quantitative and statistically meaningful manner. To address a deficiency of quantifiable errors in coded-aperture imaging, we present uncertainty matrices containing covariance terms between image pixels for MURA mask patterns. We calculated these correlated uncertainties as functions of variation in mask rank, mask pattern over-sampling, and whether or not anti-mask data are included. Utilizing simulated point source data, we found that correlations arose when two or more image pixels were summed. Furthermore, we found that the presence of correlations was heightened by the process of over-sampling, while correlations were suppressed by the inclusion of anti-mask data and with increased mask rank. As an application of this result, we explored how statistics-based alarming is impacted in a radiological search scenario.
Techniques for region coding in object-based image compression
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.
2004-01-01
Object-based compression (OBC) is an emerging technology that combines region segmentation and coding to produce a compact representation of a digital image or video sequence. Previous research has focused on a variety of segmentation and representation techniques for regions that comprise an image. The author has previously suggested [1] partitioning of the OBC problem into three steps: (1) region segmentation, (2) region boundary extraction and compression, and (3) region contents compression. A companion paper [2] surveys implementationally feasible techniques for boundary compression. In this paper, we analyze several strategies for region contents compression, including lossless compression, lossy VPIC, EPIC, and EBLAST compression, wavelet-based coding (e.g., JPEG-2000), as well as texture matching approaches. This paper is part of a larger study that seeks to develop highly efficient compression algorithms for still and video imagery, which would eventually support automated object recognition (AOR) and semantic lookup of images in large databases or high-volume OBC-format datastreams. Example applications include querying journalistic archives, scientific or medical imaging, surveillance image processing and target tracking, as well as compression of video for transmission over the Internet. Analysis emphasizes time and space complexity, as well as sources of reconstruction error in decompressed imagery.
Optimization of a coded aperture coherent scatter spectral imaging system for medical imaging
NASA Astrophysics Data System (ADS)
Greenberg, Joel A.; Lakshmanan, Manu N.; Brady, David J.; Kapadia, Anuj J.
2015-03-01
Coherent scatter X-ray imaging is a technique that provides spatially-resolved information about the molecular structure of the material under investigation, yielding material-specific contrast that can aid medical diagnosis and inform treatment. In this study, we demonstrate a coherent-scatter imaging approach based on the use of coded apertures (known as coded aperture coherent scatter spectral imaging [1, 2]) that enables fast, dose-efficient, high-resolution scatter imaging of biologically-relevant materials. Specifically, we discuss how to optimize a coded aperture coherent scatter imaging system for a particular set of objects and materials, describe and characterize our experimental system, and use the system to demonstrate automated material detection in biological tissue.
Hyperspectral pixel classification from coded-aperture compressive imaging
NASA Astrophysics Data System (ADS)
Ramirez, Ana; Arce, Gonzalo R.; Sadler, Brian M.
2012-06-01
This paper describes a new approach and its associated theoretical performance guarantees for supervised hyperspectral image classification from compressive measurements obtained by a Coded Aperture Snapshot Spectral Imaging System (CASSI). In one snapshot, the two-dimensional focal plane array (FPA) in the CASSI system captures the coded and spectrally dispersed source field of a three-dimensional data cube. Multiple snapshots are used to construct a set of compressive spectral measurements. The proposed approach is based on the concept that each pixel in the hyperspectral image lies in a low-dimensional subspace obtained from the training samples, and thus it can be represented as a sparse linear combination of vectors in the given subspace. The sparse vector representing the test pixel is then recovered from the set of compressive spectral measurements and it is used to determine the class label of the test pixel. The theoretical performance bounds of the classifier exploit the distance preservation condition satisfied by the multiple shot CASSI system and depend on the number of measurements collected, code aperture pattern, and similarity between spectral signatures in the dictionary. Simulation experiments illustrate the performance of the proposed classification approach.
Quantization table design revisited for image/video coding.
Yang, En-Hui; Sun, Chang; Meng, Jin
2014-11-01
Quantization table design is revisited for image/video coding where soft decision quantization (SDQ) is considered. Unlike conventional approaches, where quantization table design is bundled with a specific encoding method, we assume optimal SDQ encoding and design a quantization table for the purpose of reconstruction. Under this assumption, we model transform coefficients across different frequencies as independently distributed random sources and apply the Shannon lower bound to approximate the rate distortion function of each source. We then show that a quantization table can be optimized in a way that the resulting distortion complies with certain behavior. Guided by this new design principle, we propose an efficient statistical-model-based algorithm using the Laplacian model to design quantization tables for DCT-based image coding. When applied to standard JPEG encoding, it provides more than 1.5-dB performance gain in PSNR, with almost no extra burden on complexity. Compared with the state-of-the-art JPEG quantization table optimizer, the proposed algorithm offers an average 0.5-dB gain in PSNR with computational complexity reduced by a factor of more than 2000 when SDQ is OFF, and a 0.2-dB performance gain or more with 85% of the complexity reduced when SDQ is ON. Significant compression performance improvement is also seen when the algorithm is applied to other image coding systems proposed in the literature. PMID:25248184
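Modeling transform coefficients as independent sources and bounding each rate-distortion function leads to the classical equal-slope allocation, reverse water-filling: every active frequency operates at the same distortion level theta, capped by its own variance. The sketch below uses the Gaussian form D_i = min(theta, sigma_i^2) and the high-rate uniform-quantizer rule q ≈ sqrt(12 D) as stand-ins for the paper's Laplacian-model algorithm; the variance profile and target rate are invented.

```python
import numpy as np

def waterfill_table(variances, total_rate_bits):
    """Per-frequency quantizer steps via reverse water-filling:
    D_i = min(theta, sigma_i^2), with theta found by bisection so
    the summed rate 0.5*log2(sigma_i^2 / D_i) meets the budget."""
    v = np.asarray(variances, float)

    def rate(theta):
        D = np.minimum(theta, v)
        return 0.5 * np.log2(v / D).sum()

    lo, hi = 1e-12, v.max()
    for _ in range(200):                  # bisection on the water level
        mid = 0.5 * (lo + hi)
        if rate(mid) > total_rate_bits:
            lo = mid                      # too much rate: allow more distortion
        else:
            hi = mid
    D = np.minimum(hi, v)
    return np.sqrt(12.0 * D), rate(hi)   # high-rate step rule q ~ sqrt(12 D)

# toy zig-zag variance profile: energy decays toward high frequency
variances = np.array([100.0, 40, 16, 8, 4, 2, 1, 0.5])
steps, achieved = waterfill_table(variances, total_rate_bits=8.0)
print(steps, achieved)
```

Low-variance (high-frequency) sources whose variance falls below the water level get D_i = sigma_i^2, i.e. zero rate, which is the usual dead-zone behavior of practical tables.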
Multiscale differential fractal feature with application to target detection
NASA Astrophysics Data System (ADS)
Shi, Zelin; Wei, Ying; Huang, Shabai
2004-07-01
A multiscale differential fractal feature of an image is proposed, together with a method for detecting small targets in complex natural clutter. Because the fractal features of man-made objects vary much more strongly with scale than those of natural backgrounds, fractal features evaluated at multiple scales offer an advantage over standard fractal dimensions for distinguishing man-made targets from natural clutter. Multiscale differential fractal dimensions are derived from a typical fractal model, and the standard covering-blanket method is improved and used to estimate fractal dimensions at multiple scales. A multiscale differential fractal feature is defined as the variation of the fractal dimension between two scales within a reasonable scale range. It separates the fractal signature of man-made objects from natural clutter much better than the fractal dimension obtained by the standard covering-blanket method. Meanwhile, the computation and storage requirements are greatly reduced, to 4/M and 2/M of those of the standard covering-blanket method, respectively (M is the scale). A local gray-level histogram statistic is then applied to the multiscale differential fractal feature image for target detection. Experimental results indicate that the method is suitable for both land and sea backgrounds, works on both infrared and TV images, and can correctly detect small targets from a single frame. The method is fast and easy to implement.
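The covering-blanket estimate and the two-scale differential feature described above can be sketched as follows. This is a simplified illustration of the standard blanket method (upper/lower surfaces grown by graylevel dilation/erosion, area A(ε) ∝ ε^(2−D)), not the authors' improved variant; scales and array sizes are arbitrary.

```python
import numpy as np

def _local_max(a):
    """3x3 graylevel dilation via padded shifts."""
    p = np.pad(a, 1, mode='edge')
    shifts = [p[i:i + a.shape[0], j:j + a.shape[1]]
              for i in range(3) for j in range(3)]
    return np.max(shifts, axis=0)

def blanket_dimension(img, s1=1, s2=4):
    """Fractal dimension from the blanket areas at two scales:
    D = 2 - slope of log A(eps) vs log eps between s1 and s2."""
    u = img.astype(float).copy()   # upper blanket surface
    b = img.astype(float).copy()   # lower blanket surface
    areas = {}
    for e in range(1, s2 + 1):
        u = np.maximum(u + 1, _local_max(u))
        b = np.minimum(b - 1, -_local_max(-b))
        areas[e] = (u - b).sum() / (2 * e)
    slope = (np.log(areas[s2]) - np.log(areas[s1])) / (np.log(s2) - np.log(s1))
    return 2 - slope
```

A flat image yields A(ε) constant and hence D = 2 exactly, while a rough (noisy) surface yields a dimension between 2 and 3; the two-scale slope is precisely the "differential" feature of the abstract.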
Magnetohydrodynamics of fractal media
Tarasov, Vasily E.
2006-05-15
The fractal distribution of charged particles is considered; an example is a set of charged particles distributed over a fractal. Fractional integrals are used to describe the fractal distribution and are treated as approximations of integrals on fractals. Typical turbulent media can have a fractal structure, and the corresponding equations should be modified to include the fractal features of the media. The magnetohydrodynamics equations for fractal media are derived from the fractional generalization of the integral Maxwell equations and the integral hydrodynamics (balance) equations. Possible equilibrium states for these equations are considered.
Fractal vector optical fields.
Pan, Yue; Gao, Xu-Zhen; Cai, Meng-Qiang; Zhang, Guan-Lin; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian
2016-07-15
We introduce the concept of the fractal, which provides an alternative approach for flexibly engineering optical fields and their focal fields. We propose, design, and create a new family of optical fields, fractal vector optical fields, which build a bridge between fractals and vector optical fields. Fractal vector optical fields have polarization states exhibiting fractal geometry, and may also involve the phase and/or amplitude simultaneously. The results reveal that the focal fields exhibit self-similarity, and that the hierarchy of the fractal plays a "weeding" role. The fractal can thus be used to engineer the focal field. PMID:27420485
Wavelength and code-division multiplexing in diffuse optical imaging
NASA Astrophysics Data System (ADS)
Ascari, Luca; Berrettini, Gianluca; Iannaccone, Sandro; Giacalone, Matteo; Contini, Davide; Spinelli, Lorenzo; Trivella, Maria Giovanna; L'Abbate, Antonio; Potí, Luca
2011-02-01
We recently applied time-domain near-infrared diffuse optical spectroscopy (TD-NIRS) to monitor hemodynamics of the cardiac wall (oxy- and deoxyhemoglobin concentration, saturation, oedema) in anesthetized swine models. Published results prove that the NIRS signal can provide information on myocardial hemodynamic parameters not obtainable with conventional diagnostic clinical tools [1]. Nevertheless, the high cost of equipment, acquisition length, and sensitivity to ambient light are factors limiting its clinical adoption. This paper introduces a novel approach, based on wavelength and code division multiplexing, applicable to TD-NIRS as well as diffuse optical imaging systems (both topography and tomography). The approach, called WS-CDM (wavelength and space code division multiplexing), essentially consists of a double-stage intensity modulation of multiwavelength CW laser sources using orthogonal codes, followed by parallel correlation-based decoding after propagation in the tissue. It promises better signal-to-noise ratio (SNR), higher acquisition speed, robustness to ambient light, and lower cost compared with both conventional systems and the more recent spread-spectrum approach based on single modulation with pseudo-random bit sequences (PRBS) [2]. Parallel acquisition of several wavelengths and from several locations is achievable. TD-NIRS experimental results guided Matlab-based simulations aimed at correlating different coding sequences, code lengths, and spectrum-spreading factors with the WS-CDM performance on such tissues (achievable SNR, acquisition and reconstruction speed, robustness to channel inequalization, ...). Simulation results and preliminary experimental validation confirm the significant improvements that WS-CDM could bring to diffuse optical imaging (not limited to cardiac functional imaging).
Fractal Structure in Human Cerebellum Measured by MRI
NASA Astrophysics Data System (ADS)
Zhang, Luduan; Yue, Guang; Brown, Robert; Liu, Jingzhi
2003-10-01
Fractal dimension has been used to quantify the structures of a wide range of objects in biology and medicine. We measured fractal dimension of human cerebellum (CB) in magnetic resonance images of 24 healthy young subjects (12 men, 12 women). CB images were resampled to a series of image sets with different three-dimensional resolutions. At each resolution, the skeleton of the CB white matter was obtained and the number of pixels belonging to the skeleton was determined. Fractal dimension of the CB skeleton was calculated using the box-counting method. The results indicated that the CB skeleton is a highly fractal structure, with a fractal dimension of 2.57 ± 0.01. No significant difference in the CB fractal dimension was observed between men and women. Fractal dimension may serve as a quantitative index for structural complexity of the CB at its developmental, degenerative, or evolutionary stages.
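The box-counting method used in this study is straightforward to sketch: cover the binary skeleton with boxes of increasing size, count occupied boxes N(s), and take the slope of log N versus log(1/s). A minimal illustration on synthetic 3-D data (function name and box sizes are our choices, not the paper's):

```python
import numpy as np

def box_count_dimension(voxels, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary 3-D array by box counting."""
    counts = []
    for s in sizes:
        n = 0
        sx, sy, sz = voxels.shape
        for x in range(0, sx, s):
            for y in range(0, sy, s):
                for z in range(0, sz, s):
                    if voxels[x:x + s, y:y + s, z:z + s].any():
                        n += 1
        counts.append(n)
    # dimension = -slope of log N(s) versus log s
    slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    return -slope
```

Sanity checks: a filled plane embedded in the volume gives dimension 2 and a straight line gives dimension 1, so a value like 2.57 for the cerebellar skeleton sits between a surface and a volume-filling structure.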
Coded-aperture Raman imaging for standoff explosive detection
NASA Astrophysics Data System (ADS)
McCain, Scott T.; Guenther, B. D.; Brady, David J.; Krishnamurthy, Kalyani; Willett, Rebecca
2012-06-01
This paper describes the design of a deep-UV Raman imaging spectrometer operating with an excitation wavelength of 228 nm. The designed system will provide the ability to detect explosives (both traditional military explosives and home-made explosives) from standoff distances of 1-10 meters with an interrogation area of 1 mm x 1 mm to 200 mm x 200 mm. This excitation wavelength provides resonant enhancement of many common explosives, no background fluorescence, and an enhanced cross-section due to the inverse wavelength scaling of Raman scattering. A coded-aperture spectrograph combined with compressive imaging algorithms will allow for wide-area interrogation with fast acquisition rates. Coded-aperture spectral imaging exploits the compressibility of hyperspectral data-cubes to greatly reduce the amount of acquired data needed to interrogate an area. The resultant systems are able to cover wider areas much faster than traditional push-broom and tunable filter systems. The full system design will be presented along with initial data from the instrument. Estimates for area scanning rates and chemical sensitivity will be presented. The system components include a solid-state deep-UV laser operating at 228 nm, a spectrograph consisting of well-corrected refractive imaging optics and a reflective grating, an intensified solar-blind CCD camera, and a high-efficiency collection optic.
Sparse-Coding-Based Computed Tomography Image Reconstruction
Yoon, Gang-Joon
2013-01-01
Computed tomography (CT) is a popular type of medical imaging that generates images of the internal structure of an object based on projection scans of the object from several angles. There are numerous methods to reconstruct the original shape of the target object from scans, but they remain dependent on the number of angles and iterations. To overcome the drawbacks of iterative reconstruction approaches such as the algebraic reconstruction technique (ART), we propose a medical image reconstruction methodology based on sparse coding that is only slightly affected by random noise (a small ℓ2-norm error) and by limited projection scans (a small ℓ1-norm error). Sparse coding is a powerful matrix factorization method in which each pixel is represented as a linear combination of a small number of basis vectors. PMID:23576898
Two-Sided Coded Aperture Imaging Without a Detector Plane
Ziock, Klaus-Peter; Cunningham, Mark F; Fabris, Lorenzo
2009-01-01
We introduce a novel design for a two-sided, coded-aperture, gamma-ray imager suitable for use in stand off detection of orphan radioactive sources. The design is an extension of an active-mask imager that would have three active planes of detector material, a central plane acting as the detector for two (active) coded-aperture mask planes, one on either side of the detector plane. In the new design the central plane is removed and the mask on the left (right) serves as the detector plane for the mask on the right (left). This design reduces the size, mass, complexity, and cost of the overall instrument. In addition, if one has fully position-sensitive detectors, then one can use the two planes as a classic Compton camera. This enhances the instrument's sensitivity at higher energies where the coded-aperture efficiency is decreased by mask penetration. A plausible design for the system is found and explored with Monte Carlo simulations.
Iterative image coding with overcomplete complex wavelet transforms
NASA Astrophysics Data System (ADS)
Kingsbury, Nick G.; Reeves, Tanya
2003-06-01
Overcomplete transforms, such as the Dual-Tree Complex Wavelet Transform, can offer more flexible signal representations than critically-sampled transforms such as the Discrete Wavelet Transform. However the process of selecting the optimal set of coefficients to code is much more difficult because many different sets of transform coefficients can represent the same decoded image. We show that large numbers of transform coefficients can be set to zero without much reconstruction quality loss by forcing compensatory changes in the remaining coefficients. We develop a system for achieving these coding aims of coefficient elimination and compensation, based on iterative projection of signals between the image domain and transform domain with a non-linear process (e.g. centre-clipping or quantization) applied in the transform domain. The convergence properties of such non-linear feedback loops are discussed and several types of non-linearity are proposed and analyzed. The compression performance of the overcomplete scheme is compared with that of the standard Discrete Wavelet Transform, both objectively and subjectively, and is found to offer advantages of up to 0.65 dB in PSNR and significant reduction in visibility of some types of coding artifacts.
Block-based conditional entropy coding for medical image compression
NASA Astrophysics Data System (ADS)
Bharath Kumar, Sriperumbudur V.; Nagaraj, Nithin; Mukhopadhyay, Sudipta; Xu, Xiaofeng
2003-05-01
In this paper, we propose a block-based conditional entropy coding scheme for medical image compression using the 2-D integer Haar wavelet transform. The main motivation for pursuing conditional entropy coding is that the first-order conditional entropy is theoretically never greater than the first- and second-order entropies. We propose a sub-optimal scan order and an optimal block size for conditional entropy coding for various modalities, and note that a similar scheme can be used to obtain a sub-optimal scan order and an optimal block size for other wavelets. The approach is motivated by the desire to outperform JPEG2000 in compression ratio, and we outline a block-based conditional entropy coder with the potential to do so. Although we do not specify a complete first-order conditional entropy coder, a conditional adaptive arithmetic coder would approach the theoretical conditional entropy arbitrarily closely. All results in this paper are based on medical image data sets of various bit-depths and modalities.
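The motivating claim above — that conditioning on a neighbouring pixel can only lower the entropy — is easy to check numerically via H(X | left) = H(X, left) − H(left). A small stdlib-only sketch (the toy image is ours, not from the paper):

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy in bits of an empirical distribution given as a Counter."""
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def marginal_entropy(img):
    return entropy(Counter(p for row in img for p in row))

def conditional_entropy(img):
    """H(X | left neighbour) = H(X, left) - H(left), over horizontal pixel pairs."""
    pairs = [(row[j - 1], row[j]) for row in img for j in range(1, len(row))]
    return entropy(Counter(pairs)) - entropy(Counter(l for l, _ in pairs))

# a toy image with strong horizontal correlation: long runs of equal pixels
img = [[0, 0, 0, 0, 1, 1, 1, 1],
       [1, 1, 1, 1, 0, 0, 0, 0]]
h_marginal = marginal_entropy(img)        # 1.0 bit: equal numbers of 0s and 1s
h_conditional = conditional_entropy(img)  # lower: the left neighbour predicts the pixel
```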
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a noisy memoryless channel; the method involves the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest-descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed for an error-free channel.
Optical image encryption based on real-valued coding and subtracting with the help of QR code
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng
2015-08-01
A novel optical image encryption scheme based on real-valued coding and subtraction is proposed with the help of a quick response (QR) code. In the encryption process, the original image is first transformed into the corresponding QR code, which is then encoded into two phase-only masks (POMs) using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone-captured results show that the method is feasible and has strong tolerance to noise, phase difference and the ratio between the intensities of the two decryption light beams.
Mossotti, Victor G.; Eldeeb, A. Raouf; Oscarson, Robert
1998-01-01
MORPH-I is a set of C-language computer programs for the IBM PC and compatible microcomputers. The programs in MORPH-I are used for the fractal analysis of scanning electron microscope and electron microprobe images of pore profiles exposed in cross-section. The program isolates and traces the cross-sectional profiles of exposed pores and computes the Richardson fractal dimension for each pore. Other programs in the set provide for image calibration, display, and statistical analysis of the computed dimensions for highly complex porous materials. Requirements: IBM PC or compatible; minimum 640 K RAM; math coprocessor; SVGA graphics board providing mode 103 display.
Some Important Observations Concerning Human Visual Image Coding
NASA Astrophysics Data System (ADS)
Overington, Ian
1986-05-01
During some 20 years of research into thresholds of visual performance we have needed to explore in depth the developing knowledge of the physiology, neurophysiology and, to a lesser extent, anatomy of primate vision. Over the last few years, as interest in computer vision has grown, it has become clear to us that a number of aspects of image processing and coding in human vision are very simple yet powerful, but appear to have been largely overlooked or misrepresented in the classical computer vision literature. The paper discusses some important aspects of early visual processing. It then endeavours to demonstrate some of the simple yet powerful coding procedures which we believe are, or may be, used by human vision and which may be applied directly to computer vision.
An adaptive algorithm for motion compensated color image coding
NASA Technical Reports Server (NTRS)
Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming
1987-01-01
This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.
HD Photo: a new image coding technology for digital photography
NASA Astrophysics Data System (ADS)
Srinivasan, Sridhar; Tu, Chengjie; Regunathan, Shankar L.; Sullivan, Gary J.
2007-09-01
This paper introduces the HD Photo coding technology developed by Microsoft Corporation. The storage format for this technology is now under consideration in the ITU-T/ISO/IEC JPEG committee as a candidate for standardization under the name JPEG XR. The technology was developed to address end-to-end digital imaging application requirements, particularly including the needs of digital photography. HD Photo includes features such as good compression capability, high dynamic range support, high image quality capability, lossless coding support, full-format 4:4:4 color sampling, simple thumbnail extraction, embedded bitstream scalability of resolution and fidelity, and degradation-free compressed domain support of key manipulations such as cropping, flipping and rotation. HD Photo has been designed to optimize image quality and compression efficiency while also enabling low-complexity encoding and decoding implementations. To ensure low complexity for implementations, the design features have been incorporated in a way that not only minimizes the computational requirements of the individual components (including consideration of such aspects as memory footprint, cache effects, and parallelization opportunities) but results in a self-consistent design that maximizes the commonality of functional processing components.
Block-based embedded color image and video coding
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Pearlman, William A.; Islam, Asad
2004-01-01
The Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders such as EZW, SPIHT and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format; extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best-known color coders, while its perceptual quality of reconstruction is superior to SPIHT and JPEG2000. We then propose a moving-picture coding system called Motion-SPECK, with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000, while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK, such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK are high-end and emerging video applications such as high-quality digital video recording systems, Internet video and medical imaging.
Fractals in art and nature: why do we like them?
NASA Astrophysics Data System (ADS)
Spehar, Branka; Taylor, Richard P.
2013-03-01
Fractals have experienced considerable success in quantifying the visual complexity exhibited by many natural patterns, and continue to capture the imagination of scientists and artists alike. Fractal patterns have also been noted for their aesthetic appeal, a suggestion further reinforced by the discovery that the poured patterns of the American abstract painter Jackson Pollock are also fractal, together with the findings that many forms of art resemble natural scenes in showing scale-invariant, fractal-like properties. While some have suggested that fractal-like patterns are inherently pleasing because they resemble natural patterns and scenes, the relation between the visual characteristics of fractals and their aesthetic appeal remains unclear. Motivated by our previous findings that humans display a consistent preference for a certain range of fractal dimension across fractal images of various types, we turn to scale-specific processing of visual information to understand this relationship. Whereas our previous preference studies focused on fractal images consisting of black shapes on white backgrounds, here we extend our investigations to include grayscale images in which the intensity variations exhibit scale invariance. This scale invariance is generated using a 1/f frequency distribution and can be tuned by varying the slope of the rotationally averaged Fourier amplitude spectrum. Thresholding the intensity of these images generates black and white fractals with equivalent scaling properties to the original grayscale images, allowing a direct comparison of preferences for grayscale and black and white fractals. We found no significant differences in preferences between the two groups of fractals. For both sets of images, visual preference peaked for images with amplitude spectrum slopes between 1.25 and 1.5, thus confirming and extending the previously observed relationship between the fractal characteristics of images and visual preference.
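Scale-invariant grayscale stimuli of the kind described above can be generated by shaping a random-phase Fourier spectrum so the amplitude falls off as 1/f^β, then thresholding to obtain a black-and-white fractal with the same scaling. A minimal sketch (our parameter choices; the study's exact stimulus generation may differ):

```python
import numpy as np

def spectral_fractal(n=64, beta=1.4, seed=0):
    """Grayscale image whose rotationally averaged amplitude spectrum ~ 1/f^beta."""
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = f[0, 1]                 # avoid division by zero at the DC term
    amp = f ** (-beta)                # 1/f^beta amplitude falloff
    phase = rng.uniform(0, 2 * np.pi, (n, n))
    return np.real(np.fft.ifft2(amp * np.exp(1j * phase)))

def threshold_bw(img):
    """Black/white fractal with equivalent scaling: threshold at the median."""
    return (img > np.median(img)).astype(np.uint8)

gray = spectral_fractal()
bw = threshold_bw(gray)
```

Varying `beta` tunes the amplitude-spectrum slope, which is how the preference range (slopes 1.25 to 1.5) would be swept.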
Coded-aperture Compton camera for gamma-ray imaging
NASA Astrophysics Data System (ADS)
Farber, Aaron M.
This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.
Simulation model of the fractal patterns in ionic conducting polymer films
NASA Astrophysics Data System (ADS)
Amir, Shahizat; Mohamed, Nor; Hashim Ali, Siti
2010-02-01
Polymer electrolyte membranes are normally prepared and studied for applications in electrochemical devices. In this work, polymer electrolyte membranes have been used as the media in which to grow fractals. To simulate the growth patterns and stages of the fractals, a model was developed based on Brownian-motion theory. A computer code was written to implement the model and to simulate and visualize the fractal growth. This program successfully simulates the growth of the fractal and calculates the fractal dimension of each simulated pattern. The fractal dimensions of the simulated fractals are comparable with the values obtained for the original fractals observed in the polymer electrolyte membrane, indicating that the model developed in the present work is in acceptable agreement with the original fractals.
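Brownian-motion growth models of this kind are usually variants of diffusion-limited aggregation (DLA): random walkers are released far from a seed and stick when they arrive next to the growing cluster. A minimal lattice sketch (all parameters are illustrative, not the paper's model):

```python
import random

def dla_grow(n_particles=30, radius=12, seed=1):
    """Diffusion-limited aggregation: walkers perform a Brownian (random) walk
    and stick to the cluster when they arrive next to an occupied site."""
    rng = random.Random(seed)
    launch = [(radius, 0), (-radius, 0), (0, radius), (0, -radius)]
    cluster = {(0, 0)}                       # seed site at the origin
    for _ in range(n_particles):
        x, y = rng.choice(launch)
        while True:
            x += rng.choice((-1, 0, 1))
            y += rng.choice((-1, 0, 1))
            if abs(x) > 2 * radius or abs(y) > 2 * radius:
                x, y = rng.choice(launch)    # wandered too far: relaunch
            if any((x + dx, y + dy) in cluster
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)):
                cluster.add((x, y))          # stick next to the cluster
                break
    return cluster

cluster = dla_grow()
```

The resulting branched clusters have a non-integer box-counting dimension (around 1.7 for large 2-D DLA aggregates), which is the quantity the paper compares against the membrane fractals.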
Coded aperture imaging with self-supporting uniformly redundant arrays
Fenimore, Edward E.
1983-01-01
A self-supporting uniformly redundant array pattern for coded aperture imaging. The present invention utilizes holes which are an integer times smaller in each direction than holes in conventional URA patterns. A balance correlation function is generated where holes are represented by 1's, nonholes are represented by -1's, and supporting area is represented by 0's. The self-supporting array can be used for low energy applications where substrates would greatly reduce throughput. The balance correlation response function for the self-supporting array pattern provides an accurate representation of the source of nonfocusable radiation.
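The balanced-correlation idea above can be illustrated in one cyclic dimension with a (7, 3, 1) perfect-difference-set aperture: correlating the detector pattern with +1 at holes and −1 elsewhere yields a sharp peak at the source position and perfectly flat sidelobes. This toy sketch uses 1/−1 values only (no 0-valued support cells) and is not the patented pattern:

```python
def shadow(obj, aperture):
    """Detector counts: cyclic convolution of the source with the aperture pattern."""
    n = len(obj)
    return [sum(obj[i] * aperture[(j - i) % n] for i in range(n)) for j in range(n)]

def balanced_decode(detector, aperture):
    """Correlate with +1 for holes and -1 for opaque cells (the balanced array)."""
    g = [2 * a - 1 for a in aperture]
    n = len(detector)
    return [sum(detector[j] * g[(j - s) % n] for j in range(n)) for s in range(n)]

aperture = [0, 1, 1, 0, 1, 0, 0]   # holes at the quadratic residues mod 7
source = [0, 0, 5, 0, 0, 0, 0]     # a point source at position 2
image = balanced_decode(shadow(source, aperture), aperture)
```

Because the pattern is a perfect difference set, every off-peak sample of the decoded image equals the same constant, which is the delta-like system response the abstract refers to.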
Recursive time-varying filter banks for subband image coding
NASA Technical Reports Server (NTRS)
Smith, Mark J. T.; Chung, Wilson C.
1992-01-01
Filter banks and wavelet decompositions that employ recursive filters have been considered previously and are recognized for their efficiency in partitioning the frequency spectrum. This paper presents an analysis of a new infinite impulse response (IIR) filter bank in which these computationally efficient filters may be changed adaptively in response to the input. The filter bank is presented and discussed in the context of finite-support signals with the intended application in subband image coding. In the absence of quantization errors, exact reconstruction can be achieved and by the proper choice of an adaptation scheme, it is shown that IIR time-varying filter banks can yield improvement over conventional ones.
Phase transfer function based method to alleviate image artifacts in wavefront coding imaging system
NASA Astrophysics Data System (ADS)
Mo, Xutao; Wang, Jinjiang
2013-09-01
The wavefront coding technique can extend the depth of field (DOF) of an incoherent imaging system. Several rectangularly separable phase masks (cubic, exponential, logarithmic, sinusoidal, rational, etc.) have been proposed and discussed, because they can extend the DOF up to ten times that of an ordinary imaging system. However, research on these masks has shown that the images are damaged by artifacts, which usually arise from differences between the non-linear phase transfer function (PTF) used in the image restoration filter and the PTF of the real imaging condition. In order to alleviate image artifacts in wavefront coding imaging systems, an optimization model based on the PTF was proposed to make the PTF invariant with defocus. Thereafter, an image restoration filter based on the average PTF over the designed depth of field was introduced along with the PTF-based optimization. The combination of the proposed optimization and image restoration alleviates the artifacts, which was confirmed by imaging simulations of a spoke target. The cubic phase mask (CPM) and exponential phase mask (EPM) are discussed as examples.
Categorizing biomedicine images using novel image features and sparse coding representation
2013-01-01
Background Images embedded in biomedical publications carry rich information that often concisely summarizes key hypotheses adopted, methods employed, or results obtained in a published study. Therefore, they offer valuable clues for understanding the main content of a biomedical publication. Prior studies have pointed out the potential of mining images embedded in biomedical publications for automatically understanding and retrieving such images' associated source documents. Within the broad area of biomedical image processing, categorizing biomedical images is a fundamental step for building many advanced image analysis, retrieval, and mining applications. As in any automatic categorization effort, discriminative image features provide the most crucial aid in the process. Method We observe that many images embedded in biomedical publications carry versatile annotation text. Based on the locations of and the spatial relationships between these text elements in an image, we propose novel image features for image categorization, which quantitatively characterize the spatial positions and distributions of text elements inside a biomedical image. We further adopt a sparse coding representation (SCR) based technique to categorize images embedded in biomedical publications by leveraging our newly proposed image features. Results We randomly selected 990 images of the JPG format for use in our experiments, of which 310 images were used as training samples and the rest as testing cases. We first segmented the 310 sample images following our proposed procedure, producing a total of 1035 sub-images. We then manually labeled all these sub-images according to the two-level hierarchical image taxonomy proposed by [1]. Among our annotation results, 316 are microscopy images, 126 are gel electrophoresis images, 135 are line charts, 156 are bar charts, 52 are spot charts, 25 are tables, 70 are flow charts, and the remaining 155 images are
A segmentation-based lossless image coding method for high-resolution medical image compression.
Shen, L; Rangayyan, R M
1997-06-01
Lossless compression techniques are essential in the archival and communication of medical images. In this paper, a new segmentation-based lossless image coding (SLIC) method is proposed, based on a simple but efficient region-growing procedure. The embedded region-growing procedure produces an adaptive scanning pattern for the image with the help of a discontinuity index map that requires very few bits. Along with this scanning pattern, an error image with a very small dynamic range is generated. Both the error image data and the discontinuity index map are then encoded by the Joint Bi-level Image Experts Group (JBIG) method. On a database of ten high-resolution digitized chest and breast images, the SLIC method achieved, on average, lossless compression to about 1.6 b/pixel from 8 b, and to about 2.9 b/pixel from 10 b. In comparison with direct coding by JBIG, Joint Photographic Experts Group (JPEG), hierarchical interpolation (HINT), and two-dimensional Burg prediction plus Huffman error coding methods, the SLIC method performed better by 4% to 28% on the database used. PMID:9184892
Chaos, Fractals, and Polynomials.
ERIC Educational Resources Information Center
Tylee, J. Louis; Tylee, Thomas B.
1996-01-01
Discusses chaos theory; linear algebraic equations and the numerical solution of polynomials, including the use of the Newton-Raphson technique to find polynomial roots; fractals; search region and coordinate systems; convergence; and generating color fractals on a computer. (LRW)
Predictive coding of depth images across multiple views
NASA Astrophysics Data System (ADS)
Morvan, Yannick; Farin, Dirk; de With, Peter H. N.
2007-02-01
A 3D video stream is typically obtained from a set of synchronized cameras simultaneously capturing the same scene (multiview video). This technology enables applications such as free-viewpoint video, which allows the viewer to select his preferred viewpoint, or 3D TV, where the depth of the scene can be perceived using a special display. Because the user-selected view does not always correspond to a camera position, it may be necessary to synthesize a virtual camera view. To synthesize such a virtual view, we have adopted a depth image-based rendering technique that employs one depth map for each camera. Consequently, remote rendering of the 3D video requires a compression technique for texture and depth data. This paper presents a predictive-coding algorithm for the compression of depth images across multiple views. The presented algorithm provides (a) improved coding efficiency for depth images over block-based motion-compensation encoders (H.264), and (b) random access to different views for fast rendering. The proposed depth-prediction technique works by synthesizing/computing the depth of 3D points based on the reference depth image. The attractiveness of the depth-prediction algorithm is that it avoids an independent transmission of depth for each view, while simplifying view interpolation by synthesizing depth images for arbitrary viewpoints. We present experimental results for several multiview depth sequences, showing a quality improvement of up to 1.8 dB compared with H.264 compression.
Biometric iris image acquisition system with wavefront coding technology
NASA Astrophysics Data System (ADS)
Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao
2013-09-01
Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry, iris texture and so on; the other depends on individual behavioral attributes, such as signature, keystroke, voice and gait style. Among these features, iris recognition is one of the most attractive approaches due to its nature of randomness, texture stability over a lifetime, high entropy density and non-invasive acquisition. While the performance of iris recognition on high-quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially data acquired in challenging conditions, such as long working distance, dynamic movement of subjects, uncontrolled illumination conditions and so on. There are three main contributions in this paper. First, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Second, the irradiance constraints were derived from the optical conservation theorem. Through the relationship between the subject and the detector, we could estimate the limit on working distance when the camera lens and CCD sensor are known. The working distance is set to 3 m in our system, with pupil diameter 86 mm and CCD irradiance 0.3 mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-tilt system, wavefront coding technology, filter optimization and post-signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over a 3 m working distance with 400 mm focal length and F/6.3 optics. The simulation results as well as experiments validate the proposed code
Imaging in scattering media using correlation image sensors and sparse convolutional coding.
Heide, Felix; Xiao, Lei; Kolb, Andreas; Hullin, Matthias B; Heidrich, Wolfgang
2014-10-20
Correlation image sensors have recently become popular low-cost devices for time-of-flight, or range, cameras. They usually operate under the assumption of a single light path contributing to each pixel. We show that a more thorough analysis of the data from correlation sensors can be used to analyze light transport in much more complex environments, including applications for imaging through scattering and turbid media. The key of our method is a new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors. This approach is enabled by an analysis of sparsity in complex transient images, and the derivation of a new physically motivated model for transient images with drastically improved sparsity. PMID:25401666
ERIC Educational Resources Information Center
Fraboni, Michael; Moller, Trisha
2008-01-01
Fractal geometry offers teachers great flexibility: It can be adapted to the level of the audience or to time constraints. Although easily explained, fractal geometry leads to rich and interesting mathematical complexities. In this article, the authors describe fractal geometry, explain the process of iteration, and provide a sample exercise.…
Hexagonal uniformly redundant arrays for coded-aperture imaging
NASA Technical Reports Server (NTRS)
Finger, M. H.; Prince, T. A.
1985-01-01
Uniformly redundant arrays (URAs) are used in coded-aperture imaging, a technique for forming images without mirrors or lenses. URAs constructed on hexagonal lattices are outlined. Details are presented for the construction of a special class of URAs, the skew-Hadamard URAs, which have the following properties: (1) they are nearly half open and half closed; (2) they are antisymmetric upon rotation by 180 deg, except for the central cell and its repetitions. Some of the skew-Hadamard URAs constructed on a hexagonal lattice have additional symmetries. These special URAs, which have a hexagonal unit pattern and are antisymmetric upon rotation by 60 deg, are called hexagonal uniformly redundant arrays (HURAs). The HURAs are particularly suited to gamma-ray imaging in high-background situations. In a high-background situation the best sensitivity is obtained with a half-open, half-closed mask. The hexagonal symmetry of an HURA is more appropriate for a round position-sensitive detector or a close-packed array of detectors than a rectangular symmetry.
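The quadratic-residue construction behind such arrays can be sketched for the simpler square-lattice case. The following is a minimal illustration assuming the standard square MURA recipe (Gottesman-Fenimore style), not the paper's hexagonal skew-Hadamard construction; note the near half-open/half-closed balance:

```python
import numpy as np

def mura_mask(p):
    """p x p MURA mask (1 = open, 0 = closed) for a prime p with
    p % 4 == 1, built from quadratic residues mod p. Square-lattice
    sketch only; the paper's HURAs use a hexagonal lattice instead."""
    qr = {(i * i) % p for i in range(1, p)}       # quadratic residues mod p
    c = [0] + [1 if i in qr else -1 for i in range(1, p)]
    A = np.zeros((p, p), dtype=int)
    for i in range(1, p):
        A[i, 0] = 1                               # first column open (i != 0)
        for j in range(1, p):
            A[i, j] = 1 if c[i] * c[j] == 1 else 0
    return A

mask = mura_mask(13)
print(mask.mean())  # close to 1/2: nearly half open, half closed
```

For p = 13 the open fraction is 84/169, about 0.497, matching the "nearly half open and half closed" property quoted above.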
Fractal-based wideband invisibility cloak
NASA Astrophysics Data System (ADS)
Cohen, Nathan; Okoro, Obinna; Earle, Dan; Salkind, Phil; Unger, Barry; Yen, Sean; McHugh, Daniel; Polterzycki, Stefan; Shelman-Cohen, A. J.
2015-03-01
A wideband invisibility cloak (IC) at microwave frequencies is described. Using fractal resonators in closely spaced (sub-wavelength) arrays as a minimal number of cylindrical layers (rings), the IC demonstrates that it is physically possible to attain a `see through' cloaking device with: (a) wideband coverage; (b) simple and attainable fabrication; (c) high-fidelity emulation of the free path; (d) minimal side scattering; (e) a near absence of shadowing in the scattering. Although not a practical device, this fractal-enabled technology demonstrator opens up new opportunities for diverted-image (DI) technology and the use of fractals in wideband optical, infrared, and microwave applications.
Adaptation of a neutron diffraction detector to coded aperture imaging
Vanier, P.E.; Forman, L.
1997-02-01
A coded aperture neutron imaging system developed at Brookhaven National Laboratory (BNL) has demonstrated that it is possible to record not only the flux of thermal neutrons at some position, but also the directions from which they came. This realization of an idea that defied conventional wisdom has provided a device never before available to the nuclear physics community. A number of potential applications have been explored, including (1) counting warheads on a bus or in a storage area, (2) investigating inhomogeneities in drums of Pu-containing waste to facilitate non-destructive assays, (3) monitoring vaults containing accountable materials, (4) detecting buried land mines, and (5) locating solid deposits of nuclear material held up in gaseous diffusion plants.
Image coding using entropy-constrained residual vector quantization
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
Characterizing Hyperspectral Imagery (AVIRIS) Using Fractal Technique
NASA Technical Reports Server (NTRS)
Qiu, Hong-Lie; Lam, Nina Siu-Ngan; Quattrochi, Dale
1997-01-01
With the rapid increase in hyperspectral data acquired by various experimental hyperspectral imaging sensors, it is necessary to develop efficient and innovative tools to handle and analyze these data. The objective of this study is to seek effective spatial analytical tools for summarizing the spatial patterns of hyperspectral imaging data. In this paper, we (1) examine how the fractal dimension D changes across spectral bands of hyperspectral imaging data and (2) determine the relationships between fractal dimension and image content. It has been documented that fractal dimension changes across spectral bands for Landsat-TM data and that its value (D) is largely a function of the complexity of the landscape under study. Newly available hyperspectral imaging data, such as that from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) with its 224 bands, cover a wider spectral range with much finer spectral resolution. Our preliminary results show that fractal dimension values of AVIRIS scenes from the Santa Monica Mountains in California vary between 2.25 and 2.99. However, high fractal dimension values (D > 2.8) are found only in spectral bands with high noise levels, while bands with good image quality have a fairly stable dimension value (D = 2.5-2.6). This suggests that D can also be used as a summary statistic to represent the image quality or content of spectral bands.
Spatial super-resolution in code aperture spectral imaging
NASA Astrophysics Data System (ADS)
Arguello, Henry; Rueda, Hoover F.; Arce, Gonzalo R.
2012-06-01
The Code Aperture Snapshot Spectral Imaging system (CASSI) senses the spectral information of a scene using the underlying concepts of compressive sensing (CS). The random projections in CASSI are localized such that each measurement contains spectral information only from a small spatial region of the data cube. The goal of this paper is to translate high-resolution hyperspectral scenes into compressed signals measured by a low-resolution detector. Spatial super-resolution is attained as an inverse problem from a set of low-resolution coded measurements. The proposed system not only offers significant savings in size, weight and power, but also in cost as low resolution detectors can be used. The proposed system can be efficiently exploited in the IR region where the cost of detectors increases rapidly with resolution. The simulations of the proposed system show an improvement of up to 4 dB in PSNR. Results also show that the PSNR of the reconstructed data cubes approach the PSNR of the reconstructed data cubes attained with high-resolution detectors, at the cost of using additional measurements.
NASA Astrophysics Data System (ADS)
Tozian, Cynthia
A coded aperture plate was employed on a conventional gamma camera for 3D single photon emission computed tomography (SPECT) imaging of small animal models. The coded aperture design was selected to improve the spatial resolution and decrease the minimum detectable activity (MDA) required to image plaque formation in the ApoE (apolipoprotein E) gene-deficient mouse model when compared to conventional SPECT techniques. The pattern that was tested was a no-two-holes-touching (NTHT) modified uniformly redundant array (MURA) having 1,920 pinholes. The number of pinholes combined with the thin sintered-tungsten plate was designed to increase the efficiency of the imaging modality over conventional gamma camera imaging methods while improving spatial resolution and reducing noise in the image reconstruction. The MDA required to image the vulnerable plaque in a human cardiac-torso mathematical phantom was simulated with a Monte Carlo code and evaluated to determine the optimum plate thickness by a receiver operating characteristic (ROC) analysis yielding the lowest possible MDA and highest area under the curve (AUC). A partial 3D expectation maximization (EM) reconstruction was developed to improve signal-to-noise ratio (SNR), dynamic range, and spatial resolution over the linear correlation method of reconstruction. This improvement was evaluated by imaging a mini hot-rod phantom, simulating the dynamic range, and by performing a bone scan of the C-57 control mouse. Results of the experimental and simulated data as well as other plate designs were analyzed for use as a small animal and potentially human cardiac imaging modality for a radiopharmaceutical developed at Bristol-Myers Squibb Medical Imaging Company, North Billerica, MA, for diagnosing vulnerable plaques. If left untreated, these plaques may rupture causing sudden, unexpected coronary occlusion and death. The results of this research indicated that imaging and reconstructing with this new partial 3D algorithm improved
Multiplexing of encrypted data using fractal masks.
Barrera, John F; Tebaldi, Myrian; Amaya, Dafne; Furlan, Walter D; Monsoriu, Juan A; Bolognini, Néstor; Torroba, Roberto
2012-07-15
In this Letter, we present to the best of our knowledge a new all-optical technique for multiple-image encryption and multiplexing, based on fractal encrypting masks. The optical architecture is a joint transform correlator. The multiplexed encrypted data are stored in a photorefractive crystal. The fractal parameters of the key can be easily tuned to lead to a multiplexing operation without cross talk effects. Experimental results that support the potential of the method are presented. PMID:22825170
A comparison of the fractal and JPEG algorithms
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Shahshahani, M.
1991-01-01
A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry-standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root-mean-square error criterion and a peak signal-to-noise ratio criterion.
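The two comparison criteria named above are easy to state concretely. A minimal sketch of the generic definitions (not the authors' evaluation code) for 8-bit images:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two images."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB, relative to the 8-bit peak value."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

# toy example: original vs. a "decompressed" image with small errors
rng = np.random.default_rng(0)
orig = rng.integers(0, 256, size=(64, 64))
recon = np.clip(orig + rng.integers(-3, 4, size=orig.shape), 0, 255)
print(rmse(orig, recon), psnr(orig, recon))
```

At a fixed compression ratio, the codec with the lower RMSE (equivalently, higher PSNR) wins under these criteria.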
4D remote sensing image coding with JPEG2000
NASA Astrophysics Data System (ADS)
Muñoz-Gómez, Juan; Bartrina-Rapesta, Joan; Blanes, Ian; Jiménez-Rodríguez, Leandro; Aulí-Llinàs, Francesc; Serra-Sagristà, Joan
2010-08-01
Multicomponent data have become popular in several scientific fields such as forest monitoring, environmental studies, and sea-water temperature detection. Nowadays, such multicomponent data can be collected more than once per year for the same region. This generates different instances in time of multicomponent data, also called 4D data (1D temporal + 1D spectral + 2D spatial). For multicomponent data, it is important to take into account inter-band redundancy to produce a more compact representation of the image by packing the energy into a smaller number of bands, thus enabling higher compression performance. The principal decorrelators used to exploit inter-band redundancy are the Karhunen-Loeve Transform (KLT) and the Discrete Wavelet Transform (DWT). Because of the added temporal dimension, the inter-band redundancy among different multicomponent images is increased. In this paper we analyze the influence of the temporal dimension (TD) and the spectral dimension (SD) of 4D data on the coding performance of JPEG2000, which supports applying different decorrelation stages and transforms to the components along the different dimensions. We evaluate the influence of applying different decorrelation techniques to the different dimensions, and also assess the performance of the two main decorrelation techniques, KLT and DWT. Experimental results are provided, showing rate-distortion performance when encoding 4D data using KLT and DWT along the TD and SD.
A CMOS Imager with Focal Plane Compression using Predictive Coding
NASA Technical Reports Server (NTRS)
Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.
2007-01-01
This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
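The predictive-coding-plus-Golomb-Rice pipeline described above can be sketched in software. This is a generic illustration, not the chip's exact design: the left-neighbor predictor, the zigzag sign mapping, and the fixed parameter k are assumptions for the sketch.

```python
def zigzag(e):
    """Map a signed prediction residual to a non-negative integer
    (0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...)."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(value, k):
    """Golomb-Rice code with parameter k >= 1: unary-coded quotient
    (value >> k ones, then a 0) followed by k binary remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "b").zfill(k)

# left-neighbor prediction along one image row, residuals Rice-coded
row = [52, 55, 61, 66, 70, 61, 64, 73]
pred = [row[0]] + row[:-1]                   # predict each pixel from its left neighbor
resid = [x - p for x, p in zip(row, pred)]   # small residuals near zero
bits = "".join(rice_encode(zigzag(e), k=2) for e in resid)
```

Because prediction concentrates residuals near zero, the Rice code spends few bits on typical pixels, which is what makes it a good fit for the column-level ADC/coder combination.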
A Probabilistic Analysis of Sparse Coded Feature Pooling and Its Application for Image Retrieval
Zhang, Yunchao; Chen, Jing; Huang, Xiujie; Wang, Yongtian
2015-01-01
Feature coding and pooling, as key components of image retrieval, have been widely studied over the past several years. Recently, sparse coding with max pooling has been regarded as the state of the art for image classification. However, there is no comprehensive study concerning the application of sparse coding to image retrieval. In this paper, we first analyze the effects of different sampling strategies for image retrieval; we then discuss feature pooling strategies and their effect on retrieval performance, with a probabilistic explanation in the context of the sparse coding framework, and propose a modified sum pooling procedure which can improve retrieval accuracy significantly. Further, we apply the sparse coding method to aggregate multiple types of features for large-scale image retrieval. Extensive experiments on commonly used evaluation datasets demonstrate that our final compact image representation improves retrieval accuracy significantly. PMID:26132080
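The pooling strategies under discussion reduce to simple per-dimension operations over the sparse codes of an image's local descriptors. A toy sketch with random codes standing in for real ones; the paper's exact modified sum pooling is not given here, so plain L2-normalized sum pooling is shown as the assumed baseline:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy stand-in: one K-dim sparse code per local descriptor (n x K)
n, K = 100, 8
codes = rng.random((n, K)) * (rng.random((n, K)) < 0.1)   # ~90% zeros

max_pooled = codes.max(axis=0)                    # max pooling (classification)
sum_pooled = codes.sum(axis=0)                    # sum pooling (retrieval)
sum_pooled /= np.linalg.norm(sum_pooled) + 1e-12  # L2-normalize the descriptor
```

Either pooled vector is a fixed-length image signature regardless of how many local descriptors the image produced, which is what makes pooling suitable for large-scale retrieval.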
NASA Astrophysics Data System (ADS)
Wang, X.; Yuan, Y.; Zhang, J.; Chen, Y.; Cheng, Y.
2014-10-01
High resolution has long been a goal of infrared imaging systems. In this paper, a super-resolution infrared imaging method using a time-varying coded mask is proposed, based on focal plane coding and compressed sensing theory. The basic idea of this method is to place a coded mask on the focal plane of the optical system, so that the same scene can be sampled many times repeatedly under a time-varying control coding strategy; the super-resolution image is then reconstructed by a sparse optimization algorithm. The simulation results are quantitatively evaluated using the Peak Signal-to-Noise Ratio (PSNR) and Modulation Transfer Function (MTF), which illustrate the effect of the compressed measurement coefficient r and the coded mask resolution m on the reconstructed image quality. The results show that the proposed method effectively improves infrared imaging quality, which will be helpful for the practical design of new high-resolution infrared imaging systems.
Mixture of Subspaces Image Representation and Compact Coding for Large-Scale Image Retrieval.
Takahashi, Takashi; Kurita, Takio
2015-07-01
There are two major approaches to content-based image retrieval using local image descriptors. One is descriptor-by-descriptor matching and the other is based on comparison of a global image representation that describes the set of local descriptors of each image. In large-scale problems, the latter is preferred due to its smaller memory requirements; however, it tends to be inferior to the former in terms of retrieval accuracy. To achieve both low memory cost and high accuracy, we investigate an asymmetric approach in which the probability distribution of local descriptors is modeled for each individual database image while the local descriptors of a query are used as is. We adopt a mixture model of probabilistic principal component analysis. The model parameters constitute a global image representation to be stored in the database. The likelihood function is then employed to compute a matching score between each database image and a query. We also propose an algorithm to encode our image representation into more compact codes. Experimental results demonstrate that our method can represent each database image in less than several hundred bytes while achieving higher retrieval accuracy than the state-of-the-art method using Fisher vectors. PMID:26352453
Novel optical password security technique based on optical fractal synthesizer
NASA Astrophysics Data System (ADS)
Wu, Kenan; Hu, Jiasheng; Wu, Xu
2009-06-01
A novel optical security technique for safeguarding user passwords based on an optical fractal synthesizer is proposed, and a validation experiment has been carried out. In the proposed technique, a user password is protected by being converted to a fractal image. When a user sets up a new password, the password is transformed into a fractal pattern, and the fractal pattern is stored by the authority. When the user is validated online, his or her password is converted to a fractal pattern again and compared with the previously stored fractal pattern. The converting process is called the fractal encoding procedure, which consists of two steps. First, the password is nonlinearly transformed to obtain the parameters for the optical fractal synthesizer. Then the optical fractal synthesizer is operated to generate the output fractal image. The experimental result proves the validity of our method. The proposed technique bridges the gap between digital security systems and optical security systems and has many advantages, such as a high security level, convenience, flexibility, and hyper-extensibility. This provides an interesting optical security technique for the protection of digital passwords.
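The two-step encoding procedure (nonlinear transform of the password into fractal parameters, then fractal generation) can be mimicked digitally. The following is a hypothetical sketch, with SHA-256 as the nonlinear transform and an escape-time Julia set standing in for the optical fractal synthesizer; neither choice is from the paper:

```python
import hashlib
import numpy as np

def password_to_fractal(password, size=64, iters=50):
    """Derive Julia-set parameters from a password hash, then render an
    escape-time fractal. The rendered pattern plays the role of the
    optically generated fractal stored by the authority (assumed scheme)."""
    h = hashlib.sha256(password.encode()).digest()
    # nonlinear transform: first 4 hash bytes -> complex parameter c
    c = complex((h[0] * 256 + h[1]) / 65535 - 0.5,
                (h[2] * 256 + h[3]) / 65535 - 0.5)
    y, x = np.mgrid[-1.5:1.5:size * 1j, -1.5:1.5:size * 1j]
    z = (x + 1j * y).astype(complex)
    img = np.zeros(z.shape, dtype=int)
    for n in range(iters):
        alive = np.abs(z) <= 2          # points that have not escaped yet
        z[alive] = z[alive] ** 2 + c    # Julia iteration z -> z^2 + c
        img[alive] = n                  # record escape time
    return img

stored = password_to_fractal("correct horse")                        # enrollment
assert np.array_equal(stored, password_to_fractal("correct horse"))  # login check
```

Validation then amounts to regenerating the pattern from the submitted password and comparing it with the stored one; a different password yields different Julia parameters and, almost surely, a different pattern.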
Local coding based matching kernel method for image classification.
Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong
2014-01-01
This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large-scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method. PMID:25119982
A blind dual color images watermarking based on IWT and state coding
NASA Astrophysics Data System (ADS)
Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu
2012-04-01
In this paper, a state-coding based blind watermarking algorithm is proposed to embed a color image watermark into a color host image. The technique of state coding, which makes the state code of a data set equal to the hidden watermark information, is introduced. When embedding the watermark, the R, G, and B components of the color image watermark are embedded into the Y, Cr, and Cb components of the color host image using the Integer Wavelet Transform (IWT) and the rules of state coding. Moreover, the rules of state coding are also used to extract the watermark from the watermarked image without resorting to the original watermark or original host image. Experimental results show that the proposed watermarking algorithm not only meets the demands of invisibility and robustness of the watermark, but also performs well compared with the other methods considered in this work.
Edges of Saturn's rings are fractal.
Li, Jun; Ostoja-Starzewski, Martin
2015-01-01
The images recently sent by the Cassini spacecraft mission (on the NASA website http://saturn.jpl.nasa.gov/photos/halloffame/) show the complex and beautiful rings of Saturn. Over the past few decades, various conjectures were advanced that Saturn's rings are Cantor-like sets, although no convincing fractal analysis of actual images has ever appeared. Here we focus on four images sent by the Cassini spacecraft mission (slide #42 "Mapping Clumps in Saturn's Rings", slide #54 "Scattered Sunshine", slide #66 taken two weeks before the planet's August 2009 equinox, and slide #68 showing edge waves raised by Daphnis in the Keeler Gap) and one image from the Voyager 2 mission in 1981. Using three box-counting methods, we determine the fractal dimension of the edges of the rings seen here to be consistently about 1.63 to 1.78. This clarifies in what sense Saturn's rings are fractal. PMID:25883885
Adaptive lifting scheme with sparse criteria for image coding
NASA Astrophysics Data System (ADS)
Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe
2012-12-01
Lifting schemes (LS) were found to be efficient tools for image coding purposes. Since LS-based decompositions depend on the choice of the prediction/update operators, many research efforts have been devoted to the design of adaptive structures. The most commonly used approaches optimize the prediction filters by minimizing the variance of the detail coefficients. In this article, we investigate techniques for optimizing sparsity criteria by focusing on the use of an ℓ1 criterion instead of an ℓ2 one. Since the output of a prediction filter may be used as an input for the other prediction filters, we then propose to optimize such a filter by minimizing a weighted ℓ1 criterion related to the global rate-distortion performance. More specifically, it will be shown that the optimization of the diagonal prediction filter depends on the optimization of the other prediction filters and vice versa. Related to this fact, we propose to jointly optimize the prediction filters by using an algorithm that alternates between the optimization of the filters and the computation of the weights. Experimental results show the benefits which can be drawn from the proposed optimization of the lifting operators.
A novel approach to correct the coded aperture misalignment for fast neutron imaging
NASA Astrophysics Data System (ADS)
Zhang, F. N.; Hu, H. S.; Zhang, T. K.; Jia, Q. G.; Wang, D. M.; Jia, J.
2015-12-01
Aperture alignment is crucial for the diagnosis of neutron imaging because it has significant impact on the coding imaging and the understanding of the neutron source. In our previous studies on the neutron imaging system with coded aperture for large field of view, "residual watermark," certain extra information that overlies reconstructed image and has nothing to do with the source is discovered if the peak normalization is employed in genetic algorithms (GA) to reconstruct the source image. Some studies on basic properties of residual watermark indicate that the residual watermark can characterize coded aperture and can thus be used to determine the location of coded aperture relative to the system axis. In this paper, we have further analyzed the essential conditions for the existence of residual watermark and the requirements of the reconstruction algorithm for the emergence of residual watermark. A gamma coded imaging experiment has been performed to verify the existence of residual watermark. Based on the residual watermark, a correction method for the aperture misalignment has been studied. A multiple linear regression model of the position of coded aperture axis, the position of residual watermark center, and the gray barycenter of neutron source with twenty training samples has been set up. Using the regression model and verification samples, we have found the position of the coded aperture axis relative to the system axis with an accuracy of approximately 20 μm. Conclusively, a novel approach has been established to correct the coded aperture misalignment for fast neutron coded imaging.
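The multiple linear regression step described above can be sketched with ordinary least squares. The data below are synthetic stand-ins for the twenty training samples, and the predictor layout (watermark-center coordinates plus source gray-barycenter coordinates) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic stand-in for the 20 training samples: predictors are the
# residual-watermark center (x, y) and the source gray barycenter (x, y)
X = rng.normal(size=(20, 4))
true_coef = np.array([0.8, -0.3, 0.1, 0.05])
y = X @ true_coef + 1.5 + rng.normal(scale=0.01, size=20)  # axis position

A = np.hstack([X, np.ones((20, 1))])          # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # fit the regression model
pred = A @ coef                               # fitted axis positions
```

With the model fitted, a new measurement of the watermark center and source barycenter yields a predicted aperture-axis position, which is the quantity the paper reports to about 20 μm accuracy.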
Fractal analysis of DNA sequence data
Berthelsen, C.L.
1993-01-01
DNA sequence databases are growing at an almost exponential rate. New analysis methods are needed to extract knowledge about the organization of nucleotides from this vast amount of data. Fractal analysis is a new scientific paradigm that has been used successfully in many domains including the biological and physical sciences. Biological growth is a nonlinear dynamic process and some have suggested that to consider fractal geometry as a biological design principle may be most productive. This research is an exploratory study of the application of fractal analysis to DNA sequence data. A simple random fractal, the random walk, is used to represent DNA sequences. The fractal dimension of these walks is then estimated using the "sandbox method." Analysis of 164 human DNA sequences compared to three types of control sequences (random, base-content matched, and dimer-content matched) reveals that long-range correlations are present in DNA that are not explained by base or dimer frequencies. The study also revealed that the fractal dimension of coding sequences was significantly lower than sequences that were primarily noncoding, indicating the presence of longer-range correlations in functional sequences. The multifractal spectrum is used to analyze fractals that are heterogeneous and have a different fractal dimension for subsets with different scalings. The multifractal spectrum of the random walks of twelve mitochondrial genome sequences was estimated. Eight vertebrate mtDNA sequences had uniformly lower spectra values than did four invertebrate mtDNA sequences. Thus, vertebrate mitochondria show significantly longer-range correlations than do invertebrate mitochondria. The higher multifractal spectra values for invertebrate mitochondria suggest a more random organization of the sequences. This research also includes considerable theoretical work on the effects of finite size, embedding dimension, and scaling ranges.
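The random-walk representation described here is easy to reproduce. In the sketch below, a seeded random sequence stands in for real DNA, and a simple RMS-displacement fit stands in for the sandbox estimator; for an uncorrelated sequence the exponent should land near 0.5:

```python
import numpy as np

rng = np.random.default_rng(42)

# Map a nucleotide sequence to steps: purines (A, G) -> +1,
# pyrimidines (C, T) -> -1, then cumulatively sum into a walk.
seq = rng.choice(list("ACGT"), size=4096)
steps = np.where(np.isin(seq, ["A", "G"]), 1, -1)
walk = np.cumsum(steps)

# Estimate the scaling exponent H from RMS displacement over lags:
# long-range correlations would push H away from the random value 0.5.
lags = np.array([4, 8, 16, 32, 64, 128, 256])
rms = np.array([np.sqrt(np.mean((walk[lag:] - walk[:-lag]) ** 2))
                for lag in lags])
H = np.polyfit(np.log(lags), np.log(rms), 1)[0]
```

Replacing `seq` with a real sequence (and the RMS fit with the sandbox method) gives the analysis the dissertation performs.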
A new balanced modulation code for a phase-image-based holographic data storage system
NASA Astrophysics Data System (ADS)
John, Renu; Joseph, Joby; Singh, Kehar
2005-08-01
We propose a new balanced modulation code for coding data pages in phase-image-based holographic data storage systems. The code addresses the coding subtleties that arise in phase-based systems when performing a content-based search in a holographic database. It is a modification of the existing 8:12 modulation code and removes the false hits that occur in phase-based content-addressable systems due to phase-pixel subtractions. Using simulations and experiments, we demonstrate the better performance of the new code in terms of discrimination ratio during content addressing through a holographic memory. The new code is compared with the conventional coding scheme to analyse the false hits caused by subtraction of phase pixels.
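The balanced-coding idea (every codeword carries a fixed number of "on" pixels) can be sketched in a few lines. The codebook construction below is a generic illustration of an 8-to-12-bit balanced mapping, not the authors' specific 8:12 code:

```python
from itertools import combinations

# Build a codebook of 12-bit words that each contain exactly six 1s.
# C(12, 6) = 924 >= 256, so every 8-bit byte gets a balanced codeword.
codebook = []
for ones in combinations(range(12), 6):
    word = 0
    for pos in ones:
        word |= 1 << pos
    codebook.append(word)
codebook = codebook[:256]
decode_table = {w: i for i, w in enumerate(codebook)}

def encode(byte: int) -> int:
    """Map one byte to a balanced 12-bit codeword (six on-pixels)."""
    return codebook[byte]

def decode(word: int) -> int:
    """Recover the byte from its balanced codeword."""
    return decode_table[word]
```

The constant popcount is what makes the page "balanced"; which 256 of the 924 candidate words to pick (here simply the first 256) is exactly where a real design would optimize against phase-subtraction false hits.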
Exploring Fractals in the Classroom.
ERIC Educational Resources Information Center
Naylor, Michael
1999-01-01
Describes an activity involving six investigations. Introduces students to fractals, allows them to study the properties of some famous fractals, and encourages them to create their own fractal artwork. Contains 14 references. (ASK)
Wireless image transmission using turbo codes and optimal unequal error protection.
Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G
2005-11-01
A novel image transmission scheme is proposed for the communication of set partitioning in hierarchical trees (SPIHT) image streams over wireless channels. The proposed scheme employs turbo codes and Reed-Solomon codes in order to deal effectively with burst errors. An algorithm for the optimal unequal error protection of the compressed bitstream is also proposed and applied in conjunction with an inherently more efficient technique for product code decoding. The resulting scheme is tested for the transmission of images over wireless channels. Experimental evaluation clearly demonstrates the superiority of the proposed transmission system in comparison to well-known robust coding schemes. PMID:16279187
Fractals: To Know, to Do, to Simulate.
ERIC Educational Resources Information Center
Talanquer, Vicente; Irazoque, Glinda
1993-01-01
Discusses the development of fractal theory and suggests fractal aggregates as an attractive alternative for introducing fractal concepts. Describes methods for producing metallic fractals and a computer simulation for drawing fractals. (MVL)
Hands-On Fractals and the Unexpected in Mathematics
ERIC Educational Resources Information Center
Gluchoff, Alan
2006-01-01
This article describes a hands-on project in which unusual fractal images are produced using only a photocopy machine and office supplies. The resulting images are an example of the contraction mapping principle.
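The contraction mapping principle mentioned here is easy to visualize in code. A minimal chaos-game sketch (the standard Sierpinski construction, not the photocopy procedure itself) could be:

```python
import random

random.seed(1)

# Three contraction maps, each halving the distance to one vertex of a
# triangle; iterating a random choice of them traces out the Sierpinski
# gasket, the attractor guaranteed by the contraction mapping principle.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(n_points: int, burn_in: int = 20):
    x, y = 0.0, 0.0
    pts = []
    for i in range(n_points + burn_in):
        vx, vy = random.choice(vertices)
        x, y = (x + vx) / 2.0, (y + vy) / 2.0   # contraction by factor 1/2
        if i >= burn_in:
            pts.append((x, y))
    return pts

points = chaos_game(5000)
```

Plotting `points` shows the gasket; the photocopy activity realizes the same fixed-point iteration with reduced copies instead of arithmetic.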
Fractal Geometry of Architecture
NASA Astrophysics Data System (ADS)
Lorenz, Wolfgang E.
In fractals, the smaller parts and the whole are linked together. Fractals are self-similar, as those parts are, at least approximately, scaled-down copies of the rough whole. In architecture, such a concept has also been known for a long time: not only did architects of the twentieth century call for an overall idea mirrored in every single detail, but Gothic cathedrals and Indian temples also offer self-similarity. This study mainly focuses on the question of whether this concept of self-similarity makes architecture with fractal properties more diverse and interesting than Euclidean Modern architecture. The first part gives an introduction and explains fractal properties in various natural and architectural objects, presenting the underlying structure by computer-programmed renderings. In this connection, differences between the architectural concept of fractals and true, mathematical fractals are worked out to become aware of limits. This is the basis for dealing with the problem of whether fractal-like architecture, particularly facades, can be measured so that different designs can be compared with each other with respect to fractal properties. Finally, the usability of the box-counting method, an easy-to-use measurement of fractal dimension, is analyzed with regard to architecture.
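The box-counting method that the study evaluates can be sketched directly. The toy image below, a filled square whose dimension should come out as 2, is only a sanity check, not an architectural facade:

```python
import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate fractal dimension by counting occupied boxes at each scale.

    `img` is a 2-D boolean array; every size in `sizes` should divide the
    image side length.
    """
    counts = []
    h, w = img.shape
    for s in sizes:
        occupied = 0
        # Partition into s x s boxes and count boxes containing any pixel.
        for i in range(0, h, s):
            for j in range(0, w, s):
                if img[i:i + s, j:j + s].any():
                    occupied += 1
        counts.append(occupied)
    # Slope of log N(s) versus log(1/s) is the box-counting dimension.
    slope = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]
    return slope

square = np.ones((256, 256), dtype=bool)   # a filled square has dimension 2
d = box_count_dimension(square)
```

For a binarized facade image, `d` would fall between 1 and 2, which is the quantity the study uses to compare designs.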
Study on GEANT4 code applications to dose calculation using imaging data
NASA Astrophysics Data System (ADS)
Lee, Jeong Ok; Kang, Jeong Ku; Kim, Jhin Kee; Kwon, Hyeong Cheol; Kim, Jung Soo; Kim, Bu Gil; Jeong, Dong Hyeok
2015-07-01
The use of the GEANT4 code has increased in the medical field. Various studies have calculated patient dose distributions by using the GEANT4 code with imaging data. In the present study, Monte Carlo simulations based on DICOM data were performed to calculate the dose absorbed in the patient's body. Various visualization tools are installed in the GEANT4 code to display the detector construction; however, the display of DICOM images is limited, and displaying dose distributions on the patient's imaging data is difficult. Recently, the gMocren code, a volume visualization tool for GEANT4 simulation, was developed and has been used for the volume visualization of image files. In this study, imaging of the dose distributions absorbed in patients was performed by using the gMocren code. Dosimetric evaluations were carried out by using thermoluminescent dosimeters and film dosimetry to verify the calculated results.
FAST TRACK COMMUNICATION: Weyl law for fat fractals
NASA Astrophysics Data System (ADS)
Spina, María E.; García-Mata, Ignacio; Saraceno, Marcos
2010-10-01
It has been conjectured that for a class of piecewise linear maps the closure of the set of images of the discontinuity has the structure of a fat fractal, that is, a fractal with positive measure. An example of such maps is the sawtooth map in the elliptic regime. In this work we analyze this problem quantum mechanically in the semiclassical regime. We find that the fraction of states localized on the unstable set satisfies a modified fractal Weyl law, where the exponent is given by the exterior dimension of the fat fractal.
Generating fractal patterns by using p-circle inversion
NASA Astrophysics Data System (ADS)
Ramírez, José L.; Rubiano, Gustavo N.; Zlobec, Borut Jurčič
2015-10-01
In this paper, we introduce the p-circle inversion, which generalizes the classical inversion with respect to a circle (p = 2) and the taxicab inversion (p = 1). We study some basic properties and also show the inversive images of some basic curves. We apply this new transformation to well-known fractals such as the Sierpinski triangle, the Koch curve, the dragon curve, and the Fibonacci fractal, among others, and obtain new fractal patterns. Moreover, we generalize the method called circle inversion fractal by means of the p-circle inversion.
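One plausible reading of the p-circle inversion, which we assume here for illustration (the image of a point lies on the same ray from the center, with the product of p-norm distances equal to r squared; for p = 2 this reduces to classical circle inversion), can be sketched as:

```python
import math

def p_norm(v, p: float) -> float:
    """The l_p norm of a 2-D vector (p = 1 gives the taxicab distance)."""
    return (abs(v[0]) ** p + abs(v[1]) ** p) ** (1.0 / p)

def p_circle_inversion(point, center, r: float, p: float):
    """Invert `point` in the p-circle of radius r about `center`.

    Assumed definition: the image lies on the ray from the center through
    the point, with d_p(center, point) * d_p(center, image) = r**2, which
    makes the map an involution for every p.
    """
    dx, dy = point[0] - center[0], point[1] - center[1]
    d = p_norm((dx, dy), p)
    scale = (r * r) / (d * d)
    return (center[0] + scale * dx, center[1] + scale * dy)
```

Applying the map to every point of a fractal (e.g. the Sierpinski triangle) with different p values yields the new patterns the paper explores.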
Fractal dimension analyses of lava surfaces and flow boundaries
NASA Technical Reports Server (NTRS)
Cleghorn, Timothy F.
1993-01-01
An improved method of estimating fractal surface dimensions has been developed. The accuracy of this method is illustrated using artificially generated fractal surfaces. A concept of linear dimension slightly different from the usual one is developed, allowing a direct link between it and the corresponding surface dimension estimate. These methods are applied to a series of images of lava flows representing a variety of physical and chemical conditions, including lavas from California, Idaho, and Hawaii, as well as some extraterrestrial flows. The fractal surface dimension estimates are presented, as well as the fractal line dimensions where appropriate.
Snapshot 2D tomography via coded aperture x-ray scatter imaging
MacCabe, Kenneth P.; Holmgren, Andrew D.; Tornai, Martin P.; Brady, David J.
2015-01-01
This paper describes a fan beam coded aperture x-ray scatter imaging system which acquires a tomographic image from each snapshot. This technique exploits cylindrical symmetry of the scattering cross section to avoid the scanning motion typically required by projection tomography. We use a coded aperture with a harmonic dependence to determine range, and a shift code to determine cross-range. Here we use a forward-scatter configuration to image 2D objects and use serial exposures to acquire tomographic video of motion within a plane. Our reconstruction algorithm also estimates the angular dependence of the scattered radiance, a step toward materials imaging and identification. PMID:23842254
The use of fractal dimension for texture classification
Dixon, K.E.
1989-04-01
This paper addresses the idea of using fractal dimension as a measure of image texture. The computation of the fractal dimension of a grey-scale image and also of the "fractal signature" of the image is presented. Two methods of scanning the image for these calculations are introduced: a line scan and a window scan. Several subsets of features extracted from the calculations are investigated as features for classification of the texture. Results from various classification experiments are presented. 5 refs., 8 tabs.
Scalable Coding of Plenoptic Images by Using a Sparse Set and Disparities.
Li, Yun; Sjostrom, Marten; Olsson, Roger; Jennehag, Ulf
2016-01-01
Focused plenoptic capturing is one of the light-field capturing techniques. By placing a microlens array in front of the photosensor, focused plenoptic cameras capture both the spatial and angular information of a scene, within each microlens image and across microlens images. The capture results in a significant amount of redundant information, and the captured image usually has a large resolution. A coding scheme that removes the redundancy before coding can be advantageous for efficient compression, transmission, and rendering. In this paper, we propose a lossy coding scheme to efficiently represent plenoptic images. The format contains a sparse image set and its associated disparities. The reconstruction is performed by disparity-based interpolation and inpainting, and the reconstructed image is then employed as a prediction reference for the coding of the full plenoptic image. As an outcome of this representation, the proposed scheme inherits a scalable structure with three layers. The results show that plenoptic images are compressed efficiently, with over 60 percent bit rate reduction compared with High Efficiency Video Coding intra coding, and over 20 percent compared with a High Efficiency Video Coding block copying mode. PMID:26561433
Coded illumination for motion-blur free imaging of cells on cell-phone based imaging flow cytometer
NASA Astrophysics Data System (ADS)
Saxena, Manish; Gorthi, Sai Siva
2014-10-01
Cell-phone based imaging flow cytometry can be realized by flowing cells through microfluidic devices and capturing their images with the optically enhanced camera of the cell-phone. Throughput in flow cytometers is usually enhanced by increasing the flow rate of the cells. However, the maximum frame rate of the camera system limits the achievable flow rate; beyond it, the images become highly blurred due to motion smear. We propose to address this issue with coded illumination, which enables the recovery of high-fidelity images of cells far beyond their motion-blur limit. This paper presents simulation results of deblurring synthetically generated cell/bead images under such coded illumination.
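The benefit of coded (rather than continuous) illumination is that the resulting motion-blur kernel remains invertible. A minimal 1-D sketch (invented flutter code, toy cell profile, and a simple Wiener-style inverse; not the authors' simulation) might be:

```python
import numpy as np

N = 64
t = np.arange(N)

# A smooth "cell" intensity profile and an invented 8-chop flutter code.
signal = np.exp(-((t - 32) / 6.0) ** 2)
code = np.array([1, 1, 0, 1, 0, 1, 1, 1], dtype=float)
psf = np.zeros(N)
psf[:code.size] = code / code.sum()

# Motion blur is (circular) convolution with the exposure code.
H = np.fft.fft(psf)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))

# A regularized (Wiener-style) inverse filter recovers the sharp profile,
# because the coded exposure keeps most spectral magnitudes well above zero.
eps = 1e-4
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(H)
                               / (np.abs(H) ** 2 + eps)))
```

A plain box exposure of the same length would place exact nulls in `H`, making those frequencies unrecoverable; that spectral difference is the core of the coded-illumination argument.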
Subband Image Coding Using Entropy-Constrained Residual Vector Quantization.
ERIC Educational Resources Information Center
Kossentini, Faouzi; And Others
1994-01-01
Discusses a flexible, high performance subband coding system. Residual vector quantization is discussed as a basis for coding subbands, and subband decomposition and bit allocation issues are covered. Experimental results showing the quality achievable at low bit rates are presented. (13 references) (KRN)
Fractal analysis of yeast cell optical speckle
NASA Astrophysics Data System (ADS)
Flamholz, A.; Schneider, P. S.; Subramaniam, R.; Wong, P. K.; Lieberman, D. H.; Cheung, T. D.; Burgos, J.; Leon, K.; Romero, J.
2006-02-01
Steady-state laser light propagation in diffuse media such as biological cells generally provides bulk parameter information, such as the mean free path and absorption, via the transmission profile. The accompanying optical speckle can be analyzed as a random spatial data series, and its fractal dimension can be used to further classify biological media that show similar mean free path and absorption properties, such as those obtained from a single population. A population of yeast cells can be separated into different portions by centrifuge, and microscope analysis can be used to provide the population statistics. Fractal analysis of the speckle suggests that a lower fractal dimension is associated with higher cell packing density. The spatial intensity correlation revealed that higher cell packing gives rise to a higher refractive index. A calibration sample system that behaves similarly to the yeast samples in fractal dimension, spatial intensity correlation, and diffusion was selected: porous silicate slabs with different refractive index values controlled by water content were used for system calibration. The fractal dimension of the porous glass, as well as that of the yeast random spatial data series, was found to depend on the imaging resolution. The fractal method was also applied to fission yeast single-cell fluorescence data as well as aging yeast optical data, and consistency was demonstrated. It is concluded that fractal analysis can be a high-sensitivity tool for the relative comparison of cell structure, but that additional diffusion measurements are necessary for determining the optimal image resolution. Practical application to dental plaque biofilm and cam-pill endoscope images was also demonstrated.
Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin
2016-01-01
One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of the visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to the 'complex' visual stimulus. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain as the main part of the nervous system engaged in eye movements, we analyzed the electroencephalogram (EEG) signal recorded during fixation. We found that there is a coupling between the fractality of the image, the EEG, and the fixational eye movements. The capability observed in this research can be further investigated and applied to the treatment of different vision disorders. PMID:27217194
NASA Astrophysics Data System (ADS)
Chang, Kuo-En; Lin, Tang-Huang; Lien, Wei-Hung
2015-04-01
Anthropogenic pollutants and smoke from biomass burning contribute significantly to global particle-aggregate emissions, yet their aggregate formation and resulting ensemble optical properties are poorly understood and parameterized in climate models. Particle aggregation refers to the formation of clusters in a colloidal suspension. In clustering algorithms, many parameters, such as the fractal dimension, the number of monomers, the monomer radius, and the real and imaginary parts of the refractive index, alter the geometries and characteristics of the fractal aggregate and in turn change its ensemble optical properties. The cluster-cluster aggregation (CCA) algorithm is used to specify the geometries of soot and haze particles. In addition, the Generalized Multi-particle Mie (GMM) method is utilized to compute the Mie solution from the single-particle to the multi-particle case; the computer code for calculating the scattering by an aggregate of spheres in a fixed orientation, along with the experimental data, has been made publicly available. This study presents the determination of the model inputs: the monomer radius, the number of monomers per cluster, and the fractal dimension. The main aim is to analyze and contrast these cluster-aggregation parameters, which produce significant differences in the optical properties computed with the GMM method. Keywords: optical properties, fractal aggregation, GMM, CCA
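Fractal aggregates are commonly parameterized through the scaling law N = k_f (R_g / a)^{D_f} relating monomer count, radius of gyration, and fractal dimension. A minimal sketch (toy straight-chain aggregates, for which the fitted D_f should come out near 1; not the CCA or GMM codes themselves) might be:

```python
import numpy as np

def radius_of_gyration(positions):
    """R_g of a set of monomer centers (equal-mass monomers assumed)."""
    com = positions.mean(axis=0)
    return np.sqrt(np.mean(np.sum((positions - com) ** 2, axis=1)))

a = 0.5                       # monomer radius (arbitrary units)
sizes = [8, 16, 32, 64, 128]
log_n, log_rg = [], []
for n in sizes:
    # Toy aggregate: a straight chain of touching monomers (D_f -> 1).
    chain = np.zeros((n, 3))
    chain[:, 0] = 2 * a * np.arange(n)
    log_n.append(np.log(n))
    log_rg.append(np.log(radius_of_gyration(chain) / a))

# Fit N = k_f * (R_g / a)**D_f  =>  log N = D_f * log(R_g / a) + log k_f.
D_f = np.polyfit(log_rg, log_n, 1)[0]
```

A real CCA cluster built the same way would yield D_f near 1.8, and that value, together with the monomer radius and count, is exactly the geometric input the GMM scattering calculation consumes.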
Fractal Analysis of Cervical Intraepithelial Neoplasia
Fabrizii, Markus; Moinfar, Farid; Jelinek, Herbert F.; Karperien, Audrey; Ahammer, Helmut
2014-01-01
Introduction: Cervical intraepithelial neoplasias (CIN) represent precursor lesions of cervical cancer. These neoplastic lesions are traditionally subdivided into three categories, CIN 1, CIN 2, and CIN 3, using microscopic criteria. The relation between the grade of cervical intraepithelial neoplasia and its fractal dimension was investigated to establish a basis for an objective diagnosis using the proposed method. Methods: Classical evaluation of the tissue samples was performed by an experienced gynecologic pathologist. Tissue samples were scanned and saved as digital images using an Aperio scanner and software. After image segmentation, the box-counting method as well as multifractal methods were applied to determine the relation between fractal dimension and grade of CIN. A total of 46 images were used to compare the pathologist's neoplasia grades with the predicted groups obtained by fractal methods. Results: Significant or highly significant differences between all grades of CIN were found. The confusion matrix comparing the pathologist's grading with the group predicted by fractal methods showed a match of 87.1%. Multifractal spectra were able to differentiate between normal epithelium and both low-grade and high-grade neoplasia. Conclusion: Fractal dimension can be considered an objective parameter for grading cervical intraepithelial neoplasia. PMID:25302712
Fractal structures and processes
Bassingthwaighte, J.B.; Beard, D.A.; Percival, D.B.; Raymond, G.M.
1996-06-01
Fractals and chaos are closely related. Many chaotic systems have fractal features. Fractals are self-similar or self-affine structures, which means that they look much the same when magnified or reduced in scale over a reasonably large range of scales, at least two orders of magnitude and preferably more (Mandelbrot, 1983). The methods for estimating their fractal dimensions or their Hurst coefficients, which summarize the scaling relationships and their correlation structures, are going through a rapid evolutionary phase. Fractal measures can be regarded as providing a useful statistical measure of correlated random processes. They also provide a basis for analyzing recursive processes in biology, such as the growth of arborizing networks in the circulatory system, airways, or glandular ducts. © 1996 American Institute of Physics.
Product code optimization for determinate state LDPC decoding in robust image transmission.
Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G
2006-08-01
We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission. PMID:16900669
Reduction of artefacts and noise for a wavefront coding athermalized infrared imaging system
NASA Astrophysics Data System (ADS)
Feng, Bin; Zhang, Xiaodong; Shi, Zelin; Xu, Baoshu; Zhang, Chengshuo
2016-07-01
Because of serious drawbacks, including artefacts and noise in the decoded image, existing wavefront coding infrared imaging systems are severely restricted in application. The proposed ultra-precision diamond machining technique manufactures an optical phase mask with form manufacturing errors of approximately 770 nm and a surface roughness Ra of 5.44 nm. The proposed decoding method outperforms the classical Wiener filtering method in three indices: mean square error, mean structural similarity index, and noise equivalent temperature difference. Based on the results above and the basic principle of the wavefront coding technique, this paper further develops a wavefront coding infrared imaging system. Experimental results prove that our wavefront coding infrared imaging system yields a decoded image of good quality over a temperature range from -40 °C to +70 °C.
Lossless compression scheme of superhigh-definition images by partially decodable Golomb-Rice code
NASA Astrophysics Data System (ADS)
Kato, Shigeo; Hasegawa, Madoka; Guo, Muling
1998-12-01
Multimedia communication systems using super-high-definition (SHD) images are widely desired in various communities, such as medical imagery, digital museums, and digital libraries. There are, however, many requirements on SHD image communication systems because of the high pixel accuracy and high resolution of SHD images. We consider the mandatory functions that should be realized in SHD image application systems to be summarized in three items: reversibility, scalability, and progressibility. This paper proposes an SHD image communication system based on reversibility, scalability, and progressibility. To realize reversibility and progressibility, a lossless wavelet transform coding method is introduced as the coding model. To realize scalability, a partially decodable entropy code is proposed. In particular, we focus on a partially decodable coding method for realizing the scalability function.
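Golomb-Rice coding itself is simple to sketch. The codec below is generic Rice coding with parameter k (quotient in unary, remainder in k binary digits), not the paper's partially decodable variant:

```python
def rice_encode(values, k: int) -> str:
    """Encode nonnegative integers with Rice parameter k >= 1 (M = 2**k)."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0")           # quotient in unary
        bits.append(format(r, f"0{k}b"))     # remainder in k binary digits
    return "".join(bits)

def rice_decode(bitstream: str, k: int):
    """Decode a bitstream produced by rice_encode with the same k."""
    values, i = [], 0
    while i < len(bitstream):
        q = 0
        while bitstream[i] == "1":           # read the unary quotient
            q += 1
            i += 1
        i += 1                               # skip the terminating 0
        r = int(bitstream[i:i + k], 2)
        i += k
        values.append((q << k) | r)
    return values
```

Because each codeword is self-delimiting, decoding can resume at any codeword boundary, which is the property a partially decodable arrangement exploits for scalability.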
Wave-front coded optical readout for the MEMS-based uncooled infrared imaging system
NASA Astrophysics Data System (ADS)
Li, Tian; Zhao, Yuejin; Dong, Liquan; Liu, Xiaohua; Jia, Wei; Hui, Mei; Yu, Xiaomei; Gong, Cheng; Liu, Weiyu
2012-11-01
In the space-limited MEMS-based infrared imaging system, adjustment of the optical readout part is inconvenient. This paper proposes a wavefront coding method to extend the depth of focus/field of the optical readout system, to solve the problem above, and to reduce the demands on precision in the machining and assembly of the optical readout system itself. The wavefront-coded imaging system consists of optical coding and digital decoding. By adding a cubic phase mask (CPM) at the pupil plane, the system becomes insensitive to defocus within an extended range: it has similar PSFs, and almost equally blurred intermediate images are obtained. Sharp images can then be recovered by image restoration algorithms using the common PSF as the decoding kernel. For comparison, we studied a conventional optical imaging system with the same optical performance as the wavefront-coded one. Analogue imaging experiments were carried out, and one PSF was used as a simple direct inverse filter for image restoration. Relatively sharp restored images were obtained, whereas the analogue defocused images of the conventional system were badly degraded. Using the decrease of the MTF as a criterion, we found that the depth of focus/field of the wavefront-coded system had been extended significantly.
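The defocus insensitivity of a cubic phase mask can be checked numerically. A small sketch (invented grid size and phase strengths; the comparison metric is ours) contrasting in-focus and defocused MTFs with and without the CPM might be:

```python
import numpy as np

N = 128
u = np.linspace(-1, 1, N)
U, V = np.meshgrid(u, u)
aperture = (U**2 + V**2) <= 1.0

def mtf(alpha, psi):
    """MTF for pupil phase alpha*(u^3 + v^3) (CPM) plus defocus psi*(u^2 + v^2)."""
    pupil = aperture * np.exp(1j * (alpha * (U**3 + V**3) + psi * (U**2 + V**2)))
    psf = np.abs(np.fft.fft2(pupil, s=(2 * N, 2 * N))) ** 2
    otf = np.abs(np.fft.fft2(psf))
    return otf / otf.max()

alpha, psi = 30.0, 10.0    # invented strengths: cubic phase >> defocus
plain = np.linalg.norm(mtf(0.0, psi) - mtf(0.0, 0.0))   # MTF shift, no CPM
coded = np.linalg.norm(mtf(alpha, psi) - mtf(alpha, 0.0))  # with CPM
```

A smaller `coded` value is the numerical signature of the "similar PSFs" property: the blurred-but-stable MTF is what makes a single decoding kernel work across the defocus range.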
NASA Astrophysics Data System (ADS)
Chen, Di; Dong, Liquan; Zhao, Yuejin; Liu, Xiaohua; Li, Cuiling; Guo, Xiaohu; Zhao, Zhu; Wu, Yijian
2014-09-01
We describe the application of the wavefront coding technique to an infrared imaging system to control thermal defocus. For a traditional infrared imaging system, athermalization is necessary to maintain imaging performance, which may increase the complexity and cost of the system. Wavefront coding adds a phase mask at the pupil that re-modulates the wavefront so as to produce an encoded image; after digital processing, the system is insensitive to defocus. In this paper, the combination of the wavefront coding technique with an infrared imaging system is discussed. We report the optical design of the wavefront-coded IR system based on Zemax. The phase mask is designed to ensure that the modulation transfer function (MTF) is approximately invariant over the range of working temperatures. Meanwhile, we designed three IR systems for comparison experiments: systems one and two compare the influence of the system before and after insertion of the phase mask, and system three compares the imaging performance before and after reducing the lens count in the wavefront-coded IR system. The simulation results show that the infrared imaging system based on wavefront coding can control thermal defocus over a temperature range from -60 °C to +60 °C, while the weight and cost of the optical elements are reduced by approximately 40%.
Babel, Marie; Parrein, Benoît; Déforges, Olivier; Normand, Nicolas; Guédon, Jean-Pierre; Coat, Véronique
2008-06-01
The joint source-channel coding system proposed in this paper has two aims: lossless compression with a progressive mode and the integrity of medical data, which takes into account the priorities of the image and the properties of a network with no guaranteed quality of service. In this context, the use of scalable coding, locally adapted resolution (LAR) and a discrete and exact Radon transform, known as the Mojette transform, meets this twofold requirement. In this paper, details of this joint coding implementation are provided as well as a performance evaluation with respect to the reference CALIC coding and to unequal error protection using Reed-Solomon codes. PMID:18289830
A coded aperture imaging system optimized for hard X-ray and gamma ray astronomy
NASA Technical Reports Server (NTRS)
Gehrels, N.; Cline, T. L.; Huters, A. F.; Leventhal, M.; Maccallum, C. J.; Reber, J. D.; Stang, P. D.; Teegarden, B. J.; Tueller, J.
1985-01-01
A coded aperture imaging system was designed for the Gamma-Ray Imaging Spectrometer (GRIS). The system is optimized for imaging 511 keV positron-annihilation photons. For a galactic center 511 keV source strength of 0.001 photons/(sq cm s), the source location accuracy is expected to be ±0.2°.
Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging
NASA Astrophysics Data System (ADS)
Diaz, Nelson; Rueda, Hoover; Arguello, Henry
2016-05-01
Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers, since they define the sensing matrix of the system, and proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, so the design of the coded apertures must take saturation into account: saturation errors in compressive measurements are unbounded, while compressive sensing recovery algorithms only provide guarantees for noise that is bounded, or bounded with high probability. This paper proposes the design of uniform adaptive grayscale coded apertures (UAGCA) to improve the dynamic range of the estimated spectral images by reducing saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements of up to 10 dB in image reconstruction over uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).
Fractal metrology for biogeosystems analysis
NASA Astrophysics Data System (ADS)
Torres-Argüelles, V.; Oleschko, K.; Tarquis, A. M.; Korvin, G.; Gaona, C.; Parrot, J.-F.; Ventura-Ramos, E.
2010-06-01
The solid-pore distribution pattern plays an important role in soil functioning, being related to the main physical, chemical and biological multiscale and multitemporal processes. In the present research, this pattern is extracted from digital images of three soils (Chernozem, Solonetz and "Chocolate" Clay) and compared in terms of the roughness of the gray-intensity distribution (the measurand), quantified by several measurement techniques. Special attention was paid to the uncertainty of each technique and to the measurement function that best fits the experimental results. Some of the applied techniques are classical in the fractal context (box-counting, rescaled-range and wavelet analyses, etc.), while others have been developed recently by our group. The combination of all these techniques, drawn from fractal geometry, metrology, informatics, probability theory and statistics, is termed here Fractal Metrology (FM). We show the usefulness of FM through a case study of soil physical and chemical degradation, applying the selected toolbox to describe and compare the main structural attributes of three porous media with contrasting structure but similar clay mineralogy dominated by montmorillonites.
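The box-counting technique named above admits a compact implementation: cover the binary image with boxes of increasing size s, count occupied boxes N(s), and read the dimension off the slope of log N(s) versus log s. A minimal sketch (the grid sizes and test patterns are our own choices, not from the paper):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Fractal (box-counting) dimension of a 2-D binary pattern:
    fit log N(s) = -D log s + c over the given box sizes s."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # reduce each s x s block to "occupied or not"
        blocks = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(int(blocks.any(axis=(1, 3)).sum()))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity checks: a filled square is 2-D, a straight line is 1-D.
d_square = box_counting_dimension(np.ones((64, 64), dtype=bool))
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
d_line = box_counting_dimension(line)
```

For a grayscale soil image, the same routine is applied after thresholding the intensity field into a solid/pore mask.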
Fractal Metrology for biogeosystems analysis
NASA Astrophysics Data System (ADS)
Torres-Argüelles, V.; Oleschko, K.; Tarquis, A. M.; Korvin, G.; Gaona, C.; Parrot, J.-F.; Ventura-Ramos, E.
2010-11-01
The solid-pore distribution pattern plays an important role in soil functioning, being related to the main physical, chemical and biological multiscale and multitemporal processes of this complex system. In the present research, we studied the aggregation process as self-organizing and operating near a critical point. The structural pattern is extracted from digital images of three soils (Chernozem, Solonetz and "Chocolate" Clay) and compared in terms of the roughness of the gray-intensity distribution, quantified by several measurement techniques. Special attention was paid to the uncertainty of each technique, measured in terms of standard deviation. Some of the applied methods are classical in the fractal context (box-counting, rescaled-range and wavelet analyses, etc.), while others have been developed recently by our group. The combination of these techniques, drawn from fractal geometry, metrology, informatics, probability theory and statistics, is termed here Fractal Metrology (FM). We show the usefulness of FM for complex systems analysis through a case study of soil physical and chemical degradation, applying the selected toolbox to describe and compare the structural attributes of three porous media with contrasting structure but similar clay mineralogy dominated by montmorillonites.
NASA Astrophysics Data System (ADS)
Mandal, Prantik; Rodkin, Mikhail V.
2011-11-01
We use precisely located aftershocks of the 2001 Mw 7.7 Bhuj earthquake (2001-2009) to explore the structure of the Kachchh seismic zone by mapping the 3-D distributions of b-value, fractal dimension (D) and seismic velocities. From frequency-magnitude analysis, we find that the catalog is complete above Mw = 3.0. Thus, we analyze 2159 aftershocks with Mw ≥ 3.0 to estimate the 3-D distributions of b-value and fractal dimension using maximum-likelihood and spatial correlation-dimension approaches, respectively. Our results show an area of high b-value, D and Vp/Vs ratio at 15-35 km depth in the main rupture zone (MRZ), while relatively low b- and D-values characterize the surrounding rigid regions and the Gedi fault (GF) zone. We propose that stronger material heterogeneity in the vicinity of the MRZ and/or circulation of deep aqueous fluid/volatile CO2 is the main cause of the increased b-value, D and Vp/Vs ratio at 15-35 km depth. Seismic velocity images also show some low-velocity zones continuing into the deep lower crust, supporting the circulation of deep aqueous fluid/volatile CO2 in the region (probably released by the eclogitization of olivine-rich lower crustal rocks). The presence of a number of high- and low-velocity patches further reveals the heterogeneous and fractured nature of the MRZ. Interestingly, we observe that Aki's (1981) relation D = 2b does not hold for the spatial b-D correlation of events in the GF zone (D2 = 1.2b). Events in the MRZ (D2 = 1.7b) show fair agreement with D = 2b, while earthquakes in the remaining parts of the aftershock zone (D2 = 1.95b) correlate strongly with Aki's relationship. Thus, we infer that the remaining parts of the aftershock zone probably behave like locked, un-ruptured zones where larger stresses accumulate. We also propose that deep fluid involvement may play a key role in generating seismic activity in the
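The maximum-likelihood b-value estimate referred to above is the classic Aki (1965) formula, b = log10(e) / (mean(M) − Mc), with a binning correction for real catalogs. A small self-test on a synthetic Gutenberg-Richter catalog (the catalog parameters below are invented for illustration, not the Bhuj data):

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.0):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= m_c.

    dm is the catalog binning width; m_c - dm/2 is the usual correction
    (Utsu) for binned magnitudes.
    """
    m = np.asarray(mags)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2))

# Synthetic Gutenberg-Richter catalog with true b = 1: magnitudes above
# the completeness level m_c are exponential with rate b * ln(10).
rng = np.random.default_rng(1)
mags = 3.0 + rng.exponential(scale=1.0 / np.log(10), size=50000)
b_hat = b_value_mle(mags, m_c=3.0)
```

Under Aki's D = 2b hypothesis, the correlation dimension implied by this estimate would simply be `2 * b_hat`.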
Electromagnetic fields in fractal continua
NASA Astrophysics Data System (ADS)
Balankin, Alexander S.; Mena, Baltasar; Patiño, Julián; Morales, Daniel
2013-04-01
Fractal continuum electrodynamics is developed on the basis of a model of a three-dimensional continuum Φ_D^3 ⊂ E^3 with a fractal metric. Generalized forms of the Maxwell equations are derived employing the local fractional vector calculus related to the Hausdorff derivative. The difference between fractal continuum electrodynamics, based on the fractal metric of continua with Euclidean topology, and electrodynamics in fractional space F^α, which accounts for the fractal topology of a continuum with Euclidean metric, is outlined. Some electromagnetic phenomena in fractal media associated with their fractal time and space metrics are discussed.
NASA Astrophysics Data System (ADS)
Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.
2015-03-01
A fast and accurate scatter imaging technique to differentiate cancerous and healthy breast tissue is introduced in this work. Such a technique would have wide-ranging clinical applications from intra-operative margin assessment to breast cancer screening. Coherent Scatter Computed Tomography (CSCT) has been shown to differentiate cancerous from healthy tissue, but the need to raster scan a pencil beam at a series of angles and slices in order to reconstruct 3D images makes it prohibitively time consuming. In this work we apply the coded aperture coherent scatter spectral imaging technique to reconstruct 3D images of breast tissue samples from experimental data taken without the rotation usually required in CSCT. We present our experimental implementation of coded aperture scatter imaging, the reconstructed images of the breast tissue samples and segmentations of the 3D images in order to identify the cancerous and healthy tissue inside of the samples. We find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside of them. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside of ex vivo samples within a time on the order of a minute.
An improved deconvolution method for X-ray coded imaging in inertial confinement fusion
NASA Astrophysics Data System (ADS)
Zhao, Zong-Qing; He, Wei-Hua; Wang, Jian; Hao, Yi-Dan; Cao, Lei-Feng; Gu, Yu-Qiu; Zhang, Bao-Han
2013-10-01
In inertial confinement fusion (ICF), X-ray coded imaging is considered the most promising means of diagnosing the compressed core. The traditional Richardson-Lucy (RL) method has a strong ability to deblur images whose noise follows a Poisson distribution. However, it suffers from over-fitting and noise amplification, especially when the signal-to-noise ratio of the image is relatively low. In this paper, we propose an improved deconvolution method for X-ray coded imaging. We model the image data as a set of independent Gaussian distributions and derive the iterative solution under a maximum-likelihood scheme. Experimental results on X-ray coded imaging data demonstrate that this method is superior to the RL method in terms of resistance to over-fitting and noise suppression.
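For reference, the traditional Richardson-Lucy iteration that the paper improves upon can be written compactly (shown here in 1-D with circular boundaries; the toy PSF and object are our own illustration, not ICF data):

```python
import numpy as np

def richardson_lucy(d, psf, n_iter=300):
    """Classic RL deconvolution for Poisson data: multiplicative updates
    x <- x * A^T(d / (A x)), where A is circular convolution with psf
    (psf assumed normalized to sum 1, so A^T 1 = 1)."""
    conv = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
    psf_adj = np.roll(psf[::-1], 1)          # kernel of the adjoint operator
    x = np.full_like(d, d.mean())            # flat, positive initial guess
    for _ in range(n_iter):
        ratio = d / np.maximum(conv(x, psf), 1e-12)
        x = x * conv(ratio, psf_adj)
    return x

# Noiseless toy problem: two spikes on a baseline, blurred by a short kernel.
n = 64
psf = np.zeros(n)
psf[[0, 1, n - 1]] = [0.5, 0.25, 0.25]
x_true = np.full(n, 0.1)
x_true[20] = 5.0
x_true[40] = 3.0
d = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(psf)))
x_est = richardson_lucy(d, psf)
```

The multiplicative form keeps the estimate nonnegative, but with noisy data the same iteration eventually amplifies noise, which is the failure mode the paper's Gaussian-model variant targets.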
Research on spatial coding compressive spectral imaging and its applicability for rural survey
NASA Astrophysics Data System (ADS)
Chen, Yuheng; Ji, Yiqun; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin
Compressive spectral imaging combines traditional spectral imaging with the new concept of compressive sensing, offering advantages such as a reduced volume of acquired data, snapshot imaging over a large field of view, and increased image signal-to-noise ratio; its effectiveness has been explored in early applications such as high-speed imaging and fluorescence imaging. In this paper, the application potential of the spatial-coding compressive spectral imaging technique for rural survey is examined. A physical model of spatial-coding compressive spectral imaging is built, its data flow is analyzed, and its data reconstruction problem is formulated. Existing sparse reconstruction methods are reviewed, and a module based on the two-step iterative shrinkage/thresholding (TwIST) algorithm is built to perform the image data reconstruction. A simulated imaging experiment based on AVIRIS visible-band data of a selected rural scene is carried out. The spatial identification and spectral feature-extraction capability for different ground species is evaluated by visual inspection of both single-band images and spectral curves. Data fidelity metrics (RMSE and PSNR) are used to verify the fidelity of the reconstructed data quantitatively. The application potential of spatial-coding compressive spectral imaging for rural survey, crop monitoring, vegetation inspection and further agricultural needs is verified.
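The shrinkage/thresholding reconstruction step can be illustrated with the plain one-step iterative shrinkage/thresholding algorithm (ISTA); the TwIST variant used in the paper adds a two-step acceleration on top of the same soft-threshold operator. The sensing matrix and sparse signal below are synthetic stand-ins, not AVIRIS data:

```python
import numpy as np

def ista(A, y, lam=0.01, step=None, n_iter=500):
    """Iterative shrinkage/thresholding for min ||Ax - y||^2 / 2 + lam ||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)             # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)  # soft threshold
    return x

# Toy compressive measurement: recover a 3-sparse vector from 40 of 100 samples.
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -0.8, 0.6]
y = A @ x_true
x_hat = ista(A, y, lam=0.005, n_iter=2000)
```

In the spectral-imaging setting, `A` would be the sensing matrix induced by the spatial coding mask and `x` a sparse representation of the datacube.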
Investigation of Coded Source Neutron Imaging at the North Carolina State University PULSTAR Reactor
Xiao, Ziyu; Mishra, Kaushal; Hawari, Ayman; Bingham, Philip R; Bilheux, Hassina Z; Tobin Jr, Kenneth William
2010-10-01
A neutron imaging facility is located on beam tube #5 of the 1-MWth PULSTAR reactor at North Carolina State University. An investigation has been initiated to explore the application of coded imaging techniques at the facility. Coded imaging uses a mosaic of pinholes to encode an aperture, thus generating an encoded image of the object at the detector. To reconstruct the image recorded by the detector, corresponding decoding patterns are used. The optimized design of coded masks is critical for the performance of this technique and depends on the characteristics of the imaging beam. In this work, Monte Carlo (MCNP) simulations were utilized to explore the modifications to the PULSTAR thermal neutron beam needed to support coded imaging techniques. In addition, an assessment of coded mask design has been performed. The simulations indicated that a 12-inch single-crystal sapphire filter is suited to such an application at the PULSTAR beam in terms of maximizing flux with a good neutron-to-gamma ratio. Computational simulations demonstrate the feasibility of correlation reconstruction methods for neutron transmission imaging. A gadolinium aperture with a thickness of 500 μm was used to construct the mask using a 38 × 34 URA pattern. A test experiment using this URA design has been conducted and the point spread function of the system has been measured.
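The correlation-reconstruction idea behind URA masks can be demonstrated in one dimension: a quadratic-residue (Legendre) mask of prime length p ≡ 3 (mod 4) has the property that cyclic correlation with the decoding pattern g = 2a − 1 is a perfect delta, so the encoded detector image decodes exactly. This 1-D sketch is our own illustration of the principle; the facility's actual mask is a 2-D 38 × 34 URA.

```python
import numpy as np

def legendre_mask(p):
    """Binary aperture from quadratic residues mod a prime p = 3 (mod 4);
    position 0 is open, position i is open iff i is a QR mod p."""
    qr = {(i * i) % p for i in range(1, p)}
    return np.array([1] + [1 if i in qr else 0 for i in range(1, p)])

def encode(obj, a):
    """Detector image: cyclic convolution of the object with the mask."""
    p = len(a)
    return np.array([sum(obj[y] * a[(j - y) % p] for y in range(p))
                     for j in range(p)])

def decode(det, a):
    """Cyclic correlation with g = 2a - 1, normalized by the number of
    open mask elements, (p + 1) / 2."""
    g = 2 * a - 1
    p = len(a)
    est = np.array([sum(det[(x + t) % p] * g[t] for t in range(p))
                    for x in range(p)])
    return est / ((p + 1) // 2)

a = legendre_mask(7)                        # open positions: 0, 1, 2, 4
obj = np.array([0.0, 2.0, 0.0, 0.0, 1.0, 0.0, 0.0])
recovered = decode(encode(obj, a), a)
```

The flat off-peak correlation of the Legendre sequence is what gives URA systems their artifact-free point spread function.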
Turbo codes-based image transmission for channels with multiple types of distortion.
Yao, Lei; Cao, Lei
2008-11-01
Product codes are generally used for progressive image transmission when random errors and packet loss (or burst errors) co-exist. However, optimal rate allocation considering both component codes entails high optimization complexity. In addition, decoding performance may degrade quickly when the channel varies beyond the design point. In this paper, we propose a new unequal error protection (UEP) scheme for progressive image transmission using only rate-compatible punctured turbo codes (RCPT) and cyclic redundancy check (CRC) codes. By carefully interleaving each coded frame, packet loss can be converted into randomly punctured bits in a turbo code. Therefore, error control in noisy channels with different types of errors reduces to dealing with random bit errors only, at reduced turbo code rates. A genetic-algorithm-based method is presented to further reduce the optimization complexity. The proposed method not only outperforms product codes under the given channel conditions but is also more robust to channel variation. Finally, to break the error floor of turbo decoding, we extend the RCPT/CRC protection to a product code scheme by adding a Reed-Solomon (RS) code across the frames. The associated rate allocation is discussed and further improvement is demonstrated. PMID:18854248
NASA Astrophysics Data System (ADS)
Yang, Shuyu; Zamora, Gilberto; Wilson, Mark; Mitra, Sunanda
2000-06-01
Existing lossless coding models yield only up to 3:1 compression. However, a much higher lossless compression can be achieved for certain medical images when the images are segmented prior to applying an integer-to-integer wavelet transform and lossless coding. The methodology used in this work is to apply a contour detection scheme to segment the image first. The segmented image is then wavelet transformed with an integer-to-integer mapping to obtain a lower weighted entropy than the original. An adaptive arithmetic model is then applied to code the transformed image losslessly. For the male Visible Human color image set, the overall average lossless compression using the above scheme is around 10:1, whereas the compression ratio of an individual slice can be as high as 16:1. The achievable compression ratio depends on the actual bit rate of the segmented images attained by lossless coding as well as the compression obtainable from segmentation alone. The computational time required by the entire process is short enough for application to large medical images.
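The integer-to-integer wavelet step can be illustrated with its simplest member, the Haar (S) transform implemented by lifting: integer averages and differences that are exactly invertible, which is what makes the pipeline lossless. A minimal sketch (one decomposition level, even-length signals; not the specific transform used in the paper):

```python
def haar_int_forward(x):
    """One level of the integer-to-integer Haar (S) transform via lifting.

    Maps integer pairs (a, b) to (floor((a + b) / 2), a - b); exactly
    invertible in integer arithmetic.
    """
    s, d = [], []
    for a, b in zip(x[::2], x[1::2]):
        diff = a - b
        avg = b + (diff >> 1)      # floor((a + b) / 2) without overflow
        s.append(avg)
        d.append(diff)
    return s, d

def haar_int_inverse(s, d):
    """Exact inverse of haar_int_forward."""
    x = []
    for avg, diff in zip(s, d):
        b = avg - (diff >> 1)
        a = b + diff
        x += [a, b]
    return x

x = [5, 3, -2, 7, 100, 101]
s, d = haar_int_forward(x)
```

The difference channel `d` concentrates near zero for smooth data, which is where the subsequent adaptive arithmetic coder gains its compression.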
Fractal frontiers in cardiovascular magnetic resonance: towards clinical implementation.
Captur, Gabriella; Karperien, Audrey L; Li, Chunming; Zemrak, Filip; Tobon-Gomez, Catalina; Gao, Xuexin; Bluemke, David A; Elliott, Perry M; Petersen, Steffen E; Moon, James C
2015-01-01
Many of the structures and parameters that are detected, measured and reported in cardiovascular magnetic resonance (CMR) have at least some properties that are fractal, meaning complex and self-similar at different scales. To date, however, there has been little use of fractal geometry in CMR; by comparison, many more applications of fractal analysis have been published in MR imaging of the brain. This review explains the fundamental principles of fractal geometry, places the fractal dimension into a meaningful context within the realms of Euclidean and topological space, and defines its role in digital image processing. It summarises the basic mathematics, highlights strengths and potential limitations of its application to biomedical imaging, shows key current examples and suggests a simple route for its successful clinical implementation by the CMR community. By simplifying some of the more abstract concepts of deterministic fractals, this review invites CMR scientists (clinicians, technologists, physicists) to experiment with fractal analysis as a means of developing the next generation of intelligent quantitative cardiac imaging tools. PMID:26346700
ERIC Educational Resources Information Center
Clark, Garry
1999-01-01
Reports on a mathematical investigation of fractals and highlights the thinking involved, problem solving strategies used, generalizing skills required, the role of technology, and the role of mathematics. (ASK)
Persistence intervals of fractals
NASA Astrophysics Data System (ADS)
Máté, Gabriell; Heermann, Dieter W.
2014-07-01
Objects and structures presenting fractal-like behavior are abundant in the world surrounding us. Fractal theory provides a wealth of tools for analyzing the scaling properties of these objects. We would like to contribute to the field by analyzing and applying a particular case of the theory behind the PH dimension, a concept introduced by MacPherson and Schweinhart, to seek an intuitive explanation for the relation between this dimension and the fractality of certain objects. The approach is based on recently elaborated computational topology methods, and it proves very useful for investigating scaling hidden in dimensions lower than the "native" dimension in which the investigated object is embedded. We demonstrate the applicability of the method with two examples: the Sierpinski gasket, a traditional fractal, and a two-dimensional object composed of short segments arranged according to a circular structure.
Fractal functions and multiwavelets
Massopust, P.R.
1997-04-01
This paper reviews how elements from the theory of fractal functions are employed to construct scaling vectors and multiwavelets. Emphasis is placed on the one-dimensional case; however, extensions to R^m are indicated.
ERIC Educational Resources Information Center
Bannon, Thomas J.
1991-01-01
Discussed are several different transformations based on the generation of fractals, including self-similar designs, the chaos game, the Koch curve, and the Sierpinski triangle. Three computer programs which illustrate these concepts are provided. (CW)
Restoring wavefront coded iris image through the optical parameter and regularization filter
NASA Astrophysics Data System (ADS)
Li, Yingjiao; He, Yuqing; Feng, Guangqin; Hou, Yushi; Liu, Yong
2011-11-01
Wavefront coding technology can extend the depth of field of an iris imaging system, but the iris image obtained through such a system is coded and blurred and cannot be used directly by the recognition algorithm. This paper presents a restoration method for blurred iris images from a wavefront coding system; after restoration, the images can be used for subsequent processing. Firstly, the wavefront coded imaging system is simulated and its optical parameters are analyzed; from the simulation we obtain the system's point spread function (PSF). Secondly, blind restoration using the blurred iris image and the PSF estimates an appropriate PSFe. Finally, based on the estimated PSFe, a regularization filter is applied to the blurred image. Experimental results show that the proposed method is simple and fast. Compared with the traditional restoration algorithms of Wiener filtering and Lucy-Richardson filtering, the image recovered by regularization filtering is the most similar to the original iris image.
A Contourlet-Based Embedded Image Coding Scheme on Low Bit-Rate
NASA Astrophysics Data System (ADS)
Song, Haohao; Yu, Songyu
The contourlet transform (CT) is a new image representation method which can efficiently represent contours and textures in images. However, CT is an overcomplete transform with a redundancy factor of 4/3. If it is applied to image compression straightforwardly, the encoding bit rate may increase to meet a given distortion, a fact that has hindered the development of CT-based image compression techniques with satisfactory performance. In this paper, we analyze the distribution of significant contourlet coefficients in different subbands and propose a new contourlet-based embedded image coding (CEIC) scheme for low bit rates. Well-known wavelet-based embedded image coding (WEIC) algorithms such as EZW, SPIHT and SPECK can be integrated easily into the proposed scheme by constructing a virtual low-frequency subband, modifying the coding framework of the WEIC algorithms according to the structure of the contourlet coefficients, and adopting a high-efficiency significant-coefficient scanning scheme. The proposed CEIC scheme provides an embedded bit-stream, which is desirable in heterogeneous networks. Our experiments demonstrate that the proposed scheme achieves better compression performance at low bit rates. Furthermore, thanks to the contourlet transform, more contours and textures are preserved in the coded images, ensuring superior subjective quality.
NASA Astrophysics Data System (ADS)
Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan
2015-10-01
In this paper, we propose a high-performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens generated by Alice serve as the secret key, shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. The measurement results from the bucket detector of the GI optical system constitute the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using the GI and CS techniques, and further recovers the information by QR decoding. Experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. With the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
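The computational-GI decryption step, before any compressive-sensing refinement, is an ensemble correlation between the bucket values and the known intensity patterns produced by the phase screens. A toy sketch, with random intensity patterns standing in for the speckle fields (the pattern count and the object are invented for illustration):

```python
import numpy as np

def ghost_image(patterns, bucket):
    """Reconstruct an object from computational GI measurements by
    ensemble correlation: G(x) = <B I(x)> - <B><I(x)>."""
    B = bucket - bucket.mean()
    return np.tensordot(B, patterns - patterns.mean(axis=0), axes=1) / len(B)

# Toy run: the bucket detector integrates the pattern over the object,
# B_i = sum(I_i * T) for a transmissive object T.
rng = np.random.default_rng(3)
obj = np.zeros((8, 8))
obj[2:5, 3:6] = 1.0                       # transmissive region
patterns = rng.random((20000, 8, 8))
bucket = np.tensordot(patterns, obj, axes=2)
g = ghost_image(patterns, bucket)
```

With a QR-coded object, the recovered `g` would then be binarized and passed to a standard QR decoder; CS reconstruction replaces the plain correlation when far fewer measurements are available.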
NASA Technical Reports Server (NTRS)
Hardman, R. R.; Mahan, J. R.; Smith, M. H.; Gelhausen, P. A.; Van Dalsem, W. R.
1991-01-01
The need for a validation technique for computational fluid dynamics (CFD) codes in STOVL applications has led to research efforts to apply infrared thermal imaging techniques to visualize gaseous flow fields. Specifically, a heated, free-jet test facility was constructed. The gaseous flow field of the jet exhaust was characterized using an infrared imaging technique in the 2 to 5.6 micron wavelength band as well as conventional pitot tube and thermocouple methods. These infrared images are compared to computer-generated images using the equations of radiative exchange based on the temperature distribution in the jet exhaust measured with the thermocouple traverses. Temperature and velocity measurement techniques, infrared imaging, and the computer model of the infrared imaging technique are presented and discussed. From the study, it is concluded that infrared imaging techniques coupled with the radiative exchange equations applied to CFD models are a valid method to qualitatively verify CFD codes used in STOVL applications.
Medical Image Compression Using a New Subband Coding Method
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug
1995-01-01
A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.
NASA Astrophysics Data System (ADS)
Hou, Ying
2009-10-01
In this paper, a hyperspectral image lossy coder using the three-dimensional Embedded ZeroBlock Coding (3D EZBC) algorithm based on the Karhunen-Loève transform (KLT) and the wavelet transform (WT) is proposed. This coding scheme adopts a 1D KLT as the spectral decorrelator and a 2D WT as the spatial decorrelator. Furthermore, the computational complexity and coding performance of the low-complexity KLT are compared and evaluated. In comparison with several state-of-the-art coding algorithms, experimental results indicate that our coder achieves better lossy compression performance.
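The 1D spectral KLT amounts to projecting each pixel's spectrum onto the eigenvectors of the inter-band covariance matrix, after which the bands are uncorrelated and the subsequent 2D WT + EZBC stage can treat them as nearly independent planes. A minimal sketch (the cube dimensions and synthetic data are arbitrary):

```python
import numpy as np

def spectral_klt(cube):
    """Forward 1-D KLT along the spectral axis of a (bands, h, w) cube.

    Returns the decorrelated cube and the orthonormal basis (rows =
    principal directions); the inverse is basis.T @ coefficients.
    """
    bands, h, w = cube.shape
    X = cube.reshape(bands, -1)
    X = X - X.mean(axis=1, keepdims=True)        # center each band
    cov = X @ X.T / X.shape[1]                   # inter-band covariance
    _, vecs = np.linalg.eigh(cov)
    basis = vecs[:, ::-1].T                      # descending eigenvalue order
    return (basis @ X).reshape(bands, h, w), basis

# Correlated synthetic cube: cumulative sum along bands correlates them.
rng = np.random.default_rng(4)
cube = np.cumsum(rng.standard_normal((8, 16, 16)), axis=0)
out, basis = spectral_klt(cube)
Y = out.reshape(8, -1)
cov_y = Y @ Y.T / Y.shape[1]                     # should be diagonal
```

A "low-complexity KLT" as evaluated in the paper would approximate `cov` from a subsample of pixels rather than the full cube.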
Finite-state residual vector quantizer for image coding
NASA Astrophysics Data System (ADS)
Huang, Steve S.; Wang, Jia-Shung
1993-10-01
Finite-state vector quantization (FSVQ) has been proven in recent years to be a high-quality, low-bit-rate coding scheme. An FSVQ achieves the efficiency of a small-codebook (state codebook) VQ while maintaining the quality of a large-codebook (master codebook) VQ. However, the large master codebook becomes a primary limitation of FSVQ when implementation is taken into account: a large amount of memory is required to store the master codebook, and much effort is spent maintaining the state codebooks if the master codebook grows too large. This problem can be partially solved by the mean/residual technique (MRVQ), in which block means and residual vectors are coded separately. A new hybrid coding scheme, finite-state residual vector quantization (FSRVQ), is proposed in this paper to exploit the advantages of both FSVQ and MRVQ. The codewords in FSRVQ are designed by removing the block means so as to reduce the codebook size. The block means are predicted from neighboring blocks to reduce the bit rate, and the predicted means are added to the residual vectors so that the state codebooks can be generated entirely. Experimental results indicate that FSRVQ uniformly outperforms both ordinary FSVQ and MRVQ.
Context-based lossless image compression with optimal codes for discretized Laplacian distributions
NASA Astrophysics Data System (ADS)
Giurcaneanu, Ciprian Doru; Tabus, Ioan; Stanciu, Cosmin
2003-05-01
Lossless image compression has become an important research topic, especially in relation to the JPEG-LS standard. Recently, techniques known for designing optimal codes for sources with infinite alphabets have been applied to quantized Laplacian sources, whose probability mass functions have two geometrically decaying tails. Due to the simple parametric model of the source distribution, the Huffman iterations can be carried out analytically using the concept of a reduced source, and the final codes are obtained by a sequence of very simple arithmetic operations, avoiding the need to store coding tables. We propose the use of these (optimal) codes in conjunction with context-based prediction for lossless compression of images. To further reduce the average code length, we design escape sequences to be employed when the estimate of the distribution parameter is unreliable. Results on standard test files show improvements in compression ratio compared with JPEG-LS.
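The flavor of such table-free codes can be seen in the closely related Golomb-Rice family used by JPEG-LS: a signed Laplacian-like residual is first mapped to a nonnegative index, then coded as a unary quotient plus k fixed remainder bits, all with trivial arithmetic and no stored tables. (This sketch shows the Rice special case, not the paper's exact Huffman construction.)

```python
def zigzag(e):
    """Map a signed residual to a nonnegative index: 0,-1,1,-2,2,... -> 0,1,2,3,4,..."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(n, k):
    """Golomb-Rice codeword for n >= 0: unary quotient, '0' stop, k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits, k):
    """Inverse of rice_encode for a single codeword."""
    q = bits.index("0")                      # unary quotient length
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

In a full coder, k is chosen per context from running statistics of the residual magnitudes, which is exactly where an unreliable parameter estimate motivates the paper's escape sequences.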
Rheological and fractal hydrodynamics of aerobic granules.
Tijani, H I; Abdullah, N; Yuzir, A; Ujang, Zaini
2015-06-01
The structural and hydrodynamic features of granules were characterized using settling experiments, predefined mathematical simulations and ImageJ particle analyses. This study describes the rheological characterization of these biologically immobilized aggregates under non-Newtonian flows. The second-order dimension, D2 = 1.795 for native clusters and D2 = 1.099 for dewatered clusters, and a characteristic three-dimensional fractal dimension of 2.46 indicate that these relatively porous and differentially permeable fractals have a structural configuration close to that described for a compact sphere formed via cluster-cluster aggregation. The three-dimensional fractal dimension calculated via the settling-fractal correlation U ∝ l^D to characterize immobilized granules validates the quantitative measurements used to describe their structural integrity and aggregate complexity. These results suggest that scaling relationships based on fractal geometry are vital for quantifying the effects of different laminar conditions on the aggregates' morphology and characteristics such as density, porosity, and projected surface area. PMID:25836036
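In practice, the settling-fractal correlation U ∝ l^D reduces to estimating a power-law exponent by linear regression in log-log coordinates. A minimal sketch with synthetic data generated at the paper's D = 2.46 (the sizes, prefactor and units are invented for illustration):

```python
import numpy as np

def powerlaw_exponent(l, u):
    """Estimate D in u = c * l**D: slope of log u against log l."""
    D, _ = np.polyfit(np.log(l), np.log(u), 1)
    return D

l = np.array([10.0, 20.0, 40.0, 80.0, 160.0])   # aggregate sizes (arbitrary units)
u = 0.5 * l ** 2.46                              # settling velocities, exact power law
D_hat = powerlaw_exponent(l, u)
```

With measured settling data the fit is noisy, and the regression's confidence interval on the slope gives the uncertainty on D.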
Imaging flow cytometer using computation and spatially coded filter
NASA Astrophysics Data System (ADS)
Han, Yuanyuan; Lo, Yu-Hwa
2016-03-01
Flow cytometry analyzes multiple physical characteristics of a large population of single cells as the cells flow in a fluid stream through an excitation light beam. Flow cytometers measure fluorescence and light scattering, from which information about the biological and physical properties of individual cells is obtained. Although flow cytometers have massive statistical power owing to their single-cell resolution and high throughput, they provide no information about cell morphology or the spatial resolution offered by microscopy, a much-wanted feature missing from almost all flow cytometers. In this paper, we introduce a method of spatial-temporal transformation to provide flow cytometers with cell imaging capabilities. The method uses mathematical algorithms and a specially designed spatial filter as the only hardware needed to give flow cytometers imaging capabilities. Instead of the CCDs or megapixel cameras found in imaging systems, we obtain high-quality images of fast-moving cells in a flow cytometer using photomultiplier tube (PMT) detectors, thus attaining high throughput in a manner fully compatible with existing cytometers. In fact, our approach can be applied to retrofit traditional flow cytometers into imaging flow cytometers at minimal cost. As a proof of concept, we demonstrate imaging of cells travelling at a velocity of 0.2 m/s in a microfluidic channel, corresponding to a throughput of approximately 1,000 cells per second.
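A one-dimensional toy of the spatial-temporal transformation idea, under stated assumptions: the profile and slit-mask values are invented, and the authors' actual filter design and algorithms are certainly more elaborate. A cell sweeps past a known mask at constant speed, a single PMT integrates the transmitted light, and the profile is recovered by inverting the resulting convolution:

```python
import numpy as np

# The PMT time trace is the convolution of the moving intensity profile
# with the known slit mask (1 = open slit).
profile = np.array([0, 1, 3, 2, 0, 4, 1, 0], dtype=float)
mask = np.array([1, 0, 1, 1], dtype=float)
pmt_trace = np.convolve(profile, mask)

# The convolution matrix is lower-banded with unit diagonal (mask[0] = 1),
# so least squares recovers the spatial profile exactly from the time trace.
n, m = len(profile), len(mask)
A = np.zeros((n + m - 1, n))
for i in range(n):
    A[i:i + m, i] = mask
recovered, *_ = np.linalg.lstsq(A, pmt_trace, rcond=None)
```

The constant flow speed is what maps time samples onto spatial positions, which is why no camera is needed.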
Fractal characterization of wear-erosion surfaces
Rawers, J.; Tylczak, J.
1999-12-01
Wear erosion is a complex phenomenon resulting in highly distorted and deformed surface morphologies. Most wear surface features have been described only qualitatively. In this study, wear surface features were quantified using fractal analysis. The ability to assign numerical values to wear-erosion surfaces makes possible mathematical expressions that will enable wear mechanisms to be predicted and understood. Surface characterization came from wear-erosion experiments that included varying the erosive materials, the impact velocity, and the impact angle. Seven fractal analytical techniques were applied to micrograph images of wear-erosion surfaces; Fourier analysis was the most promising. The fractal values obtained were consistent with visual observations and provided a unique wear-erosion parameter unrelated to wear rate. In this study, stainless steel was evaluated as a function of wear-erosion conditions.
``the Human BRAIN & Fractal quantum mechanics''
NASA Astrophysics Data System (ADS)
Rosary-Oyong, Se, Glory
In mtDNA, as retrieved from Iman Tuassoly et al., "Multifractal analysis of chaos game representation images of mtDNA". Enhances the price and value tales of HE Prof. Dr.-Ing. B.J. Habibie's N-219; in J. Bacteriology, Nov 1973: "219 exist as separate plasmid DNA species in E. coli & Salmonella panama", related to "the brain's 2 distinct molecular forms of the (Na,K)-ATPase" and "neuron maintains different concentration of ions (charged atoms)" through the Rabi and Heisenberg Hamiltonian. Further, after "fractal space-time as geometric analogue of relativistic quantum mechanics" [Ord], see L. Marek-Crnjac, "Chaotic fractals at the root of relativistic quantum physics", and, from Nottale, "Scale relativity and fractal space-time: application to quantum physics, cosmology and chaotic systems", 1995. Acknowledgements to HE Mr. H. Tuk Setyohadi, Jl. Sriwijaya Raya 3, South Jakarta, INDONESIA.
Milovanovic, Petar; Djuric, Marija; Rakocevic, Zlatko
2012-01-01
There is an increasing interest in bone nano-structure, the ultimate goal being to reveal the basis of age-related bone fragility. In this study, power spectral density (PSD) data and fractal dimensions of the mineralized bone matrix were extracted from atomic force microscope topography images of the femoral neck trabeculae. The aim was to evaluate age-dependent differences in the mineralized matrix of human bone and to consider whether these advanced nano-descriptors might be linked to decreased bone remodeling observed by some authors and age-related decline in bone mechanical competence. The investigated bone specimens belonged to a group of young adult women (n = 5, age: 20–40 years) and a group of elderly women (n = 5, age: 70–95 years) without bone diseases. PSD graphs showed the roughness density distribution in relation to spatial frequency. In all cases, there was a fairly linear decrease in magnitude of the power spectra with increasing spatial frequencies. The PSD slope was steeper in elderly individuals (−2.374 vs. −2.066), suggesting the dominance of larger surface morphological features. Fractal dimension of the mineralized bone matrix showed a significant negative trend with advanced age, declining from 2.467 in young individuals to 2.313 in the elderly (r = 0.65, P = 0.04). Higher fractal dimension in young women reflects domination of smaller mineral grains, which is compatible with the more freshly remodeled structure. In contrast, the surface patterns in elderly individuals were indicative of older tissue age. Lower roughness and reduced structural complexity (decreased fractal dimension) of the interfibrillar bone matrix in the elderly suggest a decline in bone toughness, which explains why aged bone is more brittle and prone to fractures. PMID:22946475
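The PSD-slope analysis above can be sketched in one dimension. This is a hedged toy, not the study's AFM pipeline: a synthetic rough profile with a prescribed spectral exponent β is generated, β is re-estimated from a log-log fit, and a fractal dimension is read off under the fractional-Brownian-motion convention D = (5 − β)/2 for a 1-D profile (conventions for 2-D surfaces differ):

```python
import numpy as np

# Synthesize a 1-D rough profile with PSD ~ f^(-beta), beta = 2.
rng = np.random.default_rng(1)
beta, n = 2.0, 4096
freqs = np.fft.rfftfreq(n)
amplitude = np.zeros_like(freqs)
amplitude[1:] = freqs[1:] ** (-beta / 2)
spectrum = amplitude * np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size))
profile = np.fft.irfft(spectrum, n)

# Slope of the log-log PSD (DC and Nyquist bins excluded) recovers -beta.
psd = np.abs(np.fft.rfft(profile)) ** 2
slope, _ = np.polyfit(np.log(freqs[1:-1]), np.log(psd[1:-1]), 1)

# fBm convention for a 1-D profile: steeper spectrum -> smoother, lower D.
fractal_dim = (5 + slope) / 2
```

The same steeper-slope/lower-dimension relation is what the abstract reports for elderly versus young bone matrix.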
Image gathering and coding for digital restoration: Information efficiency and visual quality
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; John, Sarah; Mccormick, Judith A.; Narayanswamy, Ramkumar
1989-01-01
Image gathering and coding are commonly treated as tasks separate from each other and from the digital processing used to restore and enhance the images. The goal is to develop a method that allows us to assess quantitatively the combined performance of image gathering and coding for the digital restoration of images with high visual quality. Digital restoration is often interactive because visual quality depends on perceptual rather than mathematical considerations, and these considerations vary with the target, the application, and the observer. The approach is based on the theoretical treatment of image gathering as a communication channel (J. Opt. Soc. Am. A 2, 1644 (1985); 5, 285 (1988)). Initial results suggest that the practical upper limit of the information contained in the acquired image data ranges typically from approximately 2 to 4 binary information units (bifs) per sample, depending on the design of the image-gathering system. The associated information efficiency of the transmitted data (i.e., the ratio of information over data) ranges typically from approximately 0.3 to 0.5 bif per bit without coding to approximately 0.5 to 0.9 bif per bit with lossless predictive compression and Huffman coding. The visual quality that can be attained with interactive image restoration improves perceptibly as the available information increases to approximately 3 bifs per sample. However, the perceptual improvements that can be attained with further increases in information are very subtle and depend on the target and the desired enhancement.
Self-images in the video monitor coded by monkey intraparietal neurons.
Iriki, A; Tanaka, M; Obayashi, S; Iwamura, Y
2001-06-01
When playing a video game, or using a teleoperator system, we feel our self-image projected into the video monitor as a part of or an extension of ourselves. Here we show that such a self image is coded by bimodal (somatosensory and visual) neurons in the monkey intraparietal cortex, which have visual receptive fields (RFs) encompassing their somatosensory RFs. We earlier showed these neurons to code the schema of the hand which can be altered in accordance with psychological modification of the body image; that is, when the monkey used a rake as a tool to extend its reach, the visual RFs of these neurons elongated along the axis of the tool, as if the monkey's self image extended to the end of the tool. In the present experiment, we trained monkeys to recognize their image in a video monitor (despite the earlier general belief that monkeys are not capable of doing so), and demonstrated that the visual RF of these bimodal neurons was now projected onto the video screen so as to code the image of the hand as an extension of the self. Further, the coding of the imaged hand could intentionally be altered to match the image artificially modified in the monitor. PMID:11377755
A source-channel coding approach to digital image protection and self-recovery.
Sarreshtedari, Saeed; Akhaee, Mohammad Ali
2015-07-01
Watermarking algorithms have been widely applied to the field of image forensics recently. One such forensic application is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering the lost reference bits still stands. This paper aims to show that, with the tampering location known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate channel code design can protect the reference bits against tampering. In the proposed method, the total watermark bit-budget is dedicated to three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, erasure locations detected by the check bits help the channel erasure decoder retrieve the original source-encoded image. Experimental results show that our proposed scheme significantly outperforms recent techniques in terms of image quality for both the watermarked and the recovered image. The watermarked image quality gain is achieved by spending less bit-budget on the watermark, while image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes. PMID:25807568
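The tampering-as-erasure idea can be shown in miniature. This is a hedged stand-in, not the paper's source/channel code design: a single XOR parity block plays the role of the channel code, the tamper location is assumed known (in the paper, from the check bits), and the byte values are invented:

```python
from functools import reduce

def xor_blocks(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def add_parity(reference_blocks):
    # One XOR parity block protects against any single erased block.
    return reference_blocks + [xor_blocks(reference_blocks)]

def recover(blocks, erased_index):
    # Tampering modeled as an erasure at a known location: XOR of the
    # surviving blocks rebuilds the missing one.
    survivors = [b for i, b in enumerate(blocks) if i != erased_index]
    return xor_blocks(survivors)

reference = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]   # toy reference bits
coded = add_parity(reference)
coded[1] = None                                        # tampered block
restored = recover(coded, 1)
```

Replacing the parity block with a stronger erasure code (e.g. Reed-Solomon) extends recovery to multiple tampered zones, which is the direction the paper's channel-code design takes.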
Yang, Chunwei; Liu, Huaping; Wang, Shicheng; Liao, Shouyi
2016-01-01
Scene-level geographic image classification has been a very challenging problem and has become a research focus in recent years. This paper develops a supervised collaborative kernel coding method based on a covariance descriptor (covd) for scene-level geographic image classification. First, covd is introduced in the feature extraction process and, then, is transformed to a Euclidean feature by a supervised collaborative kernel coding model. Furthermore, we develop an iterative optimization framework to solve this model. Comprehensive evaluations on public high-resolution aerial image dataset and comparisons with state-of-the-art methods show the superiority and effectiveness of our approach. PMID:26999150
Fractal characteristics for binary noise radar waveform
NASA Astrophysics Data System (ADS)
Li, Bing C.
2016-05-01
Noise radars have many advantages over conventional radars and have received great attention recently. The performance of a noise radar is determined by its waveforms, so investigating the characteristics of noise radar waveforms has significant value for evaluating noise radar performance. In this paper, we use binomial distribution theory to analyze the general characteristics of binary phase coded (BPC) noise waveforms. Focusing on the aperiodic autocorrelation function, we demonstrate that the probability distribution of a sidelobe of a BPC noise waveform depends on its distance to the mainlobe: the closer a sidelobe is to the mainlobe, the higher the probability that it is the maximum sidelobe. We also develop a Monte Carlo framework to explore characteristics that are difficult to investigate analytically. Through Monte Carlo experiments, we reveal the fractal relationship between the code length and the maximum sidelobe value for BPC waveforms, and propose using the fractal dimension to measure noise waveform performance.
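A minimal version of the Monte Carlo framework described above, assuming nothing beyond the abstract: random ±1 (BPC) codes, the aperiodic autocorrelation, and the peak off-mainlobe sidelobe. The trial count and lengths are arbitrary choices.

```python
import random

def max_sidelobe(code):
    # Peak magnitude of the aperiodic autocorrelation over nonzero lags.
    n = len(code)
    return max(abs(sum(code[i] * code[i + k] for i in range(n - k)))
               for k in range(1, n))

def mean_max_sidelobe(n, trials=200, seed=0):
    # Monte Carlo over random +/-1 (BPC) waveforms of length n.
    rng = random.Random(seed)
    acc = 0
    for _ in range(trials):
        code = [rng.choice((-1, 1)) for _ in range(n)]
        acc += max_sidelobe(code)
    return acc / trials
```

Sweeping n and fitting mean_max_sidelobe(n) on log-log axes is how a power-law (fractal) relationship between code length and maximum sidelobe would be exposed.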
ITU-T T.851: an enhanced entropy coding design for JPEG baseline images
NASA Astrophysics Data System (ADS)
Mitchell, Joan L.; Hinds, Arianne T.
2008-08-01
The Joint Photographic Experts Group (JPEG) baseline standard remains a popular and pervasive standard for continuous-tone, still image coding. The "J" in JPEG acknowledges its two main parent organizations, ISO (International Organization for Standardization) and the ITU-T (International Telecommunications Union - Telecommunication). Notwithstanding their joint efforts, both groups have subsequently (and separately) standardized many improvements for still image coding. Recently, ITU-T Study Group 16 completed the standardization of a new entropy coder, called the Q15-coder, whose statistical model is taken from the original JPEG-1 standard. This new standard, ITU-T Rec. T.851, can be used in lieu of the traditional Huffman (a form of variable length coding) entropy coder, and complements the QM arithmetic coder, both originally standardized in JPEG as ITU-T T.81 | ISO/IEC 10918-1. In contrast to Huffman entropy coding, arithmetic coding makes no prior assumptions about an image's statistics, but rather adapts to them in real time. This paper will present a tutorial on arithmetic coding, provide a history of arithmetic coding in JPEG, share the motivation for T.851, outline its changes, and provide comparison results against both the baseline Huffman and the original QM-coder entropy coders. It will conclude with suggestions for future work.
NASA Astrophysics Data System (ADS)
Choras, Ryszard S.
1983-03-01
The paper presents numerical techniques of transform image coding for image bandwidth compression. Unitary transformations known as the Hadamard, Haar, and Hadamard-Haar transformations are defined and developed. The paper describes the construction of the transformation matrices and presents algorithms for computing the transformations and their inverses. The considered transformations are applied to image processing, and their utility and effectiveness are compared with other discrete transforms on the basis of some standard performance criteria.
Preliminary study of coded-source-based neutron imaging at the CPHS
NASA Astrophysics Data System (ADS)
Li, Yuanji; Huang, Zhifeng; Chen, Zhiqiang; Kang, Kejun; Xiao, Yongshun; Wang, Xuewu; Wei, Jie; Loong, C.-K.
2011-09-01
A cold neutron radiography/tomography instrument is under construction at the Compact Pulsed Hadron Source (CPHS) at Tsinghua University, China. The neutron flux is so low that an acceptable neutron radiographic image requires a long exposure time in the single-hole imaging mode. The coded-source-based imaging technique helps increase the utilization of the neutron flux, reducing the exposure time without loss of spatial resolution, and provides high signal-to-noise ratio (SNR) images. Here we report a preliminary study on the feasibility of the coded-source-based technique applied to cold neutron imaging with a low-brilliance neutron source at the CPHS. A proper coded aperture is designed for use in the beamline in place of the single-hole aperture. Two image retrieval algorithms, the Wiener filter algorithm and the Richardson-Lucy algorithm, are evaluated using analytical and Monte Carlo simulations. The simulation results reveal that the coded source imaging technique is suitable for the CPHS and can partially overcome the problem of low neutron flux.
Feature point based image watermarking with insertions, deletions, and substitution codes
NASA Astrophysics Data System (ADS)
Bardyn, Dieter; Belet, Philippe; Dams, Tim; Dooms, Ann; Schelkens, Peter
2010-01-01
In this paper we concentrate on robust image watermarking (i.e. capable of resisting common signal processing operations and intentional attacks to destroy the watermark) based on image features. Kutter et al.7 motivated that well-chosen image features survive admissible image distortions and hence can benefit the watermarking process. These image features are used as location references for the region in which the watermark is embedded. To realize the latter, we make use of previous work16 where a ring-shaped region centered around an image feature is determined for watermark embedding. We propose to select a sequence of image features according to strict criteria, so that the chosen features lie far enough apart that the ring-shaped embedding regions do not overlap. Nevertheless, such a setup remains prone to insertion, deletion, and substitution errors. Therefore we applied a two-step coding scheme similar to the one employed by Coumou and Sharma4 for speech watermarking. Our contribution lies in extending Coumou and Sharma's one-dimensional scheme to the two-dimensional setup associated with our watermarking technique. The two-step coding scheme concatenates an outer Reed-Solomon error-correction code with an inner, blind synchronization mechanism.
The fractal structure of the mitochondrial genomes
NASA Astrophysics Data System (ADS)
Oiwa, Nestor N.; Glazier, James A.
2002-08-01
The mitochondrial DNA genome has a definite multifractal structure. We show that loops, hairpins and inverted palindromes are responsible for this self-similarity. We can thus establish a definite relation between the function of subsequences and their fractal dimension. Intriguingly, protein coding DNAs also exhibit palindromic structures, although they do not appear in the sequence of amino acids. These structures may reflect the stabilization and transcriptional control of DNA or the control of posttranscriptional editing of mRNA.
Building Fractal Models with Manipulatives.
ERIC Educational Resources Information Center
Coes, Loring
1993-01-01
Uses manipulative materials to build and examine geometric models that simulate the self-similarity properties of fractals. Examples are discussed in two dimensions, three dimensions, and the fractal dimension. Discusses how models can be misleading. (Contains 10 references.) (MDH)
Fractal Patterns and Chaos Games
ERIC Educational Resources Information Center
Devaney, Robert L.
2004-01-01
Teachers incorporate the chaos game and the concept of a fractal into various areas of the algebra and geometry curriculum. The chaos game approach to fractals provides teachers with an opportunity to help students comprehend the geometry of affine transformations.
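The chaos game mentioned above is a few lines of code, which is part of its classroom appeal. A sketch with the usual triangle setup (vertex positions, point count, and seed are arbitrary), plus a crude two-scale box count whose estimate should sit near the Sierpinski value log 3 / log 2 ≈ 1.585:

```python
import math, random

def chaos_game(n_points=20000, seed=42):
    # Jump halfway toward a randomly chosen triangle vertex; the visited
    # points settle onto the Sierpinski triangle.
    rng = random.Random(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = 0.3, 0.3
    points = []
    for _ in range(n_points):
        vx, vy = rng.choice(vertices)
        x, y = (x + vx) / 2, (y + vy) / 2
        points.append((x, y))
    return points

def box_dimension(points, k_coarse=3, k_fine=5):
    # Two-scale box count: dimension ~ log(N_fine/N_coarse) / log(scale ratio).
    def occupied(k):
        s = 2 ** k
        return len({(int(px * s), int(py * s)) for px, py in points})
    return (math.log(occupied(k_fine) / occupied(k_coarse))
            / math.log(2 ** (k_fine - k_coarse)))
```

Changing the jump fraction or the vertex set gives the other affine-transformation fractals the abstract alludes to.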
NASA Astrophysics Data System (ADS)
Oleshko, Klaudia; de Jesús Correa López, María; Romero, Alejandro; Ramírez, Victor; Pérez, Olga
2016-04-01
The effectiveness of the fractal toolbox to capture the scaling or fractal probability distribution, and simply the fractal statistics of the main hydrocarbon reservoir attributes, was highlighted by Mandelbrot (1995) and confirmed by several researchers (Zhao et al., 2015). Notwithstanding, after more than twenty years, the opinion is still common that fractals are not useful for petroleum engineers and especially for Geoengineering (Corbett, 2012). In spite of this negative background, we have successfully applied fractal and multifractal techniques in our project entitled "Petroleum Reservoir as a Fractal Reactor" (2013 up to now). The distinguishing feature of a Fractal Reservoir is the irregular shapes and rough pore/solid distributions (Siler, 2007), observed across a broad range of scales (from SEM to seismic). At the beginning, we accomplished a detailed analysis of the Nelson and Kibler (2003) Catalog of Porosity and Permeability, created for core plugs of siliciclastic rocks (around ten thousand data points were compared). We enriched this catalog with more than two thousand data points extracted from the last ten years of publications on PoroPerm (Corbett, 2012) in carbonate deposits, as well as with our own data from one of the PEMEX, Mexico, oil fields. Strong power-law scaling behavior was documented for the major part of these data from geological deposits of contrasting genesis. Based on these results, and taking into account the basic principles and models of the physics of fractals introduced by Per Bak and Kan Chen (1989), we have developed new software (Muukíl Kaab), useful for processing multiscale geological and geophysical information and for integrating static geological and petrophysical reservoir models into dynamic ones. A new type of fractal numerical model, with dynamical power-law relations among the shapes and sizes of mesh cells, was designed and calibrated in the studied area. Statistically sound power-law relations were established.
DCT/DST-based transform coding for intra prediction in image/video coding.
Saxena, Ankur; Fernandes, Felix C
2013-10-01
In this paper, we present a DCT/DST-based transform scheme that applies either the conventional DCT or the type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra prediction scheme in a codec that employs transforms along the horizontal and vertical directions separably. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of the horizontal and vertical modes, the DST is the optimal transform, with performance close to the KLT. Here, we prove that this is indeed the case for the other, oblique modes. The optimal choice between DCT and DST is based on the intra-prediction mode and requires no additional signaling information or rate-distortion search. The DCT/DST scheme presented in this paper was adopted in the HEVC standardization in March 2011. Further simplifications, especially to reduce implementation complexity, which remove the mode-dependency between DCT and DST and simply always use DST for the 4 × 4 intra luma blocks, were adopted in the HEVC standard in July 2012. Simulation results for the DCT/DST algorithm were obtained in the reference software of the ongoing HEVC standardization. Our results show that the DCT/DST scheme provides significant BD-rate improvement over the conventional DCT-based scheme for intra prediction in video sequences. PMID:23744679
Entanglement entropy on fractals
NASA Astrophysics Data System (ADS)
Faraji Astaneh, Amin
2016-03-01
We use the heat kernel method to calculate the entanglement entropy for a given entangling region on a fractal. The leading divergent term of the entropy is obtained as a function of the fractal dimension as well as the walk dimension. The power of the UV cutoff parameter is (generally) a fractional number, which, indeed, is a certain combination of these two indices. This exponent is known as the spectral dimension. We show that there is a novel log-periodic oscillatory behavior in the expression of entropy which has root in the complex dimension of the fractal. We finally indicate that the holographic calculation in a certain hyperscaling-violating bulk geometry yields the same leading term for the entanglement entropy, if one identifies the effective dimension of the hyperscaling-violating theory with the spectral dimension of the fractal. We provide additional support by comparing the behavior of the thermal entropy in terms of the temperature, computed for two geometries, the fractal geometry and the hyperscaling-violating background.
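The "certain combination" of the fractal dimension d_f and the walk dimension d_w referred to above is, in the standard convention (the paper's normalization may differ), the spectral dimension:

```latex
d_s = \frac{2\, d_f}{d_w}
```

On ordinary flat space d_w = 2, so d_s reduces to d_f; on a fractal, anomalous diffusion (d_w > 2) makes the spectral dimension, and hence the UV-cutoff exponent of the entropy, a fraction.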
Fractal dynamics of earthquakes
Bak, P.; Chen, K.
1995-05-01
Many objects in nature, from mountain landscapes to electrical breakdown and turbulence, have a self-similar fractal spatial structure. It seems obvious that to understand the origin of self-similar structures, one must understand the nature of the dynamical processes that created them: temporal and spatial properties must necessarily be completely interwoven. This is particularly true for earthquakes, which have a variety of fractal aspects. The distribution of energy released during earthquakes is given by the Gutenberg-Richter power law. The distribution of epicenters appears to be fractal with dimension D ≈ 1-1.3. The number of aftershocks decays with time according to the Omori power law. There have been several attempts to explain the Gutenberg-Richter law by starting from a fractal distribution of faults or stresses. But this is a hen-and-egg approach: to explain the Gutenberg-Richter law, one assumes the existence of another power law, the fractal distribution. The authors present results of a simple stick-slip model of earthquakes, which evolves to a self-organized critical state. Emphasis is on demonstrating that the empirical power laws for earthquakes indicate that the Earth's crust is at the critical state, with no typical time, space, or energy scale. Of course the model is tremendously oversimplified; however, in analogy with equilibrium phenomena, the authors do not expect criticality to depend on details of the model (universality).
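Self-organized criticality of the kind invoked above is usually illustrated with the Bak-Tang-Wiesenfeld sandpile rather than the authors' specific stick-slip model; the sketch below uses that classic stand-in (grid size, grain count, and threshold 4 are the standard toy choices). Avalanche sizes develop a heavy, power-law-like tail with no typical scale:

```python
import random

def sandpile_avalanches(size=15, grains=5000, seed=0):
    # Bak-Tang-Wiesenfeld sandpile: drop grains at random sites; any site
    # holding 4 or more grains topples, giving one grain to each neighbour
    # (grains fall off the edges). Avalanche size = topplings per drop.
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(grains):
        i0, j0 = rng.randrange(size), rng.randrange(size)
        grid[i0][j0] += 1
        topples, stack = 0, [(i0, j0)]
        while stack:
            i, j = stack.pop()
            if grid[i][j] < 4:
                continue
            grid[i][j] -= 4
            topples += 1
            if grid[i][j] >= 4:
                stack.append((i, j))
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    stack.append((ni, nj))
        sizes.append(topples)
    return sizes
```

A log-log histogram of the returned sizes is the sandpile analogue of a Gutenberg-Richter plot: most drops cause nothing, while rare system-spanning avalanches release most of the "energy".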
Edge extraction of optical subaperture based on fractal dimension method
NASA Astrophysics Data System (ADS)
Wang, Yunqi; Hui, Mei; Liu, Ming; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin
2015-09-01
Optical synthetic aperture imaging technology is an effective approach to increasing the aperture diameter of an optical system in order to improve resolution. In an optical synthetic aperture imaging system, the edge is more complex than in a traditional optical imaging system, and the relatively large gaps between the subapertures make cophasing a difficult problem. It is therefore important to extract the edge phase of each subaperture to achieve phase stitching and avoid the loss of effective frequencies. Fractal dimension, as a measure of image surface irregularity, statistically evaluates a complexity that is consistent with human visual perception of rough surface texture. Fractal dimension therefore provides a powerful tool for describing the surface characteristics of an image and can be applied to edge extraction. In our research, the box-counting dimension was used to calculate the fractal dimension over the whole image, and the calculated fractal dimension was mapped to a grayscale image. Regions with large fractal dimension correspond to sharper gray-scale changes in the original image and were accurately extracted as edge regions. The subaperture region and interference fringe edges were extracted from the interference pattern of an optical subaperture, laying the foundation for subaperture edge phase detection in future work.
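The edge-versus-flat discrimination described above can be shown with differential box counting on grayscale patches. This is a hedged sketch, not the paper's exact box-counting variant: the window size, gray range, and the two box scales are arbitrary choices. A flat patch scores exactly 2 (a smooth surface), while a patch crossed by an intensity edge scores higher:

```python
import math

def dbc_dimension(window, s_fine=2, s_coarse=4, levels=256):
    # Differential box counting on a square grayscale window: at box size s,
    # stack boxes of height s*levels/m over each s-by-s cell and count how
    # many are needed to cover the gray-level relief.
    m = len(window)
    def n_boxes(s):
        h = s * levels / m
        total = 0
        for bi in range(0, m, s):
            for bj in range(0, m, s):
                vals = [window[i][j]
                        for i in range(bi, bi + s) for j in range(bj, bj + s)]
                total += int((max(vals) - min(vals)) // h) + 1
        return total
    return (math.log(n_boxes(s_fine) / n_boxes(s_coarse))
            / math.log(s_coarse / s_fine))

flat = [[50] * 16 for _ in range(16)]               # smooth region
edge = [[0] * 7 + [100] * 9 for _ in range(16)]     # step edge through the patch
```

Sliding such a window over the interferogram and mapping each local dimension to gray level yields the fractal-dimension map from which the edge regions are thresholded.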
Information theoretical assessment of image gathering and coding for digital restoration
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; John, Sarah; Reichenbach, Stephen E.
1990-01-01
The process of image-gathering, coding, and restoration is presently treated in its entirety rather than as a catenation of isolated tasks, on the basis of the relationship between the spectral information density of a transmitted signal and the restorability of images from the signal. This 'information-theoretic' assessment accounts for the information density and efficiency of the acquired signal as a function of the image-gathering system's design and radiance-field statistics, as well as for the information efficiency and data compression that are obtainable through the combination of image gathering with coding to reduce signal redundancy. It is found that high information efficiency is achievable only through minimization of image-gathering degradation as well as signal redundancy.
Non-Uniform Contrast and Noise Correction for Coded Source Neutron Imaging
Santos-Villalobos, Hector J; Bingham, Philip R
2012-01-01
Since the first application of neutron radiography in the 1930s, the field has matured enough to support several applications. However, advances in the technology are far from concluded. In general, the resolution of scintillator-based detection systems is limited to the 10 μm range, and the relatively low count rate of neutron sources compared to other illumination sources restricts time-resolved measurement. One path toward improved resolution is the use of magnification; however, to date neutron optics are inefficient, expensive, and difficult to develop. There is a clear demand for cost-effective scintillator-based neutron imaging systems that achieve resolutions of 1 μm or less; such an imaging system would dramatically extend the applications of neutron imaging. For this purpose a coded source imaging system is under development. The current challenge is to reduce artifacts in the reconstructed coded source images. Artifacts are generated by non-uniform illumination of the source, gamma rays, dark current at the imaging sensor, and system noise from the reconstruction kernel. In this paper, we describe how to pre-process the coded signal to reduce noise and non-uniform illumination, and how to reconstruct the coded signal with three reconstruction methods: correlation, maximum likelihood estimation, and the algebraic reconstruction technique. We illustrate our results with experimental examples.
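Of the three reconstruction methods named above, correlation is the simplest to sketch. A hedged 1-D toy (the scene values are invented, and a real system is 2-D with noise terms): the mask is a length-7 binary sequence with ideal periodic autocorrelation, so correlating the detector signal with its ±1 version collapses the code and returns the scene exactly:

```python
# Length-7 binary mask with ideal periodic autocorrelation (an m-sequence).
mask = [1, 1, 1, 0, 1, 0, 0]
decoder = [2 * m - 1 for m in mask]          # +/-1 decoding pattern
scene = [0.0, 3.0, 1.0, 0.0, 0.0, 2.0, 0.0]
n = len(mask)

# Each detector bin sees the scene circularly convolved with the open mask.
measured = [sum(scene[i] * mask[(k - i) % n] for i in range(n))
            for k in range(n)]

# Correlation decoding: every off-peak lag of the mask/decoder correlation
# is zero, so the scene is recovered exactly (peak value (n + 1) / 2 = 4).
recovered = [sum(measured[(k + i) % n] * decoder[i] for i in range(n)) / 4
             for k in range(n)]
```

With detector noise and non-uniform illumination added, this exactness is lost, which is where the pre-processing and the maximum-likelihood and algebraic methods of the paper come in.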
IRMA Code II: unique annotation of medical images for access and retrieval.
Piesch, Tim-Christian; Müller, Henning; Kuhl, Christiane K; Deserno, Thomas M
2012-01-01
Content-based image retrieval (CBIR) provides novel options to access large repositories of medical images, in particular for storing, querying and reporting. This requires a revisit of nomenclatures for image classification such as DICOM, SNOMED, and RadLex. For instance, DICOM defines only about 20 concept terms for body regions, which partly overlap. This is insufficient to access the visual image characteristics. In 2002, the Image Retrieval in Medical Applications (IRMA) project proposed a mono-hierarchic, multi-axial coding scheme called IRMA Code. It was used in the Cross Language Evaluation Forum (ImageCLEF) annotation tasks. Ten years of experience have discovered several weak points. In this paper, we propose eight axes of three levels in hierarchy for (A) anatomy, (B) biological system, (C) configuration, (D) direction, (E) equipment, (F) finding, (G) generation, and (H) human maneuver as well as additional flags for age class, body side, contrast agent, ethnicity, finding certainty, gender, quality, and scanned film, which are captured in form of another axis (I). Using a tag-based notation IRMA Code II supports multiple selection coding within one axis, which is required for the new main categories. PMID:22874172
Research on signal-to-noise ratio characteristics and image restoration for wavefront coding
NASA Astrophysics Data System (ADS)
Wu, Yijian; Zhao, Yuejin; Guo, Xiaohu; Dong, Liquan; Jia, Wei; Liu, Ming; Zhao, Ji; Liu, Yun
2015-09-01
Wavefront coding, an optical-digital hybrid imaging technique, can be used to extend the depth of field. However, it sacrifices the signal-to-noise ratio (SNR) of the system to a certain degree, especially in the in-focus situation. The in-focus modulation transfer function (MTF) of a wavefront coding system is much lower than that of a traditional optical system, and noise is amplified in the digital image processing. This paper analyzes the SNR characteristics of the wavefront coding system in the frequency domain and calculates the rate of noise amplification in the digital processing. It also explains how image detector noise severely reduces the quality of the restored images. In order to reduce noise amplification in the process of image restoration, we propose a modified Wiener filter that is better suited for restoration with noise suppression taken into account. The simulation experiment demonstrates that the modified Wiener filter, compared with the traditional Wiener filter, performs much better for wavefront coding systems, and the restored images have a much higher SNR over the whole depth of field.
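A minimal sketch of Wiener-style restoration with an explicit noise-to-signal term, which is the knob a modified filter of this kind tunes to trade sharpness against noise amplification (the Gaussian OTF and the NSR value below are arbitrary assumptions, not the authors' design):

```python
import numpy as np

def wiener_restore(blurred, otf, nsr):
    """Wiener deconvolution: G = H* / (|H|^2 + NSR).
    Raising nsr suppresses noise amplification at frequencies
    where the OTF magnitude |H| is small."""
    G = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# Toy Gaussian OTF on a 64x64 grid (stand-in for a wavefront-coded MTF).
n = 64
fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]
otf = np.exp(-200.0 * (fx ** 2 + fy ** 2))

img = np.zeros((n, n))
img[20:40, 20:40] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))
restored = wiener_restore(blurred, otf, nsr=1e-3)

err_blur = np.mean((blurred - img) ** 2)
err_rest = np.mean((restored - img) ** 2)
```

With no detector noise the restoration recovers the pass-band almost exactly; with noisy data, the same `nsr` term is what keeps the division by small `|H|` from blowing up.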
Image enhancement using MCNP5 code and MATLAB in neutron radiography.
Tharwat, Montaser; Mohamed, Nader; Mongy, T
2014-07-01
This work presents a method that can be used to enhance the neutron radiography (NR) image for objects containing highly scattering materials such as hydrogen, carbon and other light materials. The method uses the Monte Carlo code MCNP5 to simulate the NR process, obtain the flux distribution for each pixel of the image, and determine the scattered neutron distribution that causes image blur; MATLAB is then used to subtract this scattered neutron distribution from the initial image to improve its quality. This work was performed before the commissioning of the digital NR system in January 2013. The MATLAB enhancement method is quite a good technique in the case of static film-based neutron radiography, while for the neutron imaging (NI) technique, image enhancement and quantitative measurement were performed efficiently using the ImageJ software. The enhanced image quality and quantitative measurements are presented in this work. PMID:24583508
Lensless coded-aperture imaging with separable Doubly-Toeplitz masks
NASA Astrophysics Data System (ADS)
DeWeert, Michael J.; Farm, Brian P.
2015-02-01
In certain imaging applications, conventional lens technology is constrained by the lack of materials which can effectively focus the radiation within a reasonable weight and volume. One solution is to use coded apertures-opaque plates perforated with multiple pinhole-like openings. If the openings are arranged in an appropriate pattern, then the images can be decoded and a clear image computed. Recently, computational imaging and the search for a means of producing programmable software-defined optics have revived interest in coded apertures. The former state-of-the-art masks, modified uniformly redundant arrays (MURAs), are effective for compact objects against uniform backgrounds, but have substantial drawbacks for extended scenes: (1) MURAs present an inherently ill-posed inversion problem that is unmanageable for large images, and (2) they are susceptible to diffraction: a diffracted MURA is no longer a MURA. We present a new class of coded apertures, separable Doubly-Toeplitz masks, which are efficiently decodable even for very large images-orders of magnitude faster than MURAs, and which remain decodable when diffracted. We implemented the masks using programmable spatial-light-modulators. Imaging experiments confirmed the effectiveness of separable Doubly-Toeplitz masks-images collected in natural light of extended outdoor scenes are rendered clearly.
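The computational advantage of separability can be illustrated with circulant matrices standing in for the paper's Doubly-Toeplitz masks (an assumption made here so that the two 1-D systems are exactly invertible; the scene and codes are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16

def circulant(c):
    """Circulant matrix whose product implements circular convolution with c."""
    return np.stack([np.roll(c, k) for k in range(len(c))], axis=1)

# A separable mask is the outer product of two 1-D codes, so the
# measurement factors as A @ scene @ B.T with small matrices A and B.
a = rng.random(n)
b = rng.random(n)
A, B = circulant(a), circulant(b)

scene = np.zeros((n, n))
scene[4:8, 9:12] = 1.0
recorded = A @ scene @ B.T            # separable coded-aperture measurement

# Decoding is two small 1-D solves instead of one huge 2-D inversion.
tmp = np.linalg.solve(A, recorded)    # removes the row coding
decoded = np.linalg.solve(B, tmp.T).T # removes the column coding
```

The two n x n solves replace an n² x n² system, which is the "orders of magnitude faster than MURAs" scaling the abstract claims for large images.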
Pond fractals in a tidal flat.
Cael, B B; Lambert, Bennett; Bisson, Kelsey
2015-11-01
Studies over the past decade have reported power-law distributions for the areas of terrestrial lakes and Arctic melt ponds, as well as fractal relationships between their areas and coastlines. Here we report similar fractal structure of ponds in a tidal flat, thereby extending the spatial and temporal scales on which such phenomena have been observed in geophysical systems. Images taken during low tide of a tidal flat in Damariscotta, Maine, reveal a well-resolved power-law distribution of pond sizes over three orders of magnitude with a consistent fractal area-perimeter relationship. The data are consistent with the predictions of percolation theory for unscreened perimeters and scale-free cluster size distributions and are robust to alterations of the image processing procedure. The small spatial and temporal scales of these data suggest this easily observable system may serve as a useful model for investigating the evolution of pond geometries, while emphasizing the generality of fractal behavior in geophysical surfaces. PMID:26651668
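The fractal area-perimeter relationship P ∝ A^(D/2) used in studies like this one turns into a dimension estimate via a log-log fit; the synthetic pond data below is generated with a known D purely for illustration:

```python
import numpy as np

def area_perimeter_dimension(areas, perimeters):
    """Estimate fractal dimension D from P ~ A**(D/2):
    D is twice the slope of log(P) against log(A)."""
    slope, _ = np.polyfit(np.log(areas), np.log(perimeters), 1)
    return 2.0 * slope

# Synthetic ponds spanning three orders of magnitude in area,
# generated with a known dimension D = 1.4.
areas = np.logspace(0, 3, 50)
perimeters = 3.5 * areas ** (1.4 / 2)
D = area_perimeter_dimension(areas, perimeters)
```

For smooth (Euclidean) boundaries the slope would give D = 1; departures toward larger D quantify the coastline roughness reported for the tidal-flat ponds.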
Image coding with uniform and piecewise-uniform vector quantizers.
Jeong, D G; Gibson, J D
1995-01-01
New lattice vector quantizer design procedures for nonuniform sources that yield excellent performance while retaining the structure required for fast quantization are described. Analytical methods for truncating and scaling lattices to be used in vector quantization are given, and an analytical technique for piecewise-linear multidimensional companding is presented. The uniform and piecewise-uniform lattice vector quantizers are then used to quantize the discrete cosine transform coefficients of images, and their objective and subjective performance and complexity are contrasted with other lattice vector quantizers and with LBG training-mode designs. PMID:18289966
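A uniform lattice quantizer in its simplest form (the cubic lattice Z^n) reduces quantization to scaling and rounding, with no codebook search; this sketch omits the truncation and piecewise-linear companding steps the paper designs for nonuniform sources:

```python
import numpy as np

def uniform_lattice_quantize(x, step):
    """Quantize vectors to the scaled integer lattice step * Z^n.
    The structure makes quantization a rounding operation rather
    than a nearest-neighbor search over a trained codebook."""
    return step * np.round(x / step)

rng = np.random.default_rng(2)
x = rng.normal(size=(1000, 2))          # stand-in for DCT coefficients
q = uniform_lattice_quantize(x, step=0.5)
max_err = np.max(np.abs(q - x))         # bounded by step / 2 per coordinate
```

For a nonuniform (e.g., Laplacian-like) source, a compander would first warp `x` so that this uniform quantizer operates on an approximately uniform density, which is the piecewise-uniform idea in the abstract.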
Cross-indexing of binary SIFT codes for large-scale image search.
Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi
2014-05-01
In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost of storage. Besides, it benefits computational efficiency, since similarity can be efficiently measured by the Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new searching strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm. PMID:24710404
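The Hamming-distance matching that motivates binary codes can be computed with XOR plus a popcount; the bit packing below is a generic illustration, not the FSB binarization itself:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between packed binary codes via XOR + popcount."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

# Two 256-bit codes packed into 32 bytes each (binary SIFT codes are
# commonly 256-D; the specific layout here is an assumption).
a = np.zeros(32, dtype=np.uint8)
b = np.zeros(32, dtype=np.uint8)
b[0] = 0b10110000  # flip three bits in the first byte
d = hamming(a, b)
```

Because XOR and popcount are single machine instructions on packed words, this comparison is far cheaper than a Euclidean distance over the original 128-D float SIFT descriptors.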
[Recent progress of research and applications of fractal and its theories in medicine].
Cai, Congbo; Wang, Ping
2014-10-01
Fractal, a mathematical concept, describes images with self-similarity and scale invariance. Some biological structures have been discovered to have fractal characteristics, such as the cerebral cortex surface, the retinal vessel structure, the cardiovascular network, and trabecular bone. It has been preliminarily confirmed that the three-dimensional structure of cells cultured in vitro can be significantly enhanced by a bionic fractal surface. Moreover, fractal theory in clinical research will help early diagnosis and treatment of diseases, reducing patients' pain and suffering. The development of diseases in the human body can be expressed by the parameters of fractal theory. It is of considerable significance to retrospectively review the preparation and application of fractal surfaces and their diagnostic value in medicine. This paper reviews applications of fractals and fractal theory in medical science, based on the research achievements of our laboratory. PMID:25764741
The influence of the property of random coded patterns on fluctuation-correlation ghost imaging
NASA Astrophysics Data System (ADS)
Wang, Chenglong; Gong, Wenlin; Shao, Xuehui; Han, Shensheng
2016-06-01
According to the reconstruction feature of fluctuation-correlation ghost imaging (GI), we define a normalized characteristic matrix and the influence of the property of random coded patterns on GI is investigated based on the theory of matrix analysis. Both simulative and experimental results demonstrate that for different random coded patterns, the quality of fluctuation-correlation GI can be predicted by some parameters extracted from the normalized characteristic matrix, which suggests its potential application in the optimization of random coded patterns for GI system.
Guo, Jing-Ming; Prasetyo, Heri
2015-03-01
This paper presents a technique for content-based image retrieval (CBIR) that exploits the advantage of low-complexity ordered-dither block truncation coding (ODBTC) for the generation of an image content descriptor. In the encoding step, ODBTC compresses an image block into corresponding quantizers and a bitmap image. Two image features are proposed to index an image, namely the color co-occurrence feature (CCF) and the bit pattern feature (BPF), which are generated directly from the ODBTC encoded data streams without performing the decoding process. The CCF and BPF of an image are derived from the two ODBTC quantizers and the bitmap, respectively, by involving a visual codebook. Experimental results show that the proposed method is superior to block truncation coding image retrieval systems and other earlier methods, proving that the ODBTC scheme is not only suited to image compression, because of its simplicity, but also offers a simple and effective descriptor for indexing images in a CBIR system. PMID:25420264
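The quantizer-plus-bitmap structure on which CCF and BPF are built can be seen in plain block truncation coding (ODBTC replaces the mean threshold with a dither array; this sketch shows only the classic BTC core):

```python
import numpy as np

def btc_encode(block):
    """Classic block truncation coding: threshold at the block mean,
    keep a binary bitmap plus low/high quantizer levels."""
    m = block.mean()
    bitmap = block >= m
    lo = block[~bitmap].mean() if (~bitmap).any() else m
    hi = block[bitmap].mean() if bitmap.any() else m
    return bitmap, lo, hi

def btc_decode(bitmap, lo, hi):
    """Reconstruct the block from its bitmap and two quantizers."""
    return np.where(bitmap, hi, lo)

block = np.array([[10., 10., 200., 200.],
                  [10., 10., 200., 200.],
                  [10., 10., 200., 200.],
                  [10., 10., 200., 200.]])
bitmap, lo, hi = btc_encode(block)
rec = btc_decode(bitmap, lo, hi)
```

The retrieval features in the paper are computed directly from `(lo, hi)` (color co-occurrence against a codebook) and from `bitmap` (bit patterns), so no decode step is ever needed at query time.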
Preliminary results of 3D dose calculations with MCNP-4B code from a SPECT image.
Rodríguez Gual, M; Lima, F F; Sospedra Alfonso, R; González González, J; Calderón Marín, C
2004-01-01
Interface software was developed to generate the input file for the Monte Carlo MCNP-4B code from a medical image in Interfile format version 3.3. The software was tested using a spherical phantom of tomography slices with a known cumulated activity distribution in Interfile format, generated with the IMAGAMMA medical image processing system. The 3D dose calculation obtained with the Monte Carlo MCNP-4B code was compared with the voxel S factor method. The results show a relative error between the two methods of less than 1%. PMID:15625058
Fractal dimensions of sinkholes
NASA Astrophysics Data System (ADS)
Reams, Max W.
1992-05-01
Sinkhole perimeters are probably fractals (D = 1.209-1.558) for sinkholes with areas larger than 10,000 m², based on area-perimeter plots of digitized data from karst surfaces developed on six geologic units in the United States. The sites in Florida, Kentucky, Indiana and Missouri were studied using maps with a scale of 1:24,000. Size-number distributions of sinkhole perimeters and areas may also be fractal, although data for small sinkholes are needed for verification. Studies based on small-scale maps are needed to evaluate the number and roughness of small sinkhole populations.
A coded-aperture technique allowing x-ray phase contrast imaging with conventional sources
Olivo, Alessandro; Speller, Robert
2007-08-13
Phase contrast imaging (PCI) solves the basic limitation of x-ray imaging, i.e., poor image contrast resulting from small absorption differences. Up to now, it has been mostly limited to synchrotron radiation facilities, due to the stringent requirements on the x-ray source and detectors, and only one technique was shown to provide PCI images with conventional sources but with limits in practical implementation. The authors propose a different approach, based on coded apertures, which provides high PCI signals with conventional sources and detectors and imposes practically no applicability limits. They expect this method to cast the basis of a widespread diffusion of PCI.
Somayaji, Manjunath; Bhakta, Vikrant R; Christensen, Marc P
2012-01-16
The optical transfer function of a cubic phase mask wavefront coding imaging system is experimentally measured across the entire range of defocus values encompassing the system's functional limits. The results are compared against mathematical expressions describing the spatial frequency response of these computational imagers. Experimental data shows that the observed modulation and phase transfer functions, available spatial frequency bandwidth and design range of this imaging system strongly agree with previously published mathematical analyses. An imaging system characterization application is also presented wherein it is shown that the phase transfer function is more robust than the modulation transfer function in estimating the strength of the cubic phase mask. PMID:22274533
The Classification of HEp-2 Cell Patterns Using Fractal Descriptor.
Xu, Rudan; Sun, Yuanyuan; Yang, Zhihao; Song, Bo; Hu, Xiaopeng
2015-07-01
Indirect immunofluorescence (IIF) with HEp-2 cells is considered a powerful, sensitive and comprehensive technique for analyzing antinuclear autoantibodies (ANAs). The automatic classification of HEp-2 cell images from IIF plays an important role in diagnosis. Fractal dimension can be used to analyze image representations and to quantify properties such as texture complexity and spatial occupation. In this study, we apply fractal theory to HEp-2 cell staining pattern classification, using a fractal descriptor for the first time in this task, together with a morphological descriptor and a pixel difference descriptor. The method is applied to the MIVIA data set and uses a support vector machine (SVM) classifier. Experimental results show that the fractal descriptor, combined with the morphological and pixel difference descriptors, makes the precisions of the six patterns more stable, all above 50%, achieving 67.17% overall accuracy at best with relatively simple feature vectors. PMID:26011888
A new pad-based neutron detector for stereo coded aperture thermal neutron imaging
NASA Astrophysics Data System (ADS)
Dioszegi, I.; Yu, B.; Smith, G.; Schaknowski, N.; Fried, J.; Vanier, P. E.; Salwen, C.; Forman, L.
2014-09-01
A new coded aperture thermal neutron imager system has been developed at Brookhaven National Laboratory. The cameras use a new type of position-sensitive ³He-filled ionization chamber, in which an anode plane is composed of an array of pads with independent acquisition channels. The charge is collected on each of the individual 5 × 5 mm² anode pads (48 × 48 in total, corresponding to a 24 × 24 cm² sensitive area) and read out by application-specific integrated circuits (ASICs). The new design has several advantages for coded-aperture imaging applications in the field, compared to the previous generation of wire-grid based neutron detectors. Among these are its rugged design, lighter weight and use of non-flammable stopping gas. The pad-based readout occurs in parallel circuits, making it capable of high count rates and also suitable for performing data analysis and imaging on an event-by-event basis. The spatial resolution of the detector can be better than the pixel size by using a charge sharing algorithm. In this paper we report on the development and performance of the new pad-based neutron camera, describe a charge sharing algorithm to achieve sub-pixel spatial resolution, and present the first stereoscopic coded aperture images of thermalized neutron sources using the new coded aperture thermal neutron imager system.
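Sub-pixel positioning from charge sharing can be sketched as a charge-weighted centroid over neighboring pads; the 5 mm pitch matches the pad size quoted above, but the charge-splitting fractions are invented for illustration:

```python
import numpy as np

def subpixel_centroid(charges, pitch=5.0):
    """Estimate the event position (in mm) from charge shared across
    neighboring pads: the charge-weighted centroid resolves positions
    finer than the pad pitch."""
    ys, xs = np.indices(charges.shape)
    total = charges.sum()
    x_mm = (xs * charges).sum() / total * pitch
    y_mm = (ys * charges).sum() / total * pitch
    return x_mm, y_mm

# Event landing between two pads: charge splits 30/70 along x.
charges = np.array([[0.0, 0.0, 0.0],
                    [0.3, 0.7, 0.0],
                    [0.0, 0.0, 0.0]])
x_mm, y_mm = subpixel_centroid(charges)
```

An event centered on a single pad would return that pad's center; the 0.3/0.7 split instead lands the estimate 3.5 mm along x, between the two pad centers, which is the sub-pixel behavior the abstract describes.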
Adaptive coded aperture imaging in the infrared: towards a practical implementation
NASA Astrophysics Data System (ADS)
Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley
2008-08-01
An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.
Computer vision for detecting and quantifying gamma-ray sources in coded-aperture images
Schaich, P.C.; Clark, G.A.; Sengupta, S.K.; Ziock, K.P.
1994-11-02
The authors report the development of an automatic image analysis system that detects gamma-ray source regions in images obtained from a coded aperture, gamma-ray imager. The number of gamma sources in the image is not known prior to analysis. The system counts the number (K) of gamma sources detected in the image and estimates the lower bound for the probability that the number of sources in the image is K. The system consists of a two-stage pattern classification scheme in which the Probabilistic Neural Network is used in the supervised learning mode. The algorithms were developed and tested using real gamma-ray images from controlled experiments in which the number and location of depleted uranium source disks in the scene are known.
Color-coded LED microscopy for multi-contrast and quantitative phase-gradient imaging.
Lee, Donghak; Ryu, Suho; Kim, Uihan; Jung, Daeseong; Joo, Chulmin
2015-12-01
We present a multi-contrast microscope based on color-coded illumination and computation. A programmable three-color light-emitting diode (LED) array illuminates a specimen, in which each color corresponds to a different illumination angle. A single color image sensor records light transmitted through the specimen, and images at each color channel are then separated and utilized to obtain bright-field, dark-field, and differential phase contrast (DPC) images simultaneously. Quantitative phase imaging is also achieved based on DPC images acquired with two different LED illumination patterns. The multi-contrast and quantitative phase imaging capabilities of our method are demonstrated by presenting images of various transparent biological samples. PMID:26713205
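The DPC computation from two complementary half-illuminations is essentially a normalized difference; the toy absorption and phase-gradient terms below are assumptions for illustration:

```python
import numpy as np

def dpc(i_left, i_right):
    """Differential phase contrast from two half-pupil illumination images:
    the normalized difference isolates the phase gradient and cancels
    the shared absorption term."""
    return (i_left - i_right) / (i_left + i_right + 1e-12)

# Toy specimen: a phase gradient steers light toward one illumination half,
# modulating the two images in opposite directions.
x = np.linspace(-1, 1, 64)
gradient = 0.2 * x                       # phase-gradient signal
absorption = 0.8 + 0.1 * np.cos(3 * x)   # absorption common to both images
i_left = absorption * (1 + gradient)
i_right = absorption * (1 - gradient)
signal = dpc(i_left, i_right)
```

In the color-coded scheme, `i_left` and `i_right` come from different color channels of a single exposure rather than from two sequential acquisitions.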
Quantum image pseudocolor coding based on the density-stratified method
NASA Astrophysics Data System (ADS)
Jiang, Nan; Wu, Wenya; Wang, Luo; Zhao, Na
2015-05-01
Pseudocolor processing is a branch of image enhancement. It maps grayscale images to color images to make them more attractive or to highlight certain regions. This paper proposes a quantum image pseudocolor coding scheme based on the density-stratified method, which defines a colormap and changes density values from gray to color in parallel according to the colormap. Firstly, two data structures, the quantum image representation GQIR and the quantum colormap QCR, are reviewed or proposed. Then, the quantum density-stratified algorithm is presented. Based on these, the quantum realization in the form of circuits is given. The main advantages of the quantum version of pseudocolor processing over the classical approach are that it needs less memory and can speed up the computation. Two kinds of examples help to describe the scheme further. Finally, future work is analyzed.
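A classical analogue of density-stratified pseudocolor coding simply partitions the gray range into strata and dyes each stratum with one colormap entry (the four-color map here is arbitrary; the paper's contribution is the quantum-circuit realization of this mapping):

```python
import numpy as np

def pseudocolor(gray, colormap):
    """Density-stratified pseudocolor: split the 0-255 gray range into
    as many strata as colormap entries and assign each stratum one RGB
    color."""
    levels = len(colormap)
    # astype(int) avoids uint8 overflow before the integer division
    strata = np.minimum((gray.astype(int) * levels) // 256, levels - 1)
    return colormap[strata]

colormap = np.array([[0, 0, 255],    # darkest stratum -> blue
                     [0, 255, 0],
                     [255, 255, 0],
                     [255, 0, 0]])   # brightest stratum -> red
gray = np.array([[0, 64],
                 [128, 255]], dtype=np.uint8)
rgb = pseudocolor(gray, colormap)
```

Each of the four pixels falls into a different stratum, so the output contains all four colormap entries.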
Context-Aware and Locality-Constrained Coding for Image Categorization
2014-01-01
Improving the coding strategy for BOF (Bag-of-Features) based feature design has drawn increasing attention in recent image categorization work. However, ambiguity in the coding procedure still impedes its further development. In this paper, we introduce a context-aware and locality-constrained coding (CALC) approach that uses context information to describe objects in a discriminative way. This is achieved by learning a word-to-word co-occurrence prior and imposing context information on locality-constrained coding. Firstly, the local context of each category is evaluated by learning a word-to-word co-occurrence matrix representing the spatial distribution of local features in a neighborhood. Then, the learned co-occurrence matrix is used to measure the context distance between local features and code words. Finally, we propose a coding strategy that simultaneously considers locality in feature space and in context space, while introducing feature weights. This novel coding strategy not only preserves semantic information in coding, but also alleviates the noise distortion of each class. Extensive experiments on several available datasets (Scene-15, Caltech101, and Caltech256) are conducted to validate the superiority of our algorithm by comparing it with baselines and recently published methods. Experimental results show that our method significantly improves on the baselines and achieves performance comparable to, and in some cases better than, the state of the art. PMID:24977215
Zhang Tiankui; Hu Huasi; Jia Qinggang; Zhang Fengna; Liu Zhihua; Hu Guang; Guo Wei; Chen Da; Li Zhenghong; Wu Yuelei
2012-11-15
Monte Carlo simulation of neutron coded imaging based on an encoding aperture, for a Z-pinch with a large field of view of 5 mm radius, has been investigated, and the coded image has been obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A 'residual watermark' emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation, because of its amplification of statistical fluctuations; this artifact has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, and an identification of the equivalent radius of the aperture was provided. Using the equivalent radius, the reconstruction can be accomplished without knowing the point spread function (PSF) of the actual aperture, and the result is close to that obtained using the PSF of the actual aperture.
Coded apertures allow high-energy x-ray phase contrast imaging with laboratory sources
NASA Astrophysics Data System (ADS)
Ignatyev, K.; Munro, P. R. T.; Chana, D.; Speller, R. D.; Olivo, A.
2011-07-01
This work analyzes the performance of the coded-aperture based x-ray phase contrast imaging approach, showing that it can be used at high x-ray energies with acceptable exposure times. Due to limitations of the source used, we show images acquired at tube voltages of up to 100 kVp; however, there is no intrinsic reason why the method could not be extended to even higher energies. In particular, we show quantitative agreement between the contrast extracted from the experimental x-ray images and the theoretical contrast determined by the behavior of the material's refractive index as a function of energy. This proves that all energies in the spectrum used contribute to image formation, and that there are no additional factors affecting image contrast as the x-ray energy is increased. We also demonstrate the method's flexibility by displaying and analyzing the first set of images obtained while varying the relative displacement between coded-aperture sets, which leads to image variations to some extent similar to those observed when changing the crystal angle in analyzer-based imaging. Finally, we discuss the method's possible advantages in terms of a simplified set-up, scalability, reduced exposure times, and complete achromaticity. We believe this would be helpful in applications requiring the imaging of highly absorbing samples, e.g., materials science and security inspection, and, by way of example, we demonstrate a possible application in the latter.
Wang, Xiaogang; Chen, Wen; Chen, Xudong
2015-03-01
In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated using an iterative phase retrieval technique with the QR code. We compare this technique to two other methods proposed in the literature, i.e., Fresnel-domain information authentication based on classical DRPE with a holographic technique, and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of optical information encryption and authentication systems. PMID:25836845
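A plain DRPE round trip can be sketched with two FFTs standing in for the 4f optical system; this shows only the classical DRPE core, not the paper's simplified lensless scheme, QR-code verification, or phase-retrieval compression:

```python
import numpy as np

rng = np.random.default_rng(3)

def drpe_encode(img, phase1, phase2):
    """Double random phase encoding: random phase in the input plane,
    Fourier transform, second random phase, inverse transform."""
    field = img * np.exp(1j * phase1)
    spectrum = np.fft.fft2(field) * np.exp(1j * phase2)
    return np.fft.ifft2(spectrum)

def drpe_decode(cipher, phase1, phase2):
    """Invert the encoding with the conjugate phase keys."""
    spectrum = np.fft.fft2(cipher) * np.exp(-1j * phase2)
    field = np.fft.ifft2(spectrum)
    return np.real(field * np.exp(-1j * phase1))

img = np.zeros((16, 16))
img[5:10, 5:10] = 1.0
p1 = rng.uniform(0, 2 * np.pi, img.shape)  # input-plane phase key
p2 = rng.uniform(0, 2 * np.pi, img.shape)  # Fourier-plane phase key
cipher = drpe_encode(img, p1, p2)
recovered = drpe_decode(cipher, p1, p2)
```

The ciphertext is a complex, noise-like field; decryption succeeds only with both phase keys, which is the property the authentication scheme builds on.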
ERIC Educational Resources Information Center
Camp, Dane R.
1991-01-01
After introducing the two-dimensional Koch curve, which is generated by simple recursions on an equilateral triangle, the process is extended to three dimensions with simple recursions on a regular tetrahedron. Included, for both fractal sequences, are iterative formulae, illustrations of the first several iterations, and a sample PASCAL program.…
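The same segment-replacement recursion, written in Python rather than PASCAL, generates 4^n segments of length 3^(-n) after n iterations:

```python
import numpy as np

def koch_iterate(points):
    """One Koch recursion: replace each segment by four segments,
    inserting an equilateral bump over the middle third."""
    new_pts = [np.asarray(points[0], float)]
    # rotate by +60 degrees (counterclockwise) to place the bump apex
    rot = np.array([[0.5, -np.sqrt(3) / 2],
                    [np.sqrt(3) / 2, 0.5]])
    for p, q in zip(points[:-1], points[1:]):
        p, q = np.asarray(p, float), np.asarray(q, float)
        third = (q - p) / 3
        a = p + third
        b = p + 2 * third
        apex = a + rot @ third
        new_pts += [a, apex, b, q]
    return new_pts

curve = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
for _ in range(3):
    curve = koch_iterate(curve)

n_segments = len(curve) - 1
length = sum(float(np.linalg.norm(q - p))
             for p, q in zip(curve[:-1], curve[1:]))
```

After three iterations the unit segment has become 64 segments of length 1/27, so the total length is (4/3)³, growing without bound as the recursion deepens, the hallmark of a curve with fractal dimension log 4 / log 3 ≈ 1.26.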
ERIC Educational Resources Information Center
Marks, Tim K.
1992-01-01
Presents a three-lesson unit that uses fractal geometry to measure the coastline of Massachusetts. Two lessons provide hands-on activities utilizing compass and grid methods to perform the measurements and the third lesson analyzes and explains the results of the activities. (MDH)
Hsü, K J; Hsü, A J
1990-01-01
Music critics have compared Bach's music to the precision of mathematics. What "mathematics" and what "precision" are the questions for a curious scientist. The purpose of this short note is to suggest that the mathematics is, at least in part, Mandelbrot's fractal geometry and the precision is the deviation from a log-log linear plot. PMID:11607061
Joint sparse coding based spatial pyramid matching for classification of color medical image.
Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin
2015-04-01
Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification. PMID:24976104
Flames in fractal grid generated turbulence
NASA Astrophysics Data System (ADS)
Goh, K. H. H.; Geipel, P.; Hampp, F.; Lindstedt, R. P.
2013-12-01
Twin premixed turbulent opposed jet flames were stabilized for lean mixtures of air with methane and propane in fractal grid generated turbulence. A density segregation method was applied alongside particle image velocimetry to obtain velocity and scalar statistics. It is shown that the current fractal grids increase the turbulence levels by around a factor of 2. Proper orthogonal decomposition (POD) was applied to show that the fractal grids produce slightly larger turbulent structures that decay at a slower rate as compared to conventional perforated plates. Conditional POD (CPOD) was also implemented using the density segregation technique and the results show that CPOD is essential to segregate the relative structures and turbulent kinetic energy distributions in each stream. The Kolmogorov length scales were also estimated providing values ∼0.1 and ∼0.5 mm in the reactants and products, respectively. Resolved profiles of flame surface density indicate that a thin flame assumption leading to bimodal statistics is not perfectly valid under the current conditions and it is expected that the data obtained will be of significant value to the development of computational methods that can provide information on the conditional structure of turbulence. It is concluded that the increase in the turbulent Reynolds number is without any negative impact on other parameters and that fractal grids provide a route towards removing the classical problem of a relatively low ratio of turbulent to bulk strain associated with the opposed jet configuration.
A Fractal Perspective on Scale in Geography
NASA Astrophysics Data System (ADS)
Jiang, Bin; Brandt, S.
2016-06-01
Scale is a fundamental concept that has attracted persistent attention in geography literature over the past several decades. However, it creates enormous confusion and frustration, particularly in the context of geographic information science, because of scale-related issues such as image resolution, and the modifiable areal unit problem (MAUP). This paper argues that the confusion and frustration mainly arise from Euclidean geometric thinking, with which locations, directions, and sizes are considered absolute, and it is time to reverse this conventional thinking. Hence, we review fractal geometry, together with its underlying way of thinking, and compare it to Euclidean geometry. Under the paradigm of Euclidean geometry, everything is measurable, no matter how big or small. However, geographic features, due to their fractal nature, are essentially unmeasurable or their sizes depend on scale. For example, the length of a coastline, the area of a lake, and the slope of a topographic surface are all scale-dependent. Seen from the perspective of fractal geometry, many scale issues, such as the MAUP, are inevitable. They appear unsolvable, but can be dealt with. To effectively deal with scale-related issues, we introduce topological and scaling analyses based on street-related concepts such as natural streets, street blocks, and natural cities. We further contend that spatial heterogeneity, or the fractal nature of geographic features, is the first and foremost effect of two spatial properties, because it is general and universal across all scales. Keywords: Scaling, spatial heterogeneity, conundrum of length, MAUP, topological analysis
NASA Technical Reports Server (NTRS)
Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron
2011-01-01
A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, and is adapted to better overcome issues of onboard scenarios. In this paper, we present a review of the state of the art in this field, and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.
A low-rate fingerprinting code and its application to blind image fingerprinting
NASA Astrophysics Data System (ADS)
Jourdas, Jean-Francois; Moulin, Pierre
2008-02-01
In fingerprinting, a signature, unique to each user, is embedded in each distributed copy of a multimedia content, in order to identify potential illegal redistributors. This paper investigates digital fingerprinting problems involving millions of users and a handful of colluders. In such problems the rate of the fingerprinting code is often well below fingerprinting capacity, and the use of codes with large minimum distance emerges as a natural design. However, optimal decoding is a formidable computational problem. We investigate a design based on a Reed-Solomon outer code modulated onto an orthonormal constellation, and the Guruswami-Sudan decoding algorithm. We analyze the potential and limitations of this scheme and assess its performance by means of Monte-Carlo simulations. In the second part of this paper, we apply this scheme to a blind image fingerprinting problem, using a linear cancellation technique for embedding in the wavelet domain. Dramatic improvements are obtained over previous blind image fingerprinting algorithms.
Ando, Takamasa; Horisaki, Ryoichi; Tanida, Jun
2015-08-20
We propose a method for visualizing three-dimensional objects in scattering media. Our method is based on active illumination using three-dimensionally coded patterns and a numerical algorithm employing a sparsity constraint. We experimentally demonstrated the proposed imaging method for test charts located three-dimensionally at different depths in the space behind a translucent sheet. PMID:26368767
Low-memory-usage image coding with line-based wavelet transform
NASA Astrophysics Data System (ADS)
Ye, Linning; Guo, Jiangling; Nutter, Brian; Mitra, Sunanda
2011-02-01
When compared to the traditional row-column wavelet transform, the line-based wavelet transform can achieve significant memory savings. However, the design of an image codec using the line-based wavelet transform is an intricate task because of the irregular order in which the wavelet coefficients are generated. The independent block coding feature of JPEG2000 makes it work effectively with the line-based wavelet transform. However, with wavelet tree-based image codecs, such as set partitioning in hierarchical trees, the line-based wavelet transform yields little memory advantage because many wavelet coefficients must be buffered before the coding starts. In this paper, the line-based wavelet transform was utilized to facilitate backward coding of wavelet trees (BCWT). Although the BCWT algorithm is a wavelet tree-based algorithm, its coding order differs from that of traditional wavelet tree-based algorithms, which allows the proposed line-based image codec to be more memory efficient than other line-based image codecs, including line-based JPEG2000, while still offering comparable rate-distortion performance and much lower system complexity.
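The memory argument can be illustrated with a sketch (not the BCWT codec itself): a one-level horizontal Haar transform applied line by line consumes rows from an iterator, so only a single row is ever buffered. A real line-based 2-D transform additionally buffers a few rows for the vertical filtering, but the principle is the same.

```python
def haar_1d(row):
    """One-level 1-D Haar transform of a row: averages then details."""
    avg = [(row[2*i] + row[2*i+1]) / 2 for i in range(len(row) // 2)]
    det = [(row[2*i] - row[2*i+1]) / 2 for i in range(len(row) // 2)]
    return avg + det

def line_based_transform(rows):
    """Consume rows one at a time (e.g. from a scanner or a file),
    yielding transformed lines without ever holding the full image."""
    for row in rows:
        yield haar_1d(row)

image = iter([[10, 12, 9, 11], [8, 8, 20, 4]])   # any row iterator works
for line in line_based_transform(image):
    print(line)   # → [11.0, 10.0, -1.0, -1.0] then [8.0, 12.0, 0.0, 8.0]
```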
Micro and MACRO Fractals Generated by Multi-Valued Dynamical Systems
NASA Astrophysics Data System (ADS)
Banakh, T.; Novosad, N.
2014-08-01
Given a multi-valued function Φ: X ⊸ X on a topological space X, we study the properties of its fixed fractal ✠Φ, defined as the closure of the orbit Φ^ω(*Φ) = ⋃_{n∈ω} Φ^n(*Φ) of the set *Φ = {x ∈ X : x ∈ Φ(x)} of fixed points of Φ. Special attention is paid to the duality between micro-fractals and macro-fractals, which are the fixed fractals ✠Φ and ✠Φ^{-1} of a contracting compact-valued function Φ: X ⊸ X on a complete metric space X. With the help of algorithms described in this paper, we generate various images of macro-fractals that are dual to well-known micro-fractals such as the fractal cross, the Sierpiński triangle, the Sierpiński carpet, the Koch curve, and fractal snowflakes. The resulting images show that macro-fractals have a large-scale fractal structure, which becomes clearly visible after suitable zooming.
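Attractors of contracting multi-valued maps such as these micro-fractals are commonly rendered with the chaos game; a minimal sketch for the Sierpiński triangle, whose three half-scale contractions toward the corners form the multi-valued map, follows.

```python
import random

# Sierpinski triangle: attractor of the multi-valued map whose values are
# three half-scale contractions toward the triangle's corners
CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

def chaos_game(n, seed=1):
    """Approximate the attractor by iterating randomly chosen contractions."""
    rng = random.Random(seed)
    x, y = 0.5, 0.25
    pts = []
    for _ in range(n):
        cx, cy = rng.choice(CORNERS)
        x, y = (x + cx) / 2, (y + cy) / 2   # contraction ratio 1/2
        pts.append((x, y))
    return pts

pts = chaos_game(20000)
# every iterate stays inside the convex hull of the corners
print(all(0 <= x <= 1 and 0 <= y <= 0.866 for x, y in pts))   # → True
```

Plotting `pts` reveals the familiar triangular gasket; the macro-fractal of the paper would instead be generated from the inverse (expanding) map.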
Discrimination of retinal images containing bright lesions using sparse coded features and SVM.
Sidibé, Désiré; Sadek, Ibrahim; Mériaudeau, Fabrice
2015-07-01
Diabetic Retinopathy (DR) is a chronic progressive disease of the retinal microvasculature which is among the major causes of vision loss in the world. The diagnosis of DR is based on the detection of retinal lesions such as microaneurysms, exudates and drusen in retinal images acquired by a fundus camera. However, bright lesions such as exudates and drusen share similar appearances while being signs of different diseases. Therefore, discriminating between different types of lesions is of interest for improving screening performances. In this paper, we propose to use sparse coding techniques for retinal images classification. In particular, we are interested in discriminating between retinal images containing either exudates or drusen, and normal images free of lesions. Extensive experiments show that dictionary learning techniques can capture strong structures of retinal images and produce discriminant descriptors for classification. In particular, using a linear SVM with the obtained sparse coded features, the proposed method achieves superior performance as compared with the popular Bag-of-Visual-Word approach for image classification. Experiments with a dataset of 828 retinal images collected from various sources show that the proposed approach provides excellent discrimination results for normal, drusen and exudates images. It achieves a sensitivity and a specificity of 96.50% and 97.70% for the normal class; 99.10% and 100% for the drusen class; and 97.40% and 98.20% for the exudates class with a medium size dictionary of 100 atoms. PMID:25935125
NASA Astrophysics Data System (ADS)
Cusanno, F.; Cisbani, E.; Colilli, S.; Fratoni, R.; Garibaldi, F.; Giuliani, F.; Gricia, M.; Lo Meo, S.; Lucentini, M.; Magliozzi, M. L.; Santavenere, F.; Lanza, R. C.; Majewski, S.; Cinti, M. N.; Pani, R.; Pellegrini, R.; Orsini Cancelli, V.; De Notaristefani, F.; Bollini, D.; Navarria, F.; Moschini, G.
2006-12-01
Molecular imaging with radionuclides is a very sensitive technique because it makes it possible to obtain images at nanomolar or picomolar concentrations. This has generated a rapid growth of interest in radionuclide imaging of small animals. Indeed, radiolabeling of small molecules, antibodies, peptides and probes for gene expression enables molecular imaging in vivo, but only if a suitable imaging system is used. Detecting small tumors in humans is another important application of such techniques. In single gamma imaging, there is always a well-known tradeoff between spatial resolution and sensitivity due to unavoidable collimation requirements. This limitation of the sensitivity due to collimation affects the performance of imaging systems, especially if only radiopharmaceuticals with limited uptake are available. In many cases coded aperture collimation can provide a solution, if the near-field artifact effect can be eliminated or limited. At least this is the case for "small volumes" imaging, involving small animals. In this paper 3D-laminography simulations and preliminary measurements with coded aperture collimation are presented. Different masks have been designed for different applications, showing the advantages of the technique in terms of sensitivity and spatial resolution. The limitations of the technique are also discussed.
Investigation on Coding Method of Dental X-ray Image for Integrated Hospital Information System
NASA Astrophysics Data System (ADS)
Seki, Takashi; Hamamoto, Kazuhiko
Recently, medical information systems in dentistry have been going digital. In such systems, X-ray images can be acquired digitally and entered directly, making it easy to combine the image data with the alpha-numerical data stored in the conventional medical information system and to manipulate both simultaneously. The purpose of this research is to develop a new coding method for dental X-ray images that reduces the disk space needed to store them and allows them to be transmitted efficiently over the Internet or a LAN. To accomplish this, we apply multi-resolution analysis (the wavelet transform). The proposed method achieves a lower bit rate than the conventional method.
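A toy example of the multi-resolution idea (not the authors' codec): a multi-level 1-D Haar transform concentrates the signal into a few large coefficients, and discarding the small details yields compression with bounded error.

```python
def haar_forward(sig):
    """Full multi-level 1-D Haar decomposition (len must be a power of 2)."""
    out, n = list(sig), len(sig)
    while n > 1:
        avg = [(out[2*i] + out[2*i+1]) / 2 for i in range(n // 2)]
        det = [(out[2*i] - out[2*i+1]) / 2 for i in range(n // 2)]
        out[:n] = avg + det
        n //= 2
    return out

def haar_inverse(coef):
    out, n = list(coef), 1
    while n < len(out):
        rec = []
        for a, d in zip(out[:n], out[n:2*n]):
            rec += [a + d, a - d]   # undo average/difference
        out[:2*n] = rec
        n *= 2
    return out

signal = [96, 98, 97, 99, 40, 42, 41, 43]
coef = haar_forward(signal)
kept = [c if abs(c) > 2 else 0 for c in coef]   # drop small details
approx = haar_inverse(kept)
print(sum(1 for c in kept if c != 0), "of", len(kept), "coefficients kept")
print(max(abs(a - b) for a, b in zip(approx, signal)))   # worst-case error
```

Here 2 of 8 coefficients reconstruct the signal to within 1.5; real wavelet codecs add quantization and entropy coding on top of this thresholding step.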
Fractal characteristics of fracture morphology of steels irradiated with high-energy ions
NASA Astrophysics Data System (ADS)
Xian, Yongqiang; Liu, Juan; Zhang, Chonghong; Chen, Jiachao; Yang, Yitao; Zhang, Liqing; Song, Yin
2015-06-01
A fractal analysis of fracture surfaces of steels (a ferritic/martensitic steel and an oxide-dispersion-strengthened ferritic steel) before and after the irradiation with high-energy ions is presented. Fracture surfaces were acquired from a tensile test and a small-ball punch test (SP). Digital images of the fracture surfaces obtained from scanning electron microscopy (SEM) were used to calculate the fractal dimension (FD) by using the pixel covering method. Boundary of binary image and fractal dimension were determined with a MATLAB program. The results indicate that fractal dimension can be an effective parameter to describe the characteristics of fracture surfaces before and after irradiation. The rougher the fracture surface, the larger the fractal dimension. Correlation of the change of fractal dimension with the embrittlement of the irradiated steels is discussed.
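The pixel-covering (box-counting) estimate can be sketched as follows; the fractal dimension is the least-squares slope of log N(s) versus log(1/s), verified here on sets of known dimension.

```python
import math

def box_count(pixels, box):
    """Number of box-by-box cells needed to cover the boundary pixels."""
    return len({(x // box, y // box) for x, y in pixels})

def fractal_dimension(pixels, sizes=(1, 2, 4, 8, 16)):
    """Least-squares slope of log N(s) against log(1/s)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(pixels, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# sanity checks on sets of known dimension
line = [(i, 0) for i in range(256)]                      # dimension 1
square = [(x, y) for x in range(64) for y in range(64)]  # dimension 2
print(round(fractal_dimension(line), 2), round(fractal_dimension(square), 2))
```

A rough fracture boundary extracted from an SEM image would give a dimension between these extremes, increasing with surface roughness as the paper reports.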
Coded aperture coherent scatter imaging for breast cancer detection: a Monte Carlo evaluation
NASA Astrophysics Data System (ADS)
Lakshmanan, Manu N.; Morris, Robert E.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.
2016-03-01
It is known that conventional x-ray imaging provides a maximum contrast between cancerous and healthy fibroglandular breast tissues of 3% based on their linear x-ray attenuation coefficients at 17.5 keV, whereas coherent scatter signal provides a maximum contrast of 19% based on their differential coherent scatter cross sections. Therefore in order to exploit this potential contrast, we seek to evaluate the performance of a coded-aperture coherent scatter imaging system for breast cancer detection and investigate its accuracy using Monte Carlo simulations. In the simulations we modeled our experimental system, which consists of a raster-scanned pencil beam of x-rays, a bismuth-tin coded aperture mask comprised of a repeating slit pattern with 2-mm periodicity, and a linear array of 128 detector pixels with 6.5-keV energy resolution. The breast tissue that was scanned comprised a 3-cm sample taken from a patient-based XCAT breast phantom containing a tomosynthesis-based realistic simulated lesion. The differential coherent scatter cross section was reconstructed at each pixel in the image using an iterative reconstruction algorithm. Each pixel in the reconstructed image was then classified as being either air or the type of breast tissue with which its normalized reconstructed differential coherent scatter cross section had the highest correlation coefficient. Comparison of the final tissue classification results with the ground truth image showed that the coded aperture imaging technique has a cancerous pixel detection sensitivity (correct identification of cancerous pixels), specificity (correctly ruling out healthy pixels as not being cancer) and accuracy of 92.4%, 91.9% and 92.0%, respectively. Our Monte Carlo evaluation of our experimental coded aperture coherent scatter imaging system shows that it is able to exploit the greater contrast available from coherently scattered x-rays to increase the accuracy of detecting cancerous regions within the breast.
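The per-pixel classification step described above can be sketched as a correlation classifier; the reference spectra below are made-up placeholders, not the paper's measured cross sections.

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# hypothetical reference differential coherent-scatter cross sections
REFERENCES = {
    "adipose":        [0.1, 0.8, 0.3, 0.1, 0.05],
    "fibroglandular": [0.2, 0.3, 0.9, 0.4, 0.1],
    "cancer":         [0.2, 0.25, 0.8, 0.6, 0.3],
}

def classify(measured):
    """Label a pixel by its best-correlated reference spectrum."""
    return max(REFERENCES, key=lambda t: pearson(measured, REFERENCES[t]))

print(classify([0.11, 0.82, 0.28, 0.12, 0.04]))   # best match: adipose
```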
Designing an efficient LT-code with unequal error protection for image transmission
NASA Astrophysics Data System (ADS)
S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.
2015-10-01
The use of images from earth observation satellites spans different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by on-board imaging devices and must be transmitted to the Earth using a communication system. Even though a high-resolution image can produce a better Quality of Service, it leads to transmitters with high bit rates which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bit. A channel code can be used to reduce the effect of such failures. In 2002, the Luby Transform code (LT-code) was introduced and shown to be very efficient under the binary erasure channel model. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the algorithm for image compression
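The LT-code idea can be sketched in a few lines: each packet is the XOR of a random subset of source symbols, and a peeling decoder resolves degree-1 packets. The degree distribution here is a toy (practical LT codes use the robust soliton distribution), and this sketch has no unequal error protection.

```python
import random

def lt_encode(source, n_packets, rng):
    """Each packet is the XOR of a random subset of source symbols
    (toy degree distribution; real LT codes use the robust soliton)."""
    k = len(source)
    packets = []
    for _ in range(n_packets):
        degree = rng.choice([1, 2, 2, 3, 4])
        idx = frozenset(rng.sample(range(k), degree))
        val = 0
        for i in idx:
            val ^= source[i]
        packets.append((idx, val))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: repeatedly resolve degree-1 packets."""
    packets = [[set(idx), val] for idx, val in packets]
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for p in packets:
            for i in [j for j in p[0] if j in known]:
                p[0].discard(i)        # strip symbols already decoded
                p[1] ^= known[i]
            if len(p[0]) == 1:
                known[p[0].pop()] = p[1]
                progress = True
    return [known.get(i) for i in range(k)]

rng = random.Random(7)
data = [rng.randrange(256) for _ in range(8)]
packets = lt_encode(data, 24, rng)
received = [p for p in packets if rng.random() > 0.2]   # erasure channel
# with ~3x overhead the peeling decoder normally recovers everything
print(lt_decode(received, 8) == data)
```

Unequal error protection variants bias the subset selection toward the most important bits (e.g. the early SPIHT stream).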
Experimental design and analysis of JND test on coded image/video
NASA Astrophysics Data System (ADS)
Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay
2015-09-01
The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum detectable difference between two visual stimuli. Conducting the subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrates by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate the difference in perception in a psychophysical experiment. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on image and video JND tests.
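The bisection procedure can be sketched as a binary search over quality-ordered coded versions, assuming monotone subject responses; the subject model below is hypothetical.

```python
def find_jnd(levels, same_as_anchor):
    """Binary search for the lowest quality level whose coded version is
    still indistinguishable from the anchor (responses assumed monotone)."""
    lo, hi = 0, len(levels) - 1          # levels sorted low -> high quality
    jnd, comparisons = None, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1                 # one subjective pair comparison
        if same_as_anchor(levels[mid]):
            jnd, hi = levels[mid], mid - 1   # try even lower quality
        else:
            lo = mid + 1
    return jnd, comparisons

# hypothetical subject: coding artifacts become visible below level 18
levels = list(range(1, 27))              # 26 coded versions of the anchor
jnd, n = find_jnd(levels, lambda q: q >= 18)
print(jnd, n)   # → 18 5   (5 comparisons instead of 26)
```

Real responses are noisy and not strictly monotone, which is why the paper's procedure needs careful experimental design around this core search.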
NASA Astrophysics Data System (ADS)
Slinger, Christopher W.; Bennett, Charlotte R.; Dyer, Gavin; Gilholm, Kevin; Gordon, Neil; Huckridge, David; McNie, Mark; Penney, Richard W.; Proudler, Ian K.; Rice, Kevin; Ridley, Kevin D.; Russell, Lee; de Villiers, Geoffrey D.; Watson, Philip J.
2011-09-01
There is an increasingly important requirement for day-and-night, wide-field-of-view imaging and tracking in applications including military, security and remote sensing. We describe the development of a proof-of-concept demonstrator of an adaptive coded-aperture imager operating in the mid-wave infrared to address these requirements. This consists of a coded-aperture mask, a set of optics and a 4k x 4k focal plane array (FPA). This system can produce images with a resolution better than that achieved by the detector pixel itself (i.e. superresolution) by combining multiple frames of data recorded with different coded-aperture mask patterns. This superresolution capability has been demonstrated both in the laboratory and in imaging of real-world scenes, the highest resolution achieved being ½ the FPA pixel pitch. The resolution for this configuration is currently limited by vibration; theoretically ¼ pixel pitch should be possible. Comparisons between conventional and ACAI solutions to these requirements show significant advantages in size, weight and cost for the ACAI approach.
Coded aperture systems as non-conventional lensless imagers for the visible and infrared
NASA Astrophysics Data System (ADS)
Slinger, Chris; Gordon, Neil; Lewis, Keith; McDonald, Gregor; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; De Villiers, Geoff; Wilson, Rebecca
2007-10-01
Coded aperture imaging (CAI) has been used extensively at gamma- and X-ray wavelengths, where conventional refractive and reflective techniques are impractical. CAI works by coding optical wavefronts from a scene using a patterned aperture, detecting the resulting intensity distribution, then using inverse digital signal processing to reconstruct an image. This paper will consider application of CAI to the visible and IR bands. Doing so has a number of potential advantages over existing imaging approaches at these longer wavelengths, including low mass, low volume, zero aberrations and distortions and graceful failure modes. Adaptive coded aperture (ACAI), facilitated by the use of a reconfigurable mask in a CAI configuration, adds further merits, an example being the ability to implement agile imaging modes with no macroscopic moving parts. However, diffraction effects must be considered and photon flux reductions can have adverse consequences on the image quality achievable. An analysis of these benefits and limitations is described, along with a description of a novel micro optical electro mechanical (MOEMS) microshutter technology for use in thermal band infrared ACAI systems. Preliminary experimental results are also presented.
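The code-then-decode principle can be sketched in one dimension with a length-7 quadratic-residue mask, whose cyclic correlation with a ±1 decoding array is a delta function plus a constant, so an idealized scene is recovered exactly (diffraction and noise, central to the paper's analysis, are ignored here).

```python
# 1-D cyclic coded-aperture sketch with a length-7 quadratic-residue mask.
P = 7
MASK = [1 if i in (1, 2, 4) else 0 for i in range(P)]   # open slits (QRs mod 7)
G = [2 * m - 1 for m in MASK]                            # +/-1 decoding array

def correlate(a, b):
    """Cyclic cross-correlation c[j] = sum_i a[i] * b[(i + j) % P]."""
    return [sum(a[i] * b[(i + j) % P] for i in range(P)) for j in range(P)]

scene = [0, 0, 5, 0, 1, 0, 0]        # two point sources of flux 5 and 1
detector = correlate(scene, MASK)    # each source casts a shifted mask shadow
raw = correlate(detector, G)         # correlation decoding

# for this mask, raw[j] = 4*scene[j] - total_flux, so recovery is exact
total = sum(detector) // sum(MASK)   # total scene flux (3 open slits)
recovered = [(r + total) // 4 for r in raw]
print(recovered)   # → [0, 0, 5, 0, 1, 0, 0]
```

At gamma/X-ray wavelengths this geometric picture holds well; in the visible and IR the diffraction effects discussed above break it, motivating the paper's more careful treatment.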
Source-Search Sensitivity of a Large-Area, Coded-Aperture, Gamma-Ray Imager
Ziock, K P; Collins, J W; Craig, W W; Fabris, L; Lanza, R C; Gallagher, S; Horn, B P; Madden, N W; Smith, E; Woodring, M L
2004-10-27
We have recently completed a large-area, coded-aperture, gamma-ray imager for use in searching for radiation sources. The instrument was constructed to verify that weak point sources can be detected at considerable distances if one uses imaging to overcome fluctuations in the natural background. The instrument uses a rank-19, one-dimensional coded aperture to cast shadow patterns onto a 0.57 m² NaI(Tl) detector composed of 57 individual cubes each 10 cm on a side. These are arranged in a 19 x 3 array. The mask is composed of four-centimeter thick, one-meter high, 10-cm wide lead blocks. The instrument is mounted in the back of a small truck from which images are obtained as one drives through a region. Results of first measurements obtained with the system are presented.
Fractal dimension based corneal fungal infection diagnosis
NASA Astrophysics Data System (ADS)
Balasubramanian, Madhusudhanan; Perkins, A. Louise; Beuerman, Roger W.; Iyengar, S. Sitharama
2006-08-01
We present a fractal measure based pattern classification algorithm for automatic feature extraction and identification of fungus associated with an infection of the cornea of the eye. A white-light confocal microscope image of suspected fungus exhibited locally linear and branching structures. The pixel intensity variation across the width of a fungal element was Gaussian. Linear features were extracted using a set of 2D directional matched Gaussian filters. Portions of fungus profiles that were not in the same focal plane appeared relatively blurred. We use Gaussian filters of standard deviation slightly larger than the width of a fungus to reduce discontinuities. Cell nuclei of cornea and nerves also exhibited locally linear structure. Cell nuclei were excluded by their relatively shorter lengths. Nerves in the cornea exhibited less branching compared with the fungus. Fractal dimensions of the locally linear features were computed using a box-counting method. A set of corneal images with fungal infection was used to generate class-conditional fractal measure distributions of fungus and nerves. The a priori class-conditional densities were built using an adaptive-mixtures method to reflect the true nature of the feature distributions and improve the classification accuracy. A maximum-likelihood classifier was used to classify the linear features extracted from test corneal images as 'normal' or 'with fungal infiltrates', using the a priori fractal measure distributions. We demonstrate the algorithm on the corneal images with culture-positive fungal infiltrates. The algorithm is fully automatic and will help diagnose fungal keratitis by generating a diagnostic mask of locations of the fungal infiltrates.
Coded aperture x-ray diffraction imaging with transmission computed tomography side-information
NASA Astrophysics Data System (ADS)
Odinaka, Ikenna; Greenberg, Joel A.; Kaganovsky, Yan; Holmgren, Andrew; Hassan, Mehadi; Politte, David G.; O'Sullivan, Joseph A.; Carin, Lawrence; Brady, David J.
2016-03-01
Coded aperture X-ray diffraction (coherent scatter spectral) imaging provides fast and dose-efficient measurements of the molecular structure of an object. The information provided is spatially-dependent and material-specific, and can be utilized in medical applications requiring material discrimination, such as tumor imaging. However, current coded aperture coherent scatter spectral imaging systems assume a uniformly or weakly attenuating object, and are plagued by image degradation due to non-uniform self-attenuation. We propose accounting for such non-uniformities in the self-attenuation by utilizing an X-ray computed tomography (CT) image (reconstructed attenuation map). In particular, we present an iterative algorithm for coherent scatter spectral image reconstruction, which incorporates the attenuation map at different stages, resulting in more accurate coherent scatter spectral images in comparison to their uncorrected counterpart. The algorithm is based on a spectrally grouped edge-preserving regularizer, where the neighborhood edge weights are determined by spatial distances and attenuation values.
Park, Jinhyoung; Lee, Jungwoo; Lau, Sien Ting; Lee, Changyang; Huang, Ying; Lien, Ching-Ling; Kirk Shung, K
2012-04-01
Acoustic radiation force impulse (ARFI) imaging has been developed as a non-invasive method for quantitative illustration of tissue stiffness or displacement. Conventional ARFI imaging (2-10 MHz) has been implemented in commercial scanners for illustrating elastic properties of several organs. The image resolution, however, is too coarse to study mechanical properties of micro-sized objects such as cells. This article thus presents a high-frequency coded excitation ARFI technique, with the ultimate goal of displaying elastic characteristics of cellular structures. Tissue-mimicking phantoms and zebrafish embryos are imaged with a 100-MHz lithium niobate (LiNbO₃) transducer, by cross-correlating tracked RF echoes with the reference. The phantom results show that the contrast of the ARFI image with coded excitation (14 dB) is better than that of the conventional ARFI image (9 dB). The depths of penetration are 2.6 and 2.2 mm, respectively. The stiffness data of the zebrafish demonstrate that the envelope is harder than the embryo region. The temporal displacement changes at the embryo and the chorion are as large as 36 and 3.6 μm, respectively. Consequently, this high-frequency ARFI approach may serve as a remote palpation imaging tool that reveals viscoelastic properties of small biological samples. PMID:22101757
Dynamic optical aberration correction with adaptive coded apertures techniques in conformal imaging
NASA Astrophysics Data System (ADS)
Li, Yan; Hu, Bin; Zhang, Pengbin; Zhang, Binglong
2015-02-01
Conformal imaging systems are confronted with dynamic aberrations during optical design. In classical optical design, the combined requirements of a wide field of view, high optical speed, environmental adaptation and imaging quality can be met only by introducing increasingly complex aberration correctors. In recent years, computational imaging based on adaptive coded aperture techniques, which have several potential advantages over more traditional optical systems, has become particularly suitable for military infrared imaging systems. The merits of this concept include low mass, volume and moments of inertia, potentially lower costs, graceful failure modes, and steerable fields of regard with no macroscopic moving parts. An example conformal imaging system design in which the elements of a set of binary coded aperture masks are optimized is presented in this paper. Simulation results show that the optical performance is closely related to the mask design and to the optimization of the reconstruction algorithm. As a dynamic aberration corrector, a binary-amplitude mask located at the aperture stop is optimized to mitigate dynamic optical aberrations as the field of regard changes, while allowing sufficient information to be recorded by the detector for recovery of a sharp image using digital image restoration.
A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2013-01-01
Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy. PMID:24129018
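The time-multiplexing gray-code strategy used as the comparison baseline can be sketched as follows: ⌈log₂ N⌉ binary stripe patterns are projected (N = 16 stripes here), and the bit sequence observed at a camera pixel is the Gray code of its projector stripe index.

```python
def gray(n):
    return n ^ (n >> 1)

def gray_to_binary(g):
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

def patterns(n_stripes, n_bits):
    """Pattern k is bright in stripe s where bit k of gray(s) is set."""
    return [[(gray(s) >> k) & 1 for s in range(n_stripes)]
            for k in range(n_bits)]

N_BITS, N_STRIPES = 4, 16
pats = patterns(N_STRIPES, N_BITS)

def decode(bits):
    """A camera pixel sees one bit per projected pattern; the Gray code
    of those bits is its projector stripe index."""
    return gray_to_binary(sum(b << k for k, b in enumerate(bits)))

observed = [pats[k][9] for k in range(N_BITS)]   # pixel inside stripe 9
print(decode(observed))   # → 9
```

Gray codes are preferred over plain binary because adjacent stripes differ in a single bit, limiting decoding errors at stripe boundaries; the paper's hybrid spatial-temporal color coding reduces the number of projected patterns further.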
Fractal Particles: Titan's Thermal Structure and IR Opacity
NASA Technical Reports Server (NTRS)
McKay, C. P.; Rannou, P.; Guez, L.; Young, E. F.; DeVincenzi, Donald (Technical Monitor)
1998-01-01
Titan's haze particles are the principal opacity at solar wavelengths. Most past work in modeling these particles has assumed spherical particles. However, observational evidence strongly favors fractal shapes for the haze particles. We consider the implications of fractal particles for the thermal structure and near-infrared opacity of Titan's atmosphere. We find that assuming fractal particles with optical properties based on laboratory tholin material, and with a production rate that allows a match to the geometric albedo, results in warmer troposphere and surface temperatures compared to spherical particles. In the near infrared (1-3 microns) the predicted opacity of the fractal particles is up to a factor of two less than for spherical particles. This has implications for the ability of Cassini to image Titan's surface at 1 micron.
Compressed X-ray phase-contrast imaging using a coded source
NASA Astrophysics Data System (ADS)
Sung, Yongjin; Xu, Ling; Nagarkar, Vivek; Gupta, Rajiv
2014-12-01
X-ray phase-contrast imaging (XPCI) holds great promise for medical X-ray imaging with high soft-tissue contrast. Obviating optical elements in the imaging chain, propagation-based XPCI (PB-XPCI) has definite advantages over other XPCI techniques in terms of cost, alignment and scalability. However, it imposes strict requirements on the spatial coherence of the source and the resolution of the detector. In this study, we demonstrate that using a coded X-ray source and sparsity-based reconstruction, we can significantly relax these requirements. Using numerical simulation, we assess the feasibility of our approach and study the effect of system parameters on the reconstructed image. The results are demonstrated with images obtained using a bench-top micro-focus XPCI system.
An improved Huffman coding method for archiving text, images, and music characters in DNA.
Ailenberg, Menachem; Rotstein, Ori
2009-09-01
An improved Huffman coding method for information storage in DNA is described. The method entails the utilization of modified unambiguous base assignment that enables efficient coding of characters. A plasmid-based library with efficient and reliable information retrieval and assembly with uniquely designed primers is described. We illustrate our approach by synthesis of DNA that encodes text, images, and music, which could easily be retrieved by DNA sequencing using the specific primers. The method is simple and lends itself to automated information retrieval. PMID:19852760
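The scheme above can be illustrated with a minimal sketch: a standard Huffman code over the characters, with the bitstream mapped onto bases. The two-base mapping below is a toy assumption; the paper's "modified unambiguous base assignment" is more elaborate and packs characters more efficiently.

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a binary Huffman code (character -> bitstring) for `text`."""
    heap = [[w, [ch, '']] for ch, w in sorted(Counter(text).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = '0' + pair[1]
        for pair in hi[1:]:
            pair[1] = '1' + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {ch: code for ch, code in heap[0][1:]}

# Toy one-bit-per-base assignment (assumed here, not the paper's scheme).
BIT_TO_BASE = {'0': 'A', '1': 'C'}
BASE_TO_BIT = {v: k for k, v in BIT_TO_BASE.items()}

def to_dna(text, code):
    return ''.join(BIT_TO_BASE[b] for ch in text for b in code[ch])

def from_dna(dna, code):
    """Greedy decode of the recovered bit stream; valid because Huffman
    codes are prefix-free."""
    inv = {v: k for k, v in code.items()}
    out, cur = [], ''
    for base in dna:
        cur += BASE_TO_BIT[base]
        if cur in inv:
            out.append(inv[cur])
            cur = ''
    return ''.join(out)
```

Retrieval by sequencing then amounts to reading the bases back and running the prefix-free decode.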
Nanoparticle-dispersed metamaterial sensors for adaptive coded aperture imaging applications
NASA Astrophysics Data System (ADS)
Nehmetallah, Georges; Banerjee, Partha; Aylo, Rola; Rogers, Stanley
2011-09-01
We propose tunable single-layer and multi-layer (periodic and with defect) structures comprising nanoparticle dispersed metamaterials in suitable hosts, including adaptive coded aperture constructs, for possible Adaptive Coded Aperture Imaging (ACAI) applications such as in microbolometry, pressure/temperature sensors, and directed energy transfer, over a wide frequency range, from visible to terahertz. These structures are easy to fabricate, are low-cost and tunable, and offer enhanced functionality, such as perfect absorption (in the case of bolometry) and low cross-talk (for sensors). Properties of the nanoparticle dispersed metamaterial are determined using effective medium theory.
Fractals in geology and geophysics
NASA Technical Reports Server (NTRS)
Turcotte, Donald L.
1989-01-01
The definition of a fractal distribution is that the number of objects N with a characteristic size greater than r scales as N ~ r^(-D). The frequency-size distributions for islands, earthquakes, fragments, ore deposits, and oil fields often satisfy this relation. This application illustrates a fundamental aspect of fractal distributions, scale invariance. The need to include an object to define the scale in photographs of many geological features is one indication of the wide applicability of scale invariance to geological problems; scale invariance can lead to fractal clustering. Geophysical spectra can also be related to fractals; these are self-affine fractals rather than self-similar fractals. Examples include the earth's topography and geoid.
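The fractal dimension D in the relation N ~ r^(-D) is obtained in practice as the negated slope of a log-log regression. A minimal sketch, using synthetic frequency-size data with an assumed D = 1.7:

```python
import math

def fit_fractal_dimension(sizes, counts):
    """Least-squares slope of log N versus log r; D is the negated slope."""
    xs = [math.log(r) for r in sizes]
    ys = [math.log(c) for c in counts]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic frequency-size data obeying N(r) = C * r**(-D) with D = 1.7;
# real island or earthquake catalogs would replace these lists.
sizes = [1.0, 2.0, 4.0, 8.0, 16.0]
counts = [1000.0 * r ** -1.7 for r in sizes]
D = fit_fractal_dimension(sizes, counts)
```

On real catalogs the fit is done over the range of r where the power law holds, since the scaling typically breaks down at the smallest and largest sizes.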
Huang, F.; Peng, R. D.; Liu, Y. H.; Chen, Z. Y.; Ye, M. F.; Wang, L.
2012-09-15
Fractal dust grains of different shapes are observed in a radially confined magnetized radio frequency plasma. The fractal dimensions of the dust structures in two-dimensional (2D) horizontal dust layers are calculated, and their evolution in the dust growth process is investigated. It is found that as the dust grains grow the fractal dimension of the dust structure decreases. In addition, the fractal dimension of the center region is larger than that of the entire region in the 2D dust layer. In the initial growth stage, the small dust particulates at a high number density in a 2D layer tend to fill space as a normal surface with fractal dimension D = 2. The mechanism of the formation of fractal dust grains is discussed.
NASA Astrophysics Data System (ADS)
Latka, Miroslaw; Glaubic-Latka, Marta; Latka, Dariusz; West, Bruce J.
2004-04-01
We study the middle cerebral artery blood flow velocity (MCAfv) in humans using transcranial Doppler ultrasonography (TCD). Scaling properties of time series of the axial flow velocity averaged over a cardiac beat interval may be characterized by two exponents. The short time scaling exponent (STSE) determines the statistical properties of fluctuations of blood flow velocities in short-time intervals while the Hurst exponent describes the long-term fractal properties. In many migraineurs the value of the STSE is significantly reduced and may approach that of the Hurst exponent. This change in dynamical properties reflects the significant loss of short-term adaptability and the overall hyperexcitability of the underlying cerebral blood flow control system. We call this effect fractal rigidity.
Fractal polyzirconosiloxane cluster coatings
Sugama, T.
1992-08-01
Fractal polyzirconosiloxane (PZS) cluster films were prepared through the hydrolysis-polycondensation-pyrolysis synthesis of two-step HCl acid-NaOH base catalyzed sol precursors consisting of N-[3-(triethoxysilyl)propyl]-4,5-dihydroimidazole, Zr(OC₃H₇)₄, methanol, and water. When amorphous PZSs were applied to aluminum as protective coatings against NaCl-induced corrosion, the effective film was that derived from the sol having a pH near the isoelectric point in the positive zeta potential region. The following four factors played an important role in assembling the protective PZS coating films: (1) a proper rate of condensation, (2) a moderate ratio of Si-O-Si to Si-O-Zr linkages formed in the PZS network, (3) hydrophobic characteristics, and (4) a specific microstructural geometry, in which large fractal clusters were linked together.
Wang Xujing; Becker, Frederick F.; Gascoyne, Peter R. C.
2010-12-15
The scale-invariant property of the cytoplasmic membrane of biological cells is examined by applying the Minkowski-Bouligand method to digitized scanning electron microscopy images of the cell surface. The membrane is found to exhibit fractal behavior, and the derived fractal dimension gives a good description of its morphological complexity. Furthermore, we found that this fractal dimension correlates well with the specific membrane dielectric capacitance derived from electrorotation measurements. Based on these findings, we propose a new fractal single-shell model to describe the dielectrics of mammalian cells, and compare it with the conventional single-shell model (SSM). We found that while both models fit the experimental data well, the new model is able to eliminate the discrepancy between the measured dielectric property of cells and that predicted by the SSM.
NASA Astrophysics Data System (ADS)
Zhang, Yongfei; Cao, Haiheng; Jiang, Hongxu; Li, Bo
2016-04-01
As remote sensing image applications are often characterized by limited bandwidth and high quality demands, higher coding performance for remote sensing images is desirable. The embedded block coding with optimal truncation (EBCOT) is the fundamental part of the JPEG2000 image compression standard. However, EBCOT only considers correlation within a sub-band and utilizes a context template of eight spatially neighboring coefficients in prediction. The existing optimization methods in the literature that use the current context template achieve little performance improvement. To address this problem, this paper presents a new mutual information (MI)-based context template selection and modeling method. By further considering the correlation across the sub-bands, the potential prediction coefficients, including neighbors, far neighbors, parent and parent neighbors, are comprehensively examined and selected in such a manner that achieves a nice trade-off between the MI-based correlation criterion and the prediction complexity. Based on the selected context template, a high-order prediction model, which jointly considers the weight and the significance state of each coefficient, is proposed. Experimental results show that the proposed algorithm consistently outperforms the benchmark JPEG2000 standard and state-of-the-art algorithms in terms of coding efficiency at a competitive computational cost, which makes it desirable in real-time compression applications, especially for remote sensing images.
Optimised Post-Exposure Image Sharpening Code for L3-CCD Detectors
Harding, Leon K.; Butler, Raymond F.; Redfern, R. Michael; Sheehan, Brendan J.; McDonald, James
2008-02-22
As light from celestial bodies traverses Earth's atmosphere, the wavefronts are distorted by atmospheric turbulence, thereby lowering the angular resolution of ground-based imaging. Rapid time-series imaging enables Post-Exposure Image Sharpening (PEIS) techniques, which employ shift-and-add frame registration to remove the tip-tilt component of the wavefront error, as well as telescope wobble, thus benefiting all observations. Further resolution gains are possible by selecting only frames with the best instantaneous seeing, a technique sometimes called 'Lucky Imaging'. We implemented these techniques in the 1990s, with the TRIFFID imaging photon-counting camera, and its associated data reduction software. The software was originally written for time-tagged photon-list data formats, recorded by detectors such as the MAMA. This paper describes our deep re-structuring of the software to handle the 2-d FITS images produced by Low Light Level CCD (L3-CCD) cameras, which have sufficient time-series resolution (>30 Hz) for PEIS. As before, our code can perform straight frame co-addition, use composite reference stars, perform PEIS under several different algorithms to determine the tip/tilt shifts, store 'quality' and shift information for each frame, perform frame selection, and generate exposure-maps for photometric correction. In addition, new code modules apply all 'static' calibrations (bias subtraction, dark subtraction and flat-fielding) to the frames immediately prior to the other algorithms. A unique feature of our PEIS/Lucky Imaging code is the use of bidirectional Wiener filtering. Coupled with the far higher sensitivity of the L3-CCD over the previous TRIFFID detectors, much fainter reference stars and much narrower time windows can be used.
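The shift-and-add registration at the heart of PEIS can be sketched in one dimension: estimate each frame's tip-tilt shift from the peak of its cross-correlation with a reference, undo the shift, and co-add. This is a generic sketch, not the TRIFFID code's implementation (which works on 2-d frames and supports several shift-determination algorithms).

```python
def best_shift(frame, ref):
    """Tip-tilt estimate: the cyclic shift maximising correlation with ref."""
    n = len(ref)
    def corr(s):
        return sum(frame[(i + s) % n] * ref[i] for i in range(n))
    return max(range(n), key=corr)

def shift_and_add(frames, ref):
    """Register every frame against ref by its best shift, then co-add.
    Sharp structure stacks coherently; uncorrelated noise averages down."""
    n = len(ref)
    acc = [0.0] * n
    for f in frames:
        s = best_shift(f, ref)
        for i in range(n):
            acc[i] += f[(i + s) % n]
    return acc
```

Frame selection ('Lucky Imaging') would simply drop frames whose peak correlation falls below a quality threshold before the co-addition step.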
Piecewise spectrally band-pass for compressive coded aperture spectral imaging
NASA Astrophysics Data System (ADS)
Qian, Lu-Lu; Lü, Qun-Bo; Huang, Min; Xiang, Li-Bin
2015-08-01
Coded aperture snapshot spectral imaging (CASSI) has attracted considerable attention in recent years. It has the remarkable advantages of high optical throughput, snapshot imaging, etc. The entire spatial-spectral data-cube can be reconstructed with just a single two-dimensional (2D) compressive sensing measurement. On the other hand, for less spectrally sparse scenes, the insufficiency of sparse sampling and aliasing in spatial-spectral images reduce the accuracy of the reconstructed three-dimensional (3D) spectral cube. To solve this problem, this paper extends the improved CASSI. A band-pass filter array is mounted on the coded mask, and the first image plane is thereby divided into a number of continuous spectral sub-band areas. The entire 3D spectral cube can then be captured through the relative movement between the object and the instrument. The principle analysis and imaging simulation are presented. Comparing the peak signal-to-noise ratio (PSNR) and the information entropy of the reconstructed images at different numbers of spectral sub-band areas, the reconstructed 3D spectral cube shows an observable improvement in reconstruction fidelity as the number of sub-bands increases and the number of spectral channels per sub-band simultaneously decreases. Project supported by the National Natural Science Foundation for Distinguished Young Scholars of China (Grant No. 61225024) and the National High Technology Research and Development Program of China (Grant No. 2011AA7012022).
NASA Astrophysics Data System (ADS)
Shinoda, Kazuma; Murakami, Yuri; Yamaguchi, Masahiro; Ohyama, Nagaaki
2010-01-01
In this paper we propose a multispectral image compression method based on lossy to lossless coding, suitable for both spectral and color reproduction. The proposed method divides multispectral image data into two components, RGB and residual. The RGB component is extracted from the multispectral image, for example, by using the XYZ Color Matching Functions, a color conversion matrix, and a gamma curve. The original multispectral image is estimated from the RGB data in the encoder, and the difference between the original and the estimated multispectral images, referred to as the residual component in this paper, is calculated in the encoder. The RGB and residual components are then each encoded by JPEG2000, and progressive decoding is possible from the losslessly encoded code-stream. Experimental results show that, although the proposed method is slightly inferior to JPEG2000 with a multicomponent transform in the rate-distortion plot of the spectral domain at low bit rates, a decoded RGB image shows high quality at low bit rates owing to the primary encoding of the RGB component. Its lossless compression ratio is close to that of JPEG2000 with the integer KLT.
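The RGB-plus-residual split can be illustrated with a toy predictor. The band-averaging "color matching" below is a stand-in assumption for the XYZ pipeline the abstract describes; the point is that whatever the estimate, storing the residual makes the decomposition exactly invertible.

```python
def rgb_from_spectrum(spec):
    """Toy 'color matching': average each third of the spectrum.
    (The real system uses XYZ Color Matching Functions and a gamma curve.)"""
    n = len(spec) // 3
    return [sum(spec[i * n:(i + 1) * n]) / n for i in range(3)]

def estimate_spectrum(rgb, n_bands):
    """Crude inverse predictor: hold each third of the spectrum at its
    band mean.  Any fixed predictor works as long as the decoder uses
    the same one."""
    n = n_bands // 3
    return [rgb[i // n] for i in range(n_bands)]

def encode(spec):
    """Split a spectrum into an RGB component and a residual component."""
    rgb = rgb_from_spectrum(spec)
    est = estimate_spectrum(rgb, len(spec))
    residual = [s - e for s, e in zip(spec, est)]
    return rgb, residual

def decode(rgb, residual):
    """Lossless reconstruction: predictor output plus residual."""
    est = estimate_spectrum(rgb, len(residual))
    return [e + r for e, r in zip(est, residual)]
```

Decoding only the RGB part gives the low-rate color preview; adding the (separately compressed) residual restores the full spectrum.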
Noise reduction of VQ encoded images through anti-gray coding.
Kuo, C J; Lin, C H; Yeh, C H
1999-01-01
Noise reduction of VQ encoded images is achieved through the proposed anti-gray coding (AGC) and a noise detection and correction scheme. In AGC, binary indices are assigned to the codevectors in such a way that the codevectors of 1-bit neighboring indices are as far apart as possible. To detect the channel errors, we first classify an image into uniform and edge regions. Then we propose a mask to detect the channel errors based on the image classification (uniform or edge region) and the characteristics of AGC. We also mathematically derive a criterion for error detection based on the image classification. Once erroneous indices are detected, the recovered indices can be easily chosen from a "candidate set" by minimizing the gray-level transition across the block boundaries in the VQ encoded image. Simulation results show that the proposed technique provides detection with a probability of error smaller than 0.1% and a probability of detection higher than 86.3% at a random bit error rate of 0.1%, while the undetected errors are invisible. In addition, the proposed detection and correction techniques improve the image quality (compared with that encoded by AGC) by 3.9 dB. PMID:18262863
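The anti-gray index assignment can be demonstrated on a toy scalar codebook: score an assignment by the average codevector distance over all 1-bit index flips (the single-bit channel errors), and search for an assignment that maximizes it. The codebook, objective, and brute-force search below are illustrative assumptions, not the paper's construction.

```python
from itertools import permutations

CODEVECTORS = [0, 1, 2, 3, 4, 5, 6, 7]   # toy sorted scalar codebook
N_BITS = 3                                # 8 codevectors -> 3-bit indices

def mean_flip_distance(assign):
    """Average codevector distance across all index pairs that differ in a
    single bit, i.e. across all possible 1-bit channel errors."""
    total = pairs = 0
    for i in range(len(assign)):
        for b in range(N_BITS):
            j = i ^ (1 << b)
            if j > i:
                total += abs(CODEVECTORS[assign[i]] - CODEVECTORS[assign[j]])
                pairs += 1
    return total / pairs

natural = tuple(range(8))   # index i -> codevector i (Gray-like: neighbors close)
best = max(permutations(range(8)), key=mean_flip_distance)
```

Under AGC the "best" assignment makes a single bit error land on a distant codevector, producing a visible outlier that the detection mask can flag, which is the opposite of the Gray-code goal of making bit errors innocuous.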
Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.
Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K
2014-02-01
Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution. PMID:24356347
The Calculation of Fractal Dimension in the Presence of Non-Fractal Clutter
NASA Technical Reports Server (NTRS)
Herren, Kenneth A.; Gregory, Don A.
1999-01-01
The area of information processing has grown dramatically over the last 50 years. In the areas of image processing and information storage, technology requirements have far outpaced the community's ability to meet demands. The need for faster recognition algorithms and more efficient storage of large quantities of data has forced the user to accept less than lossless retrieval of that data for analysis. In addition to clutter that is not the object of interest in the data set, throughput requirements often force the user to accept "noisy" data and to tolerate the clutter inherent in those data. It has been shown that some of this clutter, both the scene clutter (clouds, trees, etc.) and the noise introduced into the data by processing requirements, can be modeled as fractal or fractal-like. Traditional methods using Fourier deconvolution on these sources of noise in frequency space lead to loss of signal and can, in many cases, completely eliminate the target of interest. The parameters that characterize fractal-like noise (predominantly the fractal dimension) have been investigated and a technique to reduce or eliminate noise from real scenes has been developed. Examples of clutter-reduced images are presented.
Fractal multifiber microchannel plates
NASA Technical Reports Server (NTRS)
Cook, Lee M.; Feller, W. B.; Kenter, Almus T.; Chappell, Jon H.
1992-01-01
The construction and performance of microchannel plates (MCPs) made using fractal tiling methods are reviewed. MCPs with 40 mm active areas having near-perfect channel ordering were produced. These plates demonstrated electrical performance characteristics equivalent to conventionally constructed MCPs. These apparently are the first MCPs which have a sufficiently high degree of order to permit single channel addressability. Potential applications for these devices and the prospects for further development are discussed.
NASA Astrophysics Data System (ADS)
Martin, Demetri
2015-03-01
Demetri Martin prepared this palindromic poem as his project for Michael Frame's fractal geometry class at Yale. Notice the first, fourth, and seventh words in the second and next-to-last lines are palindromes, the first two and last two lines are palindromes, the middle line, "Be still if I fill its ebb" minus its last letter is a palindrome, and the entire poem is a palindrome...
NASA Astrophysics Data System (ADS)
Burdzy, Krzysztof; Hołyst, Robert; Pruski, Łukasz
2013-05-01
We investigate a process of random walks of a point particle on a two-dimensional square lattice of size n×n with periodic boundary conditions. A fraction p⩽20% of the lattice is occupied by holes (p represents macroporosity). A site not occupied by a hole is occupied by an obstacle. Upon a random step of the walker, a number of obstacles, M, can be pushed aside. The system approaches equilibrium in (n ln n)² steps. We determine the distribution of M pushed in a single move at equilibrium. The distribution F(M) follows a power law, F(M) ~ M^γ, where γ = -1.18 for p = 0.1, decreasing to γ = -1.28 for p = 0.01. Irrespective of the initial distribution of holes on the lattice, the final equilibrium distribution of holes forms a fractal with fractal dimension changing from a = 1.56 for p = 0.20 to a = 1.42 for p = 0.001 (for n = 4,000). The trace of a random walker forms a distribution with the expected fractal dimension of 2.
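The obstacle-pushing move can be sketched in a 1-D analogue (the paper's model is 2-D; the same rule would be applied along the chosen step direction). The walker stands on a hole; the contiguous run of obstacles ahead slides as a block into the first hole along the line, and the move is rejected if no hole is reachable.

```python
def push_step(ring, pos, direction):
    """One walker move on a periodic 1-D ring of 'O' (obstacle) and 'H'
    (hole) cells; the walker stands on a hole at `pos` and `direction`
    is +1 or -1.  Returns (new_pos, number_of_obstacles_pushed)."""
    n = len(ring)
    k = 1
    while k < n and ring[(pos + k * direction) % n] == 'O':
        k += 1
    target = (pos + k * direction) % n
    if k == n or ring[target] != 'H':
        return pos, 0                    # blocked: no hole on this line
    if k > 1:                            # slide the obstacle block forward
        ring[target] = 'O'
        ring[(pos + direction) % n] = 'H'
    return (pos + direction) % n, k - 1
```

Recording the pushed count k-1 over many equilibrated moves is what would build up the paper's distribution F(M).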
Darwinian Evolution and Fractals
NASA Astrophysics Data System (ADS)
Carr, Paul H.
2009-05-01
Did nature's beauty emerge by chance or was it intelligently designed? Richard Dawkins asserts that evolution is blind aimless chance. Michael Behe believes, on the contrary, that the first cell was intelligently designed. The scientific evidence is that nature's creativity arises from the interplay between chance AND design (laws). Darwin's ``On the Origin of Species,'' published 150 years ago in 1859, characterized evolution as the interplay between variations (symbolized by dice) and the natural selection law (design). This is evident in recent discoveries in DNA, Mandelbrot's Fractal Geometry of Nature, and the success of the genetic design algorithm. Algorithms for generating fractals have the same interplay between randomness and law as evolution. Fractal statistics, which are not completely random, characterize phenomena such as fluctuations in the stock market, the Nile River, rainfall, and tree rings. As chaos theorist Joseph Ford put it: God plays dice, but the dice are loaded. Thus Darwin, in discovering the evolutionary interplay between variations and natural selection, was throwing God's dice!
Fractals in physiology and medicine
NASA Technical Reports Server (NTRS)
Goldberger, Ary L.; West, Bruce J.
1987-01-01
The paper demonstrates how the nonlinear concepts of fractals, as applied in physiology and medicine, can provide an insight into the organization of such complex structures as the tracheobronchial tree and heart, as well as into the dynamics of healthy physiological variability. Particular attention is given to the characteristics of computer-generated fractal lungs and heart and to fractal pathologies in these organs. It is shown that alterations in fractal scaling may underlie a number of pathophysiological disturbances, including sudden cardiac death syndromes.
Dimension of fractal basin boundaries
Park, B.S.
1988-01-01
In many dynamical systems, multiple attractors coexist for certain parameter ranges. The set of initial conditions that asymptotically approach each attractor is its basin of attraction. These basins can be intertwined on arbitrarily small scales. A basin boundary can be either smooth or fractal. Dynamical systems that have fractal basin boundaries show final-state sensitivity to the initial conditions. A measure of this sensitivity (the uncertainty exponent α) is related to the dimension of the basin boundary by d = D - α, where D is the dimension of the phase space and d is the dimension of the basin boundary. At metamorphosis values of the parameter, a conversion may occur from a smooth to a fractal basin boundary (smooth-fractal metamorphosis), or from a fractal basin boundary to another fractal basin boundary characteristically different from the previous one (fractal-fractal metamorphosis). The dimension changes continuously with the parameter except at the metamorphosis values, where the dimension of the basin boundary jumps discontinuously. We chose the Henon map and the forced damped pendulum to investigate this. Scaling of the basin volumes near the metamorphosis values of the parameter is also studied for the Henon map. Observations are explained analytically by using a low-dimensional model map.
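Final-state sensitivity can be measured numerically: the fraction f(ε) of initial conditions whose final state changes under a perturbation of size ε scales as f(ε) ~ ε^α on a fractal boundary, giving d = D - α. The sketch below uses the basins of Newton's method for z³ = 1 as a stand-in system with a well-known fractal boundary (the abstract's examples are the Henon map and the forced damped pendulum).

```python
import cmath
import random

ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def basin(z, iters=40):
    """Index of the cube root of unity that Newton's method reaches from z."""
    for _ in range(iters):
        if z == 0:
            return -1                    # degenerate start, no basin
        z -= (z ** 3 - 1) / (3 * z ** 2)
    return min(range(3), key=lambda k: abs(z - ROOTS[k]))

def uncertain_fraction(eps, n=2000, seed=0):
    """Fraction of random initial conditions in [-2,2]^2 whose final state
    changes under a perturbation eps; scales like eps**alpha."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        z = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
        if basin(z) != basin(z + eps):
            hits += 1
    return hits / n
```

Fitting log f(ε) against log ε over several ε values estimates α, and hence the boundary dimension d = 2 - α for this two-dimensional phase space.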
Asymmetric neural coding revealed by in vivo calcium imaging in the honey bee brain.
Rigosi, Elisa; Haase, Albrecht; Rath, Lisa; Anfora, Gianfranco; Vallortigara, Giorgio; Szyszka, Paul
2015-03-22
Left-right asymmetries are common properties of nervous systems. Although lateralized sensory processing has been well studied, information is lacking about how asymmetries are represented at the level of neural coding. Using in vivo functional imaging, we identified a population-level left-right asymmetry in the honey bee's primary olfactory centre, the antennal lobe (AL). When both antennae were stimulated via a frontal odour source, the inter-odour distances between neural response patterns were higher in the right than in the left AL. Behavioural data correlated with the brain imaging results: bees with only their right antenna were better in discriminating a target odour in a cross-adaptation paradigm. We hypothesize that the differences in neural odour representations in the two brain sides serve to increase coding capacity by parallel processing. PMID:25673679
Joint source/channel coding for image transmission with JPEG2000 over memoryless channels.
Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W
2005-08-01
The high compression efficiency and various features provided by JPEG2000 make it attractive for image transmission purposes. A novel joint source/channel coding scheme tailored for JPEG2000 is proposed in this paper to minimize the end-to-end image distortion within a given total transmission rate through memoryless channels. It provides unequal error protection by combining the forward error correction capability from channel codes and the error detection/localization functionality from JPEG2000 in an effective way. The proposed scheme generates quality scalable and error-resilient codestreams. It gives competitive performance with other existing schemes for JPEG2000 in the matched channel condition case and provides more graceful quality degradation for mismatched cases. Furthermore, both fixed-length source packets and fixed-length channel packets can be efficiently formed with the same algorithm. PMID:16121451
Measurements with Pinhole and Coded Aperture Gamma-Ray Imaging Systems
Raffo-Caiado, Ana Claudia; Solodov, Alexander A; Abdul-Jabbar, Najeb M; Hayward, Jason P; Ziock, Klaus-Peter
2010-01-01
From a safeguards perspective, gamma-ray imaging has the potential to reduce manpower and cost for effectively locating and monitoring special nuclear material. The purpose of this project was to investigate the performance of pinhole and coded aperture gamma-ray imaging systems at Oak Ridge National Laboratory (ORNL). With the aid of the European Commission Joint Research Centre (JRC), radiometric data will be combined with scans from a three-dimensional design information verification (3D-DIV) system. Measurements were performed at the ORNL Safeguards Laboratory using sources that model holdup in radiological facilities. They showed that for situations with moderate amounts of solid or dense U sources, the coded aperture was able to predict source location and geometry within ~7% of actual values, while the pinhole gave a broad representation of source distributions.
Buck, R M; Hall, J M
1999-06-01
COG is a major multiparticle simulation code in the LLNL Monte Carlo radiation transport toolkit. It was designed to solve deep-penetration radiation shielding problems in arbitrarily complex 3D geometries, involving coupled transport of photons, neutrons, and electrons. COG was written to provide as much accuracy as the underlying cross-sections will allow, and has a number of variance-reduction features to speed computations. Recently COG has been applied to the simulation of high-resolution radiographs of complex objects and the evaluation of contraband detection schemes. In this paper we will give a brief description of the capabilities of the COG transport code and show several examples of neutron and gamma-ray imaging simulations. Keywords: Monte Carlo, radiation transport, simulated radiography, nonintrusive inspection, neutron imaging.
Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis
Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina
2015-01-01
AIM To investigate and quantify changes in the branching patterns of the retina vascular network in diabetes using the fractal analysis method. METHODS This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina were examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. RESULTS It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using the fractal geometry. The average of fractal dimensions D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values of mild non-proliferative DR (NPDR) images (segmented and skeletonized versions). The average of fractal dimensions D for the normal images (segmented and skeletonized versions) is higher than the corresponding values of moderate NPDR images (segmented and skeletonized versions). The lowest values were found for the corresponding values of severe NPDR images (segmented and skeletonized versions). CONCLUSION The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantitatively characterized by means of the fractal dimension. Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from
The Use of Fractals for the Study of the Psychology of Perception:
NASA Astrophysics Data System (ADS)
Mitina, Olga V.; Abraham, Frederick David
The present article deals with the perception of time (subjective assessment of temporal intervals), complexity, and aesthetic attractiveness of visual objects. We conducted experimental research to construct functional relations between objective parameters of fractal complexity (fractal dimension and Lyapunov exponent) and the subjective perception of that complexity. As stimulus material we used a program based on Sprott's algorithms for the generation of fractals and the calculation of their mathematical characteristics. For the research, 20 fractals were selected with fractal dimensions varying from 0.52 to 2.36 and Lyapunov exponents from 0.01 to 0.22. We conducted two experiments: (1) A total of 20 fractals were shown to 93 participants. The fractals were displayed on the screen of a computer for randomly chosen time intervals ranging from 5 to 20 s. For each fractal displayed, the participant rated the complexity and attractiveness of the fractal on a ten-point scale and estimated the duration of the presentation of the stimulus. Each participant also completed several personality tests (Cattell and others). The main purpose of this experiment was the analysis of the correlation between personal characteristics and the subjective perception of complexity, attractiveness, and duration of the fractal's presentation. (2) The same 20 fractals were shown to 47 participants as they were forming on the screen of the computer for a fixed interval. Participants again estimated the subjective complexity and attractiveness of the fractals. The hypothesis on the applicability of the Weber-Fechner law to the perception of time, complexity, and subjective attractiveness was confirmed for measures of the dynamical properties of fractal images.
Santos-Villalobos, Hector J; Bingham, Philip R; Gregor, Jens
2013-01-01
The limitations in neutron flux and resolution (L/D) of current neutron imaging systems can be addressed with a Coded Source Imaging system with magnification (xCSI). More precisely, the multiple sources in an xCSI system can exceed the flux of a single pinhole system by several orders of magnitude, while maintaining a higher L/D with the small sources. Moreover, designing for an xCSI system reduces noise from neutron scattering, because the object is placed away from the detector to achieve magnification. However, xCSI systems are adversely affected by correlated noise such as non-uniform illumination of the neutron source, incorrect sampling of the coded radiograph, misalignment of the coded masks, mask transparency, and imperfections in the system Point Spread Function (PSF). We argue that a model-based reconstruction algorithm can overcome these problems and describe the implementation of a Simultaneous Iterative Reconstruction Technique algorithm for coded sources. Design pitfalls that preclude a satisfactory reconstruction are documented.
An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images
Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush
2009-01-01
A novel adaptive source-channel coding with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated at the receiver. The overall transmitted data can be controlled by the user (clinician). In medical data transmission, it is vital to keep the distortion level under control, as in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces transmission time and error. Moreover, the system is very user friendly, since the selection of the RoI, its size, the overall code rate, and a number of test features such as the noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both the binary symmetric channel (BSC) and the Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770
Fractal analysis of Xylella fastidiosa biofilm formation
NASA Astrophysics Data System (ADS)
Moreau, A. L. D.; Lorite, G. S.; Rodrigues, C. M.; Souza, A. A.; Cotta, M. A.
2009-07-01
We have investigated the growth process of Xylella fastidiosa biofilms inoculated on a glass surface. The size of and distance between biofilms were analyzed from optical images; a fractal analysis was carried out using scaling concepts and atomic force microscopy images. We observed that different biofilms show similar fractal characteristics, although morphological variations can be identified for different biofilm stages. Two types of structural patterns are suggested from the observed fractal dimensions Df. In the initial and final stages of biofilm formation, Df is 2.73±0.06 and 2.68±0.06, respectively, while in the maturation stage, Df=2.57±0.08. These values suggest that the biofilm growth can be understood as an Eden model in the former case, while diffusion-limited aggregation (DLA) seems to dominate the maturation stage. Changes in the correlation length parallel to the surface were also observed; these results were correlated with the biofilm matrix formation, which can hinder nutrient diffusion and thus create conditions to drive DLA growth.
Using locality-constrained linear coding in automatic target detection of HRS images
NASA Astrophysics Data System (ADS)
Rezaee, M.; Mirikharaji, Z.; Zhang, Y.
2016-04-01
Automatic detection of targets with complicated shapes in high spatial resolution images is an ongoing challenge in remote sensing image processing, because most methods use spectral or texture information, which is not sufficient for detecting complex shapes. In this paper, a new detection framework based on Spatial Pyramid Matching (SPM) and Locality-constrained Linear Coding (LLC) is proposed to solve this problem and exemplified using airplane shapes. The process starts by partitioning the image into sub-regions and generating a unique histogram of local features for each sub-region. Then, linear Support Vector Machines (SVMs) are used to detect objects based on a pyramid-matching kernel, which analyses the descriptors inside patches at different resolutions. To generate the histogram, a point feature detector (e.g. SIFT) is first applied to the patches, and then a quantization process is used to select local features. In this step, the k-means method is used in conjunction with the locality-constrained linear coding method. The LLC forces the coefficient matrix in the quantization process to be both local and sparse. As a result, the speed of the method improves by around 24 times compared with using sparse coding for quantization. Quantitative analysis also shows improvement over using k-means alone, while the accuracy is similar to that obtained with sparse coding. Rotation and shift of the desired object have no effect on the obtained results. The speed and accuracy of this algorithm on high spatial resolution images make it suitable for real-world applications.
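For illustration, the closed-form approximated LLC step, selecting the k nearest codewords and solving a small constrained least-squares, can be sketched as follows. The codebook, descriptor, and parameter values are random placeholders, not the paper's trained vocabulary:

```python
import numpy as np

def llc_code(x, codebook, k=5, lam=1e-4):
    """Approximated locality-constrained linear coding: encode
    descriptor x using only its k nearest codebook entries."""
    # Select the k nearest codewords (the locality constraint).
    dists = np.linalg.norm(codebook - x, axis=1)
    idx = np.argsort(dists)[:k]
    B = codebook[idx]                       # k x d local base
    # Closed form of min ||x - B^T c||^2  s.t.  sum(c) = 1.
    C = (B - x) @ (B - x).T                 # local covariance
    C += lam * np.trace(C) * np.eye(k)      # regularization
    c = np.linalg.solve(C, np.ones(k))
    c /= c.sum()                            # enforce sum-to-one
    code = np.zeros(len(codebook))
    code[idx] = c                           # sparse, local code
    return code

rng = np.random.default_rng(1)
codebook = rng.random((64, 16))             # 64 codewords, 16-dim features
x = rng.random(16)
z = llc_code(x, codebook)
print(np.count_nonzero(z), round(z.sum(), 6))
```

Because only a small k-by-k system is solved per descriptor, this step is far cheaper than the L1-regularized optimization of sparse coding, which is the source of the speed-up the abstract reports.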
Coded aperture imaging with self-supporting uniformly redundant arrays. [Patent application
Fenimore, E.E.
1980-09-26
A self-supporting uniformly redundant array pattern for coded aperture imaging. The invention utilizes holes that are an integer factor smaller in each direction than the holes in conventional URA patterns. A balanced correlation function is generated in which holes are represented by 1's, non-holes by -1's, and supporting area by 0's. The self-supporting array can be used for low energy applications where substrates would greatly reduce throughput.
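A one-dimensional sketch of the balanced-correlation idea (holes as 1's, decoded with a +1/-1 array) can be built from a quadratic-residue sequence, which shares the URA's flat-sidelobe correlation property; the length p = 11 is an arbitrary illustrative choice, and the 0-valued support pixels of the self-supporting pattern are omitted for simplicity:

```python
import numpy as np

# Length-p quadratic-residue aperture (p prime): hole where the index
# is a nonzero quadratic residue mod p, otherwise opaque.
p = 11
residues = {(k * k) % p for k in range(1, p)}
aperture = np.array([1 if i in residues else 0 for i in range(p)])

# Balanced decoding array: holes -> +1, non-holes -> -1.
decoder = np.where(aperture == 1, 1, -1)

# Periodic cross-correlation between aperture and decoder.
corr = np.array([np.sum(aperture * np.roll(decoder, shift))
                 for shift in range(p)])
print(corr)  # sharp peak at shift 0, flat sidelobes elsewhere
```

The delta-like correlation is what lets a recorded shadowgram be decoded back into a point source without the sidelobe artifacts of a random pinhole array.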
LineCast: line-based distributed coding and transmission for broadcasting satellite images.
Wu, Feng; Peng, Xiulian; Xu, Jizheng
2014-03-01
In this paper, we propose a novel coding and transmission scheme, called LineCast, for broadcasting satellite images to a large number of receivers. The proposed LineCast matches perfectly with the line scanning cameras that are widely adopted in orbit satellites to capture high-resolution images. On the sender side, each captured line is immediately compressed by a transform-domain scalar modulo quantization. Without syndrome coding, the transmission power is directly allocated to quantized coefficients by scaling the coefficients according to their distributions. Finally, the scaled coefficients are transmitted over a dense constellation. This line-based distributed scheme features low delay, low memory cost, and low complexity. On the receiver side, our proposed line-based prediction is used to generate side information from previously decoded lines, which fully utilizes the correlation among lines. The quantized coefficients are decoded by the linear least square estimator from the received data. The image line is then reconstructed by the scalar modulo dequantization using the generated side information. Since there is neither syndrome coding nor channel coding, the proposed LineCast can make a large number of receivers reach the qualities matching their channel conditions. Our theoretical analysis shows that the proposed LineCast can achieve Shannon's optimum performance by using a high-dimensional modulo-lattice quantization. Experiments on satellite images demonstrate that it achieves up to 1.9-dB gain over the state-of-the-art 2D broadcasting scheme and a gain of more than 5 dB over JPEG 2000 with forward error correction. PMID:24474371
NASA Astrophysics Data System (ADS)
Gross, Kevin A.; Kubala, Kenny
2007-04-01
In a traditional optical system, imaging performance is maximized at a single point in the operational space. This characteristic maximizes the probability of detection if the object is on axis, at the designed conjugate, at the designed operational temperature, and if the system components are manufactured without error in form and alignment. Due to the many factors that influence the system's image quality, the probability of detection will decrease away from this peak value. An infrared imaging system is presented that statistically creates a higher probability of detection over the complete operational space for the Hotelling observer. The system is enabled through the use of wavefront coding, a computational imaging technology in which optics, mechanics, detection and signal processing are combined to enable LWIR imaging systems to be realized with detection-task performance that is difficult or impossible to obtain in the optical domain alone. The basic principles of statistical decision theory are presented along with a specific example of how wavefront coding technology can enable improved performance and reduced sensitivity to some of the fundamental constraints inherent in LWIR systems.
JPEG image transmission over mobile network with an efficient channel coding and interleaving
NASA Astrophysics Data System (ADS)
El-Bendary, Mohsen A. M. Mohamed; Abou El-Azm, Atef E.; El-Fishawy, Nawal A.; Al-Hosarey, Farid Shawki M.; Eltokhy, Mostafa A. R.; Abd El-Samie, Fathi E.; Kazemian, H. B.
2012-11-01
This article studies the improvement of coloured JPEG image transmission over a mobile wireless personal area network through Bluetooth networks. It uses many types of enhanced data rate and asynchronous connectionless packets. It presents a proposed chaotic interleaving technique for improving the transmission of coloured images over burst-error environments by merging it with an error control scheme. The computational complexity of the different error control schemes used is considered. A comparison study between different image-transmission scenarios is carried out to choose an effective technique. The simulation experiments are carried out over a correlated fading channel using the widely accepted Jakes' model. Our experiments reveal that the proposed chaotic interleaving technique enhances the quality of the received coloured image. Our simulation results show that convolutional codes with longer constraint lengths are effective if their complexity is ignored. They also reveal that the standard error control scheme of old Bluetooth versions is ineffective for coloured image transmission over a mobile Bluetooth network. Finally, the proposed scenarios of the standard error control scheme with the chaotic interleaver perform better than the convolutional codes while reducing complexity.
Fractal analysis of bone structure with applications to osteoporosis and microgravity effects
Acharya, R.S.; Swarnarkar, V.; Krishnamurthy, R.; Hausman, E.; LeBlanc, A.; Lin, C.; Shackelford, L.
1995-12-31
The authors characterize the trabecular structure with the aid of the fractal dimension. They use alternating sequential filters (ASF) to generate a nonlinear pyramid for fractal dimension computations. The authors do not make any assumptions about the statistical distributions of the underlying fractal bone structure; the only assumption of the scheme is the rudimentary definition of self-similarity. This gives them the freedom of not being constrained by statistical estimation schemes. With mathematical simulations, the authors have shown that the ASF methods outperform other existing methods for fractal dimension estimation. They have shown that the fractal dimension remains the same when computed from both the X-ray images and the MRI images of the patella. They have shown that the fractal dimension of osteoporotic subjects is lower than that of normal subjects. In animal models, the authors have shown that the fractal dimension of osteoporotic rats was lower than that of normal rats. In a 17-week bedrest study, they have shown that the subjects' prebedrest fractal dimension is higher than the postbedrest fractal dimension.
NASA Astrophysics Data System (ADS)
Kuvychko, Igor
2000-05-01
Human vision involves higher-level knowledge and top-down processes for resolving ambiguity and uncertainty in real images. Even very advanced low-level image processing cannot give any advantage without a highly effective knowledge-representation and reasoning system, which is the core of the image understanding problem. Methods of image analysis and coding are directly based on the methods of knowledge representation and processing. This article suggests such models and mechanisms in the form of a Spatial Turing Machine that, in place of symbols and tapes, works with hierarchical networks represented dually as discrete and continuous structures. Such networks are able to perform both graph and diagrammatic operations, which are the basis of intelligence. Computational intelligence methods transform continuous image information into discrete structures, making it available for analysis. The article shows that symbols naturally emerge in such networks, giving the opportunity to use symbolic operations. Such a framework naturally combines methods of machine learning, classification, and analogy with induction, deduction, and other methods of higher-level reasoning. Based on these principles, the image understanding system provides more flexible ways of handling ambiguity and uncertainty in real images and does not require supercomputers. This opens the way to new technologies in computer vision and image databases.
Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa
2009-01-01
Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered-subset expectation maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769
NASA Astrophysics Data System (ADS)
Qin, Yi; Wang, Hongjuan; Wang, Zhipeng; Gong, Qiong; Wang, Danchen
2016-09-01
In optical interference-based encryption (IBE) schemes, the currently available methods have to employ iterative algorithms in order to encrypt two images and retrieve cross-talk-free decrypted images. In this paper, we show that this goal can be achieved via an analytical process if one of the two images is a QR code. For decryption, the QR code is decrypted in the conventional architecture and the decryption has a noisy appearance. Nevertheless, the robustness of the QR code against noise enables the accurate acquisition of its content from the noisy retrieval, as a result of which the primary QR code can be exactly regenerated. Thereafter, a novel optical architecture is proposed to recover the grayscale image with the aid of the QR code. In addition, the proposal totally eliminates the silhouette problem existing in previous IBE schemes, and its effectiveness and feasibility have been demonstrated by numerical simulations.
Fractal Segmentation and Clustering Analysis for Seismic Time Slices
NASA Astrophysics Data System (ADS)
Ronquillo, G.; Oleschko, K.; Korvin, G.; Arizabalo, R. D.
2002-05-01
Fractal analysis has become part of the standard approach for quantifying texture on gray-tone or colored images. In this research we introduce a multi-stage fractal procedure to segment, classify and measure the clustering patterns on seismic time slices from a 3-D seismic survey. Five fractal classifiers (c1)-(c5) were designed to yield standardized, unbiased and precise measures of the clustering of seismic signals. The classifiers were tested on seismic time slices from the AKAL field, Cantarell Oil Complex, Mexico. The generalized lacunarity (c1), fractal signature (c2), heterogeneity (c3), rugosity of boundaries (c4) and continuity resp. tortuosity (c5) of the clusters are shown to be efficient measures of the time-space variability of seismic signals. The Local Fractal Analysis (LFA) of time slices has proved to be a powerful edge detection filter to detect and enhance linear features, like faults or buried meandering rivers. The local fractal dimensions of the time slices were also compared with the self-affinity dimensions of the corresponding parts of porosity-logs. It is speculated that the spectral dimension of the negative-amplitude parts of the time-slice yields a measure of connectivity between the formation's high-porosity zones, and correlates with overall permeability.
SU-C-201-03: Coded Aperture Gamma-Ray Imaging Using Pixelated Semiconductor Detectors
Joshi, S; Kaye, W; Jaworski, J; He, Z
2015-06-15
Purpose: Improved localization of gamma-ray emissions from radiotracers is essential to the progress of nuclear medicine. Polaris is a portable, room-temperature-operated gamma-ray imaging spectrometer composed of two 3×3 arrays of thick CdZnTe (CZT) detectors, which detect gammas between 30keV and 3MeV with an energy resolution of <1% FWHM at 662keV. Compton imaging is used to map out source distributions in 4-pi space; however, it is only effective above 300keV, where Compton scatter is dominant. This work extends imaging to photoelectric energies (<300keV) using coded aperture imaging (CAI), which is essential for localization of Tc-99m (140keV). Methods: CAI, similar to the pinhole camera, relies on an attenuating mask, with open/closed elements, placed between the source and position-sensitive detectors. Partial attenuation of the source results in a “shadow” or count distribution that closely matches a portion of the mask pattern. Ideally, each source direction corresponds to a unique count distribution. Using backprojection reconstruction, the source direction is determined within the field of view. Knowledge of the 3D position of interaction results in improved image quality. Results: Using a single array of detectors, a coded aperture mask, and multiple Co-57 (122keV) point sources, image reconstruction is performed in real time, on an event-by-event basis, resulting in images with an angular resolution of ∼6 degrees. Although material nonuniformities contribute to image degradation, the superposition of images from individual detectors results in improved SNR. CAI was integrated with Compton imaging for a seamless transition between energy regimes. Conclusion: For the first time, CAI has been applied to thick, 3D position-sensitive CZT detectors. Real-time, combined CAI and Compton imaging is performed using two 3×3 detector arrays, resulting in a source distribution in space. This system has been commercialized by H3D, Inc. and is being acquired for
Electromagnetism on anisotropic fractal media
NASA Astrophysics Data System (ADS)
Ostoja-Starzewski, Martin
2013-04-01
Basic equations of electromagnetic fields in anisotropic fractal media are obtained using a dimensional regularization approach. First, a formulation based on product measures is shown to satisfy the four basic identities of the vector calculus. This allows a generalization of the Green-Gauss and Stokes theorems as well as the charge conservation equation on anisotropic fractals. Then, pursuing the conceptual approach, we derive the Faraday and Ampère laws for such fractal media, which, along with two auxiliary null-divergence conditions, effectively give the modified Maxwell equations. Proceeding on a separate track, we employ a variational principle for electromagnetic fields, appropriately adapted to fractal media, so as to independently derive the same forms of these two laws. It is next found that the parabolic (for a conducting medium) and the hyperbolic (for a dielectric medium) equations involve modified gradient operators, while the Poynting vector has the same form as in the non-fractal case. Finally, Maxwell's electromagnetic stress tensor is reformulated for fractal systems. In all the cases, the derived equations for fractal media depend explicitly on fractal dimensions in three different directions and reduce to conventional forms for continuous media with Euclidean geometries upon setting each of these dimensions equal to unity.
NASA Astrophysics Data System (ADS)
Brothers, Harlan J.
2015-03-01
Benoit Mandelbrot always had a strong feeling that music could be viewed from a fractal perspective. However, without our eyes to guide us, how do we gain this perspective? Here we discuss precisely what it means to say that a piece of music is fractal.
Relationship between Fractal Dimension and Agreeability of Facial Imagery
NASA Astrophysics Data System (ADS)
Oyama-Higa, Mayumi; Miao, Tiejun; Ito, Tasuo
2007-11-01
Why do people feel happier with, or equivalently empathize more with, images of smiling faces than with expressionless ones? To understand the essential factors underlying such imagery in relation to these feelings, we conducted an experiment in which 84 subjects were asked to estimate the degree of agreeability of expressionless and smiling facial images taken from 23 young persons of whom the subjects had no prior knowledge. Images were presented one at a time to each subject, who was asked to rank agreeability on a scale from 1 to 10. Fractal dimensions of the facial images were obtained in order to characterize the complexity of the imagery, using two types of fractal analysis, namely planar and cubic analysis methods. The results show a significant difference in fractal dimension between expressionless faces and smiling ones. Furthermore, we found a clear correlation between the degree of agreeability and the fractal dimensions, implying that the fractal dimension, optically obtained in relation to the complexity of the image information, is useful for characterizing the psychological processes of cognition and awareness.
Tong, Tong; Wolz, Robin; Coupé, Pierrick; Hajnal, Joseph V; Rueckert, Daniel
2013-08-01
We propose a novel method for the automatic segmentation of brain MRI images by using discriminative dictionary learning and sparse coding techniques. In the proposed method, dictionaries and classifiers are learned simultaneously from a set of brain atlases, which can then be used for the reconstruction and segmentation of an unseen target image. The proposed segmentation strategy is based on image reconstruction, which is in contrast to most existing atlas-based labeling approaches that rely on comparing image similarities between atlases and target images. In addition, we propose a Fixed Discriminative Dictionary Learning for Segmentation (F-DDLS) strategy, which can learn dictionaries offline and perform segmentations online, enabling a significant speed-up in the segmentation stage. The proposed method has been evaluated for the hippocampus segmentation of 80 healthy ICBM subjects and 202 ADNI images. The robustness of the proposed method, especially of our F-DDLS strategy, was validated by training and testing on different subject groups in the ADNI database. The influence of different parameters was studied and the performance of the proposed method was also compared with that of the nonlocal patch-based approach. The proposed method achieved a median Dice coefficient of 0.879 on 202 ADNI images and 0.890 on 80 ICBM subjects, which is competitive compared with state-of-the-art methods. PMID:23523774
NASA Astrophysics Data System (ADS)
Sung, Hsin-Yueh; Chen, Po-Chang; Chang, Chuan-Chung; Chang, Chir-Weei; Yang, Sidney S.; Chang, Horng
2011-01-01
This paper presents a mobile phone imaging module with extended depth of focus (EDoF) achieved by axial irradiance equalization (AIE) phase coding. From radiation energy transfer along the optical axis with constant irradiance, the focal depth enhancement solution is acquired. We introduce the axial irradiance equalization phase coding to design a two-element 2-megapixel mobile phone lens that trades off focus-like aberrations such as field curvature, astigmatism, and longitudinal chromatic defocus. The design produces modulation transfer functions (MTF) and phase transfer functions (PTF) with substantially similar characteristics at different field and defocus positions within the Nyquist passband. The measurement results are also shown and compared with the design results. Next, for the EDoF mobile phone camera imaging system, we present a digital decoding design method and calculate a minimum mean square error (MMSE) filter. The filter is then applied to correct the substantially similar blurred image. Finally, the blurred and de-blurred images are demonstrated.
Embedded morphological dilation coding for 2D and 3D images
NASA Astrophysics Data System (ADS)
Lazzaroni, Fabio; Signoroni, Alberto; Leonardi, Riccardo
2002-01-01
Current wavelet-based image coders obtain high performance thanks to the identification and exploitation of the statistical properties of natural images in the transformed domain. Zerotree-based algorithms, such as Embedded Zerotree Wavelets (EZW) and Set Partitioning In Hierarchical Trees (SPIHT), offer high rate-distortion (RD) coding performance and low computational complexity by exploiting statistical dependencies among insignificant coefficients on hierarchical subband structures. Another possible approach tries to predict the clusters of significant coefficients by means of some form of morphological dilation. An example of a morphology-based coder is the Significance-Linked Connected Component Analysis (SLCCA), which has shown performance comparable to the zerotree-based coders but is not embedded. A new embedded bit-plane coder is proposed here based on morphological dilation of significant coefficients and context-based arithmetic coding. The algorithm is able to exploit both intra-band and inter-band statistical dependencies among significant wavelet coefficients. Moreover, the same approach is used for both two- and three-dimensional wavelet-based image compression. Finally, the algorithms are tested on 2D images and on a medical volume by comparing the RD results to those obtained with state-of-the-art wavelet-based coders.
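The dilation-based significance prediction can be illustrated in a few lines: coefficients already found significant at the current bit-plane are morphologically dilated, and the dilated neighborhood is scanned first, because significant wavelet coefficients tend to cluster. The tiny coefficient array and threshold below are made-up values for illustration only:

```python
import numpy as np
from scipy.ndimage import binary_dilation

# A toy 8x8 "subband" with one small cluster of large coefficients.
coeffs = np.zeros((8, 8))
coeffs[3, 3] = 9.0
coeffs[3, 4] = 7.0
threshold = 4.0          # current bit-plane threshold

# Significance map at this bit-plane.
significant = np.abs(coeffs) >= threshold

# Dilate the map (default 4-connected structuring element); the new
# ring of candidate positions is what the coder scans first.
candidates = binary_dilation(significant) & ~significant
print(significant.sum(), candidates.sum())
```

A real coder would iterate this over bit-planes, feeding each visited position to a context-based arithmetic coder; the sketch only shows how dilation turns the current significance map into a prioritized scan set.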
Application of fractal dimensions to study the structure of flocs formed in lime softening process.
Vahedi, Arman; Gorczyca, Beata
2011-01-01
The use of fractal dimensions to study the internal structure and settling of flocs formed in the lime softening process was investigated. Fractal dimensions of flocs were measured directly on floc images and indirectly from their settling velocity. An optical microscope with a motorized stage was used to measure the fractal dimensions of lime softening flocs directly on their images in 2D and 3D space. The directly determined fractal dimensions of the lime softening flocs were 1.11-1.25 for the floc boundary, 1.82-1.99 for the cross-sectional area, and 2.6-2.99 for the floc volume. The fractal dimension determined indirectly from the floc settling rates was 1.87, which differed from the 3D fractal dimension determined directly on floc images. This discrepancy is due to the following incorrect assumptions used for fractal dimensions determined from floc settling rates: a linear relationship between the squared settling velocity and floc size (Stokes' law), a Euclidean relationship between floc size and volume, constant fractal dimensions, and a single primary particle size describing the entire population of flocs. A floc settling model incorporating variable floc fractal dimensions as well as variable primary particle size was found to describe the settling velocity of large (>50 μm) lime softening flocs better than Stokes' law. Settling velocities of smaller flocs (<50 μm) could still be quite well predicted by Stokes' law. The variation of fractal dimensions with lime floc size in this study indicated that two mechanisms are involved in the formation of these flocs: cluster-cluster aggregation for small flocs (<50 μm) and diffusion-limited aggregation for large flocs (>50 μm). Therefore, the relationship between floc fractal dimension and floc size appears to be determined by the floc formation mechanisms. PMID:20937512
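The kind of settling model discussed above can be sketched by folding a variable fractal dimension Df into Stokes' law: for a fractal floc the effective excess density scales as (d/d0)^(Df-3), giving v proportional to d0^(3-Df) d^(Df-1). The parameter values below are illustrative assumptions, not the paper's fitted values:

```python
# Hedged sketch of a fractal-floc settling velocity in the Stokes
# regime: v = g*(rho_p - rho_w) * d0**(3 - Df) * d**(Df - 1) / (18*mu).
# d0 is the primary particle size; all values are illustrative.
def settling_velocity(d, Df, d0=1e-6, rho_p=2500.0, rho_w=1000.0,
                      g=9.81, mu=1e-3):
    """Settling velocity (m/s) of a floc of size d with fractal
    dimension Df; Df = 3 recovers the classical Stokes law."""
    return g * (rho_p - rho_w) * d0**(3 - Df) * d**(Df - 1) / (18 * mu)

d = 30e-6  # a 30 um floc
print(settling_velocity(d, Df=3.0))  # solid sphere, classical Stokes
print(settling_velocity(d, Df=2.0))  # porous floc settles more slowly
```

Because lower Df means a more porous, less dense aggregate, the model predicts slower settling than Stokes' law for the same measured size, which is the direction of the discrepancy the study reports.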
Biconjugate gradient stabilized method in image deconvolution of a wavefront coding system
NASA Astrophysics Data System (ADS)
Liu, Peng; Liu, Qin-xiao; Zhao, Ting-yu; Chen, Yan-ping; Yu, Fei-hong
2013-04-01
The point spread function (PSF) is non-rotationally symmetric for a wavefront coding (WFC) system with a cubic phase mask (CPM). Antireflective boundary conditions (BCs) are used to eliminate the ringing effect at the border and vibration at the edge of the image. The Kronecker product approximation is used to reduce the computational cost. The image-formation process of the WFC system is transformed into a matrix equation. To save storage space, the biconjugate gradient (Bi-CG) and biconjugate gradient stabilized (Bi-CGSTAB) methods are used to solve the asymmetric matrix equation; these are typical Krylov-subspace iteration algorithms based on the two-sided Lanczos process. Simulation and experimental results illustrate the efficiency of the proposed algorithm for image deconvolution. The result based on the Bi-CGSTAB method is smoother than that of the classic Wiener filter, while preserving more details than the truncated singular value decomposition (TSVD) method.
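As a minimal sketch of the solver choice (not the paper's actual WFC system matrix), Bi-CGSTAB from SciPy can be applied directly to a small asymmetric sparse system of the kind that arises from the matrix form of the image-formation equation:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# Toy stand-in for the WFC image-formation matrix: a sparse,
# asymmetric tridiagonal blur operator A, so b = A @ x models
# "blurred = A @ sharp".
n = 200
A = diags([0.2, 1.0, 0.5], offsets=[-1, 0, 1], shape=(n, n), format='csr')

x_true = np.random.default_rng(0).random(n)   # the "sharp" signal
b = A @ x_true                                # simulated blurred data

# Bi-CGSTAB handles the asymmetric system without forming A^T A.
x_est, info = bicgstab(A, b)
print(info == 0, np.linalg.norm(A @ x_est - b) < 1e-3)
```

This is the practical appeal of the Krylov approach noted in the abstract: only matrix-vector products with A are needed, so the full deconvolution matrix never has to be stored densely.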
The laser linewidth effect on the image quality of phase coded synthetic aperture ladar
NASA Astrophysics Data System (ADS)
Cai, Guangyu; Hou, Peipei; Ma, Xiaoping; Sun, Jianfeng; Zhang, Ning; Li, Guangyuan; Zhang, Guo; Liu, Liren
2015-12-01
The phase coded (PC) waveform in synthetic aperture ladar (SAL) outperforms the linear frequency modulated (LFM) signal: it offers lower side lobes and shorter pulse duration, and it makes rigid control of the chirp starting point in every pulse unnecessary. Drawing on radar PC waveforms and strip-map SAL, the backscattered signal of a point target in PC SAL was formulated, and a two-dimensional matched filtering algorithm was introduced to focus a point image. As an inherent property of lasers, linewidth is always detrimental to coherent ladar imaging. Using a widely adopted laser linewidth model, the effect of laser linewidth on SAL image quality was theoretically analyzed and examined via Monte Carlo simulation. The research gives a clear view of how to select linewidth parameters in future PC SAL systems.
Photoplus: auxiliary information for printed images based on distributed source coding
NASA Astrophysics Data System (ADS)
Samadani, Ramin; Mukherjee, Debargha
2008-01-01
A printed photograph is difficult to reuse because the digital information that generated the print may no longer be available. This paper describes a mechanism for approximating the original digital image by combining a scan of the printed photograph with small amounts of digital auxiliary information kept together with the print. The auxiliary information consists of a small amount of digital data to enable accurate registration and color-reproduction, followed by a larger amount of digital data to recover residual errors and lost frequencies by distributed Wyner-Ziv coding techniques. Approximating the original digital image enables many uses, including making good quality reprints from the original print, even when they are faded many years later. In essence, the print itself becomes the currency for archiving and repurposing digital images, without requiring computer infrastructure.
Object-oriented image coding scheme based on DWT and Markov random field
NASA Astrophysics Data System (ADS)
Zheng, Lei; Wu, Hsien-Hsun S.; Liu, Jyh-Charn S.; Chan, Andrew K.
1998-12-01
In this paper, we introduce an object-oriented image coding algorithm to differentiate regions of interest (ROI) in visual communications. Our scheme is motivated by the fact that in visual communications, image contents (objects) are not equally important. For a given network bandwidth budget, one should give the highest transmission priority to the most interesting object, and serve the remaining ones at lower priorities. We propose a DWT-based multiresolution Markov random field technique to segment image objects according to their textures. We show that this technique can effectively distinguish visual objects and assign them different priorities. This scheme can be integrated with our ROI compression coder, the Generalized Self-Similarity Trees codec, for networking applications.
Fractal projection pursuit classification model applied to geochemical survey data
NASA Astrophysics Data System (ADS)
Xiao, Fan; Chen, Jianguo
2012-08-01
A new hybrid exploratory data analysis method, the fractal projection pursuit classification (FPPC) model, is developed on the basis of projection pursuit (PP) and fractal models. In this model, objective classification results are obtained by applying a projection index based on the number-size fractal model. The real-coded acceleration genetic algorithm (RAGA) is used to optimize the projection index and establish the optimum projection direction in the model. Stream sediment geochemical data from the Gejiu Mining District, Yunnan Province, China, were chosen for a case study to demonstrate data processing and analysis using FPPC. The results show that the anomalies are associated with known mineral deposits in the eastern part of the Gejiu District, and correlated with faults and granite in the western part of the study area. It is demonstrated that FPPC can be a powerful tool for multi-factor classification analysis and provides an effective approach to identifying anomalies for mineral exploration.
Lashkari, Bahman; Zhang, Kaicheng; Mandelis, Andreas
2016-06-01
Mismatched coded excitation (CE) can be employed to increase the frame rate of synthetic aperture ultrasound imaging. The high autocorrelation and low cross-correlation (CC) of the transmitted signals enable the identification and separation of signal sources at the receiver. Thus, the method provides B-mode imaging with simultaneous transmission from several elements and the capability of spatially decoding the transmitted signals, which makes the imaging process equivalent to consecutive transmissions. Each transmission generates its own image, and the combination of all the images results in an image with high lateral resolution. In this paper, we introduce two different methods for generating multiple mismatched CEs with an identical frequency bandwidth and code length. The proposed families of mismatched CEs are therefore able to generate similar resolutions and signal-to-noise ratios. The application of these methods is demonstrated experimentally. Furthermore, several techniques are suggested that can reduce the CC between the mismatched codes. PMID:27101603
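The autocorrelation/cross-correlation requirement can be illustrated numerically. The sketch below uses random bipolar codes rather than the paper's mismatched-CE construction; the code length of 64 and the random draw are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
# Two illustrative bipolar (+1/-1) codes; the paper designs code families
# with matched bandwidth and length, here we simply draw random sequences.
code_a = rng.choice([-1.0, 1.0], size=N)
code_b = rng.choice([-1.0, 1.0], size=N)

auto = np.correlate(code_a, code_a, mode="full")
cross = np.correlate(code_a, code_b, mode="full")

peak = auto[N - 1]              # zero-lag autocorrelation equals the code length
cc_max = np.max(np.abs(cross))  # stays strictly below the autocorrelation peak
print(peak, cc_max)
```

The gap between `peak` and `cc_max` is what lets the receiver separate simultaneous transmissions; engineered mismatched codes push `cc_max` lower than random codes do.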
NASA Astrophysics Data System (ADS)
Cai, Jianfei; Chen, Chang W.
2000-04-01
In this paper, we propose a fixed-length robust joint source-channel coding (JSCC) scheme for image transmission over noisy channels. Three channel models are studied: binary symmetric channels (BSC) and additive white Gaussian noise (AWGN) channels for memoryless channels, and Gilbert-Elliott channels (GEC) for bursty channels. We derive an explicit operational rate-distortion (R-D) function, which represents an end-to-end error measurement that includes errors due to both quantization and channel noise. In particular, we are able to incorporate the channel transition probability and channel bit error rate into the R-D function in the case of bursty channels. With the operational R-D function, bits are allocated not only among different subsources, but also between source coding and channel coding so that, under a fixed transmission rate, an optimum tradeoff between source coding accuracy and channel error protection can be achieved. This JSCC scheme is also integrated with allpass filtering source shaping to further improve robustness against channel errors. Experimental results show that the proposed scheme achieves not only high PSNR performance, but also excellent perceptual quality. Compared with state-of-the-art JSCC schemes, the proposed scheme outperforms most of them, especially when channel mismatch occurs.
A QR Code Based Zero-Watermarking Scheme for Authentication of Medical Images in Teleradiology Cloud
Seenivasagam, V.; Velumani, R.
2013-01-01
Healthcare institutions adopt cloud-based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT)-Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to patient data encoded into a Quick Response (QR) code serve as the watermark. In the proposed scheme, the medical image is not subjected to degradations due to watermarking. Patient authentication and authorized access to patient data are realized by combining a Secret Share with the Master Share constructed from invariant features of the medical image. Hu's invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with Checkmark software and is found to be robust to both geometric and non-geometric attacks. PMID:23970943
NASA Astrophysics Data System (ADS)
Doyle, Michael D.; Rabin, Harold; Suri, Jasjit S.
1991-04-01
A technique is proposed which uses fractal analysis for the non-traumatic and non-invasive quantification of trabecular bone density in the mandible using standard dental radiographs. Binary images of trabecular bone patterns are derived from digitized radiographic images. Fractal analysis is then used to calculate the Hausdorff dimension (D) of the binary image patterns. Variations in D calculated with this method can be correlated with known cases of systemic osteoporosis to establish normal and abnormal ranges for the value of D.
Self-organized one-atom thick fractal nanoclusters via field-induced atomic transport
NASA Astrophysics Data System (ADS)
Batabyal, R.; Mahato, J. C.; Das, Debolina; Roy, Anupam; Dev, B. N.
2013-08-01
We report on the growth of one-atom-thick fractal nanostructures of Ag on flat-top Ag islands grown on Si(111). Upon application of a voltage pulse at an edge of a flat-top Ag island from a scanning tunneling microscope tip, Ag atoms climb from the edge onto the top of the island. These atoms aggregate to form precisely one-atom-thick nanostructures of fractal nature. The fractal (Hausdorff) dimension, DH = 1.75 ± 0.05, of this nanostructure has been determined by analyzing the morphology of the growing nanocluster, imaged by scanning tunneling microscopy following the application of the voltage pulse. This value of the fractal dimension is consistent with the diffusion limited aggregation (DLA) model. We also determined two other fractal dimensions based on the perimeter-radius-of-gyration (DP) and perimeter-area (D'P) relationships. Simulations of the DLA process with varying sticking probability lead to different cluster morphologies [P. Meakin, Phys. Rev. A 27, 1495 (1983)]; however, the value of DH is insensitive to this difference in morphology. We suggest that the morphology can be characterized by additional fractal dimension(s) DP and/or D'P, besides DH. We also show that within the DLA process DP = DH [C. Amitrano et al., Phys. Rev. A 40, 1713 (1989)] is only a special case; in general, DP and DH can be unequal. Characterization of fractal morphology is important for fractals in nanoelectronics, as the fractal morphology would determine the electron transport behavior.
Design and application of quick computation program on fractal dimension of land-use types
NASA Astrophysics Data System (ADS)
Mei, Xin; Wang, Quanfang; Wang, Qian; Liu, Junyi
2009-07-01
The fractal dimension of land-use types is often calculated using raster data as the raw data, but much spatial data is in fact stored as vector data. If these data are converted to images to calculate the fractal dimension, pixels with inaccurate grey values may result from the "GRID" structure of raster data, and the precision of a fractal dimension calculated on raster data is closely related to the pixel and grid image size. In view of this, a computation program of the fractal dimension for 2D vector data based on the Windows platform has been designed using Visual C#. This program has been successfully applied to land-use data of the middle Qinling Mountains and the southeast of Hubei Province in China in the 1990s. The results show that the program is a convenient, reliable and precise method of computing the fractal dimension for 2D vector data.
Digital camcorder image stabilizer based on gray-coded bit-plane block matching
NASA Astrophysics Data System (ADS)
Yeh, YeouMin; Wang, Sheng-Jyh; Chiang, Hwang-Cheng
2000-06-01
In this paper, we propose an efficient image stabilization algorithm for digital camcorders. The approach is based on gray-coded bit-plane block matching and eliminates the unpleasant effects caused by involuntary movements of the camera holder. To improve moving object detection and stabilization performance, a frame is divided into several blocks for localized motion estimation. Based on our architecture, temporal correlation is used at the motion unit to efficiently detect moving objects and panning conditions. To compensate for camera rotation, energy minimization is also applied to calculate the coefficients of an affine transform without many complicated computations. Having considered both programming flexibility and hardware efficiency, the motion decision and motion compensation units are coded in a microprocessor that interconnects with the stabilization hardware. The proposed stabilizer is now implemented on an FPGA 10K 100.
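The core matching step can be sketched as follows: each 8-bit pixel is gray-coded, one bit plane is extracted, and block matching minimizes the Hamming distance (XOR plus popcount). The frame size, the choice of bit plane 6, and the simulated jitter below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def gray_bitplane(img, k):
    # Gray-code each 8-bit pixel, then extract bit plane k
    gray = img ^ (img >> 1)
    return ((gray >> k) & 1).astype(np.uint8)

def match_block(ref_plane, cand_plane, top, left, size, search):
    # Exhaustive search minimizing the Hamming distance (XOR + popcount)
    ref = ref_plane[top:top + size, left:left + size]
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cand_plane[top + dy:top + dy + size,
                              left + dx:left + dx + size]
            cost = int(np.count_nonzero(ref ^ cand))
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv

rng = np.random.default_rng(1)
frame1 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
frame2 = np.roll(frame1, (2, -1), axis=(0, 1))  # simulated camera jitter
p1 = gray_bitplane(frame1, 6)
p2 = gray_bitplane(frame2, 6)
print(match_block(p1, p2, 20, 20, 16, 4))
```

Matching on a single gray-coded bit plane replaces sum-of-absolute-differences arithmetic with cheap bitwise operations, which is why the approach suits low-cost stabilization hardware.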
Inverted fractal analysis of TiOx thin layers grown by inverse pulsed laser deposition
NASA Astrophysics Data System (ADS)
Égerházi, L.; Smausz, T.; Bari, F.
2013-08-01
Inverted fractal analysis (IFA), a method developed for fractal analysis of scanning electron microscopy images of cauliflower-like thin films is presented through the example of layers grown by inverse pulsed laser deposition (IPLD). IFA uses the integrated fractal analysis module (FracLac) of the image processing software ImageJ, and an objective thresholding routine that preserves the characteristic features of the images, independently of their brightness and contrast. IFA revealed fD = 1.83 ± 0.01 for TiOx layers grown at 5-50 Pa background pressures. For a series of images, this result was verified by evaluating the scaling of the number of still resolved features on the film, counted manually. The value of fD not only confirms the fractal structure of TiOx IPLD thin films, but also suggests that the aggregation of plasma species in the gas atmosphere may have only limited contribution to the deposition.
Fractal texture analysis of the healing process after bone loss.
Borowska, Marta; Szarmach, Janusz; Oczeretko, Edward
2015-12-01
We radiologically assessed the treatment effectiveness of the guided bone regeneration (GBR) method in postresectal and postcystal bone loss cases observed for one year. A group of 25 patients (17 females and 8 males) who underwent root resection with cystectomy was evaluated. A combination therapy of intraosseous deficits was used, consisting of bone augmentation with xenogenic material, covering regenerative membranes, and tight wound closure. The bone regeneration process was estimated by comparing images taken on the day of the surgery and 12 months later with a Kodak RVG 6100 digital radiography set. The interpretation of a radiovisiographic image depends on the evaluation ability of the eye looking at it, which leaves a large margin of uncertainty, so several texture analysis techniques were applied sequentially to the radiographic images. For each method, the results were the mean over the 25 images. These methods compute the fractal dimension (D), each one having its own theoretical basis. We used five techniques for calculating the fractal dimension: the power spectral density method, the triangular prism surface area method, the blanket method, the intensity difference scaling method and variogram analysis. Our study showed a decrease of the fractal dimension during the healing process after bone loss. We also found evidence that various methods of calculating the fractal dimension give different results. During the healing process after bone loss, the surfaces of radiographic images became smooth. The results obtained show that our findings may be of great importance for diagnostic purposes. PMID:26362075
Performance of a Fieldable Large-Area, Coded-Aperture, Gamma Imager
Habte Ghebretatios, Frezghi; Cunningham, Mark F; Fabris, Lorenzo; Ziock, Klaus-Peter
2007-01-01
We recently developed a fieldable large-area, coded-aperture gamma imager (the Large Area Imager, LAI). The instrument was developed to detect weak radiation sources in a fluctuating natural background. Ideally, the efficacy of the instrument is determined using receiver operating characteristic (ROC) statistics generated from measurement data, in terms of probability of detection versus probability of false alarm. However, due to the impracticality of hiding many sources in public areas, it is difficult to measure the data required to generate ROC curves. Instead, we develop a high-statistics "model source" from measurements of a real point source and then inject the model source into data collected from the world at large where, presumably, no source exists. In this paper we have applied this "source injection" technique to evaluate the performance of the LAI. We plotted ROC curves obtained for different source locations from the imager and for different source strengths when the source is injected at 50 m from the imager. The results show that this prototype instrument provides excellent performance for a 1-mCi source at a distance of 50 m from the imager in a single pass at 25 mph.
Trabecular Bone Mechanical Properties and Fractal Dimension
NASA Technical Reports Server (NTRS)
Hogan, Harry A.
1996-01-01
Countermeasures for reducing bone loss and muscle atrophy due to extended exposure to the microgravity environment of space are continuing to be developed and improved. An important component of this effort is finite element modeling of the lower extremity and spinal column. These models will permit analysis and evaluation specific to each individual and thereby provide more efficient and effective exercise protocols. Inflight countermeasures and post-flight rehabilitation can then be customized and targeted on a case-by-case basis. Recent Summer Faculty Fellowship participants have focused upon finite element mesh generation, muscle force estimation, and fractal calculations of trabecular bone microstructure. Methods have been developed for generating the three-dimensional geometry of the femur from serial section magnetic resonance images (MRI). The use of MRI as an imaging modality avoids excessive exposure to radiation associated with X-ray based methods. These images can also detect trabecular bone microstructure and architecture. The goal of the current research is to determine the degree to which the fractal dimension of trabecular architecture can be used to predict the mechanical properties of trabecular bone tissue. The elastic modulus and the ultimate strength (or strain) can then be estimated from non-invasive, non-radiating imaging and incorporated into the finite element models to more accurately represent the bone tissue of each individual of interest. Trabecular bone specimens from the proximal tibia are being studied in this first phase of the work. Detailed protocols and procedures have been developed for carrying test specimens through all of the steps of a multi-faceted test program. The test program begins with MRI and X-ray imaging of the whole bones before excising a smaller workpiece from the proximal tibia region. High resolution MRI scans are then made and the piece further cut into slabs (roughly 1 cm thick). The slabs are X-rayed again
Exterior dimension of fat fractals
NASA Technical Reports Server (NTRS)
Grebogi, C.; Mcdonald, S. W.; Ott, E.; Yorke, J. A.
1985-01-01
Geometric scaling properties of fat fractal sets (fractals with finite volume) are discussed and characterized via the introduction of a new dimension-like quantity which is called the exterior dimension. In addition, it is shown that the exterior dimension is related to the 'uncertainty exponent' previously used in studies of fractal basin boundaries, and it is shown how this connection can be exploited to determine the exterior dimension. Three illustrative applications are described, two in nonlinear dynamics and one dealing with blood flow in the body. Possible relevance to porous materials and ballistic driven aggregation is also noted.
Visible–infrared achromatic imaging by wavefront coding with wide-angle automobile camera
NASA Astrophysics Data System (ADS)
Ohta, Mitsuhiko; Sakita, Koichi; Shimano, Takeshi; Sugiyama, Takashi; Shibasaki, Susumu
2016-09-01
We perform an experiment of achromatic imaging with wavefront coding (WFC) using a wide-angle automobile lens. Our original annular phase mask for WFC was inserted to the lens, for which the difference between the focal positions at 400 nm and at 950 nm is 0.10 mm. We acquired images of objects using a WFC camera with this lens under the conditions of visible and infrared light. As a result, the effect of the removal of the chromatic aberration of the WFC system was successfully determined. Moreover, we fabricated a demonstration set assuming the use of a night vision camera in an automobile and showed the effect of the WFC system.
Gamma ray imaging using coded aperture masks: a computer simulation approach.
Jimenez, J; Olmos, P; Pablos, J L; Perez, J M
1991-02-10
Gamma-ray imaging using coded aperture masks as focusing elements is a widespread technique for static position-sensitive detectors. Several transfer functions have been proposed to implement mathematically the set of holes in the mask, the uniformly redundant array collimator being the most popular design. A considerable amount of work has been done to improve the digital methods used to deconvolve the gamma-ray image, formed at the detector plane, with this transfer function. Here we present a study of the behavior of these techniques when applied to the geometric shadows produced by a set of point emitters. Comparison of the shape of the object reconstructed from these shadows with that resulting from the analytical reconstruction is performed, defining the validity ranges of the usual algorithmic approximations reported in the literature. Finally, several improvements are discussed. PMID:20582025
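The shadow-and-deconvolution idea can be sketched in one dimension. The toy below uses a random open/closed mask with balanced correlation decoding rather than a uniformly redundant array, and a single point emitter; the mask length, seed, and source position are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 101
mask = rng.integers(0, 2, size=N)  # 1 = open hole, 0 = opaque (random, not a URA)
decoder = 2 * mask - 1             # balanced decoding pattern (+1 / -1)

# A single point emitter at position 30: the detector records the mask's
# shadow, i.e. the mask circularly shifted by the source position.
detector = np.roll(mask, 30)

# Correlation decoding: the reconstruction peaks at the source position,
# because sum(mask * decoder) equals the number of open holes, while
# misaligned overlaps average out near zero.
recon = np.array([np.dot(detector, np.roll(decoder, t)) for t in range(N)])
print(int(np.argmax(recon)))
```

A true URA makes the off-peak terms exactly flat instead of merely small, which is why it is the preferred mask design in practice.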
Modeling of imaging diagnostics for laser plasma interaction experiments with the code PARAX
NASA Astrophysics Data System (ADS)
Lewis, K.; Riazuelo, G.; Labaune, C.
2005-09-01
We have developed a diagnostic simulation tool for the code PARAX to interpret recent measurements of far-field images of the laser light transmitted through a preformed plasma. This includes the complete treatment of the propagation of the light coming from a well-defined region of plasma through the rest of the plasma and all the optics of the imaging system. We have modeled the whole light path, as well as the spatio-temporal integration of the instruments, and the limited collecting aperture for the light emerging out of the plasma. The convolution of computed magnitudes with the plasma and diagnostics transfer functions is indispensable to enable the comparison between experiments and simulations. This tool is essential in the study of the propagation of intense laser beams in plasma media.
Indexing the output points of an LBVQ used for image transform coding.
Khataie, M; Soleymani, M R
2000-01-01
A new method for indexing the points of a lattice-based vector quantizer (LBVQ) used to quantize the DCT coefficients of images is presented. With this method, a large number of lattice points can be selected as codewords. As a result, the quality of the compressed data can be very high. The problem is that the large number of points results in a high bit rate. To reduce the bit rate, a shorter representation is assigned to the more frequently used lattice points. These points are grouped and a prefix code is used to index them. Our method outperforms JPEG, particularly in the case of images with high-frequency components. PMID:18255454
Fractal analysis: A new remote sensing tool for lava flows
NASA Technical Reports Server (NTRS)
Bruno, B. C.; Taylor, G. J.; Rowland, S. K.; Lucey, P. G.; Self, S.
1992-01-01
Many important quantitative parameters have been developed that relate to the rheology and to the eruption and emplacement mechanics of lavas. This research centers on developing additional, unique parameters, namely the fractal properties of lava flows, to add to this matrix of properties. There are several methods of calculating the fractal dimension of a lava flow margin. We use the 'structured walk' or 'divider' method. In this method, we measure the length of a given lava flow margin by walking rods of different lengths along the margin. Since smaller rod lengths traverse more of the small-scale features in the flow margin, the apparent length of the flow outline increases as the length of the measuring rod decreases. By plotting the apparent length of the flow outline as a function of the length of the measuring rod on a log-log plot, fractal behavior can be determined: a linear trend on a log-log plot indicates that the data are fractal. The fractal dimension can then be calculated from the slope of the linear least-squares fit to the data. We use this 'structured walk' method to calculate the fractal dimension of many lava flows using a wide range of rod lengths, from 1/8 to 16 meters, in field studies of the Hawaiian Islands. We also use this method to calculate fractal dimensions from aerial photographs of lava flows, using rod lengths ranging from 20 meters to over 2 kilometers. Finally, we applied this method to orbital images of extraterrestrial lava flows on Venus, Mars, and the Moon, using rod lengths up to 60 kilometers.
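The structured-walk procedure described above can be sketched as follows, with a synthetic Koch curve standing in for a digitized flow margin; the rod lengths and recursion depth are illustrative, and the walk jumps to polyline vertices, so the estimate is approximate.

```python
import numpy as np

def koch_curve(depth):
    # Build a Koch curve as a dense polyline (exact dimension log4/log3 ~ 1.26)
    pts = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
    rot = np.array([[0.5, -np.sqrt(3) / 2], [np.sqrt(3) / 2, 0.5]])  # +60 deg
    for _ in range(depth):
        new = []
        for p, q in zip(pts[:-1], pts[1:]):
            d = (q - p) / 3.0
            a, b = p + d, p + 2 * d
            new += [p, a, a + rot @ d, b]
        new.append(pts[-1])
        pts = new
    return np.array(pts)

def divider_length(pts, rod):
    # "Structured walk": step a rod of fixed length along the outline and
    # report (number of steps) * (rod length); the partial last step is dropped.
    cur, j, steps = pts[0], 0, 0
    while True:
        k = j + 1
        while k < len(pts) and np.linalg.norm(pts[k] - cur) < rod:
            k += 1
        if k >= len(pts):
            return steps * rod
        cur, j, steps = pts[k], k, steps + 1

pts = koch_curve(6)
rods = np.array([0.01, 0.02, 0.04, 0.08])
lengths = np.array([divider_length(pts, r) for r in rods])
slope = np.polyfit(np.log(rods), np.log(lengths), 1)[0]
print(1.0 - slope)  # fractal dimension estimate
```

Because apparent length scales as L(r) ~ r^(1-D), the dimension is one minus the log-log slope, exactly as described for the field measurements.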
Map of fluid flow in fractal porous medium into fractal continuum flow.
Balankin, Alexander S; Elizarraraz, Benjamin Espinoza
2012-05-01
This paper is devoted to fractal continuum hydrodynamics and its application to modeling fluid flows in fractally permeable reservoirs. Hydrodynamics of fractal continuum flow is developed on the basis of a self-consistent model of fractal continuum employing vector local fractional differential operators allied with the Hausdorff derivative. The generalized forms of the Green-Gauss and Kelvin-Stokes theorems for fractional calculus are proved. The Hausdorff material derivative is defined and the form of the Reynolds transport theorem for fractal continuum flow is obtained. The fundamental conservation laws for a fractal continuum flow are established. The Stokes law and the analog of Darcy's law for fractal continuum flow are suggested. The pressure-transient equation accounting for the fractal metric of fractal continuum flow is derived. The generalization of the pressure-transient equation accounting for the fractal topology of fractal continuum flow is proposed. The mapping of fluid flow in a fractally permeable medium into a fractal continuum flow is discussed. It is stated that the spectral dimension of the fractal continuum flow d(s) is equal to its mass fractal dimension D, even when the spectral dimension of the fractally porous or fissured medium is less than D. A comparison of the fractal continuum flow approach with other models of fluid flow in fractally permeable media and with experimental field data from reservoir tests is provided. PMID:23004869
Fractality of light's darkness.
O'Holleran, Kevin; Dennis, Mark R; Flossmann, Florian; Padgett, Miles J
2008-02-01
Natural light fields are threaded by lines of darkness. For monochromatic light, the phenomenon is familiar in laser speckle, i.e., the black points that appear in the scattered light. These black points are optical vortices that extend as lines throughout the volume of the field. We establish by numerical simulations, supported by experiments, that these vortex lines have the fractal properties of a Brownian random walk. Approximately 73% of the lines percolate through the optical beam, the remainder forming closed loops. Our statistical results are similar to those of vortices in random discrete lattice models of cosmic strings, implying that the statistics of singularities in random optical fields exhibit universal behavior. PMID:18352372
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2008-09-01
The Central Limit Theorem (CLT) and Extreme Value Theory (EVT) study, respectively, the stochastic limit-laws of sums and maxima of sequences of independent and identically distributed (i.i.d.) random variables via an affine scaling scheme. In this research we study the stochastic limit-laws of populations of i.i.d. random variables via nonlinear scaling schemes. The stochastic population-limits obtained are fractal Poisson processes which are statistically self-similar with respect to the scaling scheme applied, and which are characterized by two elemental structures: (i) a universal power-law structure common to all limits, and independent of the scaling scheme applied; (ii) a specific structure contingent on the scaling scheme applied. The sum-projection and the maximum-projection of the population-limits obtained are generalizations of the classic CLT and EVT results - extending them from affine to general nonlinear scaling schemes.
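For reference, the classical affine limit laws that this work generalizes can be stated explicitly (these are the textbook statements, not the paper's nonlinear scheme):

```latex
% i.i.d. X_1, X_2, \dots ; \; S_n = X_1 + \cdots + X_n , \; M_n = \max(X_1,\dots,X_n)
% CLT-type limit under affine scaling (Gaussian when Var(X) < \infty,
% an alpha-stable law otherwise):
\frac{S_n - b_n}{a_n} \;\xrightarrow{\;d\;}\; S ,
\qquad
% EVT limit under affine scaling (G is of Frechet, Weibull, or Gumbel type):
\frac{M_n - d_n}{c_n} \;\xrightarrow{\;d\;}\; G .
```

The paper replaces the affine maps x -> (x - b_n)/a_n with general nonlinear scaling schemes, under which the population limits become the fractal Poisson processes described above.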
Polychronopoulos, Dimitris; Athanasopoulou, Labrini; Almirantis, Yannis
2016-06-15
Conserved non-coding elements (CNEs) are defined using various degrees of sequence identity and thresholds of minimal length. Their conservation frequently exceeds the one observed for protein-coding sequences. We explored the chromosomal distribution of different classes of CNEs in the human genome. We employed two methodologies: the scaling of block entropy and box-counting, with the aim to assess fractal characteristics of different CNE datasets. Both approaches converged to the conclusion that well-developed fractality is characteristic of elements that are either extremely conserved between species or are of ancient origin, i.e. conserved between distant organisms across evolution. Given that CNEs are often clustered around genes, we verified by appropriate gene masking that fractal-like patterns emerge even when elements found in proximity or inside genes are excluded. An evolutionary scenario is proposed, involving genomic events that might account for fractal distribution of CNEs in the human genome as indicated through numerical simulations. PMID:26899868
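Box-counting of the kind applied to CNE positions can be illustrated on a one-dimensional toy set. The sketch below estimates the box-counting dimension of the middle-thirds Cantor set, whose exact dimension is log 2 / log 3; it is not the authors' genomic pipeline, and integer arithmetic is used so the box counts are exact.

```python
from itertools import product
import numpy as np

DEPTH = 8

def cantor_numerators(depth):
    # Left endpoints of the Cantor set intervals at `depth`, as integers n
    # with x = n / 3**depth (base-3 digits restricted to {0, 2}).
    return [sum(d * 3 ** (depth - 1 - i) for i, d in enumerate(digits))
            for digits in product((0, 2), repeat=depth)]

def box_counting_dimension(nums, depth, levels):
    # Count occupied boxes of size 3**-k; integer division keeps counts exact.
    counts = [len({n // 3 ** (depth - k) for n in nums}) for k in levels]
    # Fit log N(k) against log(3**k); the slope is the box-counting dimension.
    slope = np.polyfit([k * np.log(3.0) for k in levels], np.log(counts), 1)[0]
    return slope

nums = cantor_numerators(DEPTH)
print(box_counting_dimension(nums, DEPTH, range(1, 7)))  # ~ log2/log3 ~ 0.63
```

For genomic data the "points" would be element coordinates along a chromosome and the boxes windows of geometrically shrinking size; the scaling-fit step is identical.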
Displaying radiologic images on personal computers: image storage and compression--Part 2.
Gillespy, T; Rowberg, A H
1994-02-01
This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has first been transformed into a differential image using a differential pulse-code modulation (DPCM) algorithm. LZW compression after the DPCM image transformation performed best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and the discrete wavelet transformation. In both methods, most of the image information is contained in relatively few of the transformation coefficients. The quantization step reduces many of the higher-order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution. PMID:8172973
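The benefit of the DPCM transform described above is easy to demonstrate (an illustrative sketch; zlib's DEFLATE, an LZ77+Huffman scheme, stands in here for the LZW/Huffman coders discussed in the article):

```python
import zlib
import numpy as np

# DPCM sketch: transform a smooth signal into differences between
# neighboring samples, then entropy-code both versions. The differential
# stream clusters near zero and compresses far better.
rng = np.random.default_rng(1)
row = np.cumsum(rng.integers(-2, 3, size=4096)).astype(np.int16)  # smooth "scanline"

raw = row.tobytes()
dpcm = np.diff(row, prepend=row[:1]).tobytes()  # differential image (lossless)

print(len(zlib.compress(raw)), len(zlib.compress(dpcm)))
```

Reconstruction is exact: a cumulative sum of the differences recovers the original row, so the DPCM step costs nothing in fidelity.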
Sparse spike coding : applications of neuroscience to the processing of natural images
NASA Astrophysics Data System (ADS)
Perrinet, Laurent U.
2008-04-01
If modern computers are sometimes superior to cognition in some specialized tasks such as playing chess or browsing a large database, they cannot beat the efficiency of biological vision for such simple tasks as recognizing a relative or following an object in a complex background. We present in this paper our attempt at outlining the dynamical, parallel and event-based representation for vision in the architecture of the central nervous system. We illustrate this by showing that, in a signal matching framework, a L/LN (linear/non-linear) cascade may efficiently transform a sensory signal into a neural spiking signal, and we apply this framework to a model retina. However, this code becomes redundant when using an over-complete basis, as is necessary for modeling the primary visual cortex; we therefore optimize the efficiency cost by increasing the sparseness of the code. This is implemented by propagating and canceling redundant information using lateral interactions. We compare the efficiency of this representation in terms of compression, measured as the reconstruction quality as a function of the coding length. This corresponds to a modification of the Matching Pursuit algorithm where the ArgMax function is optimized for competition, or Competition Optimized Matching Pursuit (COMP). We will particularly focus on bridging neuroscience and image processing and on the advantages of such an interdisciplinary approach.
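The greedy ArgMax core that COMP modifies can be sketched as plain Matching Pursuit (an illustrative sketch without the lateral-competition step; names and dictionary sizes are ours):

```python
import numpy as np

# Plain Matching Pursuit: repeatedly pick the dictionary atom most
# correlated with the residual (the ArgMax step) and subtract its
# projection, accumulating a sparse code.
def matching_pursuit(signal, dictionary, n_atoms):
    # dictionary: columns are unit-norm atoms.
    residual = signal.astype(float).copy()
    code = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        best = np.argmax(np.abs(correlations))
        code[best] += correlations[best]
        residual -= correlations[best] * dictionary[:, best]
    return code, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 5] - 1.5 * D[:, 40]          # a 2-sparse test signal
code, res = matching_pursuit(x, D, 10)
print(np.count_nonzero(code), round(float(np.linalg.norm(res)), 3))
```

Each iteration strictly shrinks the residual, which is the compression-vs-quality trade-off the abstract measures as reconstruction quality against coding length.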
NASA Technical Reports Server (NTRS)
Lam, Nina Siu-Ngan; Qiu, Hong-Lie; Quattrochi, Dale A.; Emerson, Charles W.; Arnold, James E. (Technical Monitor)
2001-01-01
The rapid increase in digital data volumes from new and existing sensors creates the need for efficient analytical tools for extracting information. We developed an integrated software package called ICAMS (Image Characterization and Modeling System) to provide specialized spatial analytical functions for interpreting remote sensing data. This paper evaluates three fractal dimension measurement methods, isarithm, variogram, and triangular prism, along with the spatial autocorrelation measurement methods Moran's I and Geary's C, all of which have been implemented in ICAMS. A modified triangular prism method was proposed and implemented. Results from analyzing 25 simulated surfaces having known fractal dimensions show that both the isarithm and triangular prism methods can accurately measure a range of fractal surfaces. The triangular prism method is most accurate at estimating the fractal dimension of surfaces with higher spatial complexity, but it is sensitive to contrast stretching. The variogram method is a comparatively poor estimator for all of the surfaces, particularly those with higher fractal dimensions. Like the fractal techniques, the spatial autocorrelation techniques are found to be useful for measuring complex images but not images with low dimensionality. These fractal measurement methods can be applied directly to unclassified images and could serve as a tool for change detection and data mining.
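Moran's I, one of the autocorrelation measures evaluated here, has a compact definition (an illustrative sketch with rook adjacency, not the ICAMS implementation):

```python
import numpy as np

# Moran's I with rook (4-neighbor) adjacency on a grid. Values near +1
# indicate smooth, spatially clustered images; a checkerboard is perfectly
# negatively autocorrelated and gives exactly -1.
def morans_i(grid):
    z = grid - grid.mean()
    num, weights = 0.0, 0
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    num += z[i, j] * z[ni, nj]
                    weights += 1
    return (grid.size / weights) * num / (z * z).sum()

checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
print(round(morans_i(checker), 3))  # -1.0
```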
Huang, H K; Lo, S C; Ho, B K; Lou, S L
1987-05-01
Error-free and irreversible two-dimensional discrete cosine transform (2D-DCT) image-compression techniques applied to radiological images are discussed in this paper. Run-length coding and Huffman coding are described, and examples are given for error-free image compression. In the case of irreversible 2D-DCT coding, the block-quantization technique and the full-frame bit-allocation (FFBA) technique are described. Error-free image compression can achieve a compression ratio from 2:1 to 3:1, whereas the irreversible 2D-DCT coding technique can, in general, achieve a much higher acceptable compression ratio. The currently available block-quantization hardware may lead to visible block artifacts at certain compression ratios, but FFBA may be employed at the same or higher compression ratios without generating such artifacts. An even higher compression ratio can be achieved if the image is compressed first by FFBA and then by Huffman coding. The disadvantages of FFBA are that it is sensitive to sharp edges and that no hardware is available. This paper also describes the design of the FFBA technique. PMID:3598750
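The block-quantization idea can be sketched directly (an illustrative toy, not the authors' system: the 8x8 block, quantizer step, and coefficient layout are our assumptions):

```python
import numpy as np

# 8x8 DCT-II concentrates the energy of a smooth block into a few
# low-order coefficients; coarse quantization zeroes most of the rest,
# which is what makes the subsequent coding step efficient.
N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] = np.sqrt(1.0 / N)                 # orthonormal DCT-II matrix

x, y = np.meshgrid(np.arange(N), np.arange(N))
block = 100 + 10 * np.sin(x / 4.0) + 5 * (y / 7.0)   # smooth toy block

coeffs = C @ block @ C.T
step = 16.0
quantized = np.round(coeffs / step) * step           # coarse uniform quantizer
print(int(np.count_nonzero(quantized)))              # few coefficients survive
reconstructed = C.T @ quantized @ C
print(round(float(np.abs(block - reconstructed).max()), 1))
```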
Thermodynamics of Photons on Fractals
Akkermans, Eric; Dunne, Gerald V.; Teplyaev, Alexander
2010-12-03
A thermodynamical treatment of a massless scalar field (a photon) confined to a fractal spatial manifold leads to an equation of state relating pressure to internal energy, PV_s = U/d_s, where d_s is the spectral dimension and V_s defines the 'spectral volume'. For regular manifolds, V_s coincides with the usual geometric spatial volume, but on a fractal this is not necessarily the case. This is further evidence that on a fractal, momentum space can have a different dimension than position space. Our analysis also provides a natural definition of the vacuum (Casimir) energy of a fractal. We suggest ways that these unusual properties might be probed experimentally.
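For the regular-manifold case the equation of state follows from standard blackbody thermodynamics; a brief sketch (standard material, not the paper's fractal derivation):

```latex
% Photon gas in d spatial dimensions: the mode density gives a free energy
% F = -a\,V T^{d+1} for some constant a. Then
\begin{aligned}
  S &= -\frac{\partial F}{\partial T} = a\,(d+1)\,V T^{d}, \\
  U &= F + TS = a\,d\,V T^{d+1}, \\
  P &= -\frac{\partial F}{\partial V} = a\,T^{d+1}
  \quad\Longrightarrow\quad P V = \frac{U}{d}.
\end{aligned}
% The paper's result replaces d by the spectral dimension d_s and V by the
% spectral volume V_s, giving P V_s = U/d_s.
```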
Fractal analysis of Mesoamerican pyramids.
Burkle-Elizondo, Gerardo; Valdez-Cepeda, Ricardo David
2006-01-01
A myth of ancient cultural roots was integrated into the Mesoamerican cult, and its references to architecture carried a deep religious symbolism. The pyramids form a functional part of this cosmovision, which is centered on sacralization. Architectural works expressed ideological necessities within a conception of harmony, and the symbolism of the temple structures seems to reflect the mathematical order of the Universe. We consider two models of fractal analysis. The first includes 16 pyramids: a data set treated as a fractal profile was used to estimate the fractal dimension (Df) through variography (Dv), giving an estimated Dv = 1.383 +/- 0.211. In the second, a data set for 19 pyramids gave an estimated Dv = 1.229 +/- 0.165. PMID:16393505
Anomalous Diffusion in Fractal Globules
NASA Astrophysics Data System (ADS)
Tamm, M. V.; Nazarov, L. I.; Gavrilov, A. A.; Chertovich, A. V.
2015-05-01
The fractal globule state is a popular model for describing chromatin packing in eukaryotic nuclei. Here we provide a scaling theory and dissipative particle dynamics computer simulations for the thermal motion of monomers in the fractal globule state. Simulations starting from different entanglement-free initial states show good convergence, which provides evidence supporting the existence of a unique metastable fractal globule state. We show monomer motion in this state to be subdiffusive, described by ⟨X²(t)⟩ ~ t^(αF) with αF close to 0.4. This result is in good agreement with existing experimental data on chromatin dynamics, which provides an additional argument in support of the fractal globule model of chromatin packing.
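The exponent αF is read off a log-log fit of the mean-squared displacement; as a baseline illustration (our toy, not the paper's simulation), an ordinary random walk gives α near 1, whereas the fractal-globule monomers above give α close to 0.4:

```python
import numpy as np

# Extract the diffusion exponent alpha from <X^2(t)> ~ t^alpha via a
# log-log fit over an ensemble of independent random walkers.
rng = np.random.default_rng(0)
steps = rng.choice([-1.0, 1.0], size=(4000, 256))   # 4000 walkers, 256 steps
paths = np.cumsum(steps, axis=1)
t = np.arange(1, 257)
msd = (paths ** 2).mean(axis=0)                     # ensemble-averaged MSD
alpha, _ = np.polyfit(np.log(t), np.log(msd), 1)
print(round(float(alpha), 2))  # ~1 for normal diffusion
```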
Fractal Character of Titania Nanoparticles Formed by Laser Ablation
Musaev, O.; Midgley, A; Wrobel, J; Yan, J; Kruger, M
2009-01-01
Titania nanoparticles were fabricated by laser ablation of polycrystalline rutile in water at room temperature. The resulting nanoparticles were analyzed with x-ray diffraction, Raman spectroscopy, and transmission electron microscopy. The electron micrograph image of deposited nanoparticles demonstrates fractal properties.
Magnetic Resonance Image Phantom Code System to Calibrate in vivo Measurement Systems.
HICKMAN, DAVE
1997-07-17
Version 00 MRIPP provides relative calibration factors for the in vivo measurement of internally deposited photon-emitting radionuclides within the human body. The code includes a database of human anthropometric structures (phantoms) that were constructed from whole-body Magnetic Resonance Images. The database contains a large variety of human images with varying anatomical structure. Correction factors are obtained using Monte Carlo transport of photons through the voxel geometry of the phantom. Correction factors provided by MRIPP allow users of in vivo measurement systems (e.g., whole body counters) to calibrate these systems with simple sources and obtain subject-specific calibrations. Note that the capability to format MRI data for use with this system is not included; therefore, one must use the phantom data included in this package. MRIPP provides a simple interface to perform Monte Carlo simulation of photon transport through the human body. MRIPP also provides anthropometric information (e.g., height, weight, etc.) for the individuals used to generate the phantom database. A modified Voxel version of the Los Alamos National Laboratory MCNP4A code is used for the Monte Carlo simulation. The Voxel version Fortran patch to MCNP4 and MCNP4A (Monte Carlo N-Particle transport simulation) and the MCNP executable are included in this distribution, but the MCNP Fortran source is not included. It was distributed by RSICC as CCC-200 but is now obsoleted by the current release, MCNP4B.
Cellular-resolution population imaging reveals robust sparse coding in the Drosophila Mushroom Body
Honegger, Kyle S.; Campbell, Robert A. A.; Turner, Glenn C.
2011-01-01
Sensory stimuli are represented in the brain by the activity of populations of neurons. In most biological systems, studying population coding is challenging since only a tiny proportion of cells can be recorded simultaneously. Here we used 2-photon imaging to record neural activity in the relatively simple Drosophila mushroom body (MB), an area involved in olfactory learning and memory. Using the highly sensitive calcium indicator, GCaMP3, we simultaneously monitored the activity of >100 MB neurons in vivo (about 5% of the total population). The MB is thought to encode odors in sparse patterns of activity, but the code has yet to be explored either on a population level or with a wide variety of stimuli. We therefore imaged responses to odors chosen to evaluate the robustness of sparse representations. Different odors activated distinct patterns of MB neurons, however we found no evidence for spatial organization of neurons by either response probability or odor tuning within the cell body layer. The degree of sparseness was consistent across a wide range of stimuli, from monomolecular odors to artificial blends and even complex natural smells. Sparseness was mainly invariant across concentrations, largely because of the influence of recent odor experience. Finally, in contrast to sensory processing in other systems, no response features distinguished natural stimuli from monomolecular odors. Our results indicate that the fundamental feature of odor processing in the MB is to create sparse stimulus representations in a format that facilitates arbitrary associations between odor and punishment or reward. PMID:21849538
Facial motion parameter estimation and error criteria in model-based image coding
NASA Astrophysics Data System (ADS)
Liu, Yunhai; Yu, Lu; Yao, Qingdong
2000-04-01
Model-based image coding has received extensive attention due to its high subjective image quality and low bit-rates. However, the estimation of object motion parameters remains a difficult problem, and there is no proper error criterion for quality assessment that is consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts: the global 3-D rigid motion of the head, non-rigid translational motion in the jaw area, and local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness, and end-nodes outside the blocks of the eyes and mouth, and the number of feature points is adjusted adaptively. The jaw translation motion is tracked through changes in the positions of the jaw feature points. The areas of non-rigid expression motion can be rebuilt using a block-pasting method. An approach to estimating motion parameter error based on the quality of the reconstructed image is suggested, with an area error function and a contour transition-turn rate error function used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.
Low-complexity digital filter geometry for spherical coded imaging systems
NASA Astrophysics Data System (ADS)
Feng, Guotong; Shoaib, Mohammed; Robinson, M. D.
2009-08-01
Recent research in the area of electro-optical system design identified the benefits of spherical aberration for extending the depth-of-field of electro-optical imaging systems. In such imaging systems, spherical aberration is deliberately introduced by the optical system lowering system modulation transfer function (MTF) and then subsequently corrected using digital processing. Previous research, however, requires complex digital postprocessing algorithms severely limiting its applicability to only expensive systems. In this paper, we examine the ability of low-cost spatially invariant finite impulse response (FIR) digital filters to restore system MTF degraded by spherical aberration. We introduce an analytical model for choosing the minimum, and hence cheapest, FIR filter size capable of providing the critical level sharpening to render artifact-free images. We identify a robust quality criterion based on the post-processed MTF for developing this model. We demonstrate the reliability of the estimated model by showing simulated spherical coded imaging results. We also evaluate the hardware complexity of the FIR filters implemented for various spherical aberrations on a low-end Field-Programmable Gate Array (FPGA) platform.
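The core trade-off modeled here, FIR filter size versus restoration quality, can be sketched in one dimension (an illustrative toy: the blur kernel and tap count are our assumptions, standing in for the spherically aberrated PSF):

```python
import numpy as np

# Design a small FIR restoration filter g by least squares so that the
# cascade h*g approximates a unit impulse; longer filters approximate it
# better, which is the size-vs-quality trade-off discussed in the text.
h = np.array([0.2, 0.6, 0.2])            # toy blur kernel
n_taps = 9
target = np.zeros(len(h) + n_taps - 1)
target[len(target) // 2] = 1.0           # desired overall response: a delta

# Convolution matrix: column j holds h shifted by j samples.
A = np.zeros((len(target), n_taps))
for j in range(n_taps):
    A[j:j + len(h), j] = h
g, *_ = np.linalg.lstsq(A, target, rcond=None)

residual = float(np.linalg.norm(A @ g - target))
print(round(residual, 3))                # small: cascade is near-impulse
```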
Real-time detection of natural objects using AM-coded spectral matching imager
NASA Astrophysics Data System (ADS)
Kimachi, Akira
2004-12-01
This paper describes the application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products while often exhibiting characteristic spectral functions associated with specific activity states. The AM-SMI produces the correlation between the spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable for monitoring the dynamic behavior of natural objects in real time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous design. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.
Fractal dynamics of bioconvective patterns
NASA Technical Reports Server (NTRS)
Noever, David A.
1991-01-01
Biologically generated cellular patterns, sometimes called bioconvective patterns, are found to cluster into aggregates which follow fractal growth dynamics akin to diffusion-limited aggregation (DLA) models. The pattern formed is self-similar with fractal dimension of 1.66 +/- 0.038. Bioconvective DLA branching results from thermal roughening which shifts the balance between ordering viscous forces and disordering cell motility and random diffusion. The phase diagram for pattern morphology includes DLA, boundary spokes, random clusters, and reverse clusters.
Fractal electronic devices: simulation and implementation.
Fairbanks, M S; McCarthy, D N; Scott, S A; Brown, S A; Taylor, R P
2011-09-01
Many natural structures have fractal geometries that exhibit useful functional properties. These properties, which exploit the recurrence of patterns at increasingly small scales, are often desirable in applications and, consequently, fractal geometry is increasingly employed in diverse technologies ranging from radio antennae to storm barriers. In this paper, we explore the application of fractal geometry to electrical devices. First, we lay the foundations for the implementation of fractal devices by considering diffusion-limited aggregation (DLA) of atomic clusters. Under appropriate growth conditions, atomic clusters of various elements form fractal patterns driven by DLA. We perform a fractal analysis of both simulated and physical devices to determine their spatial scaling properties and demonstrate their potential as fractal circuit elements. Finally, we simulate conduction through idealized and DLA fractal devices and show that their fractal scaling properties generate novel, nonlinear conduction properties in response to depletion by electrostatic gates. PMID:21841218
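The DLA growth that underlies these devices can be sketched with a tiny on-lattice simulation (illustrative scale only, not a device model; parameters are ours):

```python
import math
import random

# Tiny on-lattice DLA: walkers released on a circle around a seed
# random-walk until they touch the cluster and stick, producing the
# branched fractal growth discussed in the text.
def grow_dla(n_particles, seed=2):
    rng = random.Random(seed)
    cluster = {(0, 0)}
    max_r = 0.0
    while len(cluster) < n_particles:
        launch_r = max_r + 5
        theta = rng.uniform(0, 2 * math.pi)
        x, y = int(launch_r * math.cos(theta)), int(launch_r * math.sin(theta))
        while True:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
            if x * x + y * y > (launch_r + 10) ** 2:   # wandered off: relaunch
                theta = rng.uniform(0, 2 * math.pi)
                x = int(launch_r * math.cos(theta))
                y = int(launch_r * math.sin(theta))
                continue
            if any((x + ax, y + ay) in cluster
                   for ax, ay in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                cluster.add((x, y))
                max_r = max(max_r, math.hypot(x, y))
                break
    return cluster

cluster = grow_dla(60)
print(len(cluster))  # 60
```

A fractal analysis such as box-counting over the grown cluster would then recover the spatial scaling properties the paper measures.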
Guo, Xiaohu; Dong, Liquan; Zhao, Yuejin; Jia, Wei; Kong, Lingqin; Wu, Yijian; Li, Bing
2015-04-01
Wavefront coding (WFC) technology is adopted in the space optical system to resolve the problem of defocus caused by temperature difference or vibration of satellite motion. According to the theory of WFC, we calculate and optimize the phase mask parameter of the cubic phase mask plate, which is used in an on-axis three-mirror Cassegrain (TMC) telescope system. The simulation analysis and the experimental results indicate that the defocused modulation transfer function curves and the corresponding blurred images have a perfect consistency in the range of 10 times the depth of focus (DOF) of the original TMC system. After digital image processing by a Wiener filter, the spatial resolution of the restored images is up to 57.14 line pairs/mm. The results demonstrate that the WFC technology in the TMC system has superior performance in extending the DOF and less sensitivity to defocus, which has great value in resolving the problem of defocus in the space optical system. PMID:25967192
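The Wiener restoration step used above can be sketched in one dimension (an illustrative toy, not the TMC processing chain; the PSF and regularization constant are our assumptions):

```python
import numpy as np

# Frequency-domain Wiener restoration: G = H* / (|H|^2 + K), with K
# standing in for the noise-to-signal ratio.
signal = np.zeros(128)
signal[40:60] = 1.0                        # simple 1-D scene
kernel = np.zeros(128)
kernel[:5] = [0.1, 0.2, 0.4, 0.2, 0.1]     # toy blur PSF
kernel = np.roll(kernel, -2)               # center it (zero phase)

H = np.fft.fft(kernel)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))

K = 1e-3                                   # assumed regularization level
G = np.conj(H) / (np.abs(H) ** 2 + K)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * G))

print(round(float(np.mean((blurred - signal) ** 2)), 4),
      round(float(np.mean((restored - signal) ** 2)), 4))
# the restoration error is much smaller than the blur error
```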
Nonlinear spike-and-slab sparse coding for interpretable image encoding.
Shelton, Jacquelyn A; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg
2015-01-01
Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros for e.g. the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process. PMID:25954947
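The spike-and-slab prior at the heart of this model is easy to sample from (a generative sketch of the prior only; the full model combines these coefficients nonlinearly via the max rule, which is omitted here, and the parameter values are ours):

```python
import numpy as np

# Spike-and-slab prior: each coefficient is an exact zero with probability
# 1 - pi (the "spike") or drawn from a Gaussian "slab" otherwise, giving
# codes that mix exact zeros with a distribution over non-zero intensities.
rng = np.random.default_rng(0)
pi, n_units, n_samples = 0.1, 64, 2000

spikes = rng.random((n_samples, n_units)) < pi       # Bernoulli support
slabs = rng.normal(0.0, 1.0, (n_samples, n_units))   # Gaussian intensities
codes = np.where(spikes, slabs, 0.0)

sparsity = float(np.mean(codes != 0.0))
print(round(sparsity, 2))  # close to pi = 0.1
```

This is what lets the model represent the exact absence of an image component, in contrast to Laplace or Cauchy priors, which place no mass at exactly zero.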
Fractal geometry-based classification approach for the recognition of lung cancer cells
NASA Astrophysics Data System (ADS)
Xia, Deshen; Gao, Wenqing; Li, Hua
1994-05-01
This paper describes a new fractal geometry based classification approach for the recognition of lung cancer cells, for use in health screening for lung cancer. Because cancer cells grow much faster and more irregularly than normal cells do, the shape of a segmented cancer cell is very irregular and can be considered a figure without characteristic length. We use the Texture Energy Intensity Rn for fractal preprocessing to segment the cells from the image, and we calculate the fractal dimension value to extract the fractal features, so that we can obtain the figure characteristics of different cancer cells and normal cells respectively. Fractal geometry gives us a correct description of cancer-cell shapes. Through this method, good recognition of adenoma, squamous, and small-cell cancer cells can be obtained.