Science.gov

Sample records for fractal image coding

  1. Fractal image coding based on replaced domain pools

    NASA Astrophysics Data System (ADS)

    Harada, Masaki; Fujii, Toshiaki; Kimoto, Tadahiko; Tanimoto, Masayuki

    1998-01-01

    Fractal image coding based on iterated function systems has been attracting much interest because of its potential for drastic data compression. It performs compression by exploiting the self-similarity contained in an image. In conventional schemes, under the assumption of self-similarity in the image, each range block is mapped from the larger domain block considered most suitable to approximate it. However, even if exact self-similarity is found at the encoder, it hardly holds at the decoder, because the domain pool of the encoder differs from that of the decoder. In this paper, we propose a fractal image coding scheme that uses domain pools replaced with decoded or transformed values to reduce the difference between the domain pools of the encoder and the decoder. The proposed scheme performs two-stage encoding: the domain pool is replaced first with decoded non-contractive blocks and then with transformed values for contractive blocks. The proposed scheme is expected to reduce the errors of contractive blocks in the reconstructed image while leaving those of non-contractive blocks unchanged. The experimental results show the effectiveness of the proposed scheme.
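
    As a hedged illustration of the baseline scheme this record builds on, the sketch below encodes a single range block by searching a pool of spatially downsampled domain blocks for the contractive affine map (scale s, offset o) that minimizes the collage error. It is a generic fractal block coder, not the replaced-domain-pool scheme of the paper; the block sizes, the search grid and the least-squares fit are illustrative assumptions.

```python
import numpy as np

def downsample(block):
    """Average 2x2 pixel groups so a 2Bx2B domain block matches the BxB range size."""
    return 0.25 * (block[0::2, 0::2] + block[1::2, 0::2]
                   + block[0::2, 1::2] + block[1::2, 1::2])

def encode_range_block(image, r0, c0, B=8):
    """Search all domain positions for the best contractive map R ~ s*D + o."""
    R = image[r0:r0 + B, c0:c0 + B].astype(float).ravel()
    H, W = image.shape
    best = None
    for dr in range(0, H - 2 * B + 1, B):              # domain blocks are 2B x 2B
        for dc in range(0, W - 2 * B + 1, B):
            D = downsample(image[dr:dr + 2 * B, dc:dc + 2 * B].astype(float)).ravel()
            A = np.column_stack([D, np.ones_like(D)])  # least-squares fit of s and o
            (s, o), *_ = np.linalg.lstsq(A, R, rcond=None)
            s = float(np.clip(s, -0.9, 0.9))           # keep the map contractive
            err = np.sum((s * D + o - R) ** 2)         # collage error of this pairing
            if best is None or err < best[0]:
                best = (err, dr, dc, s, o)
    return best  # (collage error, domain row, domain col, scale, offset)

img = np.random.randint(0, 256, (64, 64))
print(encode_range_block(img, 0, 0))
```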

  2. Image fractal coding algorithm based on complex exponent moments and minimum variance

    NASA Astrophysics Data System (ADS)

    Yang, Feixia; Ping, Ziliang; Zhou, Suhua

    2017-02-01

    Image fractal coding achieves a very high compression ratio, but its main problem is the low speed of encoding. An algorithm based on Complex Exponent Moments (CEM) and minimum variance is proposed to speed up fractal coding. The definition of CEM and its FFT algorithm are presented, and the multi-distorted invariance of CEM is discussed; this invariance fits the fractal property of an image well. The optimal matching pairs of range blocks and domain blocks in an image are determined by minimizing the variance of their CEM. Theoretical analysis and experimental results show that the algorithm dramatically reduces the iteration time and speeds up the image encoding and decoding process.

  3. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

    Fractals are geometric or data structures which do not simplify under magnification. Fractal image compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reaching high compression ratios for large data streams related to images. The high compression ratios are attained at the cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.

  4. Fractal coding of wavelet image based on human vision contrast-masking effect

    NASA Astrophysics Data System (ADS)

    Wei, Hai; Shen, Lansun

    2000-06-01

    In this paper, a fractal-based compression approach for wavelet images is presented. The scheme tries to make full use of the sensitivity features of the human visual system. With the wavelet-based multi-resolution representation of an image, detail vectors of each high-frequency sub-image are constructed according to its spatial orientation in order to capture the edge information to which a human observer is sensitive. A multi-level selection algorithm based on the contrast-masking effect of human vision then decides whether a detail vector is coded or not. Vectors below the contrast threshold are discarded without introducing visual artifacts, because human vision ignores them. To address the redundancy of the retained vectors, different fractal-based methods are employed to decrease the correlation within a single sub-image and between sub-images of different resolutions with the same orientation. Experimental results demonstrate the efficiency of the proposed scheme: on the standard test image, our approach outperforms the EZW algorithm and the JPEG method.

  5. Improved method for predicting the peak signal-to-noise ratio quality of decoded images in fractal image coding

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Bi, Sheng

    2017-01-01

    To predict the peak signal-to-noise ratio (PSNR) quality of decoded images in fractal image coding more efficiently and accurately, an improved method is proposed. After some derivations and analyses, we find that the linear correlation coefficients between coded range blocks and their respective best-matched domain blocks determine the dynamic range of their collage errors, which in turn provides the minimum and maximum of the accumulated collage error (ACE) of uncoded range blocks. Moreover, the dynamic range of the actual percentage of accumulated collage error (APACE), from APACEmin to APACEmax, can be determined as well. When APACEmin reaches a large value, such as 90%, the range from APACEmin to APACEmax is limited to a small interval and APACE can be computed approximately. Furthermore, with ACE and the approximate APACE, the ACE of all range blocks and the average collage error (ACER) can be obtained. Finally, with the logarithmic relationship between ACER and the PSNR quality of decoded images, the PSNR quality of decoded images can be predicted directly. Experiments show that, compared with the previous similar method, the proposed method predicts the PSNR quality of decoded images more accurately while requiring less computation time.
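
    The final prediction step relies on the logarithmic relationship between the average collage error (ACER) and decoded-image PSNR. The sketch below fits an assumed model PSNR ≈ a·log10(ACER) + b on a few training points and then predicts PSNR for a new image without decoding it; the training values and resulting coefficients are made-up placeholders, not the paper's data or model.

```python
import numpy as np

# Hypothetical training points: average collage error (ACER) and measured PSNR (dB)
# for images that were actually decoded once.  Real values would come from a codec.
acer_train = np.array([12.0, 25.0, 60.0, 140.0, 300.0])
psnr_train = np.array([36.1, 33.0, 29.2, 25.6, 22.4])

# Least-squares fit of the assumed logarithmic model  PSNR ~ a*log10(ACER) + b
a, b = np.polyfit(np.log10(acer_train), psnr_train, 1)

def predict_psnr(acer):
    """Predict decoded-image PSNR directly from the average collage error."""
    return a * np.log10(acer) + b

print(predict_psnr(80.0))   # prediction for a new image, without decoding it
```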

  6. Region-based fractal video coding

    NASA Astrophysics Data System (ADS)

    Zhu, Shiping; Belloulata, Kamel

    2008-10-01

    A novel video sequence compression scheme is proposed in order to realize efficient and economical transmission of video sequences, together with the region-based functionality of MPEG-4. The CPM and NCIM fractal coding scheme is applied to each region independently, using a prior image segmentation map (alpha plane) exactly as defined in MPEG-4. The first n frames of the video sequence are encoded as a "set" using Circular Prediction Mapping (CPM), and the remaining frames are encoded using Non-Contractive Interframe Mapping (NCIM). CPM and NCIM accomplish motion estimation and compensation, which exploit the high temporal correlation between adjacent frames of the video sequence. The experimental results with monocular video sequences show promising performance for low-bit-rate coding, such as video-conference applications. We believe the proposed fractal video codec will be a powerful and efficient technique for region-based video sequence coding.

  7. Fractal images induce fractal pupil dilations and constrictions.

    PubMed

    Moon, P; Muday, J; Raynor, S; Schirillo, J; Boydston, C; Fairbanks, M S; Taylor, R P

    2014-09-01

    Fractals are self-similar structures or patterns that repeat at increasingly fine magnifications. Research has revealed fractal patterns in many natural and physiological processes. This article investigates pupillary size over time to determine whether its oscillations demonstrate a fractal pattern. We predict that pupil size over time will fluctuate in a fractal manner, which may be due either to fractal neuronal structure or to fractal properties of the image viewed. We present evidence that low-complexity fractal patterns underlie pupillary oscillations as subjects view spatial fractal patterns. We also present evidence implicating the importance of the autonomic nervous system in these patterns. Using the variational method of the box-counting procedure, we demonstrate that low-complexity fractal patterns are found in changes in pupil size (in millimeters) over time, and our data suggest that these pupillary oscillation patterns do not depend on the fractal properties of the image viewed. Copyright © 2014 Elsevier B.V. All rights reserved.
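
    For readers unfamiliar with the box-counting step, the sketch below estimates the fractal dimension of a 1-D pupil-diameter trace with a plain box count over the signal's graph. The variational refinement used in the study is not reproduced, and the synthetic random-walk signal merely stands in for real pupillometry data.

```python
import numpy as np

def box_counting_dimension(y, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Box-count the graph of a 1-D signal and fit log N(s) against log s."""
    t = np.linspace(0.0, 1.0, len(y))
    y = (y - y.min()) / (y.max() - y.min() + 1e-12)      # normalize to the unit square
    counts = []
    for s in box_sizes:                                  # s boxes per side, side = 1/s
        edges = np.linspace(0.0, 1.0, s + 1)
        ix = np.clip(np.digitize(t, edges) - 1, 0, s - 1)
        iy = np.clip(np.digitize(y, edges) - 1, 0, s - 1)
        counts.append(len(set(zip(ix, iy))))             # occupied boxes
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return slope                                         # N(s) ~ s**D, so the slope is D

# Synthetic stand-in for a pupil-diameter time series in millimeters
rng = np.random.default_rng(0)
pupil_mm = 4.0 + np.cumsum(rng.normal(0.0, 0.01, 4096))
print(box_counting_dimension(pupil_mm))                  # ~1.5 for a Brownian-like trace
```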

  8. The Language of Fractals.

    ERIC Educational Resources Information Center

    Jurgens, Hartmut; And Others

    1990-01-01

    The production and application of images based on fractal geometry are described. Discussed are fractal language groups, fractal image coding, and fractal dialects. Implications for these applications of geometry to mathematics education are suggested. (CW)

  10. Suboptimal fractal coding scheme using iterative transformation

    NASA Astrophysics Data System (ADS)

    Kang, Hyun-Soo; Chung, Jae-won

    2001-05-01

    This paper presents a new fractal coding scheme that finds a suboptimal transformation by performing an iterative encoding process. The optimal transformation can be defined as the transformation generating the attractor closest to the original image. Unfortunately, it is impossible in practice to find the optimal transformation, due to the heavy computational burden. In this paper, however, by means of some new theorems concerning contractive transformations and attractors, it is shown that for some specific cases the optimal or suboptimal transformations can be obtained. The proposed method obtains a suboptimal transformation by performing iterative processes as is done in decoding. Thus, it requires more computation than the conventional method, but it improves the image quality. For a simple case where the optimal transformation can actually be found, the proposed method is experimentally evaluated against both the optimal method and the conventional method. For the general case where the optimal transformation is unavailable due to the heavy computational complexity, the proposed method is also evaluated in comparison with the conventional method.

  11. A Novel Fractal Coding Method Based on M-J Sets

    PubMed Central

    Sun, Yuanyuan; Xu, Rudan; Chen, Lina; Kong, Ruiqing; Hu, Xiaopeng

    2014-01-01

    In this paper, we present a novel fractal coding method with a block classification scheme based on a shared domain block pool. In our method, the domain block pool, called the dictionary, is constructed from fractal Julia sets. The image is encoded by searching for the best-matching domain block with the same BTC (Block Truncation Coding) value in the dictionary. The experimental results show that the scheme is competitive in both encoding speed and reconstruction quality. Particularly for large images, the proposed method avoids the excessive growth in computational complexity of the traditional fractal coding algorithm. PMID:25010686
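
    The dictionary lookup described above keys domain blocks by their BTC (Block Truncation Coding) value. A minimal sketch of such a BTC signature for one block is given below; the exact classification and bucketing used by the authors may differ.

```python
import numpy as np

def btc_signature(block):
    """BTC signature of a block: bitmap of pixels above the block mean,
    plus the two reconstruction levels (means of the low and high groups)."""
    block = block.astype(float)
    m = block.mean()
    bitmap = block >= m
    hi = block[bitmap].mean() if bitmap.any() else m
    lo = block[~bitmap].mean() if (~bitmap).any() else m
    return bitmap.astype(np.uint8), lo, hi

# Blocks sharing the same bitmap would land in the same dictionary bucket,
# so only that bucket has to be searched for the best-matching domain block.
block = np.random.randint(0, 256, (4, 4))
bitmap, lo, hi = btc_signature(block)
print(bitmap, lo, hi)
```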

  12. Study on Huber fractal image compression.

    PubMed

    Jeng, Jyh-Horng; Tseng, Chun-Chieh; Hsieh, Jer-Guang

    2009-05-01

    In this paper, a new similarity measure for fractal image compression (FIC) is introduced. In the proposed Huber fractal image compression (HFIC), the linear Huber regression technique from robust statistics is embedded into the encoding procedure of fractal image compression. When the original image is corrupted by noise, we argue that the fractal image compression scheme should be insensitive to the noise present in the corrupted image. This leads to a new concept of robust fractal image compression. The proposed HFIC is one of our attempts toward the design of robust fractal image compression. The main disadvantage of HFIC is its high computational cost. To overcome this drawback, the particle swarm optimization (PSO) technique is utilized to reduce the searching time. Simulation results show that the proposed HFIC is robust against outliers in the image. Also, the PSO method can effectively reduce the encoding time while retaining the quality of the retrieved image.
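
    To make the robust-fitting idea concrete, the sketch below replaces the usual least-squares fit of the scale and offset (s, o) between a flattened domain block and a range block with a Huber fit computed by iteratively reweighted least squares. The data, the Huber threshold and the IRLS details are illustrative assumptions; the authors' HFIC additionally uses PSO to accelerate the search, which is not shown.

```python
import numpy as np

def huber_fit(d, r, delta=10.0, iters=20):
    """Fit r ~ s*d + o with the Huber loss via iteratively reweighted least squares."""
    A = np.column_stack([d, np.ones_like(d)])
    s, o = np.linalg.lstsq(A, r, rcond=None)[0]          # ordinary least-squares start
    for _ in range(iters):
        res = np.abs(r - (s * d + o))
        res = np.maximum(res, 1e-12)
        w = np.sqrt(np.where(res <= delta, 1.0, delta / res))   # Huber weights
        s, o = np.linalg.lstsq(A * w[:, None], r * w, rcond=None)[0]
    return s, o

rng = np.random.default_rng(1)
d = rng.uniform(0, 255, 64)                  # flattened domain block
r = 0.6 * d + 20 + rng.normal(0, 2, 64)      # matching range block
r[:5] += 120                                 # a few outlier pixels (e.g. impulse noise)
print(huber_fit(d, r))                       # stays close to (0.6, 20) despite outliers
```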

  13. Fractal-based image edge detection

    NASA Astrophysics Data System (ADS)

    Luo, Huiguo; Zhu, Yaoting; Zhu, Guang-Xi; Wan, Faguang; Zhang, Ping

    1993-08-01

    The image edge is an important feature of an image. Usually, a Laplacian or Sobel operator is used to obtain the image edge. In this paper, we use a fractal method to obtain the edge. After introducing the Fractal Brownian Random (FBR) field, we give the definition of the Discrete Fractal Brownian Increase Random (DFBIR) field and discuss its properties; we then apply the DFBIR field to detect the edges of an image. From the parameters H and D of the DFBIR field, we define a measure M = αH + βD. From the M value of each pixel, we can detect the edge of the image.

  14. Fractal Image Informatics: from SEM to DEM

    NASA Astrophysics Data System (ADS)

    Oleschko, K.; Parrot, J.-F.; Korvin, G.; Esteves, M.; Vauclin, M.; Torres-Argüelles, V.; Salado, C. Gaona; Cherkasov, S.

    2008-05-01

    In this paper, we introduce a new branch of Fractal Geometry: Fractal Image Informatics, devoted to the systematic and standardized fractal analysis of images of natural systems. The methods of this discipline are based on the properties of multiscale images of self-affine fractal surfaces. As proved in the paper, the image inherits the scaling and lacunarity of the surface and of its reflectance distribution [Korvin, 2005]. We claim that the fractal analysis of these images must be done without any smoothing, thresholding or binarization. Two new tools of Fractal Image Informatics, firmagram analysis (FA) and generalized lacunarity (GL), are presented and discussed in detail. These techniques are applicable to any kind of image or to any observed positive-valued physical field, and can be used to correlate images. It will be shown, by a modified Grassberger-Hentschel-Procaccia approach [Phys. Lett. 97A, 227 (1983); Physica 8D, 435 (1983)], that GL obeys the same scaling law as the Allain-Cloitre lacunarity [Phys. Rev. A 44, 3552 (1991)] but is free of the problems associated with gliding boxes. Several applications are shown from Soil Physics, Surface Science, and other fields.

  15. Exotic topological order from quantum fractal code

    NASA Astrophysics Data System (ADS)

    Yoshida, Beni

    2014-03-01

    We present a large class of three-dimensional spin models that possess topological order with stability against local perturbations, but are beyond description of topological quantum field theory. Conventional topological spin liquids, on a formal level, may be viewed as condensation of string-like extended objects with discrete gauge symmetries, being at fixed points with continuous scale symmetries. In contrast, ground states of fractal spin liquids are condensation of highly-fluctuating fractal objects with certain algebraic symmetries, corresponding to limit cycles under real-space renormalization group transformations which naturally arise from discrete scale symmetries of underlying fractal geometries. A particular class of three-dimensional models proposed in this paper may potentially saturate quantum information storage capacity for local spin systems.

  16. Pre-Service Teachers' Concept Images on Fractal Dimension

    ERIC Educational Resources Information Center

    Karakus, Fatih

    2016-01-01

    The analysis of pre-service teachers' concept images can provide information about their mental schema of fractal dimension. There is limited research on students' understanding of fractal and fractal dimension. Therefore, this study aimed to investigate the pre-service teachers' understandings of fractal dimension based on concept image. The…

  18. Fractal Movies.

    ERIC Educational Resources Information Center

    Osler, Thomas J.

    1999-01-01

    Because fractal images are by nature very complex, it can be inspiring and instructive to create the code in the classroom and watch the fractal image evolve as the user slowly changes some important parameter or zooms in and out of the image. Uses programming language that permits the user to store and retrieve a graphics image as a disk file.…

  20. MRI Image Processing Based on Fractal Analysis

    PubMed

    Marusina, Mariya Y; Mochalina, Alexandra P; Frolova, Ekaterina P; Satikov, Valentin I; Barchuk, Anton A; Kuznetcov, Vladimir I; Gaidukov, Vadim S; Tarakanov, Segrey A

    2017-01-01

    Background: Cancer is one of the most common causes of human mortality, with about 14 million new cases and 8.2 million deaths reported in 2012. Early diagnosis of cancer through screening allows interventions to reduce mortality. Fractal analysis of medical images may be useful for this purpose. Materials and Methods: In this study, we examined magnetic resonance (MR) images of healthy livers and livers containing metastases from colorectal cancer. The fractal dimension and the Hurst exponent were chosen as diagnostic features for tomographic imaging, using the ImageJ software package for image processing; the FracLac plugin was applied for fractal analysis with a 120x150 pixel area. Calculations of the fractal dimensions of pathological and healthy tissue samples were performed using the box-counting method. Results: In pathological cases (foci formation), the Hurst exponent was less than 0.5 (the region of unstable statistical characteristics). For healthy tissue, the Hurst exponent was greater than 0.5 (the zone of stable characteristics). Conclusions: The study indicates the possibility of employing rapid fractal analysis for the detection of focal lesions of the liver. The Hurst exponent can be used as an important diagnostic characteristic for the analysis of medical images.
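
    As a hedged illustration of the box-counting step (the ImageJ/FracLac workflow itself is not reproduced), the snippet below estimates the box-counting dimension of a binarized region of interest; the Hurst-exponent decision rule from the abstract then reduces to a simple threshold at 0.5. The synthetic mask and the example Hurst value are stand-ins.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Box-counting fractal dimension of a binary 2-D mask."""
    counts = []
    n = min(mask.shape)
    for s in sizes:
        k = n // s                                       # k x k boxes of side s pixels
        boxes = mask[:k * s, :k * s].reshape(k, s, k, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope                                        # N(s) ~ s**(-D)

# Synthetic binarized ROI standing in for a segmented liver region
rng = np.random.default_rng(2)
roi = rng.random((128, 128)) > 0.7
print(box_counting_dimension(roi))

hurst = 0.42                                             # hypothetical Hurst estimate
print("unstable (suspect lesion)" if hurst < 0.5 else "stable (healthy-like)")
```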

  1. Image Segmentation via Fractal Dimension

    DTIC Science & Technology

    1987-12-01

    statistical expectation, K = a proportionality constant, H = the Hurst exponent, in the interval [0,1]. The Hurst exponent is directly related to the fractal dimension of the process being modelled by the relation D = E + 1 - H, where D = the fractal dimension, E = the Euclidean dimension, and H = the Hurst exponent.

  2. Multispectral image fusion based on fractal features

    NASA Astrophysics Data System (ADS)

    Tian, Jie; Chen, Jie; Zhang, Chunhua

    2004-01-01

    Imagery sensors have become an indispensable part of detection and recognition systems. They are widely used in surveillance, navigation, control and guidance. However, different imagery sensors depend on diverse imaging mechanisms, work within diverse ranges of the spectrum, perform diverse functions and have diverse environmental requirements. It is therefore impractical to accomplish the task of detection or recognition with a single imagery sensor under conditions of different circumstances, backgrounds and targets. Fortunately, multi-sensor image fusion has emerged as an important route to solve this problem, and image fusion has become one of the main technical routines used to detect and recognize objects from images. Loss of information is unavoidable during the fusion process, so a central concern of image fusion is how to preserve useful information to the utmost; that is, fusion schemes should be designed to avoid the loss of useful information and to preserve the features helpful to detection. In consideration of these issues, and of the fact that most detection problems amount to distinguishing man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aimed at the recognition of battlefield targets in complicated backgrounds. In this algorithm, source images are first orthogonally decomposed according to wavelet transform theory, and fractal-based detection is then applied to each decomposed image. At this step, natural background and man-made targets are distinguished by means of fractal models that imitate natural objects well. Special fusion operators are employed for areas that contain man-made targets, so that useful information is preserved and target features are accentuated. The final fused image is reconstructed from the

  3. Pyramidal fractal dimension for high resolution images.

    PubMed

    Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2016-07-01

    Fractal analysis (FA) should be able to yield reliable and fast results for high-resolution digital images to be applicable in fields that require immediate outcomes. Triggered by an efficient implementation of FA for binary images, we present three new approaches for fractal dimension (D) estimation of images that utilize image pyramids, namely, the pyramid triangular prism, the pyramid gradient, and the pyramid differences method (PTPM, PGM, PDM). We evaluated the performance of the three new and five standard techniques when applied to images with sizes up to 8192 × 8192 pixels. By using artificial fractal images created by three different generator models as ground truth, we determined the scale ranges with minimum deviations between estimation and theory. All pyramidal methods (PM) yielded reasonable D values for images of all generator models. Especially for images with sizes ≥1024×1024 pixels, the PMs are superior to the investigated standard approaches in terms of accuracy and computation time. A measure of the ability to differentiate images with different intrinsic D values showed not only that the PMs are well suited for all investigated image sizes and preferable to standard methods especially for larger images, but also that the results of standard D estimation techniques are strongly influenced by image size. The fastest results were obtained with the PDM and PGM, followed by the PTPM. In terms of absolute D values, the best-performing standard methods were orders of magnitude slower than the PMs. In conclusion, the new PMs yield high-quality results in short computation times and are therefore eligible methods for fast FA of high-resolution images.

  4. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transform and a fractal coding algorithm. The decompressed image is subtracted from the original to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) with a higher compression ratio compared to typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results for several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
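
    The lossless property comes from entropy-coding the residual between the original image and its lossy reconstruction. The sketch below demonstrates that structure with a crude quantizer standing in for the wavelet-fractal stage and zlib standing in for the Huffman coder; both substitutions are assumptions made only to keep the example short and self-contained.

```python
import numpy as np
import zlib

def lossy_stage(img, step=16):
    """Placeholder for the wavelet + fractal coder: coarse uniform quantization."""
    return (np.round(img.astype(float) / step) * step).clip(0, 255).astype(np.int16)

rng = np.random.default_rng(3)
original = rng.integers(0, 256, (64, 64), dtype=np.int16)

decoded = lossy_stage(original)                     # lossy reconstruction
residual = original - decoded                       # small-amplitude residual image
packed = zlib.compress(residual.tobytes())          # stands in for the Huffman coder

# Decoder side: lossy image + decoded residual reproduces the original exactly
restored = decoded + np.frombuffer(zlib.decompress(packed),
                                   dtype=np.int16).reshape(original.shape)
assert np.array_equal(restored, original)           # i.e. infinite PSNR, lossless
print(len(packed), "bytes spent on the residual")
```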

  5. Fresnel diffraction of fractal grating and self-imaging effect.

    PubMed

    Wang, Junhong; Zhang, Wei; Cui, Yuwei; Teng, Shuyun

    2014-04-01

    Based on the self-similarity property of fractals, two types of fractal gratings are produced according to the production and addition operations of multiple periodic gratings. The Fresnel diffraction of fractal gratings is analyzed theoretically, and general mathematical expressions for the diffraction intensity distributions of fractal gratings are deduced. Gray-scale patterns of the 2D diffraction distributions of fractal gratings are provided through numerical calculations. The diffraction patterns exhibit periodicity along the longitudinal and transverse directions. The 1D diffraction distribution at certain distances shows the same structure as the fractal grating, which indicates that a self-image of the fractal grating is indeed formed in the Fresnel diffraction region. Experimental measurements of the diffraction intensity distribution of fractal gratings with different fractal dimensions and different fractal levels were performed, and self-images of the fractal gratings were obtained successfully. The conclusions of this paper are helpful for the development of applications of fractal gratings.

  6. Fractal image compression: A resolution independent representation for imagery

    NASA Technical Reports Server (NTRS)

    Sloan, Alan D.

    1993-01-01

    A deterministic fractal is an image which has low information content and no inherent scale. Because of their low information content, deterministic fractals can be described with small data sets. They can be displayed at high resolution since they are not bound by an inherent scale. A remarkable consequence follows: fractal images can be encoded at very high compression ratios. A fern image, for example, can be encoded in less than 50 bytes and yet be displayed at increasing resolutions with new levels of detail appearing. The Fractal Transform was discovered in 1988 by Michael F. Barnsley. It is the basis for a new image compression scheme which was initially developed by the author and Michael Barnsley at Iterated Systems. The Fractal Transform effectively solves the problem of finding a fractal which approximates a digital 'real world image'.
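
    The fern mentioned above is the classic example of an image stored as a handful of affine-map coefficients. Below is a standard chaos-game rendering of the Barnsley fern from its four published maps; it illustrates the resolution independence described here (the point cloud can be rasterized at any grid size), although it is the IFS random-iteration algorithm rather than the block-based Fractal Transform itself.

```python
import numpy as np

# Barnsley's four affine maps (a, b, c, d, e, f): (x, y) -> (a*x + b*y + e, c*x + d*y + f)
maps = np.array([
    [ 0.00,  0.00,  0.00, 0.16, 0.0, 0.00],   # stem
    [ 0.85,  0.04, -0.04, 0.85, 0.0, 1.60],   # main frond, repeated ever smaller
    [ 0.20, -0.26,  0.23, 0.22, 0.0, 1.60],   # left leaflet
    [-0.15,  0.28,  0.26, 0.24, 0.0, 0.44],   # right leaflet
])
probs = [0.01, 0.85, 0.07, 0.07]

rng = np.random.default_rng(4)
x, y = 0.0, 0.0
points = []
for _ in range(50_000):                        # chaos game: apply a randomly chosen map
    a, b, c, d, e, f = maps[rng.choice(4, p=probs)]
    x, y = a * x + b * y + e, c * x + d * y + f
    points.append((x, y))

# Rasterize at an arbitrary resolution -- the few map coefficients have no inherent scale
res = 256
grid = np.zeros((res, res), dtype=np.uint8)
pts = np.array(points)
cols = ((pts[:, 0] + 3.0) / 6.0 * (res - 1)).astype(int).clip(0, res - 1)
rows = ((10.0 - pts[:, 1]) / 10.0 * (res - 1)).astype(int).clip(0, res - 1)
grid[rows, cols] = 1
print(grid.sum(), "pixels lit")
```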

  7. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    1999-01-01

    Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.

  10. Fractal analysis: fractal dimension and lacunarity from MR images for differentiating the grades of glioma.

    PubMed

    Smitha, K A; Gupta, A K; Jayasree, R S

    2015-09-07

    Gliomas, heterogeneous tumors originating from glial cells, generally exhibit varied grades and are difficult to differentiate using conventional MR imaging techniques. Although this differentiation is crucial for disease prognosis and treatment, even advanced MR imaging techniques fail to provide a high discriminative power for differentiating malignant tumors from benign ones. A powerful image processing technique applied to these imaging modalities is expected to provide better differentiation. The present study focuses on fractal analysis of fluid-attenuated inversion recovery MR images for the differentiation of glioma, considering the two most important parameters of fractal analysis: fractal dimension and lacunarity. While the fractal dimension assesses the complexity of a fractal object, lacunarity gives an indication of the empty space and the degree of inhomogeneity in the fractal object. The box-counting method, with the preprocessing steps of binarization, dilation and outlining, was used to obtain the fractal dimension and lacunarity in glioma. Statistical analyses such as one-way analysis of variance and receiver operating characteristic (ROC) curve analysis were used to compare the means and to find the discriminative sensitivity of the results. It was found that the lacunarity of low- and high-grade gliomas varies significantly. ROC curve analysis between low- and high-grade glioma yielded 70.3% sensitivity and 66.7% specificity for fractal dimension, and 70.3% sensitivity and 88.9% specificity for lacunarity. The study observes that fractal dimension and lacunarity increase with the grade of glioma, and that lacunarity is helpful in identifying the most malignant grades.
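
    To complement the box-counting dimension, a minimal gliding-box lacunarity estimate is sketched below, Λ(r) = E[M²]/E[M]², where M is the mass inside a sliding r×r box over a binarized image. The preprocessing and parameter choices of the study are not reproduced, and the random mask is only a stand-in for a segmented tumour region.

```python
import numpy as np

def lacunarity(mask, r):
    """Gliding-box lacunarity of a binary mask for box size r."""
    mask = mask.astype(float)
    # Box "mass" of every r x r window via a 2-D cumulative sum (integral image)
    S = np.pad(mask.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    masses = (S[r:, r:] - S[:-r, r:] - S[r:, :-r] + S[:-r, :-r]).ravel()
    m1, m2 = masses.mean(), (masses ** 2).mean()
    return m2 / (m1 ** 2 + 1e-12)              # Lambda(r) = E[M^2] / E[M]^2

rng = np.random.default_rng(5)
mask = rng.random((256, 256)) > 0.8            # stand-in for a binarized tumour ROI
for r in (2, 4, 8, 16):
    print(r, lacunarity(mask, r))              # lacunarity typically falls as r grows
```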

  11. Blind Detection of Region Duplication Forgery Using Fractal Coding and Feature Matching.

    PubMed

    Jenadeleh, Mohsen; Ebrahimi Moghaddam, Mohsen

    2016-05-01

    Digital image forgery detection is important because of its wide use in applications such as medical diagnosis, legal investigations, and entertainment. Copy-move forgery is one of the best-known techniques for region duplication. Many existing copy-move detection algorithms cannot effectively detect, in a blind manner, duplicated regions that are made with powerful image manipulation software like Photoshop. In this study, a new method is proposed for blind detection of manipulations in digital images based on modified fractal coding and feature-vector matching. The proposed method not only detects typical copy-move forgery, but also finds multiple copied forgery regions in images that are subjected to rotation, scaling, reflection, and a mixture of these post-processing operations. The proposed method is robust against tampered images undergoing attacks such as Gaussian blurring, contrast scaling, and brightness adjustment. The experimental results demonstrated the validity and efficiency of the method.

  12. Imaging through diffusive layers using speckle pattern fractal analysis and application to embedded object detection in tissues

    NASA Astrophysics Data System (ADS)

    Tremberger, George, Jr.; Flamholz, A.; Cheung, E.; Sullivan, R.; Subramaniam, R.; Schneider, P.; Brathwaite, G.; Boteju, J.; Marchese, P.; Lieberman, D.; Cheung, T.; Holden, Todd

    2007-09-01

    The absorption effect of the back surface boundary of a diffuse layer was studied via laser-generated reflection speckle patterns. The spatial speckle intensity provided by a laser beam was measured. The speckle data were analyzed in terms of fractal dimension (computed by NIH ImageJ software via the box-counting fractal method) and weak localization theory based on Mie scattering. Bar-code imaging was modeled as binary absorption contrast, and scanning resolution in the millimeter range was achieved for diffusive layers up to thirty transport mean free paths thick. Samples included alumina, porous glass and chicken tissue. Computer simulation was used to study the effect of speckle spatial distribution, and observed fractal dimension differences were ascribed to variance-controlled speckle sizes. Fractal dimension suppressions were observed in samples that had thickness dimensions around ten transport mean free paths. Computer simulation suggested a maximum fractal dimension of about 2 and that subtracting information could lower the fractal dimension. The fractal dimension was shown to be sensitive to sample thickness up to about fifteen transport mean free paths, and embedded objects which modified 20% or more of the effective thickness were shown to be detectable. The box-counting fractal method was supplemented with the Higuchi data-series fractal method, and an application to architectural distortion mammograms was demonstrated. The use of fractals in diffusive analysis would provide a simple language for a dialog between optics experts and mammography radiologists, facilitating the applications of laser diagnostics in tissues.

  13. Fractal dimension of cerebral surfaces using magnetic resonance images

    SciTech Connect

    Majumdar, S.; Prasad, R.R.

    1988-11-01

    The calculation of the fractal dimension of the surface bounded by the grey matter in the normal human brain using axial, sagittal, and coronal cross-sectional magnetic resonance (MR) images is presented. The fractal dimension in this case is a measure of the convolutedness of this cerebral surface. It is proposed that the fractal dimension, a feature that may be extracted from MR images, may potentially be used for image analysis, quantitative tissue characterization, and as a feature to monitor and identify cerebral abnormalities and developmental changes.

  14. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.

    1998-01-01

    Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and

  15. Laser image denoising technique based on multi-fractal theory

    NASA Astrophysics Data System (ADS)

    Du, Lin; Sun, Huayan; Tian, Weiqing; Wang, Shuai

    2014-02-01

    The noise in laser images is complex, including both additive and multiplicative noise. Considering the features of laser images and the basic processing capacity and defects of the common algorithms, this paper introduces fractal theory into the research on laser image denoising. The denoising is implemented mainly through analysis of the singularity exponent of each pixel in fractal space and the features of the multi-fractal spectrum. According to quantitative and qualitative evaluation of the processed images, the laser image processing technique based on fractal theory not only effectively removes the complicated noise of laser images obtained by a range-gated laser active imaging system, but also maintains detail information during denoising. For different laser images, the multi-fractal denoising technique can increase the SNR of the laser image by at least 1-2 dB compared with other denoising techniques, which basically meets the needs of laser image denoising.

  16. Embedded foveation image coding.

    PubMed

    Wang, Z; Bovik, A C

    2001-01-01

    The human visual system (HVS) is highly space-variant in sampling, coding, processing, and understanding. The spatial resolution of the HVS is highest around the point of fixation (foveation point) and decreases rapidly with increasing eccentricity. By taking advantage of this fact, it is possible to remove considerable high-frequency information redundancy from the peripheral regions and still reconstruct a perceptually good quality image. Great success has been obtained previously by a class of embedded wavelet image coding algorithms, such as the embedded zerotree wavelet (EZW) and the set partitioning in hierarchical trees (SPIHT) algorithms. Embedded wavelet coding not only provides very good compression performance, but also has the property that the bitstream can be truncated at any point and still be decoded to recreate a reasonably good quality image. In this paper, we propose an embedded foveation image coding (EFIC) algorithm, which orders the encoded bitstream to optimize foveated visual quality at arbitrary bit-rates. A foveation-based image quality metric, namely, foveated wavelet image quality index (FWQI), plays an important role in the EFIC system. We also developed a modified SPIHT algorithm to improve the coding efficiency. Experiments show that EFIC integrates foveation filtering with foveated image coding and demonstrates very good coding performance and scalability in terms of foveated image quality measurement.

  17. Fractal digital image synthesis and application in security holograms

    NASA Astrophysics Data System (ADS)

    Cao, Hanqiang; Zhu, Guang-Xi; Zhu, Yaoting; Zhang, Zhaoqun; Cao, Yulin; Ge, Hongwei; Li, Xuan

    2000-10-01

    The conventional method to generate a view-angle-combining rainbow hologram uses a camera to record a sequence of pictures of an object. The object can be a model or a real object, and the process costs time and money. In recent years, computer-generated holograms have been investigated intensively because of their wide application range and their advantages in terms of flexibility, accuracy, light weight and cost. In this paper, a new kind of fractal digital hologram is introduced. The fractal digital holograms are produced with the hyper-complex number model, the fractal hyper-texture model and the self-similar image model based on multi-scale Hurst parameters. The fractal digital holograms have been used in security holograms. In a security hologram, the more complex the model is, the more secure the hologram is. The results indicate that fractal digital holograms have good prospects for application in security holograms.

  18. Evaluation of Two Fractal Methods for Magnetogram Image Analysis

    NASA Technical Reports Server (NTRS)

    Stark, B.; Adams, M.; Hathaway, D. H.; Hagyard, M. J.

    1997-01-01

    Fractal and multifractal techniques have been applied to various types of solar data to study the fractal properties of sunspots as well as the distribution of photospheric magnetic fields and the role of random motions on the solar surface in this distribution. Other research includes the investigation of changes in the fractal dimension as an indicator for solar flares. Here we evaluate the efficacy of two methods for determining the fractal dimension of an image data set: the Differential Box Counting scheme and a new method, the Jaenisch scheme. To determine the sensitivity of the techniques to changes in image complexity, various types of constructed images are analyzed. In addition, we apply this method to solar magnetogram data from Marshall Space Flight Center's vector magnetograph.

  20. Fractal-feature distance analysis of contrast-detail phantom image and meaning of pseudo fractal dimension and complexity.

    PubMed

    Imai, K; Ikeda, M; Enchi, Y; Niimi, T

    2009-12-01

    The purposes of our studies were to examine whether the fractal-feature distance deduced from the virtual volume method can simulate observer performance indices, and to investigate the physical meaning of the pseudo fractal dimension and complexity. Contrast-detail (C-D) phantom radiographs were obtained at various mAs values (0.5 - 4.0 mAs) and 140 kVp with a computed radiography system, and the reference image was acquired at 13 mAs. For all C-D images, fractal analysis was conducted using the virtual volume method, which was devised with a fractional Brownian motion model. The fractal-feature distances between the considered and reference images were calculated using the pseudo fractal dimension and complexity. Furthermore, we performed a C-D analysis in which ten radiologists participated, and compared the fractal-feature distances with the image quality figures (IQF). To clarify the physical meaning of the pseudo fractal dimension and complexity, the contrast-to-noise ratio (CNR) and the standard deviation (SD) of image noise were calculated for each mAs value and compared with the pseudo fractal dimension and complexity, respectively. A strong linear correlation was found between the fractal-feature distance and IQF. The pseudo fractal dimensions became larger as CNR increased. Furthermore, a linear correlation was found between the exponential complexity and the image noise SD.

  1. Pyramid image codes

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1990-01-01

    All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.

  2. Person identification using fractal analysis of retina images

    NASA Astrophysics Data System (ADS)

    Ungureanu, Constantin; Corniencu, Felicia

    2004-10-01

    Biometrics is an automated method of recognizing a person based on physiological or behavioral characteristics. Among the features measured are retina scans, voice, and fingerprints. A retina-based biometric involves the analysis of the blood vessels situated at the back of the eye. In this paper we present a method which uses fractal analysis to characterize retina images. The fractal dimension (FD) of the retinal vessels was measured for 20 images, and a different FD value was obtained for each image. This algorithm provides good accuracy and is cheap and easy to implement.

  3. Confocal coded aperture imaging

    DOEpatents

    Tobin, Jr., Kenneth William; Thomas, Jr., Clarence E.

    2001-01-01

    A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and, reconstructing the shadow image into a 3-dimensional image of the every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.
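
    A toy illustration of the correlation-decoding step described in the claim: point sources convolved with a binary aperture give the shadow image, and correlating the shadow with (a balanced version of) the aperture pattern recovers the sources up to the aperture's autocorrelation sidelobes. Real systems use specially designed arrays (e.g. uniformly redundant arrays) and the confocal geometry of this patent, neither of which this SciPy-based sketch models.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(6)
aperture = (rng.random((33, 33)) < 0.5).astype(float)    # random 50%-open mask

obj = np.zeros((64, 64))                                 # scene with two point sources
obj[20, 30] = 1.0
obj[40, 12] = 0.6

# Every point source casts a shadow of the aperture pattern onto the detector
shadow = fftconvolve(obj, aperture, mode="same")

# Decode by correlating the shadow with a zero-mean (balanced) copy of the aperture
decode = aperture - aperture.mean()
recon = fftconvolve(shadow, decode[::-1, ::-1], mode="same")

print(np.unravel_index(np.argmax(recon), recon.shape))   # peak lands near (20, 30)
```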

  4. Coded source neutron imaging

    SciTech Connect

    Bingham, Philip R; Santos-Villalobos, Hector J

    2011-01-01

    Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.

  5. Coded source neutron imaging

    NASA Astrophysics Data System (ADS)

    Bingham, Philip; Santos-Villalobos, Hector; Tobin, Ken

    2011-03-01

    Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100μm and 10μm aperture hole diameters show resolutions matching the hole diameters.
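
    The MTF evaluation mentioned at the end can be sketched for a synthetic blurred edge: differentiate the edge spread function to obtain the line spread function, then take the magnitude of its Fourier transform and normalize. The tilted-edge oversampling used in the actual simulations is omitted, and the Gaussian blur is an assumed stand-in for the simulated system response.

```python
import numpy as np

# Synthetic edge spread function (ESF): an ideal edge blurred by a Gaussian PSF
x = np.arange(256)
psf = np.exp(-0.5 * ((x - 128) / 2.0) ** 2)       # sigma = 2 pixels (assumed blur)
psf /= psf.sum()
esf = np.convolve((x >= 128).astype(float), psf, mode="same")

lsf = np.gradient(esf)                            # line spread function = d(ESF)/dx
lsf /= lsf.sum()

mtf = np.abs(np.fft.rfft(lsf))                    # modulation transfer function
mtf /= mtf[0]                                     # normalize to 1 at zero frequency
freqs = np.fft.rfftfreq(len(lsf), d=1.0)          # cycles per pixel

# Spatial frequency where the MTF falls to 10% (a common resolution criterion)
print(freqs[np.argmax(mtf < 0.1)])
```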

  6. Fractal-based image texture analysis of trabecular bone architecture.

    PubMed

    Jiang, C; Pitt, R E; Bertram, J E; Aneshansley, D J

    1999-07-01

    Fractal-based image analysis methods are investigated to extract textural features related to the anisotropic structure of trabecular bone from X-ray images of cubic bone specimens. Three methods are used to quantify image textural features: power spectrum, Minkowski dimension and mean intercept length. The global fractal dimension is used to describe the overall roughness of the image texture. The anisotropic features formed by the trabeculae are characterised by a fabric ellipse, whose orientation and eccentricity reflect the textural anisotropy of the image. Tests of these methods with synthetic images of known fractal dimension show that the Minkowski dimension provides a more accurate and consistent estimation of the global fractal dimension. Tests on bone X-ray images (eccentricity range 0.25-0.80) indicate that the Minkowski dimension is more sensitive to changes in textural orientation. The results suggest that the Minkowski dimension is a better measure for characterising trabecular bone anisotropy in X-ray images of thick specimens.

  7. Image coding by block prediction of multiresolution subimages.

    PubMed

    Rinaldo, R; Calvagno, G

    1995-01-01

    The redundancy of the multiresolution representation has been clearly demonstrated in the case of fractal images, but it has not been fully recognized and exploited for general images. Fractal block coders have exploited the self-similarity among blocks in images. We devise an image coder in which the causal similarity among blocks of different subbands in a multiresolution decomposition of the image is exploited. In a pyramid subband decomposition, the image is decomposed into a set of subbands that are localized in scale, orientation, and space. The proposed coding scheme consists of predicting blocks in one subimage from blocks in lower resolution subbands with the same orientation. Although our prediction maps are of the same kind of those used in fractal block coders, which are based on an iterative mapping scheme, our coding technique does not impose any contractivity constraint on the block maps. This makes the decoding procedure very simple and allows a direct evaluation of the mean squared error (MSE) between the original and the reconstructed image at coding time. More importantly, we show that the subband pyramid acts as an automatic block classifier, thus making the block search simpler and the block matching more effective. These advantages are confirmed by the experimental results, which show that the performance of our scheme is superior for both visual quality and MSE to that obtainable with standard fractal block coders and also to that of other popular image coders such as JPEG.

  8. Trabecular architecture analysis in femur radiographic images using fractals.

    PubMed

    Udhayakumar, G; Sujatha, C M; Ramakrishnan, S

    2013-04-01

    Trabecular bone is a highly complex anisotropic material that exhibits varying magnitudes of strength in compression and tension. Analysis of the trabecular architectural alterations that manifest as loss of trabecular plates and connections has been shown to yield better estimation of bone strength. In this work, an attempt has been made toward the development of an automated system for investigating trabecular femur bone architecture using fractal analysis. Conventional radiographic femur bone images recorded using standard protocols are used in this study. The compressive and tensile regions in the images are delineated using preprocessing procedures. The delineated images are analyzed using Higuchi's fractal method to quantify the pattern heterogeneity and anisotropy of the trabecular bone structure. The results show that the extracted fractal features are distinct for the compressive and tensile regions of normal and abnormal human femur bone. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.
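
    Higuchi's method, applied here to delineated bone-texture regions, is sketched below for a 1-D signal; the radiograph preprocessing and region delineation are not reproduced, and the random-walk profile is only a stand-in for a bone-texture scan line.

```python
import numpy as np

def higuchi_fd(x, kmax=16):
    """Higuchi fractal dimension of a 1-D signal x."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    mean_lengths = []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)               # subsampled series x[m], x[m+k], ...
            L = np.abs(np.diff(x[idx])).sum() * (N - 1) / ((len(idx) - 1) * k) / k
            Lk.append(L)
        mean_lengths.append(np.mean(Lk))
    # L(k) ~ k**(-D): the slope of log L(k) versus log(1/k) estimates D
    ks = np.arange(1, kmax + 1)
    D, _ = np.polyfit(np.log(1.0 / ks), np.log(mean_lengths), 1)
    return D

rng = np.random.default_rng(7)
profile = np.cumsum(rng.normal(size=2048))         # stand-in for a bone-texture profile
print(higuchi_fd(profile))                         # Brownian-like profiles give D ~ 1.5
```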

  9. Liver ultrasound image classification by using fractal dimension of edge

    NASA Astrophysics Data System (ADS)

    Moldovanu, Simona; Bibicu, Dorin; Moraru, Luminita

    2012-08-01

    Medical ultrasound image edge detection is an important component in increasing the number of applications of segmentation, and hence it has been the subject of many studies in the literature. In this study, we have classified liver ultrasound (US) images by combining the Canny and Sobel edge detectors with fractal analysis in order to provide an indicator of the US image roughness. We intend to provide a classification rule for focal liver lesions as cirrhotic liver, liver hemangioma or healthy liver. For edge detection the Canny and Sobel operators were used. Fractal analysis was applied for texture analysis and classification of focal liver lesions according to the fractal dimension (FD) determined using the box-counting method. To assess the performance and accuracy rate of the proposed method, the contrast-to-noise ratio (CNR) is analyzed.

  10. Fractal image perception provides novel insights into hierarchical cognition.

    PubMed

    Martins, M J; Fischmeister, F P; Puig-Waldmüller, E; Oh, J; Geissler, A; Robinson, S; Fitch, W T; Beisteiner, R

    2014-08-01

    Hierarchical structures play a central role in many aspects of human cognition, prominently including both language and music. In this study we addressed hierarchy in the visual domain, using a novel paradigm based on fractal images. Fractals are self-similar patterns generated by repeating the same simple rule at multiple hierarchical levels. Our hypothesis was that the brain uses different resources for processing hierarchies depending on whether it applies a "fractal" or a "non-fractal" cognitive strategy. We analyzed the neural circuits activated by these complex hierarchical patterns in an event-related fMRI study of 40 healthy subjects. Brain activation was compared across three different tasks: a similarity task, and two hierarchical tasks in which subjects were asked to recognize the repetition of a rule operating transformations either within an existing hierarchical level, or generating new hierarchical levels. Similar hierarchical images were generated by both rules and target images were identical. We found that when processing visual hierarchies, engagement in both hierarchical tasks activated the visual dorsal stream (occipito-parietal cortex, intraparietal sulcus and dorsolateral prefrontal cortex). In addition, the level-generating task specifically activated circuits related to the integration of spatial and categorical information, and with the integration of items in contexts (posterior cingulate cortex, retrosplenial cortex, and medial, ventral and anterior regions of temporal cortex). These findings provide interesting new clues about the cognitive mechanisms involved in the generation of new hierarchical levels as required for fractals. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Wideband fractal antennas for holographic imaging and rectenna applications

    NASA Astrophysics Data System (ADS)

    Bunch, Kyle J.; McMakin, Douglas L.; Sheen, David M.

    2008-04-01

    At Pacific Northwest National Laboratory, wideband antenna arrays have been successfully used to reconstruct three-dimensional images at microwave and millimeter-wave frequencies. Applications of this technology have included portal monitoring, through-wall imaging, and weapons detection. Fractal antennas have been shown to have wideband characteristics due to their self-similar nature (that is, their geometry is replicated at different scales). They further have advantages in providing good characteristics in a compact configuration. We discuss the application of fractal antennas for holographic imaging. Simulation results will be presented. Rectennas are a specific class of antennas in which a received signal drives a nonlinear junction and is retransmitted at either a harmonic frequency or a demodulated frequency. Applications include tagging and tracking objects with a uniquely-responding antenna. It is of interest to consider fractal rectenna because the self-similarity of fractal antennas tends to make them have similar resonance behavior at multiples of the primary resonance. Thus, fractal antennas can be suited for applications in which a signal is reradiated at a harmonic frequency. Simulations will be discussed with this application in mind.

  12. Wideband Fractal Antennas for Holographic Imaging and Rectenna Applications

    SciTech Connect

    Bunch, Kyle J.; McMakin, Douglas L.; Sheen, David M.

    2008-04-18

    At Pacific Northwest National Laboratory, wideband antenna arrays have been successfully used to reconstruct three-dimensional images at microwave and millimeter-wave frequencies. Applications of this technology have included portal monitoring, through-wall imaging, and weapons detection. Fractal antennas have been shown to have wideband characteristics due to their self-similar nature (that is, their geometry is replicated at different scales). They further have advantages in providing good characteristics in a compact configuration. We discuss the application of fractal antennas for holographic imaging. Simulation results will be presented. Rectennas are a specific class of antennas in which a received signal drives a nonlinear junction and is retransmitted at either a harmonic frequency or a demodulated frequency. Applications include tagging and tracking objects with a uniquely-responding antenna. It is of interest to consider fractal rectenna because the self-similarity of fractal antennas tends to make them have similar resonance behavior at multiples of the primary resonance. Thus, fractal antennas can be suited for applications in which a signal is reradiated at a harmonic frequency. Simulations will be discussed with this application in mind.

  13. Fractal Image Filters for Specialized Image Recognition Tasks

    DTIC Science & Technology

    2010-02-11

    The Fractal Geometry of Nature, [24], Mandelbrot argues that random fractals provide geometrical models for naturally occurring shapes and forms...Fractal Properties of Number Systems, Period. Math. Hungar 42 (2001) 51-68. [24] Benoit Mandelbrot, The Fractal Geometry of Nature, W. H. Freeman, San

  14. Fractal analysis for reduced reference image quality assessment.

    PubMed

    Xu, Yong; Liu, Delei; Quan, Yuhui; Le Callet, Patrick

    2015-07-01

    In this paper, multifractal analysis is adapted to reduced-reference image quality assessment (RR-IQA). A novel RR-IQA approach is proposed, which measures the difference in spatial arrangement between the reference image and the distorted image in terms of spatial regularity measured by fractal dimension. An image is first expressed in the Log-Gabor domain. Then, fractal dimensions are computed on each Log-Gabor subband and concatenated as a feature vector. Finally, the extracted features are pooled into the quality score of the distorted image using the l1 distance. Compared with existing approaches, the proposed method measures image quality from the perspective of the spatial distribution of image patterns. The proposed method was evaluated on seven public benchmark data sets. Experimental results have demonstrated the excellent performance of the proposed method in comparison with state-of-the-art approaches.

  15. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Quattrochi, Dale A.; Luvall, Jeffrey C.

    1997-01-01

    Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely scale-independent. Self-similarity is a property of curves or surfaces where each part is indistinguishable from the whole. The fractal dimension D of remote sensing data yields quantitative insight on the spatial complexity and information content contained within these data. Analyses of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). The forested scene behaves as one would expect: larger pixel sizes decrease the complexity of the image as individual clumps of trees are averaged into larger blocks. The increased complexity of the agricultural image with increasing pixel size results from the loss of homogeneous groups of pixels in the large fields to mixed pixels composed of varying combinations of NDVI values that correspond to roads and vegetation. The same process occurs in the urban image to some extent, but the lack of large, homogeneous areas in the high resolution NDVI image means the initially high D value is maintained as pixel size increases. The slope of the fractal dimension-resolution relationship provides indications of how image classification or feature identification will be affected by changes in sensor resolution.

  16. Fractal descriptors for discrimination of microscopy images of plant leaves

    NASA Astrophysics Data System (ADS)

    Silva, N. R.; Florindo, J. B.; Gómez, M. C.; Kolb, R. M.; Bruno, O. M.

    2014-03-01

    This study proposes the application of the fractal descriptors method to the discrimination of microscopy images of plant leaves. Fractal descriptors have been demonstrated to be a powerful discriminative method in image analysis, mainly for the discrimination of natural objects. In fact, these descriptors express the spatial arrangement of pixels inside the texture under different scales, and such arrangements are directly related to physical properties inherent to the material depicted in the image. Here, we employ the Bouligand-Minkowski descriptors. These are obtained by the dilation of a surface mapping the gray-level texture. The classification of the microscopy images is performed by the well-known Support Vector Machine (SVM) method, and we compare the success rate with other texture analysis methods from the literature. The proposed method achieved a correctness rate of 89%, while the second best solution, the Co-occurrence descriptors, yielded only 78%. This clear advantage of fractal descriptors demonstrates the potential of such an approach in the analysis of plant microscopy images.
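
    The Bouligand-Minkowski descriptors can be sketched by dilating the grey-level surface in a 3-D volume and recording the dilation volume at increasing radii, as below. This is an illustrative implementation intended for small texture patches (memory grows with image size times grey-level range); the SVM classification stage is not shown, and the parameter max_radius is an assumption.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def bouligand_minkowski(gray: np.ndarray, max_radius: int = 10) -> np.ndarray:
    """Bouligand-Minkowski descriptors of a grey-level texture patch.

    The image is mapped to a surface in a 3-D volume (x, y, grey level);
    the descriptors are the log dilation volumes log V(r), r = 1..max_radius.
    """
    gray = np.asarray(gray, dtype=int)
    h, w = gray.shape
    depth = gray.max() + 1
    # True everywhere except the surface voxels, which are set to False
    volume = np.ones((h, w, depth + 2 * max_radius), dtype=bool)
    volume[np.arange(h)[:, None], np.arange(w)[None, :], gray + max_radius] = False
    # Euclidean distance of every voxel to the nearest surface voxel
    dist = distance_transform_edt(volume)
    radii = np.arange(1, max_radius + 1)
    return np.log(np.array([(dist <= r).sum() for r in radii]))
```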

  17. Characterization of nanostructured material images using fractal descriptors

    NASA Astrophysics Data System (ADS)

    Florindo, João B.; Sikora, Mariana S.; Pereira, Ernesto C.; Bruno, Odemir M.

    2013-04-01

    This work presents a methodology for the morphological analysis and characterization of nanostructured material images acquired with the FEG-SEM (Field Emission Gun-Scanning Electron Microscopy) technique. The metrics were extracted from the image texture (mathematical surface) by the volumetric fractal descriptors, a methodology based on the Bouligand-Minkowski fractal dimension, which considers the properties of the Minkowski dilation of the surface points. An experiment with galvanostatic anodic titanium oxide samples prepared in oxalic acid solution using different conditions of applied current, oxalic acid concentration and solution temperature was performed. The results demonstrate that the approach is capable of characterizing complex morphological characteristics such as those present in the anodic titanium oxide.

  18. Segmentation of magnetic resonance image using fractal dimension

    NASA Astrophysics Data System (ADS)

    Yau, Joseph K. K.; Wong, Sau-hoi; Chan, Kwok-Leung

    1997-04-01

    In recent years, much research has been conducted in the three-dimensional visualization of medical images. This requires a good segmentation technique. Many early works use first-order and second-order statistics. First-order statistical parameters can be calculated quickly, but their effectiveness is influenced by many factors such as illumination, contrast and random noise of the image. Second-order statistical parameters, such as spatial gray level co-occurrence matrix statistics, take longer to compute but can extract the textural information. In this investigation, two different parameters, namely the entropy and the fractal dimension, are employed to perform segmentation of the magnetic resonance images of the head of a male cadaver. The entropy is calculated from the spatial gray level co-occurrence matrices. The fractal dimension is calculated by the reticular cell counting method. Several regions of the human head are chosen for analysis. They are the bone, gyrus and lobe. Results show that the parameters are able to segment different types of tissue. The entropy gives very good results but it requires very long computation time and a large amount of memory. The performance of the fractal dimension is comparable with that of the entropy. It is simple to estimate and requires less memory.

  19. Multi-modal multi-fractal boundary encoding in object-based image compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2006-08-01

    The compact representation of region boundary contours is key to efficient representation and compression of digital images using object-based compression (OBC). In OBC, regions are coded in terms of their texture, color, and shape. Given the appropriate representation scheme, high compression ratios (e.g., 500:1 <= CR <= 2,500:1) have been reported for selected images. Because a region boundary is often represented with more parameters than the region contents, it is crucial to maximize the boundary compression ratio by reducing these parameters. Researchers have elsewhere shown that cherished boundary encoding techniques such as chain coding, simplicial complexes, or quadtrees, to name but a few, are inadequate to support OBC within the aforementioned CR range. Several existing compression standards such as MPEG support efficient boundary representation, but do not necessarily support OBC at CR >= 500:1 . Siddiqui et al. exploited concepts from fractal geometry to encode and compress region boundaries based on fractal dimension, reporting CR = 286.6:1 in one test. However, Siddiqui's algorithm is costly and appears to contain ambiguities. In this paper, we first discuss fractal dimension and OBC compression ratio, then enhance Siddiqui's algorithm, achieving significantly higher CR for a wide variety of boundary types. In particular, our algorithm smoothes a region boundary B, then extracts its inflection or control points P, which are compactly represented. The fractal dimension D is computed locally for the detrended B. By appropriate subsampling, one efficiently segments disjoint clusters of D values subject to a preselected tolerance, thereby partitioning B into a multifractal. This is accomplished using four possible compression modes. In contrast, previous researchers have characterized boundary variance with one fractal dimension, thereby producing a monofractal. At its most complex, the compressed representation contains P, a spatial marker, and a D value

  20. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  1. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compressions which (at 30-40:1) exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail a complexity that is 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding to which attention is presently given exploits all the advantages of the static VPIC in the reduction of information from an additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  2. Fractal analysis in radiological and nuclear medicine perfusion imaging: a systematic review.

    PubMed

    Michallek, Florian; Dewey, Marc

    2014-01-01

    To provide an overview of recent research in fractal analysis of tissue perfusion imaging, using standard radiological and nuclear medicine imaging techniques including computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET) and single-photon emission computed tomography (SPECT) and to discuss implications for different fields of application. A systematic review of fractal analysis for tissue perfusion imaging was performed by searching the databases MEDLINE (via PubMed), EMBASE (via Ovid) and ISI Web of Science. Thirty-seven eligible studies were identified. Fractal analysis was performed on perfusion imaging of tumours, lung, myocardium, kidney, skeletal muscle and cerebral diseases. Clinically, different aspects of tumour perfusion and cerebral diseases were successfully evaluated including detection and classification. In physiological settings, it was shown that perfusion under different conditions and in various organs can be properly described using fractal analysis. Fractal analysis is a suitable method for quantifying heterogeneity from radiological and nuclear medicine perfusion images under a variety of conditions and in different organs. Further research is required to exploit physiologically proven fractal behaviour in the clinical setting. • Fractal analysis of perfusion images can be successfully performed. • Tumour, pulmonary, myocardial, renal, skeletal muscle and cerebral perfusion have already been examined. • Clinical applications of fractal analysis include tumour and brain perfusion assessment. • Fractal analysis is a suitable method for quantifying perfusion heterogeneity. • Fractal analysis requires further research concerning the development of clinical applications.

  3. Fractal analysis of high-resolution CT images as a tool for quantification of lung diseases

    SciTech Connect

    Uppaluri, R.; Mitsa, T.; Galvin, J.R.

    1995-12-31

    Fractal geometry is increasingly being used to model complex naturally occurring phenomena. There are two types of fractals in nature: geometric fractals and stochastic fractals. The pulmonary branching structure is a geometric fractal and the intensity of its grey scale image is a stochastic fractal. In this paper, the authors attempt to quantify the texture of CT lung images using properties of both types of fractals. A simple algorithm for detecting abnormality in human lungs, based on 2-D and 3-D fractal dimensions, is presented. This method involves calculating the local fractal dimensions, based on intensities, in the 2-D slice to aid edge enhancement. Following this, grey level thresholding is performed and a global fractal dimension, based on structure, for the entire data set is estimated in 2-D and 3-D. High-resolution CT images of normal and abnormal lungs were analyzed. Preliminary results showed that classification of normal and abnormal images could be obtained based on the differences between their global fractal dimensions.

  4. A Digital Image Steganography using Sierpinski Gasket Fractal and PLSB

    NASA Astrophysics Data System (ADS)

    Rupa, Ch.

    2013-09-01

    Information attacks are caused by weaknesses in information security. These attacks affect the non-repudiation and integrity of the security services. In this paper, a novel security approach that can defend the message against information attacks is proposed. In this new approach, the original message is encrypted by the Fractal Sierpinski Gasket (FSG) cryptographic algorithm, and the result is hidden in an image using the penultimate and least significant bit (PLSB) embedding method. This method makes the security more robust than conventional approaches. Important properties of fractals are sensitivity and self-similarity. These can be exploited to produce an avalanche effect. Steganalysis of the proposed method shows that it is resistant to various attacks and stronger than existing steganographic approaches.
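
    For illustration only, the chaos-game construction below generates the Sierpinski gasket itself; it does not reproduce the FSG cryptographic algorithm or the PLSB embedding described in the paper, and the function name and point count are arbitrary choices.

```python
import numpy as np

def sierpinski_gasket(n_points: int = 50_000, seed: int = 0) -> np.ndarray:
    """Generate Sierpinski-gasket points with the chaos game."""
    rng = np.random.default_rng(seed)
    vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
    points = np.empty((n_points, 2))
    p = rng.random(2)
    for i in range(n_points):
        p = (p + vertices[rng.integers(3)]) / 2.0   # jump halfway to a random vertex
        points[i] = p
    return points
```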

  5. Fractal analysis of palmar electronographic images. Medical anthropological perspectives.

    PubMed

    Guja, Cornelia; Voinea, V; Baciu, Adina; Ciuhuţa, M; Crişan, Daniela A

    2008-01-01

    The present paper brings to the medical specialists' attention a possibility of multivalent imagistic investigation: the palmar electrographic method, submitted to a totally new analysis by the fractal method. Its support for information recording is the radiosensitive film. This makes it resemble the radiological investigation, which opened the way to correlating the shape of certain structures of the organism with their function. By the specific electromagnetic impressing of the ultra-photosensitive film, palmar electrography has the advantage of capturing the shape of certain radiative phenomena, generated by certain structures in their functional dynamics, at the level of the human palmar tegument. This makes it resemble the EEG, EKG and EMG investigations. The purpose of this presentation is to highlight a new modality of studying the states of the human organism in its permanent adaptation to the living environment, using a new anthropological, informational vision, by fractal processing and by the couple of concepts system/interface, much closer to reality than present systemic thinking. The human palm, which has a special medical-anthropological relevance, is analysed as a complex adaptive biological and socio-cultural interface between the internal and external environment. The fractal phenomena recorded on the image are ubiquitous in nature, and especially in the living world, and their shapes may be described mathematically and used for decoding their informational laws. They may have very useful implications in the medical act. The paper presents a few introductory elements of fractal theory, and, in the final part, the pursued objectives are concretely shown by grouping the EG images according to certain more important medical-anthropological themes.

  6. Automatic classification for pathological prostate images based on fractal analysis.

    PubMed

    Huang, Po-Whei; Lee, Cheng-Hsiung

    2009-07-01

    Accurate grading for prostatic carcinoma in pathological images is important to prognosis and treatment planning. Since human grading is always time-consuming and subjective, this paper presents a computer-aided system to automatically grade pathological images according to the Gleason grading system, which is the most widespread method for histological grading of prostate tissues. We proposed two feature extraction methods based on fractal dimension to analyze variations of intensity and texture complexity in regions of interest. Each image can be classified into an appropriate grade by using Bayesian, k-NN, and support vector machine (SVM) classifiers, respectively. Leave-one-out and k-fold cross-validation procedures were used to estimate the correct classification rates (CCR). Experimental results show that 91.2%, 93.7%, and 93.7% CCR can be achieved by Bayesian, k-NN, and SVM classifiers, respectively, for a set of 205 pathological prostate images. If our fractal-based feature set is optimized by the sequential floating forward selection method, the CCR can be improved to 94.6%, 94.2%, and 94.6%, respectively, using each of the above three classifiers. Experimental results also show that our feature set is better than the feature sets extracted from multiwavelets, Gabor filters, and gray-level co-occurrence matrix methods because it has a much smaller size and still keeps the most powerful discriminating capability in grading prostate images.

  7. Comparison of the segmentation of ultrasonic image utilizing different fractal parameters

    NASA Astrophysics Data System (ADS)

    Chan, Kwok-Leung

    1995-04-01

    Statistical texture analysis has been commonly applied in the quantitative characterization of ultrasonic images. Recently, another approach has begun to emerge, which uses the fractal dimension. In this investigation, the fractal dimension is estimated from histologically confirmed ultrasonic images using three different approaches: intensity-based, spectrum-based and reticular cell counting. The parameters are then used in image segmentation. The fractal model has the advantage that the parameters generated are stable under transformations of scale and linear transforms of intensity. From the results obtained, the performance of the fractal dimension obtained by the reticular cell counting method is better than that of the other two approaches and comparable to the spatial grey level co-occurrence matrix statistic.

  8. Fractal dimension metric for quantifying noise texture of computed tomography images

    NASA Astrophysics Data System (ADS)

    Khobragade, P.; Fan, Jiahua; Rupcich, Franco; Crotty, Dominic J.; Gilat Schmidt, Taly

    2017-03-01

    This study investigated a fractal dimension algorithm for noise texture quantification in CT images. Quantifying noise in CT images is important for assessing image quality. Noise is typically quantified by calculating noise standard deviation and noise power spectrum (NPS). Different reconstruction kernels and iterative reconstruction approaches affect both the noise magnitude and noise texture. The shape of the NPS can be used as a noise texture descriptor. However, the NPS requires numerous images for calculation and is a vector quantity. This study proposes the metric of fractal dimension to quantify noise texture, because fractal dimension is a single scalar metric calculated from a small number of images. Fractal dimension measures the complexity of a pattern. In this study, the ACR CT phantom was scanned and images were reconstructed using filtered back-projection with three reconstruction kernels: bone, soft and standard. Regions of interest were extracted from the uniform section of the phantom for NPS and fractal dimension calculation. The results demonstrated a mean fractal dimension of 1.86 for the soft kernel, 1.92 for the standard kernel, and 2.16 for the bone kernel. Increasing fractal dimension corresponded to a shift in the NPS towards higher spatial frequencies and a grainier noise appearance. A stable fractal dimension was calculated from two ROIs, compared to the more than 250 ROIs used for NPS calculation. The scalar fractal dimension metric may be a useful noise texture descriptor for evaluating or optimizing reconstruction algorithms.

  9. MR imaging and osteoporosis: fractal lacunarity analysis of trabecular bone.

    PubMed

    Zaia, Annamaria; Eleonori, Roberta; Maponi, Pierluigi; Rossi, Roberto; Murri, Roberto

    2006-07-01

    We develop a method of magnetic resonance (MR) image analysis able to provide parameter(s) sensitive to bone microarchitecture changes in aging, and to osteoporosis onset and progression. The method has been built taking into account fractal properties of many anatomic and physiologic structures. Fractal lacunarity analysis has been used to determine relevant parameter(s) to differentiate among three types of trabecular bone structure (healthy young, healthy perimenopausal, and osteoporotic patients) from lumbar vertebra MR images. In particular, we propose to approximate the lacunarity function by a hyperbola model function that depends on three coefficients, alpha, beta, and gamma, and to compute these coefficients as the solution of a least squares problem. This triplet of coefficients provides a model function that better represents the variation of mass density of pixels in the image considered. Clinical application of this preliminary version of our method suggests that one of the three coefficients, beta, may represent a standard for the evaluation of trabecular bone architecture and a potentially useful parametric index for the early diagnosis of osteoporosis.

  10. Complex Flow Image Velocimetry in Shock Instabilities with Fractal Boundaries

    NASA Astrophysics Data System (ADS)

    Tellez Alvarez, Jackson David; Vila, Teresa

    2017-04-01

    We use an advanced version of correlation particle image velocimetry as used in surface flows (SFIV) [1,2] in order to analyze the complex patterns due to the basic instabilities and boundary conditions, and to relate the production and detection of vortices advected by fast flows with cores of low pressure. These coincide with the 3D lines of strong vorticity or helicity, for example in fast-flowing rivers, in laboratory experiments of environmental hydraulics [3], or in shocks in compressible mixing [4]. The mixing fronts interact with a density interface, producing positive or negative baroclinic structures with varying turbulent cascades [5]. LIF images of the thickness of the mixing zone at the centre of a shock tube allow us to perform multi-fractal analysis on the evolution of the interfaces [6,7]. The interactions of the pressure fronts with balloons filled with gases of various densities also allow a wide range of initial conditions. In the same way, using wakes of fractal grids also modifies the cascade processes [7,8]. The three-dimensional mixing zone, its thickness and topology are important experimental measurements. The three basic cases are: when the shock wave passes from a heavy gas to a light one; from a gas to another of similar density; and from a light gas to a heavy one. We consider body forces and the effect of baroclinic production of vorticity [5]. The Lagrangian statistics and the characterization of the topology used in the SFIV analysis [1,2] are based on the Okubo-Weiss criterion, which is an approximate method of partitioning the topologically distinct regions based on the relative values of Q(x,y) = s(x,y)^2 - ω(x,y)^2, with s(x,y) the local shear and ω(x,y) the local vorticity, which is obtained using DigiFlow [4] in real or Fourier space. In order to evaluate the scale-to-scale transfer of energy, vorticity and helicity (descriptors of great importance in complex flow processes and intermittency), the data from numerical simulations [5,7] are compared with
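
    A minimal sketch of the Okubo-Weiss criterion quoted above, computed from a gridded 2-D velocity field with finite differences, is given below. Here the abstract's s(x,y)^2 is interpreted as the total squared strain (normal plus shear components), which is the usual convention; the function name okubo_weiss is an assumption.

```python
import numpy as np

def okubo_weiss(u: np.ndarray, v: np.ndarray, dx: float = 1.0, dy: float = 1.0):
    """Okubo-Weiss field Q = s^2 - omega^2 for a 2-D velocity field (u, v).

    Rows are taken as the y direction and columns as the x direction;
    Q < 0 marks vorticity-dominated (vortex) regions.
    """
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    s_normal = dudx - dvdy        # normal strain
    s_shear = dvdx + dudy         # shear strain
    omega = dvdx - dudy           # vorticity
    return s_normal ** 2 + s_shear ** 2 - omega ** 2
```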

  11. SAR image post-processing for the estimation of fractal parameters

    NASA Astrophysics Data System (ADS)

    Di Martino, Gerardo; Riccio, Daniele; Ruello, Giuseppe; Zinno, Ivana

    2011-11-01

    In this paper, a fractal-based processing technique for the analysis of SAR images of natural surfaces is presented. Its definition is based on a complete direct imaging model developed by the authors. The application of this innovative algorithm to SAR images makes it possible to obtain complete maps of the two key parameters of a fractal scene: the fractal dimension and the increment standard deviation. The extraction of the fractal parameters is based on the estimation of the power spectral density of the SAR amplitude image. From a theoretical point of view, attention is focused on the retrieval procedure for the increment standard deviation, presented here for the first time. In the last section of the paper, the application of the introduced processing to high resolution SAR images is presented, with the relevant maps of the fractal dimension and of the increment standard deviation.

  12. Fractal analysis of scatter imaging signatures to distinguish breast pathologies

    NASA Astrophysics Data System (ADS)

    Eguizabal, Alma; Laughney, Ashley M.; Krishnaswamy, Venkataramanan; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.; López-Higuera, José M.; Conde, Olga M.

    2013-02-01

    Fractal analysis combined with a label-free scattering technique is proposed for describing the pathological architecture of tumors. Clinicians and pathologists are conventionally trained to classify abnormal features such as structural irregularities or high indices of mitosis. The potential of fractal analysis lies in the fact that it is a morphometric measure of irregular structures, providing a measure of the object's complexity and self-similarity. As cancer is characterized by disorder and irregularity in tissues, this measure could be related to tumor growth. Fractal analysis has been explored for understanding the tumor vasculature network. This work addresses the feasibility of applying fractal analysis to the scattering power map (as a physical modeling) and principal components (as a statistical modeling) provided by a localized reflectance spectroscopic system. Disorder, irregularity and cell size variation in tissue samples are translated into the scattering power and principal component magnitudes, and their fractal dimension is correlated with the pathologist's assessment of the samples. The fractal dimension is computed by applying the box-counting technique. Results show that fractal analysis of ex-vivo fresh tissue samples exhibits separated ranges of fractal dimension that could help a classifier that combines the fractal results with other morphological features. This contrast trend would help in the discrimination of tissues in the intraoperative context and may serve as a useful adjunct to surgeons.

  13. Fractal analysis of AFM images of the surface of Bowman's membrane of the human cornea.

    PubMed

    Ţălu, Ştefan; Stach, Sebastian; Sueiras, Vivian; Ziebarth, Noël Marysa

    2015-04-01

    The objective of this study is to further investigate the ultrastructural details of the surface of Bowman's membrane of the human cornea, using atomic force microscopy (AFM) images. One representative image acquired of Bowman's membrane of a human cornea was investigated. The three-dimensional (3-D) surface of the sample was imaged using AFM in contact mode, while the sample was completely submerged in optisol solution. Height and deflection images were acquired at multiple scan lengths using the MFP-3D AFM system software (Asylum Research, Santa Barbara, CA), based in IGOR Pro (WaveMetrics, Lake Oswego, OR). A novel approach, based on computational algorithms for fractal analysis of surfaces applied for AFM data, was utilized to analyze the surface structure. The surfaces revealed a fractal structure at the nanometer scale. The fractal dimension, D, provided quantitative values that characterize the scale properties of surface geometry. Detailed characterization of the surface topography was obtained using statistical parameters, in accordance with ISO 25178-2: 2012. Results obtained by fractal analysis confirm the relationship between the value of the fractal dimension and the statistical surface roughness parameters. The surface structure of Bowman's membrane of the human cornea is complex. The analyzed AFM images confirm a fractal nature of the surface, which is not taken into account by classical surface statistical parameters. Surface fractal dimension could be useful in ophthalmology to quantify corneal architectural changes associated with different disease states to further our understanding of disease evolution.

  14. A novel fractal image compression scheme with block classification and sorting based on Pearson's correlation coefficient.

    PubMed

    Wang, Jianji; Zheng, Nanning

    2013-09-01

    Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from the high computational complexity in encoding. Although many schemes are published to speed up encoding, they do not easily satisfy the encoding time or the reconstructed image quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched in the selected domain set in which these APCCs are closer to APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
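
    The APCC itself is simply the absolute value of the Pearson correlation between two equal-sized blocks; a minimal sketch is given below. The block classification and sorting machinery of the proposed scheme is not reproduced, and the function name apcc is an assumption.

```python
import numpy as np

def apcc(range_block: np.ndarray, domain_block: np.ndarray) -> float:
    """Absolute Pearson's correlation coefficient between two equal-sized
    blocks, the quantity used for block matching in the APCC-based scheme."""
    r = range_block.ravel().astype(float)
    d = domain_block.ravel().astype(float)
    rc = r - r.mean()
    dc = d - d.mean()
    denom = np.sqrt((rc ** 2).sum() * (dc ** 2).sum())
    if denom == 0.0:                     # at least one block is flat
        return 0.0
    return abs(float((rc * dc).sum() / denom))
```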

  15. Analysis of fractal dimensions of rat bones from film and digital images

    NASA Technical Reports Server (NTRS)

    Pornprasertsuk, S.; Ludlow, J. B.; Webber, R. L.; Tyndall, D. A.; Yamauchi, M.

    2001-01-01

    OBJECTIVES: (1) To compare the effect of two different intra-oral image receptors on estimates of fractal dimension; and (2) to determine the variations in fractal dimensions between the femur, tibia and humerus of the rat and between their proximal, middle and distal regions. METHODS: The left femur, tibia and humerus from 24 4-6-month-old Sprague-Dawley rats were radiographed using intra-oral film and a charge-coupled device (CCD). Films were digitized at a pixel density comparable to the CCD using a flat-bed scanner. Square regions of interest were selected from proximal, middle, and distal regions of each bone. Fractal dimensions were estimated from the slope of regression lines fitted to plots of log power against log spatial frequency. RESULTS: The fractal dimension estimates from digitized films were significantly greater than those produced from the CCD (P=0.0008). Estimated fractal dimensions of the three types of bone were not significantly different (P=0.0544); however, the three regions of bones were significantly different (P=0.0239). The fractal dimensions estimated from radiographs of the proximal and distal regions of the bones were lower than comparable estimates obtained from the middle region. CONCLUSIONS: Different types of image receptors significantly affect estimates of fractal dimension. There was no difference in the fractal dimensions of the different bones but the three regions differed significantly.
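
    The power-spectrum estimate of fractal dimension described in METHODS can be sketched as below for a square ROI. The conversion from spectral slope to dimension uses the commonly cited fBm-surface relation D = (8 - beta)/2, where the radially averaged power spectrum falls off as f^(-beta); the exact calibration used in the study may differ.

```python
import numpy as np

def spectral_fractal_dimension(roi: np.ndarray) -> float:
    """Fractal dimension from the slope of log power vs log spatial
    frequency of a square, zero-mean ROI (radially averaged spectrum)."""
    roi = np.asarray(roi, dtype=float)
    roi = roi - roi.mean()
    power = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2
    n = roi.shape[0]
    y, x = np.indices(power.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)      # radial frequency bins
    radial_power = np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, n // 2)                          # skip DC and corner bins
    slope, _ = np.polyfit(np.log(freqs), np.log(radial_power[freqs]), 1)
    beta = -slope
    return (8.0 - beta) / 2.0
```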

  17. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  18. Intelligent fuzzy approach for fast fractal image compression

    NASA Astrophysics Data System (ADS)

    Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila

    2014-12-01

    Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm was proposed to reduce the MSE computations of FIC. In the first phase, range and domain blocks are arranged based on an edge property. In the second, an imperialist competitive algorithm (ICA) is used according to the classified blocks. For maintaining the quality of the retrieved image and accelerating the algorithm, we divided the solutions into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results thus achieved exhibit performance better than genetic algorithm (GA)-based and full-search algorithms in terms of decreasing the number of MSE computations. The proposed algorithm reduced the number of MSE computations and ran about 463 times faster than the full-search algorithm, while the retrieved image quality did not change considerably.

  19. Fractal morphology, imaging and mass spectrometry of single aerosol particles in flight (CXIDB ID 16)

    DOE Data Explorer

    Loh, N. Duane

    2012-06-20

    This deposition includes the aerosol diffraction images used for phasing, fractal morphology, and time-of-flight mass spectrometry. Files in this deposition are ordered in subdirectories that reflect the specifics.

  20. Future trends in image coding

    NASA Astrophysics Data System (ADS)

    Habibi, Ali

    1993-01-01

    The objective of this article is to present a discussion on the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and developments, the milestones in advancement of technology and the success of upcoming commercial products in the marketplace, which will be the main factors in setting the stage for the future of image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in developments of theory, software, and hardware coupled with the future needs for use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, predict the future state of image coding. What seems to be certain today is the growing need for bandwidth compression. Television uses a technology that is half a century old and is ready to be replaced by high-definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth-compressed imagery the cheaper and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activities in development of theory, software, special purpose chips and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.

  1. Spatial and radiometric characterization of multi-spectrum satellite images through multi-fractal analysis

    NASA Astrophysics Data System (ADS)

    Alonso, Carmelo; Tarquis, Ana M.; Zúñiga, Ignacio; Benito, Rosa M.

    2017-03-01

    Several studies have shown that vegetation indexes can be used to estimate root zone soil moisture. Earth surface images, obtained by high-resolution satellites, presently give a lot of information on these indexes, based on data from several wavelengths. Because of the potential capacity for systematic observations at various scales, remote sensing technology extends the possible data archives from the present time to several decades back. Because of this advantage, enormous efforts have been made by researchers and application specialists to delineate vegetation indexes from local scale to global scale by applying remote sensing imagery. In this work, four band images involved in these vegetation indexes, taken by the Ikonos-2 and Landsat-7 satellites over the same geographic location, have been considered to study the effect of both spatial (pixel size) and radiometric (number of bits coding the image) resolution on these wavelength bands as well as on two vegetation indexes: the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI). To do so, a multi-fractal analysis of these multi-spectral images was applied to each of these bands and to the two derived indexes. The results showed that spatial resolution has a similar scaling effect in the four bands, but radiometric resolution has a larger influence in the blue and green bands than in the red and near-infrared bands. The NDVI showed a higher sensitivity to the radiometric resolution than the EVI. Both were equally affected by the spatial resolution. Of the two factors, the spatial resolution has the larger impact on the multi-fractal spectrum for all the bands and the vegetation indexes. This information should be taken into account when vegetation indexes based on different satellite sensors are obtained.
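
    For reference, the two vegetation indexes analyzed here are computed per pixel from the band data roughly as below; bands are assumed to be in reflectance units, and the EVI coefficients shown are the usual MODIS-style values, which may differ from those used in the study.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-12)   # small epsilon avoids division by zero

def evi(nir: np.ndarray, red: np.ndarray, blue: np.ndarray,
        g: float = 2.5, c1: float = 6.0, c2: float = 7.5, l: float = 1.0) -> np.ndarray:
    """Enhanced Vegetation Index with the usual MODIS-style coefficients."""
    nir, red, blue = (b.astype(float) for b in (nir, red, blue))
    return g * (nir - red) / (nir + c1 * red - c2 * blue + l)
```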

  2. Border extrapolation using fractal attributes in remote sensing images

    NASA Astrophysics Data System (ADS)

    Cipolletti, M. P.; Delrieux, C. A.; Perillo, G. M. E.; Piccolo, M. C.

    2014-01-01

    In the management, monitoring and rational use of natural resources, precise and up-to-date information is essential. Satellite images have become an attractive option for quantitative data extraction and morphologic studies, assuring a wide coverage without exerting negative environmental influence over the study area. However, the precision of such practice is limited by the spatial resolution of the sensors and the additional processing algorithms. The use of high resolution imagery (e.g., Ikonos) is very expensive for studies involving large geographic areas or requiring long term monitoring, while the use of less expensive or freely available imagery limits the geographic accuracy and physical precision that may be obtained. We developed a methodology for accurate border estimation that can be used to establish high quality measurements with low resolution imagery. The method is based on the original theory by Richardson, taking advantage of the fractal nature of geographic features. The area of interest is downsampled at different scales and, at each scale, the border is segmented and measured. Finally, a regression of the dependence of the measured length on scale is computed, which then allows for a precise extrapolation of the expected length at scales much finer than those originally available. The method is tested with both synthetic and satellite imagery, producing accurate results in both cases.
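
    The regression-and-extrapolation step can be sketched as below; the per-scale segmentation and border measurement are assumed to have been done already, and the numbers in the usage comment are purely hypothetical.

```python
import numpy as np

def extrapolate_length(scales, lengths, target_scale):
    """Richardson-style extrapolation of a border length.

    scales  : pixel sizes (e.g. metres/pixel) at which the border was measured
    lengths : measured border lengths at those scales
    Fits log(length) = a + b*log(scale) and evaluates it at target_scale.
    """
    scales = np.asarray(scales, dtype=float)
    lengths = np.asarray(lengths, dtype=float)
    b, a = np.polyfit(np.log(scales), np.log(lengths), 1)   # slope b, intercept a
    return float(np.exp(a + b * np.log(target_scale)))

# Hypothetical usage: lengths measured on 30, 60, 120 and 240 m/pixel imagery,
# extrapolated to 1 m/pixel:
# extrapolate_length([30, 60, 120, 240], [152_000, 143_500, 131_000, 118_000], 1.0)
```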

  3. Local fuzzy fractal dimension and its application in medical image processing.

    PubMed

    Zhuang, Xiaodong; Meng, Qingchun

    2004-09-01

    The local fuzzy fractal dimension (LFFD) is proposed to extract local fractal feature of medical images. The definition of LFFD is an extension of the pixel-covering method by incorporating the fuzzy set. Multi-feature edge detection is implemented with the LFFD and the Sobel operator. The LFFD can also serve as a characteristic of motion in medical image sequences. The experimental results show that the LFFD is an important feature of edge areas in medical images and can provide information for segmentation of echocardiogram image sequences.

  4. Simple fractal method of assessment of histological images for application in medical diagnostics

    PubMed Central

    2010-01-01

    We propose a new method of assessment of histological images for medical diagnostics. The 2-D image is preprocessed to form 1-D landscapes or a 1-D signature of the image contour, and then their complexity is analyzed using Higuchi's fractal dimension method. The method may have broad medical application, from choosing implant materials to differentiation between benign masses and malignant breast tumors. PMID:21134258

  5. On the fractal distribution of primes and prime-indexed primes by the binary image analysis

    NASA Astrophysics Data System (ADS)

    Cattani, Carlo; Ciancio, Armando

    2016-10-01

    In this paper, the distribution of primes and prime-indexed primes (PIPs) is studied by mapping primes into a binary image which visualizes the distribution of primes. These images show that the distribution of primes (and PIPs) is similar to a Cantor dust; moreover, the self-similarity with respect to the order of PIPs (already proven in Batchko (2014)) can be seen as an invariance of the binary images. The index of primes plays the same role as the scale for fractals, so that with respect to the index the distribution of prime-indexed primes is characterized by self-similarity, like any other fractal. In particular, in order to single out the scale dependence, the PIPs fractal distribution will be evaluated by restricting attention to two parameters, fractal dimension (δ) and lacunarity (λ), which are usually used to measure the fractal nature. Because of the invariance of the corresponding binary plots, the fractal dimension and lacunarity of the prime distribution are invariant with respect to the index of PIPs.

  6. Image characterization by fractal descriptors in variational mode decomposition domain: Application to brain magnetic resonance

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-08-01

    The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier-spectrum-based fractal descriptors are estimated at specific scales and directions to characterize the image. The support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal brain magnetic resonance images (MRI) and abnormal ones affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, as well as those estimated directly from the original image.

  7. Fractal scaling of apparent soil moisture estimated from vertical planes of Vertisol pit images

    NASA Astrophysics Data System (ADS)

    Cumbrera, Ramiro; Tarquis, Ana M.; Gascó, Gabriel; Millán, Humberto

    2012-07-01

    Image analysis could be a useful tool for investigating the spatial patterns of apparent soil moisture at multiple resolutions. The objectives of the present work were (i) to define apparent soil moisture patterns from vertical planes of Vertisol pit images and (ii) to describe the scaling of apparent soil moisture distribution using fractal parameters. Twelve soil pits (0.70 m long × 0.60 m wide × 0.30 m deep) were excavated on a bare Mazic Pellic Vertisol. Six of them were excavated in April/2011 and six pits were established in May/2011 after 3 days of a moderate rainfall event. Digital photographs were taken from each Vertisol pit using a Kodak™ digital camera. The mean image size was 1600 × 945 pixels, with one physical pixel ≈ 373 μm on the photographed soil pit. Each soil image was analyzed using two fractal scaling exponents, box counting (capacity) dimension (DBC) and interface fractal dimension (Di), and three prefractal scaling coefficients, the total number of boxes intercepting the foreground pattern at a unit scale (A), fractal lacunarity at the unit scale (Λ1) and Shannon entropy at the unit scale (S1). All the scaling parameters identified significant differences between both sets of spatial patterns. Fractal lacunarity was the best discriminator between apparent soil moisture patterns. Soil image interpretation with fractal exponents and prefractal coefficients can be incorporated within a site-specific agriculture toolbox. While fractal exponents convey information on space filling characteristics of the pattern, prefractal coefficients represent the investigated soil property as seen through a higher resolution microscope. In spite of some computational and practical limitations, image analysis of apparent soil moisture patterns could be used in connection with traditional soil moisture sampling, which always yields point estimates.

  8. Fractal signature and lacunarity in the measurement of the texture of trabecular bone in clinical CT images.

    PubMed

    Dougherty, G; Henebry, G M

    2001-07-01

    Fractal analysis is a method of characterizing complex shapes such as the trabecular structure of bone. Numerous algorithms for estimating fractal dimension have been described, but the Fourier power spectrum method is particularly applicable to self-affine fractals, and facilitates corrections for the effects of noise and blurring in an image. We found that it provided accurate estimates of fractal dimension for synthesized fractal images. For natural texture images fractality is limited to a range of scales, and the fractal dimension as a function of spatial frequency presents as a fractal signature. We found that the fractal signature was more successful at discriminating between these textures than either the global fractal dimension or other metrics such as the mean width and root-mean-square width of the spectral density plots. Different natural textures were also readily distinguishable using lacunarity plots, which explicitly characterize the average size and spatial organization of structural sub-units within an image. The fractal signatures of small regions of interest (32x32 pixels), computed in the frequency domain after corrections for imaging system noise and MTF, were able to characterize the texture of vertebral trabecular bone in CT images. Even small differences in texture due to acquisition slice thickness resulted in measurably different fractal signatures. These differences were also readily apparent in lacunarity plots, which indicated that a slice thickness of 1 mm or less is necessary if essential architectural information is not to be lost. Since lacunarity measures gap size and is not predicated on fractality, it may be particularly useful for characterizing the texture of trabecular bone.
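
    Since lacunarity recurs in several of these records, a minimal gliding-box lacunarity sketch for a binary texture (for example, thresholded trabecular bone) is given below. It does not reproduce the frequency-domain fractal signature or the noise/MTF corrections described above; box sizes are arbitrary, and the pattern is assumed to contain at least one foreground pixel.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view   # NumPy >= 1.20

def gliding_box_lacunarity(binary, box_sizes=(2, 4, 8, 16, 32)):
    """Gliding-box lacunarity Lambda(r) = <M^2> / <M>^2, where M is the
    number of foreground pixels in an r x r box slid over the image."""
    binary = np.asarray(binary, dtype=float)
    result = {}
    for r in box_sizes:
        windows = sliding_window_view(binary, (r, r))   # all r x r box positions
        masses = windows.sum(axis=(2, 3))               # box masses M
        result[r] = float(np.mean(masses ** 2) / np.mean(masses) ** 2)
    return result
```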

  9. Scalable coding of encrypted images.

    PubMed

    Zhang, Xinpeng; Feng, Guorui; Ren, Yanli; Qian, Zhenxing

    2012-06-01

    This paper proposes a novel scheme of scalable coding for encrypted images. In the encryption phase, the original pixel values are masked by a modulo-256 addition with pseudorandom numbers that are derived from a secret key. After decomposing the encrypted data into a downsampled subimage and several data sets with a multiple-resolution construction, an encoder quantizes the subimage and the Hadamard coefficients of each data set to reduce the data amount. Then, the data of quantized subimage and coefficients are regarded as a set of bitstreams. At the receiver side, while a subimage is decrypted to provide the rough information of the original content, the quantized coefficients can be used to reconstruct the detailed content with an iteratively updating procedure. Because of the hierarchical coding mechanism, the principal original content with higher resolution can be reconstructed when more bitstreams are received.
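
    The encryption step described above is simple to illustrate: each pixel is masked by adding a key-derived pseudorandom byte modulo 256, and the receiver removes the mask by modular subtraction. A minimal sketch (Python/NumPy); the keyed generator below is a stand-in for illustration, not the scheme's actual pseudorandom number source.

      import numpy as np

      def keystream(key, shape):
          # stand-in pseudorandom byte stream derived from a secret key (not the paper's PRNG)
          rng = np.random.default_rng(np.frombuffer(key.encode(), dtype=np.uint8))
          return rng.integers(0, 256, size=shape, dtype=np.uint8)

      def encrypt(img, key):
          """Mask every pixel by modulo-256 addition of a key-derived pseudorandom byte."""
          return ((img.astype(np.uint16) + keystream(key, img.shape)) % 256).astype(np.uint8)

      def decrypt(enc, key):
          """Remove the mask by modulo-256 subtraction with the same keystream."""
          return ((enc.astype(np.int16) - keystream(key, enc.shape)) % 256).astype(np.uint8)

      img = np.random.default_rng(2).integers(0, 256, (8, 8), dtype=np.uint8)
      assert np.array_equal(decrypt(encrypt(img, "secret"), "secret"), img)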

  10. Fractal Dimension Estimation of RGB Color Images Using Maximum Color Distance

    NASA Astrophysics Data System (ADS)

    Zhao, Xin; Wang, Xingyuan

    2016-08-01

    Natural images exhibit a high degree of complexity, randomness and irregularity in color and texture; however, fractals can be an effective tool to describe various irregular phenomena in nature. Fractal dimensions are important because they can be defined in connection with real-world data, and they can be measured approximately by means of experiments. In this paper, we propose a fractal dimension estimation method for RGB color images. In the proposed method, we present a hyper-surface partition approach which treats the hyper-surface as continuous and divides the image into non-overlapping blocks. We also define a counting method in the color domain. To validate the proposed method, experiments were carried out on two types of color images: synthesized fractal images and natural RGB color images. The experimental results demonstrate that the proposed method is effective and efficient. The behavior of the proposed method on rescaled images is also shown in the paper. It can serve as a reliable FD estimation approach for RGB color images.

  11. Image coding with geometric wavelets.

    PubMed

    Alani, Dror; Averbuch, Amir; Dekel, Shai

    2007-01-01

    This paper describes a new and efficient method for low bit-rate image coding which is based on recent developments in the theory of multivariate nonlinear piecewise polynomial approximation. It combines a binary space partition scheme with geometric wavelet (GW) tree approximation so as to efficiently capture curve singularities and provide a sparse representation of the image. The GW method successfully competes with state-of-the-art wavelet methods such as the EZW, SPIHT, and EBCOT algorithms. We report a gain of about 0.4 dB over the SPIHT and EBCOT algorithms at a bit rate of 0.0625 bits per pixel (bpp). It also outperforms other recent methods that are based on "sparse geometric representation." For example, we report a gain of 0.27 dB over the Bandelets algorithm at 0.1 bpp. Although the algorithm is computationally intensive, its time complexity can be significantly reduced by collecting a "global" GW n-term approximation to the image from a collection of GW trees, each constructed separately over tiles of the image.

  12. A Comparison of Local Variance, Fractal Dimension, and Moran's I as Aids to Multispectral Image Classification

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    2004-01-01

    The accuracy of traditional multispectral maximum-likelihood image classification is limited by the skewed statistical distributions of reflectances from the complex heterogeneous mixture of land cover types in urban areas. This work examines the utility of local variance, fractal dimension, and Moran's I index of spatial autocorrelation in segmenting multispectral satellite imagery. Tools available in the Image Characterization and Modeling System (ICAMS) were used to analyze Landsat 7 imagery of Atlanta, Georgia. Although segmentation of panchromatic images is possible using indicators of spatial complexity, different land covers often yield similar values of these indices. Better results are obtained when a surface of local fractal dimension or spatial autocorrelation is combined as an additional layer in a supervised maximum-likelihood multispectral classification. The addition of fractal dimension measures is particularly effective at resolving land cover classes within urbanized areas, as compared to per-pixel spectral classification techniques.
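
    Of the three indicators compared above, Moran's I has the most compact definition: with symmetric spatial weights w_ij, I = (N / sum of w_ij) * sum of w_ij (x_i - mean)(x_j - mean) / sum of (x_i - mean)^2. A sketch for a single band with rook-adjacency weights (Python/NumPy; illustrative, not the ICAMS implementation):

      import numpy as np

      def morans_i(band):
          """Moran's I for a 2-D array using rook (4-neighbour) contiguity weights."""
          z = band - band.mean()
          # sum of w_ij * z_i * z_j over horizontally and vertically adjacent pairs,
          # each unordered pair counted twice to match the symmetric weight matrix
          cross = 2 * ((z[:, :-1] * z[:, 1:]).sum() + (z[:-1, :] * z[1:, :]).sum())
          n_links = 2 * (band.shape[0] * (band.shape[1] - 1) + (band.shape[0] - 1) * band.shape[1])
          return (band.size / n_links) * cross / (z ** 2).sum()

      rng = np.random.default_rng(3)
      print(morans_i(rng.standard_normal((64, 64))))                   # near 0: no autocorrelation
      print(morans_i(np.add.outer(np.arange(64.0), np.arange(64.0))))  # near 1: smooth ramp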

  14. Improved triangular prism methods for fractal analysis of remotely sensed images

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Fung, Tung; Leung, Yee

    2016-05-01

    Feature extraction has been a major area of research in remote sensing, and the fractal feature is a natural characterization of complex objects across scales. Extending the modified triangular prism (MTP) method, we systematically discuss three factors closely related to the estimation of fractal dimensions of remotely sensed images: (F1) the number of steps, (F2) the step size, and (F3) the estimation accuracy of the facet areas of the triangular prisms. Differing from existing improved algorithms that consider these factors separately, we take all factors into account simultaneously to construct three new algorithms, namely the modified eight-pixel algorithm, the four-corner MTP and the moving-average MTP. Numerical experiments based on 4000 generated images show their superior performance over existing algorithms: our algorithms not only overcome the limitation on image size suffered by existing algorithms but also obtain similar average fractal dimensions with a smaller standard deviation, only 50% of that of existing algorithms for images with high fractal dimensions. In the real-life application, our algorithms are more likely to obtain fractal dimensions within the theoretical range. Thus, the fractal nature uncovered by our algorithms is more reasonable in quantifying the complexity of remotely sensed images. Despite the similar performance of the three new algorithms, the moving-average MTP can mitigate the sensitivity of the MTP to noise and extreme values. Based on the numerical and real-life case studies, we examine the effect of the three factors (F1)-(F3) and demonstrate that they can be considered simultaneously to improve the performance of the MTP method.
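
    For orientation, the basic triangular prism construction that the improved algorithms above build on can be sketched as follows: at step size s, the four corner pixels of each s-by-s cell and their mean at the cell centre define four triangular facets, the total facet area A(s) is regressed against s on log-log axes, and D is roughly 2 minus the slope. This is only the classic method (Python/NumPy, illustrative), not the eight-pixel, four-corner or moving-average variants proposed in the record.

      import numpy as np

      def tri_area(p, q, r):
          """Area of 3-D triangles given vertex arrays of shape (..., 3)."""
          return 0.5 * np.linalg.norm(np.cross(q - p, r - p), axis=-1)

      def triangular_prism_dimension(z, steps=(1, 2, 4, 8, 16)):
          """Classic triangular prism FD estimate: total facet area A(s) ~ s**(2 - D)."""
          areas = []
          for s in steps:
              a, b = z[:-s:s, :-s:s], z[:-s:s, s::s]      # four corner heights of each cell
              c, d = z[s::s, :-s:s], z[s::s, s::s]
              m = (a + b + c + d) / 4.0                   # centre height of the "prism"

              def pt(vals, dy, dx):                       # (x, y, height) vertices for the cell grid
                  yy, xx = np.indices(vals.shape)
                  return np.stack([(xx + dx) * s, (yy + dy) * s, vals], axis=-1)

              A, B = pt(a, 0, 0), pt(b, 0, 1)
              C, D = pt(c, 1, 0), pt(d, 1, 1)
              M = pt(m, 0.5, 0.5)
              areas.append((tri_area(A, B, M) + tri_area(B, D, M)
                            + tri_area(D, C, M) + tri_area(C, A, M)).sum())
          slope = np.polyfit(np.log(steps), np.log(areas), 1)[0]
          return 2.0 - slope

      rng = np.random.default_rng(4)
      print(triangular_prism_dimension(rng.standard_normal((257, 257))))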

  15. Document image retrieval through word shape coding.

    PubMed

    Lu, Shijian; Li, Linlin; Tan, Chew Lim

    2008-11-01

    This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.

  16. Spatial transform coding of color images.

    NASA Technical Reports Server (NTRS)

    Pratt, W. K.

    1971-01-01

    The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.

  17. Fractal Analysis of Laplacian Pyramidal Filters Applied to Segmentation of Soil Images

    PubMed Central

    de Castro, J.; Méndez, A.; Tarquis, A. M.

    2014-01-01

    The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters which is uniparametric in the classical case, when the length of the filter is 5. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges in the case of a single parameter a. These fractal filters lose less energy in every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm threshold values, and conclude that our algorithm produces reliable test results. PMID:25114957
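
    The single-parameter family of length-5 pyramid filters referred to above is usually written as w = [1/4 - a/2, 1/4, a, 1/4, 1/4 - a/2] (the classical Burt-Adelson generating kernel), with a near 0.4 giving the Gaussian-like member. A minimal sketch of one Laplacian-pyramid level built from it (Python/NumPy; the thresholding and porosity evaluation of the paper are not reproduced, and a crude nearest-neighbour expansion replaces the filtered EXPAND step):

      import numpy as np

      def kernel5(a=0.4):
          """Separable 5x5 generating kernel built from w = [1/4 - a/2, 1/4, a, 1/4, 1/4 - a/2]."""
          w = np.array([0.25 - a / 2, 0.25, a, 0.25, 0.25 - a / 2])
          return np.outer(w, w)

      def reduce_(img, a=0.4):
          """One REDUCE step: blur with the parametric kernel, then subsample by 2."""
          k = kernel5(a)
          pad = np.pad(img, 2, mode="reflect")
          blurred = np.zeros_like(img, dtype=float)
          for dy in range(5):
              for dx in range(5):
                  blurred += k[dy, dx] * pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
          return blurred[::2, ::2]

      def laplacian_level(img, a=0.4):
          """One Laplacian band: image minus its reduced, re-expanded copy
          (nearest-neighbour expansion used here for brevity instead of a filtered EXPAND)."""
          small = reduce_(img, a)
          expanded = np.kron(small, np.ones((2, 2)))[:img.shape[0], :img.shape[1]]
          return img - expanded

      rng = np.random.default_rng(5)
      print(laplacian_level(rng.random((64, 64)), a=0.4).std())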

  18. Automatic Method to Classify Images Based on Multiscale Fractal Descriptors and Paraconsistent Logic

    NASA Astrophysics Data System (ADS)

    Pavarino, E.; Neves, L. A.; Nascimento, M. Z.; Godoy, M. F.; Arruda, P. F.; Neto, D. S.

    2015-01-01

    This study presents an automatic method to classify images using fractal descriptors, such as multiscale fractal dimension and lacunarity, as decision rules. The proposed methodology was divided into three steps: quantification of the regions of interest with fractal dimension and lacunarity under a multiscale approach; definition of reference patterns, which are the limits of each studied group; and classification of each group, combining the reference patterns with signal maximization (an approach commonly used in paraconsistent logic). The proposed method was used to classify histological prostate images, aiming at the diagnosis of prostate cancer. The accuracy levels were notable, exceeding those obtained with Support Vector Machine (SVM) and Best-First Decision Tree (BFTree) classifiers. The proposed approach allows patterns to be recognized and classified, offering the advantage of giving comprehensive results to specialists.

  19. Chaos and Fractals.

    ERIC Educational Resources Information Center

    Barton, Ray

    1990-01-01

    Presented is an educational game called "The Chaos Game" which produces complicated fractal images. Two basic computer programs are included. The production of fractal images by the Sierpinski gasket and the Chaos Game programs is discussed. (CW)
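
    The Chaos Game mentioned in this record is easy to reproduce: starting from an arbitrary point, repeatedly jump halfway toward a randomly chosen vertex of a triangle, and the visited points trace out the Sierpinski gasket. A short sketch (Python/NumPy; the original programs in the article are not reproduced):

      import numpy as np

      def chaos_game(n_points=50000, seed=0):
          """Points of the Sierpinski gasket generated by the Chaos Game."""
          rng = np.random.default_rng(seed)
          vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
          pts = np.empty((n_points, 2))
          p = rng.random(2)                                 # arbitrary starting point
          for i in range(n_points):
              p = (p + vertices[rng.integers(3)]) / 2.0     # jump halfway to a random vertex
              pts[i] = p
          return pts

      # rasterise into a coarse grid just to show the gasket's gaps appearing
      pts = chaos_game()
      grid = np.zeros((64, 64), dtype=int)
      ij = np.clip((pts * 63).astype(int), 0, 63)
      grid[ij[:, 1], ij[:, 0]] = 1
      print(grid.sum(), "of", grid.size, "cells visited")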

  1. Efficiency of a model human image code

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1987-01-01

    Hypothetical schemes for neural representation of visual information can be expressed as explicit image codes. Here, a code modeled on the simple cells of the primate striate cortex is explored. The Cortex transform maps a digital image into a set of subimages (layers) that are bandpass in spatial frequency and orientation. The layers are sampled so as to minimize the number of samples and still avoid aliasing. Samples are quantized in a manner that exploits the bandpass contrast-masking properties of human vision. The entropy of the samples is computed to provide a lower bound on the code size. Finally, the image is reconstructed from the code. Psychophysical methods are derived for comparing the original and reconstructed images to evaluate the sufficiency of the code. When each resolution is coded at the threshold for detection of artifacts, the image-code size is about 1 bit/pixel.
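
    The entropy bound mentioned above can be computed directly from the quantized samples of a layer. A sketch (Python/NumPy); the Cortex transform and the contrast-masking quantizer themselves are not reproduced, and a Laplacian-distributed array stands in for a bandpass layer:

      import numpy as np

      def entropy_bits_per_sample(coeffs, step):
          """Empirical entropy (bits/sample) of uniformly quantized coefficients,
          a lower bound on the code size of one layer."""
          q = np.round(np.asarray(coeffs, dtype=float) / step).astype(int)
          _, counts = np.unique(q, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      rng = np.random.default_rng(11)
      layer = rng.laplace(scale=4.0, size=100000)   # Laplacian-distributed stand-in for a bandpass layer
      print(entropy_bits_per_sample(layer, step=8.0))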

  2. Buried mine detection using fractal geometry analysis to the LWIR successive line scan data image

    NASA Astrophysics Data System (ADS)

    Araki, Kan

    2012-06-01

    We have been engaged in research on buried mine/IED detection by a remote sensing method using an LWIR camera. An IR image of ground containing buried objects can be regarded as a superimposed pattern that includes thermal scattering, which may depend on the ground surface roughness, vegetation canopy and the effect of sunlight, and radiation due to various heat interactions caused by differences in the specific heat, size and burial depth of the objects and by the local temperature of their surrounding environment. In this cluttered environment, we introduce fractal geometry for the analysis of an IR image. Clutter patterns due to these complex elements often have a low fractal dimension in terms of the Hausdorff dimension. The target patterns, on the other hand, tend to have a higher fractal dimension in terms of the information dimension. The random-shuffle surrogate method or the Fourier-transform surrogate method is used to evaluate the fractal statistics by shuffling the time-sequence data or the phase of the spectrum. Fractal interpolation was also applied to each line scan to improve signal-processing performance, to avoid division by zero and to enhance the information content of the data. Some results of target extraction using the relationship between low and high fractal dimensions are presented.

  3. [Influence of image processing on fractal morphology characterization of NAPLs vertical fingering flow].

    PubMed

    Li, Hui-Ying; Du, Xiao-Ming; Yang, Bin; Wu, Bin; Xu, Zhu; Shi, Yi; Fang, Ji-Dun; Li, Fa-Sheng

    2013-11-01

    Dyes are frequently used to visualize fingering flow pathways, and image processing plays an important role in the analysis of the results. The theory of fractal geometry is applied to give a quantitative description of the stain patterns via image analysis, which is helpful for finger characterization and prediction. This description typically involves two parameters, a mass fractal dimension (D(m)) relative to the area, and a surface fractal dimension (D(s)) relative to the perimeter. This work analyzes in detail the influence of various choices made during the thresholding step that transforms the original color images into the binary ones needed for the fractal analysis. One hundred and thirty images were obtained from laboratory two-dimensional sand-box infiltration experiments with four dyed non-aqueous phase liquids. Detailed comparisons of D(m) and D(s) were made, considering a set of threshold algorithms and the filling of lakes. Results indicate that adjustments of the saturation threshold have little influence on either D(m) or D(s) in the laboratory experiments. Brightness threshold adjustments decrease D(m) by 0.02 and increase D(s) by 0.05. Filling lakes influences D(m) little, while D(s) decreases by 0.10. Therefore, D(m) is recommended for further analysis to avoid the influence of subjective choices in the image processing.

  4. On the fractal geometry of DNA by the binary image analysis.

    PubMed

    Cattani, Carlo; Pierro, Gaetano

    2013-09-01

    The multifractal analysis of binary images of DNA is studied in order to define a methodological approach to the classification of DNA sequences. This method is based on the computation of some multifractality parameters on a suitable binary image of DNA, which takes into account the nucleotide distribution. The binary image of DNA is obtained by a dot-plot (recurrence plot) of the indicator matrix. The fractal geometry of these images is characterized by the fractal dimension (FD), lacunarity, and succolarity. These parameters are compared with some other coefficients such as complexity and Shannon information entropy. It will be shown that the complexity parameters are more or less equivalent to FD, while the parameters of multifractality have different values in the sense that sequences with higher FD might have lower lacunarity and/or succolarity. In particular, the genome of Drosophila melanogaster has been considered by focusing on chromosome 3R, which shows the highest fractality with a correspondingly higher level of complexity. We will single out some results on the nucleotide distribution in 3R with respect to complexity and fractality. In particular, we will show that sequences with higher FD also have a higher frequency of guanine, while low FD is characterized by a higher presence of adenine.
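
    The recurrence-plot style binary image used above is built from the indicator matrix of the sequence: entry (i, j) is 1 when nucleotides i and j are identical and 0 otherwise, and the resulting dot plot is what the fractal measures are computed on. A minimal sketch (Python/NumPy; the multifractal, lacunarity and succolarity computations are not reproduced):

      import numpy as np

      def indicator_matrix(seq):
          """Binary dot plot of a nucleotide string: M[i, j] = 1 iff seq[i] == seq[j]."""
          s = np.frombuffer(seq.encode("ascii"), dtype=np.uint8)
          return (s[:, None] == s[None, :]).astype(np.uint8)

      M = indicator_matrix("ATGGCGTACGTTAGCATCGATCGGCTA")
      print(M.shape, M.mean())   # this image would then be fed to FD / lacunarity / succolarity estimators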

  5. Marker-free phenotyping of tumor cells by fractal analysis of reflection interference contrast microscopy images.

    PubMed

    Klein, Katharina; Maier, Timo; Hirschfeld-Warneken, Vera C; Spatz, Joachim P

    2013-01-01

    Phenotyping of tumor cells by marker-free quantification is important for cancer diagnostics. For the first time, fractal analysis of reflection interference contrast microscopy images of single living cells was employed as a new method to distinguish between different nanoscopic membrane features of tumor cells. Since tumor progression correlates with a higher degree of chaos within the cell, it can be quantified mathematically by fractality. Our results show a high accuracy in identifying malignant cells with a failure chance of 3%, which is far better than today's applied methods.

  6. Fractal and generalized Fokker–Planck equations: description of the characterization of anomalous diffusion in magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Sau Fa, Kwok

    2017-03-01

    Recently, fractal and generalized Fokker–Planck equations have been the subject of considerable interest. In this work, the fractal and generalized Fokker–Planck equations connected with the Langevin equation and continuous time random walk model are considered. Descriptions and applications of these models to the fixed samples of the mouse brain using magnetic resonance imaging (MRI) are discussed.

  7. Scalable still image coding based on wavelet

    NASA Astrophysics Data System (ADS)

    Yan, Yang; Zhang, Zhengbing

    2005-02-01

    Scalable image coding is an important objective of future image coding technologies. In this paper, we present a scalable image coding scheme based on the wavelet transform. The method uses the well-known EZW (embedded zerotree wavelet) algorithm: the ROI (region of interest) of the original image is encoded at high quality and the rest is encoded coarsely. The method works well under limited memory conditions, since the background region is encoded according to the available memory capacity. In this way, the encoded image can easily be stored in limited memory space without losing its main information. Simulation results show the scheme is effective.

  8. Subband coding for image data archiving

    NASA Technical Reports Server (NTRS)

    Glover, D.; Kwatra, S. C.

    1992-01-01

    The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.

  9. Subband coding for image data archiving

    NASA Technical Reports Server (NTRS)

    Glover, Daniel; Kwatra, S. C.

    1993-01-01

    The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.
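
    As a concrete instance of the subband idea in these two records, a single-level separable Haar analysis splits an image into LL, LH, HL and HH bands; the LL band supports browsing at reduced resolution, and the split inverts exactly. A sketch (Python/NumPy, even-sized images assumed; Haar is chosen only for brevity, the records do not specify a particular filter bank):

      import numpy as np

      def haar_analysis(img):
          """One-level 2-D Haar subband split into (LL, LH, HL, HH)."""
          x = img.astype(float)
          lo = (x[:, 0::2] + x[:, 1::2]) / 2.0     # horizontal lowpass
          hi = (x[:, 0::2] - x[:, 1::2]) / 2.0     # horizontal highpass
          LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
          LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
          HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
          HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
          return LL, LH, HL, HH

      def haar_synthesis(LL, LH, HL, HH):
          """Exact inverse of haar_analysis."""
          lo = np.empty((LL.shape[0] * 2, LL.shape[1]))
          hi = np.empty_like(lo)
          lo[0::2, :], lo[1::2, :] = LL + LH, LL - LH
          hi[0::2, :], hi[1::2, :] = HL + HH, HL - HH
          out = np.empty((lo.shape[0], lo.shape[1] * 2))
          out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
          return out

      img = np.random.default_rng(6).random((64, 64))
      assert np.allclose(haar_synthesis(*haar_analysis(img)), img)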

  10. Plant Identification Based on Leaf Midrib Cross-Section Images Using Fractal Descriptors.

    PubMed

    da Silva, Núbia Rosa; Florindo, João Batista; Gómez, María Cecilia; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez

    2015-01-01

    The correct identification of plants is a common necessity not only for researchers but also for the lay public. Recently, computational methods have been employed to facilitate this task; however, there are few studies addressing the wide diversity of plants occurring in the world. This study proposes to analyse images obtained from cross-sections of the leaf midrib using fractal descriptors. These descriptors are obtained from the fractal dimension of the object computed over a range of scales. In this way, they provide rich information regarding the spatial distribution of the analysed structure and, as a consequence, they measure the multiscale morphology of the object of interest. In biology, such morphology is of great importance because it is related to evolutionary aspects and is successfully employed to characterize and discriminate among different biological structures. Here, the fractal descriptors are used to identify the species of plants based on the image of their leaves. A large number of samples is examined: 606 leaf samples of 50 species from the Brazilian flora. The results are compared to other imaging methods in the literature and demonstrate that fractal descriptors are precise and reliable in the taxonomic process of plant species identification.

  11. Plant Identification Based on Leaf Midrib Cross-Section Images Using Fractal Descriptors

    PubMed Central

    da Silva, Núbia Rosa; Florindo, João Batista; Gómez, María Cecilia; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez

    2015-01-01

    The correct identification of plants is a common necessity not only for researchers but also for the lay public. Recently, computational methods have been employed to facilitate this task; however, there are few studies addressing the wide diversity of plants occurring in the world. This study proposes to analyse images obtained from cross-sections of the leaf midrib using fractal descriptors. These descriptors are obtained from the fractal dimension of the object computed over a range of scales. In this way, they provide rich information regarding the spatial distribution of the analysed structure and, as a consequence, they measure the multiscale morphology of the object of interest. In biology, such morphology is of great importance because it is related to evolutionary aspects and is successfully employed to characterize and discriminate among different biological structures. Here, the fractal descriptors are used to identify the species of plants based on the image of their leaves. A large number of samples is examined: 606 leaf samples of 50 species from the Brazilian flora. The results are compared to other imaging methods in the literature and demonstrate that fractal descriptors are precise and reliable in the taxonomic process of plant species identification. PMID:26091501

  12. Blockiness in JPEG-coded images

    NASA Astrophysics Data System (ADS)

    Meesters, Lydia; Martens, Jean-Bernard

    1999-05-01

    In two experiments, dissimilarity data and numerical scaling data were obtained to determine the underlying attributes of image quality in baseline sequential JPEG-coded images. Although several distortions were perceived, i.e., blockiness, ringing and blur, the subjective data for all attributes were highly correlated, so that image quality could approximately be described by one independent attribute. We therefore proceeded by developing an instrumental measure for one of these distortions, i.e., blockiness. In this paper a single-ended blockiness measure is proposed, i.e., one that uses only the coded image. Our approach is therefore fundamentally different from most image quality models that use both the original and the degraded image. The measure is based on detecting the low-amplitude edges that result from blocking and estimating their amplitudes. Because the underlying psychological space is approximately one-dimensional, the proposed blockiness measure also predicts the image quality of sequential baseline JPEG-coded images.
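
    One simple way to build a single-ended blockiness indicator in the spirit described above (not the measure proposed in the record) is to compare the average luminance jump across 8x8 block boundaries with the jump inside blocks; values well above 1 flag visible blocking. A sketch (Python/NumPy):

      import numpy as np

      def blockiness_ratio(img, block=8):
          """Mean absolute luminance jump across block boundaries over the jump elsewhere."""
          d = np.abs(np.diff(img.astype(float), axis=1))   # horizontal first differences
          cols = np.arange(d.shape[1])
          at_boundary = (cols % block) == block - 1        # differences straddling an 8x8 boundary
          return d[:, at_boundary].mean() / d[:, ~at_boundary].mean()

      rng = np.random.default_rng(7)
      # toy "coded" image: constant 8x8 blocks with random DC levels plus a little noise
      tiles = np.kron(rng.integers(0, 256, (8, 8)), np.ones((8, 8))) + rng.normal(0, 1, (64, 64))
      print(blockiness_ratio(tiles), blockiness_ratio(rng.random((64, 64)) * 255))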

  13. Advanced Imaging Optics Utilizing Wavefront Coding.

    SciTech Connect

    Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen

    2015-06-01

    Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.

  14. Code-excited linear predictive coding of multispectral MR images

    NASA Astrophysics Data System (ADS)

    Hu, Jian-Hong; Wang, Yao; Cahill, Patrick

    1996-02-01

    This paper reports a multispectral code-excited linear predictive coding method for the compression of well-registered multispectral MR images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward-adaptive autoregressive (AR) model has proven to achieve a good compromise between performance, complexity and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over non-overlapping square macroblocks. Each macroblock is further divided into several microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. To satisfy the high quality requirement for medical images, the error between the original images and the synthesized ones is further encoded using a vector quantizer. The MFCELP method has been applied to 26 sets of clinical MR neuro images (20 slices/set, 3 spectral bands/slice, 256 by 256 pixels/image, 12 bits/pixel). It provides a significant improvement over the discrete cosine transform (DCT) based JPEG method, a wavelet transform based embedded zerotree wavelet (EZW) coding method, as well as the MSARMA method we developed previously.

  15. An edge preserving differential image coding scheme

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1992-01-01

    Differential encoding techniques are fast and easy to implement. However, a major problem with the use of differential encoding for images is the rapid edge degradation encountered when using such systems. This makes differential encoding techniques of limited utility, especially when coding medical or scientific images, where edge preservation is of utmost importance. A simple, easy to implement differential image coding system with excellent edge preservation properties is presented. The coding system can be used over variable rate channels, which makes it especially attractive for use in the packet network environment.

  16. An edge preserving differential image coding scheme

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    Differential encoding techniques are fast and easy to implement. However, a major problem with the use of differential encoding for images is the rapid edge degradation encountered when using such systems. This makes differential encoding techniques of limited utility especially when coding medical or scientific images, where edge preservation is of utmost importance. We present a simple, easy to implement differential image coding system with excellent edge preservation properties. The coding system can be used over variable rate channels which makes it especially attractive for use in the packet network environment.
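
    For reference, plain differential (DPCM) coding, the baseline whose edge behaviour the two records above set out to improve, predicts each pixel from its left neighbour and transmits a quantized prediction error; coarse quantization of that error is what smears edges. A minimal sketch of the baseline only (Python/NumPy), not of the authors' edge-preserving system:

      import numpy as np

      def dpcm_encode(img, step=8):
          """Row-wise DPCM: quantize the difference between each pixel and the reconstruction
          of its left neighbour (closed-loop prediction avoids decoder drift)."""
          img = img.astype(float)
          codes = np.zeros(img.shape, dtype=int)
          recon = np.zeros(img.shape)
          for r in range(img.shape[0]):
              prev = 128.0                                    # fixed predictor for the first column
              for c in range(img.shape[1]):
                  q = int(round((img[r, c] - prev) / step))   # quantized prediction error
                  codes[r, c] = q
                  prev = prev + q * step                      # what the decoder will reconstruct
                  recon[r, c] = prev
          return codes, recon

      img = np.random.default_rng(8).integers(0, 256, (32, 32))
      codes, recon = dpcm_encode(img, step=8)
      print(np.abs(img - recon).mean())                       # mean absolute reconstruction error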

  17. Detecting abrupt dynamic change based on changes in the fractal properties of spatial images

    NASA Astrophysics Data System (ADS)

    Liu, Qunqun; He, Wenping; Gu, Bin; Jiang, Yundi

    2016-08-01

    Many abrupt climate change events often cannot be detected in a timely fashion by conventional abrupt-change detection methods until a few years after these events have occurred. The reason for this lag in detection is that abundant and long-term observational data are required for accurate abrupt change detection by these methods, especially for the detection of a regime shift. So, these methods cannot help us understand and forecast the evolution of the climate system in a timely manner. Obviously, spatial images, generated by a coupled spatiotemporal dynamical model, contain more information about a dynamic system than a single time series, and we find that spatial images show fractal properties. The fractal properties of spatial images can be quantitatively characterized by the Hurst exponent, which can be estimated by two-dimensional detrended fluctuation analysis (TD-DFA). Based on this, TD-DFA is used to detect an abrupt dynamic change of a coupled spatiotemporal model. The results show that the TD-DFA method can effectively detect abrupt parameter changes in the coupled model by monitoring changes in the fractal properties of spatial images. The present method provides a new way to detect abrupt dynamic changes in a timely and efficient manner.
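
    A sketch of one common formulation of two-dimensional detrended fluctuation analysis (Python/NumPy): the surface is tiled with s-by-s boxes, the local cumulative-sum surface is detrended by a fitted plane, and the root-mean-square residual F(s) scales as s to the power H. Details (overlap, trend order) may differ from the TD-DFA variant used in the record.

      import numpy as np

      def tddfa_hurst(surface, sizes=(8, 16, 32, 64)):
          """Two-dimensional DFA with plane detrending of the local cumulative-sum surface;
          the Hurst exponent is the slope of log F(s) versus log s."""
          x = surface - surface.mean()
          fluct = []
          for s in sizes:
              F2 = []
              for i in range(0, x.shape[0] - s + 1, s):
                  for j in range(0, x.shape[1] - s + 1, s):
                      u = x[i:i + s, j:j + s].cumsum(axis=0).cumsum(axis=1)
                      yy, xx = np.indices((s, s))
                      A = np.column_stack([yy.ravel(), xx.ravel(), np.ones(s * s)])
                      coef, *_ = np.linalg.lstsq(A, u.ravel(), rcond=None)
                      resid = u.ravel() - A @ coef          # remove the local plane trend
                      F2.append((resid ** 2).mean())
              fluct.append(np.sqrt(np.mean(F2)))
          return np.polyfit(np.log(sizes), np.log(fluct), 1)[0]

      rng = np.random.default_rng(9)
      print(tddfa_hurst(rng.standard_normal((256, 256))))     # uncorrelated surface as a stand-in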

  18. The design of wavefront coded imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Shun; Cen, Zhaofeng; Li, Xiaotong

    2016-10-01

    Wavefront coding is a new method to extend the depth of field which combines optical design and signal processing. Using the optical design software ZEMAX, we designed a practical wavefront coded imaging system based on a conventional Cooke triplet. Unlike a conventional optical system, the wavefront of this new system is modulated by a specially designed phase mask, which makes the point spread function (PSF) of the optical system insensitive to defocus. Therefore, a series of nearly identical blurred images is obtained at the image plane. In addition, the optical transfer function (OTF) of the wavefront coded imaging system is nearly independent of focus: it is almost constant with misfocus and has no regions of zeros. All object information can be completely recovered through digital filtering at different defocus positions. The focus invariance of the MTF is selected as the merit function in this design, and the coefficients of the phase mask are set as optimization goals. Compared with the conventional optical system, the wavefront coded imaging system obtains better-quality images over different object distances. Some deficiencies appear in the restored images due to the influence of the digital filtering algorithm; these are also analyzed in this paper. The depth of field of the designed wavefront coded imaging system is about 28 times larger than that of the initial optical system, while maintaining high optical power and resolution at the image plane.

  19. Improving spatial adaptivity of nonlocal means in low-dosed CT imaging using pointwise fractal dimension.

    PubMed

    Zheng, Xiuqing; Liao, Zhiwu; Hu, Shaoxiang; Li, Ming; Zhou, Jiliu

    2013-01-01

    NLMs (nonlocal means) is a state-of-the-art image denoising method; however, it sometimes oversmooths anatomical features in low-dose CT (LDCT) imaging. In this paper, we propose a simple way to improve the spatial adaptivity (SA) of NLMs using a pointwise fractal dimension (PWFD). Unlike existing fractal image dimensions that are computed on whole images or blocks of images, the new PWFD, named the pointwise box-counting dimension (PWBCD), is computed for each image pixel. PWBCD uses a fixed-size local window centered at the considered image pixel to fit the different local structures of images. Then, based on PWBCD, a new method that uses PWBCD to improve the SA of NLMs directly is proposed. That is, PWBCD is combined with the weight of the difference between local comparison windows for NLMs. Smoothing results for test images and real sinograms show that PWBCD-NLMs with well-chosen parameters can preserve anatomical features better while suppressing noise efficiently. In addition, PWBCD-NLMs also performs better than NLMs in LDCT imaging, both in visual quality and in peak signal-to-noise ratio (PSNR).

  20. Adaptive discrete cosine transform based image coding

    NASA Astrophysics Data System (ADS)

    Hu, Neng-Chung; Luoh, Shyan-Wen

    1996-04-01

    In this discrete cosine transform (DCT) based image coding, the DCT kernel matrix is decomposed into a product of two matrices. The first matrix is called the discrete cosine preprocessing transform (DCPT), whose kernels are plus or minus 1 or plus or minus one-half. The second matrix is the postprocessing stage treated as a correction stage that converts the DCPT to the DCT. On applying the DCPT to image coding, image blocks are processed by the DCPT, then a decision is made to determine whether the processed image blocks are inactive or active in the DCPT domain. If the processed image blocks are inactive, then the compactness of the processed image blocks is the same as that of the image blocks processed by the DCT. However, if the processed image blocks are active, a correction process is required; this is achieved by multiplying the processed image block by the postprocessing stage. As a result, this adaptive image coding achieves the same performance as the DCT image coding, and both the overall computation and the round-off error are reduced, because both the DCPT and the postprocessing stage can be implemented by distributed arithmetic or fast computation algorithms.

  1. Alzheimer's Disease Detection in Brain Magnetic Resonance Images Using Multiscale Fractal Analysis

    PubMed Central

    Lahmiri, Salim; Boukadoum, Mounir

    2013-01-01

    We present a new automated system for the detection of brain magnetic resonance images (MRI) affected by Alzheimer's disease (AD). The MRI is analyzed by means of multiscale analysis (MSA) to obtain its fractals at six different scales. The extracted fractals are used as features to differentiate healthy brain MRI from those of AD by a support vector machine (SVM) classifier. The result of classifying 93 brain MRIs, consisting of 51 images of healthy brains and 42 of brains affected by AD, using the leave-one-out cross-validation method, was 99.18% ± 0.01 classification accuracy, 100% sensitivity, and 98.20% ± 0.02 specificity. These results and a processing time of 5.64 seconds indicate that the proposed approach may be an efficient diagnostic aid for radiologists in screening for AD. PMID:24967286

  2. Asymmetries in the direction of saccades during perception of scenes and fractals: effects of image type and image features.

    PubMed

    Foulsham, Tom; Kingstone, Alan

    2010-04-07

    The direction in which people tend to move their eyes when inspecting images can reveal the different influences on eye guidance in scene perception, and their time course. We investigated biases in saccade direction during a memory-encoding task with natural scenes and computer-generated fractals. Images were rotated to disentangle egocentric and image-based guidance. Saccades in fractals were more likely to be horizontal, regardless of orientation. In scenes, the first saccade often moved down and subsequent eye movements were predominantly vertical, relative to the scene. These biases were modulated by the distribution of visual features (saliency and clutter) in the scene. The results suggest that image orientation, visual features and the scene frame-of-reference have a rapid effect on eye guidance. Copyright 2010 Elsevier Ltd. All rights reserved.

  3. Ultrasound strain imaging using Barker code

    NASA Astrophysics Data System (ADS)

    Peng, Hui; Tie, Juhong; Guo, Dequan

    2017-01-01

    Ultrasound strain imaging is showing promise as a new way of imaging soft tissue elasticity in order to help clinicians detect lesions or cancers in tissues. In this paper, the Barker code is applied to strain imaging to improve its quality. The Barker code, as a coded excitation signal, can be used to improve the echo signal-to-noise ratio (eSNR) in an ultrasound imaging system. For the Barker code of length 13, the sidelobe level of the matched filter output is -22 dB, which is unacceptable for ultrasound strain imaging, because a high sidelobe level will cause high decorrelation noise. Instead of using the conventional matched filter, we use the Wiener filter to decode the Barker-coded echo signal to suppress the range sidelobes. We also compare the performance of the Barker code and the conventional short pulse by simulation. The simulation results demonstrate that the performance of the Wiener filter is much better than that of the matched filter, and that the Barker code achieves a higher elastographic signal-to-noise ratio (SNRe) than the short pulse under low-eSNR or large-depth conditions, owing to the increased eSNR it provides.
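
    The sidelobe figure quoted above is easy to verify: the length-13 Barker code has an autocorrelation peak of 13 and sidelobes of magnitude 1, i.e. a peak-to-sidelobe ratio of about -22 dB, which is what motivates replacing the matched filter with a mismatched (Wiener-type) decoding filter. A sketch of the matched-filter part only (Python/NumPy):

      import numpy as np

      barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

      # matched-filter (autocorrelation) output of the code itself
      out = np.correlate(barker13, barker13, mode="full")
      peak = out.max()
      sidelobe = np.abs(np.delete(out, out.argmax())).max()
      print(peak, sidelobe, 20 * np.log10(sidelobe / peak))   # 13.0, 1.0, about -22.3 dB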

  4. The influence of edge detection algorithms on the estimation of the fractal dimension of binary digital images

    NASA Astrophysics Data System (ADS)

    Ahammer, Helmut; DeVaney, Trevor T. J.

    2004-03-01

    The boundary of a fractal object, represented in a two-dimensional space, is theoretically a line with an infinitely small width. In digital images this boundary or contour is limited to the pixel resolution of the image, and the width of the line commonly depends on the edge detection algorithm used. The Minkowski dimension was evaluated using three different edge detection algorithms (the Sobel, Roberts, and Laplace operators). These three operators were investigated because they are very widely used and because their edge detection results are very distinct with respect to line width. Very common fractals (Sierpinski carpet and Koch islands) were investigated, as well as binary images from a cancer invasion assay taken with a confocal laser scanning microscope. The fractal dimension is directly proportional to the width of the contour line, and the fact that, in practice, the investigated objects are often fractals only within a limited resolution range is also taken into account.

  5. The influence of edge detection algorithms on the estimation of the fractal dimension of binary digital images.

    PubMed

    Ahammer, Helmut; DeVaney, Trevor T J

    2004-03-01

    The boundary of a fractal object, represented in a two-dimensional space, is theoretically a line with an infinitely small width. In digital images this boundary or contour is limited to the pixel resolution of the image, and the width of the line commonly depends on the edge detection algorithm used. The Minkowski dimension was evaluated using three different edge detection algorithms (the Sobel, Roberts, and Laplace operators). These three operators were investigated because they are very widely used and because their edge detection results are very distinct with respect to line width. Very common fractals (Sierpinski carpet and Koch islands) were investigated, as well as binary images from a cancer invasion assay taken with a confocal laser scanning microscope. The fractal dimension is directly proportional to the width of the contour line, and the fact that, in practice, the investigated objects are often fractals only within a limited resolution range is also taken into account.

  6. A fractal derivative model for the characterization of anomalous diffusion in magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Liang, Yingjie; Ye, Allen Q.; Chen, Wen; Gatto, Rodolfo G.; Colon-Perez, Luis; Mareci, Thomas H.; Magin, Richard L.

    2016-10-01

    Non-Gaussian (anomalous) diffusion is widespread in biological tissues, where its effects modulate chemical reactions and membrane transport. When viewed using magnetic resonance imaging (MRI), anomalous diffusion is characterized by a persistent or 'long tail' behavior in the decay of the diffusion signal. Recent MRI studies have used the fractional derivative to describe diffusion dynamics in normal and post-mortem tissue by connecting the order of the derivative with changes in tissue composition, structure and complexity. In this study we consider an alternative approach by introducing fractal time and space derivatives into Fick's second law of diffusion. This provides a more natural way to link sub-voxel tissue composition with the observed MRI diffusion signal decay following the application of a diffusion-sensitive pulse sequence. Unlike previous studies using fractional order derivatives, here the fractal derivative order is directly connected to the Hausdorff fractal dimension of the diffusion trajectory. The result is a simpler, computationally faster, and more direct way to incorporate tissue complexity and microstructure into the diffusional dynamics. Furthermore, the results are readily expressed in terms of spectral entropy, which provides a quantitative measure of the overall complexity of the heterogeneous and multi-scale structure of biological tissues. As an example, we apply this new model to the characterization of diffusion in fixed samples of the mouse brain. These results are compared with those obtained using the mono-exponential, the stretched exponential, the fractional derivative, and the diffusion kurtosis models. Overall, we find that the order of the fractal time derivative, the diffusion coefficient, and the spectral entropy are potential biomarkers to differentiate between the microstructure of white and gray matter. In addition, we note that the fractal derivative model has practical advantages over the existing models from the
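
    For orientation, one common way of writing a fractal-derivative version of Fick's second law (a sketch of the general model class, with alpha and beta as the fractal time and space exponents; this is an assumption for illustration, not necessarily the exact formulation used in the study above) is

      \frac{\partial C(x,t)}{\partial t^{\alpha}}
          = D\,\frac{\partial}{\partial x^{\beta}}\!\left(\frac{\partial C(x,t)}{\partial x^{\beta}}\right),
      \qquad
      \frac{\partial C}{\partial t^{\alpha}} :=
          \lim_{t_{1}\to t}\frac{C(x,t_{1}) - C(x,t)}{t_{1}^{\alpha} - t^{\alpha}},

    where alpha = beta = 1 recovers the ordinary diffusion equation; as the abstract notes, the fitted derivative order is then tied to the Hausdorff dimension of the diffusion trajectory.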

  7. Fractal analyses of osseous healing using Tuned Aperture Computed Tomography images

    PubMed Central

    Seyedain, Ali; Webber, Richard L.; Nair, Umadevi P.; Piesco, Nicholas P.; Agarwal, Sudha; Mooney, Mark P.; Gröndahl, Hans-Göran

    2016-01-01

    The aim of this study was to evaluate osseous healing in mandibular defects using fractal analyses on conventional radiographs and tuned aperture computed tomography (TACT; OrthoTACT, Instrumentarium Imaging, Helsinki, Finland) images. Eighty test sites on the inferior margins of rabbit mandibles were subjected to lesion induction and treated with one of the following: no treatment (controls); osteoblasts only; polymer matrix only; or an osteoblast-polymer matrix (OPM) combination. Images were acquired using conventional radiography and TACT, including unprocessed TACT (TACT-U) and iteratively restored TACT (TACT-IR). Healing was followed over time, with images acquired at 3, 6, 9, and 12 weeks post-surgery. The fractal dimension (FD) was computed within regions of interest in the defects using the TACT workbench. Results were analyzed for effects produced by imaging modality, treatment modality, time after surgery and lesion location. Histomorphometric data were available to assess ground truth. Significant differences (p < 0.0001) were noted based on imaging modality, with TACT-IR recording the highest mean fractal dimension (MFD), followed by TACT-U and conventional images, in that order. Sites treated with OPM recorded the highest MFDs among all treatment modalities (p < 0.0001). The highest MFD based on time was recorded at 3 weeks and differed significantly from that at 12 weeks (p < 0.035). The correlation of FD with the histomorphometric data was high (r = 0.79; p < 0.001). The FD computed on TACT-IR showed the highest correlation with the histomorphometric data, establishing that TACT is a more efficient and accurate imaging modality for the quantification of osseous changes within healing bony defects. PMID:11519567

  8. Classification of diabetic retinopathy using fractal dimension analysis of eye fundus image

    NASA Astrophysics Data System (ADS)

    Safitri, Diah Wahyu; Juniati, Dwi

    2017-08-01

    Diabetes mellitus (DM) is a metabolic disorder in which the pancreas produces inadequate insulin, or in which the body resists the action of insulin, so that the blood glucose level is high. One of the most common complications of diabetes mellitus is diabetic retinopathy, which can lead to vision problems. Diabetic retinopathy can be recognized by abnormalities in the eye fundus. Those abnormalities are characterized by microaneurysms, hemorrhages, hard exudates, cotton wool spots, and venous changes. Diabetic retinopathy is graded according to the abnormalities present in the eye fundus: grade 1 if there is only a microaneurysm; grade 2 if there are a microaneurysm and a hemorrhage; and grade 3 if there are a microaneurysm, a hemorrhage, and neovascularization. This study proposes a method for processing eye fundus images to classify diabetic retinopathy using fractal analysis and K-nearest neighbors (KNN). The first phase was an image segmentation process using the green channel, CLAHE, morphological opening, matched filtering, masking, and morphological opening of the binary image. After segmentation, the fractal dimension was calculated using the box-counting method, and the fractal dimension values were analyzed to classify the diabetic retinopathy. Tests were carried out using k-fold cross-validation with k = 5, and in each test 10 different values of K were used for KNN. The accuracy of the method is 89.17% with K = 3 or K = 4, the best result among the K values tested. Based on these results, it can be concluded that the classification of diabetic retinopathy using fractal analysis and KNN performs well.

  9. Beyond maximum entropy: Fractal pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, R. C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.

  11. Fractal evaluation of drug amorphicity from optical and scanning electron microscope images

    NASA Astrophysics Data System (ADS)

    Gavriloaia, Bogdan-Mihai G.; Vizireanu, Radu C.; Neamtu, Catalin I.; Gavriloaia, Gheorghe V.

    2013-09-01

    Amorphous materials are metastable and more reactive than crystalline ones, and have to be evaluated before pharmaceutical compound formulation. Amorphicity is interpreted as spatial chaos, and patterns of molecular aggregates of dexamethasone, D, were investigated in this paper using the fractal dimension, FD. Images of D at three magnifications taken with an optical microscope, OM, and at eight magnifications with a scanning electron microscope, SEM, were analyzed. The average FD for pattern irregularities of the OM images was 1.538, and about 1.692 for the SEM images. The FDs of the two kinds of images are not very sensitive to the threshold level. 3D plots are shown to illustrate the dependence of FD on the threshold and magnification level. As a result, an OM image at a single scale is enough to characterize the amorphicity of D.

  12. Multi-shot compressed coded aperture imaging

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Du, Juan; Wu, Tengfei; Jin, Zhenhua

    2013-09-01

    The classical methods of compressed coded aperture (CCA) still require an optical sensor with high resolution, even though the sampling rate has already broken the Nyquist sampling rate. A novel architecture for multi-shot compressed coded aperture imaging (MCCAI) using a low-resolution optical sensor is proposed, which is mainly based on a 4-f imaging system combined with two spatial light modulators (SLMs) to achieve the compressive imaging goal. The first SLM, employed for random convolution, is placed at the frequency spectrum plane of the 4-f imaging system, while the second SLM, which works as a selecting filter, is positioned in front of the optical sensor. By altering the random coded pattern of the second SLM and sampling, a set of observations can easily be obtained by a low-resolution optical sensor, and these observations are combined mathematically and used to reconstruct the high-resolution image. That is to say, MCCAI aims at realizing super-resolution imaging with multiple random samplings by using a low-resolution optical sensor. To improve the computational imaging performance, total variation (TV) regularization is introduced into the super-resolution reconstruction model to get rid of artifacts, and the alternating direction method of multipliers (ADM) is utilized to solve for the optimal result efficiently. The results show that the MCCAI architecture is suitable for super-resolution computational imaging using a much lower-resolution optical sensor than traditional CCA imaging methods by capturing multiple frame images.

  13. Scaling properties and fractality in the distribution of coding segments in eukaryotic genomes revealed through a block entropy approach

    NASA Astrophysics Data System (ADS)

    Athanasopoulou, Labrini; Athanasopoulos, Stavros; Karamanos, Kostas; Almirantis, Yannis

    2010-11-01

    Statistical methods, including block entropy based approaches, have already been used in the study of long-range features of genomic sequences seen as symbol series, either considering the full alphabet of the four nucleotides or the binary purine or pyrimidine character set. Here we explore the alternation of short protein-coding segments with long noncoding spacers in entire chromosomes, focusing on the scaling properties of block entropy. In previous studies, it has been shown that the sizes of noncoding spacers follow power-law-like distributions in most chromosomes of eukaryotic organisms from distant taxa. We have developed a simple evolutionary model based on well-known molecular events (segmental duplications followed by elimination of most of the duplicated genes) which reproduces the observed linearity in log-log plots. The scaling properties of block entropy H(n) have been studied in several works. Their findings suggest that linearity in semilogarithmic scale characterizes symbol sequences which exhibit fractal properties and long-range order, while this linearity has been shown in the case of the logistic map at the Feigenbaum accumulation point. The present work starts with the observation that the block entropy of the Cantor-like binary symbol series scales in a similar way. Then, we perform the same analysis for the full set of human chromosomes and for several chromosomes of other eukaryotes. A similar but less extended linearity in semilogarithmic scale, indicating fractality, is observed, while randomly formed surrogate sequences clearly lack this type of scaling. Genomic sequences always present entropy values much lower than their random surrogates. Symbol sequences produced by the aforementioned evolutionary model follow the scaling found in genomic sequences, thus corroborating the conjecture that “segmental duplication-gene elimination” dynamics may have contributed to the observed long rangeness in the coding or noncoding alternation in
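
    Block entropy H(n), the central quantity in this record, is the Shannon entropy of the distribution of length-n words read from the symbol sequence; its growth with n (e.g. linearity on a semilogarithmic scale) is then inspected. A sketch for a binary coding/non-coding sequence (Python; a gliding reading frame is used here for simplicity, whereas block-entropy studies also use non-overlapping frames):

      from collections import Counter
      import math

      def block_entropy(seq, n):
          """Shannon entropy (bits) of the distribution of length-n words read with a gliding window."""
          words = [seq[i:i + n] for i in range(len(seq) - n + 1)]
          counts = Counter(words)
          total = len(words)
          return -sum(c / total * math.log2(c / total) for c in counts.values())

      # toy binary sequence: 1 marks a (short) coding segment, 0 a non-coding spacer
      seq = "0001100000110000000011000001100000000000110000011"
      print([round(block_entropy(seq, n), 3) for n in range(1, 6)])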

  14. Predictive depth coding of wavelet transformed images

    NASA Astrophysics Data System (ADS)

    Lehtinen, Joonas

    1999-10-01

    In this paper, a new prediction based method, predictive depth coding, for lossy wavelet image compression is presented. It compresses a wavelet pyramid composition by predicting the number of significant bits in each wavelet coefficient quantized by the universal scalar quantization and then by coding the prediction error with arithmetic coding. The adaptively found linear prediction context covers spatial neighbors of the coefficient to be predicted and the corresponding coefficients on lower scale and in the different orientation pyramids. In addition to the number of significant bits, the sign and the bits of non-zero coefficients are coded. The compression method is tested with a standard set of images and the results are compared with SFQ, SPIHT, EZW and context based algorithms. Even though the algorithm is very simple and it does not require any extra memory, the compression results are relatively good.

  15. Fractal analysis of elastographic images for automatic detection of diffuse diseases of salivary glands: preliminary results.

    PubMed

    Badea, Alexandru Florin; Lupsor Platon, Monica; Crisan, Maria; Cattani, Carlo; Badea, Iulia; Pierro, Gaetano; Sannino, Gianpaolo; Baciut, Grigore

    2013-01-01

    The geometry of some medical images of tissues, obtained by elastography and ultrasonography, is characterized in terms of complexity parameters such as the fractal dimension (FD). It is well known that in any image there are very subtle details that are not easily detectable by the human eye. However, in many cases like medical imaging diagnosis, these details are very important since they might contain some hidden information about the possible existence of certain pathological lesions like tissue degeneration, inflammation, or tumors. Therefore, an automatic method of analysis could be an expedient tool for physicians to give a faultless diagnosis. The fractal analysis is of great importance in relation to a quantitative evaluation of "real-time" elastography, a procedure considered to be operator dependent in the current clinical practice. Mathematical analysis reveals significant discrepancies among normal and pathological image patterns. The main objective of our work is to demonstrate the clinical utility of this procedure on an ultrasound image corresponding to a submandibular diffuse pathology.

  16. Fractal Analysis of Elastographic Images for Automatic Detection of Diffuse Diseases of Salivary Glands: Preliminary Results

    PubMed Central

    Badea, Alexandru Florin; Lupsor Platon, Monica; Crisan, Maria; Cattani, Carlo; Badea, Iulia; Pierro, Gaetano; Sannino, Gianpaolo; Baciut, Grigore

    2013-01-01

    The geometry of some medical images of tissues, obtained by elastography and ultrasonography, is characterized in terms of complexity parameters such as the fractal dimension (FD). It is well known that in any image there are very subtle details that are not easily detectable by the human eye. However, in many cases like medical imaging diagnosis, these details are very important since they might contain some hidden information about the possible existence of certain pathological lesions like tissue degeneration, inflammation, or tumors. Therefore, an automatic method of analysis could be an expedient tool for physicians to give a faultless diagnosis. The fractal analysis is of great importance in relation to a quantitative evaluation of “real-time” elastography, a procedure considered to be operator dependent in the current clinical practice. Mathematical analysis reveals significant discrepancies among normal and pathological image patterns. The main objective of our work is to demonstrate the clinical utility of this procedure on an ultrasound image corresponding to a submandibular diffuse pathology. PMID:23762183

  17. Image compression with embedded multiwavelet coding

    NASA Astrophysics Data System (ADS)

    Liang, Kai-Chieh; Li, Jin; Kuo, C.-C. Jay

    1996-03-01

    An embedded image coding scheme using the multiwavelet transform and inter-subband prediction is proposed in this research. The new proposed coding scheme consists of the following building components: GHM multiwavelet transform, prediction across subbands, successive approximation quantization, and adaptive binary arithmetic coding. Our major contribution is the introduction of a set of prediction rules to fully exploit the correlations between multiwavelet coefficients in different frequency bands. The performance of the proposed new method is comparable to that of state-of-the-art wavelet compression methods.

  18. Fractal and digital image processing to determine the degree of dispersion of carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Liang, Xiao-ning; Li, Wei

    2016-05-01

    The degree of dispersion is an important parameter to quantitatively study properties of carbon nanotube composites. Among the many methods for studying dispersion, scanning electron microscopy, transmission electron microscopy, and atomic force microscopy are the most commonly used, intuitive, and convincing methods. However, they have the disadvantage of not being quantitative. To overcome this disadvantage, the fractal theory and digital image processing method can be used to provide a quantitative analysis of the morphology and properties of carbon nanotube composites. In this paper, the dispersion degree of carbon nanotubes was investigated using two fractal methods, namely, the box-counting method and the differential box-counting method. On the basis of the results, we propose a new method for the quantitative characterization of the degree of dispersion of carbon nanotubes. This hierarchical grid method can be used as a supplementary method, and can be combined with the fractal calculation method. Thus, the accuracy and effectiveness of the quantitative characterization of the dispersion degree of carbon nanotubes can be improved. (The outer diameter of the carbon nanotubes is about 50 nm; the length of the carbon nanotubes is 10-20 μm.)
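
    For readers unfamiliar with the differential box-counting (DBC) method mentioned above, the following Python sketch shows one common formulation; the gray-level box-height formula and the box sizes are generic textbook choices, not necessarily those used by the authors.

      import numpy as np

      def differential_box_counting(gray, sizes=(2, 4, 8, 16, 32)):
          # DBC estimate of the fractal dimension of a grayscale image,
          # treating intensity as a surface height over the image plane
          m = min(gray.shape)
          g = gray[:m, :m].astype(float)
          levels = 256.0                         # assumed number of gray levels
          log_n, log_inv_r = [], []
          for s in sizes:
              h = s * levels / m                 # box height in gray levels
              n_r = 0
              for i in range(0, m - m % s, s):
                  for j in range(0, m - m % s, s):
                      block = g[i:i + s, j:j + s]
                      top = int(np.ceil((block.max() + 1) / h))
                      bottom = int(np.ceil((block.min() + 1) / h))
                      n_r += top - bottom + 1    # boxes covering the intensity range
              log_n.append(np.log(n_r))
              log_inv_r.append(np.log(m / s))    # log(1/r), with r = s/m
          slope, _ = np.polyfit(log_inv_r, log_n, 1)
          return slope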

  19. [Fractal analysis of trabecular architecture: with special reference to slice thickness and pixel size of the image].

    PubMed

    Tomomitsu, Tatsushi; Mimura, Hiroaki; Murase, Kenya; Tamada, Tsutomu; Sone, Teruki; Fukunaga, Masao

    2005-06-20

    Many analyses of bone microarchitecture using three-dimensional images from micro-CT (microCT) have been reported recently. However, since extirpated bone is the subject of measurement on microCT, various kinds of information are not available clinically. Our aim is to evaluate the usefulness of the fractal dimension as an index of bone strength, distinct from bone mineral density, in vivo, where microCT cannot be applied. In this fundamental study, the relation between the pixel size and slice thickness of images was examined when fractal analysis was applied to clinical images. We examined 40 lumbar spine specimens extirpated from 16 male cadavers (30-88 years; mean age, 60.8 years). Three-dimensional images of the trabeculae of 150 slices were obtained by a microCT system under the following conditions: matrix size, 512 x 512; slice thickness, 23.2 μm; and pixel size, 18.6 μm. Based on the images of the 150 slices, images of four different matrix sizes and nine different slice thicknesses were made using public domain software (NIH Image). The threshold value for image binarization, and the relation between the pixel size and slice thickness of an image used for two-dimensional and three-dimensional fractal analyses, were studied. The box-counting method was used for fractal analysis. A value of 145 on the 256 gray levels was most suitable as the threshold for image binarization. The correlation coefficients between two-dimensional fractal dimensions of processed images and three-dimensional fractal dimensions of original images were more than 0.9 for pixel sizes ≤148.8 μm at a slice thickness of 1 mm, and ≤74.4 μm at a slice thickness of 2 mm. As for the relation between the three-dimensional fractal dimension of processed images and that of the original images, a correlation coefficient of more than 0.9 was obtained even for the maximal slice thickness when the pixel size was less than 74.4 μm.

  20. Computing Challenges in Coded Mask Imaging

    NASA Technical Reports Server (NTRS)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to build the telescope, i.e., when wide fields of view and very good angular resolution are required at energies too high for focusing optics or too low for Compton/tracker techniques. The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart of the types of position-sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern: the correlation with the mask pattern is described, the matrix approach is reviewed, and other approaches to image reconstruction are outlined. The presentation also reviews the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of EXIST/HET with SWIFT/BAT, and details of the EXIST/HET design.
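
    The correlation-based image recovery mentioned in the slides can be illustrated with an idealised, noise-free Python toy; the pseudorandom mask, the two point sources and the balanced decoding weights below are assumptions made purely for demonstration.

      import numpy as np

      rng = np.random.default_rng(0)

      sky = np.zeros((32, 32))                  # hypothetical source field
      sky[8, 10] = 100.0
      sky[20, 25] = 60.0

      mask = (rng.random((32, 32)) < 0.5).astype(float)   # ~50% open elements

      def cyclic_convolve(a, b):
          return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

      def cyclic_correlate(a, b):
          return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

      # idealised detector shadowgram: each source casts a shifted mask shadow
      shadowgram = cyclic_convolve(sky, mask)

      # balanced decoding array: open -> +1, closed -> -p/(1-p), p = open fraction
      p = mask.mean()
      decoder = np.where(mask > 0, 1.0, -p / (1.0 - p))

      # correlating the shadowgram with the decoder recovers the source positions
      reconstruction = cyclic_correlate(shadowgram, decoder)
      print(np.unravel_index(np.argmax(reconstruction), reconstruction.shape))  # (8, 10)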

  1. Image Coding Based on Address Vector Quantization.

    NASA Astrophysics Data System (ADS)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images, and extends the Vector Quantization technique to the Address Vector Quantization method. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; this index is sent to the channel. Reconstruction of the image is done by a table-lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color-image and monochrome-image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in hierarchical VQ, multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but at a bit rate about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a dynamic finite-state VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing
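
    The VQ encode/decode cycle summarized above can be sketched in Python as follows (not from the thesis): a codebook is trained with a bare-bones generalized Lloyd (k-means) loop, each block is replaced by the index of its nearest codeword, and the decoder reconstructs by table lookup. The 4x4 block size, 32-codeword codebook and random test image are illustrative assumptions.

      import numpy as np

      def to_blocks(image, b=4):
          # split a grayscale image into non-overlapping b x b blocks (row vectors)
          h, w = image.shape
          img = image[:h - h % b, :w - w % b].astype(float)
          return (img.reshape(img.shape[0] // b, b, img.shape[1] // b, b)
                     .swapaxes(1, 2).reshape(-1, b * b))

      def train_codebook(vectors, k=32, iters=20, seed=0):
          # codebook design with a plain generalized Lloyd (k-means) iteration
          rng = np.random.default_rng(seed)
          codebook = vectors[rng.choice(len(vectors), k, replace=False)].copy()
          for _ in range(iters):
              d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
              labels = d.argmin(axis=1)          # nearest-codeword assignment
              for c in range(k):
                  members = vectors[labels == c]
                  if len(members):
                      codebook[c] = members.mean(axis=0)
          return codebook

      def vq_encode(image, codebook, b=4):
          # indices of the best-matching codewords (what is sent to the channel)
          vectors = to_blocks(image, b)
          d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
          return d.argmin(axis=1)

      def vq_decode(indices, codebook, shape, b=4):
          # table-lookup reconstruction from the received indices
          h, w = (shape[0] // b) * b, (shape[1] // b) * b
          blocks = codebook[indices].reshape(h // b, w // b, b, b)
          return blocks.swapaxes(1, 2).reshape(h, w)

      img = np.random.default_rng(1).integers(0, 256, size=(64, 64)).astype(float)
      cb = train_codebook(to_blocks(img))
      rec = vq_decode(vq_encode(img, cb), cb, img.shape)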

  2. Fractal analysis and its impact factors on pore structure of artificial cores based on the images obtained using magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Wang, Heming; Liu, Yu; Song, Yongchen; Zhao, Yuechao; Zhao, Jiafei; Wang, Dayong

    2012-11-01

    Pore structure is one of the important factors affecting the properties of porous media, but it is difficult to describe its complexity exactly. Fractal theory is an effective and available method for quantifying complex and irregular pore structures. In this paper, the fractal dimension calculated by the box-counting method was applied to characterize the pore structure of artificial cores. The microstructure, or pore distribution, of the porous material was obtained using nuclear magnetic resonance imaging (MRI). Three classical fractals and one sand-packed-bed model were selected as the experimental material to investigate the influence of box sizes, threshold value, and image resolution on the fractal analysis. To avoid the influence of box sizes, a sequence of divisors of the image size was proposed and compared with two other algorithms (geometric sequence and arithmetic sequence); it partitions the image completely and yields the smallest fitting error. Threshold values selected manually and automatically showed that thresholding plays an important role in the image binarization, and that the minimum-error method can be used to obtain an appropriate or reasonable one. Images obtained under different pixel matrices in MRI were used to analyze the influence of image resolution; higher image resolution can detect more of the pore structure and increases its measured irregularity. Taking these influencing factors into account, fractal analysis of four kinds of artificial cores showed that the fractal dimension can be used to distinguish the different kinds of artificial cores, and that the relationship between fractal dimension and porosity or permeability can be expressed by the model D = a - b·ln(x + c).
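
    The idea of restricting box sizes to divisors of the image size, so that the grid of boxes always partitions the image completely, can be sketched as follows (a generic box-counting routine, not the authors' code).

      import numpy as np

      def divisor_box_sizes(n, lo=2, hi=None):
          # box sizes that divide n exactly, so no box is only partially filled
          hi = hi if hi is not None else n // 2
          return [s for s in range(lo, hi + 1) if n % s == 0]

      def box_counting_dimension(binary):
          # classic box-counting dimension of a (cropped-to-square) binary image,
          # using only box sizes that are divisors of the image edge length
          n = min(binary.shape)
          img = binary[:n, :n]
          sizes = divisor_box_sizes(n)
          counts = []
          for s in sizes:
              boxes = img.reshape(n // s, s, n // s, s).any(axis=(1, 3))
              counts.append(boxes.sum())
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
          return slope

      # e.g. a 240 x 240 image admits box sizes 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, ...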

  3. Fractal lacunarity of trabecular bone and magnetic resonance imaging: New perspectives for osteoporotic fracture risk assessment

    PubMed Central

    Zaia, Annamaria

    2015-01-01

    Osteoporosis represents a major health condition for our growing elderly population. It accounts for severe morbidity and increased mortality in postmenopausal women, and it is becoming an emerging health concern even in aging men. Screening of the population at risk for bone degeneration and treatment assessment of osteoporotic patients to prevent bone fragility fractures represent useful tools to improve quality of life in the elderly and to lighten the related socio-economic impact. Bone mineral density (BMD), estimated by dual-energy X-ray absorptiometry, is normally used in clinical practice for osteoporosis diagnosis. Nevertheless, BMD alone is not a good predictor of fracture risk. From a clinical point of view, bone microarchitecture seems to be an intriguing aspect to characterize bone alteration patterns in aging and pathology. The spread of medical imaging techniques into clinical practice and the impressive advances in information technologies, together with enhanced computing power, have promoted a proliferation of new methods to assess changes of trabecular bone architecture (TBA) during aging and osteoporosis. Magnetic resonance imaging (MRI) has recently arisen as a useful tool to measure bone structure in vivo. In particular, high-resolution MRI techniques have introduced new perspectives for TBA characterization by non-invasive, non-ionizing methods. However, texture analysis methods have not found favor with clinicians, as they produce quite a few parameters whose interpretation is difficult. The introduction into the biomedical field of paradigms such as the theory of complexity, chaos, and fractals suggests new approaches and provides innovative tools to develop computerized methods that, by producing a limited number of parameters sensitive to pathology onset and progression, would speed up their application in clinical practice. Complexity of living beings and fractality of several physio-anatomic structures suggest

  4. Fractal lacunarity of trabecular bone and magnetic resonance imaging: New perspectives for osteoporotic fracture risk assessment.

    PubMed

    Zaia, Annamaria

    2015-03-18

    Osteoporosis represents a major health condition for our growing elderly population. It accounts for severe morbidity and increased mortality in postmenopausal women, and it is becoming an emerging health concern even in aging men. Screening of the population at risk for bone degeneration and treatment assessment of osteoporotic patients to prevent bone fragility fractures represent useful tools to improve quality of life in the elderly and to lighten the related socio-economic impact. Bone mineral density (BMD), estimated by dual-energy X-ray absorptiometry, is normally used in clinical practice for osteoporosis diagnosis. Nevertheless, BMD alone is not a good predictor of fracture risk. From a clinical point of view, bone microarchitecture seems to be an intriguing aspect to characterize bone alteration patterns in aging and pathology. The spread of medical imaging techniques into clinical practice and the impressive advances in information technologies, together with enhanced computing power, have promoted a proliferation of new methods to assess changes of trabecular bone architecture (TBA) during aging and osteoporosis. Magnetic resonance imaging (MRI) has recently arisen as a useful tool to measure bone structure in vivo. In particular, high-resolution MRI techniques have introduced new perspectives for TBA characterization by non-invasive, non-ionizing methods. However, texture analysis methods have not found favor with clinicians, as they produce quite a few parameters whose interpretation is difficult. The introduction into the biomedical field of paradigms such as the theory of complexity, chaos, and fractals suggests new approaches and provides innovative tools to develop computerized methods that, by producing a limited number of parameters sensitive to pathology onset and progression, would speed up their application in clinical practice. Complexity of living beings and fractality of several physio-anatomic structures suggest

  5. Vector lifting schemes for stereo image coding.

    PubMed

    Kaaniche, Mounir; Benazza-Benyahia, Amel; Pesquet-Popescu, Béatrice; Pesquet, Jean-Christophe

    2009-11-01

    Many research efforts have been devoted to the improvement of stereo image coding techniques for storage or transmission. In this paper, we are mainly interested in lossy-to-lossless coding schemes for stereo images allowing progressive reconstruction. The most commonly used approaches for stereo compression are based on disparity compensation techniques. The basic principle involved in this technique first consists of estimating the disparity map. Then, one image is considered as a reference and the other is predicted in order to generate a residual image. In this paper, we propose a novel approach, based on vector lifting schemes (VLS), which offers the advantage of generating two compact multiresolution representations of the left and the right views. We present two versions of this new scheme. A theoretical analysis of the performance of the considered VLS is also conducted. Experimental results indicate a significant improvement using the proposed structures compared with conventional methods.

  6. Fractal analysis of the ischemic transition region in chronic ischemic heart disease using magnetic resonance imaging.

    PubMed

    Michallek, Florian; Dewey, Marc

    2017-04-01

    To introduce a novel hypothesis and method to characterise pathomechanisms underlying myocardial ischemia in chronic ischemic heart disease by local fractal analysis (FA) of the ischemic myocardial transition region in perfusion imaging. Vascular mechanisms to compensate ischemia are regulated at various vascular scales with their superimposed perfusion pattern being hypothetically self-similar. Dedicated FA software ("FraktalWandler") has been developed. Fractal dimensions during first-pass (FDfirst-pass) and recirculation (FDrecirculation) are hypothesised to indicate the predominating pathomechanism and ischemic severity, respectively. Twenty-six patients with evidence of myocardial ischemia in 108 ischemic myocardial segments on magnetic resonance imaging (MRI) were analysed. The 40th and 60th percentiles of FDfirst-pass were used for pathomechanical classification, assigning lesions with FDfirst-pass ≤ 2.335 to predominating coronary microvascular dysfunction (CMD) and ≥2.387 to predominating coronary artery disease (CAD). Optimal classification point in ROC analysis was FDfirst-pass = 2.358. FDrecirculation correlated moderately with per cent diameter stenosis in invasive coronary angiography in lesions classified CAD (r = 0.472, p = 0.001) but not CMD (r = 0.082, p = 0.600). The ischemic transition region may provide information on pathomechanical composition and severity of myocardial ischemia. FA of this region is feasible and may improve diagnosis compared to traditional noninvasive myocardial perfusion analysis. • A novel hypothesis and method is introduced to pathophysiologically characterise myocardial ischemia. • The ischemic transition region appears a meaningful diagnostic target in perfusion imaging. • Fractal analysis may characterise pathomechanical composition and severity of myocardial ischemia.

  7. Coded Access Optical Sensor (CAOS) Imager

    NASA Astrophysics Data System (ADS)

    Riza, N. A.; Amin, M. J.; La Torre, J. P.

    2015-04-01

    High spatial resolution, low inter-pixel crosstalk, high signal-to-noise ratio (SNR), adequate application dependent speed, economical and energy efficient design are common goals sought after for optical image sensors. In optical microscopy, overcoming the diffraction limit in spatial resolution has been achieved using materials chemistry, optimal wavelengths, precision optics and nanomotion-mechanics for pixel-by-pixel scanning. Imagers based on pixelated imaging devices such as CCD/CMOS sensors avoid pixel-by-pixel scanning as all sensor pixels operate in parallel, but these imagers are fundamentally limited by inter-pixel crosstalk, in particular with interspersed bright and dim light zones. In this paper, we propose an agile pixel imager sensor design platform called Coded Access Optical Sensor (CAOS) that can greatly alleviate the mentioned fundamental limitations, empowering smart optical imaging for particular environments. Specifically, this novel CAOS imager engages an application dependent electronically programmable agile pixel platform using hybrid space-time-frequency coded multiple-access of the sampled optical irradiance map. We demonstrate the foundational working principles of the first experimental electronically programmable CAOS imager using hybrid time-frequency multiple access sampling of a known high contrast laser beam irradiance test map, with the CAOS instrument based on a Texas Instruments (TI) Digital Micromirror Device (DMD). This CAOS instrument provides imaging data that exhibits 77 dB electrical SNR and the measured laser beam image irradiance specifications closely match (i.e., within 0.75% error) the laser manufacturer provided beam image irradiance radius numbers. The proposed CAOS imager can be deployed in many scientific and non-scientific applications where pixel agility via electronic programmability can pull out desired features in an irradiance map subject to the CAOS imaging operation.

  8. Classification of vertebral compression fractures in magnetic resonance images using spectral and fractal analysis.

    PubMed

    Azevedo-Marques, P M; Spagnoli, H F; Frighetto-Pereira, L; Menezes-Reis, R; Metzner, G A; Rangayyan, R M; Nogueira-Barbosa, M H

    2015-08-01

    Fractures with partial collapse of vertebral bodies are generically referred to as "vertebral compression fractures" or VCFs. VCFs can have different etiologies, comprising trauma, bone failure related to osteoporosis, or metastatic cancer affecting bone. VCFs related to osteoporosis (benign fractures) and to cancer (malignant fractures) are commonly found in the elderly population. In the clinical setting, the differentiation between benign and malignant fractures is complex and difficult. This paper presents a study aimed at developing a system for computer-aided diagnosis to help in the differentiation between malignant and benign VCFs in magnetic resonance imaging (MRI). We used T1-weighted MRI of the lumbar spine in the sagittal plane. Images from 47 consecutive patients (31 women, 16 men, mean age 63 years) were studied, including 19 malignant fractures and 54 benign fractures. Spectral and fractal features were extracted from manually segmented images of 73 vertebral bodies with VCFs. The classification of malignant vs. benign VCFs was performed using the k-nearest-neighbor classifier with the Euclidean distance. The results show that combinations of features derived from the Fourier and wavelet transforms, together with the fractal dimension, were able to achieve a correct classification rate of up to 94.7%, with an area under the receiver operating characteristic curve of up to 0.95.
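
    The final classification step (k-nearest-neighbor with the Euclidean distance) is standard and can be reproduced in a few lines; the sketch below uses scikit-learn and purely synthetic feature vectors in place of the paper's spectral and fractal features.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score

      # X: one row per vertebral body, columns = Fourier/wavelet features plus the
      # fractal dimension; y: 0 = benign VCF, 1 = malignant VCF (synthetic here)
      rng = np.random.default_rng(0)
      X = rng.normal(size=(73, 10))
      y = rng.integers(0, 2, size=73)

      knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
      print(cross_val_score(knn, X, y, cv=5, scoring="roc_auc").mean())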

  9. Fractal dimension of sparkles in automotive metallic coatings by multispectral imaging measurements.

    PubMed

    Medina, José M; Díaz, José A; Vignolo, Carlos

    2014-07-23

    Sparkle in surface coatings is a property of mirror-like pigment particles that consists of remarkable bright spots over a darker surround under unidirectional illumination. We developed a novel nondestructive method to characterize sparkles based on the multispectral imaging technique, and we focused on automotive metallic coatings containing aluminum flake pigments. Multispectral imaging was done in the visible spectrum at different illumination angles around the test sample. Reflectance spectra at different spatial positions were mapped to color coordinates and visualized in different color spaces. Spectral analysis shows that sparkles exhibit higher reflectance spectra and narrower bandwidths. Colorimetric analysis indicates that sparkles present higher lightness values and are far apart from the bulk of color coordinates spanned by the surround. A box-counting procedure was applied to examine the fractal organization of color coordinates in the CIE 1976 L*a*b* color space. A characteristic noninteger exponent was found at each illumination position. The exponent was independent of the illuminant spectra. Together, these results demonstrate that sparkles are extreme deviations relative to the surround and that their spectral properties can be described as fractal patterns within the color space. Multispectral reflectance imaging provides a powerful, noninvasive method for spectral identification and classification of sparkles from metal flake pigments on the micron scale.

  10. Digital Image Analysis for DETECHIP® Code Determination

    PubMed Central

    Lyon, Marcus; Wilson, Mark V.; Rouhier, Kerry A.; Symonsbergen, David J.; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E.

    2013-01-01

    DETECHIP® is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP® used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP®. Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods. PMID:25267940

  11. Coded-aperture imaging in nuclear medicine

    NASA Technical Reports Server (NTRS)

    Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.

    1989-01-01

    Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.

  12. Coded-aperture imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.

    1989-11-01

    Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.

  13. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

    With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are desirable, but they consume more storage space and more bandwidth when transferred over the Internet. It is therefore necessary to study image compression technology. At present, many image compression algorithms are applied in networks, and image compression standards have been established. This paper presents an analysis of the discrete cosine transform (DCT). First, the principle of the DCT is described, a widely used basis for image compression. Second, a deeper understanding of the DCT is developed using Matlab, covering the process of image compression based on the DCT and an analysis of Huffman coding. Third, image compression based on the DCT is demonstrated in Matlab, and the quality of the compressed picture is analyzed. The DCT is not the only algorithm for image compression, and further algorithms can be expected to yield compressed images of high quality. Image compression technology will be widely used in networks and communications in the future.
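
    As a concrete illustration of the pipeline discussed in the paper, here is a toy block-DCT codec in Python (SciPy's DCT is used in place of Matlab, the quantization is a single uniform scalar step, and the Huffman/entropy-coding stage is omitted; image dimensions are assumed to be multiples of the block size).

      import numpy as np
      from scipy.fft import dctn, idctn

      def blockwise(img, fn, b=8):
          out = np.empty_like(img, dtype=float)
          for i in range(0, img.shape[0], b):
              for j in range(0, img.shape[1], b):
                  out[i:i + b, j:j + b] = fn(img[i:i + b, j:j + b])
          return out

      def dct_codec(img, q=20.0, b=8):
          # 8x8 2-D DCT, uniform scalar quantization with step q, then inverse
          img = img.astype(float)
          coeffs = blockwise(img, lambda blk: dctn(blk, norm="ortho"), b)
          quantized = np.round(coeffs / q)          # these would be entropy coded
          recon = blockwise(quantized * q, lambda blk: idctn(blk, norm="ortho"), b)
          mse = np.mean((img - recon) ** 2)
          psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
          return recon, psnr, np.count_nonzero(quantized) / quantized.size

      test = np.random.default_rng(0).integers(0, 256, size=(64, 64))
      recon, psnr, kept = dct_codec(test, q=20.0)
      print("PSNR %.1f dB, %.1f%% coefficients non-zero" % (psnr, 100 * kept))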

  14. Hierarchical morphological segmentation for image sequence coding.

    PubMed

    Salembier, P; Pardas, M

    1994-01-01

    This paper deals with a hierarchical morphological segmentation algorithm for image sequence coding. Mathematical morphology is very attractive for this purpose because it efficiently deals with geometrical features such as size, shape, contrast, or connectivity that can be considered as segmentation-oriented features. The algorithm follows a top-down procedure. It first takes into account the global information and produces a coarse segmentation, that is, with a small number of regions. Then, the segmentation quality is improved by introducing regions corresponding to more local information. The algorithm, considering sequences as being functions on a 3-D space, directly segments 3-D regions. A 3-D approach is used to get a segmentation that is stable in time and to directly solve the region correspondence problem. Each segmentation stage relies on four basic steps: simplification, marker extraction, decision, and quality estimation. The simplification removes information from the sequence to make it easier to segment. Morphological filters based on partial reconstruction are proven to be very efficient for this purpose, especially in the case of sequences. The marker extraction identifies the presence of homogeneous 3-D regions. It is based on constrained flat region labeling and morphological contrast extraction. The goal of the decision is to precisely locate the contours of regions detected by the marker extraction. This decision is performed by a modified watershed algorithm. Finally, the quality estimation concentrates on the coding residue, all the information about the 3-D regions that have not been properly segmented and therefore coded. The procedure allows the introduction of the texture and contour coding schemes within the segmentation algorithm. The coding residue is transmitted to the next segmentation stage to improve the segmentation and coding quality. Finally, segmentation and coding examples are presented to show the validity and interest of the approach.

  15. Angle closure glaucoma detection using fractal dimension index on SS-OCT images.

    PubMed

    Ni, Soe Ni; Marziliano, Pina; Wong, Hon-Tym

    2014-01-01

    Optical coherence tomography (OCT) is a high-resolution, rapid and non-invasive screening tool for angle closure glaucoma. In this paper, we propose a new strategy for automatic and landmark-invariant quantification of the anterior chamber angle of the eye using swept-source optical coherence tomography (SS-OCT) images. Seven hundred and eight SS-OCT images from 148 patients with an average age of 59.48 ± 8.97 years were analyzed in this study. The angle structure is measured by fractal dimension (FD) analysis to quantify the complexity, or changes, of the angle recess. We evaluated the FD index together with biometric parameters for classification of open-angle and angle-closure glaucoma. The proposed fractal dimension index gives a better representation of the angle configuration, capturing the nature of the angle dynamics involved in different forms of open- and closed-angle glaucoma (average FD (standard deviation): 1.944 (0.045) for open and 1.894 (0.043) for closed angles). The results show that the proposed approach has promising potential to become a computer-aided diagnostic tool for angle closure glaucoma (ACG).

  16. Typhoon center location algorithm based on fractal feature and gradient of infrared satellite cloud image

    NASA Astrophysics Data System (ADS)

    Zhang, Changjiang; Chen, Yuan; Lu, Juan

    2014-11-01

    An efficient algorithm for typhoon center location is proposed using fractal features and the gradient of infrared satellite cloud images. Except for late-stage, dissipating typhoons, the center is generally located within the dense cloud region, which has smoother texture and higher gray values than the marginal clouds. A window analysis method is therefore used to select an appropriate cloud region: the window for which the difference between the sum of the gray-gradient co-occurrence matrix and the fractal dimension is largest is chosen as the dense cloud region. The temperature gradient of the region near the typhoon center, excluding the typhoon eye, is small; the gradient information is therefore strengthened and computed with the Canny operator. A window is then used to traverse the dense cloud region. If a closed curve is found, the region inside the curve is taken as the typhoon center region; otherwise, the region with the most texture intersections and the highest density is taken as the typhoon center region. Finally, the geometric center of the center region is determined as the typhoon center location. The effectiveness is tested on Chinese FY-2C geostationary satellite cloud images, and the results are compared with the typhoon center locations in the "Tropical Cyclone Yearbook" compiled by the Shanghai Typhoon Institute of the China Meteorological Administration. Experimental results show that high location accuracy can be obtained.

  17. Multitemporal and Multiscaled Fractal Analysis of Landsat Satellite Data Using the Image Characterization and Modeling System (ICAMS)

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Emerson, Charles W.; Lam, Nina Siu-Ngan; Laymon, Charles A.

    1997-01-01

    The Image Characterization And Modeling System (ICAMS) is a public domain software package that is designed to provide scientists with innovative spatial analytical tools to visualize, measure, and characterize landscape patterns so that environmental conditions or processes can be assessed and monitored more effectively. In this study ICAMS has been used to evaluate how changes in fractal dimension, as a landscape characterization index, and resolution, are related to differences in Landsat images collected at different dates for the same area. Landsat Thematic Mapper (TM) data obtained in May and August 1993 over a portion of the Great Basin Desert in eastern Nevada were used for analysis. These data represent contrasting periods of peak "green-up" and "dry-down" for the study area. The TM data sets were converted into Normalized Difference Vegetation Index (NDVI) images to expedite analysis of differences in fractal dimension between the two dates. These NDVI images were also resampled to resolutions of 60, 120, 240, 480, and 960 meters from the original 30 meter pixel size, to permit an assessment of how fractal dimension varies with spatial resolution. Tests of fractal dimension for two dates at various pixel resolutions show that the D values in the August image become increasingly more complex as pixel size increases to 480 meters. The D values in the May image show an even more complex relationship to pixel size than that expressed in the August image. Fractal dimension for a difference image computed for the May and August dates increase with pixel size up to a resolution of 120 meters, and then decline with increasing pixel size. This means that the greatest complexity in the difference images occur around a resolution of 120 meters, which is analogous to the operational domain of changes in vegetation and snow cover that constitute differences between the two dates.

  18. Multitemporal and Multiscaled Fractal Analysis of Landsat Satellite Data Using the Image Characterization and Modeling System (ICAMS)

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Emerson, Charles W.; Lam, Nina Siu-Ngan; Laymon, Charles A.

    1997-01-01

    The Image Characterization And Modeling System (ICAMS) is a public domain software package that is designed to provide scientists with innovative spatial analytical tools to visualize, measure, and characterize landscape patterns so that environmental conditions or processes can be assessed and monitored more effectively. In this study ICAMS has been used to evaluate how changes in fractal dimension, as a landscape characterization index, and resolution, are related to differences in Landsat images collected at different dates for the same area. Landsat Thematic Mapper (TM) data obtained in May and August 1993 over a portion of the Great Basin Desert in eastern Nevada were used for analysis. These data represent contrasting periods of peak "green-up" and "dry-down" for the study area. The TM data sets were converted into Normalized Difference Vegetation Index (NDVI) images to expedite analysis of differences in fractal dimension between the two dates. These NDVI images were also resampled to resolutions of 60, 120, 240, 480, and 960 meters from the original 30 meter pixel size, to permit an assessment of how fractal dimension varies with spatial resolution. Tests of fractal dimension for two dates at various pixel resolutions show that the D values in the August image become increasingly more complex as pixel size increases to 480 meters. The D values in the May image show an even more complex relationship to pixel size than that expressed in the August image. Fractal dimension for a difference image computed for the May and August dates increase with pixel size up to a resolution of 120 meters, and then decline with increasing pixel size. This means that the greatest complexity in the difference images occur around a resolution of 120 meters, which is analogous to the operational domain of changes in vegetation and snow cover that constitute differences between the two dates.

  19. Coded excitation for ultrasound tissue harmonic imaging.

    PubMed

    Song, Jaehee; Kim, Sangwon; Sohn, Hak-Yeol; Song, Tai-Kyong; Yoo, Yang Mo

    2010-05-01

    Coded excitation can improve the signal-to-noise ratio (SNR) in ultrasound tissue harmonic imaging (THI). However, it could suffer from the increased sidelobe artifact caused by incomplete pulse compression due to the spectral overlap between the fundamental and harmonic components of ultrasound signal after nonlinear propagation in tissues. In this paper, three coded tissue harmonic imaging (CTHI) techniques based on bandpass filtering, power modulation and pulse inversion (i.e., CTHI-BF, CTHI-PM, and CTHI-PI) were evaluated by measuring the peak range sidelobe level (PRSL) with varying frequency bandwidths. From simulation and in vitro studies, the CTHI-PI outperforms the CTHI-BF and CTHI-PM methods in terms of the PRSL, e.g., -43.5dB vs. -24.8dB and -23.0dB, respectively. Copyright 2010 Elsevier B.V. All rights reserved.
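
    The pulse-inversion principle behind CTHI-PI can be demonstrated with a tiny simulation: a quadratic "tissue" nonlinearity is a crude stand-in for nonlinear propagation, and summing the echoes of a pulse and its inverted copy cancels the fundamental while keeping the second harmonic. All waveform parameters below are illustrative.

      import numpy as np

      fs = 50e6                                  # sampling rate (Hz)
      f0 = 3e6                                   # transmit centre frequency (Hz)
      t = np.arange(0, 4e-6, 1 / fs)

      def tx_pulse(sign=+1.0):
          # Gaussian-windowed sinusoid; sign=-1 gives the phase-inverted pulse
          env = np.exp(-((t - 2e-6) ** 2) / (2 * (0.4e-6) ** 2))
          return sign * env * np.sin(2 * np.pi * f0 * t)

      def tissue(x, a=0.3):
          # toy nonlinearity: linear term plus a quadratic distortion at 2*f0
          return x + a * x ** 2

      def band_energy(x, f_lo, f_hi):
          spec = np.abs(np.fft.rfft(x)) ** 2
          freqs = np.fft.rfftfreq(len(x), 1 / fs)
          return spec[(freqs >= f_lo) & (freqs <= f_hi)].sum()

      pi_sum = tissue(tx_pulse(+1.0)) + tissue(tx_pulse(-1.0))

      print("fundamental (2-4 MHz), single echo:", band_energy(tissue(tx_pulse()), 2e6, 4e6))
      print("fundamental (2-4 MHz), PI sum     :", band_energy(pi_sum, 2e6, 4e6))
      print("2nd harmonic (5-7 MHz), PI sum    :", band_energy(pi_sum, 5e6, 7e6))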

  20. Fitting coding scheme for image wavelet representation

    NASA Astrophysics Data System (ADS)

    Przelaskowski, Artur

    1998-10-01

    An efficient coding scheme for image wavelet representation in a lossy compression framework is presented. The spatial-frequency hierarchical structure of the quantized coefficients and their statistics are analyzed to reduce any redundancy. We apply a context-based linear magnitude predictor to fit a first-order conditional probability model, used in the arithmetic coding of significant coefficients, to local data characteristics and to eliminate spatial and inter-scale dependencies. Sign information is also encoded by inter- and intra-band prediction and entropy coding of the prediction errors. The main feature of our algorithm, however, is the way zerotree structures are encoded. An additional zerotree-root symbol is included in the magnitude data stream. Moreover, four neighboring zerotree roots with a significant parent node are included in an extended high-order context model of zerotrees; such a parent is marked as a significant zerotree root, and information about the distribution of these roots is coded separately. The efficiency of the presented coding scheme was tested in a dyadic wavelet decomposition scheme with two quantization procedures: a simple uniform scalar quantizer and a more complex space-frequency quantizer with adaptive data thresholding. The final results appear promising and competitive with the most effective wavelet compression methods.

  1. Dim target detection in IR image sequences based on fractal and rough set theory

    NASA Astrophysics Data System (ADS)

    Yan, Xiaoke; Shi, Caicheng; He, Peikun

    2005-11-01

    This paper addresses the problem of detecting small, moving, low-amplitude targets in image sequences that also contain moving nuisance objects and background noise. Rough set (RS) theory is applied with a similarity relation, instead of an equivalence relation, to solve the clustering issue. We propose fractal-based texture analysis to describe texture coarseness and a locally adaptive threshold technique to seek latent object points. Finally, according to the temporal and spatial correlations between different frames, the singular points can be filtered out. We demonstrate the effectiveness of the technique by applying it to real infrared image sequences containing targets of opportunity and evolving cloud clutter. The experimental results show that the algorithm can effectively increase the detection probability and is robust.

  2. High resolution remote sensing image segmentation based on graph theory and fractal net evolution approach

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Li, H. T.; Han, Y. S.; Gu, H. Y.

    2015-06-01

    Image segmentation is the foundation of further object-oriented image analysis, understanding and recognition, and it is one of the key technologies in high-resolution remote sensing applications. In this paper, a new fast image segmentation algorithm for high-resolution remote sensing imagery is proposed, based on graph theory and the fractal net evolution approach (FNEA). First, the image is modelled as a weighted undirected graph, where nodes correspond to pixels and edges connect adjacent pixels. An initial object layer can be obtained efficiently from graph-based segmentation, which runs in time nearly linear in the number of image pixels. FNEA then starts from this initial object layer and merges neighbouring objects pairwise with the aim of minimizing the resulting summed heterogeneity. Furthermore, according to the character of different features in high-resolution remote sensing images, three different merging criteria for image objects, based on spectral and spatial information, are adopted. Finally, compared with the commercial remote sensing software eCognition, the experimental results demonstrate that the efficiency of the algorithm is significantly improved and that the result maintains good feature boundaries.
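
    A rough Python sketch of the two-stage idea is given below: scikit-image's Felzenszwalb graph-based segmentation stands in for the initial object layer, and a single spectral-only merging pass stands in for FNEA (the actual approach also uses shape heterogeneity and multiple merging criteria; all parameters here are illustrative).

      import numpy as np
      from skimage import data, segmentation
      from skimage.color import rgb2lab

      img = data.astronaut()                       # placeholder test image
      # initial object layer from near-linear graph-based segmentation
      labels = segmentation.felzenszwalb(img, scale=100, sigma=0.8, min_size=50)

      lab = rgb2lab(img)
      ids = np.unique(labels)
      means = {int(i): lab[labels == i].mean(axis=0) for i in ids}

      # adjacency from 4-connected neighbouring pixels with different labels
      pairs = set()
      pairs.update(zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()))
      pairs.update(zip(labels[:-1, :].ravel(), labels[1:, :].ravel()))
      pairs = {(int(a), int(b)) for a, b in pairs if a != b}

      parent = {int(i): int(i) for i in ids}
      def find(i):
          while parent[i] != i:
              parent[i] = parent[parent[i]]
              i = parent[i]
          return i

      # one simplified merging pass: fuse neighbours with similar mean colour
      for a, b in pairs:
          ra, rb = find(a), find(b)
          if ra != rb and np.linalg.norm(means[a] - means[b]) < 10.0:
              parent[rb] = ra

      merged = np.vectorize(lambda v: find(int(v)))(labels)
      print("objects before / after merging:", len(ids), len(np.unique(merged)))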

  3. Biomaterial porosity determined by fractal dimensions, succolarity and lacunarity on microcomputed tomographic images.

    PubMed

    N'Diaye, Mambaye; Degeratu, Cristinel; Bouler, Jean-Michel; Chappard, Daniel

    2013-05-01

    Porous structures are becoming more and more important in biology and material science because they help in reducing the density of the grafted material. For biomaterials, porosity also increases the accessibility of cells and vessels inside the grafted area. However, descriptors of porosity are scanty. We have used a series of biomaterials with different types of porosity (created by various porogens: fibers, beads …). Blocks were studied by microcomputed tomography for the measurement of 3D porosity. 2D sections were re-sliced to analyze the microarchitecture of the pores and were transferred to image analysis programs: star volumes, interconnectivity index, Minkowski-Bouligand and Kolmogorov fractal dimensions were determined. Lacunarity and succolarity, two recently described fractal dimensions, were also computed. These parameters provided a precise description of porosity and pores' characteristics. Non-linear relationships were found between several descriptors e.g. succolarity and star volume of the material. A linear correlation was found between lacunarity and succolarity. These techniques appear suitable in the study of biomaterials usable as bone substitutes. Copyright © 2013 Elsevier B.V. All rights reserved.
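
    Of the descriptors listed above, lacunarity is perhaps the least familiar; a minimal gliding-box implementation in Python is sketched below (box sizes are illustrative, and succolarity is not covered here).

      import numpy as np

      def gliding_box_lacunarity(binary, box_sizes=(2, 4, 8, 16)):
          # Lambda(r) = <M^2> / <M>^2, with M the number of occupied pixels in an
          # r x r box slid over every possible position of the binary image
          img = binary.astype(float)
          sat = np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
          out = {}
          for r in box_sizes:
              m = sat[r:, r:] - sat[:-r, r:] - sat[r:, :-r] + sat[:-r, :-r]
              mean = m.mean()
              out[r] = float((m ** 2).mean() / mean ** 2) if mean > 0 else float("nan")
          return out

      # values near 1 indicate translationally homogeneous pore distributions;
      # larger values indicate gappier, more heterogeneous structures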

  4. MORPH-II, a software package for the analysis of scanning-electron-micrograph images for the assessment of the fractal dimension of exposed stone surfaces

    USGS Publications Warehouse

    Mossotti, Victor G.; Eldeeb, A. Raouf

    2000-01-01

    Turcotte, 1997, and Barton and La Pointe, 1995, have identified many potential uses for the fractal dimension in physicochemical models of surface properties. The image-analysis program described in this report is an extension of the program set MORPH-I (Mossotti and others, 1998), which provided the fractal analysis of electron-microscope images of pore profiles (Mossotti and Eldeeb, 1992). MORPH-II, an integration of the modified kernel of the program MORPH-I with image calibration and editing facilities, was designed to measure the fractal dimension of the exposed surfaces of stone specimens as imaged in cross section in an electron microscope.

  5. Fast-neutron, coded-aperture imager

    NASA Astrophysics Data System (ADS)

    Woolf, Richard S.; Phlips, Bernard F.; Hutcheson, Anthony L.; Wulf, Eric A.

    2015-06-01

    This work discusses a large-scale, coded-aperture imager for fast neutrons, building off a proof-of-concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems using coded-aperture imaging for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2-D location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash-digitizing electronics. Digital pulse shape discrimination (PSD) was performed to filter out the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with a 1.8-μCi and a 13-μCi 252Cf neutron/γ source at three standoff distances of 9, 15 and 26 m (the maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fast-neutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. This shielding configuration led

  6. Fractal Bread.

    ERIC Educational Resources Information Center

    Esbenshade, Donald H., Jr.

    1991-01-01

    Develops the idea of fractals through a laboratory activity that calculates the fractal dimension of ordinary white bread. Extends use of the fractal dimension to compare other complex structures as other breads and sponges. (MDH)

  7. Fractal Bread.

    ERIC Educational Resources Information Center

    Esbenshade, Donald H., Jr.

    1991-01-01

    Develops the idea of fractals through a laboratory activity that calculates the fractal dimension of ordinary white bread. Extends use of the fractal dimension to compare other complex structures as other breads and sponges. (MDH)

  8. Color image coding for digital projection and d-cinema

    NASA Astrophysics Data System (ADS)

    LeHoty, David A.

    2008-01-01

    Color image coding for d-cinema is explored using luminance contour plots of a few standards, a laser-based projection system, and a film stock. The luminance contour plots are in u'v' space. Several color image coding representations are surveyed, and a color image coding system is suggested for efficient perceptual color-difference encoding.

  9. Wavelength-coded volume holographic imaging endoscope for multidepth imaging.

    PubMed

    Howlett, Isela D; Han, Wanglei; Rice, Photini; Barton, Jennifer K; Kostuk, Raymond K

    2017-10-01

    A wavelength-coded volume holographic imaging (WC-VHI) endoscope system capable of simultaneous multifocal imaging is presented. The system images light from two depths separated by 100  μm in a tissue sample by using axial chromatic dispersion of a gradient index probe in combination with two light-emitting diode sources and a multiplexed volume hologram to separate the images. This system is different from previous VHI systems in that it uses planar multiplexed gratings and does not require curved holographic gratings. This results in improved lateral imaging resolution from 228.1 to 322.5  lp/mm. This letter describes the design and fabrication of the WC-VHI endoscope and experimental images of hard and soft resolution targets and biological tissue samples to illustrate the performance properties. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  10. Dual-sided coded-aperture imager

    DOEpatents

    Ziock, Klaus-Peter

    2009-09-22

    In a vehicle, a single detector plane simultaneously measures radiation coming through two coded-aperture masks, one on either side of the detector. To determine which side of the vehicle a source is on, the two shadow masks are inverses of each other, i.e., one is the mask and the other is the anti-mask. All of the collected data are processed through two versions of an image reconstruction algorithm: one treats the data as if they were obtained through the mask, the other as though they were obtained through the anti-mask.

  11. Channel error recovery for transform image coding

    NASA Astrophysics Data System (ADS)

    Mitchell, O. R.; Tabatabai, A. J.

    1981-12-01

    A method is presented to automatically inspect the block boundaries of a reconstructed two-dimensional transform coded image, to locate blocks which are most likely to contain errors, to approximate the size and type of error in the block, and to eliminate this estimated error from the picture. This method uses redundancy in the source data to provide channel error correction. No additional channel error protection bits or changes to the transmitter are required. It can be used when channel errors are unexpected prior to reception.

  12. Discriminant Kernel Assignment for Image Coding.

    PubMed

    Deng, Yue; Zhao, Yanyu; Ren, Zhiquan; Kong, Youyong; Bao, Feng; Dai, Qionghai

    2017-06-01

    This paper proposes discriminant kernel assignment (DKA) in the bag-of-features framework for image representation. DKA slightly modifies existing kernel assignment to learn width-variant Gaussian kernel functions to perform discriminant local feature assignment. When directly applying gradient-descent method to solve DKA, the optimization may contain multiple time-consuming reassignment implementations in iterations. Accordingly, we introduce a more practical way to locally linearize the DKA objective and the difficult task is cast as a sequence of easier ones. Since DKA only focuses on the feature assignment part, it seamlessly collaborates with other discriminative learning approaches, e.g., discriminant dictionary learning or multiple kernel learning, for even better performances. Experimental evaluations on multiple benchmark datasets verify that DKA outperforms other image assignment approaches and exhibits significant efficiency in feature coding.

  13. Featured Image: Tests of an MHD Code

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-09-01

    Creating the codes that are used to numerically model astrophysical systems takes a lot of work and a lot of testing! A new, publicly available moving-mesh magnetohydrodynamics (MHD) code, DISCO, is designed to model 2D and 3D orbital fluid motion, such as that of astrophysical disks. In a recent article, DISCO creator Paul Duffell (University of California, Berkeley) presents the code and the outcomes from a series of standard tests of DISCO's stability, accuracy, and scalability. From left to right and top to bottom, the test outputs shown above are: a cylindrical Kelvin-Helmholtz flow (showing off DISCO's numerical grid in 2D), a passive scalar in a smooth vortex (can DISCO maintain contact discontinuities?), a global look at the cylindrical Kelvin-Helmholtz flow, a Jupiter-mass planet opening a gap in a viscous disk, an MHD flywheel (a test of DISCO's stability), an MHD explosion revealing shock structures, an MHD rotor (a more challenging version of the explosion), a Flock 3D MRI test (can DISCO study linear growth of the magnetorotational instability in disks?), and a nonlinear 3D MRI test. Check out the gif below for a closer look at each of these images, or follow the link to the original article to see even more! Citation: Paul C. Duffell 2016 ApJS 226 2. doi:10.3847/0067-0049/226/1/2

  14. Cell type classifiers for breast cancer microscopic images based on fractal dimension texture analysis of image color layers.

    PubMed

    Jitaree, Sirinapa; Phinyomark, Angkoon; Boonyaphiphat, Pleumjit; Phukpattaranont, Pornchai

    2015-01-01

    Having a classifier of cell types in a breast cancer microscopic image (BCMI), obtained with immunohistochemical staining, is required as part of a computer-aided system that counts the cancer cells in such BCMI. Such quantitation by cell counting is very useful in supporting decisions and planning of the medical treatment of breast cancer. This study proposes and evaluates features based on texture analysis by fractal dimension (FD), for the classification of histological structures in a BCMI into either cancer cells or non-cancer cells. The cancer cells include positive cells (PC) and negative cells (NC), while the normal cells comprise stromal cells (SC) and lymphocyte cells (LC). The FD feature values were calculated with the box-counting method from binarized images, obtained by automatic thresholding with Otsu's method of the grayscale images for various color channels. A total of 12 color channels from four color spaces (RGB, CIE-L*a*b*, HSV, and YCbCr) were investigated, and the FD feature values from them were used with decision tree classifiers. The BCMI data consisted of 1,400, 1,200, and 800 images with pixel resolutions 128 × 128, 192 × 192, and 256 × 256, respectively. The best cross-validated classification accuracy was 93.87%, for distinguishing between cancer and non-cancer cells, obtained using the Cr color channel with window size 256. The results indicate that the proposed algorithm, based on fractal dimension features extracted from a color channel, performs well in the automatic classification of the histology in a BCMI. This might support accurate automatic cell counting in a computer-assisted system for breast cancer diagnosis.
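
    The feature-extraction step described above (Otsu binarization of each of the 12 colour channels followed by box counting) can be sketched as follows; scikit-image is assumed for the colour conversions and thresholding, and the box sizes are generic choices rather than the authors' settings.

      import numpy as np
      from skimage.color import rgb2lab, rgb2hsv, rgb2ycbcr
      from skimage.filters import threshold_otsu

      def box_count_fd(binary, sizes=(2, 4, 8, 16, 32)):
          # box-counting fractal dimension of a binary image
          n = min(binary.shape)
          counts = []
          for s in sizes:
              m = (n // s) * s
              view = binary[:m, :m].reshape(m // s, s, m // s, s)
              counts.append(max(int(view.any(axis=(1, 3)).sum()), 1))
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
          return slope

      def fd_features(rgb):
          # 12 FD values, one per channel of RGB, CIE-L*a*b*, HSV and YCbCr,
          # each channel binarized with Otsu's threshold
          channels = np.dstack([rgb.astype(float),
                                rgb2lab(rgb), rgb2hsv(rgb), rgb2ycbcr(rgb)])
          feats = []
          for k in range(channels.shape[2]):
              ch = channels[..., k]
              feats.append(box_count_fd(ch > threshold_otsu(ch)))
          return np.array(feats)      # e.g. input to a decision tree classifier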

  15. Diagnostics of hemangioma by the methods of correlation and fractal analysis of laser microscopic images of blood plasma

    NASA Astrophysics Data System (ADS)

    Boychuk, T. M.; Bodnar, B. M.; Vatamanesku, L. I.

    2011-09-01

    For the first time, complex correlation and fractal analysis was used to investigate microscopic images of both hemangioma tissues and liquids. A physical model is proposed to describe the formation of phase distributions of coherent radiation transformed by optically anisotropic biological structures. The phase maps of laser radiation in the boundary diffraction zone were used as the main information parameter. The results of investigating the interrelation between correlation parameters (correlation area, asymmetry coefficient, and autocorrelation function excess) and a fractal parameter (dispersion of the logarithmic dependencies of the power spectra) are presented. These parameters characterize the coordinate distributions of phase shifts in the points of laser images of histological sections of hemangioma, hemangioma blood smears, and blood plasma with vascular system pathologies. The diagnostic criteria of hemangioma nascency are determined.

  16. Diagnostics of hemangioma by the methods of correlation and fractal analysis of laser microscopic images of blood plasma

    NASA Astrophysics Data System (ADS)

    Boychuk, T. M.; Bodnar, B. M.; Vatamanesku, L. I.

    2012-01-01

    For the first time, complex correlation and fractal analysis was used to investigate microscopic images of both hemangioma tissues and liquids. A physical model is proposed to describe the formation of phase distributions of coherent radiation transformed by optically anisotropic biological structures. The phase maps of laser radiation in the boundary diffraction zone were used as the main information parameter. The results of investigating the interrelation between correlation parameters (correlation area, asymmetry coefficient, and autocorrelation function excess) and a fractal parameter (dispersion of the logarithmic dependencies of the power spectra) are presented. These parameters characterize the coordinate distributions of phase shifts in the points of laser images of histological sections of hemangioma, hemangioma blood smears, and blood plasma with vascular system pathologies. The diagnostic criteria of hemangioma nascency are determined.

  17. Aging adult skull vaults by applying the concept of fractal geometry to high-resolution computed tomography images.

    PubMed

    Obert, Martin; Seyfried, Maren; Schumacher, Falk; Krombach, Gabriele A; Verhoff, Marcel A

    2014-09-01

    Aging human remains is a critical issue in anthropology and forensic medicine, and the search for accurate, new age-estimation methods is ongoing. In our study, we therefore explored a new approach to investigate a possible correlation between age-at-death (aad) and geometric irregularities in the bone structure of human skull caps. We applied the concept of fractal geometry and fractal dimension D analysis to describe heterogeneity within the bone structure. A high-resolution flat-panel computed tomography scanner (eXplore Locus Ultra) was used to obtain 229,500 images from 221 male and 120 female (total 341) European human skulls. Automated image analysis software was developed to evaluate the fractal dimension D, using the mass radius method. The frontal and the occipital portions of the skull caps of adult females and males were investigated separately. The age dependence of the fractal dimension D was studied by correlation analysis, and the prediction accuracy of age-at-death (aad) estimates for individual observations was calculated. D values for human skull caps scatter strongly as a function of age. We found sex-dependent correlation coefficients (CC) between D and age for adults (females CC = -0.67; males CC = -0.05). Prediction errors for aad estimates for individual observations were in the range of ±18 years at a 75% confidence interval. The detailed quantitative description of age-dependent irregularities in the bone microarchitecture of skull vaults through fractal dimension analysis does not, as we had hoped, enable a new aging method. Severe scattering of the data leads to an estimation error that is too great for this method to be of practical relevance in aad estimates. Nevertheless, we disclosed an interesting sex difference. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. The IRMA code for unique classification of medical images

    NASA Astrophysics Data System (ADS)

    Lehmann, Thomas M.; Schubert, Henning; Keysers, Daniel; Kohnen, Michael; Wein, Berthold B.

    2003-05-01

    Modern communication standards such as Digital Imaging and Communication in Medicine (DICOM) include non-image data for a standardized description of study, patient, or technical parameters. However, these tags are rather roughly structured, ambiguous, and often optional. In this paper, we present a mono-hierarchical multi-axial classification code for medical images and emphasize its advantages for content-based image retrieval in medical applications (IRMA). Our so-called IRMA coding system consists of four axes with three to four positions, each taking a value in {0,...,9,a,...,z}, where "0" denotes "unspecified" and determines the end of a path along an axis. In particular, the technical code (T) describes the imaging modality; the directional code (D) models body orientations; the anatomical code (A) refers to the body region examined; and the biological code (B) describes the biological system examined. Hence, the entire code results in a character string of not more than 13 characters (IRMA: TTTT - DDD - AAA - BBB). The code can be easily extended by introducing characters in certain code positions, e.g., if new modalities are introduced. In contrast to other approaches, mixtures of one- and two-literal code positions are avoided, which simplifies automatic code processing. Furthermore, the IRMA code obviates ambiguities resulting from overlapping code elements within the same level. Although this code was originally designed to be used in the IRMA project, other uses of it are welcome.
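
    Since the IRMA code has a fixed TTTT-DDD-AAA-BBB layout over the alphabet {0-9, a-z}, validating and splitting a code string is straightforward. The sketch below checks only the structure; the example code value is hypothetical, and the actual axis vocabularies are defined by the IRMA project.

```python
import re

# One technical (4 positions), one directional, one anatomical and one
# biological axis; each position is 0-9 or a-z, with "0" meaning "unspecified".
IRMA_PATTERN = re.compile(
    r"^([0-9a-z]{4})-([0-9a-z]{3})-([0-9a-z]{3})-([0-9a-z]{3})$")

def parse_irma(code):
    """Split an IRMA code string into its four axes (T, D, A, B)."""
    m = IRMA_PATTERN.match(code.lower())
    if not m:
        raise ValueError(f"not a valid IRMA code: {code!r}")
    technical, direction, anatomy, biology = m.groups()
    return {"T": technical, "D": direction, "A": anatomy, "B": biology}

if __name__ == "__main__":
    # Hypothetical example: 13 code characters plus separators
    print(parse_irma("1121-127-700-500"))
```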

  19. Coherent diffractive imaging using randomly coded masks

    DOE PAGES

    Seaberg, Matthew H.; d'Aspremont, Alexandre; Turner, Joshua J.

    2015-12-07

    We experimentally demonstrate an extension to coherent diffractive imaging that encodes additional information through the use of a series of randomly coded masks, removing the need for typical object-domain constraints while guaranteeing a unique solution to the phase retrieval problem. Phase retrieval is performed using a numerical convex relaxation routine known as “PhaseCut,” an iterative algorithm known for its stability and for its ability to find the global solution, which can be found efficiently and which is robust to noise. The experiment is performed using a laser diode at 532.2 nm, enabling rapid prototyping for future X-ray synchrotron and even free electron laser experiments.

  20. Coherent diffractive imaging using randomly coded masks

    SciTech Connect

    Seaberg, Matthew H.; D'Aspremont, Alexandre; Turner, Joshua J.

    2015-12-07

    We experimentally demonstrate an extension to coherent diffractive imaging that encodes additional information through the use of a series of randomly coded masks, removing the need for typical object-domain constraints while guaranteeing a unique solution to the phase retrieval problem. Phase retrieval is performed using a numerical convex relaxation routine known as “PhaseCut,” an iterative algorithm known for its stability and for its ability to find the global solution, which can be found efficiently and which is robust to noise. The experiment is performed using a laser diode at 532.2 nm, enabling rapid prototyping for future X-ray synchrotron and even free electron laser experiments.

  1. Coding depth perception from image defocus.

    PubMed

    Supèr, Hans; Romeo, August

    2014-12-01

    As a result of the spider experiments in Nagata et al. (2012), it was hypothesized that the depth perception mechanisms of these animals should be based on how much images are defocused. In the present paper, assuming that relative chromatic aberrations or blur radii values are known, we develop a formulation relating the values of these cues to the actual depth distance. Taking into account the form of the resulting signals, we propose the use of latency coding from a spiking neuron obeying Izhikevich's 'simple model'. If spider jumps can be viewed as approximately parabolic, some estimates allow for a sensory-motor relation between the time to the first spike and the magnitude of the initial velocity of the jump.
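
    The latency-coding idea—mapping a defocus-derived cue to the time of a neuron's first spike—can be sketched with Izhikevich's 'simple model'. The snippet below uses the standard regular-spiking parameters from Izhikevich (2003); the linear mapping from blur cue to input current is an assumption for illustration, not the relation derived in the paper.

```python
import numpy as np

def time_to_first_spike(I, T=200.0, dt=0.25,
                        a=0.02, b=0.2, c=-65.0, d=8.0):
    """Latency (ms) of the first spike of an Izhikevich 'simple model'
    neuron driven by a constant input current I (regular-spiking params)."""
    v, u = -65.0, b * -65.0
    t = 0.0
    while t < T:
        # Euler integration of the two model equations
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike threshold reached
            return t
        t += dt
    return np.inf              # no spike within the window

if __name__ == "__main__":
    # Hypothetical mapping: larger blur-radius cue -> larger input current
    for blur_cue in (0.5, 1.0, 2.0, 4.0):
        I = 4.0 + 3.0 * blur_cue   # assumed sensory gain, for illustration
        print(f"cue={blur_cue:.1f}  first spike at {time_to_first_spike(I):6.2f} ms")
```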

  2. Method of optical image coding by time integration

    NASA Astrophysics Data System (ADS)

    Evtikhiev, Nikolay N.; Starikov, Sergey N.; Cheryomkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.

    2012-06-01

    A method of optical image coding by time integration is proposed. In the proposed method, coding is accomplished by shifting the object image over the photosensor area of a digital camera during registration, which results in an optically calculated convolution of the original image with the shift trajectory. As opposed to optical coding methods based on diffractive optical elements, the described method can be implemented in totally incoherent light. The method was preliminarily tested using an LC monitor for image display and shifting: the object image is shifted by displaying a video whose frames show the image to be encoded at different locations on the LC monitor screen while the camera registers it. Optical encoding and numerical decoding of test images were performed successfully. A more practical experimental implementation of the method, using an LCOS SLM (Holoeye PLUTO VIS), was also realized. The object images to be encoded were formed in monochromatic, spatially incoherent light. Shifting of the object image over the camera photosensor area was accomplished by displaying a video of frames containing blazed gratings on the LCOS SLM; each blazed grating deflects the light reflected from the SLM at a different angle. Results of optical image coding and of the numerical restoration of the encoded images are presented, and the experimental results are compared with numerical modeling. Optical image coding with time integration could be used for accessible quality estimation of optical coding with diffractive optical elements, or as an independent optical coding method that can be implemented in incoherent light.
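
    Because the camera integrates the shifted image over the exposure, the encoded frame is, up to noise, the convolution of the original image with the shift trajectory. The sketch below simulates that encode step and a regularized inverse-filter decode, assuming periodic boundaries and a hypothetical zig-zag trajectory; the actual experiments used an LC monitor or an LCOS SLM in incoherent light.

```python
import numpy as np

def encode_by_time_integration(image, trajectory):
    """Simulate time-integration coding: the camera integrates the image
    shifted along `trajectory`, which equals convolution of the image with
    the trajectory's point-spread kernel (periodic boundaries assumed)."""
    kernel = np.zeros_like(image, dtype=float)
    for dy, dx in trajectory:
        kernel[dy % image.shape[0], dx % image.shape[1]] += 1.0
    kernel /= kernel.sum()
    coded = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))
    return coded, kernel

def decode(encoded, kernel, eps=1e-3):
    """Numerically decode by regularized (Wiener-like) inverse filtering."""
    K = np.fft.fft2(kernel)
    return np.real(np.fft.ifft2(np.fft.fft2(encoded) * np.conj(K) /
                                (np.abs(K) ** 2 + eps)))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.random((128, 128))
    # Hypothetical zig-zag shift trajectory (pixel offsets over time)
    traj = [(i, (i * 3) % 17) for i in range(25)]
    coded, k = encode_by_time_integration(img, traj)
    restored = decode(coded, k)
    print("restoration RMSE:", float(np.sqrt(np.mean((restored - img) ** 2))))
```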

  3. Parallel blind deconvolution of astronomical images based on the fractal energy ratio of the image and regularization of the point spread function

    NASA Astrophysics Data System (ADS)

    Jia, Peng; Cai, Dongmei; Wang, Dong

    2014-11-01

    A parallel blind deconvolution algorithm is presented. The algorithm contains the constraints of the point spread function (PSF) derived from the physical process of the imaging. Additionally, in order to obtain an effective restored image, the fractal energy ratio is used as an evaluation criterion to estimate the quality of the image. This algorithm is fine-grained parallelized to increase the calculation speed. Results of numerical experiments and real experiments indicate that this algorithm is effective.

  4. Design of Pel Adaptive DPCM coding based upon image partition

    NASA Astrophysics Data System (ADS)

    Saitoh, T.; Harashima, H.; Miyakawa, H.

    1982-01-01

    A Pel Adaptive DPCM coding system based on image partition is developed which possesses coding characteristics superior to those of the Block Adaptive DPCM coding system. This method uses multiple DPCM coding loops and nonhierarchical cluster analysis. It is found that the coding performances of the Pel Adaptive DPCM coding method differ depending on the subject images. The Pel Adaptive DPCM designed using these methods is shown to yield a maximum performance advantage of 2.9 dB for the Girl and Couple images and 1.5 dB for the Aerial image, although no advantage was obtained for the moon image. These results show an improvement over the optimally designed Block Adaptive DPCM coding method proposed by Saito et al. (1981).

  5. New Methods for Lossless Image Compression Using Arithmetic Coding.

    ERIC Educational Resources Information Center

    Howard, Paul G.; Vitter, Jeffrey Scott

    1992-01-01

    Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…

  6. A formal link of anticipatory mental imaging with fractal features of biological time

    NASA Astrophysics Data System (ADS)

    Bounias, Michel; Bonaly, André

    2001-06-01

    Previous works have supported the proposition that biological organisms are endowed with perceptive functions based on fixed points in mental chaining sequences (Bounias and Bonaly, 1997). Earlier conjectures proposed that memory could be fractal (Dubois, 1990; Bonaly, 1989, 1994) and that biological time, standing at the intersection of the arrows of past and future events, exhibits some similarity with the construction of a Koch-like structure (Bonaly, 2000). A formal examination of the biological system of perception now shows that the perception of time occurs at the intersection of two consecutive fixed-point sequences. Therefore, time-flow is mapped by sequences of fixed points, each of which is the convergence value of a sequence of neuronal configurations. Since the latter are indexed by the ordered sequences of closed Poincaré sections determining the physical arrow of time (Bonaly and Bounias, 1995), there exists a surjective Lipschitz-Hölder mapping of physical time onto system-perceived time. The succession of consecutive fixed points of the perceptive sequence in turn constitutes a sequence whose properties account for the apparent continuity of time-perception, while at the same time satisfying the basic nonlinearity of time as a general parameter. A generator polygon is shown to be constituted by four sides: (i) the interval between two consecutive bursts of perception provides the base, with a projection paralleling the arrow of physical time; (ii) the top is constituted by the sequence of repeated fixed points accounting for the mental image obtained from the first burst and maintained up to the formation of the next fixed point; (iii) the first lateral side is the difference (Lk-1 to Lk) between the measures (L) of the neuronal chains leading to the first image a(uk); and (iv) the second lateral side is the difference of measure between Lk and Lk+1. The equation of the system is therefore of the incursive type, and this

  7. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder is used for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  8. Two Fibonacci P-code based image scrambling algorithms

    NASA Astrophysics Data System (ADS)

    Zhou, Yicong; Agaian, Sos; Joyner, Valencia M.; Panetta, Karen

    2008-02-01

    Image scrambling is used to make images visually unrecognizable such that unauthorized users have difficulty decoding the scrambled image to access the original image. This article presents two new image scrambling algorithms based on the Fibonacci p-code, a parametric sequence. The first algorithm works in the spatial domain and the second in the frequency domain (including the JPEG domain). A parameter, p, is used as a security key and has many possible choices to guarantee the high security of the scrambled images. The presented algorithms can be implemented for encoding/decoding in both full and partial image scrambling, and can be used in real-time applications, such as image data hiding and encryption. Examples of image scrambling are provided. Computer simulations demonstrate that the presented methods also perform well under common image attacks such as cutting (data loss), compression, and noise. The new scrambling methods can be applied to grey-level images and to the three color components of color images. A new Lucas p-code is also introduced. Images scrambled with the Fibonacci p-code are compared with the scrambling results of the classic Fibonacci sequence and the Lucas p-code, demonstrating that the classic Fibonacci sequence is a special case of the Fibonacci p-code and showing how the Fibonacci p-code and Lucas p-code scrambling results differ.
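
    The parametric sequence at the heart of these algorithms can be generated directly from its recurrence. The sketch below uses one common definition of the Fibonacci p-code, F(n) = F(n-1) + F(n-p-1) with F(0) = 1 and F(n < 0) = 0, so that p = 1 reproduces the classic Fibonacci sequence; the scrambling algorithms themselves are not reproduced here.

```python
def fibonacci_p(n_terms, p):
    """First n_terms of the Fibonacci p-code sequence, using the recurrence
    F(n) = F(n-1) + F(n-p-1), with F(n < 0) = 0 and F(0) = 1
    (one common definition; p = 1 gives the classic Fibonacci sequence)."""
    seq = []
    for n in range(n_terms):
        if n == 0:
            seq.append(1)
        else:
            prev = seq[n - 1]
            lag = seq[n - p - 1] if n - p - 1 >= 0 else 0
            seq.append(prev + lag)
    return seq

if __name__ == "__main__":
    for p in (0, 1, 2, 3):   # the parameter p acts as the security key
        print(f"p={p}: {fibonacci_p(10, p)}")
```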

  9. PynPoint code for exoplanet imaging

    NASA Astrophysics Data System (ADS)

    Amara, A.; Quanz, S. P.; Akeret, J.

    2015-04-01

    We announce the public release of PynPoint, a Python package that we have developed for analysing exoplanet data taken with the angular differential imaging observing technique. In particular, PynPoint is designed to model the point spread function of the central star and to subtract its flux contribution to reveal nearby faint companion planets. The current version of the package does this correction by using a principal component analysis method to build a basis set for modelling the point spread function of the observations. We demonstrate the performance of the package by reanalysing publicly available data on the exoplanet β Pictoris b, which consists of close to 24,000 individual image frames. We show that PynPoint is able to analyse this typical data in roughly 1.5 min on a Mac Pro, when the number of images is reduced by co-adding in sets of 5. The main computational work, the calculation of the Singular-Value-Decomposition, parallelises well as a result of a reliance on the SciPy and NumPy packages. For this calculation the peak memory load is 6 GB, which can be run comfortably on most workstations. A simpler calculation, by co-adding over 50, takes 3 s with a peak memory usage of 600 MB. This can be performed easily on a laptop. In developing the package we have modularised the code so that we will be able to extend functionality in future releases, through the inclusion of more modules, without it affecting the users application programming interface. We distribute the PynPoint package under GPLv3 licence through the central PyPI server, and the documentation is available online (http://pynpoint.ethz.ch).
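
    The central processing step—building a PCA basis of the image stack and subtracting the resulting PSF model—can be sketched with plain NumPy. The snippet below mirrors the idea, not PynPoint's actual API; the Gaussian "star" stack is synthetic.

```python
import numpy as np

def pca_psf_subtract(frames, n_components=5):
    """Subtract a PCA model of the stellar PSF from a stack of frames.

    frames : (n_frames, ny, nx) stack (e.g. an ADI sequence)
    Returns residual frames with the low-rank PSF contribution removed.
    This mirrors the idea behind PynPoint's processing, not its API.
    """
    n, ny, nx = frames.shape
    X = frames.reshape(n, ny * nx)
    X = X - X.mean(axis=0)                    # remove the mean frame
    # Principal components of the stack via SVD
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_components]                 # (k, ny*nx) PSF basis
    model = X @ basis.T @ basis               # projection onto the basis
    return (X - model).reshape(n, ny, nx)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    psf = np.exp(-((np.indices((64, 64)) - 32) ** 2).sum(0) / 40.0)
    stack = psf[None] * rng.uniform(0.9, 1.1, (40, 1, 1))   # varying star
    stack += 1e-3 * rng.normal(size=stack.shape)            # detector noise
    residuals = pca_psf_subtract(stack, n_components=3)
    print("residual std:", float(residuals.std()))
```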

  10. Fractal morphology, imaging and mass spectrometry of single aerosol particles in flight.

    PubMed

    Loh, N D; Hampton, C Y; Martin, A V; Starodub, D; Sierra, R G; Barty, A; Aquila, A; Schulz, J; Lomb, L; Steinbrener, J; Shoeman, R L; Kassemeyer, S; Bostedt, C; Bozek, J; Epp, S W; Erk, B; Hartmann, R; Rolles, D; Rudenko, A; Rudek, B; Foucar, L; Kimmel, N; Weidenspointner, G; Hauser, G; Holl, P; Pedersoli, E; Liang, M; Hunter, M S; Hunter, M M; Gumprecht, L; Coppola, N; Wunderer, C; Graafsma, H; Maia, F R N C; Ekeberg, T; Hantke, M; Fleckenstein, H; Hirsemann, H; Nass, K; White, T A; Tobias, H J; Farquar, G R; Benner, W H; Hau-Riege, S P; Reich, C; Hartmann, A; Soltau, H; Marchesini, S; Bajt, S; Barthelmess, M; Bucksbaum, P; Hodgson, K O; Strüder, L; Ullrich, J; Frank, M; Schlichting, I; Chapman, H N; Bogan, M J

    2012-06-27

    The morphology of micrometre-size particulate matter is of critical importance in fields ranging from toxicology to climate science, yet these properties are surprisingly difficult to measure in the particles' native environment. Electron microscopy requires collection of particles on a substrate; visible light scattering provides insufficient resolution; and X-ray synchrotron studies have been limited to ensembles of particles. Here we demonstrate an in situ method for imaging individual sub-micrometre particles to nanometre resolution in their native environment, using intense, coherent X-ray pulses from the Linac Coherent Light Source free-electron laser. We introduced individual aerosol particles into the pulsed X-ray beam, which is sufficiently intense that diffraction from individual particles can be measured for morphological analysis. At the same time, ion fragments ejected from the beam were analysed using mass spectrometry, to determine the composition of single aerosol particles. Our results show the extent of internal dilation symmetry of individual soot particles subject to non-equilibrium aggregation, and the surprisingly large variability in their fractal dimensions. More broadly, our methods can be extended to resolve both static and dynamic morphology of general ensembles of disordered particles. Such general morphology has implications in topics such as solvent accessibilities in proteins, vibrational energy transfer by the hydrodynamic interaction of amino acids, and large-scale production of nanoscale structures by flame synthesis.

  11. Recent advances in CZT strip detectors and coded mask imagers

    NASA Astrophysics Data System (ADS)

    Matteson, J. L.; Gruber, D. E.; Heindl, W. A.; Pelling, M. R.; Peterson, L. E.; Rothschild, R. E.; Skelton, R. T.; Hink, P. L.; Slavis, K. R.; Binns, W. R.; Tumer, T.; Visser, G.

    1999-09-01

    The UCSD, WU, UCR and Nova collaboration has made significant progress on the necessary techniques for coded mask imaging of gamma-ray bursts: position sensitive CZT detectors with good energy resolution, ASIC readout, coded mask imaging, and background properties at balloon altitudes. Results on coded mask imaging techniques appropriate for wide field imaging and localization of gamma-ray bursts are presented, including a shadowgram and deconvolved image taken with a prototype detector/ASIC and MURA mask. This research was supported by NASA Grants NAG5-5111, NAG5-5114, and NGT5-50170.

  12. Analysis of the fractal dimension of volcano geomorphology through Synthetic Aperture Radar (SAR) amplitude images acquired in C and X band.

    NASA Astrophysics Data System (ADS)

    Pepe, S.; Di Martino, G.; Iodice, A.; Manzo, M.; Pepe, A.; Riccio, D.; Ruello, G.; Sansosti, E.; Tizzani, P.; Zinno, I.

    2012-04-01

    In the last two decades several aspects relevant to volcanic activity have been analyzed in terms of fractal parameters that effectively describe natural objects geometry. More specifically, these researches have been aimed at the identification of (1) the power laws that governed the magma fragmentation processes, (2) the energy of explosive eruptions, and (3) the distribution of the associated earthquakes. In this paper, the study of volcano morphology via satellite images is dealt with; in particular, we use the complete forward model developed by some of the authors (Di Martino et al., 2012) that links the stochastic characterization of amplitude Synthetic Aperture Radar (SAR) images to the fractal dimension of the imaged surfaces, modelled via fractional Brownian motion (fBm) processes. Based on the inversion of such a model, a SAR image post-processing has been implemented (Di Martino et al., 2010), that allows retrieving the fractal dimension of the observed surfaces, dictating the distribution of the roughness over different spatial scales. The fractal dimension of volcanic structures has been related to the specific nature of materials and to the effects of active geodynamic processes. Hence, the possibility to estimate the fractal dimension from a single amplitude-only SAR image is of fundamental importance for the characterization of volcano structures and, moreover, can be very helpful for monitoring and crisis management activities in case of eruptions and other similar natural hazards. The implemented SAR image processing performs the extraction of the point-by-point fractal dimension of the scene observed by the sensor, providing - as an output product - the map of the fractal dimension of the area of interest. In this work, such an analysis is performed on Cosmo-SkyMed, ERS-1/2 and ENVISAT images relevant to active stratovolcanoes in different geodynamic contexts, such as Mt. Somma-Vesuvio, Mt. Etna, Vulcano and Stromboli in Southern Italy, Shinmoe

  13. FRACTAL DIMENSION OF GALAXY ISOPHOTES

    SciTech Connect

    Thanki, Sandip; Rhee, George; Lepp, Stephen E-mail: grhee@physics.unlv.edu

    2009-09-15

    In this paper we investigate the use of the fractal dimension of galaxy isophotes in galaxy classification. We have applied two different methods for determining fractal dimensions to the isophotes of elliptical and spiral galaxies derived from CCD images. We conclude that fractal dimension alone is not a reliable tool but that combined with other parameters in a neural net algorithm the fractal dimension could be of use. In particular, we have used three parameters to segregate the ellipticals and lenticulars from the spiral galaxies in our sample. These three parameters are the correlation fractal dimension D_corr, the difference between the correlation fractal dimension and the capacity fractal dimension, D_corr - D_cap, and, thirdly, the B - V color of the galaxy.
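
    The correlation fractal dimension D_corr used here can be estimated from the pair-distance statistics of the isophote pixels. The sketch below applies a Grassberger-Procaccia-style estimator to a synthetic circular "isophote" (where the expected dimension is about 1); the radii and point set are illustrative choices, not those of the paper.

```python
import numpy as np

def correlation_dimension(points, radii):
    """Estimate the correlation fractal dimension D_corr of a point set
    from the slope of log C(r) vs log r, where C(r) is the fraction of
    point pairs closer than r (a Grassberger-Procaccia-style estimate)."""
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    n = len(points)
    pairs = n * (n - 1)
    C = [np.count_nonzero((d < r) & (d > 0)) / pairs for r in radii]
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

if __name__ == "__main__":
    # Synthetic "isophote": pixels on a circle, so D_corr should be close to 1
    t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
    iso = np.column_stack([100 + 40 * np.cos(t), 100 + 40 * np.sin(t)])
    print("D_corr estimate:", round(correlation_dimension(iso, radii=(2, 4, 8, 16)), 2))
```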

  14. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  15. Coded excitation for diverging wave cardiac imaging: a feasibility study

    NASA Astrophysics Data System (ADS)

    Zhao, Feifei; Tong, Ling; He, Qiong; Luo, Jianwen

    2017-02-01

    Diverging wave (DW) based cardiac imaging has gained increasing interest in recent years given its capacity to achieve ultrahigh frame rate. However, the signal-to-noise ratio (SNR), contrast, and penetration depth of the resulting B-mode images are typically low as DWs spread energy over a large region. Coded excitation is known to be capable of increasing the SNR and penetration for ultrasound imaging. The aim of this study was therefore to test the feasibility of applying coded excitation in DW imaging to improve the corresponding SNR, contrast and penetration depth. To this end, two types of codes, i.e. a linear frequency modulated chirp code and a set of complementary Golay codes were tested in three different DW imaging schemes, i.e. 1 angle DW transmit without compounding, 3 and 5 angles DW transmits with coherent compounding. The performances (SNR, contrast ratio (CR), contrast-to-noise ratio (CNR), and penetration) of different imaging schemes were investigated by means of simulations and in vitro experiments. As for benchmark, corresponding DW imaging schemes with regular pulsed excitation as well as the conventional focused imaging scheme were also included. The results showed that the SNR was improved by about 10 dB using coded excitation while the penetration depth was increased by 2.5 cm and 1.8 cm using chirp code and Golay codes, respectively. The CNR and CR gains varied with the depth for different DW schemes using coded excitations. Specifically, for non-compounded DW imaging schemes, the gain in the CR was about 5 dB and 3 dB while the gain in the CNR was about 4.5 dB and 3.5 dB at larger depths using chirp code and Golay codes, respectively. For compounded imaging schemes, using coded excitation, the gain in the penetration and contrast were relatively smaller compared to non-compounded ones. Overall, these findings indicated the feasibility of coded excitation in improving the image quality of DW imaging. Preliminary in vivo cardiac images

  16. Coded excitation plane wave imaging for shear wave motion detection.

    PubMed

    Song, Pengfei; Urban, Matthew W; Manduca, Armando; Greenleaf, James F; Chen, Shigao

    2015-07-01

    Plane wave imaging has greatly advanced the field of shear wave elastography thanks to its ultrafast imaging frame rate and the large field-of-view (FOV). However, plane wave imaging also has decreased penetration due to lack of transmit focusing, which makes it challenging to use plane waves for shear wave detection in deep tissues and in obese patients. This study investigated the feasibility of implementing coded excitation in plane wave imaging for shear wave detection, with the hypothesis that coded ultrasound signals can provide superior detection penetration and shear wave SNR compared with conventional ultrasound signals. Both phase encoding (Barker code) and frequency encoding (chirp code) methods were studied. A first phantom experiment showed an approximate penetration gain of 2 to 4 cm for the coded pulses. Two subsequent phantom studies showed that all coded pulses outperformed the conventional short imaging pulse by providing superior sensitivity to small motion and robustness to weak ultrasound signals. Finally, an in vivo liver case study on an obese subject (body mass index = 40) demonstrated the feasibility of using the proposed method for in vivo applications, and showed that all coded pulses could provide higher SNR shear wave signals than the conventional short pulse. These findings indicate that by using coded excitation shear wave detection, one can benefit from the ultrafast imaging frame rate and large FOV provided by plane wave imaging while preserving good penetration and shear wave signal quality, which is essential for obtaining robust shear elasticity measurements of tissue.
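
    The SNR gain of coded excitation comes from transmitting a long encoded pulse and compressing it on receive with a matched filter. The sketch below illustrates this with a Barker-13 phase code and a single simulated scatterer; the sampling rate, center frequency, and noise level are arbitrary choices, not those of the study.

```python
import numpy as np

BARKER_13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def coded_pulse(code, cycles_per_chip=2, fs=40e6, f0=5e6):
    """Build a phase-encoded transmit pulse: each code chip is a short
    tone burst whose polarity follows the Barker sequence."""
    n = int(round(cycles_per_chip * fs / f0))
    t = np.arange(n) / fs
    chip = np.sin(2 * np.pi * f0 * t)
    return np.concatenate([c * chip for c in code])

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    tx = coded_pulse(BARKER_13)
    # Simulated echo from a single scatterer at sample 1200, buried in noise
    rx = np.zeros(4000)
    rx[1200:1200 + tx.size] += 0.05 * tx
    rx += rng.normal(scale=0.05, size=rx.size)
    # Pulse compression: matched filtering with the time-reversed pulse
    compressed = np.convolve(rx, tx[::-1], mode="full")
    lag = int(np.argmax(np.abs(compressed))) - (tx.size - 1)
    print("scatterer located near sample:", lag)   # expected near 1200
```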

  17. Fractals in biology and medicine

    NASA Technical Reports Server (NTRS)

    Havlin, S.; Buldyrev, S. V.; Goldberger, A. L.; Mantegna, R. N.; Ossadnik, S. M.; Peng, C. K.; Simons, M.; Stanley, H. E.

    1995-01-01

    Our purpose is to describe some recent progress in applying fractal concepts to systems of relevance to biology and medicine. We review several biological systems characterized by fractal geometry, with a particular focus on the long-range power-law correlations found recently in DNA sequences containing noncoding material. Furthermore, we discuss the finding that the exponent alpha quantifying these long-range correlations ("fractal complexity") is smaller for coding than for noncoding sequences. We also discuss the application of fractal scaling analysis to the dynamics of heartbeat regulation, and report the recent finding that the normal heart is characterized by long-range "anticorrelations" which are absent in the diseased heart.

  19. Relationship between necrotic patterns in glioblastoma and patient survival: fractal dimension and lacunarity analyses using magnetic resonance imaging.

    PubMed

    Liu, Shuai; Wang, Yinyan; Xu, Kaibin; Wang, Zheng; Fan, Xing; Zhang, Chuanbao; Li, Shaowu; Qiu, Xiaoguang; Jiang, Tao

    2017-08-16

    Necrosis is a hallmark feature of glioblastoma (GBM). This study investigated the prognostic role of necrotic patterns in GBM using fractal dimension (FD) and lacunarity analyses of magnetic resonance imaging (MRI) data and evaluated the role of lacunarity in the biological processes leading to necrosis. We retrospectively reviewed clinical and MRI data of 95 patients with GBM. FD and lacunarity of the necrosis on MRI were calculated by fractal analysis and subjected to survival analysis. We also performed gene ontology analysis in 32 patients with available RNA-seq data. Univariate analysis revealed that FD < 1.56 and lacunarity > 0.46 significantly correlated with poor progression-free survival (p = 0.006 and p = 0.012, respectively) and overall survival (p = 0.008 and p = 0.005, respectively). Multivariate analysis revealed that both parameters were independent factors for unfavorable progression-free survival (p = 0.001 and p = 0.015, respectively) and overall survival (p = 0.002 and p = 0.007, respectively). Gene ontology analysis revealed that genes positively correlated with lacunarity were involved in the suppression of apoptosis and necrosis-associated biological processes. We demonstrate that the fractal parameters of necrosis in GBM can predict patient survival and are associated with the biological processes of tumor necrosis.

  20. Efficient image compression scheme based on differential coding

    NASA Astrophysics Data System (ADS)

    Zhu, Li; Wang, Guoyou; Liu, Ying

    2007-11-01

    Embedded zerotree wavelet (EZW) and Set Partitioning in Hierarchical Trees (SPIHT) coding, introduced by J. M. Shapiro and Amir Said, are very effective and are widely used in many fields. In this study, a brief explanation of the principles of SPIHT is first provided, followed by several experimentally motivated improvements to the SPIHT algorithm. 1) To address redundancy among the coefficients in the wavelet domain, we propose a differential method to reduce it during coding. 2) Based on the characteristics of the coefficient distribution in each subband, we adjust the sorting pass and optimize the differential coding in order to reduce redundant coding within each subband. 3) The coding results, computed at a given threshold, show that with differential coding the compression ratio is higher and the quality of the reconstructed image is greatly improved: at 0.5 bpp (bits per pixel), the PSNR (peak signal-to-noise ratio) of the reconstructed image exceeds that of standard SPIHT by 0.2-0.4 dB.

  1. Rank minimization code aperture design for spectrally selective compressive imaging.

    PubMed

    Arguello, Henry; Arce, Gonzalo R

    2013-03-01

    A new code aperture design framework for multiframe code aperture snapshot spectral imaging (CASSI) system is presented. It aims at the optimization of code aperture sets such that a group of compressive spectral measurements is constructed, each with information from a specific subset of bands. A matrix representation of CASSI is introduced that permits the optimization of spectrally selective code aperture sets. Furthermore, each code aperture set forms a matrix such that rank minimization is used to reduce the number of CASSI shots needed. Conditions for the code apertures are identified such that a restricted isometry property in the CASSI compressive measurements is satisfied with higher probability. Simulations show higher quality of spectral image reconstruction than that attained by systems using Hadamard or random code aperture sets.

  2. Fractal Geometry of Rocks

    SciTech Connect

    Radlinski, A.P.; Radlinska, E.Z.; Agamalian, M.; Wignall, G.D.; Lindner, P.; Randl, O.G.

    1999-04-01

    The analysis of small- and ultra-small-angle neutron scattering data for sedimentary rocks shows that the pore-rock fabric interface is a surface fractal (D_s = 2.82) over 3 orders of magnitude of the length scale and 10 orders of magnitude in intensity. The fractal dimension and scatterer size obtained from scanning electron microscopy image processing are consistent with neutron scattering data. © 1999 The American Physical Society.

  3. A Double-Minded Fractal

    ERIC Educational Resources Information Center

    Simoson, Andrew J.

    2009-01-01

    This article presents a fun activity of generating a double-minded fractal image for a linear algebra class once the idea of rotation and scaling matrices are introduced. In particular the fractal flip-flops between two words, depending on the level at which the image is viewed. (Contains 5 figures.)

  5. Coded aperture imaging in X- and gamma-ray astronomy

    NASA Astrophysics Data System (ADS)

    Caroli, E.; Stephen, J. B.; Di Cocco, G.; Natalucci, L.; Spizzichino, A.

    1987-09-01

    The principles and design considerations of coded aperture imaging systems for high energy astronomy in the X-ray to gamma-ray range are reviewed. A mask consisting of an array of opaque and transparent elements set between the source fluxes and a position-sensitive detection plane is employed for image formation. Direct deconvolution techniques are discussed. Coded aperture mask designs considered include Fresnel zone plates, random pinhole masks, nonredundant and uniformly redundant arrays, and pseudonoise product masks. Various astronomical applications of coded aperture imaging systems are discussed, including the University of Birmingham coded mask X-ray telescope, the Energetic X-ray Imaging and Timing Experiment intended for the XTE satellite, and the ZEBRA coded mask telescope.
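
    The encode/decode principle behind these masks can be simulated compactly: each point source casts a shifted copy of the mask onto the detector, and the image is recovered by correlating the shadowgram with a decoding array. The sketch below uses a random binary mask and periodic boundaries for simplicity; real instruments use URA/MURA-type patterns with better sidelobe properties.

```python
import numpy as np

def periodic_convolve(a, b):
    """2D circular convolution via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def periodic_correlate(a, b):
    """2D circular cross-correlation via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n = 64
    mask = (rng.random((n, n)) < 0.5).astype(float)  # open/closed elements
    sky = np.zeros((n, n))                           # two point sources
    sky[10, 20], sky[40, 45] = 1.0, 0.6
    # Each source casts a shifted copy of the mask onto the detector plane
    shadowgram = periodic_convolve(sky, mask)
    # Balanced decoding array: +1 for open, -1 for closed elements
    decoder = 2.0 * mask - 1.0
    reconstruction = periodic_correlate(shadowgram, decoder)
    peak = tuple(int(i) for i in
                 np.unravel_index(np.argmax(reconstruction), reconstruction.shape))
    print("brightest reconstructed source at:", peak)  # expected (10, 20)
```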

  6. Practicing the Code of Ethics, finding the image of God.

    PubMed

    Hoglund, Barbara A

    2013-01-01

    The Code of Ethics for Nurses gives a professional obligation to practice in a compassionate and respectful way that is unaffected by the attributes of the patient. This article explores the concept "made in the image of God" and the complexities inherent in caring for those perceived as exhibiting distorted images of God. While the Code provides a professional standard consistent with a biblical worldview, human nature impacts the ability to consistently act congruently with the Code. Strategies and nursing interventions that support development of practice from a biblical worldview and the Code of Ethics for Nurses are presented.

  7. Interactive QR code beautification with full background image embedding

    NASA Astrophysics Data System (ADS)

    Lin, Lijian; Wu, Song; Liu, Sijiang; Jiang, Bo

    2017-06-01

    The QR (Quick Response) code is a kind of two-dimensional barcode that was first developed in the automotive industry. Nowadays, QR codes are widely used in commercial applications such as product promotion, mobile payment, and product information management. Traditional QR codes that conform to the international standard are reliable and fast to decode, but they lack the aesthetic appearance needed to convey visual information to customers. In this work, we present a novel interactive method for generating aesthetic QR codes. Given the information to be encoded and an image to be used as the full QR code background, our method accepts interactive user strokes as hints to remove undesired parts of the QR code modules, relying on the QR code error correction mechanism and background color thresholds. Compared with previous approaches, our method follows the intention of the QR code designer and can therefore achieve more visually pleasing results while keeping high machine readability.

  8. Coded Aperture Imaging for Fluorescent X-rays-Biomedical Applications

    SciTech Connect

    Haboub, Abdel; MacDowell, Alastair; Marchesini, Stefano; Parkinson, Dilworth

    2013-06-01

    Employing a coded aperture pattern in front of a pixelated charge-coupled device (CCD) detector allows imaging of fluorescent X-rays (6-25 keV) emitted from samples irradiated with X-rays. Coded apertures encode the angular direction of X-rays and allow for a large numerical aperture X-ray imaging system. An algorithm was developed to generate the self-supported Non-Two-Holes-Touching (NTHT) coded aperture pattern. Algorithms to reconstruct the X-ray image from the recorded encoded pattern were developed by means of modeling and confirmed by experiments. Samples were irradiated with monochromatic synchrotron X-ray radiation, and fluorescent X-rays from several different test metal samples were imaged through the newly developed coded aperture imaging system. By choice of the exciting energy, the different metals were speciated.

  9. MR image compression using a wavelet transform coding algorithm.

    PubMed

    Angelidis, P A

    1994-01-01

    We present here a technique for MR image compression. It is based on a transform coding scheme using the wavelet transform and vector quantization. Experimental results show that the method offers high compression ratios with low degradation of the image quality. The technique is expected to be particularly useful wherever storing and transmitting large numbers of images is necessary.

  10. Progressive Image Coding by Hierarchical Linear Approximation.

    ERIC Educational Resources Information Center

    Wu, Xiaolin; Fang, Yonggang

    1994-01-01

    Proposes a scheme of hierarchical piecewise linear approximation as an adaptive image pyramid. A progressive image coder comes naturally from the proposed image pyramid. The new pyramid is semantically more powerful than regular tessellation but syntactically simpler than free segmentation. This compromise between adaptability and complexity…

  11. The fractal menger sponge and Sierpinski carpet as models for reservoir rock/pore systems: I. ; Theory and image analysis of Sierpinski carpets

    SciTech Connect

    Garrison, J.R., Jr.; Pearn, W.C.; von Rosenberg, D. W. )

    1992-01-01

    In this paper reservoir rock/pore systems are considered natural fractal objects and modeled as and compared to the regular fractal Menger Sponge and Sierpinski Carpet. The physical properties of a porous rock are, in part, controlled by the geometry of the pore system. The rate at which a fluid or electrical current can travel through the pore system of a rock is controlled by the path along which it must travel. This path is a subset of the overall geometry of the pore system. Reservoir rocks exhibit self-similarity over a range of length scales suggesting that fractal geometry offers a means of characterizing these complex objects. The overall geometry of a rock/pore system can be described, conveniently and concisely, in terms of effective fractal dimensions. The rock/pore system is modeled as the fractal Menger Sponge. A cross section through the rock/pore system, such as an image of a thin-section of a rock, is modeled as the fractal Sierpinski Carpet, which is equivalent to the face of the Menger Sponge.
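
    The Sierpinski carpet used as a model here has similarity dimension log 8 / log 3 ≈ 1.893, and a finite iteration of it recovers that value under box counting. The sketch below generates the carpet and checks the dimension numerically; it is an illustration of the fractal model, not of the paper's rock-image analysis.

```python
import numpy as np

def sierpinski_carpet(level):
    """Binary image of a Sierpinski carpet after `level` subdivisions."""
    carpet = np.ones((1, 1), dtype=bool)
    for _ in range(level):
        n = carpet.shape[0]
        new = np.zeros((3 * n, 3 * n), dtype=bool)
        for i in range(3):
            for j in range(3):
                if (i, j) != (1, 1):            # remove the central square
                    new[i * n:(i + 1) * n, j * n:(j + 1) * n] = carpet
        carpet = new
    return carpet

def box_count_dimension(binary, sizes):
    """Box-counting dimension estimate from the log-log slope."""
    counts = []
    for s in sizes:
        h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
        boxes = binary[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    carpet = sierpinski_carpet(5)               # 243 x 243 pixels
    est = box_count_dimension(carpet, sizes=(1, 3, 9, 27, 81))
    print(f"estimated D = {est:.3f}, theory log8/log3 = {np.log(8)/np.log(3):.3f}")
```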

  12. Hybrid coded aperture and Compton imaging using an active mask

    NASA Astrophysics Data System (ADS)

    Schultz, L. J.; Wallace, M. S.; Galassi, M. C.; Hoover, A. S.; Mocko, M.; Palmer, D. M.; Tornga, S. R.; Kippen, R. M.; Hynes, M. V.; Toolin, M. J.; Harris, B.; McElroy, J. E.; Wakeford, D.; Lanza, R. C.; Horn, B. K. P.; Wehe, D. K.

    2009-09-01

    The trimodal imager (TMI) images gamma-ray sources from a mobile platform using both coded aperture (CA) and Compton imaging (CI) modalities. In this paper we will discuss development and performance of image reconstruction algorithms for the TMI. In order to develop algorithms in parallel with detector hardware we are using a GEANT4 [J. Allison, K. Amako, J. Apostolakis, H. Araujo, P.A. Dubois, M. Asai, G. Barrand, R. Capra, S. Chauvie, R. Chytracek, G. Cirrone, G. Cooperman, G. Cosmo, G. Cuttone, G. Daquino, et al., IEEE Trans. Nucl. Sci. NS-53 (1) (2006) 270] based simulation package to produce realistic data sets for code development. The simulation code incorporates detailed detector modeling, contributions from natural background radiation, and validation of simulation results against measured data. Maximum likelihood algorithms for both imaging methods are discussed, as well as a hybrid imaging algorithm wherein CA and CI information is fused to generate a higher fidelity reconstruction.

  13. Skin cancer texture analysis of OCT images based on Haralick, fractal dimension and the complex directional field features

    NASA Astrophysics Data System (ADS)

    Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.

    2016-04-01

    Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the capability of OCT to detect such changes using a computer texture analysis method based on Haralick texture features, fractal dimension, and the complex directional field, computed from different tissues. These features were used to identify spatial characteristics that differentiate healthy tissue from various skin cancers in cross-section OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing; in this paper, an interval type-II fuzzy anisotropic diffusion algorithm was used for speckle noise reduction in OCT images. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity evaluated in different directions. A box-counting method is applied to compute the fractal dimension of the investigated tissues. Additionally, we used the complex directional field, calculated by the local gradient methodology, to increase the assessment quality of the diagnostic method. The complex directional field (as well as the "classical" directional field) can help describe an image as a set of directions. Considering that malignant tissue grows anisotropically, principal grooves may be observed in dermoscopic images, which suggests the possible existence of principal directions in OCT images. Our results suggest that the described texture features may provide useful information to differentiate pathological from healthy patients. The problem of distinguishing melanoma from nevi is addressed in this work thanks to the large quantity of experimental data (143 OCT images, including tumors such as basal cell carcinoma (BCC), malignant melanoma (MM), and nevi). We obtain a sensitivity of about 90% and a specificity of about 85%. Further research is warranted to determine how this approach may be used to select the regions of interest automatically.

  14. Perceptually lossless coding of digital monochrome ultrasound images

    NASA Astrophysics Data System (ADS)

    Wu, David; Tan, Damian M.; Griffiths, Tania; Wu, Hong Ren

    2005-07-01

    A preliminary investigation of encoding monochrome ultrasound images with a novel perceptually lossless coder is presented. Based on the JPEG 2000 coding framework, the proposed coder employs a vision model to identify and remove visually insignificant/irrelevant information. Current simulation results have shown coding performance gains over the JPEG compliant LOCO lossless and JPEG 2000 lossless coders without any perceivable distortion.

  15. Low Rank Sparse Coding for Image Classification

    DTIC Science & Technology

    2013-12-08

  16. Fractal astronomy.

    NASA Astrophysics Data System (ADS)

    Beech, M.

    1989-02-01

    The author discusses some of the more recent research on fractal astronomy and results presented in several astronomical studies. First, the large-scale structure of the universe is considered, while in another section one drops in scale to examine some of the smallest bodies in our solar system; the comets and meteoroids. The final section presents some thoughts on what influence the fractal ideology might have on astronomy, focusing particularly on the question recently raised by Kadanoff, "Fractals: where's the physics?"

  17. Spatially-varying IIR filter banks for image coding

    NASA Technical Reports Server (NTRS)

    Chung, Wilson C.; Smith, Mark J. T.

    1992-01-01

    This paper reports on the application of spatially variant infinite impulse response (IIR) filter banks to subband image coding. The new filter bank is based on computationally efficient recursive polyphase decompositions that dynamically change in response to the input signal. In the absence of quantization, reconstruction can be made exact. However, by proper choice of an adaptation scheme, we show that subband image coding based on time varying filter banks can yield improvement over the use of conventional filter banks.

  18. Multichannel Linear Predictive Coding of Color Images,

    DTIC Science & Technology

    1984-01-01

  19. Target Detection Using Fractal Geometry

    NASA Technical Reports Server (NTRS)

    Fuller, J. Joseph

    1991-01-01

    The concepts and theory of fractal geometry were applied to the problem of segmenting a 256 x 256 pixel image so that manmade objects could be extracted from natural backgrounds. The two most important measurements necessary to extract these manmade objects were fractal dimension and lacunarity. Provision was made to pass the manmade portion to a lookup table for subsequent identification. A computer program was written to construct cloud backgrounds of fractal dimensions which were allowed to vary between 2.2 and 2.8. Images of three model space targets were combined with these backgrounds to provide a data set for testing the validity of the approach. Once the data set was constructed, computer programs were written to extract estimates of the fractal dimension and lacunarity on 4 x 4 pixel subsets of the image. It was shown that for clouds of fractal dimension 2.7 or less, appropriate thresholding on fractal dimension and lacunarity yielded a 64 x 64 edge-detected image with all or most of the cloud background removed. These images were enhanced by an erosion and dilation to provide the final image passed to the lookup table. While the ultimate goal was to pass the final image to a neural network for identification, this work shows the applicability of fractal geometry to the problems of image segmentation, edge detection and separating a target of interest from a natural background.
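
    Of the two measurements the study relies on, lacunarity is the less standard; it is commonly estimated with a gliding-box algorithm as the ratio of the second moment of box masses to the squared first moment. The sketch below implements that estimator and contrasts a clumpy random "cloud" with a compact "target" patch; both test images are synthetic stand-ins, not the study's data.

```python
import numpy as np

def gliding_box_lacunarity(binary, box_size):
    """Gliding-box lacunarity of a binary image for one box size:
    Lambda = <M^2> / <M>^2, where M is the box mass (foreground count)."""
    b = binary.astype(float)
    # Sum over every box_size x box_size window via a 2D cumulative sum
    c = np.cumsum(np.cumsum(np.pad(b, ((1, 0), (1, 0))), axis=0), axis=1)
    s = box_size
    masses = (c[s:, s:] - c[:-s, s:] - c[s:, :-s] + c[:-s, :-s]).ravel()
    mean = masses.mean()
    return float((masses ** 2).mean() / mean ** 2) if mean > 0 else np.inf

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    # Hypothetical stand-ins: clumpy "cloud" texture vs. a compact "target"
    cloud = rng.random((64, 64)) > 0.7
    target = np.zeros((64, 64), dtype=bool)
    target[24:40, 24:40] = True
    for name, img in (("cloud", cloud), ("target", target)):
        print(name, [round(gliding_box_lacunarity(img, s), 3) for s in (4, 8, 16)])
```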

  20. The transience of virtual fractals.

    PubMed

    Taylor, R P

    2012-01-01

    Artists have a long and fruitful tradition of exploiting electronic media to convert static images into dynamic images that evolve with time. Fractal patterns serve as an example: computers allow the observer to zoom in on virtual images and so experience the endless repetition of patterns in a manner that cannot be matched using static images. This year's featured cover artist, Susan Lowedermilk, instead plans to employ persistence of human vision to bring virtual fractals to life. This will be done by incorporating her prints of fractal patterns into zoetropes and phenakistoscopes.

  1. Coded Excitation Plane Wave Imaging for Shear Wave Motion Detection

    PubMed Central

    Song, Pengfei; Urban, Matthew W.; Manduca, Armando; Greenleaf, James F.; Chen, Shigao

    2015-01-01

    Plane wave imaging has greatly advanced the field of shear wave elastography thanks to its ultrafast imaging frame rate and the large field-of-view (FOV). However, plane wave imaging also has decreased penetration due to lack of transmit focusing, which makes it challenging to use plane waves for shear wave detection in deep tissues and in obese patients. This study investigated the feasibility of implementing coded excitation in plane wave imaging for shear wave detection, with the hypothesis that coded ultrasound signals can provide superior detection penetration and shear wave signal-to-noise-ratio (SNR) compared to conventional ultrasound signals. Both phase encoding (Barker code) and frequency encoding (chirp code) methods were studied. A first phantom experiment showed an approximate penetration gain of 2-4 cm for the coded pulses. Two subsequent phantom studies showed that all coded pulses outperformed the conventional short imaging pulse by providing superior sensitivity to small motion and robustness to weak ultrasound signals. Finally, an in vivo liver case study on an obese subject (Body Mass Index = 40) demonstrated the feasibility of using the proposed method for in vivo applications, and showed that all coded pulses could provide higher SNR shear wave signals than the conventional short pulse. These findings indicate that by using coded excitation shear wave detection, one can benefit from the ultrafast imaging frame rate and large FOV provided by plane wave imaging while preserving good penetration and shear wave signal quality, which is essential for obtaining robust shear elasticity measurements of tissue. PMID:26168181

  2. Decomposition of the optical transfer function: wavefront coding imaging systems

    NASA Astrophysics Data System (ADS)

    Muyo, Gonzalo; Harvey, Andy R.

    2005-10-01

    We describe the mapping of the optical transfer function (OTF) of an incoherent imaging system into a geometrical representation. We show that for defocused traditional and wavefront-coded systems the OTF can be represented as a generalized Cornu spiral. This representation provides a physical insight into the way in which wavefront coding can increase the depth of field of an imaging system and permits analytical quantification of salient OTF parameters, such as the depth of focus, the location of nulls, and amplitude and phase modulation of the wavefront-coding OTF.
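
    The defocus invariance that wavefront coding buys can be seen directly from the pupil autocorrelation that defines the OTF. The sketch below evaluates a 1D OTF at a single spatial frequency for a traditional pupil and for a cubic-phase pupil under increasing defocus; the cubic-phase strength, defocus values, and sampling are illustrative choices, not parameters from the paper.

```python
import numpy as np

def otf_1d(alpha, psi, n=1024, shift_frac=0.25):
    """1D OTF value of a pupil with cubic phase (strength alpha) and
    defocus psi, computed as the normalized autocorrelation of the pupil
    function, sampled at one spatial frequency (shift_frac of cutoff)."""
    x = np.linspace(-1, 1, n)
    pupil = np.exp(1j * (alpha * x**3 + psi * x**2))
    shift = int(shift_frac * n)
    overlap = pupil[shift:] * np.conj(pupil[:-shift])
    return np.sum(overlap) / n

if __name__ == "__main__":
    for psi in (0.0, 5.0, 10.0):                   # increasing defocus
        h_trad = abs(otf_1d(alpha=0.0, psi=psi))   # traditional pupil
        h_wfc = abs(otf_1d(alpha=30.0, psi=psi))   # cubic-phase (wavefront coded)
        print(f"defocus {psi:4.1f}: |OTF| traditional {h_trad:.3f}, "
              f"wavefront-coded {h_wfc:.3f}")
```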

  3. JPEG backward compatible coding of omnidirectional images

    NASA Astrophysics Data System (ADS)

    Řeřábek, Martin; Upenik, Evgeniy; Ebrahimi, Touradj

    2016-09-01

    Omnidirectional image and video, also known as 360 image and 360 video, are gaining in popularity with the recent growth in availability of cameras and displays that can cope with such type of content. As omnidirectional visual content represents a larger set of information about the scene, it typically requires a much larger volume of information. Efficient compression of such content is therefore important. In this paper, we review the state of the art in compression of omnidirectional visual content, and propose a novel approach to encode omnidirectional images in such a way that they are still viewable on legacy JPEG decoders.

  4. Code-modulated interferometric imaging system using phased arrays

    NASA Astrophysics Data System (ADS)

    Chauhan, Vikas; Greene, Kevin; Floyd, Brian

    2016-05-01

    Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which can allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power combine incoming signals prior to digitization, orthogonal code-modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
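
    The following hedged sketch illustrates only the algebra of code-modulation and demultiplexing, not the authors' hardware or two-bit code set: two element signals are tagged with orthogonal Walsh (Hadamard) codes, power-combined into one channel, square-law detected, and their correlation (visibility) is recovered by demodulating with the product code, which is itself an orthogonal code. The code length and snapshot amplitudes are assumptions, and SciPy is assumed available.

```python
# Hedged sketch of code-modulated interferometry: recovering a correlation after
# squaring by demodulating with the product of two orthogonal Walsh codes.
import numpy as np
from scipy.linalg import hadamard

N = 64                      # code length (assumed)
H = hadamard(N)             # rows are mutually orthogonal Walsh/Hadamard codes
c1, c2 = H[1], H[2]         # codes applied to elements 1 and 2 by their phase shifters
c12 = c1 * c2               # product code: itself another orthogonal Walsh code

# Narrowband snapshot amplitudes at the two elements (assumed values); their
# product is the visibility an interferometer wants to measure.
x1, x2 = 1.0, 0.8

combined = c1 * x1 + c2 * x2        # power-combined single hardware path
detected = combined ** 2            # square-law detection

# The cross term 2*c1*c2*x1*x2 is isolated by demodulating with the product code.
visibility = np.mean(detected * c12) / 2
print(f"recovered x1*x2 = {visibility:.3f} (expected {x1 * x2:.3f})")
```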

  5. Aggregating local image descriptors into compact codes.

    PubMed

    Jégou, Hervé; Perronnin, Florent; Douze, Matthijs; Sánchez, Jorge; Pérez, Patrick; Schmid, Cordelia

    2012-09-01

    This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. Searching a 100 million image data set takes about 250 ms on one processor core.

  6. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate for the sampling-based scheme to outperform the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on the cubic-spline interpolation (CSI) is proposed. This proposed algorithm adaptively selects the image coding method from CSI-based modified JPEG and standard JPEG under a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that compared with the standard JPEG, the proposed algorithm can show better performance at low bit rates and maintain the same performance at high bit rates.

  7. Efficient coding of wavelet trees and its applications in image coding

    NASA Astrophysics Data System (ADS)

    Zhu, Bin; Yang, En-hui; Tewfik, Ahmed H.; Kieffer, John C.

    1996-02-01

    We propose in this paper a novel lossless tree coding algorithm. The technique is a direct extension of the bisection method, the simplest case of the complexity reduction method proposed recently by Kieffer and Yang, that has been used for lossless data string coding. A reduction rule is used to obtain the irreducible representation of a tree, and this irreducible tree is entropy-coded instead of the input tree itself. This reduction is reversible, and the original tree can be fully recovered from its irreducible representation. More specifically, we search for equivalent subtrees from top to bottom. When equivalent subtrees are found, a special symbol is appended to the value of the root node of the first equivalent subtree, the root node of the second subtree is assigned the index which points to the first subtree, and all other nodes in the second subtree are removed. This procedure is repeated until the tree cannot be reduced further. This yields the irreducible tree or irreducible representation of the original tree. The proposed method can effectively remove the redundancy in an image, and results in more efficient compression. It is proved that when the tree size approaches infinity, the proposed method offers the optimal compression performance. It is generally more efficient in practice than direct coding of the input tree. The proposed method can be directly applied to code wavelet trees in non-iterative wavelet-based image coding schemes. A modified method is also proposed for coding wavelet zerotrees in embedded zerotree wavelet (EZW) image coding. Although its coding efficiency is slightly reduced, the modified version maintains exact control of bit rate and the scalability of the bit stream in EZW coding.
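
    A minimal sketch of the top-down subtree-reduction idea described above, replacing repeated subtrees with a reference to their first occurrence; the tuple representation and the "REF" marker are illustrative choices, not the paper's encoding or its entropy coder.

```python
# Minimal sketch (not the paper's exact algorithm) of reducing a tree by replacing
# repeated subtrees with references to their first occurrence, so only the
# irreducible structure needs to be entropy-coded.
# A tree node is (value, [children]); a reduced duplicate becomes ("REF", index).

def reduce_tree(root):
    first_seen = {}      # canonical form -> reference index of first occurrence
    counter = [0]

    def canonical(node):
        value, children = node
        return (value, tuple(canonical(c) for c in children))

    def walk(node):
        key = canonical(node)
        if key in first_seen and node[1]:          # an equivalent subtree exists
            return ("REF", first_seen[key])        # keep only a pointer to it
        idx = counter[0]
        counter[0] += 1
        first_seen.setdefault(key, idx)
        value, children = node
        return (value, [walk(c) for c in children])

    return walk(root)

# Example: the two identical subtrees rooted at 'B' collapse into one reference.
sub = ("B", [("x", []), ("y", [])])
tree = ("A", [sub, sub, ("z", [])])
print(reduce_tree(tree))
```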

  8. Subband Image Coding with Jointly Optimized Quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1995-01-01

    An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.

  9. Infrared imaging with a wavefront-coded singlet lens.

    PubMed

    Muyo, Gonzalo; Singh, Amritpal; Andersson, Mathias; Huckridge, David; Wood, Andrew; Harvey, Andrew R

    2009-11-09

    We describe the use of wavefront coding for the mitigation of optical aberrations in a thermal imaging system. Diffraction-limited imaging is demonstrated with a simple singlet which enables an approximate halving in length and mass of the optical system compared to an equivalent two-element lens.

  10. Measuring Fractality

    PubMed Central

    Stadnitski, Tatjana

    2012-01-01

    When investigating fractal phenomena, the following questions are fundamental for the applied researcher: (1) What are essential statistical properties of 1/f noise? (2) Which estimators are available for measuring fractality? (3) Which measurement instruments are appropriate and how are they applied? The purpose of this article is to give clear and comprehensible answers to these questions. First, theoretical characteristics of a fractal pattern (self-similarity, long memory, power law) and the related fractal parameters (the Hurst coefficient, the scaling exponent α, the fractional differencing parameter d of the autoregressive fractionally integrated moving average methodology, the power exponent β of the spectral analysis) are discussed. Then, estimators of fractal parameters from different software packages commonly used by applied researchers (R, SAS, SPSS) are introduced and evaluated. Advantages, disadvantages, and constraints of the popular estimators (d^ML, power spectral density, detrended fluctuation analysis, signal summation conversion) are illustrated by elaborate examples. Finally, crucial steps of fractal analysis (plotting time series data, autocorrelation, and spectral functions; performing stationarity tests; choosing an adequate estimator; estimating fractal parameters; distinguishing fractal processes from short-memory patterns) are demonstrated with empirical time series. PMID:22586408
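
    As a pointer to what one of the named estimators computes, the sketch below is a bare-bones detrended fluctuation analysis (DFA) estimate of the scaling exponent; the scale set and first-order detrending are assumptions, and real analyses would rely on the vetted R/SAS/SPSS implementations discussed in the article.

```python
# Hedged example: a minimal DFA estimate of the scaling exponent alpha
# (alpha ~ 0.5 for white noise, ~ 1.0 for 1/f noise).
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64, 128)):
    y = np.cumsum(x - np.mean(x))              # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        f2 = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # Slope of log F(s) versus log s is the DFA exponent alpha.
    alpha = np.polyfit(np.log(scales), np.log(flucts), 1)[0]
    return alpha

rng = np.random.default_rng(0)
print("white noise alpha ~", round(dfa_alpha(rng.standard_normal(4096)), 2))
```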

  11. Exploring Fractals.

    ERIC Educational Resources Information Center

    Dewdney, A. K.

    1991-01-01

    Explores the subject of fractal geometry focusing on the occurrence of fractal-like shapes in the natural world. Topics include iterated functions, chaos theory, the Lorenz attractor, logistic maps, the Mandelbrot set, and mini-Mandelbrot sets. Provides appropriate computer algorithms, as well as further sources of information. (JJK)

  12. Fractal Math.

    ERIC Educational Resources Information Center

    Gray, Shirley B.

    1992-01-01

    This article traces the historical development of fractal geometry from early in the twentieth century and offers an explanation of the mathematics behind the recursion formulas and their representations within computer graphics. Also included are the fundamentals behind programing for fractal graphics in the C Language with appropriate…

  13. Measuring fractality.

    PubMed

    Stadnitski, Tatjana

    2012-01-01

    When investigating fractal phenomena, the following questions are fundamental for the applied researcher: (1) What are essential statistical properties of 1/f noise? (2) Which estimators are available for measuring fractality? (3) Which measurement instruments are appropriate and how are they applied? The purpose of this article is to give clear and comprehensible answers to these questions. First, theoretical characteristics of a fractal pattern (self-similarity, long memory, power law) and the related fractal parameters (the Hurst coefficient, the scaling exponent α, the fractional differencing parameter d of the autoregressive fractionally integrated moving average methodology, the power exponent β of the spectral analysis) are discussed. Then, estimators of fractal parameters from different software packages commonly used by applied researchers (R, SAS, SPSS) are introduced and evaluated. Advantages, disadvantages, and constraints of the popular estimators ([Formula: see text], power spectral density, detrended fluctuation analysis, signal summation conversion) are illustrated by elaborate examples. Finally, crucial steps of fractal analysis (plotting time series data, autocorrelation, and spectral functions; performing stationarity tests; choosing an adequate estimator; estimating fractal parameters; distinguishing fractal processes from short-memory patterns) are demonstrated with empirical time series.

  14. Adaptive coded aperture imaging: progress and potential future applications

    NASA Astrophysics Data System (ADS)

    Gottesman, Stephen R.; Isser, Abraham; Gigioli, George W., Jr.

    2011-09-01

    Interest in Adaptive Coded Aperture Imaging (ACAI) continues to grow as the optical and systems engineering community becomes increasingly aware of ACAI's potential benefits in the design and performance of both imaging and non-imaging systems, such as good angular resolution (IFOV), wide distortion-free field of view (FOV), excellent image quality, and lightweight construction. In this presentation we first review the accomplishments made over the past five years, then expand on previously published work to show how replacement of conventional imaging optics with coded apertures can lead to a reduction in system size and weight. We also present a trade space analysis of key design parameters of coded apertures and review potential applications as replacement for traditional imaging optics. Results are presented, based on last year's work, from our investigation into the trade space of IFOV, resolution, effective focal length, and wavelength of incident radiation for coded aperture architectures. Finally we discuss the potential application of coded apertures for replacing objective lenses of night vision goggles (NVGs).

  15. Imaging The Genetic Code of a Virus

    NASA Astrophysics Data System (ADS)

    Graham, Jenna; Link, Justin

    2013-03-01

    Atomic Force Microscopy (AFM) has allowed scientists to explore physical characteristics of nano-scale materials. However, the challenges that come with such an investigation are rarely expressed. In this research project a method was developed to image the well-studied DNA of the virus lambda phage. Through testing and integrating several sample preparations described in the literature, a quality image of lambda phage DNA can be obtained. In our experiment, we developed a technique using the Veeco Autoprobe CP AFM and a mica substrate with an appropriate absorption buffer of HEPES and NiCl2. This presentation will focus on the development of a procedure to image lambda phage DNA at Xavier University. Funding: The John A. Hauck Foundation and Xavier University.

  16. Coded aperture imaging for fluorescent x-rays

    SciTech Connect

    Haboub, A.; MacDowell, A. A.; Marchesini, S.; Parkinson, D. Y.

    2014-06-15

    We employ a coded aperture pattern in front of a pixelated charge-coupled device detector to image fluorescent x-rays (6–25 keV) from samples irradiated with synchrotron radiation. Coded apertures encode the angular direction of x-rays and, given a known source plane, allow for a large numerical aperture x-ray imaging system. An algorithm to design and fabricate the free-standing No-Two-Holes-Touching aperture pattern was developed. The algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of a ray tracing technique and confirmed by experiments on standard samples.

  17. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  18. Results from the coded aperture neutron imaging system.

    SciTech Connect

    Brubaker, Erik; Steele, John T.; Brennan, James S.; Marleau, Peter

    2010-11-01

    Because of their penetrating power, energetic neutrons and gamma rays ({approx}1 MeV) offer the best possibility of detecting highly shielded or distant special nuclear material (SNM). Of these, fast neutrons offer the greatest advantage due to their very low and well understood natural background. We are investigating a new approach to fast-neutron imaging - a coded aperture neutron imaging system (CANIS). Coded aperture neutron imaging should offer a highly efficient solution for improved detection speed, range, and sensitivity. We have demonstrated fast neutron and gamma ray imaging with several different configurations of coded mask patterns and detectors, including an 'active' mask that is composed of neutron detectors. Here we describe our prototype detector and present some initial results from laboratory tests and demonstrations.

  19. Results from the Coded Aperture Neutron Imaging System (CANIS).

    SciTech Connect

    Brubaker, Erik; Steele, John T.; Brennan, James S.; Hilton, Nathan R.; Marleau, Peter

    2010-11-01

    Because of their penetrating power, energetic neutrons and gamma rays ({approx}1 MeV) offer the best possibility of detecting highly shielded or distant special nuclear material (SNM). Of these, fast neutrons offer the greatest advantage due to their very low and well understood natural background. We are investigating a new approach to fast-neutron imaging - a coded aperture neutron imaging system (CANIS). Coded aperture neutron imaging should offer a highly efficient solution for improved detection speed, range, and sensitivity. We have demonstrated fast neutron and gamma ray imaging with several different configurations of coded mask patterns and detectors, including an 'active' mask that is composed of neutron detectors. Here we describe our prototype detector and present some initial results from laboratory tests and demonstrations.

  1. Vector coding of wavelet-transformed images

    NASA Astrophysics Data System (ADS)

    Zhou, Jun; Zhi, Cheng; Zhou, Yuanhua

    1998-09-01

    Wavelet, as a brand new tool in signal processing, has gained broad recognition. Using the wavelet transform, we can obtain octave-divided frequency bands with specific orientations, which accord well with the properties of the human visual system. In this paper, we discuss the classified vector quantization method for multiresolution-represented images.

  2. Compressive imaging using fast transform coding

    NASA Astrophysics Data System (ADS)

    Thompson, Andrew; Calderbank, Robert

    2016-10-01

    We propose deterministic sampling strategies for compressive imaging based on Delsarte-Goethals frames. We show that these sampling strategies result in multi-scale measurements which can be related to the 2D Haar wavelet transform. We demonstrate the effectiveness of our proposed strategies through numerical experiments.

  3. Coded aperture imaging with uniformly redundant arrays

    DOEpatents

    Fenimore, Edward E.; Cannon, Thomas M.

    1980-01-01

    A system utilizing uniformly redundant arrays to image non-focusable radiation. The uniformly redundant array is used in conjunction with a balanced correlation technique to provide a system with no artifacts such that virtually limitless signal-to-noise ratio is obtained with high transmission characteristics. Additionally, the array is mosaicked to reduce required detector size over conventional array detectors.
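
    The patented system relies on a uniformly redundant array, whose flat periodic autocorrelation is what yields the artifact-free reconstruction. The sketch below substitutes a random binary mask purely to illustrate the encode (periodic convolution with the mask) and decode (balanced correlation with G = 2A - 1) principle, so residual sidelobe artifacts remain; the mask size and scene are assumptions.

```python
# Illustrative sketch only: coded-aperture imaging as periodic convolution with a
# binary mask, decoded by balanced correlation (G = 2A - 1). A random mask is used
# here for simplicity; a true uniformly redundant array would make the periodic
# autocorrelation flat and the reconstruction artifact-free.
import numpy as np

rng = np.random.default_rng(2)
N = 32
A = (rng.random((N, N)) < 0.5).astype(float)     # aperture: 1 = open, 0 = opaque
G = 2 * A - 1                                    # balanced decoding array

# A simple source scene (two point sources of different strengths).
scene = np.zeros((N, N))
scene[8, 10] = 1.0
scene[20, 22] = 0.6

def periodic_correlate(x, k):
    """Circular cross-correlation via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(k))))

# Detector records the scene convolved (periodically) with the mask pattern.
recorded = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(A)))

# Balanced correlation of the shadowgram with G reconstructs the scene
# (up to sidelobe artifacts, absent for a true URA mask).
reconstruction = periodic_correlate(recorded, G)
peak = np.unravel_index(np.argmax(reconstruction), reconstruction.shape)
print("brightest reconstructed pixel:", peak)   # should be the stronger source at (8, 10)
```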

  4. Coded aperture imaging with uniformly redundant arrays

    DOEpatents

    Fenimore, Edward E.; Cannon, Thomas M.

    1982-01-01

    A system utilizing uniformly redundant arrays to image non-focusable radiation. The uniformly redundant array is used in conjunction with a balanced correlation technique to provide a system with no artifacts such that virtually limitless signal-to-noise ratio is obtained with high transmission characteristics. Additionally, the array is mosaicked to reduce required detector size over conventional array detectors.

  5. Quantum image coding with a reference-frame-independent scheme

    NASA Astrophysics Data System (ADS)

    Chapeau-Blondeau, François; Belin, Etienne

    2016-07-01

    For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.

  6. Coded aperture design in mismatched compressive spectral imaging.

    PubMed

    Galvis, Laura; Arguello, Henry; Arce, Gonzalo R

    2015-11-20

    Compressive spectral imaging (CSI) senses a scene by using two-dimensional coded projections such that the number of measurements is far less than that used in spectral scanning-type instruments. An architecture that efficiently implements CSI is the coded aperture snapshot spectral imager (CASSI). A physical limitation of the CASSI is the system resolution, which is determined by the lowest resolution element used in the detector and the coded aperture. Although the final resolution of the system is usually given by the detector, in the CASSI, for instance, the use of a low resolution coded aperture implemented using a digital micromirror device (DMD), which induces the grouping of pixels into superpixels in the detector, is decisive to the final resolution. The mismatch arises from the difference in pitch between the DMD mirrors and the focal plane array (FPA) pixels. A traditional solution to this mismatch consists of grouping several pixels into square features, which underutilizes the DMD and the detector resolution and, therefore, reduces the spatial and spectral resolution of the reconstructed spectral images. This paper presents a model for CASSI which admits the mismatch and permits exploiting the maximum resolution of the coding element and the FPA sensor. A super-resolution algorithm and a synthetic coded aperture are developed in order to solve the mismatch. The mathematical models are verified using a real implementation of CASSI. The results of the experiments show a significant gain in spatial and spectral imaging quality over the traditional pixel grouping technique.

  7. Learning Short Binary Codes for Large-scale Image Retrieval.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proven to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., code lengths usually shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of data, MCR can generate a one-bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.

  8. IMPROVEMENTS IN CODED APERTURE THERMAL NEUTRON IMAGING.

    SciTech Connect

    VANIER,P.E.

    2003-08-03

    A new thermal neutron imaging system has been constructed, based on a 20-cm x 17-cm He-3 position-sensitive detector with spatial resolution better than 1 mm. New compact custom-designed position-decoding electronics are employed, as well as high-precision cadmium masks with Modified Uniformly Redundant Array patterns. Fast Fourier Transform algorithms are incorporated into the deconvolution software to provide rapid conversion of shadowgrams into real images. The system demonstrates the principles for locating sources of thermal neutrons by a stand-off technique, as well as visualizing the shapes of nearby sources. The data acquisition time could potentially be reduced by two orders of magnitude by building larger detectors.

  9. Hybrid Compton camera/coded aperture imaging system

    DOEpatents

    Mihailescu, Lucian [Livermore, CA; Vetter, Kai M [Alameda, CA

    2012-04-10

    A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.

  10. Code aperture optimization for spectrally agile compressive imaging.

    PubMed

    Arguello, Henry; Arce, Gonzalo R

    2011-11-01

    Coded aperture snapshot spectral imaging (CASSI) provides a mechanism for capturing a 3D spectral cube with a single shot 2D measurement. In many applications selective spectral imaging is sought since relevant information often lies within a subset of spectral bands. Capturing and reconstructing all the spectral bands in the observed image cube, to then throw away a large portion of this data, is inefficient. To this end, this paper extends the concept of CASSI to a system admitting multiple shot measurements, which leads not only to higher quality of reconstruction but also to spectrally selective imaging when the sequence of code aperture patterns is optimized. The aperture code optimization problem is shown to be analogous to the optimization of a constrained multichannel filter bank. The optimal code apertures allow the decomposition of the CASSI measurement into several subsets, each having information from only a few selected spectral bands. The rich theory of compressive sensing is used to effectively reconstruct the spectral bands of interest from the measurements. A number of simulations are developed to illustrate the spectral imaging characteristics attained by optimal aperture codes.

  11. A model of PSF estimation for coded mask infrared imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Ao; Jin, Jie; Wang, Qing; Yang, Jingyu; Sun, Yi

    2014-11-01

    The point spread function (PSF) of an imaging system with a coded mask is generally acquired by practical measurement with a calibration light source. As the thermal radiation of coded masks is considerably more severe than in visible imaging systems, which buries the modulation effects of the mask pattern, it is difficult to estimate and evaluate the performance of the mask pattern from measured results. To tackle this problem, a model for infrared imaging systems with masks is presented in this paper. The model is composed of two functional components, the coded mask imaging with ideal focused lenses and the imperfection imaging with practical lenses. Ignoring the thermal radiation, the system's PSF can then be represented by a convolution of the diffraction pattern of the mask with the PSF of the practical lenses. To evaluate the performance of different mask patterns, a set of criteria is designed according to different imaging and recovery methods. Furthermore, imaging results with inclined plane waves are analyzed to obtain the variation of the PSF within the view field. The influence of mask cell size is also analyzed to control the diffraction pattern. Numerical results show that mask patterns for direct imaging systems should have more random structure, while more periodic structure is needed in systems with image reconstruction. By adjusting the combination of random and periodic arrangement, the desired diffraction pattern can be achieved.

  12. Exact mapping from singular value spectrum of a class of fractal images to entanglement spectrum of one-dimensional free fermions

    SciTech Connect

    Matsueda, Hiroaki; Lee, Ching Hua

    2015-03-10

    We examine singular value spectrum of a class of two-dimensional fractal images. We find that the spectra can be mapped onto entanglement spectra of free fermions in one dimension. This exact mapping tells us that the singular value decomposition is a way of detecting a holographic relation between classical and quantum systems.
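
    As an illustrative computation (not the authors' derivation), the sketch below builds a simple self-similar image, a Sierpinski carpet, takes its singular value spectrum, and forms the entanglement-entropy-like quantity from the normalized squared singular values; the fractal choice and recursion depth are assumptions.

```python
# Hedged illustration: singular value spectrum of a self-similar (Sierpinski
# carpet) image and an entanglement-entropy-like quantity built from the
# normalized squared singular values.
import numpy as np

def sierpinski_carpet(level):
    m = np.ones((1, 1))
    for _ in range(level):
        m = np.block([[m, m, m],
                      [m, 0 * m, m],
                      [m, m, m]])
    return m

img = sierpinski_carpet(4)                  # 81 x 81 binary fractal image
s = np.linalg.svd(img, compute_uv=False)
p = s**2 / np.sum(s**2)                     # normalized "entanglement" weights
p = p[p > 1e-12]
entropy = -np.sum(p * np.log(p))
print("largest singular values:", np.round(s[:5], 2))
print("entanglement-like entropy:", round(entropy, 3))
```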

  13. Image coding by way of wavelets

    NASA Technical Reports Server (NTRS)

    Shahshahani, M.

    1993-01-01

    The application of two wavelet transforms to image compression is discussed. It is noted that the Haar transform, with proper bit allocation, has performance that is visually superior to an algorithm based on a Daubechies filter and to the discrete cosine transform based Joint Photographic Experts Group (JPEG) algorithm at compression ratios exceeding 20:1. In terms of the root-mean-square error, the performance of the Haar transform method is basically comparable to that of the JPEG algorithm. The implementation of the Haar transform can be achieved in integer arithmetic, making it very suitable for applications requiring real-time performance.
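
    A minimal sketch of the property highlighted above, one level of 2D Haar analysis carried out entirely in integer arithmetic (pairwise sums and differences); this is not the codec or bit allocation used in the study.

```python
# Minimal sketch of one 2D Haar analysis level in integer arithmetic; the
# unnormalized sums/differences are exactly invertible, which is the property
# that makes an integer real-time implementation possible.
import numpy as np

def haar2d_level(img):
    a = img.astype(np.int64)
    # Horizontal pass: pairwise sums (low-pass) and differences (high-pass).
    L = a[:, 0::2] + a[:, 1::2]
    H = a[:, 0::2] - a[:, 1::2]
    # Vertical pass on both results gives the four subbands.
    LL = L[0::2, :] + L[1::2, :]
    LH = L[0::2, :] - L[1::2, :]
    HL = H[0::2, :] + H[1::2, :]
    HH = H[0::2, :] - H[1::2, :]
    return LL, LH, HL, HH

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(8, 8))
LL, LH, HL, HH = haar2d_level(img)
print("LL subband (unnormalized):\n", LL)
```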

  14. Analysis of LAPAN-IPB image lossless compression using differential pulse code modulation and huffman coding

    NASA Astrophysics Data System (ADS)

    Hakim, P. R.; Permala, R.

    2017-01-01

    LAPAN-A3/IPB satellite is the latest Indonesian experimental microsatellite with remote sensing and earth surveillance missions. The satellite has three optical payloads, which are a multispectral push-broom imager, a digital matrix camera and a video camera. To increase data transmission efficiency, the multispectral imager data can be compressed using either a lossy or lossless compression method. This paper aims to analyze the Differential Pulse Code Modulation (DPCM) method and Huffman coding that are used in LAPAN-IPB satellite image lossless compression. Based on several simulations and analyses that have been done, the current LAPAN-IPB lossless compression algorithm has moderate performance. There are several aspects that can be improved from the current configuration, which are the type of DPCM code used, the type of Huffman entropy-coding scheme, and the use of a sub-image compression method. The key result of this research shows that at least two neighboring pixels should be used for DPCM calculation to increase compression performance. Meanwhile, varying Huffman tables with a sub-image approach could also increase the performance if the on-board computer can support a more complicated algorithm. These results can be used as references in designing the Payload Data Handling System (PDHS) for the upcoming LAPAN-A4 satellite.
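
    A hedged sketch combining the two ingredients named above: a DPCM predictor that uses two neighbouring pixels (left and upper), followed by a Huffman code over the residuals to estimate the achievable bits per pixel. The predictor, test image, and table construction are assumptions; the actual on-board configuration may differ.

```python
# Hedged sketch: two-neighbour DPCM followed by Huffman coding of the residuals.
import heapq
from collections import Counter
import numpy as np

def dpcm_residuals(img):
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    pred[1:, 1:] = (img[1:, :-1] + img[:-1, 1:]) // 2   # (left + up) / 2
    pred[0, 1:] = img[0, :-1]                            # first row: left only
    pred[1:, 0] = img[:-1, 0]                            # first column: up only
    return img - pred

def huffman_code_lengths(symbols):
    """Return {symbol: code length} for a Huffman code built from frequencies."""
    freq = Counter(symbols)
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, nxt, merged))
        nxt += 1
    return heap[0][2]

rng = np.random.default_rng(4)
img = np.cumsum(rng.integers(-2, 3, size=(64, 64)), axis=1) + 128   # smooth-ish test image
res = dpcm_residuals(img).ravel().tolist()
lengths = huffman_code_lengths(res)
freq = Counter(res)
bits = sum(freq[s] * lengths[s] for s in freq)
print(f"average bits/pixel after DPCM + Huffman: {bits / len(res):.2f}")
```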

  15. Fractal analysis for assessing tumour grade in microscopic images of breast tissue

    NASA Astrophysics Data System (ADS)

    Tambasco, Mauro; Costello, Meghan; Newcomb, Chris; Magliocco, Anthony M.

    2007-03-01

    In 2006, breast cancer is expected to continue as the leading form of cancer diagnosed in women, and the second leading cause of cancer mortality in this group. A method that has proven useful for guiding the choice of treatment strategy is the assessment of histological tumor grade. The grading is based upon the mitosis count, nuclear pleomorphism, and tubular formation, and is known to be subject to inter-observer variability. Since cancer grade is one of the most significant predictors of prognosis, errors in grading can affect patient management and outcome. Hence, there is a need to develop a breast cancer-grading tool that is minimally operator dependent to reduce variability associated with the current grading system, and thereby reduce uncertainty that may impact patient outcome. In this work, we explored the potential of a computer-based approach using fractal analysis as a quantitative measure of cancer grade for breast specimens. More specifically, we developed and optimized computational tools to compute the fractal dimension of low- versus high-grade breast sections and found them to be significantly different, 1.3+/-0.10 versus 1.49+/-0.10, respectively (Kolmogorov-Smirnov test, p<0.001). These results indicate that fractal dimension (a measure of morphologic complexity) may be a useful tool for demarcating low- versus high-grade cancer specimens, and has potential as an objective measure of breast cancer grade. Such prognostic value could provide more sensitive and specific information that would reduce inter-observer variability by aiding the pathologist in grading cancers.
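
    A generic box-counting sketch of the fractal dimension measure mentioned above, applied to a toy binary boundary image; the study's segmentation of histology sections and its exact estimator are not reproduced here.

```python
# Hedged sketch: box-counting estimate of fractal dimension for a binary mask.
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()      # boxes containing structure
        counts.append(occupied)
    # The fractal dimension is the negative slope of log N(s) versus log s.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Toy example: a thin circular outline should give a dimension close to 1.
yy, xx = np.mgrid[0:256, 0:256]
r = np.hypot(xx - 128, yy - 128)
outline = np.abs(r - 80) < 1.0
print("estimated fractal dimension:", round(box_counting_dimension(outline), 2))
```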

  16. Efficient block error concealment code for image and video transmission

    NASA Astrophysics Data System (ADS)

    Min, Jungki; Chan, Andrew K.

    1999-05-01

    Image and video compression standards such as JPEG, MPEG, and H.263 are highly sensitive to errors during transmission. Among typical error propagation mechanisms in video compression schemes, loss of block synchronization produces the worst image degradation. Even an error of a single bit in block synchronization may result in data being placed in wrong positions because of spatial shifts. Our proposed efficient block error concealment code (EBECC) virtually guarantees block synchronization and improves coding efficiency by several hundredfold over the error resilient entropy code (EREC), proposed by N. G. Kingsbury and D. W. Redmill, depending on the image format and size. In addition, the EBECC produces slightly better resolution in the reconstructed images or video frames than the EREC. Another important advantage of the EBECC is that it does not require redundancy, in contrast to the EREC, which requires 2-3 percent redundancy. Our preliminary results show the EBECC is 240 times faster than the EREC for encoding and 330 times for decoding, based on the CIF format of the H.263 video coding standard. The EBECC can be used with most of the popular image and video compression schemes such as JPEG, MPEG, and H.263. Additionally, it is especially useful in wireless networks in which the percentage of image and video data is high.

  17. Barker-coded excitation in ophthalmological ultrasound imaging

    PubMed Central

    Zhou, Sheng; Wang, Xiao-Chun; Yang, Jun; Ji, Jian-Jun; Wang, Yan-Qun

    2014-01-01

    High-frequency ultrasound is an attractive means to obtain fine-resolution images of biological tissues for ophthalmologic imaging. To address the tradeoff between axial resolution and detection depth inherent in conventional single-pulse excitation, this study develops a new method which uses 13-bit Barker-coded excitation and a mismatched filter for high-frequency ophthalmologic imaging. A novel imaging platform was designed after evaluating various encoding methods. Simulation and experimental results show that the mismatched filter achieves a much higher output main-lobe-to-side-lobe ratio, 9.7 times that of the matched filter. The coded excitation method has significant advantages over the single-pulse excitation system in terms of a lower MI, a higher resolution, and a deeper detection depth, which improve the quality of ophthalmic tissue imaging. Therefore, this method has great value in scientific applications and the medical market. PMID:25356093

  18. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.

  19. Resolution scalable image coding with reversible cellular automata.

    PubMed

    Cappellari, Lorenzo; Milani, Simone; Cruz-Reyes, Carlos; Calvagno, Giancarlo

    2011-05-01

    In a resolution scalable image coding algorithm, a multiresolution representation of the data is often obtained using a linear filter bank. Reversible cellular automata have been recently proposed as simpler, nonlinear filter banks that produce a similar representation. The original image is decomposed into four subbands, such that one of them retains most of the features of the original image at a reduced scale. In this paper, we discuss the utilization of reversible cellular automata and arithmetic coding for scalable compression of binary and grayscale images. In the binary case, the proposed algorithm that uses simple local rules compares well with the JBIG compression standard, in particular for images where the foreground is made of a simple connected region. For complex images, more efficient local rules based upon the lifting principle have been designed. They provide compression performances very close to or even better than JBIG, depending upon the image characteristics. In the grayscale case, and in particular for smooth images such as depth maps, the proposed algorithm outperforms both the JBIG and the JPEG2000 standards under most coding conditions.

  20. Coded access optical sensor (CAOS) imager and applications

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.

    2016-04-01

    Starting in 2001, we proposed and extensively demonstrated (using a DMD: Digital Micromirror Device) an agile pixel Spatial Light Modulator (SLM)-based optical imager based on single pixel photo-detection (also called a single pixel camera) that is suited for operations with both coherent and incoherent light across broad spectral bands. This imager design operates with the agile pixels programmed in a limited-SNR, time-multiplexed staring mode, in which image irradiance (i.e., intensity) data are acquired one agile pixel at a time across the SLM plane where the incident image radiation is present. Motivated by modern advances in RF wireless, optical wired communications, and electronic signal processing technologies, and building on our prior-art SLM-based optical imager design, we describe, using a surprisingly simple approach, a new imager design called the Coded Access Optical Sensor (CAOS) that is able to alleviate some of the key fundamental limitations of prior imagers. The agile pixel in the CAOS imager can operate in different time-frequency coding modes like Frequency Division Multiple Access (FDMA), Code-Division Multiple Access (CDMA), and Time Division Multiple Access (TDMA). Data from a first CAOS camera demonstration are described along with novel designs of CAOS-based optical instruments for various applications.

  1. Image statistics decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Pitt, G. H., III; Swanson, L.; Yuen, J. H.

    1987-01-01

    It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so the feasibility of it and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.

  2. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    PubMed

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D micro-blocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.

  3. [Quantitative analysis method based on fractal theory for medical imaging of normal brain development in infants].

    PubMed

    Li, Heheng; Luo, Liangping; Huang, Li

    2011-02-01

    The present paper is aimed at studying the fractal spectrum of cerebral computerized tomography in 158 normal infants of different age groups, based on calculations from chaotic theory. The distribution range in the neonatal period was 1.88-1.90 (mean = 1.8913 +/- 0.0064); it reached a stable condition at the level of 1.89-1.90 during 1-12 months of age (mean = 1.8927 +/- 0.0045); the normal range for 1-2 year old infants was 1.86-1.90 (mean = 1.8863 +/- 0.0085); the quantitative value remained invariant within 1.88-1.91 (mean = 1.8958 +/- 0.0083) during 2-3 years of age. ANOVA indicated there was no significant difference between boys and girls (F = 0.243, P > 0.05), but the difference between age groups was significant (F = 8.947, P < 0.001). The fractal dimension of cerebral computerized tomography in normal infants computed by box methods was maintained at an efficient stability from 1.86 to 1.91. This indicates that there exist some attractor modes in pediatric brain development.

  4. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to the “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling due to the different characteristics of HS images in their spectral and shape domain of panchromatic imagery compared to traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector for different spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit distribution of the known pixel vector, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as the additional reference band together with the immediate previous band when we apply the HEVC. Every spectral band of an HS image is treated like it is an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102

  5. New coding concept for fast ultrasound imaging using pulse trains

    NASA Astrophysics Data System (ADS)

    Misaridis, Thanasis; Jensen, Joergen A.

    2002-04-01

    Frame rate in ultrasound imaging can be increased by simultaneous transmission of multiple beams using coded waveforms. However, the achievable degree of orthogonality among coded waveforms is limited in ultrasound, and the image quality degrades unacceptably due to interbeam interference. In this paper, an alternative combined time-space coding approach is undertaken. In the new method all transducer elements are excited with short pulses and the high time-bandwidth (TB) product waveforms are generated acoustically. Each element transmits a short pulse spherical wave with a constant transmit delay from element to element, long enough to assure no pulse overlapping for all depths in the image. Frequency shift keying is used for per element coding. The received signals from a point scatterer are staggered pulse trains which are beamformed for all beam directions and further processed with a bank of matched filters (one for each beam direction). Filtering compresses the pulse train to a single pulse at the scatterer position with a number of spike axial sidelobes. Cancellation of the ambiguity spikes is done by applying additional phase modulation from one emission to the next and summing every two successive images. Simulation results presented for QLFM and Costas spatial encoding schemes show that the proposed method can yield images with range sidelobes down to -45 dB using only two emissions.

  6. Pyramidal Image-Processing Code For Hexagonal Grid

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1990-01-01

    Algorithm based on processing of information on intensities of picture elements arranged in regular hexagonal grid. Called "image pyramid" because image information at each processing level arranged in hexagonal grid having one-seventh number of picture elements of next lower processing level, each picture element derived from hexagonal set of seven nearest-neighbor picture elements in next lower level. At lowest level, fine-resolution of elements of original image. Designed to have some properties of image-coding scheme of primate visual cortex.

  7. Medical image registration using sparse coding of image patches.

    PubMed

    Afzali, Maryam; Ghaffari, Aboozar; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid

    2016-06-01

    Image registration is a basic task in medical image processing applications like group analysis and atlas construction. Similarity measure is a critical ingredient of image registration. Intensity distortion of medical images is not considered in most previous similarity measures. Therefore, in the presence of bias field distortions, they do not generate an acceptable registration. In this paper, we propose a sparse based similarity measure for mono-modal images that considers non-stationary intensity and spatially-varying distortions. The main idea behind this measure is that the aligned image is constructed by an analysis dictionary trained using the image patches. For this purpose, we use "Analysis K-SVD" to train the dictionary and find the sparse coefficients. We utilize image patches to construct the analysis dictionary and then we employ the proposed sparse similarity measure to find a non-rigid transformation using free form deformation (FFD). Experimental results show that the proposed approach is able to robustly register 2D and 3D images in both simulated and real cases. The proposed method outperforms other state-of-the-art similarity measures and decreases the transformation error compared to the previous methods. Even in the presence of bias field distortion, the proposed method aligns images without any preprocessing.

  8. A novel fractal monocular and stereo video codec with object-based functionality

    NASA Astrophysics Data System (ADS)

    Zhu, Shiping; Li, Liyun; Wang, Zaikuo

    2012-12-01

    Based on the classical fractal video compression method, an improved monocular fractal compression method is proposed which includes using a more effective macroblock partition scheme instead of the classical quadtree partition scheme, using improved fast motion estimation to increase the calculation speed, and using homo-I-frames as in H.264. The monocular codec uses the motion compensated prediction (MCP) structure. Stereo fractal video coding is also proposed, which matches each macroblock against two reference frames in the left and right views, increasing the compression ratio and reducing the bit rate/bandwidth needed to transmit compressed video data. The stereo codec combines MCP and disparity compensated prediction. In addition, a new method of object-based fractal video coding is proposed in which each object can be encoded and decoded independently, with a higher compression ratio, higher speed, and lower bit rate/bandwidth when transmitting compressed stereo video data. Experimental results indicate that the proposed monocular method can raise the compression ratio 3.6 to 7.5 times, speed up compression 5.3 to 22.3 times, and improve image quality by 3.81 to 9.24 dB in comparison with circular prediction mapping and non-contractive interframe mapping. The PSNR of the proposed stereo video coding is about 0.17 dB higher than that of the proposed monocular video coding, and 0.69 dB higher than that of JMVC 4.0 on average. Compared with the bit rates of the proposed monocular video coding and JMVC 4.0, the proposed stereo video coding achieves, on average, 2.53 and 21.14 kbps bit rate savings, respectively. The proposed object-based fractal monocular and stereo video coding methods are simple and effective, and they make the applications of fractal monocular and stereo video coding more flexible and practicable.

  9. Feature coding in image classification: a comprehensive study.

    PubMed

    Huang, Yongzhen; Wu, Zifeng; Wang, Liang; Tan, Tieniu

    2014-03-01

    Image classification is a hot topic in computer vision and pattern recognition. Feature coding, as a key component of image classification, has been widely studied over the past several years, and a number of coding algorithms have been proposed. However, there is no comprehensive study concerning the connections between different coding methods, especially how they have evolved. In this paper, we first make a survey on various feature coding methods, including their motivations and mathematical representations, and then exploit their relations, based on which a taxonomy is proposed to reveal their evolution. Further, we summarize the main characteristics of current algorithms, each of which is shared by several coding strategies. Finally, we choose several representatives from different kinds of coding approaches and empirically evaluate them with respect to the size of the codebook and the number of training samples on several widely used databases (15-Scenes, Caltech-256, PASCAL VOC07, and SUN397). Experimental findings firmly justify our theoretical analysis, which is expected to benefit both practical applications and future research.

  10. Coded-Aperture Compton Camera for Gamma-Ray Imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.; Williams, John G.

    2016-02-01

    A novel gamma-ray imaging system is demonstrated, by means of Monte Carlo simulation. Previous designs have used either a coded aperture or Compton scattering system to image a gamma-ray source. By taking advantage of characteristics of each of these systems a new design can be implemented that does not require a pixelated stopping detector. Use of the system is illustrated for a simulated radiation survey in a decontamination and decommissioning operation.

  11. Contour fractal analysis of grains

    NASA Astrophysics Data System (ADS)

    Guida, Giulia; Casini, Francesca; Viggiani, Giulia MB

    2017-06-01

    Fractal analysis has been shown to be useful in image processing to characterise the shape and the grey-scale complexity in different applications spanning from electronic to medical engineering (e.g. [1]). Fractal analysis consists of several methods to assign a dimension and other fractal characteristics to a dataset describing geometric objects. Limited studies have been conducted on the application of fractal analysis to the classification of the shape characteristics of soil grains. The main objective of the work described in this paper is to obtain, from the results of systematic fractal analysis of artificial simple shapes, the characterization of the particle morphology at different scales. The long term objective of the research is to link the microscopic features of granular media with the mechanical behaviour observed in the laboratory and in situ.

  12. Low bit rate coding of Earth science images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1993-01-01

    In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.

  14. The optimal code searching method with an improved criterion of coded exposure for remote sensing image restoration

    NASA Astrophysics Data System (ADS)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-03-01

    Coded exposure photography makes motion de-blurring a well-posed problem. In coded exposure, the integration pattern of light is modulated by opening and closing the shutter within the exposure time, turning the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is critical for coded exposure. In this paper, an improved criterion for optimal code searching is proposed by analyzing the relationship between the code length and the number of ones in the code, and by considering the effect of noise on code selection through an affine noise model. The optimal code is then obtained using a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time required to search for the optimal code decreases with the presented method, and that the restored image has better subjective quality and superior objective evaluation values.
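
    As a rough illustration of this kind of code search (not the authors' exact criterion, and with a simple random search standing in for their genetic algorithm), the Python sketch below scores binary shutter codes by the smallest magnitude of their DFT with an arbitrary variance penalty; all names, weights and parameters here are assumptions.

        # Illustrative sketch only: score binary shutter codes by how well their
        # DFT magnitude avoids near-zero bins (which keeps motion deblurring well
        # conditioned) and search for a good code by random sampling.
        import numpy as np

        def code_score(code: np.ndarray) -> float:
            """Higher is better: minimum DFT magnitude minus a variance penalty."""
            spectrum = np.abs(np.fft.rfft(code))
            return float(spectrum.min() - 0.1 * spectrum.var())   # weights are arbitrary

        def random_search(length=32, n_ones=16, trials=5000, seed=0):
            rng = np.random.default_rng(seed)
            best_code, best_score = None, -np.inf
            for _ in range(trials):
                code = np.zeros(length)
                code[rng.choice(length, size=n_ones, replace=False)] = 1.0
                s = code_score(code)
                if s > best_score:
                    best_code, best_score = code, s
            return best_code, best_score

        if __name__ == "__main__":
            code, score = random_search()
            print("best code:", code.astype(int), "score:", round(score, 3))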

  15. Digital Image Analysis for DETCHIP(®) Code Determination.

    PubMed

    Lyon, Marcus; Wilson, Mark V; Rouhier, Kerry A; Symonsbergen, David J; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E; Sikich, Sharmin M; Jackson, Abby

    2012-08-01

    DETECHIP(®) is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP(®) used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP(®). Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods.

  16. Implementation and operation of three fractal measurement algorithms for analysis of remote-sensing data

    NASA Technical Reports Server (NTRS)

    Jaggi, S.; Quattrochi, Dale A.; Lam, Nina S.-N.

    1993-01-01

    Fractal geometry is increasingly becoming a useful tool for modeling natural phenomena. As an alternative to Euclidean concepts, fractals allow for a more accurate representation of the nature of complexity in natural boundaries and surfaces. The purpose of this paper is to introduce and implement three algorithms in C code for deriving fractal measurements from remotely sensed data. These three methods are: the line-divider method, the variogram method, and the triangular prism method. Remote-sensing data acquired by NASA's Calibrated Airborne Multispectral Scanner (CAMS) are used to compute the fractal dimension using each of the three methods. These data were obtained at a 30 m pixel spatial resolution over a portion of western Puerto Rico in January 1990. A description of the three methods, their implementation in a PC-compatible environment, and some results of applying these algorithms to remotely sensed image data are presented.
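
    A minimal Python sketch of one of the three methods, the triangular prism estimator, is given below (the paper's implementation is in C); the function names are ours, and the convention for converting the log-log slope into a fractal dimension varies between implementations.

        # Minimal sketch of the triangular prism method for estimating the fractal
        # dimension of a gray-scale (elevation-like) image.
        import numpy as np

        def _tri_area(p1, p2, p3):
            return 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))

        def prism_surface_area(z: np.ndarray, step: int) -> float:
            h, w = z.shape
            total = 0.0
            for i in range(0, h - step, step):
                for j in range(0, w - step, step):
                    a = np.array([i,        j,        z[i, j]],               float)
                    b = np.array([i + step, j,        z[i + step, j]],        float)
                    c = np.array([i + step, j + step, z[i + step, j + step]], float)
                    d = np.array([i,        j + step, z[i, j + step]],        float)
                    e = np.array([i + step / 2, j + step / 2,
                                  (a[2] + b[2] + c[2] + d[2]) / 4.0])
                    total += (_tri_area(a, b, e) + _tri_area(b, c, e) +
                              _tri_area(c, d, e) + _tri_area(d, a, e))
            return total

        def triangular_prism_dimension(z, steps=(1, 2, 4, 8, 16)):
            areas = [prism_surface_area(z, s) for s in steps]
            slope = np.polyfit(np.log(steps), np.log(areas), 1)[0]
            return 2.0 - slope        # one common convention; others differ

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            image = rng.random((129, 129)) * 50      # stand-in for an image band
            print("estimated D:", round(triangular_prism_dimension(image), 3))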

  18. Bayesian based fluorescence coded imaging using quantum dots

    NASA Astrophysics Data System (ADS)

    Nishimura, Takahiro; Kimura, Hitoshi; Ogura, Yusuke; Tanida, Jun

    2017-04-01

    Single-molecule localization techniques are effective for resolving fluorescence images at higher resolution. To increase the frame rate, high-density positions of individual fluorescence emitters should be measured. We are studying a Bayesian-based localization method for measuring high-density molecular positions from fluorescence coded images. In this paper, a scheme in which several color quantum dots are aligned with DNA nanostructures is considered. We confirmed experimentally that the proposed method can be applied to fluorescence images of quantum dots, and numerical simulations showed that the positions of the aligned fluorescence emitters at intervals of 80 nm can be measured with small errors.

  19. Adaptive directional lifting-based wavelet transform for image coding.

    PubMed

    Ding, Wenpeng; Wu, Feng; Wu, Xiaolin; Li, Shipeng; Li, Houqiang

    2007-02-01

    We present a novel 2-D wavelet transform scheme of adaptive directional lifting (ADL) in image coding. Instead of alternately applying horizontal and vertical lifting, as in present practice, ADL performs lifting-based prediction in local windows in the direction of high pixel correlation. Hence, it adapts far better to the image orientation features in local windows. The ADL transform is achieved by existing 1-D wavelets and is seamlessly integrated into the global wavelet transform. The predicting and updating signals of ADL can be derived even at the fractional pixel precision level to achieve high directional resolution, while still maintaining perfect reconstruction. To enhance the ADL performance, a rate-distortion optimized directional segmentation scheme is also proposed to form and code a hierarchical image partition adapting to local features. Experimental results show that the proposed ADL-based image coding technique outperforms JPEG 2000 in both PSNR and visual quality, with the improvement up to 2.0 dB on images with rich orientation features.

  20. Image coding based on energy-sorted wavelet packets

    NASA Astrophysics Data System (ADS)

    Kong, Lin-Wen; Lay, Kuen-Tsair

    1995-04-01

    The discrete wavelet transform performs multiresolution analysis, which effectively decomposes a digital image into components with different degrees of detail. In practice, it is usually implemented in the form of filter banks. If the filter banks are cascaded and both the low-pass and the high-pass components are further decomposed, a wavelet packet is obtained. The coefficients of the wavelet packet effectively represent subimages at different resolution levels. In the energy-sorted wavelet-packet decomposition, all subimages in the packet are sorted according to their energies. The most important subimages, as measured by the energy, are preserved and coded. By investigating the histogram of each subimage, it is found that the pixel values are well modelled by the Laplacian distribution. Therefore, Laplacian quantization is applied to quantize the subimages. Experimental results show that the image coding scheme based on wavelet packets achieves high compression ratios while preserving satisfactory image quality.
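
    The subband-energy ranking can be illustrated with a small NumPy sketch using an orthonormal Haar packet split; this is only a schematic stand-in for the coder described above, and the helper names are ours.

        # Sketch of energy-sorted wavelet-packet selection with a 2-D Haar split.
        import numpy as np

        def haar_split(x):
            """One separable Haar step -> LL, LH, HL, HH subbands."""
            a = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)   # rows: low-pass
            d = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)   # rows: high-pass
            return {"LL": (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2),
                    "LH": (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2),
                    "HL": (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2),
                    "HH": (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)}

        def wavelet_packet(x, levels=2, path=""):
            """Full packet tree: every subband is split again down to `levels`."""
            if levels == 0:
                return {path: x}
            leaves = {}
            for name, band in haar_split(x).items():
                leaves.update(wavelet_packet(band, levels - 1, path + name + "/"))
            return leaves

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            img = rng.random((64, 64))
            leaves = wavelet_packet(img, levels=2)
            ranked = sorted(leaves.items(), key=lambda kv: np.sum(kv[1] ** 2),
                            reverse=True)
            for path, band in ranked[:4]:          # keep the most energetic ones
                print(path, "energy =", round(float(np.sum(band ** 2)), 2))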

  1. Sparse representation-based image restoration via nonlocal supervised coding

    NASA Astrophysics Data System (ADS)

    Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng

    2016-10-01

    Sparse representation (SR) and the nonlocal technique (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain a faithful restoration result when they are used independently. To improve the performance, in this paper, a nonlocal supervised coding strategy-based NLT for image restoration is proposed. The novel method has three main contributions. First, to exploit the useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as the supervised weights among patches. Second, a novel objective function is proposed, which integrates supervised weight learning and nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergent, a numerical scheme based on iterative shrinkage thresholding is developed to solve the above underdetermined inverse problem. Extensive experiments validate the effectiveness of the proposed method.
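
    The iterative shrinkage-thresholding step can be illustrated with a minimal ISTA sketch for the plain l1-regularized sparse-coding subproblem; the paper's nonlocally supervised objective is more elaborate, and the dictionary and parameters below are placeholders.

        # Minimal ISTA sketch: min_a 0.5*||x - D a||^2 + lam*||a||_1
        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def ista(x, D, lam=0.1, n_iter=200):
            """Return a sparse code approximately minimizing the l1-regularized fit."""
            step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz constant
            a = np.zeros(D.shape[1])
            for _ in range(n_iter):
                grad = D.T @ (D @ a - x)
                a = soft_threshold(a - step * grad, lam * step)
            return a

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            D = rng.standard_normal((64, 256))         # overcomplete dictionary
            D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
            a_true = np.zeros(256)
            a_true[rng.choice(256, 5, replace=False)] = 1.0
            x = D @ a_true + 0.01 * rng.standard_normal(64)
            a_hat = ista(x, D, lam=0.05)
            print("nonzeros recovered:", int(np.sum(np.abs(a_hat) > 1e-3)))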

  2. Study on scalable coding algorithm for medical image.

    PubMed

    Hongxin, Chen; Zhengguang, Liu; Hongwei, Zhang

    2005-01-01

    According to the characteristics of medical images and the wavelet transform, a scalable coding algorithm is presented, which can be used in image transmission over networks. The wavelet transform makes up for the weaknesses of the DCT transform and is similar to the human visual system. The second generation of wavelet transform, the lifting scheme, can be computed in integer form in several steps, realized by calculations from integer to integer. The lifting scheme can simplify the computing process and increase transform precision. According to the properties of the wavelet sub-bands, wavelet coefficients are organized in order of their importance, so the code stream is formed progressively and is scalable in resolution. Experimental results show that the algorithm can be used effectively in medical image compression and is suitable for remote browsing.
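
    A minimal sketch of an integer-to-integer lifting step (the Haar/S-transform) is shown below; it is only a generic illustration of the lifting idea, not the specific filters used in the paper.

        # Integer-to-integer Haar lifting: exactly reversible with integer arithmetic,
        # which suits lossless or scalable medical-image coding.
        import numpy as np

        def haar_lifting_forward(x):
            x = np.asarray(x, dtype=np.int64)
            a, b = x[0::2], x[1::2]
            d = a - b                   # detail (prediction residual)
            s = b + d // 2              # approximation: floor((a + b) / 2)
            return s, d

        def haar_lifting_inverse(s, d):
            b = s - d // 2
            a = d + b
            out = np.empty(s.size + d.size, dtype=np.int64)
            out[0::2], out[1::2] = a, b
            return out

        if __name__ == "__main__":
            signal = np.array([12, 10, 7, 9, 255, 0, 33, 34])
            s, d = haar_lifting_forward(signal)
            print("approx:", s, "detail:", d)
            print("perfect reconstruction:",
                  np.array_equal(haar_lifting_inverse(s, d), signal))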

  3. Wavelet domain textual coding of Ottoman script images

    NASA Astrophysics Data System (ADS)

    Gerek, Oemer N.; Cetin, Enis A.; Tewfik, Ahmed H.

    1996-02-01

    Image coding using wavelet transform, DCT, and similar transform techniques is well established. On the other hand, these coding methods neither take into account the special characteristics of the images in a database nor are they suitable for fast database search. In this paper, the digital archiving of Ottoman printings is considered. Ottoman documents are printed in Arabic letters. Witten et al. describe a scheme based on finding the characters in binary document images and encoding the positions of the repeated characters. This method efficiently compresses document images and is suitable for database research, but it cannot be applied to Ottoman or Arabic documents, as the concept of a character is different in Ottoman and Arabic. Typically, one has to deal with compound structures consisting of a group of letters. Therefore, the matching criterion is defined according to those compound structures. Furthermore, the text images are gray-tone or color images for Ottoman scripts, for reasons described in the paper. In our method the compound structure matching is carried out in the wavelet domain, which reduces the search space and increases the compression ratio. In addition to the wavelet transformation, which corresponds to a linear subband decomposition, we also used a nonlinear subband decomposition. The filters in the nonlinear subband decomposition have the property of preserving edges in the low-resolution subband image.

  4. On the Gray Image of Cyclic Codes over Finite Chain Rings

    NASA Astrophysics Data System (ADS)

    Qian, Jianfa; Ma, Wenping; Wang, Xinmei

    We introduce (1-γ)-cyclic codes and cyclic codes over the finite chain ring R. We prove that the Gray image of a linear (1-γ)-cyclic code over R of length n is a distance-invariant quasi-cyclic code over F_{p^k}. We also prove that if (n, p) = 1, then every code over F_{p^k} which is the Gray image of a cyclic code over R of length n is equivalent to a quasi-cyclic code.

  5. Computer-Aided Image Analysis and Fractal Synthesis in the Quantitative Evaluation of Tumor Aggressiveness in Prostate Carcinomas.

    PubMed

    Waliszewski, Przemyslaw

    2016-01-01

    The subjective evaluation of tumor aggressiveness is a cornerstone of the contemporary tumor pathology. A large intra- and interobserver variability is a known limiting factor of this approach. This fundamental weakness influences the statistical deterministic models of progression risk assessment. It is unlikely that the recent modification of tumor grading according to Gleason criteria for prostate carcinoma will cause a qualitative change and significantly improve the accuracy. The Gleason system does not allow the identification of low aggressive carcinomas by some precise criteria. The ontological dichotomy implies the application of an objective, quantitative approach for the evaluation of tumor aggressiveness as an alternative. That novel approach must be developed and validated in a manner that is independent of the results of any subjective evaluation. For example, computer-aided image analysis can provide information about geometry of the spatial distribution of cancer cell nuclei. A series of interrelated complexity measures unequivocally characterizes the complex tumor images. Using those measures, carcinomas can be classified into the classes of equivalence and compared with each other. Furthermore, those measures define the quantitative criteria for the identification of low- and high-aggressive prostate carcinomas, the information that the subjective approach is not able to provide. The co-application of those complexity measures in cluster analysis leads to the conclusion that either the subjective or objective classification of tumor aggressiveness for prostate carcinomas should comprise at most three grades (or classes). Finally, this set of the global fractal dimensions enables a look into the dynamics of the underlying cellular system of interacting cells and the reconstruction of the temporal-spatial attractor based on the Takens' embedding theorem. Both computer-aided image analysis and the subsequent fractal synthesis could be performed

  6. Computer-Aided Image Analysis and Fractal Synthesis in the Quantitative Evaluation of Tumor Aggressiveness in Prostate Carcinomas

    PubMed Central

    Waliszewski, Przemyslaw

    2016-01-01

    The subjective evaluation of tumor aggressiveness is a cornerstone of the contemporary tumor pathology. A large intra- and interobserver variability is a known limiting factor of this approach. This fundamental weakness influences the statistical deterministic models of progression risk assessment. It is unlikely that the recent modification of tumor grading according to Gleason criteria for prostate carcinoma will cause a qualitative change and significantly improve the accuracy. The Gleason system does not allow the identification of low aggressive carcinomas by some precise criteria. The ontological dichotomy implies the application of an objective, quantitative approach for the evaluation of tumor aggressiveness as an alternative. That novel approach must be developed and validated in a manner that is independent of the results of any subjective evaluation. For example, computer-aided image analysis can provide information about geometry of the spatial distribution of cancer cell nuclei. A series of interrelated complexity measures unequivocally characterizes the complex tumor images. Using those measures, carcinomas can be classified into the classes of equivalence and compared with each other. Furthermore, those measures define the quantitative criteria for the identification of low- and high-aggressive prostate carcinomas, the information that the subjective approach is not able to provide. The co-application of those complexity measures in cluster analysis leads to the conclusion that either the subjective or objective classification of tumor aggressiveness for prostate carcinomas should comprise at most three grades (or classes). Finally, this set of the global fractal dimensions enables a look into the dynamics of the underlying cellular system of interacting cells and the reconstruction of the temporal-spatial attractor based on the Takens' embedding theorem. Both computer-aided image analysis and the subsequent fractal synthesis could be performed

  7. Lung cancer-a fractal viewpoint.

    PubMed

    Lennon, Frances E; Cianci, Gianguido C; Cipriani, Nicole A; Hensing, Thomas A; Zhang, Hannah J; Chen, Chin-Tu; Murgu, Septimiu D; Vokes, Everett E; Vannier, Michael W; Salgia, Ravi

    2015-11-01

    Fractals are mathematical constructs that show self-similarity over a range of scales and non-integer (fractal) dimensions. Owing to these properties, fractal geometry can be used to efficiently estimate the geometrical complexity, and the irregularity of shapes and patterns observed in lung tumour growth (over space or time), whereas the use of traditional Euclidean geometry in such calculations is more challenging. The application of fractal analysis in biomedical imaging and time series has shown considerable promise for measuring processes as varied as heart and respiratory rates, neuronal cell characterization, and vascular development. Despite the advantages of fractal mathematics and numerous studies demonstrating its applicability to lung cancer research, many researchers and clinicians remain unaware of its potential. Therefore, this Review aims to introduce the fundamental basis of fractals and to illustrate how analysis of fractal dimension (FD) and associated measurements, such as lacunarity (texture), can be performed. We describe the fractal nature of the lung and explain why this organ is particularly suited to fractal analysis. Studies that have used fractal analyses to quantify changes in nuclear and chromatin FD in primary and metastatic tumour cells, and clinical imaging studies that correlated changes in the FD of tumours on CT and/or PET images with tumour growth and treatment responses are reviewed. Moreover, the potential use of these techniques in the diagnosis and therapeutic management of lung cancer is discussed.

  8. Lung cancer—a fractal viewpoint

    PubMed Central

    Lennon, Frances E.; Cianci, Gianguido C.; Cipriani, Nicole A.; Hensing, Thomas A.; Zhang, Hannah J.; Chen, Chin-Tu; Murgu, Septimiu D.; Vokes, Everett E.; W. Vannier, Michael; Salgia, Ravi

    2016-01-01

    Fractals are mathematical constructs that show self-similarity over a range of scales and non-integer (fractal) dimensions. Owing to these properties, fractal geometry can be used to efficiently estimate the geometrical complexity, and the irregularity of shapes and patterns observed in lung tumour growth (over space or time), whereas the use of traditional Euclidean geometry in such calculations is more challenging. The application of fractal analysis in biomedical imaging and time series has shown considerable promise for measuring processes as varied as heart and respiratory rates, neuronal cell characterization, and vascular development. Despite the advantages of fractal mathematics and numerous studies demonstrating its applicability to lung cancer research, many researchers and clinicians remain unaware of its potential. Therefore, this Review aims to introduce the fundamental basis of fractals and to illustrate how analysis of fractal dimension (FD) and associated measurements, such as lacunarity (texture), can be performed. We describe the fractal nature of the lung and explain why this organ is particularly suited to fractal analysis. Studies that have used fractal analyses to quantify changes in nuclear and chromatin FD in primary and metastatic tumour cells, and clinical imaging studies that correlated changes in the FD of tumours on CT and/or PET images with tumour growth and treatment responses are reviewed. Moreover, the potential use of these techniques in the diagnosis and therapeutic management of lung cancer is discussed. PMID:26169924

  9. Comparative noise performance of a coded aperture spectral imager

    NASA Astrophysics Data System (ADS)

    Piper, Jonathan; Yuen, Peter; Godfree, Peter; Ding, Mengjia; Soori, Umair; Selvagumar, Senthurran; James, David

    2016-10-01

    Novel types of spectral sensors using coded apertures may offer various advantages over conventional designs, especially the possibility of compressive measurements that could exceed the expected spatial, temporal or spectral resolution of the system. However, the nature of the measurement process imposes certain limitations, especially on the noise performance of the sensor. This paper considers a particular type of coded-aperture spectral imager and uses analytical and numerical modelling to compare its expected noise performance with conventional hyperspectral sensors. It is shown that conventional sensors may have an advantage in conditions where signal levels are high, such as bright light or slow scanning, but that coded-aperture sensors may be advantageous in low-signal conditions.

  10. Wavelet-based zerotree coding of aerospace images

    NASA Astrophysics Data System (ADS)

    Franques, Victoria T.; Jain, Vijay K.

    1996-06-01

    This paper presents a wavelet-based image coding method achieving high levels of compression. A multi-resolution subband decomposition system is constructed using Quadrature Mirror Filters. Symmetric extension and windowing of the multi-scaled subbands are incorporated to minimize the boundary effects. Next, the Embedded Zerotree Wavelet (EZW) coding algorithm is used as the data compression method. Elimination of the isolated zero symbol, for certain subbands, leads to an improved EZW algorithm. Further compression is obtained with an adaptive arithmetic coder. We achieve a PSNR of 26.91 dB at a bit rate of 0.018 bits/pixel, 35.59 dB at 0.149 bits/pixel, and 43.05 dB at 0.892 bits/pixel for the aerospace image, Refuel.

  11. Improved zerotree coding algorithm for wavelet image compression

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Li, Yunsong; Wu, Chengke

    2000-12-01

    A listless minimum zerotree coding algorithm based on the fast lifting wavelet transform, with lower memory requirements and higher compression performance, is presented in this paper. Most state-of-the-art image compression techniques based on wavelet coefficients, such as EZW and SPIHT, exploit the dependency between the subbands in a wavelet-transformed image. We propose a minimum zerotree of wavelet coefficients which exploits the dependency not only between the coarser and the finer subbands but also within the lowest-frequency subband. A new listless significance-map coding algorithm based on the minimum zerotree is also proposed, using new flag maps and a new scanning order different from the LZC of Wen-Kuo Lin et al. A comparison reveals that the PSNR results of LMZC are higher than those of LZC, and the compression performance of LMZC outperforms that of SPIHT in terms of hardware implementation.

  12. Peak transform for efficient image representation and coding.

    PubMed

    He, Zhihai

    2007-07-01

    In this work, we introduce a nonlinear geometric transform, called peak transform (PT), for efficient image representation and coding. The proposed PT is able to convert high-frequency signals into low-frequency ones, making them much easier to be compressed. Coupled with wavelet transform and subband decomposition, the PT is able to significantly reduce signal energy in high-frequency subbands and achieve a significant transform coding gain. This has important applications in efficient data representation and compression. To maximize the transform coding gain, we develop a dynamic programming solution for optimum PT design. Based on PT, we design an image encoder, called the PT encoder, for efficient image compression. Our extensive experimental results demonstrate that, in wavelet-based subband decomposition, the signal energy in high-frequency subbands can be reduced by up to 60% if a PT is applied. The PT image encoder outperforms state-of-the-art JPEG2000 and H.264 (INTRA) encoders by up to 2-3 dB in peak signal-to-noise ratio (PSNR), especially for images with a significant amount of high-frequency components. Our experimental results also show that the proposed PT is able to efficiently capture and preserve high-frequency image features (e.g., edges) and yields significantly improved visual quality. We believe that the concept explored in this work, designing a nonlinear transform to convert hard-to-compress signals into easy ones, is very useful. We hope this work would motivate more research work along this direction.

  13. 157km BOTDA with pulse coding and image processing

    NASA Astrophysics Data System (ADS)

    Qian, Xianyang; Wang, Zinan; Wang, Song; Xue, Naitian; Sun, Wei; Zhang, Li; Zhang, Bin; Rao, Yunjiang

    2016-05-01

    A repeater-less Brillouin optical time-domain analyzer (BOTDA) with 157.68km sensing range is demonstrated, using the combination of random fiber laser Raman pumping and low-noise laser-diode-Raman pumping. With optical pulse coding (OPC) and Non Local Means (NLM) image processing, temperature sensing with +/-0.70°C uncertainty and 8m spatial resolution is experimentally demonstrated. The image processing approach has been proved to be compatible with OPC, and it further increases the figure-of-merit (FoM) of the system by 57%.

  14. Preconditioning for multiplexed imaging with spatially coded PSFs.

    PubMed

    Horisaki, Ryoichi; Tanida, Jun

    2011-06-20

    We propose a preconditioning method to improve the convergence of iterative reconstruction algorithms in multiplexed imaging based on convolution-based compressive sensing with spatially coded point spread functions (PSFs). The system matrix is converted to improve the condition number with a preconditioner matrix. The preconditioner matrix is calculated by Tikhonov regularization in the frequency domain. The method was demonstrated with simulations and an experiment involving a range detection system with a grating based on the multiplexed imaging framework. The results of the demonstrations showed improved reconstruction fidelity by using the proposed preconditioning method.
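
    The frequency-domain Tikhonov preconditioner can be sketched as follows, assuming a simple convolutional measurement model y = h * x + n; the PSF and regularization weight below are placeholders rather than the authors' values.

        # Sketch of a frequency-domain Tikhonov preconditioner: each frequency bin of
        # the system's transfer function H is rescaled by conj(H) / (|H|^2 + lam),
        # flattening the operator's spectrum and improving iterative convergence.
        import numpy as np

        def tikhonov_preconditioner(psf, shape, lam=1e-2):
            """Per-frequency weights conj(H) / (|H|^2 + lam) for the PSF's transfer H."""
            H = np.fft.fft2(psf, shape)
            return np.conj(H) / (np.abs(H) ** 2 + lam)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            psf = rng.random((7, 7))
            psf /= psf.sum()                          # stand-in coded PSF
            shape = (64, 64)
            H = np.fft.fft2(psf, shape)
            M = tikhonov_preconditioner(psf, shape)
            spread = lambda A: float(np.abs(A).max() / np.abs(A).min())
            print("frequency-response spread, raw vs preconditioned:",
                  round(spread(H), 1), round(spread(M * H), 1))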

  15. Hierarchical image coding with diamond-shaped sub-bands

    NASA Technical Reports Server (NTRS)

    Li, Xiaohui; Wang, Jie; Bauer, Peter; Sauer, Ken

    1992-01-01

    We present a sub-band image coding/decoding system using a diamond-shaped pyramid frequency decomposition to more closely match visual sensitivities than conventional rectangular bands. Filter banks are composed of simple, low order IIR components. The coder is especially designed to function in a multiple resolution reconstruction setting, in situations such as variable capacity channels or receivers, where images must be reconstructed without the entire pyramid of sub-bands. We use a nonlinear interpolation technique for lost subbands to compensate for loss of aliasing cancellation.

  16. Computational radiology and imaging with the MCNP Monte Carlo code

    SciTech Connect

    Estes, G.P.; Taylor, W.M.

    1995-05-01

    MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.

  17. Wavelength Coded Image Transmission and Holographic Optical Elements.

    DTIC Science & Technology

    1984-08-20

  18. Correlated Statistical Uncertainties in Coded-Aperture Imaging

    SciTech Connect

    Fleenor, Matthew C; Blackston, Matthew A; Ziock, Klaus-Peter

    2014-01-01

    In nuclear security applications, coded-aperture imagers provide the opportunity for a wealth of information regarding the attributes of both the radioactive and non-radioactive components of the objects being imaged. However, for optimum benefit to the community, spatial attributes need to be determined in a quantitative and statistically meaningful manner. To address the deficiency of quantifiable errors in coded-aperture imaging, we present uncertainty matrices containing covariance terms between image pixels for MURA mask patterns. We calculated these correlated uncertainties as functions of variation in mask rank, mask pattern over-sampling, and whether or not anti-mask data are included. Utilizing simulated point source data, we found that correlations (and inverse correlations) arose when two or more image pixels were summed. Furthermore, we found that the presence of correlations (and their inverses) was heightened by the process of over-sampling, while correlations were suppressed by the inclusion of anti-mask data and with increased mask rank. As an application of this result, we explore how statistics-based alarming in nuclear security is impacted.

  19. Refined codebook for grayscale image coding based on vector quantization

    NASA Astrophysics Data System (ADS)

    Hu, Yu-Chen; Chen, Wu-Lin; Tsai, Pi-Yu

    2015-07-01

    Vector quantization (VQ) is a commonly used technique for image compression. Typically, the common codebooks (CCBs) that are designed by using multiple training images are used in VQ. The CCBs are stored in the public websites such that their storage cost can be omitted. In addition to the CCBs, the private codebooks (PCBs) that are designed by using the image to be compressed can be used in VQ. However, calculating the bit rates (BRs) of VQ includes the storage cost of the PCBs. It is observed that some codewords in the CCB are not used in VQ. The codebook refinement process is designed to generate the refined codebook (RCB) based on the CCB of each image. To cut down the BRs, the lossless index coding process and the two-stage lossless coding process are employed to encode the index table and the RCB, respectively. Experimental results reveal that the proposed scheme (PS) achieves better image qualities than VQ with the CCBs. In addition, the PS requires less BRs than VQ with the PCBs.
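
    The codebook-refinement step can be illustrated with a short NumPy sketch: the image is encoded with a common codebook, only the codewords actually used are kept, and the index table is remapped accordingly; the block size and codebook here are arbitrary stand-ins.

        # Sketch of refining a common codebook (CCB) into a per-image refined
        # codebook (RCB) by discarding unused codewords.
        import numpy as np

        def blocks(img, b=4):
            h, w = img.shape
            return (img[:h - h % b, :w - w % b]
                    .reshape(h // b, b, w // b, b)
                    .swapaxes(1, 2)
                    .reshape(-1, b * b))

        def vq_encode(vectors, codebook):
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            return d.argmin(axis=1)

        def refine_codebook(img, ccb, b=4):
            idx = vq_encode(blocks(img, b), ccb)
            used = np.unique(idx)                   # codewords this image actually needs
            rcb = ccb[used]
            refined_idx = np.searchsorted(used, idx)   # remap indices to the RCB
            return rcb, refined_idx

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            img = rng.integers(0, 256, size=(64, 64)).astype(float)
            ccb = rng.integers(0, 256, size=(256, 16)).astype(float)  # common codebook
            rcb, idx = refine_codebook(img, ccb)
            print("codewords used:", rcb.shape[0], "of", ccb.shape[0])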

  20. Applications of fractals in ecology.

    PubMed

    Sugihara, G.; May, R. M.

    1990-03-01

    Fractal models describe the geometry of a wide variety of natural objects such as coastlines, island chains, coral reefs, satellite ocean-color images and patches of vegetation. Cast in the form of modified diffusion models, they can mimic natural and artificial landscapes having different types of complexity of shape. This article provides a brief introduction to fractals and reports on how they can be used by ecologists to answer a variety of basic questions about scale, measurement and hierarchy in ecological systems.

  1. Investigation into How 8th Grade Students Define Fractals

    ERIC Educational Resources Information Center

    Karakus, Fatih

    2015-01-01

    The analysis of 8th grade students' concept definitions and concept images can provide information about their mental schema of fractals. There is limited research on students' understanding and definitions of fractals. Therefore, this study aimed to investigate the elementary students' definitions of fractals based on concept image and concept…

  2. Construction of fractal nanostructures based on Kepler-Shubnikov nets

    SciTech Connect

    Ivanov, V. V. Talanov, V. M.

    2013-05-15

    A system of information codes for deterministic fractal lattices and sets of multifractal curves is proposed. An iterative modular design was used to obtain a series of deterministic fractal lattices with generators in the form of fragments of 2D structures and a series of multifractal curves (based on some Kepler-Shubnikov nets) having Cantor set properties. The main characteristics of fractal structures and their lacunar spectra are determined. A hierarchical principle is formulated for modules of regular fractal structures.

  3. Superharmonic imaging with chirp coded excitation: filtering spectrally overlapped harmonics.

    PubMed

    Harput, Sevan; McLaughlan, James; Cowell, David M J; Freear, Steven

    2014-11-01

    Superharmonic imaging improves the spatial resolution by using the higher order harmonics generated in tissue. The superharmonic component is formed by combining the third, fourth, and fifth harmonics, which have low energy content and therefore poor SNR. This study uses coded excitation to increase the excitation energy. The SNR improvement is achieved on the receiver side by performing pulse compression with harmonic matched filters. The use of coded signals also introduces new filtering capabilities that are not possible with pulsed excitation. This is especially important when using wideband signals. For narrowband signals, the spectral boundaries of the harmonics are clearly separated and thus easy to filter; however, the available imaging bandwidth is underused. Wideband excitation is preferable for harmonic imaging applications to preserve axial resolution, but it generates spectrally overlapping harmonics that are not possible to filter in time and frequency domains. After pulse compression, this overlap increases the range side lobes, which appear as imaging artifacts and reduce the B-mode image quality. In this study, the isolation of higher order harmonics was achieved in another domain by using the fan chirp transform (FChT). To show the effect of excitation bandwidth in superharmonic imaging, measurements were performed by using linear frequency modulated chirp excitation with varying bandwidths of 10% to 50%. Superharmonic imaging was performed on a wire phantom using a wideband chirp excitation. Results were presented with and without applying the FChT filtering technique by comparing the spatial resolution and side lobe levels. Wideband excitation signals achieved a better resolution as expected; however, range side lobes as high as -23 dB were observed for the superharmonic component of chirp excitation with 50% fractional bandwidth. The proposed filtering technique achieved >50 dB range side lobe suppression and improved the image quality without
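
    A small sketch of chirp coded excitation and matched-filter pulse compression is given below; it illustrates only the basic compression step, not the harmonic matched filters or the fan chirp transform, and the sampling and sweep parameters are assumptions.

        # Linear frequency-modulated chirp excitation and matched-filter compression.
        import numpy as np

        def linear_chirp(f0, f1, duration, fs):
            t = np.arange(0, duration, 1.0 / fs)
            k = (f1 - f0) / duration                 # sweep rate
            return np.cos(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

        def pulse_compress(rx, tx):
            """Correlate the received trace with the transmitted chirp."""
            return np.convolve(rx, tx[::-1], mode="same")   # time-reversed = matched filter

        if __name__ == "__main__":
            fs = 50e6                                # 50 MHz sampling rate (assumed)
            tx = linear_chirp(2e6, 4e6, 10e-6, fs)   # roughly 50% fractional bandwidth
            rx = np.zeros(4096)
            rx[1000:1000 + tx.size] += tx            # echo from a single target
            rx += 0.2 * np.random.default_rng(0).standard_normal(rx.size)
            compressed = pulse_compress(rx, tx)
            print("compressed peak at sample:", int(np.argmax(np.abs(compressed))))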

  4. Probability Distribution Estimation for Autoregressive Pixel-Predictive Image Coding.

    PubMed

    Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André

    2016-03-01

    Pixelwise linear prediction using backward-adaptive least-squares or weighted least-squares estimation of prediction coefficients is currently among the state-of-the-art methods for lossless image compression. While current research is focused on mean intensity prediction of the pixel to be transmitted, best compression requires occurrence probability estimates for all possible intensity values. Apart from common heuristic approaches, we show how prediction error variance estimates can be derived from the (weighted) least-squares training region and how a complete probability distribution can be built based on an autoregressive image model. The analysis of image stationarity properties further allows deriving a novel formula for weight computation in weighted least squares, proving and generalizing ad hoc equations from the literature. For sparse intensity distributions in non-natural images, a modified image model is presented. Evaluations were done in the newly developed C++ framework volumetric, artificial, and natural image lossless coder (Vanilc), which can compress a wide range of images, including 16-bit medical 3D volumes or multichannel data. A comparison with several of the best available lossless image codecs proves that the method can achieve very competitive compression ratios. In terms of reproducible research, the source code of Vanilc has been made public.
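
    The backward-adaptive least-squares prediction step can be sketched as follows; the neighbour set, window size and test image are placeholders, and the probability-model part of the method is not shown.

        # Backward-adaptive least-squares pixel prediction: coefficients of a causal
        # neighbour model are re-estimated for each pixel from already-decoded pixels.
        import numpy as np

        NEIGHBOURS = [(-1, 0), (0, -1), (-1, -1), (-1, 1)]   # causal N, W, NW, NE

        def causal_vector(img, y, x):
            return np.array([img[y + dy, x + dx] for dy, dx in NEIGHBOURS], float)

        def ls_predict(img, y, x, win=6):
            rows, targets = [], []
            for ty in range(max(1, y - win), y + 1):
                for tx in range(1, img.shape[1] - 1):
                    if ty == y and tx >= x:                   # only "past" pixels
                        break
                    rows.append(causal_vector(img, ty, tx))
                    targets.append(img[ty, tx])
            A, b = np.array(rows), np.array(targets)
            coef, *_ = np.linalg.lstsq(A, b, rcond=None)      # local LS coefficients
            return float(causal_vector(img, y, x) @ coef)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            grad = np.add.outer(np.arange(32), np.arange(32)).astype(float)
            img = grad + rng.normal(0, 0.5, grad.shape)       # smooth test image
            y, x = 16, 16
            print("prediction:", round(ls_predict(img, y, x), 2),
                  "actual:", round(img[y, x], 2))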

  5. Fractal structure of asphaltene aggregates.

    PubMed

    Rahmani, Nazmul H G; Dabros, Tadeusz; Masliyah, Jacob H

    2005-05-15

    A photographic technique coupled with image analysis was used to measure the size and fractal dimension of asphaltene aggregates formed in toluene-heptane solvent mixtures. First, asphaltene aggregates were examined in a Couette device and the fractal-like aggregate structures were quantified using boundary fractal dimension. The evolution of the floc structure with time was monitored. The relative rates of shear-induced aggregation and fragmentation/restructuring determine the steady-state floc structure. The average floc structure became more compact or more organized as the floc size distribution attained steady state. Moreover, the higher the shear rate is, the more compact the floc structure is at steady state. Second, the fractal dimensions of asphaltene aggregates were also determined in a free-settling test. The experimentally determined terminal settling velocities and characteristic lengths of the aggregates were utilized to estimate the 2D and 3D fractal dimensions. The size-density fractal dimension (D(3)) of the asphaltene aggregates was estimated to be in the range from 1.06 to 1.41. This relatively low fractal dimension suggests that the asphaltene aggregates are highly porous and very tenuous. The aggregates have a structure with extremely low space-filling capacity.

  6. A comparison of the texture of computed tomography and projection radiography images of vertebral trabecular bone using fractal signature and lacunarity.

    PubMed

    Dougherty, G

    2001-06-01

    The structural integrity of trabecular bone is an important factor characterizing the biomechanical strength of the vertebra, and is determined by the connectivity of the bone network and the trabeculation pattern. These can be assessed using texture measures such as the fractal signature and lacunarity from a high resolution projection radiograph. Using central sections of lumbar vertebrae we compared the results obtained from high-resolution transverse projection images with those obtained from spatially registered low-resolution images from a conventional clinical CT scanner to determine whether clinical CT data can provide useful structural information. Provided the power spectra of the CT images are corrected for image system blurring, the resulting fractal signature is similar for both modalities. Although the CT images are blurred relative to the projection images, with a consequent reduction in lacunarity, the estimated trabecular separation obtained from the lacunarity plots is similar for both modalities. This suggests that these texture measures contain essential information on trabecular microarchitecture, which is present even in low resolution CT images. Such quantitative texture measurements from CT or MRI images are potentially useful in monitoring bone strength and predicting future fracture risk.
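
    Gliding-box lacunarity, one of the two texture measures used here, can be sketched in a few lines of NumPy; the binary test texture below is a stand-in, not trabecular-bone data.

        # Gliding-box lacunarity of a binary texture: at box size r, lacunarity is
        # E[M^2] / E[M]^2 over the box masses M.
        import numpy as np

        def gliding_box_lacunarity(binary, r):
            h, w = binary.shape
            masses = []
            for i in range(h - r + 1):
                for j in range(w - r + 1):
                    masses.append(binary[i:i + r, j:j + r].sum())
            masses = np.asarray(masses, float)
            return float(np.mean(masses ** 2) / np.mean(masses) ** 2)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            texture = (rng.random((128, 128)) < 0.3).astype(int)   # stand-in texture
            for r in (2, 4, 8, 16):
                print("r =", r, "lacunarity =",
                      round(gliding_box_lacunarity(texture, r), 3))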

  7. A Brief Historical Introduction to Fractals and Fractal Geometry

    ERIC Educational Resources Information Center

    Debnath, Lokenath

    2006-01-01

    This paper deals with a brief historical introduction to fractals, fractal dimension and fractal geometry. Many fractals including the Cantor fractal, the Koch fractal, the Minkowski fractal, the Mandelbrot and Given fractal are described to illustrate self-similar geometrical figures. This is followed by the discovery of dynamical systems and…

  9. Color-coded visualization of magnetic resonance imaging multiparametric maps

    PubMed Central

    Kather, Jakob Nikolas; Weidner, Anja; Attenberger, Ulrike; Bukschat, Yannick; Weis, Cleo-Aron; Weis, Meike; Schad, Lothar R.; Zöllner, Frank Gerrit

    2017-01-01

    Multiparametric magnetic resonance imaging (mpMRI) data are increasingly used in the clinic, e.g. for the diagnosis of prostate cancer. In contrast to conventional MR imaging data, multiparametric data typically include functional measurements such as diffusion and perfusion imaging sequences. Conventionally, these measurements are visualized with a one-dimensional color scale, allowing only for one-dimensional information to be encoded. Yet, human perception places visual information in a three-dimensional color space. In theory, each dimension of this space can be utilized to encode visual information. We addressed this issue and developed a new method for tri-variate color-coded visualization of mpMRI data sets. We showed the usefulness of our method in a preclinical and in a clinical setting: In imaging data of a rat model of acute kidney injury, the method yielded characteristic visual patterns. In a clinical data set of N = 13 prostate cancer mpMRI data, we assessed diagnostic performance in a blinded study with N = 5 observers. Compared to conventional radiological evaluation, color-coded visualization was comparable in terms of positive and negative predictive values. Thus, we showed that human observers can successfully make use of the novel method. This method can be broadly applied to visualize different types of multivariate MRI data. PMID:28112222
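
    The tri-variate colour coding can be sketched by normalising three co-registered parameter maps and stacking them into the R, G and B channels; the channel assignment and percentile windowing below are our assumptions, not the authors' exact mapping.

        # Tri-variate colour coding of three co-registered parameter maps.
        import numpy as np

        def normalise(m):
            lo, hi = np.percentile(m, [2, 98])            # robust intensity window
            return np.clip((m - lo) / (hi - lo + 1e-12), 0.0, 1.0)

        def trivariate_rgb(map_r, map_g, map_b):
            """Place each normalised map in one colour channel of an RGB image."""
            return np.dstack([normalise(map_r), normalise(map_g), normalise(map_b)])

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            adc, perfusion, t2 = (rng.random((128, 128)) for _ in range(3))  # stand-ins
            rgb = trivariate_rgb(adc, perfusion, t2)
            print("fused image shape:", rgb.shape, "value range:",
                  round(float(rgb.min()), 2), "-", round(float(rgb.max()), 2))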

  10. Adaptive lapped transform-based image and video coding

    NASA Astrophysics Data System (ADS)

    Klausutis, Timothy J.; Madisetti, Vijay K.

    1997-01-01

    We propose a design framework for perfectly reconstructed time-varying linear-phase paraunitary filter banks using a novel adaptive lapped transform (ALT). The ALT is based on the Generalized Lapped Orthogonal Transform (GenLOT) proposed by Queiroz. A time-varying filter bank is constructed through the factorization of the GenLOT into cascaded matrix stages. Variable-length lapped transforms are subsequently generated by cascading a number of these matrix stages to build specific-length filters. Several constraints on the design ensure perfect reconstruction and a fast implementation. An embedded ALT image codec is presented and the application of the ALT to the H.263 video coding standard is discussed. Preliminary results show that the ALT-based embedded image codec has a 1.61-2.35 and a 2.37-4.04 dB increase in peak signal-to-noise ratio (PSNR) compared to the JPEG image coding standard for the Lenna and Barbara test images, respectively.

  11. Coded-aperture Raman imaging for standoff explosive detection

    NASA Astrophysics Data System (ADS)

    McCain, Scott T.; Guenther, B. D.; Brady, David J.; Krishnamurthy, Kalyani; Willett, Rebecca

    2012-06-01

    This paper describes the design of a deep-UV Raman imaging spectrometer operating with an excitation wavelength of 228 nm. The designed system will provide the ability to detect explosives (both traditional military explosives and home-made explosives) from standoff distances of 1-10 meters with an interrogation area of 1 mm x 1 mm to 200 mm x 200 mm. This excitation wavelength provides resonant enhancement of many common explosives, no background fluorescence, and an enhanced cross-section due to the inverse wavelength scaling of Raman scattering. A coded-aperture spectrograph combined with compressive imaging algorithms will allow for wide-area interrogation with fast acquisition rates. Coded-aperture spectral imaging exploits the compressibility of hyperspectral data-cubes to greatly reduce the amount of acquired data needed to interrogate an area. The resultant systems are able to cover wider areas much faster than traditional push-broom and tunable filter systems. The full system design will be presented along with initial data from the instrument. Estimates for area scanning rates and chemical sensitivity will be presented. The system components include a solid-state deep-UV laser operating at 228 nm, a spectrograph consisting of well-corrected refractive imaging optics and a reflective grating, an intensified solar-blind CCD camera, and a high-efficiency collection optic.

  12. Color-coded visualization of magnetic resonance imaging multiparametric maps

    NASA Astrophysics Data System (ADS)

    Kather, Jakob Nikolas; Weidner, Anja; Attenberger, Ulrike; Bukschat, Yannick; Weis, Cleo-Aron; Weis, Meike; Schad, Lothar R.; Zöllner, Frank Gerrit

    2017-01-01

    Multiparametric magnetic resonance imaging (mpMRI) data are increasingly used in the clinic, e.g. for the diagnosis of prostate cancer. In contrast to conventional MR imaging data, multiparametric data typically include functional measurements such as diffusion and perfusion imaging sequences. Conventionally, these measurements are visualized with a one-dimensional color scale, allowing only for one-dimensional information to be encoded. Yet, human perception places visual information in a three-dimensional color space. In theory, each dimension of this space can be utilized to encode visual information. We addressed this issue and developed a new method for tri-variate color-coded visualization of mpMRI data sets. We showed the usefulness of our method in a preclinical and in a clinical setting: In imaging data of a rat model of acute kidney injury, the method yielded characteristic visual patterns. In a clinical data set of N = 13 prostate cancer mpMRI data, we assessed diagnostic performance in a blinded study with N = 5 observers. Compared to conventional radiological evaluation, color-coded visualization was comparable in terms of positive and negative predictive values. Thus, we showed that human observers can successfully make use of the novel method. This method can be broadly applied to visualize different types of multivariate MRI data.

  13. New approach to image coding using 1-D subband filtering

    NASA Astrophysics Data System (ADS)

    Yu, Tian-Hu; Mitra, Sanjit K.

    1991-06-01

    Conventional subband coding for image data compression uses 2D separable QMF banks in which the analysis and synthesis filters are composed of 1D filters. Such an implementation produces an output image larger than the input as a result of the convolution process. Various signal extension methods have been proposed to solve this problem. However, these methods have one or more of the following drawbacks: generation of boundary noise, inability to guarantee aliasing cancellation, and increased computational complexity. In this paper, we present an alternative solution to the problem by converting a 2D image array to a 1D array and then using a 1D QMF bank to process the 1D signal. In our approach, most of the drawbacks mentioned above are eliminated. In addition, our approach offers more flexibility in the type of filter that can be implemented.

  14. JND measurements and wavelet-based image coding

    NASA Astrophysics Data System (ADS)

    Shen, Day-Fann; Yan, Loon-Shan

    1998-06-01

    Two major issues in image coding are the effective incorporation of human visual system (HVS) properties and an effective objective quality measure (OQM) for evaluating image quality. In this paper, we treat the two issues in an integrated fashion. We build a JND model based on measurements of the just noticeable difference (JND) property of the HVS. We found that JND depends not only on the background intensity but is also a function of both spatial frequency and pattern direction. The wavelet transform, due to its excellent simultaneous time (space)/frequency resolution, is the best choice for applying the JND model. We mathematically derive an OQM called JND_PSNR that is based on the JND property and wavelet-decomposed subbands. JND_PSNR is more consistent with human perception and is recommended as an alternative to the PSNR or SNR. With the JND_PSNR in mind, we proceed to propose a wavelet- and JND-based codec called JZW. JZW quantizes coefficients in each subband with a proper step size according to the subband's importance to human perception. Many characteristics of JZW are discussed, its performance evaluated and compared with other well-known algorithms such as EZW, SPIHT and TCCVQ. Our algorithm has a 1 - 1.5 dB gain over SPIHT even when we use simple Huffman coding rather than the more efficient adaptive arithmetic coding.

  15. Optical image encryption based on real-valued coding and subtracting with the help of QR code

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng

    2015-08-01

    A novel optical image encryption scheme based on real-valued coding and subtraction is proposed with the help of a quick response (QR) code. In the encryption process, the original image to be encoded is first transformed into the corresponding QR code, and then the QR code is encoded into two phase-only masks (POMs) by using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone-collected results show that the method is feasible and has strong tolerance to noise, phase difference and the ratio between the intensities of the two decryption light beams.

  16. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, Vinay A.; Farvardin, Nariman

    1990-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless noisy channel; the method covers the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons against a reference system designed for no channel errors were made.

  18. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless noisy channel; the method covers the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons against a reference system designed for no channel errors were made.

  19. Alpha-spectrometry and fractal analysis of surface micro-images for characterisation of porous materials used in manufacture of targets for laser plasma experiments

    NASA Astrophysics Data System (ADS)

    Aushev, A. A.; Barinov, S. P.; Vasin, M. G.; Drozdov, Yu M.; Ignat'ev, Yu V.; Izgorodin, V. M.; Kovshov, D. K.; Lakhtikov, A. E.; Lukovkina, D. D.; Markelov, V. V.; Morovov, A. P.; Shishlov, V. V.

    2015-06-01

    We present the results of employing the alpha-spectrometry method to determine the characteristics of porous materials used in targets for laser plasma experiments. It is shown that the energy spectrum of alpha-particles, after their passage through porous samples, allows one to determine the distribution of their path length in the foam skeleton. We describe the procedure of deriving such a distribution, excluding both the distribution broadening due to statistical nature of the alpha-particle interaction with an atomic structure (straggling) and hardware effects. The fractal analysis of micro-images is applied to the same porous surface samples that have been studied by alpha-spectrometry. The fractal dimension and size distribution of the number of the foam skeleton grains are obtained. Using the data obtained, a distribution of the total foam skeleton thickness along a chosen direction is constructed. It roughly coincides with the path length distribution of alpha-particles within a range of larger path lengths. It is concluded that the combined use of the alpha-spectrometry method and fractal analysis of images will make it possible to determine the size distribution of foam skeleton grains (or pores). The results can be used as initial data in theoretical studies on propagation of the laser and X-ray radiation in specific porous samples.

  20. Alpha-spectrometry and fractal analysis of surface micro-images for characterisation of porous materials used in manufacture of targets for laser plasma experiments

    SciTech Connect

    Aushev, A A; Barinov, S P; Vasin, M G; Drozdov, Yu M; Ignat'ev, Yu V; Izgorodin, V M; Kovshov, D K; Lakhtikov, A E; Lukovkina, D D; Markelov, V V; Morovov, A P; Shishlov, V V

    2015-06-30

    We present the results of employing the alpha-spectrometry method to determine the characteristics of porous materials used in targets for laser plasma experiments. It is shown that the energy spectrum of alpha-particles, after their passage through porous samples, allows one to determine the distribution of their path lengths in the foam skeleton. We describe the procedure of deriving such a distribution, excluding both the distribution broadening due to the statistical nature of the alpha-particle interaction with the atomic structure (straggling) and hardware effects. The fractal analysis of micro-images is applied to the same porous surface samples that have been studied by alpha-spectrometry. The fractal dimension and the size distribution of the number of foam skeleton grains are obtained. Using the data obtained, a distribution of the total foam skeleton thickness along a chosen direction is constructed. It roughly coincides with the path length distribution of alpha-particles in the range of larger path lengths. It is concluded that the combined use of the alpha-spectrometry method and fractal analysis of images will make it possible to determine the size distribution of foam skeleton grains (or pores). The results can be used as initial data in theoretical studies on propagation of laser and X-ray radiation in specific porous samples.

  1. The imaging and the fractal metrology of chimeric liposomal Drug Delivery nano Systems: the role of macromolecular architecture of polymeric guest.

    PubMed

    Pippa, Natassa; Pispas, Stergios; Demetzos, Costas

    2014-09-01

    The major advantage of mixed liposomes (so-called chimeric systems) is the ability to control the size, structure, and morphology of these nanoassemblies, and therefore the colloidal properties of the system, with the aid of a large variety of parameters, such as chemical architecture and composition. The goal of this study is to investigate the alterations of the physicochemical and morphological characteristics of chimeric dipalmitoylphosphatidylcholine (DPPC) liposomes caused by the incorporation of block and gradient copolymers (different macromolecular architectures) with different chemical compositions (different amounts of the hydrophobic component). Light scattering techniques were utilized to characterize the chimeric liposomes physicochemically and to delineate their fractal morphology. In this study, we also investigated the structural differences between the prepared chimeric liposomes as visualized by scanning electron microscopy (SEM). It could be concluded that all the chimeric liposomes have a regular structure, as the SEM images revealed, while their fractal dimensionality was found to depend on the macromolecular architecture of the polymeric guest.

  2. Significance-linked connected component analysis for wavelet image coding.

    PubMed

    Chai, B B; Vass, J; Zhuang, X

    1999-01-01

    Recent success in wavelet image coding is mainly attributed to a recognition of the importance of data organization and representation. There have been several very competitive wavelet coders developed, namely, Shapiro's (1993) embedded zerotree wavelets (EZW), Servetto et al.'s (1995) morphological representation of wavelet data (MRWD), and Said and Pearlman's (see IEEE Trans. Circuits Syst. Video Technol., vol.6, p.245-50, 1996) set partitioning in hierarchical trees (SPIHT). We develop a novel wavelet image coder called significance-linked connected component analysis (SLCCA) of wavelet coefficients that extends MRWD by exploiting both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. Extensive computer experiments on both natural and texture images show convincingly that the proposed SLCCA outperforms EZW, MRWD, and SPIHT. For example, for the Barbara image, at 0.25 b/pixel, SLCCA outperforms EZW, MRWD, and SPIHT by 1.41 dB, 0.32 dB, and 0.60 dB in PSNR, respectively. It is also observed that SLCCA works extremely well for images with a large portion of texture. For eight typical 256x256 grayscale texture images compressed at 0.40 b/pixel, SLCCA outperforms SPIHT by 0.16 dB-0.63 dB in PSNR. This performance is achieved without using any optimal bit allocation procedure. Thus both the encoding and decoding procedures are fast.
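
    Since the comparisons above are reported in PSNR, a minimal helper for computing it is sketched below; the images and bit rates used in the paper are not reproduced, and the 8-bit peak value is an assumption.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming 8-bit grayscale images."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)
    noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)
    print(f"PSNR: {psnr(img, noisy):.2f} dB")
```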

  3. A progressively predictive image pyramid for efficient lossless coding.

    PubMed

    Qiu, G

    1999-01-01

    A low entropy pyramidal image data structure suited for lossless coding and progressive transmission is proposed in this work. The new coder, called the progressively predictive pyramid (PPP) is based on the well-known Laplacian pyramid. By introducing inter-resolution predictors into the original Laplacian pyramid, we show that the entropy level in the original pyramid can be reduced significantly. To take full advantage of progressive transmission, a scheme is introduced to create the predictor adaptively, thus eliminating the need to transmit the predictor and reducing the coding overheads. A method for designing the predictor is presented. Numerical results show that PPP is superior to traditional approaches to pyramid generation in the sense that the pyramids generated by PPP always have significantly lower entropy values.
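
    The abstract does not spell out how the adaptive inter-resolution predictor is designed, so the sketch below is only a generic illustration of the idea: build one level of a Laplacian-style pyramid, fit a least-squares 3x3 predictor from the upsampled coarse level to the fine level, and compare the first-order entropies of the resulting detail signals. The 2x2 averaging, nearest-neighbour upsampling and wrap-around edge handling are simplifying assumptions, not the PPP algorithm itself.

```python
import numpy as np

def downsample(img):
    """2x2 block averaging (assumes even dimensions)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Nearest-neighbour expansion to double the size."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def neighborhood_features(img, radius=1):
    """Stack circularly shifted copies of img as per-pixel 3x3 features
    (wrap-around edges are accepted for this illustration)."""
    feats = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            feats.append(np.roll(np.roll(img, dy, axis=0), dx, axis=1))
    return np.stack(feats, axis=-1)

def entropy_bits(values):
    """First-order entropy (bits/sample) of rounded values."""
    _, counts = np.unique(np.round(values).astype(int), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x, y = np.meshgrid(np.arange(128), np.arange(128))
    img = 0.5 * x + 0.3 * y + rng.normal(0, 2, (128, 128))  # smooth test image

    coarse = downsample(img)
    up = upsample(coarse)
    plain_detail = img - up                      # ordinary Laplacian detail level

    # inter-resolution predictor: least-squares 3x3 filter on the upsampled level
    features = neighborhood_features(up).reshape(-1, 9)
    weights, *_ = np.linalg.lstsq(features, img.ravel(), rcond=None)
    predictive_detail = img - (features @ weights).reshape(img.shape)

    print("entropy, fine level itself:  %.2f bits" % entropy_bits(img))
    print("entropy, plain detail:       %.2f bits" % entropy_bits(plain_detail))
    print("entropy, predictive detail:  %.2f bits" % entropy_bits(predictive_detail))
```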

  4. Content-based histopathological image retrieval for whole slide image database using binary codes

    NASA Astrophysics Data System (ADS)

    Zheng, Yushan; Jiang, Zhiguo; Ma, Yibing; Zhang, Haopeng; Xie, Fengying; Shi, Huaqiang; Zhao, Yu

    2017-03-01

    Content-based image retrieval (CBIR) has been widely researched for medical images. In the application to histopathological images, two issues need to be carefully considered. One is that the digital slide is stored as a spatially continuous image with a size of more than 10K x 10K pixels. The other is that the size of the query image varies over a large range according to different diagnostic conditions. It is therefore challenging to retrieve the eligible regions for a query image from a database that consists of whole slide images (WSIs). In this paper, we propose a CBIR framework for a WSI database and size-scalable query images. Each WSI in the database is encoded and stored as a matrix of binary codes. At retrieval time, the query image is first encoded into a set of binary codes and analyzed to pre-select a set of regions from the database using a hashing method. Then a multi-binary-code similarity measurement based on Hamming distance is used to rank the proposal regions. Finally, the top relevant regions and their locations in the WSIs, along with the diagnostic information, are returned to assist pathologists in diagnosis. The effectiveness of the proposed framework is evaluated on a finely annotated WSI database of epithelial breast tumors. The experimental results show that the proposed framework is both effective and efficient for content-based whole slide image retrieval.
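
    A hedged sketch of the Hamming-distance ranking step is given below, using binary codes packed into bytes and a popcount lookup table; the WSI encoding, the hashing-based pre-selection, and the multi-binary-code measurement of the paper are not reproduced, and the 64-bit codes are arbitrary.

```python
import numpy as np

def pack_codes(bits):
    """Pack an (n, n_bits) 0/1 array into bytes for compact storage."""
    return np.packbits(bits.astype(np.uint8), axis=1)

def hamming_rank(query_packed, db_packed, top_k=5):
    """Rank database entries by Hamming distance to a packed query code."""
    xor = np.bitwise_xor(db_packed, query_packed)  # differing bits, bytewise
    # popcount per byte via an 8-bit lookup table
    lut = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)
    dists = lut[xor].sum(axis=1)
    order = np.argsort(dists)[:top_k]
    return order, dists[order]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    db_bits = rng.integers(0, 2, size=(1000, 64), dtype=np.uint8)  # 64-bit codes
    query_bits = db_bits[42].copy()
    query_bits[:3] ^= 1                                            # flip 3 bits
    db = pack_codes(db_bits)
    q = pack_codes(query_bits[None, :])[0]
    idx, d = hamming_rank(q, db)
    print("top matches:", idx, "distances:", d)
```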

  5. The influence of respiratory motion on the cumulative SUV-volume histogram and fractal analyses of intratumoral heterogeneity in PET/CT imaging.

    PubMed

    Takeshita, Toshiki; Morita, Keishin; Tsutsui, Yuji; Kidera, Daisuke; Mikasa, Shohei; Maebatake, Akira; Akamatsu, Go; Miwa, Kenta; Baba, Shingo; Sasaki, Masayuki

    2016-07-01

    The purpose of this study was to investigate the influence of respiratory motion on the evaluation of the intratumoral heterogeneity of FDG uptake using cumulative SUV-volume histogram (CSH) and fractal analyses. We used an NEMA IEC body phantom with a homogeneous hot sphere phantom (HO) and two heterogeneous hot sphere phantoms (HE1 and HE2). The background radioactivity of (18)F in the NEMA phantom was 5.3 kBq/mL. The ratio of radioactivity was 4:2:1 for the HO and the outer rims of the HE1 and HE2 phantoms, the inner cores of the HE1 and HE2 phantoms, and background, respectively. Respiratory motion was simulated using a motion table with an amplitude of 2 cm. PET/CT data were acquired using a Biograph mCT in motionless and moving conditions. The PET images were analyzed by both CSH and fractal analyses. The area under the CSH (AUC-CSH) and the fractal dimension (FD) were used as quantitative metrics. In motionless conditions, the AUC-CSHs of the HO (0.80), HE1 (0.75) and HE2 (0.65) phantoms were different. They did not differ in moving conditions (HO, 0.63; HE1, 0.65; HE2, 0.60). The FD of the HO phantom (0.77) was smaller than the FDs of the HE1 (1.71) and HE2 (1.98) phantoms in motionless conditions; however, the FDs of the HO (1.99) and HE1 (2.19) phantoms were not different from each other and were smaller than that of the HE2 (3.73) phantom in moving conditions. Respiratory motion affected the results of the CSH and fractal analyses for the evaluation of the heterogeneity of the PET/CT images. The influence of respiratory motion was considered to vary depending on the object size.
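
    Assuming the common definition of the cumulative SUV-volume histogram (fraction of voxels above a threshold, with the threshold swept from 0 to 100% of SUVmax), the AUC-CSH metric can be computed as sketched below; the phantom geometry and the fractal analysis of the study are not reproduced, and the synthetic uptake values are arbitrary.

```python
import numpy as np

def auc_csh(suv_voxels, n_thresholds=101):
    """Area under the cumulative SUV-volume histogram (AUC-CSH).

    For each threshold t (a fraction of SUVmax) the CSH gives the fraction of
    voxels with SUV >= t * SUVmax; the area under that curve over t in [0, 1]
    is smaller for heterogeneous uptake than for homogeneous uptake."""
    suv = np.asarray(suv_voxels, dtype=float).ravel()
    t = np.linspace(0.0, 1.0, n_thresholds)
    csh = np.array([(suv >= ti * suv.max()).mean() for ti in t])
    # trapezoidal integration over the threshold axis
    return float(np.sum(0.5 * (csh[1:] + csh[:-1]) * np.diff(t)))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    homogeneous = 4.0 + rng.normal(0.0, 0.1, 5000)           # uniform uptake
    heterogeneous = np.concatenate([np.full(2500, 4.0),       # hot rim
                                    np.full(2500, 1.0)])      # cold core
    print("AUC-CSH, homogeneous:   %.2f" % auc_csh(homogeneous))
    print("AUC-CSH, heterogeneous: %.2f" % auc_csh(heterogeneous))
```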

  6. Statistical and fractal analysis of autofluorescent myocardium images in posthumous diagnostics of acute coronary insufficiency

    NASA Astrophysics Data System (ADS)

    Boichuk, T. M.; Bachinskiy, V. T.; Vanchuliak, O. Ya.; Minzer, O. P.; Garazdiuk, M.; Motrich, A. V.

    2014-08-01

    This research presents the results of an investigation of laser polarization fluorescence of biological layers (histological sections of the myocardium). The polarization structure of autofluorescence images of biological tissue layers was detected and investigated. A model is proposed to describe the formation of polarization-inhomogeneous autofluorescence images of optically anisotropic biological layers. On this basis, the method of laser autofluorescence polarimetry is justified analytically and tested experimentally. The effectiveness of this method in the postmortem diagnosis of infarction is analyzed. Objective criteria (statistical moments) for the differentiation of autofluorescent images of histological sections of the myocardium were defined, and the operational characteristics (sensitivity, specificity, accuracy) of the technique were determined.

  7. Block-based embedded color image and video coding

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Pearlman, William A.; Islam, Asad

    2004-01-01

    The Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders such as EZW, SPIHT, and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best-known color coders, while the perceptual quality of reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving-picture-based coding system called Motion-SPECK, with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000, while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK, such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK are high-end and emerging video applications such as high-quality digital video recording systems, Internet video and medical imaging.

  8. HD Photo: a new image coding technology for digital photography

    NASA Astrophysics Data System (ADS)

    Srinivasan, Sridhar; Tu, Chengjie; Regunathan, Shankar L.; Sullivan, Gary J.

    2007-09-01

    This paper introduces the HD Photo coding technology developed by Microsoft Corporation. The storage format for this technology is now under consideration in the ITU-T/ISO/IEC JPEG committee as a candidate for standardization under the name JPEG XR. The technology was developed to address end-to-end digital imaging application requirements, particularly including the needs of digital photography. HD Photo includes features such as good compression capability, high dynamic range support, high image quality capability, lossless coding support, full-format 4:4:4 color sampling, simple thumbnail extraction, embedded bitstream scalability of resolution and fidelity, and degradation-free compressed domain support of key manipulations such as cropping, flipping and rotation. HD Photo has been designed to optimize image quality and compression efficiency while also enabling low-complexity encoding and decoding implementations. To ensure low complexity for implementations, the design features have been incorporated in a way that not only minimizes the computational requirements of the individual components (including consideration of such aspects as memory footprint, cache effects, and parallelization opportunities) but results in a self-consistent design that maximizes the commonality of functional processing components.

  9. Lossless predictive coding for images with Bayesian treatment.

    PubMed

    Liu, Jing; Zhai, Guangtao; Yang, Xiaokang; Chen, Li

    2014-12-01

    Adaptive predictors have long been used for lossless predictive coding of images. Most existing lossless predictive coding techniques focus mainly on the suitability of the prediction model for the training set, under the underlying assumption of local consistency, which may not hold well on object boundaries and can cause large prediction errors. In this paper, we propose a novel approach based on the assumption that local consistency and patch redundancy exist simultaneously in natural images. We derive a family of linear models and design a new algorithm to automatically select one suitable model for prediction. From the Bayesian perspective, the model with the maximum posterior probability is considered the best. Two types of model evidence are included in our algorithm. One is traditional training evidence, which represents a model's suitability for the current pixel under the assumption of local consistency. The other is target evidence, which is proposed to express the preference for different models from the perspective of patch redundancy. It is shown that the fusion of training evidence and target evidence jointly exploits the benefits of local consistency and patch redundancy. As a result, the proposed predictor is more suitable for natural images with textures and object boundaries. Comprehensive experiments demonstrate that the proposed predictor achieves higher efficiency compared with state-of-the-art lossless predictors.
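
    The Bayesian evidence machinery of the paper is not reproduced here; as a much simpler stand-in for per-pixel model selection, the sketch below chooses between a horizontal and a vertical predictor according to which one fitted a small causal training window better, and reports the first-order entropy of the residuals. The striped test image and the window length are arbitrary.

```python
import numpy as np

def first_order_entropy(values):
    """Entropy in bits/sample of an integer-valued array."""
    _, counts = np.unique(values.ravel(), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def adaptive_residual(img, window=8):
    """Lossless-coding residual with per-pixel selection between a horizontal
    and a vertical predictor, chosen by which model fit a causal training
    window (previous pixels of the same row) better."""
    img = img.astype(int)
    res = np.zeros_like(img)
    res[0, :] = img[0, :]            # first row/column sent verbatim
    res[:, 0] = img[:, 0]
    for y in range(1, img.shape[0]):
        for x in range(1, img.shape[1]):
            if x == 1:
                pred = img[y, x - 1]
            else:
                lo = max(1, x - window)
                err_h = np.abs(img[y, lo:x] - img[y, lo - 1:x - 1]).mean()
                err_v = np.abs(img[y, lo:x] - img[y - 1, lo:x]).mean()
                pred = img[y, x - 1] if err_h <= err_v else img[y - 1, x]
            res[y, x] = img[y, x] - pred
    return res

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    # vertical stripes plus a smooth gradient: the vertical predictor wins at
    # stripe boundaries, the horizontal one inside smooth regions
    stripes = np.tile((np.arange(64) % 8 < 4) * 100, (64, 1))
    img = stripes + np.arange(64)[:, None] + rng.integers(0, 4, (64, 64))
    res = adaptive_residual(img)
    print("entropy of raw image:  %.2f bits" % first_order_entropy(img))
    print("entropy of residuals:  %.2f bits" % first_order_entropy(res))
```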

  10. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.

  11. Music and fractals

    NASA Astrophysics Data System (ADS)

    Wuorinen, Charles

    2015-03-01

    Any of the arts may produce exemplars that have fractal characteristics. There may be fractal painting, fractal poetry, and the like. But these will always be specific instances, not necessarily displaying intrinsic properties of the art-medium itself. Only music, I believe, of all the arts possesses an intrinsically fractal character, so that its very nature is fractally determined. Thus, it is reasonable to assert that any instance of music is fractal...

  12. Recursive time-varying filter banks for subband image coding

    NASA Technical Reports Server (NTRS)

    Smith, Mark J. T.; Chung, Wilson C.

    1992-01-01

    Filter banks and wavelet decompositions that employ recursive filters have been considered previously and are recognized for their efficiency in partitioning the frequency spectrum. This paper presents an analysis of a new infinite impulse response (IIR) filter bank in which these computationally efficient filters may be changed adaptively in response to the input. The filter bank is presented and discussed in the context of finite-support signals with the intended application in subband image coding. In the absence of quantization errors, exact reconstruction can be achieved and by the proper choice of an adaptation scheme, it is shown that IIR time-varying filter banks can yield improvement over conventional ones.

  13. Coded aperture imaging with self-supporting uniformly redundant arrays

    DOEpatents

    Fenimore, Edward E.

    1983-01-01

    A self-supporting uniformly redundant array pattern for coded aperture imaging. The present invention utilizes holes which are an integer times smaller in each direction than holes in conventional URA patterns. A balance correlation function is generated where holes are represented by 1's, nonholes are represented by -1's, and supporting area is represented by 0's. The self-supporting array can be used for low energy applications where substrates would greatly reduce throughput. The balance correlation response function for the self-supporting array pattern provides an accurate representation of the source of nonfocusable radiation.
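
    A minimal one-dimensional illustration of balanced-correlation decoding is sketched below: holes are represented by 1 in the aperture, and the decoding array assigns +1 to holes and -1 to non-holes (the 0-valued support elements of the self-supporting pattern are omitted). A random mask is used instead of a true URA, so the sidelobes are only approximately flat; this is a hedged sketch, not the patented pattern.

```python
import numpy as np

def circ_conv(a, b):
    """Circular convolution via the FFT (models the shadowgram formation)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def circ_corr(a, b):
    """Circular correlation via the FFT (models the decoding step)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n = 101
    aperture = (rng.random(n) < 0.5).astype(float)   # 1 = hole, 0 = opaque
    decoder = 2.0 * aperture - 1.0                   # balanced: +1 holes, -1 non-holes
    scene = np.zeros(n)
    scene[30] = 1.0                                  # a single point source
    detector = circ_conv(scene, aperture)            # coded shadow on the detector
    recon = circ_corr(detector, decoder)             # balanced-correlation decoding
    print("true source position:", 30)
    print("reconstructed peak at:", int(np.argmax(recon)))
```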

  14. Fractal analysis of complex microstructure in castings

    SciTech Connect

    Lu, S.Z.; Lipp, D.C.; Hellawell, A.

    1995-12-31

    Complex microstructures in castings are usually characterized descriptively which often raises ambiguity and makes it difficult to relate the microstructure to the growth kinetics or mechanical properties in processing modeling. Combining the principle of fractal geometry and computer image processing techniques, it is feasible to characterize the complex microstructures numerically by the parameters of fractal dimension, D, and shape factor, a, without ambiguity. Procedures of fractal measurement and analysis are described, and a test case of its application to cast irons is provided. The results show that the irregular cast structures may all be characterized numerically by fractal analysis.
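
    The measurement procedure itself is not detailed in the abstract; a generic box-counting estimate of the fractal dimension D of a binary structure, of the kind commonly used for such image-based measurements, is sketched below with a smooth circle outline as a sanity check.

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension D of a binary structure by box counting:
    count the boxes of side s that contain any foreground pixel, then fit
    log N(s) against log(1/s)."""
    img = np.asarray(binary_image, dtype=bool)
    counts = []
    for s in box_sizes:
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return float(slope)

if __name__ == "__main__":
    # sanity check: a smooth circle outline should give D close to 1
    y, x = np.mgrid[0:256, 0:256]
    outline = np.abs(np.hypot(x - 128, y - 128) - 80) < 1.0
    print("estimated D of a circle outline: %.2f" % box_counting_dimension(outline))
```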

  15. Fractal signatures in the aperiodic Fibonacci grating.

    PubMed

    Verma, Rupesh; Banerjee, Varsha; Senthilkumaran, Paramasivam

    2014-05-01

    The Fibonacci grating (FbG) is an archetypal example of aperiodicity and self-similarity. While aperiodicity distinguishes it from a fractal, self-similarity identifies it with a fractal. Our paper investigates the outcome of these complementary features on the FbG diffraction profile (FbGDP). We find that the FbGDP has unique characteristics (e.g., no reduction in intensity with increasing generations), in addition to fractal signatures (e.g., a non-integer fractal dimension). These make the Fibonacci architecture potentially useful in image forming devices and other emerging technologies.

  16. Conversion of raster coded images to polygonal data structures

    NASA Technical Reports Server (NTRS)

    Nichols, D. A.

    1982-01-01

    A method is presented for converting polygons coded in raster data structures into conventional vector structures to allow the output of scanner-based data collection systems to be input directly to conventional geographic information systems. The method relies on topological principles to (1) uniquely label each polygon in the image and produce an output image in which each pixel is described by the label of the polygon to which it belongs; (2) create line segment components of polygon boundaries, with nodes labeled and the two adjacent polygons identified; and (3) traverse the polygon boundaries by connecting the appropriate adjacent line segments. The conversion capability makes it possible to design systems which automatically convert to the data structure most appropriate for a particular application.

  17. Porosity imaged by a vector projection algorithm correlates with fractal dimension measured on 3D models obtained by microCT.

    PubMed

    Chappard, Daniel; Stancu, Izabela-Cristina

    2015-04-01

    Porosity is an important factor to consider in a large variety of materials. Porosity can be visualized in bone or 3D synthetic biomaterials by microcomputed tomography (microCT). Blocks of porous poly(2-hydroxyethyl methacrylate) were prepared with polystyrene beads of different diameters (500, 850, 1160 and 1560 μm) and analysed by microCT. On each 2D binarized microCT section, pixels of the pores which belong to the same image column received the same pseudo-colour according to a look-up table. The same colour was applied on the same column of a frontal plane image which was constructed line by line from all images of the microCT stack. The fractal dimension Df of the frontal plane image was measured, as well as the descriptors of the 3D models (porosity, 3D fractal dimension D3D, thickness, density and separation of material walls). Porosity, thickness, Df and D3D increased with the size of the porogen beads. A linear correlation was observed between Df and D3D. This method provides quantitative and qualitative analysis of porosity on a single frontal plane image of a porous object. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.

  18. Sparse coding based feature representation method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender

    In this dissertation, we study sparse coding based feature representation methods for the classification of multispectral and hyperspectral images (HSI). The existing feature representation systems based on the sparse signal model are computationally expensive, as they require solving a convex optimization problem to learn a dictionary. A sparse coding feature representation framework for the classification of HSI is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary based upon the extracted sub-bands from the spectral representation of a pixel. In the encoding step, we utilize a soft threshold function to obtain sparse feature representations for HSI. Experimental results showed that a randomly selected dictionary could be as effective as a dictionary learned from optimization. The new representation usually has a very high dimensionality, requiring a lot of computational resources. In addition, the spatial information of the HSI data has not been included in the representation. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) classifier to discriminate different types of land cover. We evaluated the proposed algorithm on three well known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP) and image fusion and recursive filtering (IFRF). The results from the experiments showed that the proposed method can achieve better overall and average classification accuracies with a much more compact representation leading to more efficient sparse models for HSI classification. To further
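
    A hedged sketch of the soft-threshold encoding step against a randomly selected dictionary (which the abstract reports can be as effective as a learned one) is given below; the sub-band construction and the SVM/CKSVM classification stages are not included, and the dictionary size and threshold are arbitrary.

```python
import numpy as np

def soft_threshold_encode(pixels, dictionary, threshold=0.1):
    """Sparse feature representation of spectral pixels: project onto the
    dictionary atoms, then soft-threshold to zero out small responses.

    pixels:     (n_samples, n_bands) spectral vectors
    dictionary: (n_atoms, n_bands), rows normalised to unit length here"""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    responses = pixels @ d.T
    return np.sign(responses) * np.maximum(np.abs(responses) - threshold, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    n_bands, n_atoms = 200, 64
    dictionary = rng.normal(size=(n_atoms, n_bands))   # randomly selected atoms
    pixels = rng.normal(size=(10, n_bands))            # 10 hypothetical HSI pixels
    codes = soft_threshold_encode(pixels, dictionary, threshold=2.0)
    print("code shape:", codes.shape)
    print("fraction of zero coefficients: %.2f" % (codes == 0).mean())
```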

  19. Fractal pattern of canine trichoblastoma.

    PubMed

    De Vico, Gionata; Cataldi, Marielda; Maiolino, Paola; Carella, Francesca; Beltraminelli, Stefano; Losa, Gabriele A

    2011-06-01

    To assess by fractal analysis the specific architecture, growth pattern, and tissue distribution that characterize subtypes of canine trichoblastoma, a benign tumor derived from or reduplicating the primitive hair germ of embryonic follicular development. Tumor masks and outlines obtained from immunohistologic images by gray-threshold segmentation of epithelial components were analyzed by fractal and conventional morphometry. The fractal dimension (FD) of each investigated case was determined from the slope of the regression line describing the fractal region within an experimentally established bi-asymptotic curve. All tumor masks and outlines obtained by gray-threshold segmentation of epithelial components showed fractal self-similar properties that were characterized by their FDs. However, only the masks revealed significantly different FD values, ranging from 1.75 to 1.85, enabling the discrimination of canine trichoblastoma subtypes. The FD data suggest that an iterative morphogenetic process, involving both the hair germ and the associated dermal papilla, may be responsible for the peculiar tissue architecture of trichoblastoma. The present study emphasizes the reliability of fractal analysis in achieving an objective characterization of canine trichoblastoma.

  20. High-resolution imaging using a translating coded aperture

    NASA Astrophysics Data System (ADS)

    Mahalanobis, Abhijit; Shilling, Richard; Muise, Robert; Neifeld, Mark

    2017-08-01

    It is well known that a translating mask can optically encode low-resolution measurements from which higher resolution images can be computationally reconstructed. We experimentally demonstrate that this principle can be used to achieve substantial increase in image resolution compared to the size of the focal plane array (FPA). Specifically, we describe a scalable architecture with a translating mask (also referred to as a coded aperture) that achieves eightfold resolution improvement (or 64∶1 increase in the number of pixels compared to the number of focal plane detector elements). The imaging architecture is described in terms of general design parameters (such as field of view and angular resolution, dimensions of the mask, and the detector and FPA sizes), and some of the underlying design trades are discussed. Experiments conducted with different mask patterns and reconstruction algorithms illustrate how these parameters affect the resolution of the reconstructed image. Initial experimental results also demonstrate that the architecture can directly support task-specific information sensing for detection and tracking, and that moving objects can be reconstructed separately from the stationary background using motion priors.

  1. Microbialites on Mars: a fractal analysis of the Athena's microscopic images

    NASA Astrophysics Data System (ADS)

    Bianciardi, G.; Rizzo, V.; Cantasano, N.

    2015-10-01

    The Mars Exploration Rovers investigated Martian plains where laminated sedimentary rocks are present. The Athena morphological investigation [1] showed microstructures organized in intertwined filaments of microspherules: a texture we have also found in samples of terrestrial (biogenic) stromatolites and other microbialites, and not in pseudo-abiogenic stromatolites. We performed a quantitative image analysis in order to compare 50 microbialite images with 50 rover (Opportunity and Spirit) images (approximately 30,000/30,000 microstructures). Contours were extracted and morphometric indexes obtained: geometric and algorithmic complexities, entropy, tortuosity, minimum and maximum diameters. The terrestrial and Martian textures turned out to be multifractal. Mean values and confidence intervals from the Martian images overlapped perfectly with those from the terrestrial samples. The probability of this occurring by chance was less than 1/2^8, p<0.004. Our work shows evidence of a widespread presence of microbialites in the Martian outcroppings: i.e., the presence of unicellular life on ancient Mars, when liquid water undoubtedly flowed on the Red Planet.

  2. Fractal Characterization of Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Qiu, Hon-Iie; Lam, Nina Siu-Ngan; Quattrochi, Dale A.; Gamon, John A.

    1999-01-01

    Two Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral images selected from the Los Angeles area, one representing an urban and the other a rural landscape, were used to examine their spatial complexity across the entire spectrum of the remote sensing data. Using the ICAMS (Image Characterization And Modeling System) software, we computed the fractal dimension values via the isarithm and triangular prism methods for all 224 bands in the two AVIRIS scenes. The resultant fractal dimensions reflect changes in image complexity across the spectral range of the hyperspectral images. Both the isarithm and triangular prism methods detect unusually high D values in the spectral bands that fall within the atmospheric absorption and scattering zones, where signal-to-noise ratios are low. Fractal dimensions for the urban area resulted in higher values than for the rural landscape, and the differences between the resulting D values are more distinct in the visible bands. The triangular prism method is sensitive to a few random speckles in the images, leading to a lower dimensionality. On the contrary, the isarithm method ignores the speckles and focuses on the major variation dominating the surface, thus resulting in a higher dimension. The fractal curves plotted over the entire bandwidth of the hyperspectral images could be used to distinguish landscape types as well as to screen noisy bands.

  3. Phase transfer function based method to alleviate image artifacts in wavefront coding imaging system

    NASA Astrophysics Data System (ADS)

    Mo, Xutao; Wang, Jinjiang

    2013-09-01

    The wavefront coding technique can extend the depth of field (DOF) of an incoherent imaging system. Several rectangularly separable phase masks (such as the cubic, exponential, logarithmic, sinusoidal and rational types) have been proposed and discussed, because they can extend the DOF up to ten times the DOF of an ordinary imaging system. However, researchers have pointed out that the images are damaged by artifacts, which usually come from differences between the non-linear phase transfer function (PTF) used in the image restoration filter and the PTF of the real imaging condition. In order to alleviate the image artifacts in imaging systems with wavefront coding, an optimization model based on the PTF is proposed to make the PTF invariant with defocus. Thereafter, an image restoration filter based on the average PTF over the designed depth of field is introduced along with the PTF-based optimization. The combination of the proposed optimization and image restoration can alleviate the artifacts, which was confirmed by imaging simulation of a spoke target. The cubic phase mask (CPM) and the exponential phase mask (EPM) are discussed as examples.

  4. Fractal dimension and lacunarity of tumor microscopic images as prognostic indicators of clinical outcome in early breast cancer.

    PubMed

    Pribic, Jelena; Vasiljevic, Jelena; Kanjer, Ksenija; Konstantinovic, Zora Neskovic; Milosevic, Nebojsa T; Vukosavljevic, Dragica Nikolic; Radulovic, Marko

    2015-01-01

    Research in the field of breast cancer outcome prognosis has been focused on molecular biomarkers, while neglecting the discovery of novel tumor histology structural clues. We thus aimed to improve breast cancer prognosis by fractal analysis of tumor histomorphology. This retrospective study included 92 breast cancer patients without systemic treatment. Fractal dimension and lacunarity of the breast tumor microscopic histology possess prognostic value comparable to the major clinicopathological prognostic parameters. Fractal analysis was performed for the first time on routinely produced archived pan-tissue stained primary breast tumor sections, indicating its potential for clinical use as a simple and cost-effective prognostic indicator of distant metastasis risk to complement the molecular approaches for cancer risk prognosis.

  5. Image amplification based super-resolution reconstruction procedure designed for wavefront-coded imaging system

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Zong, Caihui; Wei, Jingxuan; Xie, Xiaopeng

    2016-10-01

    Wave-front coding, proposed by Dowski and Cathey in 1995, is widely known to be capable of extending the depth of focus (DOF) of incoherent imaging systems. However, benefiting from its very large point spread function (PSF) generated by a suitably designed phase mask that is added to the aperture plane, wave-front coding could also be used to achieve super-resolution without replacing the current sensor with one of smaller pitch size. An image amplification based super-resolution reconstruction procedure has been specifically designed for wave-front coded imaging systems and its effectiveness has been tested by experiment. For instance, for a focal length of 50 mm and f-number 4.5, objects within the range [5 m, ∞] are clearly imaged with the help of wave-front coding, which indicates a DOF extension ratio of approximately 20. The proposed super-resolution reconstruction procedure produces at least 3× resolution improvement, with the quality of the reconstructed super-resolution image approaching the diffraction limit.

  6. Categorizing biomedicine images using novel image features and sparse coding representation

    PubMed Central

    2013-01-01

    Background: Images embedded in biomedical publications carry rich information that often concisely summarizes key hypotheses adopted, methods employed, or results obtained in a published study. Therefore, they offer valuable clues for understanding the main content of a biomedical publication. Prior studies have pointed out the potential of mining images embedded in biomedical publications for automatically understanding and retrieving such images' associated source documents. Within the broad area of biomedical image processing, categorizing biomedical images is a fundamental step for building many advanced image analysis, retrieval, and mining applications. As in any automatic categorization effort, discriminative image features provide the most crucial aid in the process. Method: We observe that many images embedded in biomedical publications carry versatile annotation text. Based on the locations of and the spatial relationships between these text elements in an image, we propose novel image features for image categorization, which quantitatively characterize the spatial positions and distributions of text elements inside a biomedical image. We further adopt a sparse coding representation (SCR) based technique to categorize images embedded in biomedical publications by leveraging the newly proposed image features. Results: We randomly selected 990 images in JPG format for use in our experiments, of which 310 images were used as training samples and the rest as testing cases. We first segmented the 310 sample images following our proposed procedure, which produced a total of 1035 sub-images. We then manually labeled all these sub-images according to the two-level hierarchical image taxonomy proposed by [1]. Among our annotation results, 316 are microscopy images, 126 are gel electrophoresis images, 135 are line charts, 156 are bar charts, 52 are spot charts, 25 are tables, 70 are flow charts, and the remaining 155 images are

  7. Fractal Stiffening

    NASA Technical Reports Server (NTRS)

    Harper, David William (Inventor)

    2017-01-01

    A structural support having fractal stiffening, and a method of fabricating the support, are presented. An optimized location of at least three nodes is predetermined prior to fabricating the structural support; a first set of webs is formed on one side of the support and joined to the nodes to form a first pocket region. A second set of webs is formed within the first pocket region, forming a second pocket region, where the height of the first set of webs extending orthogonally from the side of the support is greater than that of the second set of webs extending orthogonally from the support.

  8. Biometric iris image acquisition system with wavefront coding technology

    NASA Astrophysics Data System (ADS)

    Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao

    2013-09-01

    Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry and iris texture; the other depends on individual behavioral attributes, such as signature, keystroke, voice and gait style. Among these features, iris recognition is one of the most attractive approaches due to its nature of randomness, texture stability over a lifetime, high entropy density and non-invasive acquisition. While the performance of iris recognition on high-quality images is well investigated, not many studies have addressed how iris recognition performs on non-ideal image data, especially when the data are acquired in challenging conditions, such as long working distance, dynamic movement of subjects and uncontrolled illumination. There are three main contributions in this paper. Firstly, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Secondly, the irradiance constraints were derived from the optical conservation theorem. Through the relationship between the subject and the detector, we could estimate the limit of the working distance when the camera lens and CCD sensor were known. The working distance is set to 3 m in our system, with a pupil diameter of 86 mm and a CCD irradiance of 0.3 mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization and post signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over a 3 m working distance with 400 mm focal length and aperture F/6.3 optics. The simulation result as well as the experiment validates the proposed code

  9. Multiscale differential fractal feature with application to target detection

    NASA Astrophysics Data System (ADS)

    Shi, Zelin; Wei, Ying; Huang, Shabai

    2004-07-01

    A multiscale differential fractal feature of an image is proposed, and a method for detecting small targets in complex natural clutter is presented. Because the fractal features of man-made objects vary much more strongly with scale than those of natural backgrounds, fractal features computed at multiple scales offer more discriminating power for separating man-made targets from natural clutter than standard fractal dimensions. Multiscale differential fractal dimensions are derived from a typical fractal model, and the standard covering-blanket method is improved and used to estimate multiscale fractal dimensions. A multiscale differential fractal feature is defined as the variation of the fractal dimension between two scales over a suitable scale range. It makes the fractal features of man-made objects stand out from natural clutter much better than the fractal dimension obtained by the standard covering-blanket method. Meanwhile, the computation and storage requirements are greatly reduced, to 4/M and 2/M of those of the standard covering-blanket method, respectively (M is the scale). In the image of the multiscale differential fractal feature, a local gray-level histogram statistical method is used for target detection. Experimental results indicate that the method is suitable for both land and sea backgrounds, works for both infrared and TV images, and can correctly detect small targets from a single frame. The method is fast and easy to implement.

  10. Segmentation of histological structures for fractal analysis

    NASA Astrophysics Data System (ADS)

    Dixon, Vanessa; Kouznetsov, Alexei; Tambasco, Mauro

    2009-02-01

    Pathologists examine histology sections to make diagnostic and prognostic assessments regarding cancer based on deviations in cellular and/or glandular structures. However, these assessments are subjective and exhibit some degree of observer variability. Recent studies have shown that fractal dimension (a quantitative measure of structural complexity) has proven useful for characterizing structural deviations and exhibits great potential for automated cancer diagnosis and prognosis. Computing fractal dimension relies on accurate image segmentation to capture the architectural complexity of the histology specimen. For this purpose, previous studies have used techniques such as intensity histogram analysis and edge detection algorithms. However, care must be taken when segmenting pathologically relevant structures since improper edge detection can result in an inaccurate estimation of fractal dimension. In this study, we established a reliable method for segmenting edges from grayscale images. We used a Koch snowflake, an object of known fractal dimension, to investigate the accuracy of various edge detection algorithms and selected the most appropriate algorithm to extract the outline structures. Next, we created validation objects ranging in fractal dimension from 1.3 to 1.9 imitating the size, structural complexity, and spatial pixel intensity distribution of stained histology section images. We applied increasing intensity thresholds to the validation objects to extract the outline structures and observe the effects on the corresponding segmentation and fractal dimension. The intensity threshold yielding the maximum fractal dimension provided the most accurate fractal dimension and segmentation, indicating that this quantitative method could be used in an automated classification system for histology specimens.

  11. Texture analysis of diagnostic x-ray images by use of fractals

    NASA Astrophysics Data System (ADS)

    Lundahl, T.; Ohley, W. J.; Kay, S. M.; White, H.; Williams, D. O.; Most, A. S.

    1986-11-01

    In this work a discrete fractional Brownian motion (FBM) model is applied to X-ray images as a measure of regional texture. FBM is a generalization of ordinary Wiener-Levy Brownian motion. A parameter H is introduced which describes the roughness of the realizations. Using generated realizations, a Cramer-Rao bound for the variance of an estimate of H was evaluated using asymptotic statistics. The results show that the accuracy of the estimate is independent of the true H. A maximum likelihood estimator (MLE) is derived for H and applied to the data sets; the results were close to the Cramer-Rao bound. The MLE is then applied to sequences of digital coronary angiograms. The results show that the H parameter is a useful index by which to segment vessels from background noise.
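
    The paper's maximum likelihood estimator is not reproduced here; as a simpler illustration of the roughness parameter H, the sketch below synthesizes an approximate fractional-Brownian trace by spectral shaping and estimates H from the scaling of the standard deviation of its increments. Both the synthesis and the estimator are generic stand-ins, and the lags and trace length are arbitrary.

```python
import numpy as np

def synth_fbm(n, hurst, rng):
    """Approximate 1-D fractional Brownian motion via spectral synthesis:
    amplitude spectrum ~ f**-(H + 0.5) with random phases (DC term zeroed)."""
    freqs = np.fft.rfftfreq(n)
    amplitude = np.zeros_like(freqs)
    amplitude[1:] = freqs[1:] ** -(hurst + 0.5)
    phases = np.exp(2j * np.pi * rng.random(len(freqs)))
    return np.fft.irfft(amplitude * phases, n)

def estimate_hurst(trace, lags=(1, 2, 4, 8, 16, 32)):
    """Estimate H from the scaling law std(x[t+lag] - x[t]) ~ lag**H."""
    stds = [np.std(trace[lag:] - trace[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(stds), 1)
    return float(slope)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    for h_true in (0.3, 0.5, 0.7):
        trace = synth_fbm(4096, h_true, rng)
        print("true H = %.1f  estimated H = %.2f" % (h_true, estimate_hurst(trace)))
```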

  12. Magnetohydrodynamics of fractal media

    SciTech Connect

    Tarasov, Vasily E.

    2006-05-15

    The fractal distribution of charged particles is considered. An example of this distribution is the charged particles that are distributed over the fractal. The fractional integrals are used to describe fractal distribution. These integrals are considered as approximations of integrals on fractals. Typical turbulent media could be of a fractal structure and the corresponding equations should be changed to include the fractal features of the media. The magnetohydrodynamics equations for fractal media are derived from the fractional generalization of integral Maxwell equations and integral hydrodynamics (balance) equations. Possible equilibrium states for these equations are considered.

  13. Sequential Compact Code Learning for Unsupervised Image Hashing.

    PubMed

    Liu, Li; Shao, Ling

    2016-12-01

    Effective hashing for large-scale image databases is a popular research area, attracting much attention in computer vision and visual information retrieval. Several recent methods attempt to learn either graph embedding or semantic coding for fast and accurate applications. In this paper, a novel unsupervised framework, termed evolutionary compact embedding (ECE), is introduced to automatically learn the task-specific binary hash codes. It can be regarded as an optimization algorithm that combines the genetic programming (GP) and a boosting trick. In our architecture, each bit of ECE is iteratively computed using a weak binary classification function, which is generated through GP evolving by jointly minimizing its empirical risk with the AdaBoost strategy on a training set. We address this as greedy optimization by embedding high-dimensional data points into a similarity-preserved Hamming space with a low dimension. We systematically evaluate ECE on two data sets, SIFT 1M and GIST 1M, showing the effectiveness and the accuracy of our method for a large-scale similarity search.

  14. Coded aperture subreflector array for high resolution radar imaging

    NASA Astrophysics Data System (ADS)

    Lynch, Jonathan J.; Herrault, Florian; Kona, Keerti; Virbila, Gabriel; McGuire, Chuck; Wetzel, Mike; Fung, Helen; Prophet, Eric

    2017-05-01

    HRL Laboratories has been developing a new approach for high resolution radar imaging on stationary platforms. High angular resolution is achieved by operating at 235 GHz and using a scalable tile phased array architecture that has the potential to realize thousands of elements at an affordable cost. HRL utilizes aperture coding techniques to minimize the size and complexity of the RF electronics needed for beamforming, and wafer level fabrication and integration allow tiles containing 1024 elements to be manufactured with reasonable costs. This paper describes the results of an initial feasibility study for HRL's Coded Aperture Subreflector Array (CASA) approach for a 1024 element micromachined antenna array with integrated single-bit phase shifters. Two candidate electronic device technologies were evaluated over the 170 - 260 GHz range, GaN HEMT transistors and GaAs Schottky diodes. Array structures utilizing silicon micromachining and die bonding were evaluated for etch and alignment accuracy. Finally, the overall array efficiency was estimated to be about 37% (not including spillover losses) using full wave array simulations and measured device performance, which is a reasonable value at 235 GHz. Based on the measured data we selected GaN HEMT devices operated passively with 0V drain bias due to their extremely low DC power dissipation.

  15. Adaptation of a neutron diffraction detector to coded aperture imaging

    SciTech Connect

    Vanier, P.E.; Forman, L.

    1997-02-01

    A coded aperture neutron imaging system developed at Brookhaven National Laboratory (BNL) has demonstrated that it is possible to record not only a flux of thermal neutrons at some position, but also the directions from whence they came. This realization of an idea which defied the conventional wisdom has provided a device which has never before been available to the nuclear physics community. A number of potential applications have been explored, including (1) counting warheads on a bus or in a storage area, (2) investigating inhomogeneities in drums of Pu-containing waste to facilitate non-destructive assays, (3) monitoring of vaults containing accountable materials, (4) detection of buried land mines, and (5) locating solid deposits of nuclear material held up in gaseous diffusion plants.

  16. Image coding using entropy-constrained residual vector quantization

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    The residual vector quantization (RVQ) structure is exploited to produce a variable length codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.

  17. The application of coded excitation technology in medical ultrasonic Doppler imaging

    NASA Astrophysics Data System (ADS)

    Li, Weifeng; Chen, Xiaodong; Bao, Jing; Yu, Daoyin

    2008-03-01

    Medical ultrasonic Doppler imaging is one of the most important domains of modern medical imaging technology. Applying coded excitation technology in a medical ultrasonic Doppler imaging system offers higher SNR and deeper penetration than a conventional pulse-echo imaging system; it also improves image quality and enhances the sensitivity to weak signals, and a properly chosen code is beneficial to the received spectrum of the Doppler signal. This paper first analyzes the application of coded excitation technology in medical ultrasonic Doppler imaging systems in general, showing the advantages and prospects of coded excitation, and then introduces its principle and theory. Next, we compare several coded sequences (including chirp and fake-chirp signals, Barker codes, Golay complementary sequences, M-sequences, etc.). Considering mainlobe width, range sidelobe level, signal-to-noise ratio and the sensitivity of the Doppler signal, we chose Barker codes as the coded sequence. Finally, we designed the coded excitation circuit. The results in B-mode imaging and Doppler flow measurement matched our expectations, demonstrating the advantage of applying coded excitation technology in a digital medical ultrasonic Doppler endoscope imaging system.
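
    A minimal sketch of pulse compression with the 13-element Barker code is given below: a weak, noisy echo is matched-filtered with the transmitted code, and the code's mainlobe-to-peak-sidelobe ratio is printed. The transmit circuitry and Doppler processing of the paper are not reproduced, and the echo amplitude and noise level are arbitrary.

```python
import numpy as np

# the standard 13-element Barker code (mainlobe 13, peak sidelobe magnitude 1)
BARKER_13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    n = 400
    echo_position = 150
    received = np.zeros(n)
    received[echo_position:echo_position + len(BARKER_13)] = 0.2 * BARKER_13  # weak echo
    received += rng.normal(0, 0.1, n)                                         # noise

    # matched filter = correlation of the received signal with the transmitted code
    compressed = np.correlate(received, BARKER_13, mode="same")
    expected_peak = echo_position + len(BARKER_13) // 2
    print("expected peak index:", expected_peak,
          " actual:", int(np.argmax(np.abs(compressed))))

    acf = np.correlate(BARKER_13, BARKER_13, mode="full")
    sidelobe = int(np.abs(acf[:len(BARKER_13) - 1]).max())
    print("mainlobe / peak sidelobe: %d / %d" % (int(acf.max()), sidelobe))
```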

  18. The tuning of human visual cortex to variations in the 1/f(α) amplitude spectra and fractal properties of synthetic noise images.

    PubMed

    Isherwood, Zoey J; Schira, Mark M; Spehar, Branka

    2017-02-01

    Natural scenes share a consistent distribution of energy across spatial frequencies (SF) known as the 1/f(α) amplitude spectrum (α≈0.8-1.5, mean 1.2). This distribution is scale-invariant, which is a fractal characteristic of natural scenes with statistically similar structure at different spatial scales. While the sensitivity of the visual system to the 1/f properties of natural scenes has been studied extensively using psychophysics, relatively little is known about the tuning of cortical responses to these properties. Here, we use fMRI and retinotopic mapping techniques to measure and analyze BOLD responses in early visual cortex (V1, V2, and V3) to synthetic noise images that vary in their 1/f(α) amplitude spectra (α=0.25 to 2.25, step size: 0.50) and contrast levels (10% and 30%) (Experiment 1). To compare the dependence of the BOLD response between the photometric (intensity based) and geometric (fractal) properties of our stimuli, in Experiment 2 we compared grayscale noise images to their binary (thresholded) counterparts, which contain only black and white regions. In both experiments, early visual cortex responded maximally to stimuli generated to have an input 1/f slope corresponding to natural 1/f(α) amplitude spectra, and lower BOLD responses were found for steeper or shallower 1/f slopes (peak modulation: 0.59% for 1.25 vs. 0.31% for 2.25). To control for changing receptive field sizes, responses were also analyzed across multiple eccentricity bands in cortical surface space. For most eccentricity bands, BOLD responses were maximal for natural 1/f(α) amplitude spectra, but importantly there was no difference in the BOLD response to grayscale stimuli and their corresponding thresholded counterparts. Since the thresholding of an image changes its measured 1/f slope (α) but not its fractal characteristics, this suggests that neuronal responses in early visual cortex are not strictly driven by spectral slope values (photometric properties) but
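
    A hedged sketch of how such stimuli can be generated is given below: a grayscale noise image with a prescribed 1/f^alpha amplitude spectrum is synthesized by assigning random Fourier phases, and a thresholded (binary) counterpart is produced as in Experiment 2. The image size and the median-threshold rule are arbitrary choices, and the fMRI analysis itself is not reproduced.

```python
import numpy as np

def noise_image(size, alpha, rng):
    """Grayscale noise with amplitude spectrum ~ 1/f**alpha, rescaled to [0, 1]."""
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                       # avoid division by zero at DC
    amplitude = 1.0 / f ** alpha
    amplitude[0, 0] = 0.0               # zero-mean image
    phases = np.exp(2j * np.pi * rng.random((size, size)))
    img = np.real(np.fft.ifft2(amplitude * phases))
    return (img - img.min()) / (img.max() - img.min())

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    for alpha in (0.25, 1.25, 2.25):
        gray = noise_image(256, alpha, rng)
        binary = (gray > np.median(gray)).astype(float)   # thresholded counterpart
        print("alpha=%.2f  grayscale std=%.3f  binary white fraction=%.2f"
              % (alpha, gray.std(), binary.mean()))
```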

  19. Application of Fractal Dimension on Palsar Data

    NASA Astrophysics Data System (ADS)

    Singh, Dharmendra; Pant, Triloki

    Study of land cover is a primary task of remote sensing, in which microwave imaging plays an important role. As an alternative to optical imaging, microwave imaging, in particular Synthetic Aperture Radar (SAR) imaging, is very popular. With the advancement of technology, multi-polarized images are now available, e.g., ALOS-PALSAR (Phased Array type L-band SAR), which are beneficial because each polarization channel shows a different sensitivity to various land features. Further, various land classes can be classified on the basis of textural measures. One such textural measure is the fractal dimension. It is noteworthy that fractal dimension is a measure of roughness, and thus various land classes can be distinguished on the basis of their roughness. The value of the fractal dimension for surfaces lies between 2.0 and 3.0, where 2.0 represents a smooth surface while 3.0 represents a drastically rough surface. The study area covers subset images lying between 29°56'53"N, 77°50'32"E and 29°50'40"N, 77°57'19"E. The PALSAR images of the years 2007 and 2009 are considered for the study. In the present paper a fractal-based classification of PALSAR images has been performed for identification of water, urban and agricultural areas. Since fractals represent the image texture, the present study attempts to find the fractal properties of land covers to distinguish them from one another. For this purpose a context has been defined on the basis of a moving window, which is used to estimate the local fractal dimension and is then moved over the whole image. The size of the window is an important issue for estimation of textural measures and is taken to be 5x5 in the present study. This procedure, in turn, produces a textural map called a fractal map. The fractal map is constituted from the local fractal dimension values and can be used for contextual classification. In order to study the fractal properties of PALSAR images, the three polarization images

  20. Jupiter Fractal Art

    NASA Image and Video Library

    2017-08-10

    See Jupiter's Great Red Spot as you've never seen it before in this new Jovian work of art. Artist Mik Petter created this unique, digital artwork using data from the JunoCam imager on NASA's Juno spacecraft. The art form, known as fractals, uses mathematical formulas to create art with an infinite variety of form, detail, color and light. The tumultuous atmospheric zones in and around the Great Red Spot are highlighted by the author's use of colorful fractals. Vibrant colors of various tints and hues, combined with the almost organic-seeming shapes, make this image seem to be a colorized and crowded petri dish of microorganisms, or a close-up view of microscopic and wildly-painted seashells. The original JunoCam image was taken on July 10, 2017 at 7:10 p.m. PDT (10:10 p.m. EDT), as the Juno spacecraft performed its seventh close flyby of Jupiter. The spacecraft captured the image from about 8,648 miles (13,917 kilometers) above the tops of the clouds of the planet at a latitude of -32.6 degrees. https://photojournal.jpl.nasa.gov/catalog/PIA21777

  1. The two-dimensional code image recognition based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Wan, Hao; Peng, Cheng

    2017-01-01

    With the development of information technology, two-dimensional codes are more and more widely used. In two-dimensional code recognition technology, noise reduction of the two-dimensional code image is very important. Wavelet transform is applied to the noise reduction of two-dimensional codes, and corresponding MATLAB experiments and simulations are carried out. The results show that the wavelet transform is simple and fast for noise reduction of two-dimensional codes and preserves the details of the two-dimensional code image well.
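
    A minimal wavelet-denoising sketch in Python with PyWavelets is shown below; the paper used MATLAB, and the wavelet family, decomposition level and threshold rule here are illustrative choices, not those of the authors.

      import numpy as np
      import pywt

      def wavelet_denoise(img, wavelet="db4", level=2):
          coeffs = pywt.wavedec2(img, wavelet, level=level)
          # universal threshold estimated from the finest diagonal detail band
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
          thr = sigma * np.sqrt(2.0 * np.log(img.size))
          denoised = [coeffs[0]]                    # keep the approximation band
          for (cH, cV, cD) in coeffs[1:]:
              denoised.append(tuple(pywt.threshold(c, thr, mode="soft")
                                    for c in (cH, cV, cD)))
          return pywt.waverec2(denoised, wavelet)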

  2. Coded tissue harmonic imaging with nonlinear chirp signals.

    PubMed

    Song, Jaehee; Chang, Jin Ho; Song, Tai-kyong; Yoo, Yangmo

    2011-05-01

    Coded tissue harmonic imaging with pulse inversion (CTHI-PI) based on a linear chirp signal can improve the signal-to-noise ratio while minimizing the peak range sidelobe level (PRSL), which is the main advantage over CTHI with bandpass filtering (CTHI-BF). However, the CTHI-PI technique can suffer from motion artifacts due to the reduced frame rate caused by the two firings of opposite-phase signals required for each scanline. In this paper, a new CTHI method based on a nonlinear chirp signal (CTHI-NC) is presented, which can improve the separation of fundamental and harmonic components without sacrificing frame rate. The nonlinear chirp signal is designed to minimize the PRSL value by optimizing its frequency sweep rate and time duration. The performance of the CTHI-NC method was evaluated by measuring the PRSL and mainlobe width after compression. From the in vitro experiments, the CTHI-NC provided a PRSL of -40.6 dB and a mainlobe width of 2.1 μs for the transmit quadratic nonlinear chirp signal with a center frequency of 2.1 MHz, a -6 dB fractional bandwidth of 0.6, and a time duration of 15 μs. These results indicate that the proposed method could be used to improve frame rates in CTHI while providing image quality comparable to CTHI-PI.
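
    A quadratic (nonlinear) chirp and its matched-filter compression can be sketched in Python with SciPy; the sampling rate is an assumed value, and the paper's optimized sweep rate, apodization and pulse-inversion processing are not reproduced.

      import numpy as np
      from scipy.signal import chirp

      fs = 40e6                              # sampling rate [Hz], assumed
      T = 15e-6                              # time duration from the paper [s]
      fc, fbw = 2.1e6, 0.6                   # center frequency, fractional bandwidth
      t = np.arange(0.0, T, 1.0 / fs)
      tx = chirp(t, f0=fc * (1 - fbw / 2), t1=T, f1=fc * (1 + fbw / 2),
                 method="quadratic")

      # matched-filter (pulse) compression of the transmit pulse itself
      compressed = np.correlate(tx, tx, mode="full")
      compressed_db = 20 * np.log10(np.abs(compressed) /
                                    np.abs(compressed).max() + 1e-12)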

  3. Adaptive zero-tree structure for curved wavelet image coding

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Wang, Demin; Vincent, André

    2006-02-01

    We investigate the issue of efficient data organization and representation of the curved wavelet coefficients [curved wavelet transform (WT)]. We present an adaptive zero-tree structure that exploits the cross-subband similarity of the curved wavelet transform. In the embedded zero-tree wavelet (EZW) coder and in set partitioning in hierarchical trees (SPIHT), the parent-child relationship is defined in such a way that a parent has four children restricted to a square of 2×2 pixels. In the adaptive zero-tree structure, by contrast, the parent-child relationship varies according to the curves along which the curved WT is performed. Five child patterns were determined based on different combinations of curve orientation. A new image coder was then developed based on this adaptive zero-tree structure and the set-partitioning technique. Experimental results using synthetic and natural images show the effectiveness of the proposed adaptive zero-tree structure for encoding of the curved wavelet coefficients. The coding gain of the proposed coder can be up to 1.2 dB in terms of peak SNR (PSNR) compared to the SPIHT coder. Subjective evaluation shows that the proposed coder preserves lines and edges better than the SPIHT coder.
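
    For reference, the conventional EZW/SPIHT parent-child mapping that the abstract contrasts with can be written in a few lines of Python; the adaptive, curve-dependent mapping proposed in the paper is not reproduced here.

      def conventional_children(r, c):
          """EZW/SPIHT rule: coefficient (r, c) parents a 2x2 block in the next finer subband."""
          return [(2 * r, 2 * c), (2 * r, 2 * c + 1),
                  (2 * r + 1, 2 * c), (2 * r + 1, 2 * c + 1)]

      # Example: the coefficient at (3, 5) parents rows 6-7, columns 10-11.
      print(conventional_children(3, 5))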

  4. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
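
    The signal chain described above can be illustrated in Python with a previous-pixel predictor followed by Golomb-Rice coding of the mapped residuals; the chip performs these steps in mixed analog/digital hardware, and the predictor and Rice parameter k used here are illustrative assumptions.

      import numpy as np

      def rice_encode(value, k):
          """Golomb-Rice code of a non-negative integer: unary quotient + k-bit remainder."""
          q, r = value >> k, value & ((1 << k) - 1)
          return "1" * q + "0" + format(r, f"0{k}b")

      def encode_row(row, k=3):
          bits, prev = [], 0
          for pixel in row.astype(int):
              residual = pixel - prev                 # previous-pixel prediction
              prev = pixel
              mapped = 2 * residual if residual >= 0 else -2 * residual - 1
              bits.append(rice_encode(mapped, k))     # map to non-negative, then code
          return "".join(bits)

      print(encode_row(np.array([100, 102, 101, 99, 120, 121])))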

  6. MORPH-I (Ver 1.0) a software package for the analysis of scanning electron micrograph (binary formatted) images for the assessment of the fractal dimension of enclosed pore surfaces

    USGS Publications Warehouse

    Mossotti, Victor G.; Eldeeb, A. Raouf; Oscarson, Robert

    1998-01-01

    MORPH-I is a set of C-language computer programs for the IBM PC and compatible minicomputers. The programs in MORPH-I are used for the fractal analysis of scanning electron microscope and electron microprobe images of pore profiles exposed in cross-section. The program isolates and traces the cross-sectional profiles of exposed pores and computes the Richardson fractal dimension for each pore. Other programs in the set provide for image calibration, display, and statistical analysis of the computed dimensions for highly complex porous materials. Requirements: IBM PC or compatible; minimum 640 K RAM; math coprocessor; SVGA graphics board providing mode 103 display.
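
    The Richardson (divider) estimate mentioned above can be sketched generically in Python: the pore profile is walked with increasing step lengths and the dimension follows from the slope of the log-log plot of measured length against step. This is a sketch of the idea only, not the MORPH-I implementation; subsampling by index stands in for walking with a fixed ruler length.

      import numpy as np

      def richardson_dimension(xy, steps=(1, 2, 4, 8, 16)):
          """xy: (N, 2) array of ordered points along a pore profile."""
          lengths = []
          for s in steps:
              pts = xy[::s]                              # coarser ruler
              seg = np.diff(pts, axis=0)
              lengths.append(np.sqrt((seg ** 2).sum(axis=1)).sum())
          slope, _ = np.polyfit(np.log(steps), np.log(lengths), 1)
          return 1.0 - slope                             # L(eps) ~ eps**(1 - D)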

  7. Requirements for imaging vulnerable plaque in the coronary artery using a coded aperture imaging system

    NASA Astrophysics Data System (ADS)

    Tozian, Cynthia

    A coded aperture plate was employed on a conventional gamma camera for 3D single photon emission computed tomography (SPECT) imaging of small animal models. The coded aperture design was selected to improve the spatial resolution and decrease the minimum detectable activity (MDA) required to image plaque formation in the APoE (apolipoprotein E) gene-deficient mouse model when compared to conventional SPECT techniques. The pattern that was tested was a no-two-holes-touching (NTHT) modified uniformly redundant array (MURA) having 1,920 pinholes. The number of pinholes combined with the thin sintered tungsten plate was designed to increase the efficiency of the imaging modality over conventional gamma camera imaging methods while improving spatial resolution and reducing noise in the image reconstruction. The MDA required to image the vulnerable plaque in a human cardiac-torso mathematical phantom was simulated with a Monte Carlo code and evaluated to determine the optimum plate thickness by a receiver operating characteristic (ROC) analysis yielding the lowest possible MDA and highest area under the curve (AUC). A partial 3D expectation maximization (EM) reconstruction was developed to improve signal-to-noise ratio (SNR), dynamic range, and spatial resolution over the linear correlation method of reconstruction. This improvement was evaluated by imaging a mini hot rod phantom, simulating the dynamic range, and by performing a bone scan of the C-57 control mouse. Results of the experimental and simulated data as well as other plate designs were analyzed for use as a small animal and potentially human cardiac imaging modality for a radiopharmaceutical developed at Bristol-Myers Squibb Medical Imaging Company, North Billerica, MA, for diagnosing vulnerable plaques. If left untreated, these plaques may rupture, causing sudden, unexpected coronary occlusion and death. The results of this research indicated that imaging and reconstructing with this new partial 3D algorithm improved

  8. Two-layer and Adaptive Entropy Coding Algorithms for H.264-based Lossless Image Coding

    DTIC Science & Technology

    2008-04-01

    adaptive binary arithmetic coding (CABAC) [7], and context-based adaptive variable length coding (CAVLC) [3], should be adaptively adopted for advancing...Sep. 2006. [7] H. Schwarz, D. Marpe and T. Wiegand, Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard, IEEE

  9. Fractals in art and nature: why do we like them?

    NASA Astrophysics Data System (ADS)

    Spehar, Branka; Taylor, Richard P.

    2013-03-01

    Fractals have experienced considerable success in quantifying the visual complexity exhibited by many natural patterns, and continue to capture the imagination of scientists and artists alike. Fractal patterns have also been noted for their aesthetic appeal, a suggestion further reinforced by the discovery that the poured patterns of the American abstract painter Jackson Pollock are also fractal, together with the findings that many forms of art resemble natural scenes in showing scale-invariant, fractal-like properties. While some have suggested that fractal-like patterns are inherently pleasing because they resemble natural patterns and scenes, the relation between the visual characteristics of fractals and their aesthetic appeal remains unclear. Motivated by our previous findings that humans display a consistent preference for a certain range of fractal dimension across fractal images of various types, we turn to scale-specific processing of visual information to understand this relationship. Whereas our previous preference studies focused on fractal images consisting of black shapes on white backgrounds, here we extend our investigations to include grayscale images in which the intensity variations exhibit scale invariance. This scale invariance is generated using a 1/f frequency distribution and can be tuned by varying the slope of the rotationally averaged Fourier amplitude spectrum. Thresholding the intensity of these images generates black and white fractals with equivalent scaling properties to the original grayscale images, allowing a direct comparison of preferences for grayscale and black and white fractals. We found no significant differences in preferences between the two groups of fractals. For both sets of images, the visual preference peaked for images with amplitude spectrum slopes from 1.25 to 1.5, thus confirming and extending the previously observed relationship between fractal characteristics of images and visual preference.
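
    The two image manipulations described above, measuring the rotationally averaged amplitude-spectrum slope and thresholding a grayscale image into a black-and-white fractal, can be sketched in Python as follows; the radial binning, fitting range and median threshold are illustrative choices.

      import numpy as np

      def amplitude_spectrum_slope(img):
          """Slope alpha such that the radially averaged amplitude ~ 1/f**alpha."""
          F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
          amp = np.abs(F)
          h, w = img.shape
          y, x = np.indices((h, w))
          r = np.hypot(y - h // 2, x - w // 2).astype(int)
          counts = np.maximum(np.bincount(r.ravel()), 1)
          radial = np.bincount(r.ravel(), amp.ravel()) / counts
          f = np.arange(1, min(h, w) // 2)        # skip DC, stay below Nyquist
          slope, _ = np.polyfit(np.log(f), np.log(radial[f]), 1)
          return -slope

      def threshold_to_bw(img):
          """Black-and-white counterpart with the same geometric structure."""
          return (img > np.median(img)).astype(np.uint8)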

  10. Video rate spectral imaging using a coded aperture snapshot spectral imager.

    PubMed

    Wagadarikar, Ashwin A; Pitsianis, Nikos P; Sun, Xiaobai; Brady, David J

    2009-04-13

    We have previously reported on coded aperture snapshot spectral imagers (CASSI) that can capture a full frame spectral image in a snapshot. Here we describe the use of CASSI for spectral imaging of a dynamic scene at video rate. We describe significant advances in the design of the optical system, system calibration procedures and reconstruction method. The new optical system uses a double Amici prism to achieve an in-line, direct view configuration, resulting in a substantial improvement in image quality. We describe NeAREst, an algorithm for estimating the instantaneous three-dimensional spatio-spectral data cube from CASSI's two-dimensional array of encoded and compressed measurements. We utilize CASSI's snapshot ability to demonstrate a spectral image video of multi-colored candles with live flames captured at 30 frames per second.

  11. Quantitative Characterization of Super-Resolution Infrared Imaging Based on Time-Varying Focal Plane Coding

    NASA Astrophysics Data System (ADS)

    Wang, X.; Yuan, Y.; Zhang, J.; Chen, Y.; Cheng, Y.

    2014-10-01

    High-resolution imagery has long been the goal of infrared imaging systems. In this paper, a super-resolution infrared imaging method using a time-varying coded mask is proposed, based on focal plane coding and compressed sensing theory. The basic idea of this method is to place a coded mask on the focal plane of the optical system so that the same scene can be sampled many times under a time-varying coding strategy; the super-resolution image is then reconstructed by a sparse optimization algorithm. The simulation results are quantitatively evaluated using the Peak Signal-to-Noise Ratio (PSNR) and the Modulation Transfer Function (MTF), which illustrate the effect of the compressed measurement coefficient r and the coded mask resolution m on the reconstructed image quality. The results show that the proposed method improves infrared imaging quality effectively, which will be helpful for the practical design of new types of high-resolution infrared imaging systems.

  12. A Probabilistic Analysis of Sparse Coded Feature Pooling and Its Application for Image Retrieval

    PubMed Central

    Zhang, Yunchao; Chen, Jing; Huang, Xiujie; Wang, Yongtian

    2015-01-01

    Feature coding and pooling, as key components of image retrieval, have been widely studied over the past several years. Recently, sparse coding with max pooling has been regarded as the state of the art for image classification. However, there is no comprehensive study concerning the application of sparse coding to image retrieval. In this paper, we first analyze the effects of different sampling strategies for image retrieval, then we discuss feature pooling strategies and their effect on image retrieval performance with a probabilistic explanation in the context of the sparse coding framework, and propose a modified sum pooling procedure which can improve the retrieval accuracy significantly. Further, we apply the sparse coding method to aggregate multiple types of features for large-scale image retrieval. Extensive experiments on commonly used evaluation datasets demonstrate that our final compact image representation improves the retrieval accuracy significantly. PMID:26132080
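
    A toy Python sketch of pooling over sparse codes is shown below: given a matrix of sparse activations of shape (num_descriptors, codebook_size), max pooling keeps the strongest activation per codeword while sum pooling accumulates them; the paper's probabilistically motivated modification of sum pooling is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)
      # synthetic sparse codes: 500 local descriptors, 1024-word codebook
      codes = rng.random((500, 1024)) * (rng.random((500, 1024)) < 0.02)

      max_pooled = codes.max(axis=0)                    # classification-style pooling
      sum_pooled = codes.sum(axis=0)                    # retrieval-oriented pooling
      sum_pooled /= np.linalg.norm(sum_pooled) + 1e-12  # L2-normalized image signature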

  13. Novel joint source-channel coding for wireless transmission of radiography images.

    PubMed

    Watanabe, Katsuhiro; Takizawa, Kenichi; Ikegami, Tetsushi

    2010-01-01

    A wireless technology is required to realize robust transmission of medical images, such as radiography images, over noisy environments. The use of an error correction technique is essential for realizing such reliable communication, in which a suitable channel coding is introduced to correct erroneous bits caused by passing through a noisy channel. However, the use of a channel code decreases efficiency because redundancy bits are transmitted along with the information bits. This paper presents a joint source-channel coding scheme that maintains channel efficiency during transmission of medical images such as radiography images. As test medical images, we use typical radiography images in this paper. The joint coding technique exploits correlations between pixels of the radiography image. The results show that the proposed joint coding provides the capability to correct erroneous bits without increasing the redundancy of the codeword.

  14. Dynamics of fractal networks

    NASA Astrophysics Data System (ADS)

    Orbach, R.

    1986-02-01

    Random structures often exhibit fractal geometry, defined in terms of the mass scaling exponent, D, the fractal dimension. The vibrational dynamics of fractal networks are expressed in terms of the exponent d double bar, the fracton dimensionality. The eigenstates on a fractal network are spatially localized for d double bar less than or equal to 2. The implications of fractal geometry are discussed for thermal transport on fractal networks. The electron-fracton interaction is developed, with a brief outline given for the time dependence of the electronic relaxation on fractal networks. It is suggested that amorphous or glassy materials may exhibit fractal properties at short length scales or, equivalently, at high energies. The calculations of physical properties can be used to test the fractal character of the vibrational excitations in these materials.

  15. Bone fractal analysis.

    PubMed

    Feltrin, Gian Pietro; Stramare, Roberto; Miotto, Diego; Giacomini, Dario; Saccavini, Claudio

    2004-06-01

    Fractal analysis is a quantitative method used to evaluate complex anatomic findings in terms of their elementary components. Its application to biologic images, particularly to cancellous bone, has been widely practiced within the past few years. The aims of these applications are to assess changes in bone and the loss of spongious architecture, to indicate bone fragility, and to show the increased risk for fracture in primary or secondary osteoporosis. The applications are very promising as a complement to studies that define bone density (bone mineral density by dual-energy x-ray absorptiometry or quantitative computed tomography), and they also have the capacity to distinguish patients with a high or low risk for fracture. Their extension to the clinical field, to define a test for fracture risk, is still limited by the difficulty of applying them to quantitative medical imaging of bone, with reliable application to superficial bones but unreliable application to deep bones. The future evolution and validity do not depend upon fractal methods but upon well-detailed imaging of the bones in clinical conditions.

  16. The validity of ICD codes coupled with imaging procedure codes for identifying acute venous thromboembolism using administrative data.

    PubMed

    Alotaibi, Ghazi S; Wu, Cynthia; Senthilselvan, Ambikaipakan; McMurtry, M Sean

    2015-08-01

    The purpose of this study was to evaluate the accuracy of using a combination of International Classification of Diseases (ICD) diagnostic codes and imaging procedure codes for identifying deep vein thrombosis (DVT) and pulmonary embolism (PE) within administrative databases. Information from the Alberta Health (AH) inpatients and ambulatory care administrative databases in Alberta, Canada was obtained for subjects with a documented imaging study result performed at a large teaching hospital in Alberta to exclude venous thromboembolism (VTE) between 2000 and 2010. In 1361 randomly-selected patients, the proportion of patients correctly classified by AH administrative data, using both ICD diagnostic codes and procedure codes, was determined for DVT and PE using diagnoses documented in patient charts as the gold standard. Of the 1361 patients, 712 had suspected PE and 649 had suspected DVT. The sensitivities for identifying patients with PE or DVT using administrative data were 74.83% (95% confidence interval [CI]: 67.01-81.62) and 75.24% (95% CI: 65.86-83.14), respectively. The specificities for PE or DVT were 91.86% (95% CI: 89.29-93.98) and 95.77% (95% CI: 93.72-97.30), respectively. In conclusion, when coupled with relevant imaging codes, VTE diagnostic codes obtained from administrative data provide a relatively sensitive and very specific method to ascertain acute VTE.

  17. Experiments in the use of fractal in computer pattern recognition

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz A.

    1993-10-01

    The results of a study on the use of fractals for the automatic detection of man-made objects in infrared (IR) and millimeter wave (MMW) radar imagery are discussed in this paper. The fractal technique that is used is based on estimating the fractal dimensions of sequential blocks of an image of a scene and then slicing the histogram of the computed fractal dimensions. The fractal dimension is computed by a Fourier regression approach. The technique is shown to be effective for the detection of tactical military vehicles in IR imagery, and for the detection of airport attributes in MMW radar imagery.

  18. Fractal antenna and fractal resonator primer

    NASA Astrophysics Data System (ADS)

    Cohen, Nathan

    2015-03-01

    Self-similarity and fractals have opened new and important avenues for antenna and electronic solutions over the last 25 years. This primer provides an introduction to the benefits provided by fractal geometry in antennas, resonators, and related structures. Such benefits include, among many, wider bandwidths, smaller sizes, part-less electronic components, and better performance. Fractals also provide a new generation of optimized design tools, first used successfully in antennas but applicable in a general fashion.

  19. A blind dual color images watermarking based on IWT and state coding

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a state-coding based blind watermarking algorithm is proposed to embed a color image watermark into a color host image. The technique of state coding, which makes the state code of a data set equal to the hidden watermark information, is introduced in this paper. When embedding the watermark, using the Integer Wavelet Transform (IWT) and the rules of state coding, the R, G and B components of the color image watermark are embedded into the Y, Cr and Cb components of the color host image. Moreover, the rules of state coding are also used to extract the watermark from the watermarked image without resorting to the original watermark or the original host image. Experimental results show that the proposed watermarking algorithm not only meets the demands of invisibility and robustness of the watermark, but also performs well compared with the other methods considered in this work.

  20. Image sequence coding using 3D scene models

    NASA Astrophysics Data System (ADS)

    Girod, Bernd

    1994-09-01

    The implicit and explicit use of 3D models for image sequence coding is discussed. For implicit use, a 3D model can be incorporated into motion compensating prediction. A scheme that estimates the displacement vector field with a rigid body motion constraint, by recovering epipolar lines from an unconstrained displacement estimate and then repeating block matching along the epipolar line, is proposed. Experimental results show that an improved displacement vector field can be obtained with a rigid body motion constraint. As an example of explicit use, various results with a facial animation model for videotelephony are discussed. A 13 × 16 B-spline mask can be adapted automatically to individual faces and is used to generate facial expressions based on FACS. A depth-from-defocus range camera suitable for real-time facial motion tracking is described. Finally, the real-time facial animation system `Traugott' is presented, which has been used to generate several hours of broadcast video. Experiments suggest that a videophone system based on facial animation might require a transmission bitrate of 1 kbit/s or below.

  1. A new balanced modulation code for a phase-image-based holographic data storage system

    NASA Astrophysics Data System (ADS)

    John, Renu; Joseph, Joby; Singh, Kehar

    2005-08-01

    We propose a new balanced modulation code for coding data pages for phase-image-based holographic data storage systems. The new code addresses the coding subtleties associated with phase-based systems while performing a content-based search in a holographic database. The new code, which is a balanced modulation code, is a modification of the existing 8:12 modulation code, and removes the false hits that occur in phase-based content-addressable systems due to phase-pixel subtractions. We demonstrate the better performance of the new code using simulations and experiments in terms of discrimination ratio while content addressing through a holographic memory. The new code is compared with the conventional coding scheme to analyse the false hits due to subtraction of phase pixels.

  2. Skin cancer texture analysis of OCT images based on Haralick, fractal dimension, Markov random field features, and the complex directional field features

    NASA Astrophysics Data System (ADS)

    Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.; Khramov, Alexander G.

    2016-10-01

    In this paper, we report on our examination of the validity of OCT in identifying changes using a skin cancer texture analysis compiled from Haralick texture features, fractal dimension, the Markov random field method, and complex directional field features from different tissues. The described features have been used to detect specific spatial characteristics which can differentiate healthy tissue from diverse skin cancers in cross-section OCT images (B- and/or C-scans). In this work, we used an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in OCT images. The Haralick texture features contrast, correlation, energy, and homogeneity have been calculated in various directions. A box-counting method is used to evaluate the fractal dimension of skin probes. Markov random fields have been used to enhance the quality of the classification. Additionally, we used the complex directional field, calculated by the local gradient methodology, to increase the assessment quality of the diagnostic method. Our results demonstrate that these texture features may provide helpful information to discriminate tumor from healthy tissue. The experimental data set contains 488 OCT images of normal skin and tumors such as Basal Cell Carcinoma (BCC), Malignant Melanoma (MM) and Nevus. All images were acquired with our laboratory SD-OCT setup based on a broadband light source delivering an output power of 20 mW at a central wavelength of 840 nm with a bandwidth of 25 nm. We obtained a sensitivity of about 97% and a specificity of about 73% for the task of discrimination between MM and Nevus.
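
    Two of the descriptors mentioned above can be sketched in Python: GLCM (Haralick) statistics via scikit-image and a plain box-counting dimension of a binarized B-scan; the distances, angles and binarization are illustrative assumptions, not the study's settings.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def haralick_features(img_u8):
          glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          return {p: graycoprops(glcm, p).mean()
                  for p in ("contrast", "correlation", "energy", "homogeneity")}

      def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
          counts = []
          for s in sizes:
              h, w = binary.shape
              grid = binary[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
              counts.append(grid.any(axis=(1, 3)).sum())   # occupied boxes at size s
          slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
          return slope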

  3. A novel approach to correct the coded aperture misalignment for fast neutron imaging

    SciTech Connect

    Zhang, F. N.; Hu, H. S. Wang, D. M.; Jia, J.; Zhang, T. K.; Jia, Q. G.

    2015-12-15

    Aperture alignment is crucial for the diagnosis of neutron imaging because it has a significant impact on the coded imaging and on the understanding of the neutron source. In our previous studies on a neutron imaging system with a coded aperture for a large field of view, a “residual watermark,” i.e., certain extra information that overlies the reconstructed image and has nothing to do with the source, is discovered if peak normalization is employed in the genetic algorithms (GA) used to reconstruct the source image. Some studies on the basic properties of the residual watermark indicate that it can characterize the coded aperture and can thus be used to determine the location of the coded aperture relative to the system axis. In this paper, we have further analyzed the essential conditions for the existence of the residual watermark and the requirements on the reconstruction algorithm for the emergence of the residual watermark. A gamma coded-imaging experiment has been performed to verify the existence of the residual watermark. Based on the residual watermark, a correction method for the aperture misalignment has been studied. A multiple linear regression model relating the position of the coded aperture axis, the position of the residual watermark center, and the gray barycenter of the neutron source has been set up with twenty training samples. Using the regression model and verification samples, we have found the position of the coded aperture axis relative to the system axis with an accuracy of approximately 20 μm. In conclusion, a novel approach has been established to correct the coded aperture misalignment for fast neutron coded imaging.

  4. Fractal Analysis Of Colors And Shapes For Natural And Urbanscapes

    NASA Astrophysics Data System (ADS)

    Wang, J.; Ogawa, S.

    2015-04-01

    Fractal analysis has been applied in many fields since it was proposed by Mandelbrot in 1967. The fractal dimension is a basic parameter of fractal analysis. From the difference in fractal dimensions of images, natural landscapes and urbanscapes can be differentiated, which is of great significance. In this paper, two methods were used on two types of landscape images to discuss the difference between natural landscapes and urbanscapes. Traditionally, a box-counting method is adopted to evaluate the shape of grayscale images. On the other hand, for the spatial distributions of RGB values in images, the fractional Brownian motion (fBm) model was employed to calculate the fractal dimensions of colour images for the two types of landscape images. From the results, the fractal dimensions of natural landscape images were lower than those of urbanscapes for both grayscale images and colour images with the two types of methods. Moreover, the spatial distributions of RGB values in the images were clearly related to the fractal dimensions. The results indicated that there was an obvious difference (about 0.09) between the fractal dimensions of the two kinds of landscapes. It is worth mentioning that when the correlation coefficient in the semivariogram is 0, the fractal dimension is 2, which means that when the RGB values are completely random with respect to their locations in the colour image, the fractal dimension becomes 3. The two kinds of fractal dimensions could evaluate the shape and the color distributions of landscapes and discriminate natural landscapes from urbanscapes clearly.
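
    The fBm/semivariogram route to a fractal dimension can be sketched in Python for one image channel: for fractional Brownian motion the semivariogram grows as h**(2H), and the surface dimension follows as D = 3 - H. The lag range is an illustrative choice and the paper's exact estimator is not reproduced.

      import numpy as np

      def variogram_dimension(z, lags=(1, 2, 4, 8, 16)):
          """z: 2D array (grayscale image or a single RGB channel)."""
          gammas = []
          for h in lags:
              d = z[:, h:] - z[:, :-h]                 # horizontal increments at lag h
              gammas.append(0.5 * np.mean(d.astype(float) ** 2))
          slope, _ = np.polyfit(np.log(lags), np.log(gammas), 1)
          H = np.clip(slope / 2.0, 0.0, 1.0)           # Hurst exponent of the fBm model
          return 3.0 - H                               # surface fractal dimension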

  5. QR code based noise-free optical encryption and decryption of a gray scale image

    NASA Astrophysics Data System (ADS)

    Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-03-01

    In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.

  6. Fractal nematic colloids

    PubMed Central

    Hashemi, S. M.; Jagodič, U.; Mozaffari, M. R.; Ejtehadi, M. R.; Muševič, I.; Ravnik, M.

    2017-01-01

    Fractals are remarkable examples of self-similarity where a structure or dynamic pattern is repeated over multiple spatial or time scales. However, little is known about how fractal stimuli such as fractal surfaces interact with their local environment if it exhibits order. Here we show geometry-induced formation of fractal defect states in Koch nematic colloids, exhibiting fractal self-similarity better than 90% over three orders of magnitude in the length scales, from micrometers to nanometres. We produce polymer Koch-shaped hollow colloidal prisms of three successive fractal iterations by direct laser writing, and characterize their coupling with the nematic by polarization microscopy and numerical modelling. Explicit generation of topological defect pairs is found, with the number of defects following exponential-law dependence and reaching few 100 already at fractal iteration four. This work demonstrates a route for generation of fractal topological defect states in responsive soft matter. PMID:28117325

  7. Fractals for physicians.

    PubMed

    Thamrin, Cindy; Stern, Georgette; Frey, Urs

    2010-06-01

    There is increasing interest in the study of fractals in medicine. In this review, we provide an overview of fractals, of techniques available to describe fractals in physiological data, and we propose some reasons why a physician might benefit from an understanding of fractals and fractal analysis, with an emphasis on paediatric respiratory medicine where possible. Among these reasons are the ubiquity of fractal organisation in nature and in the body, and how changes in this organisation over the lifespan provide insight into development and senescence. Fractal properties have also been shown to be altered in disease and even to predict the risk of worsening of disease. Finally, implications of a fractal organisation include robustness to errors during development, ability to adapt to surroundings, and the restoration of such organisation as targets for intervention and treatment. Copyright 2010 Elsevier Ltd. All rights reserved.

  8. Chaos, Fractals, and Polynomials.

    ERIC Educational Resources Information Center

    Tylee, J. Louis; Tylee, Thomas B.

    1996-01-01

    Discusses chaos theory; linear algebraic equations and the numerical solution of polynomials, including the use of the Newton-Raphson technique to find polynomial roots; fractals; search region and coordinate systems; convergence; and generating color fractals on a computer. (LRW)
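
    A classroom-style Python sketch of the "color fractals from polynomial roots" idea is shown below: each point of the complex plane is colored by the root of z**3 - 1 to which the Newton-Raphson iteration converges; the grid extent and iteration count are illustrative.

      import numpy as np

      def newton_fractal(n=400, iters=40):
          x = np.linspace(-1.5, 1.5, n)
          z = x[None, :] + 1j * x[:, None]
          roots = np.exp(2j * np.pi * np.arange(3) / 3)   # cube roots of unity
          for _ in range(iters):
              z = z - (z**3 - 1) / (3 * z**2)             # Newton step for f(z) = z**3 - 1
          # label each pixel by the nearest root (0, 1 or 2) to color its basin
          return np.argmin(np.abs(z[..., None] - roots), axis=-1)

      basins = newton_fractal()   # e.g. matplotlib's imshow(basins) displays the fractal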

  9. Fractal nematic colloids.

    PubMed

    Hashemi, S M; Jagodič, U; Mozaffari, M R; Ejtehadi, M R; Muševič, I; Ravnik, M

    2017-01-24

    Fractals are remarkable examples of self-similarity where a structure or dynamic pattern is repeated over multiple spatial or time scales. However, little is known about how fractal stimuli such as fractal surfaces interact with their local environment if it exhibits order. Here we show geometry-induced formation of fractal defect states in Koch nematic colloids, exhibiting fractal self-similarity better than 90% over three orders of magnitude in the length scales, from micrometers to nanometres. We produce polymer Koch-shaped hollow colloidal prisms of three successive fractal iterations by direct laser writing, and characterize their coupling with the nematic by polarization microscopy and numerical modelling. Explicit generation of topological defect pairs is found, with the number of defects following exponential-law dependence and reaching few 100 already at fractal iteration four. This work demonstrates a route for generation of fractal topological defect states in responsive soft matter.

  12. Local connected fractal dimensions and lacunarity analyses of 60 degrees fluorescein angiograms.

    PubMed

    Landini, G; Murray, P I; Misson, G P

    1995-12-01

    The retinal vascular tree exhibits fractal characteristics. These findings relate to the mechanisms involved in the vascularization process and to the objective morphologic characterization of retinal vessels using fractal analysis. Although normal retinas show uniform patterns of blood vessels, in pathologic retinas with central vein or artery occlusions, the patterns are irregular. Because the generalized box fractal dimension fails to differentiate successfully between normal and abnormal retinal vessels in 60 degrees fluorescein angiograms, the authors have further investigated this problem using the local connected fractal dimension (alpha). The authors studied 24 digitized 60 degrees fluorescein angiograms of patients with normal retinas and 5 angiograms of patients with central retinal vein or artery occlusion. The pointwise method estimated the local complexity of the angiogram within a finite window centered on those pixels that belong to the retinal vessels. Color-coded dimensional images of the angiograms were constructed by plotting the pixels forming the object with a color that corresponded to specific values of alpha +/- delta alpha. The color-coded representation allowed recognition of areas with increased or decreased local angiogram complexity. The alpha distributions showed differences between normal and pathologic retinas, which overcomes problems encountered when using the methods of calculating the generalized fractal dimensions. A multivariate linear discriminant function using parameters from the alpha distribution and a further fractal parameter--lacunarity--reclassified 23 of the 24 normal and 4 of the 5 pathologic angiograms in their original groups (total: 92.1% correct). This methodology may be used for automatic detection and objective characterization of local retinal vessel abnormalities.

  13. On dependent bit allocation for multiview image coding with depth-image-based rendering.

    PubMed

    Cheung, Gene; Velisavljević, Vladan; Ortega, Antonio

    2011-11-01

    The encoding of both texture and depth maps of multiview images, captured by a set of spatially correlated cameras, is important for any 3-D visual communication system based on depth-image-based rendering (DIBR). In this paper, we address the problem of efficient bit allocation among texture and depth maps of multiview images. More specifically, suppose we are given a coding tool to encode texture and depth maps at the encoder and a view-synthesis tool to construct intermediate views at the decoder using neighboring encoded texture and depth maps. Our goal is to determine how to best select captured views for encoding and distribute available bits among texture and depth maps of selected coded views, such that the visual distortion of desired constructed views is minimized. First, in order to obtain at the encoder a low complexity estimate of the visual quality of a large number of desired synthesized views, we derive a cubic distortion model based on basic DIBR properties, whose parameters are obtained using only a small number of viewpoint samples. Then, we demonstrate that the optimal selection of coded views and quantization levels for corresponding texture and depth maps is equivalent to the shortest path in a specially constructed 3-D trellis. Finally, we show that, using the assumptions of monotonicity in the predictor's quantization level and distance, suboptimal solutions can be efficiently pruned from the feasible space during solution search. Experiments show that our proposed efficient selection of coded views and quantization levels for corresponding texture and depth maps outperforms an alternative scheme using constant quantization levels for all maps (commonly used in video standard implementations) by up to 1.5 dB. Moreover, the complexity of our scheme can be reduced by at least 80% over the full solution search.

  14. Study on GEANT4 code applications to dose calculation using imaging data

    NASA Astrophysics Data System (ADS)

    Lee, Jeong Ok; Kang, Jeong Ku; Kim, Jhin Kee; Kwon, Hyeong Cheol; Kim, Jung Soo; Kim, Bu Gil; Jeong, Dong Hyeok

    2015-07-01

    The use of the GEANT4 code has increased in the medical field. Various studies have calculated patient dose distributions by using the GEANT4 code with imaging data. In the present study, Monte Carlo simulations based on DICOM data were performed to calculate the dose absorbed in the patient's body. Various visualization tools are installed in the GEANT4 code to display the detector construction; however, the display of DICOM images is limited. In addition, displaying the dose distributions on the imaging data of the patient is difficult. Recently, the gMocren code, a volume visualization tool for GEANT4 simulation, was developed and has been used for volume visualization of image files. In this study, imaging based on the dose distributions absorbed in the patient was performed by using the gMocren code. Dosimetric evaluations were carried out by using thermoluminescent dosimeters and film dosimetry to verify the calculated results.

  15. Fractal interpretation of intermittency

    SciTech Connect

    Hwa, R.C.

    1991-12-01

    Implication of intermittency in high-energy collisions is first discussed. Then follows a description of the fractal interpretation of intermittency. A basic quantity with asymptotic fractal behavior is introduced. It is then shown how the factorial moments and the G moments can be expressed in terms of it. The relationship between the intermittency indices and the fractal indices is made explicit.

  16. Fractals in the Classroom

    ERIC Educational Resources Information Center

    Fraboni, Michael; Moller, Trisha

    2008-01-01

    Fractal geometry offers teachers great flexibility: It can be adapted to the level of the audience or to time constraints. Although easily explained, fractal geometry leads to rich and interesting mathematical complexities. In this article, the authors describe fractal geometry, explain the process of iteration, and provide a sample exercise.…

  18. Generalized fragmentation functions for fractal jet observables

    NASA Astrophysics Data System (ADS)

    Elder, Benjamin T.; Procura, Massimiliano; Thaler, Jesse; Waalewijn, Wouter J.; Zhou, Kevin

    2017-06-01

    We introduce a broad class of fractal jet observables that recursively probe the collective properties of hadrons produced in jet fragmentation. To describe these collinear-unsafe observables, we generalize the formalism of fragmentation functions, which are important objects in QCD for calculating cross sections involving identified final-state hadrons. Fragmentation functions are fundamentally nonperturbative, but have a calculable renormalization group evolution. Unlike ordinary fragmentation functions, generalized fragmentation functions exhibit nonlinear evolution, since fractal observables involve correlated subsets of hadrons within a jet. Some special cases of generalized fragmentation functions are reviewed, including jet charge and track functions. We then consider fractal jet observables that are based on hierarchical clustering trees, where the nonlinear evolution equations also exhibit tree-like structure at leading order. We develop a numeric code for performing this evolution and study its phenomenological implications. As an application, we present examples of fractal jet observables that are useful in discriminating quark jets from gluon jets.

  19. Snapshot 2D tomography via coded aperture x-ray scatter imaging

    PubMed Central

    MacCabe, Kenneth P.; Holmgren, Andrew D.; Tornai, Martin P.; Brady, David J.

    2015-01-01

    This paper describes a fan beam coded aperture x-ray scatter imaging system which acquires a tomographic image from each snapshot. This technique exploits cylindrical symmetry of the scattering cross section to avoid the scanning motion typically required by projection tomography. We use a coded aperture with a harmonic dependence to determine range, and a shift code to determine cross-range. Here we use a forward-scatter configuration to image 2D objects and use serial exposures to acquire tomographic video of motion within a plane. Our reconstruction algorithm also estimates the angular dependence of the scattered radiance, a step toward materials imaging and identification. PMID:23842254

  20. Fractal-based wideband invisibility cloak

    NASA Astrophysics Data System (ADS)

    Cohen, Nathan; Okoro, Obinna; Earle, Dan; Salkind, Phil; Unger, Barry; Yen, Sean; McHugh, Daniel; Polterzycki, Stefan; Shelman-Cohen, A. J.

    2015-03-01

    A wideband invisibility cloak (IC) at microwave frequencies is described. Using fractal resonators in closely spaced (sub-wavelength) arrays as a minimal number of cylindrical layers (rings), the IC demonstrates that it is physically possible to attain a `see through' cloaking device with: (a) wideband coverage; (b) simple and attainable fabrication; (c) high fidelity emulation of the free path; (d) minimal side scattering; (e) a near absence of shadowing in the scattering. Although not a practical device, this fractal-enabled technology demonstrator opens up new opportunities for diverted-image (DI) technology and the use of fractals in wideband optical, infrared, and microwave applications.

  1. Characterizing Hyperspectral Imagery (AVIRIS) Using Fractal Technique

    NASA Technical Reports Server (NTRS)

    Qiu, Hong-Lie; Lam, Nina Siu-Ngan; Quattrochi, Dale

    1997-01-01

    With the rapid increase in hyperspectral data acquired by various experimental hyperspectral imaging sensors, it is necessary to develop efficient and innovative tools to handle and analyze these data. The objective of this study is to seek effective spatial analytical tools for summarizing the spatial patterns of hyperspectral imaging data. In this paper, we (1) examine how the fractal dimension D changes across spectral bands of hyperspectral imaging data and (2) determine the relationships between fractal dimension and image content. It has been documented that fractal dimension changes across spectral bands for Landsat-TM data and that its value (D) is largely a function of the complexity of the landscape under study. Newly available hyperspectral imaging data, such as that from the Airborne Visible Infrared Imaging Spectrometer (AVIRIS), which has 224 bands, cover a wider spectral range with a much finer spectral resolution. Our preliminary results show that the fractal dimension values of AVIRIS scenes from the Santa Monica Mountains in California vary between 2.25 and 2.99. However, high fractal dimension values (D > 2.8) are found only in spectral bands with a high noise level, while bands with good image quality have a fairly stable dimension value (D = 2.5 - 2.6). This suggests that D can also be used as a summary statistic to represent the image quality or content of spectral bands.

  3. A Spherical Active Coded Aperture for 4π Gamma-ray Imaging

    DOE PAGES

    Hellfeld, Daniel; Barton, Paul; Gunter, Donald; ...

    2017-09-22

    Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. However, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. In order to improve upon this limitation, we introduce a novel design by rearranging the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm³ CdZnTe coplanar grid detectors on a 14 cm diameter sphere with 192 available detector locations.

  4. Coded illumination for motion-blur free imaging of cells on cell-phone based imaging flow cytometer

    NASA Astrophysics Data System (ADS)

    Saxena, Manish; Gorthi, Sai Siva

    2014-10-01

    Cell-phone based imaging flow cytometry can be realized by flowing cells through microfluidic devices and capturing their images with an optically enhanced camera of the cell-phone. Throughput in flow cytometers is usually enhanced by increasing the flow rate of cells. However, the maximum frame rate of the camera system limits the achievable flow rate. Beyond this limit, the images become highly blurred due to motion smear. We propose to address this issue with coded illumination, which enables recovery of high-fidelity images of cells far beyond their motion-blur limit. This paper presents simulation results of deblurring synthetically generated cell/bead images under such coded illumination.

  5. Multiplexing of encrypted data using fractal masks.

    PubMed

    Barrera, John F; Tebaldi, Myrian; Amaya, Dafne; Furlan, Walter D; Monsoriu, Juan A; Bolognini, Néstor; Torroba, Roberto

    2012-07-15

    In this Letter, we present to the best of our knowledge a new all-optical technique for multiple-image encryption and multiplexing, based on fractal encrypting masks. The optical architecture is a joint transform correlator. The multiplexed encrypted data are stored in a photorefractive crystal. The fractal parameters of the key can be easily tuned to lead to a multiplexing operation without cross talk effects. Experimental results that support the potential of the method are presented.

  6. A comparison of the fractal and JPEG algorithms

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Shahshahani, M.

    1991-01-01

    A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root mean square criterion and a peak signal to noise criterion.
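
    The two criteria used in the comparison can be sketched in Python for 8-bit images (a 255 peak value is assumed):

      import numpy as np

      def rmse(a, b):
          return np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2))

      def psnr(a, b, peak=255.0):
          e = rmse(a, b)
          return float("inf") if e == 0 else 20.0 * np.log10(peak / e)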

  7. Joint source-channel coding: secured and progressive transmission of compressed medical images on the Internet.

    PubMed

    Babel, Marie; Parrein, Benoît; Déforges, Olivier; Normand, Nicolas; Guédon, Jean-Pierre; Coat, Véronique

    2008-06-01

    The joint source-channel coding system proposed in this paper has two aims: lossless compression with a progressive mode and the integrity of medical data, which takes into account the priorities of the image and the properties of a network with no guaranteed quality of service. In this context, the use of scalable coding, locally adapted resolution (LAR) and a discrete and exact Radon transform, known as the Mojette transform, meets this twofold requirement. In this paper, details of this joint coding implementation are provided as well as a performance evaluation with respect to the reference CALIC coding and to unequal error protection using Reed-Solomon codes.

  8. Reduction of artefacts and noise for a wavefront coding athermalized infrared imaging system

    NASA Astrophysics Data System (ADS)

    Feng, Bin; Zhang, Xiaodong; Shi, Zelin; Xu, Baoshu; Zhang, Chengshuo

    2016-07-01

    Because of obvious drawbacks, including serious artefacts and noise in the decoded image, existing wavefront coding infrared imaging systems are seriously restricted in application. The proposed ultra-precision diamond machining technique manufactures an optical phase mask with form manufacturing errors of approximately 770 nm and a surface roughness value Ra of 5.44 nm. The proposed decoding method outperforms the classical Wiener filtering method in three indices: mean square error, mean structural similarity index, and noise equivalent temperature difference. Based on the results mentioned above and the basic principle of the wavefront coding technique, this paper further develops a wavefront coding infrared imaging system. Experimental results prove that our wavefront coding infrared imaging system yields a decoded image with good quality over a temperature range from -40 °C to +70 °C.
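
    The Wiener-filtering baseline that the proposed decoder is compared against can be sketched in Python with scikit-image; the point spread function and regularization balance are assumed inputs, and this is not the authors' decoding method.

      from skimage.restoration import wiener

      def wiener_decode(coded_img, psf, balance=0.1):
          """Classical Wiener deconvolution of a wavefront-coded (blurred) image."""
          return wiener(coded_img.astype(float), psf, balance)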

  9. Quantitative evaluation of midpalatal suture maturation via fractal analysis

    PubMed Central

    Kwak, Kyoung Ho; Kim, Yong-Il; Kim, Yong-Deok

    2016-01-01

    Objective: The purpose of this study was to determine whether the results of fractal analysis can be used as criteria for midpalatal suture maturation evaluation. Methods: The study included 131 subjects over 18 years of age (range, 18.1–53.4 years) who underwent cone-beam computed tomography. Skeletonized images of the midpalatal suture were obtained via image processing software and used to calculate fractal dimensions. Correlations between maturation stage and fractal dimension were calculated using Spearman's correlation coefficient. Optimal fractal dimension cut-off values were determined using a receiver operating characteristic curve. Results: The distribution of maturation stages of the midpalatal suture according to the cervical vertebrae maturation index was highly variable, and there was a strong negative correlation between maturation stage and fractal dimension (−0.623, p < 0.001). Fractal dimension was a statistically significant indicator of dichotomous results with regard to maturation stage (area under curve = 0.794, p < 0.001). A test in which fractal dimension was used to predict the variable that splits maturation stages into A–C versus D or E yielded an optimal fractal dimension cut-off value of 1.0235. Conclusions: There was a strong negative correlation between fractal dimension and midpalatal suture maturation. Fractal analysis is an objective quantitative method, and therefore we suggest that it may be useful for the evaluation of midpalatal suture maturation. PMID:27668195
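
    The ROC-based cut-off selection can be sketched in Python with scikit-learn and Youden's J statistic; scikit-learn is an assumed tool, not necessarily the study's software, and the class labeling follows the negative correlation reported above (higher fractal dimension for earlier stages).

      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      def optimal_cutoff(y, fd):
          """y: 1 for stages A-C, 0 for D/E; fd: fractal dimension per subject."""
          fpr, tpr, thresholds = roc_curve(y, fd)
          auc = roc_auc_score(y, fd)
          best = np.argmax(tpr - fpr)        # Youden's J = sensitivity + specificity - 1
          return thresholds[best], auc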

  10. 110 °C range athermalization of wavefront coding infrared imaging systems

    NASA Astrophysics Data System (ADS)

    Feng, Bin; Shi, Zelin; Chang, Zheng; Liu, Haizheng; Zhao, Yaohong

    2017-09-01

    Athermalization over a 110 °C range is important but difficult to achieve when designing infrared imaging systems. Our wavefront coding athermalized infrared imaging system adopts an optical phase mask with smaller manufacturing errors and a decoding method based on a shrinkage function. Qualitative experiments show that our wavefront coding athermalized infrared imaging system has three prominent merits: (1) working well over a temperature range of 110 °C; (2) extending the focal depth up to 15.2 times; (3) producing a decoded image close to its corresponding in-focus infrared image, with a mean structural similarity index (MSSIM) value greater than 0.85.

  11. Wave-front coded optical readout for the MEMS-based uncooled infrared imaging system

    NASA Astrophysics Data System (ADS)

    Li, Tian; Zhao, Yuejin; Dong, Liquan; Liu, Xiaohua; Jia, Wei; Hui, Mei; Yu, Xiaomei; Gong, Cheng; Liu, Weiyu

    2012-11-01

    In a space-limited MEMS-based infrared imaging system, adjusting the optical readout part is inconvenient. This paper proposes a wave-front coding method to extend the depth of focus/field of the optical readout system, solving the problem above and reducing the demand for precision in the machining and assembly of the optical readout system itself. The wave-front coded imaging system consists of optical coding and digital decoding. By adding a cubic phase mask (CPM) at the pupil plane, the system becomes insensitive to defocus within an extended range: its PSFs remain similar, so almost equally blurred intermediate images are obtained. Sharp images can then be recovered with image restoration algorithms that use this common PSF as the decoding kernel. For comparison, we studied a conventional optical imaging system with the same optical performance as the wave-front coded one. Analogue imaging experiments were carried out, and one PSF was used as a simple direct inverse filter for image restoration. Relatively sharp restored images were obtained, whereas the analogue defocused images of the conventional system were badly degraded. Using the decrease of the MTF as a criterion, we found that the depth of focus/field of the wave-front coding system had been extended significantly.

  12. After notes on self-similarity exponent for fractal structures

    NASA Astrophysics Data System (ADS)

    Fernández-Martínez, Manuel; Caravaca Garratón, Manuel

    2017-06-01

    Previous works have highlighted the suitability of the concept of a fractal structure, which derives from asymmetric topology, for propounding generalized definitions of fractal dimension. The aim of the present article is to collect some results and approaches that allow the self-similarity index and the fractal dimension of a broad spectrum of random processes to be connected. To do so, we use the concept of the fractal structure induced on the image set of a sample curve. The main result in this paper states that, given a sample function of a random process endowed with the induced fractal structure on its image, the self-similarity index of that function equals the inverse of its fractal dimension.

  13. [Fractal analysis in the diagnosis of breast tumors].

    PubMed

    Crişan, D A; Lesaru, M; Dobrescu, R; Vasilescu, C

    2007-01-01

    Studies made in recent years by researchers around the world show that fractal geometry is a viable alternative for image analysis. The fractal features of natural forms give fractal analysis new applications in various fields, medical imaging being a very important one. This paper aims to show that fractal dimension, as a way to characterize the complexity of a form, can be used for the diagnosis of mammographic lesions classified as BI-RADS 4 without requiring further investigations. Experiments on 30 cases classified as BI-RADS 4 confirmed that 89% of benign lesions have an average fractal dimension below the threshold of 1.4, whereas a similar percentage of malignant lesions have an average fractal dimension above that threshold.

  14. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

    Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. The proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor; hence the design of coded apertures must take saturation into account. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded or bounded with high probability. This paper proposes the design of uniform adaptive grayscale coded apertures (UAGCA) to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show that the proposed method improves image reconstruction by up to 10 dB compared with uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).

  15. A coded aperture imaging system optimized for hard X-ray and gamma ray astronomy

    NASA Technical Reports Server (NTRS)

    Gehrels, N.; Cline, T. L.; Huters, A. F.; Leventhal, M.; Maccallum, C. J.; Reber, J. D.; Stang, P. D.; Teegarden, B. J.; Tueller, J.

    1985-01-01

    A coded aperture imaging system was designed for the Gamma-Ray Imaging Spectrometer (GRIS). The system is optimized for imaging 511 keV positron-annihilation photons. For a galactic center 511-keV source strength of 0.001 sq/s, the source location accuracy is expected to be ±0.2 deg.

  16. Wavelet versus JPEG (Joint Photographic Expert Group) and fractal compression. Impact on the detection of low-contrast details in computed radiographs.

    PubMed

    Ricke, J; Maass, P; Lopez Hänninen, E; Liebig, T; Amthauer, H; Stroszczynski, C; Schauer, W; Boskamp, T; Wolf, M

    1998-08-01

    The aim of this study was to evaluate different lossy image compression algorithms in direct comparison. Computed radiographs were reviewed after compression with Wavelet, Fractal, and Joint Photographic Expert Group (JPEG) algorithms. For receiver operating characteristic (ROC) analysis, 54 thoracic computed radiographs (31 showing pulmonary nodules) were compressed at a ratio of 1:60. Five images of a test phantom were coded at 1:13. All images were reviewed on a PC. Uncompressed images were reviewed on a PC and at a radiologic workstation (with image processing). For thorax images, the decrease in diagnostic accuracy was significant with Wavelets. Fractal performed worse than Wavelets. No ROC curve could be obtained for JPEG due to poor image quality. No diagnostic loss was noted when comparing PC and workstation review. For low-contrast details of the phantom, results of Wavelet compression were equal to uncompressed images, although fewer true positives and more true negatives were noted with Wavelets. Wavelets were superior to JPEG, and JPEG images were superior to Fractal. Workstation review was superior to PC review. Only Wavelets provided accurate review of low-contrast details at a compression of 1:13. Frequency filtering of Wavelets affects contrast even at a low compression ratio. JPEG performed better than Fractal at low compression ratios and worse at high compression ratios.

  17. Iterons, fractals and computations of automata

    NASA Astrophysics Data System (ADS)

    Siwak, Paweł

    1999-03-01

    Processing of strings by some automata, when viewed on space-time (ST) diagrams, reveals characteristic soliton-like coherent periodic objects. They are inherently associated with iterations of automata mappings; thus we call them iterons. In the paper we present two classes of one-dimensional iterons: particles and filtrons. The particles are typical for parallel (cellular) processing, while filtrons, introduced in (32), are specific for serial processing of strings. In general, the images of iterated automata mappings exhibit not only coherent entities but also fractals, and quasi-periodic and chaotic dynamics. We show typical images of such computations: fractals, multiplication by a number, and addition of binary numbers defined by a Turing machine. Then, the particles are presented as iterons generated by cellular automata in three computations: B/U code conversion (13, 29), majority classification (9), and a discrete version of the FPU (Fermi-Pasta-Ulam) dynamics (7, 23). We disclose particles by a technique of combinational recoding of ST diagrams (as opposed to sequential recoding). Subsequently, we recall the recursive filters based on FCA (filter cellular automata) window operators, considered by Park (26), Ablowitz (1), Fokas (11), Fuchssteiner (12), Bruschi (5) and Jiang (20). We present the automata equivalents to these filters (33). Some of them belong to the class of filter automata introduced in (30). We also define and illustrate some properties of filtrons. Contrary to particles, filtrons interact nonlocally in the sense that distant symbols may influence one another; thus their interactions are very unusual. Some examples have been given in (32). Here we show new examples of filtron phenomena: multifiltron solitonic collisions, attracting and repelling filtrons, trapped bouncing filtrons (which behave like a resonance cavity) and quasi filtrons.

  18. Fast synchronization recovery for lossy image transmission with a suffix-rich Huffman code

    NASA Astrophysics Data System (ADS)

    Yang, Te-Chung; Kuo, C.-C. Jay

    1998-10-01

    A new entropy codec that can recover quickly from the loss of synchronization caused by transmission errors is proposed and applied to wireless image transmission in this research. The entropy codec is designed on the basis of the Huffman code, with a careful choice of the assignment of 1's and 0's to each branch of the Huffman tree. The design satisfies the suffix-rich property, i.e. the number of codewords that are suffixes of other codewords is maximized. After the Huffman coding tree is constructed, the source can be coded using the traditional Huffman code; thus, this coder introduces no overhead and does not sacrifice coding efficiency. Statistically, the decoder can automatically recover lost synchronization with the shortest error propagation length. Experimental results show that fast synchronization recovery reduces quality degradation in the reconstructed image while maintaining the same coding efficiency.
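
    As an illustration of the entropy-coding step described above, the following is a minimal Python sketch of standard Huffman code construction. It assigns the 0/1 branch labels arbitrarily; the suffix-rich codec keeps the same tree but chooses those labels so that as many codewords as possible are suffixes of other codewords, a step only indicated by a comment here. The function name and the example frequencies are illustrative and not taken from the paper.

        import heapq
        from itertools import count

        def huffman_codes(freqs):
            """Build a prefix code from symbol frequencies (standard Huffman).

            The suffix-rich codec keeps this tree structure but would choose the
            0/1 label of each branch to maximize the number of codewords that are
            suffixes of other codewords; here the labels are assigned arbitrarily."""
            tie = count()  # tie-breaker so the heap never compares dictionaries
            heap = [(f, next(tie), {s: ""}) for s, f in freqs.items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                f0, _, c0 = heapq.heappop(heap)
                f1, _, c1 = heapq.heappop(heap)
                merged = {s: "0" + w for s, w in c0.items()}
                merged.update({s: "1" + w for s, w in c1.items()})
                heapq.heappush(heap, (f0 + f1, next(tie), merged))
            return heap[0][2]

        print(huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))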

  19. Experimental implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples

    NASA Astrophysics Data System (ADS)

    Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2015-03-01

    A fast and accurate scatter imaging technique to differentiate cancerous and healthy breast tissue is introduced in this work. Such a technique would have wide-ranging clinical applications from intra-operative margin assessment to breast cancer screening. Coherent Scatter Computed Tomography (CSCT) has been shown to differentiate cancerous from healthy tissue, but the need to raster scan a pencil beam at a series of angles and slices in order to reconstruct 3D images makes it prohibitively time consuming. In this work we apply the coded aperture coherent scatter spectral imaging technique to reconstruct 3D images of breast tissue samples from experimental data taken without the rotation usually required in CSCT. We present our experimental implementation of coded aperture scatter imaging, the reconstructed images of the breast tissue samples and segmentations of the 3D images in order to identify the cancerous and healthy tissue inside of the samples. We find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside of them. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside of ex vivo samples within a time on the order of a minute.

  20. Imaging with the coded aperture gamma-ray spectrometer SPI aboard INTEGRAL

    NASA Astrophysics Data System (ADS)

    Wunderer, Cornelia B.; Strong, Andrew W.; Attie, David; von Ballmoos, Peter; Connell, Paul; Cordier, Bertrand; Diehl, Roland; Hammer, J. Wolfgang; Jean, Pierre; von Kienlin, Andreas; Knoedlseder, Juergen; Lichti, Giselher G.; Mandrou, Pierre; Paul, Jaques; Paul, Philippe; Reglero, Victor; Roques, Jean-Pierre; Sanchez, Filomeno; Schanne, Stephane; Schoenfelder, Volker; Shrader, Chris; Skinner, Gerald K.; Sturner, Steven J.; Teegarden, Bonnard J.; Vedrenne, Gilbert; Weidenspointner, Georg

    2003-03-01

    ESA's INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) will be launched in October 2002. Its two main instruments are the imager IBIS and the spectrometer SPI. Both employ coded apertures to obtain directional information on the incoming radiation. SPI's detection plane consists of 19 hexagonal Ge detectors, and its coded aperture has 63 tungsten-alloy elements of 30 mm thickness.

  1. Fractal analysis of DNA sequence data

    SciTech Connect

    Berthelsen, C.L.

    1993-01-01

    DNA sequence databases are growing at an almost exponential rate. New analysis methods are needed to extract knowledge about the organization of nucleotides from this vast amount of data. Fractal analysis is a new scientific paradigm that has been used successfully in many domains including the biological and physical sciences. Biological growth is a nonlinear dynamic process, and some have suggested that considering fractal geometry as a biological design principle may be most productive. This research is an exploratory study of the application of fractal analysis to DNA sequence data. A simple random fractal, the random walk, is used to represent DNA sequences. The fractal dimension of these walks is then estimated using the "sandbox method". Analysis of 164 human DNA sequences compared to three types of control sequences (random, base-content matched, and dimer-content matched) reveals that long-range correlations are present in DNA that are not explained by base or dimer frequencies. The study also revealed that the fractal dimension of coding sequences was significantly lower than that of sequences that were primarily noncoding, indicating the presence of longer-range correlations in functional sequences. The multifractal spectrum is used to analyze fractals that are heterogeneous and have a different fractal dimension for subsets with different scalings. The multifractal spectrum of the random walks of twelve mitochondrial genome sequences was estimated. Eight vertebrate mtDNA sequences had uniformly lower spectra values than did four invertebrate mtDNA sequences. Thus, vertebrate mitochondria show significantly longer-range correlations than invertebrate mitochondria. The higher multifractal spectra values for invertebrate mitochondria suggest a more random organization of the sequences. This research also includes considerable theoretical work on the effects of finite size, embedding dimension, and scaling ranges.
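
    To make the random-walk representation concrete, the following Python sketch maps a DNA sequence to a one-dimensional walk and estimates a scaling exponent from the root-mean-square displacement. The purine/pyrimidine step rule and the displacement-based estimator are illustrative assumptions; the dissertation's sandbox method differs in detail.

        import numpy as np

        def dna_walk(seq):
            """Map a DNA string to a 1-D walk: purines (A, G) step +1,
            pyrimidines (C, T) step -1 (one of several encodings used in the
            literature; the original work may use a different rule)."""
            steps = np.array([1 if b in "AG" else -1 for b in seq.upper() if b in "ACGT"])
            return np.cumsum(steps)

        def scaling_exponent(walk, lags=(4, 8, 16, 32, 64, 128)):
            """Estimate the walk's scaling exponent alpha from the RMS displacement
            F(l) over lag l, where F(l) ~ l**alpha; alpha = 0.5 indicates an
            uncorrelated sequence, alpha > 0.5 long-range correlations."""
            f = [np.sqrt(np.mean((walk[l:] - walk[:-l]) ** 2)) for l in lags]
            alpha, _ = np.polyfit(np.log(lags), np.log(f), 1)
            return alpha

        rng = np.random.default_rng(0)
        random_seq = "".join(rng.choice(list("ACGT"), size=4096))
        print("alpha (random control) ~", round(scaling_exponent(dna_walk(random_seq)), 2))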

  2. Large-field electron imaging and X-ray elemental mapping unveil the morphology, structure, and fractal features of a Cretaceous fossil at the centimeter scale.

    PubMed

    Oliveira, Naiara C; Silva, João H; Barros, Olga A; Pinheiro, Allysson P; Santana, William; Saraiva, Antônio A F; Ferreira, Odair P; Freire, Paulo T C; Paula, Amauri J

    2015-10-06

    We used here a scanning electron microscopy approach that detected backscattered electrons (BSEs) and X-rays (from ionization processes) along a large-field (LF) scan, applied on a Cretaceous fossil of a shrimp (area ∼280 mm²) from the Araripe Sedimentary Basin. High-definition LF images from BSEs and X-rays were essentially generated by assembling thousands of magnified images that covered the whole area of the fossil, thus unveiling morphological and compositional aspects at length scales from micrometers to centimeters. Morphological features of the shrimp such as pleopods, pereopods, and antennae located at near-surface layers (undetected by photography techniques) were unveiled in detail by LF BSE images and in calcium and phosphorus elemental maps (mineralized as hydroxyapatite). LF elemental maps for zinc and sulfur indicated a rare fossilization event observed for the first time in fossils from the Araripe Sedimentary Basin: the mineralization of zinc sulfide interfacing to hydroxyapatite in the fossil. Finally, a dimensional analysis of the phosphorus map led to an important finding: the existence of a fractal characteristic (D = 1.63) for the hydroxyapatite-matrix interface, a result of physical-geological events occurring with spatial scale invariance on the specimen, over millions of years.

  3. Edges of Saturn's rings are fractal.

    PubMed

    Li, Jun; Ostoja-Starzewski, Martin

    2015-01-01

    The images recently sent by the Cassini spacecraft mission (on the NASA website http://saturn.jpl.nasa.gov/photos/halloffame/) show the complex and beautiful rings of Saturn. Over the past few decades, various conjectures were advanced that Saturn's rings are Cantor-like sets, although no convincing fractal analysis of actual images has ever appeared. Here we focus on four images sent by the Cassini spacecraft mission (slide #42 "Mapping Clumps in Saturn's Rings", slide #54 "Scattered Sunshine", slide #66, taken two weeks before the planet's August 2009 equinox, and slide #68, showing edge waves raised by Daphnis on the Keeler Gap) and one image from the Voyager 2 mission in 1981. Using three box-counting methods, we determine the fractal dimension of the edges of the rings seen here to be consistently about 1.63–1.78. This clarifies in what sense Saturn's rings are fractal.

  4. Unique identification code for medical fundus images using blood vessel pattern for tele-ophthalmology applications.

    PubMed

    Singh, Anushikha; Dutta, Malay Kishore; Sharma, Dilip Kumar

    2016-10-01

    Identification of fundus images during transmission and storage in databases for tele-ophthalmology applications is an important issue in the modern era. The proposed work presents a novel, accurate method for the generation of a unique identification code for fundus images in tele-ophthalmology applications and database storage. Unlike existing methods based on steganography and watermarking, this method does not tamper with the medical image, as nothing is embedded in this approach and there is no loss of medical information. A strategic combination of the unique blood vessel pattern and the patient ID is considered for generation of the unique identification code for digital fundus images. The segmented blood vessel pattern near the optic disc is strategically combined with the patient ID to generate a unique identification code for the image. The proposed method of medical image identification is tested on the publicly available DRIVE and MESSIDOR databases of fundus images, and the results are encouraging. Experimental results indicate the uniqueness of the identification code and the lossless recovery of the patient identity from the code for integrity verification of fundus images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Exploring Fractals in the Classroom.

    ERIC Educational Resources Information Center

    Naylor, Michael

    1999-01-01

    Describes an activity involving six investigations. Introduces students to fractals, allows them to study the properties of some famous fractals, and encourages them to create their own fractal artwork. Contains 14 references. (ASK)

  7. Fractals: To Know, to Do, to Simulate.

    ERIC Educational Resources Information Center

    Talanquer, Vicente; Irazoque, Glinda

    1993-01-01

    Discusses the development of fractal theory and suggests fractal aggregates as an attractive alternative for introducing fractal concepts. Describes methods for producing metallic fractals and a computer simulation for drawing fractals. (MVL)

  9. Fractals in the Classroom

    NASA Astrophysics Data System (ADS)

    Knutson, Paul; Dahlberg, E. Dan

    2003-10-01

    In examples of fractals such as moon craters, rivers [2], cauliflower [3], and bread [4], the actual growth process of the fractal object is missed. In the simple experiment described here, one can observe and record the growth of calcium carbonate crystals (a ubiquitous material found in marble and seashells) in real time. The video frames can be digitized and analyzed to determine the fractal dimension.

  10. Hands-On Fractals and the Unexpected in Mathematics

    ERIC Educational Resources Information Center

    Gluchoff, Alan

    2006-01-01

    This article describes a hands-on project in which unusual fractal images are produced using only a photocopy machine and office supplies. The resulting images are an example of the contraction mapping principle.

  12. Protocol for carrying two-layer coded moving images

    NASA Astrophysics Data System (ADS)

    Ghanbari, M.; Azari, J.; Sarantopoulos, P.

    1995-05-01

    A modified version of the Orwell protocol is presented. While the presented method preserves all the basic features, it introduces the concept of the SURPLUS cell. These surplus cells enhance the performance of the network for class-2 traffic as well as those of the enhancement data in two-layer coded video by reducing the queuing delays. It is shown that the new method improves delay characteristics of the real-time services under mid-to-high traffic loads.

  13. Fractal Geometry of Architecture

    NASA Astrophysics Data System (ADS)

    Lorenz, Wolfgang E.

    In fractals, smaller parts and the whole are linked together. Fractals are self-similar, as those parts are, at least approximately, scaled-down copies of the rough whole. In architecture, such a concept has also been known for a long time: not only did architects of the twentieth century call for an overall idea that is mirrored in every single detail, but Gothic cathedrals and Indian temples also offer self-similarity. This study mainly focuses on the question of whether this concept of self-similarity makes architecture with fractal properties more diverse and interesting than Euclidean Modern architecture. The first part gives an introduction and explains fractal properties in various natural and architectural objects, presenting the underlying structure by computer-programmed renderings. In this connection, differences between the fractal architectural concept and true, mathematical fractals are worked out to make the limits clear. This is the basis for dealing with the problem of whether fractal-like architecture, particularly facades, can be measured so that different designs can be compared with each other with respect to their fractal properties. Finally, the usability of the box-counting method, an easy-to-use method for measuring fractal dimension, is analyzed with regard to architecture.
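
    The box-counting method mentioned above admits a very compact implementation. The sketch below, in Python, estimates the box-counting dimension of a binary (black-and-white) facade image; the function name, box sizes and the filled-square sanity check are illustrative choices, not taken from the study.

        import numpy as np

        def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32, 64)):
            """Estimate the box-counting dimension of a 2-D binary image: for each
            box size s, tile the image with s x s boxes, count the boxes N(s) that
            contain at least one foreground pixel, and take the slope of
            log N(s) versus log(1/s)."""
            img = np.asarray(binary_image, dtype=bool)
            counts = []
            for s in box_sizes:
                h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s  # crop to a multiple of s
                tiles = img[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(np.count_nonzero(tiles.any(axis=(1, 3))))
            slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
            return slope

        # Sanity check: a filled square should give a dimension close to 2.
        print(round(box_counting_dimension(np.ones((256, 256), dtype=bool)), 2))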

  14. An improved coding technique for image encryption and key management

    NASA Astrophysics Data System (ADS)

    Wu, Xu; Ma, Jie; Hu, Jiasheng

    2005-02-01

    An improved chaotic algorithm for image encryption, based on a conventional chaotic encryption algorithm, is proposed. Two keys are used in our technique. One is called the private key, which is fixed and protected in the system. The other is called the assistant key, which is public and transferred together with the encrypted image. For each original image, a different assistant key is chosen so that a different encryption key is obtained. The updated encryption algorithm not only resists a known-plaintext attack but also offers an effective solution for key management. Analyses and computer simulations show that security is improved greatly and that the scheme can easily be realized in hardware.
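
    The abstract does not spell out the chaotic map or the way the two keys are combined, so the following Python sketch is only a hedged illustration of the idea: a keystream is generated by a logistic map whose seed is derived from the fixed private key together with the per-image assistant key, and the image is XOR-ed with that keystream. The names, the hash-based seeding and the choice of the logistic map are assumptions, not the cited scheme.

        import hashlib
        import numpy as np

        def keystream(private_key: bytes, assistant_key: bytes, n: int) -> np.ndarray:
            """Illustrative chaotic keystream: a logistic map x <- 4x(1-x) is seeded
            from a hash of (private key, assistant key) and quantized to bytes."""
            digest = hashlib.sha256(private_key + assistant_key).digest()
            x = (int.from_bytes(digest[:8], "big") % (10 ** 9) + 1) / (10 ** 9 + 2.0)  # seed in (0, 1)
            out = np.empty(n, dtype=np.uint8)
            for i in range(n):
                x = 4.0 * x * (1.0 - x)
                out[i] = int(x * 256) & 0xFF
            return out

        def encrypt(image: np.ndarray, private_key: bytes, assistant_key: bytes) -> np.ndarray:
            """XOR the flattened 8-bit image with the keystream; decryption is identical."""
            ks = keystream(private_key, assistant_key, image.size)
            return (image.reshape(-1) ^ ks).reshape(image.shape)

        img = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
        enc = encrypt(img, b"private-key", b"assistant-key-for-this-image")
        assert np.array_equal(encrypt(enc, b"private-key", b"assistant-key-for-this-image"), img)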

  15. Medical Image Compression Using a New Subband Coding Method

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug

    1995-01-01

    A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.

  16. A Contourlet-Based Embedded Image Coding Scheme on Low Bit-Rate

    NASA Astrophysics Data System (ADS)

    Song, Haohao; Yu, Songyu

    Contourlet transform (CT) is a new image representation method, which can efficiently represent contours and textures in images. However, CT is a kind of overcomplete transform with a redundancy factor of 4/3. If it is applied to image compression straightforwardly, the encoding bit-rate may increase to meet a given distortion. This fact has hindered the coding community in developing CT-based image compression techniques with satisfactory performance. In this paper, we analyze the distribution of significant contourlet coefficients in different subbands and propose a new contourlet-based embedded image coding (CEIC) scheme for low bit-rates. The well-known wavelet-based embedded image coding (WEIC) algorithms such as EZW, SPIHT and SPECK can be easily integrated into the proposed scheme by constructing a virtual low-frequency subband, modifying the coding framework of the WEIC algorithms according to the structure of contourlet coefficients, and adopting a high-efficiency significant-coefficient scanning scheme for the CEIC scheme. The proposed CEIC scheme can provide an embedded bit-stream, which is desirable in heterogeneous networks. Our experiments demonstrate that the proposed scheme achieves better compression performance at low bit-rates. Furthermore, thanks to the contourlet transform adopted in the proposed scheme, more contours and textures in the coded images are preserved to ensure superior subjective quality.

  17. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    NASA Astrophysics Data System (ADS)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as a secret key and are shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the bucket detector of the GI optical system are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with the GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. For the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.

  18. Relation between degree of polarization and Pauli color coded image to characterize scattering mechanisms

    NASA Astrophysics Data System (ADS)

    Maitra, Sanjit; Gartley, Michael G.; Kerekes, John P.

    2012-06-01

    Polarimetric image classification is sensitive to object orientation and scattering properties. This paper is a preliminary step to bridge the gap between visible wavelength polarimetric imaging and polarimetric SAR (POLSAR) imaging scattering mechanisms. In visible wavelength polarimetric imaging, the degree of linear polarization (DOLP) is widely used to represent the polarized component of the wave scattered from the objects in the scene. For polarimetric SAR image representation, Pauli color coding is used, which is based on linear combinations of scattering matrix elements. This paper presents a relation between the DOLP and the Pauli decomposition components from the color-coded Pauli reconstructed image, based on laboratory measurements and first-principles physics-based image simulations. The objects in the scene are selected in such a way as to capture the three major scattering mechanisms: single or odd bounce, double or even bounce, and volume scattering. The comparison is done between passive visible polarimetric imaging, active visible polarimetric imaging and active radio-frequency POLSAR. The DOLP images are compared with the Pauli color-coded image with |HH-VV|, |HV|, |HH+VV| as the RGB channels. From the images, it is seen that the regions with high DOLP values showed high values of the HH component; that is, for higher DOLP the Pauli color-coded image showed a comparatively higher value of the HH component than of the other polarimetric components, implying double-bounce reflection. The comparison of the scattering mechanisms will help to create a synergy between POLSAR and visible wavelength polarimetric imaging, and the idea can be further extended to image fusion.
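
    The two quantities being compared can be written down directly. The short Python sketch below computes the DOLP from the Stokes parameters and the Pauli RGB channels named in the abstract; the dihedral example and the absence of any scaling constants are simplifying assumptions.

        import numpy as np

        def dolp(s0, s1, s2):
            """Degree of linear polarization from the Stokes parameters:
            DOLP = sqrt(S1**2 + S2**2) / S0."""
            return np.sqrt(s1 ** 2 + s2 ** 2) / s0

        def pauli_rgb(hh, hv, vv):
            """Pauli colour coding of a polarimetric SAR pixel with the channels
            used above: R = |HH - VV| (double bounce), G = |HV| (volume),
            B = |HH + VV| (single bounce)."""
            return np.stack([np.abs(hh - vv), np.abs(hv), np.abs(hh + vv)], axis=-1)

        # An ideal dihedral (double-bounce) scatterer: HH = -VV, HV = 0,
        # so only the R channel is non-zero.
        print(pauli_rgb(np.array(1 + 0j), np.array(0j), np.array(-1 + 0j)))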

  19. Fractal dimension analyses of lava surfaces and flow boundaries

    NASA Technical Reports Server (NTRS)

    Cleghorn, Timothy F.

    1993-01-01

    An improved method of estimating fractal surface dimensions has been developed. The accuracy of this method is illustrated using artificially generated fractal surfaces. A concept of linear dimension slightly different from the usual one is developed, allowing a direct link between it and the corresponding surface dimension estimate. These methods are applied to a series of images of lava flows representing a variety of physical and chemical conditions. These include lavas from California, Idaho, and Hawaii, as well as some extraterrestrial flows. The fractal surface dimension estimates are presented, as well as the fractal line dimensions where appropriate.

  20. GENERATING FRACTAL PATTERNS BY USING p-CIRCLE INVERSION

    NASA Astrophysics Data System (ADS)

    Ramírez, José L.; Rubiano, Gustavo N.; Zlobec, Borut Jurčič

    2015-10-01

    In this paper, we introduce the p-circle inversion, which generalizes the classical inversion with respect to a circle (p = 2) and the taxicab inversion (p = 1). We study some basic properties and also show the inversive images of some basic curves. We apply this new transformation to well-known fractals such as the Sierpinski triangle, the Koch curve, the dragon curve and the Fibonacci fractal, among others, and thereby obtain new fractal patterns. Moreover, we generalize the method called circle inversion fractal by means of the p-circle inversion.
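
    The abstract does not reproduce the formal definition, but a natural reading consistent with the classical (p = 2) and taxicab (p = 1) cases is sketched below in Python: the image of a point lies on the same ray from the centre, with the product of the p-distances of the point and its image equal to the squared radius. This definition and the function name are assumptions for illustration; the paper's formal construction may differ in detail.

        import numpy as np

        def p_circle_inversion(point, center, radius, p):
            """Invert `point` with respect to the p-circle of the given centre and
            radius: the image lies on the ray from the centre through the point at a
            p-distance d' with d * d' = radius**2, where d is the p-distance of the
            original point (classical circle inversion for p = 2, taxicab for p = 1)."""
            point, center = np.asarray(point, float), np.asarray(center, float)
            v = point - center
            d = np.linalg.norm(v, ord=p)
            if d == 0:
                raise ValueError("the centre has no inverse")
            return center + (radius ** 2 / d ** 2) * v

        # Classical sanity check (p = 2): a point at distance 2 from the centre of
        # the unit circle maps to a point at distance 1/2 on the same ray.
        print(p_circle_inversion([2.0, 0.0], [0.0, 0.0], 1.0, 2))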

  1. Restoring wavefront coded iris image through the optical parameter and regularization filter

    NASA Astrophysics Data System (ADS)

    Li, Yingjiao; He, Yuqing; Feng, Guangqin; Hou, Yushi; Liu, Yong

    2011-11-01

    Wavefront coding technology can extend the depth of field of an iris imaging system, but the iris image obtained through the system is coded and blurred and cannot be used directly by the recognition algorithm. This paper presents a restoration method for the blurred iris images produced by the wavefront coding system; after restoration, the images can be used for subsequent processing. First, the wavefront coded imaging system is simulated and its optical parameters are analyzed; the simulation yields the system's point spread function (PSF). Second, a blind restoration using the blurred iris image and this PSF estimates an appropriate PSFe. Finally, based on the estimated PSFe, a regularization filter is applied to the blurred image. Experimental results show that the proposed method is simple and has a fast processing speed. Compared with the traditional restoration algorithms of Wiener filtering and Lucy-Richardson filtering, the image recovered through regularization filtering is the most similar to the original iris image.
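
    The abstract does not specify the regularization filter, so the following Python sketch shows a generic frequency-domain Tikhonov-regularized restoration with a known (or estimated) PSF as a stand-in; the toy blur kernel and the regularization weight are illustrative assumptions.

        import numpy as np

        def regularized_deconvolve(blurred, psf, lam=1e-2):
            """Frequency-domain Tikhonov-regularized restoration:
            F = conj(H) * G / (|H|**2 + lam), where G is the spectrum of the blurred
            image and H the spectrum of the (centred, normalized) PSF."""
            h = np.zeros_like(blurred, dtype=float)
            h[:psf.shape[0], :psf.shape[1]] = psf / psf.sum()
            h = np.roll(h, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
            H, G = np.fft.fft2(h), np.fft.fft2(blurred)
            return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + lam)))

        rng = np.random.default_rng(0)
        image = rng.random((128, 128))
        psf = np.outer(np.hanning(9), np.hanning(9))          # toy blur kernel
        h = np.zeros((128, 128)); h[:9, :9] = psf / psf.sum()
        blurred = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                       np.fft.fft2(np.roll(h, (-4, -4), axis=(0, 1)))))
        restored = regularized_deconvolve(blurred, psf, lam=1e-3)
        print("restoration MSE:", round(float(np.mean((restored - image) ** 2)), 5))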

  2. Automatic detection of microcalcifications with multi-fractal spectrum.

    PubMed

    Ding, Yong; Dai, Hang; Zhang, Hang

    2014-01-01

    To improve the detection of microcalcifications (MCs), this paper proposes an automatic MC detection system making use of the multi-fractal spectrum in digitized mammograms. The automatic detection system is based on the principle that normal tissues possess certain fractal properties which change with the presence of MCs. In this system, the multi-fractal spectrum is applied to reveal such fractal properties. By quantifying the deviations of the multi-fractal spectra between normal tissues and MCs, the system can identify MCs that alter the fractal properties and finally locate the position of the MCs. The performance of the proposed system is compared with leading automatic detection systems on a mammographic image database. Experimental results demonstrate that the proposed system is statistically superior to most of the compared systems and delivers superior performance.

  3. Investigations of human EEG response to viewing fractal patterns.

    PubMed

    Hagerhall, Caroline M; Laike, Thorbjörn; Taylor, Richard P; Küller, Marianne; Küller, Rikard; Martin, Theodore P

    2008-01-01

    Owing to the prevalence of fractal patterns in natural scenery and their growing impact on cultures around the world, fractals constitute a common feature of our daily visual experiences, raising an important question: what responses do fractals induce in the observer? We monitored subjects' EEG while they were viewing fractals with different fractal dimensions, and the results show that significant effects could be found in the EEG even by employing relatively simple silhouette images. Patterns with a fractal dimension of 1.3 elicited the most interesting EEG, with the highest alpha in the frontal lobes but also the highest beta in the parietal area, pointing to a complicated interplay between different parts of the brain when experiencing this pattern.

  4. Medical image classification based on multi-scale non-negative sparse coding.

    PubMed

    Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar

    2017-05-27

    With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. First, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Second, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain the discriminative sparse representation of medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is used to perform medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Study of Optical Properties on Fractal Aggregation Using the GMM Method by Different Cluster Parameters

    NASA Astrophysics Data System (ADS)

    Chang, Kuo-En; Lin, Tang-Huang; Lien, Wei-Hung

    2015-04-01

    Anthropogenic pollutants or smoke from biomass burning contribute significantly to global particle aggregation emissions, yet their aggregate formation and resulting ensemble optical properties are poorly understood and parameterized in climate models. Particle aggregation refers to the formation of clusters in a colloidal suspension. In clustering algorithms, many parameters, such as the fractal dimension, the number of monomers, the monomer radius, and the real and imaginary parts of the refractive index, alter the geometry and characteristics of the fractal aggregate and further change the ensemble optical properties. The cluster-cluster aggregation (CCA) algorithm is used to specify the geometries of soot and haze particles. In addition, the Generalized Multi-particle Mie (GMM) method is utilized to compute the Mie solution from the single-particle to the multi-particle case. The computer code for the calculation of scattering by an aggregate of spheres in a fixed orientation, together with the experimental data, has been made publicly available. This study presents the model inputs for the optical determination: the monomer radius, the number of monomers per cluster, and the fractal dimension. The main aim of this study is to analyze and contrast the aforementioned cluster aggregation parameters, which produce significant differences in the optical properties computed with the GMM method. Keywords: optical properties, fractal aggregation, GMM, CCA

  6. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    PubMed

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; however, the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, so the proposed scheme also has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
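
    The encoder side of the scheme is simple enough to sketch. The Python fragment below filters the image with a small random binary (0/1) kernel and then keeps one pixel per block (polyphase down-sampling); the kernel size, down-sampling factor and boundary handling are illustrative assumptions, and the sparsity-based soft decoder is not shown.

        import numpy as np
        from scipy.ndimage import convolve

        def cs_encode(image, factor=2, kernel_size=3, seed=0):
            """Encoder sketch: convolve with a local random binary kernel, then take
            every `factor`-th pixel in each direction. The output is still an
            ordinary image and can be fed to any standard codec; recovery would use
            a sparsity-based soft decoder (not shown)."""
            rng = np.random.default_rng(seed)
            kernel = rng.integers(0, 2, (kernel_size, kernel_size)).astype(float)
            kernel /= max(kernel.sum(), 1.0)                  # keep measurements in range
            measured = convolve(image.astype(float), kernel, mode="reflect")
            return measured[::factor, ::factor]

        image = np.random.default_rng(1).random((64, 64))
        print(cs_encode(image).shape)                         # (32, 32)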

  7. Method of Digital Hologram Coding-Decoding and Holographic Image Processing Based on the Gabor Wavelet

    NASA Astrophysics Data System (ADS)

    Kozlova, A. S.

    2016-02-01

    Special features of an algorithm for the coding-decoding of digital particle holograms and the restoration of holographic particle images based on the Gabor wavelet are considered. The method uses the decoded wavelet coefficients for the subsequent restoration of images from digital holograms. Results of applying the method to numerically calculated holograms and to holograms of plankton particles are presented.

  8. Image gathering and coding for digital restoration: Information efficiency and visual quality

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; John, Sarah; Mccormick, Judith A.; Narayanswamy, Ramkumar

    1989-01-01

    Image gathering and coding are commonly treated as tasks separate from each other and from the digital processing used to restore and enhance the images. The goal is to develop a method that allows us to assess quantitatively the combined performance of image gathering and coding for the digital restoration of images with high visual quality. Digital restoration is often interactive because visual quality depends on perceptual rather than mathematical considerations, and these considerations vary with the target, the application, and the observer. The approach is based on the theoretical treatment of image gathering as a communication channel (J. Opt. Soc. Am. A 2, 1644 (1985); 5, 285 (1988)). Initial results suggest that the practical upper limit of the information contained in the acquired image data ranges typically from approximately 2 to 4 binary information units (bifs) per sample, depending on the design of the image-gathering system. The associated information efficiency of the transmitted data (i.e., the ratio of information to data) ranges typically from approximately 0.3 to 0.5 bif per bit without coding, and from approximately 0.5 to 0.9 bif per bit with lossless predictive compression and Huffman coding. The visual quality that can be attained with interactive image restoration improves perceptibly as the available information increases to approximately 3 bifs per sample. However, the perceptual improvements that can be attained with further increases in information are very subtle and depend on the target and the desired enhancement.

  9. Fractal analysis of yeast cell optical speckle

    NASA Astrophysics Data System (ADS)

    Flamholz, A.; Schneider, P. S.; Subramaniam, R.; Wong, P. K.; Lieberman, D. H.; Cheung, T. D.; Burgos, J.; Leon, K.; Romero, J.

    2006-02-01

    Steady-state laser light propagation in diffuse media such as biological cells generally provides bulk parameter information, such as the mean free path and absorption, via the transmission profile. The accompanying optical speckle can be analyzed as a random spatial data series, and its fractal dimension can be used to further classify biological media that show similar mean free path and absorption properties, such as those obtained from a single population. A population of yeast cells can be separated into different portions by centrifuge, and microscope analysis can be used to provide the population statistics. Fractal analysis of the speckle suggests that a lower fractal dimension is associated with higher cell packing density. The spatial intensity correlation revealed that higher cell packing gives rise to a higher refractive index. A calibration sample system that behaves similarly to the yeast samples in fractal dimension, spatial intensity correlation and diffusion was selected. Porous silicate slabs with different refractive index values, controlled by water content, were used for system calibration. The fractal dimension of the porous glass as well as of the yeast random spatial data series was found to depend on the imaging resolution. The fractal method was also applied to fission yeast single-cell fluorescence data as well as aging yeast optical data, and consistency was demonstrated. It is concluded that fractal analysis can be a high-sensitivity tool for the relative comparison of cell structure, but that additional diffusion measurements are necessary for determining the optimal image resolution. Practical application to dental plaque bio-film and cam-pill endoscope images was also demonstrated.

  10. Fractal dimension and architecture of trabecular bone.

    PubMed

    Fazzalari, N L; Parkinson, I H

    1996-01-01

    The fractal dimension of trabecular bone was determined for biopsies from the proximal femur of 25 subjects undergoing hip arthroplasty. The average age was 67.7 years. A binary profile of the trabecular bone in the biopsy was obtained from a digitized image. A program written for the Quantimet 520 performed the fractal analysis. The fractal dimension was calculated for each specimen, using boxes whose sides ranged from 65 to 1000 microns in length. The mean fractal dimension for the 25 subjects was 1.195 +/- 0.064 and shows that in Euclidean terms the surface extent of trabecular bone is indeterminate. The Quantimet 520 was also used to perform bone histomorphometric measurements. These were bone volume/total volume (BV/TV) (per cent) = 11.05 +/- 4.38, bone surface/total volume (BS/TV) (mm2/mm3) = 1.90 +/- 0.51, trabecular thickness (Tb.Th) (mm) = 0.12 +/- 0.03, trabecular spacing (Tb.Sp) (mm) = 1.03 +/- 0.36, and trabecular number (Tb.N) (number/mm) = 0.95 +/- 0.25. Pearson's correlation coefficients showed a statistically significant relationship between the fractal dimension and all the histomorphometric parameters, with BV/TV (r = 0.85, P < 0.0001), BS/TV (r = 0.74, P < 0.0001), Tb.Th (r = 0.50, P < 0.02), Tb.Sp (r = -0.81, P < 0.0001), and Tb.N (r = 0.76, P < 0.0001). This method for calculating fractal dimension shows that trabecular bone exhibits fractal properties over a defined range of box sizes, which is within the dimensions of a structural unit for trabecular bone. Therefore, the fractal dimension of trabecular bone provides a measure which does not rely on Euclidean descriptors in order to describe a complex geometry.

  11. Scene-Level Geographic Image Classification Based on a Covariance Descriptor Using Supervised Collaborative Kernel Coding

    PubMed Central

    Yang, Chunwei; Liu, Huaping; Wang, Shicheng; Liao, Shouyi

    2016-01-01

    Scene-level geographic image classification has been a very challenging problem and has become a research focus in recent years. This paper develops a supervised collaborative kernel coding method based on a covariance descriptor (covd) for scene-level geographic image classification. First, the covd is introduced in the feature extraction process and is then transformed into a Euclidean feature by a supervised collaborative kernel coding model. Furthermore, we develop an iterative optimization framework to solve this model. Comprehensive evaluations on a public high-resolution aerial image dataset and comparisons with state-of-the-art methods show the superiority and effectiveness of our approach. PMID:26999150

  12. Colour coding of intensity levels in CCD images

    NASA Astrophysics Data System (ADS)

    Neville, R. J.

    1995-06-01

    Present methods of displaying electronic images from charge-coupled device (CCD) cameras often fall short of ideal. The production of hard copy from computer printers or by photographing monitors frequently limits the quantity and quality of information available. One way to improve the transfer of information from electronic files to the human eye and brain is to display a spectrum of colours in addition to the usual brightness variations. The addition of colour gives an added dimension to the images and enables subtle variations in intensity to be more readily perceived.
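
    A minimal numeric sketch of such colour coding is given below in Python: intensities are normalized and mapped through a simple blue-to-green-to-red ramp so that hue as well as brightness carries the signal. The particular ramp is an illustrative assumption; the article does not prescribe a colormap.

        import numpy as np

        def colour_code(intensity_image):
            """Map a single-channel CCD image to RGB with a blue->green->red ramp
            so that subtle intensity differences are carried by hue as well as
            brightness."""
            img = np.asarray(intensity_image, dtype=float)
            t = (img - img.min()) / max(img.max() - img.min(), 1e-12)  # normalize to [0, 1]
            r = np.clip(2.0 * t - 1.0, 0.0, 1.0)   # red grows over the upper half
            b = np.clip(1.0 - 2.0 * t, 0.0, 1.0)   # blue fades over the lower half
            g = 1.0 - r - b                        # green peaks at mid intensities
            return np.stack([r, g, b], axis=-1)

        frame = np.random.default_rng(0).random((8, 8))
        print(colour_code(frame).shape)            # (8, 8, 3)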

  13. The analysis of the influence of fractal structure of stimuli on fractal dynamics in fixational eye movements and EEG signal

    PubMed Central

    Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin

    2016-01-01

    One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has been yet discovered between the structure of the visual stimulus, and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to the ‘complex’ visual stimulus. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain, as the main part of nervous system that is engaged in eye movements, we analyzed the governed Electroencephalogram (EEG) signal during fixation. We have found out that there is a coupling between fractality of image, EEG and fixational eye movements. The capability observed in this research can be further investigated and applied for treatment of different vision disorders. PMID:27217194

  14. The analysis of the influence of fractal structure of stimuli on fractal dynamics in fixational eye movements and EEG signal

    NASA Astrophysics Data System (ADS)

    Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin

    2016-05-01

    One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has been yet discovered between the structure of the visual stimulus, and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to the ‘complex’ visual stimulus. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain, as the main part of nervous system that is engaged in eye movements, we analyzed the governed Electroencephalogram (EEG) signal during fixation. We have found out that there is a coupling between fractality of image, EEG and fixational eye movements. The capability observed in this research can be further investigated and applied for treatment of different vision disorders.

  15. Fractal analysis of cervical intraepithelial neoplasia.

    PubMed

    Fabrizii, Markus; Moinfar, Farid; Jelinek, Herbert F; Karperien, Audrey; Ahammer, Helmut

    2014-01-01

    Cervical intraepithelial neoplasias (CIN) represent precursor lesions of cervical cancer. These neoplastic lesions are traditionally subdivided into three categories, CIN 1, CIN 2, and CIN 3, using microscopic criteria. The relation between the grade of cervical intraepithelial neoplasia (CIN) and its fractal dimension was investigated to establish a basis for an objective diagnosis using the proposed method. Classical evaluation of the tissue samples was performed by an experienced gynecologic pathologist. Tissue samples were scanned and saved as digital images using an Aperio scanner and software. After image segmentation, the box-counting method as well as multifractal methods were applied to determine the relation between fractal dimension and grade of CIN. A total of 46 images were used to compare the pathologist's neoplasia grades with the predicted groups obtained by fractal methods. Significant or highly significant differences between all grades of CIN could be found. The confusion matrix comparing the pathologist's grading with the groups predicted by fractal methods showed a match of 87.1%. Multifractal spectra were able to differentiate between normal epithelium and low-grade as well as high-grade neoplasia. Fractal dimension can therefore be considered an objective parameter for grading cervical intraepithelial neoplasia.

  16. Line graphs for fractals

    NASA Astrophysics Data System (ADS)

    Warchalowski, Wiktor; Krawczyk, Malgorzata J.

    2017-03-01

    We found the Lindenmayer systems for line graphs built on selected fractals. We show that the fractal dimension of the graphs obtained in this way is, in all analysed cases, the same as for the original graphs. Both for the original graphs and for their line graphs we identified classes of nodes which reflect the symmetry of the graph.

  17. Low-bit-rate subband image coding with matching pursuits

    NASA Astrophysics Data System (ADS)

    Rabiee, Hamid; Safavian, S. R.; Gardos, Thomas R.; Mirani, A. J.

    1998-01-01

    In this paper, a novel multiresolution algorithm for low bit-rate image compression is presented. High-quality low bit-rate image compression is achieved by first decomposing the image into approximation and detail subimages with a shift-orthogonal multiresolution analysis. Then, at the coarsest resolution level, the coefficients of the transformation are encoded by an orthogonal matching pursuit algorithm with a wavelet packet dictionary. Our dictionary consists of convolutional splines of up to order two for the detail and approximation subbands. The inter-correlation between the various resolutions is then exploited by using the same bases from the dictionary to encode the coefficients of the finer resolution bands at the corresponding spatial locations. To further exploit the spatial correlation of the coefficients, the embedded zerotree wavelet (EZW) algorithm is used to identify the potential zero trees. The coefficients of the representation are then quantized and arithmetic encoded at each resolution, and packed into a scalable bit-stream structure. Our new algorithm is highly bit-rate scalable, and performs better than the segmentation-based matching pursuit and EZW encoders at lower bit rates, in terms of subjective image quality and peak signal-to-noise ratio.
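
    For orientation, the core greedy step of orthogonal matching pursuit is sketched below in Python on a generic unit-norm dictionary; the spline wavelet-packet dictionary, the zero-tree stage and the arithmetic coder of the actual algorithm are not reproduced.

        import numpy as np

        def omp(signal, dictionary, n_atoms):
            """Minimal orthogonal matching pursuit: repeatedly pick the dictionary
            column (atom) most correlated with the residual, then re-fit all selected
            atoms by least squares. `dictionary` is assumed to have unit-norm columns."""
            residual = signal.astype(float).copy()
            selected, coeffs = [], None
            for _ in range(n_atoms):
                idx = int(np.argmax(np.abs(dictionary.T @ residual)))
                if idx not in selected:
                    selected.append(idx)
                coeffs, *_ = np.linalg.lstsq(dictionary[:, selected], signal, rcond=None)
                residual = signal - dictionary[:, selected] @ coeffs
            return selected, coeffs

        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 256))
        D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
        x = 0.8 * D[:, 10] - 0.5 * D[:, 200]           # a 2-sparse test signal
        atoms, c = omp(x, D, n_atoms=2)
        print(sorted(atoms))                           # expected: [10, 200]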

  18. DCT/DST-based transform coding for intra prediction in image/video coding.

    PubMed

    Saxena, Ankur; Fernandes, Felix C

    2013-10-01

    In this paper, we present a DCT/DST-based transform scheme that applies either the conventional DCT or the type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra prediction scheme in a codec that applies transforms separably along the horizontal and vertical directions. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of the horizontal and vertical modes, the DST is the optimal transform with performance close to the KLT. Here, we prove that this is indeed the case for the other, oblique modes. The optimal choice of DCT or DST is based on the intra-prediction mode and requires no additional signaling information or rate-distortion search. The DCT/DST scheme presented in this paper was adopted in the HEVC standardization in March 2011. Further simplifications, especially to reduce implementation complexity, which remove the mode-dependency between DCT and DST and simply always use the DST for 4 × 4 intra luma blocks, were adopted in the HEVC standard in July 2012. Simulation results for the DCT/DST algorithm, obtained with the reference software of the ongoing HEVC standardization, are presented. Our results show that the DCT/DST scheme provides significant BD-rate improvement over the conventional DCT-based scheme for intra prediction in video sequences.
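
    The mode-dependent transform choice can be illustrated compactly. The Python sketch below builds the 4-point DST-VII matrix from its standard definition and applies either DST-VII or DCT-II separably along each direction; the boolean flags stand in for the mode-to-transform mapping, which is a simplification of the actual HEVC rule.

        import numpy as np
        from scipy.fft import dct

        def dst7_matrix(n=4):
            """Forward DST-VII matrix T[k, m] = sqrt(4/(2n+1)) * sin(pi*(2k+1)*(m+1)/(2n+1));
            for n = 4 this matches (up to integer scaling) the 4x4 DST of HEVC."""
            k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            return np.sqrt(4.0 / (2 * n + 1)) * np.sin(np.pi * (2 * k + 1) * (m + 1) / (2 * n + 1))

        def intra_residual_transform(block, vertical_dst, horizontal_dst):
            """Apply DST-VII or DCT-II separably to a square residual block; the two
            booleans emulate the mode-dependent selection described above."""
            T = dst7_matrix(block.shape[0])
            out = T @ block if vertical_dst else dct(block, type=2, axis=0, norm="ortho")
            out = out @ T.T if horizontal_dst else dct(out, type=2, axis=1, norm="ortho")
            return out

        residual = np.arange(16, dtype=float).reshape(4, 4)
        print(np.round(intra_residual_transform(residual, True, True), 2))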

  19. Investigation of the Near Wake Flow Structure of a Fractal Square Turbulence Grid using Particle Image Velocimetry

    NASA Astrophysics Data System (ADS)

    Uzol, Oguz; Hazaveh, Hooman Amiri

    2016-11-01

    The 3D flow structure within the near wake of a four-iteration fractal square turbulence grid is obtained by combining data from closely-spaced horizontal and vertical two-dimensional PIV measurement planes. For this purpose, the grid is placed inside the entrance of the test section of a suction type wind tunnel. The experiments are conducted at a freestream velocity of 8 m/s, corresponding to a Reynolds number of 9000 based on the effective mesh size. The freestream turbulence intensity is about 0.5%. The measurement volume extends about 7 effective mesh sizes downstream of the grid. Within the measurement volume, 1000 vector maps are obtained on each one of the 220 horizontal and 220 vertical 2D PIV planes, which are separated from each other by 500 microns, close to the in-plane vector spacing of 600 microns. This dataset allowed us to calculate all components of the mean velocity, velocity gradient tensor, and vorticity, as well as the Reynolds stress tensor except for the (v'w') component. Using the analyzed three-dimensional mean flow field data, we are able to observe the three-dimensional structure of the wakes of the largest bars that dominate the flow field and the corresponding turbulence generation characteristics. We focus on how the wake geometry, decay characteristics and stress-strain relations are affected by the presence of smaller wakes surrounding the largest wake regions. We also investigate how 3D mean flow non-uniformities, such as lateral contraction and bulging of the mean wake shapes, are generated downstream of the grid.

  20. Fractal structures and processes

    SciTech Connect

    Bassingthwaighte, J.B.; Beard, D.A.; Percival, D.B.; Raymond, G.M.

    1996-06-01

    Fractals and chaos are closely related. Many chaotic systems have fractal features. Fractals are self-similar or self-affine structures, which means that they look much the same when magnified or reduced in scale over a reasonably large range of scales, at least two orders of magnitude and preferably more (Mandelbrot, 1983). The methods for estimating their fractal dimensions or their Hurst coefficients, which summarize the scaling relationships and their correlation structures, are going through a rapid evolutionary phase. Fractal measures can be regarded as providing a useful statistical measure of correlated random processes. They also provide a basis for analyzing recursive processes in biology such as the growth of arborizing networks in the circulatory system, airways, or glandular ducts. © 1996 American Institute of Physics.
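
    The record mentions estimating Hurst coefficients; as one concrete example of such an estimator, the sketch below performs a crude rescaled-range (R/S) analysis on a 1-D series. It is only one of several possible estimators, and the window sizes are illustrative (the series must be long enough to fill the largest window).

        import numpy as np

        def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128, 256)):
            """Crude rescaled-range (R/S) estimate of the Hurst coefficient of a 1-D series."""
            x = np.asarray(x, dtype=float)
            rs = []
            for n in window_sizes:
                segs = x[: (len(x) // n) * n].reshape(-1, n)
                ratios = []
                for seg in segs:
                    dev = np.cumsum(seg - seg.mean())   # cumulative deviation from the segment mean
                    r = dev.max() - dev.min()           # range of the cumulative deviations
                    s = seg.std()                       # scale of the segment
                    if s > 0:
                        ratios.append(r / s)
                rs.append(np.mean(ratios))
            slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
            return slope   # slope of log(R/S) versus log(n), roughly the Hurst exponent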

  1. Validation of the Electromagnetic Code FACETS for Numerical Simulation of Radar Target Images

    DTIC Science & Technology

    2009-12-01

    radar cross-section (stealth) targets. Radar images such as High Range Resolution profiles, and Synthetic Aperture Radar / Inverse Synthetic Aperture...target is a precisely designed and machined engineering test target containing standard radar reflector primitive shapes such as flat plates, dihedrals...compute radar images of aircraft. The code was developed by Thales Defence Information Systems, UK. FACETS computes the radar cross-section and SAR image

  2. A lossless compression method for medical image sequences using JPEG-LS and interframe coding.

    PubMed

    Miaou, Shaou-Gang; Ke, Fu-Sheng; Chen, Shu-Ching

    2009-09-01

    Hospitals and medical centers produce an enormous amount of digital medical images every day, especially in the form of image sequences, which requires considerable storage space. One solution could be the application of lossless compression. Among available methods, JPEG-LS has excellent coding performance. However, it only compresses a single picture with intracoding and does not utilize the interframe correlation among pictures. Therefore, this paper proposes a method that combines JPEG-LS with an interframe coding scheme using motion vectors to enhance the compression performance over using JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video sequence, the interframe coding is activated only when the interframe correlation is high enough. With six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over using JPEG-LS and JPEG2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5%, respectively, are obtained.
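
    The record states that interframe coding is activated only when the interframe correlation is high enough. The sketch below illustrates that gating idea in a minimal way: it is not the paper's actual criterion, threshold, or motion-vector search, and the mode names are assumptions.

        import numpy as np

        def choose_frame_mode(prev, curr, corr_threshold=0.9):
            """Decide between intra (JPEG-LS-style) and inter coding for one frame.

            prev, curr : 2-D integer image arrays (e.g. uint8/uint16 frames).
            Returns ('inter', residual) when the normalized correlation with the previous
            frame is high enough, otherwise ('intra', curr). The threshold is illustrative.
            """
            corr = np.corrcoef(prev.astype(float).ravel(), curr.astype(float).ravel())[0, 1]
            if corr >= corr_threshold:
                # inter mode: code the difference image (residual) instead of the frame itself
                return "inter", curr.astype(np.int32) - prev.astype(np.int32)
            return "intra", curr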

  3. Adaptive bit truncation and compensation method for EZW image coding

    NASA Astrophysics Data System (ADS)

    Dai, Sheng-Kui; Zhu, Guangxi; Wang, Yao

    2003-09-01

    The embedded zero-tree wavelet algorithm (EZW) is widely adopted to compress wavelet coefficients of images, with the property that the bit stream can be truncated and produced anywhere. The lower bit planes of the wavelet coefficients are verified to be less important than the higher bit planes, and can therefore be truncated and left unencoded. Based on experiments, a generalized function is deduced in this paper that guides the EZW encoder in intelligently deciding how many low bit planes to truncate. In the EZW decoder, a simple method is presented to compensate for the truncated wavelet coefficients, which noticeably enhances the quality of the reconstructed image at scarcely any additional cost.
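
    The record does not spell out the truncation or compensation rules; the sketch below shows the general idea on integer-valued (quantized) wavelet coefficients, using simple low-bit-plane masking and midpoint compensation. The specific compensation used by the authors may differ.

        import numpy as np

        def truncate_low_bitplanes(coeffs, n_planes):
            """Zero out the n lowest bit planes of integer wavelet coefficients."""
            mask = ~((1 << n_planes) - 1)               # keep only the high bit planes
            return np.sign(coeffs) * (np.abs(coeffs).astype(np.int64) & mask)

        def compensate(coeffs_truncated, n_planes):
            """Add half of the truncated step back to non-zero coefficients at the decoder."""
            half = (1 << (n_planes - 1)) if n_planes > 0 else 0
            comp = np.abs(coeffs_truncated) + np.where(coeffs_truncated != 0, half, 0)
            return np.sign(coeffs_truncated) * comp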

  4. Wavelet-based image coding using saliency map

    NASA Astrophysics Data System (ADS)

    Vargic, Radoslav; Kučerová, Júlia; Polec, Jaroslav

    2016-11-01

    Visual information is very important in human perception of the surrounding world. During the observation of a scene, some image parts are more salient than others. This fact is conventionally addressed using the regions-of-interest approach. We present an approach that captures the saliency information on a per-pixel basis, using one continuous saliency map for the whole image, which is directly used in the lossy image compression algorithm. Although the notion of a region is no longer needed for the encoding/decoding part of the algorithm, the resulting method can, by its nature, efficiently emulate a large number of regions of interest of varying significance. We provide a reference implementation of this approach based on the set partitioning in hierarchical trees (SPIHT) algorithm and show that the proposed method is effective and has the potential to achieve significantly better results than the original SPIHT algorithm. The approach is not limited to SPIHT and can be coupled with, e.g., JPEG 2000 as well.

  5. Ultrasound Elasticity Imaging System with Chirp-Coded Excitation for Assessing Biomechanical Properties of Elasticity Phantom

    PubMed Central

    Chun, Guan-Chun; Chiang, Hsing-Jung; Lin, Kuan-Hung; Li, Chien-Ming; Chen, Pei-Jarn; Chen, Tainsong

    2015-01-01

    The biomechanical properties of soft tissues vary with pathological phenomena. Ultrasound elasticity imaging is a noninvasive method used to analyze the local biomechanical properties of soft tissues in clinical diagnosis. However, the echo signal-to-noise ratio (eSNR) is diminished because of the attenuation of ultrasonic energy by soft tissues. Therefore, to improve the quality of elastography, the eSNR and depth of ultrasound penetration must be increased using chirp-coded excitation. Moreover, the low axial resolution of ultrasound images generated by a chirp-coded pulse must be increased using an appropriate compression filter. The main aim of this study is to develop an ultrasound elasticity imaging system with chirp-coded excitation using a Tukey window for assessing the biomechanical properties of soft tissues. In this study, we propose an ultrasound elasticity imaging system equipped with a 7.5-MHz single-element transducer and polymethylpentene compression plate to measure strains in soft tissues. Soft tissue strains were analyzed using cross correlation (CC) and absolute difference (AD) algorithms. The optimal parameters of CC and AD algorithms used for the ultrasound elasticity imaging system with chirp-coded excitation were determined by measuring the elastographic signal-to-noise ratio (SNRe) of a homogeneous phantom. Moreover, chirp-coded excitation and short pulse excitation were used to measure the elasticity properties of the phantom. The elastographic qualities of the tissue-mimicking phantom were assessed in terms of Young’s modulus and elastographic contrast-to-noise ratio (CNRe). The results show that the developed ultrasound elasticity imaging system with chirp-coded excitation modulated by a Tukey window can acquire accurate, high-quality elastography images. PMID:28793718
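
    The record describes a chirp-coded excitation shaped by a Tukey window around a 7.5-MHz transducer. The sketch below generates such a tapered chirp with SciPy; the sweep bandwidth, pulse duration, sampling rate, and taper fraction are illustrative assumptions, not the paper's settings.

        import numpy as np
        from scipy.signal import chirp
        from scipy.signal.windows import tukey

        fs = 100e6                        # sampling rate, Hz (illustrative)
        duration = 10e-6                  # 10-microsecond excitation (illustrative)
        t = np.arange(0, duration, 1 / fs)

        # Linear chirp swept around the 7.5-MHz transducer centre frequency.
        excitation = chirp(t, f0=5e6, t1=duration, f1=10e6, method="linear")
        excitation *= tukey(len(t), alpha=0.25)   # Tukey taper to reduce spectral ripple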

  6. Fractal metrology for biogeosystems analysis

    NASA Astrophysics Data System (ADS)

    Torres-Argüelles, V.; Oleschko, K.; Tarquis, A. M.; Korvin, G.; Gaona, C.; Parrot, J.-F.; Ventura-Ramos, E.

    2010-06-01

    The solid-pore distribution pattern plays an important role in soil functioning, being related to the main physical, chemical and biological multiscale and multitemporal processes. In the present research, this pattern is extracted from the digital images of three soils (Chernozem, Solonetz and "Chocolate" Clay) and compared in terms of roughness of the gray-intensity distribution (the measurand) quantified by several measurement techniques. Special attention was paid to the uncertainty of each of them and to the measurement function which best fits the experimental results. Some of the applied techniques are classical in the fractal context (box-counting, rescaled-range and wavelet analyses, etc.) while the others have been recently developed by our group. The combination of all these techniques, coming from Fractal Geometry, Metrology, Informatics, Probability Theory and Statistics, is termed in this paper Fractal Metrology (FM). We show the usefulness of FM through a case study of soil physical and chemical degradation, applying the selected toolbox to describe and compare the main structural attributes of three porous media with contrasting structure but similar clay mineralogy dominated by montmorillonites.

  7. Fractal Metrology for biogeosystems analysis

    NASA Astrophysics Data System (ADS)

    Torres-Argüelles, V.; Oleschko, K.; Tarquis, A. M.; Korvin, G.; Gaona, C.; Parrot, J.-F.; Ventura-Ramos, E.

    2010-11-01

    The solid-pore distribution pattern plays an important role in soil functioning, being related to the main physical, chemical and biological multiscale and multitemporal processes of this complex system. In the present research, we studied the aggregation process as self-organizing and operating near a critical point. The structural pattern is extracted from the digital images of three soils (Chernozem, Solonetz and "Chocolate" Clay) and compared in terms of roughness of the gray-intensity distribution quantified by several measurement techniques. Special attention was paid to the uncertainty of each of them, measured in terms of standard deviation. Some of the applied methods are classical in the fractal context (box-counting, rescaled-range and wavelet analyses, etc.) while the others have been recently developed by our group. The combination of these techniques, coming from Fractal Geometry, Metrology, Informatics, Probability Theory and Statistics, is termed in this paper Fractal Metrology (FM). We show the usefulness of FM for complex systems analysis through a case study of the soil's physical and chemical degradation, applying the selected toolbox to describe and compare the structural attributes of three porous media with contrasting structure but similar clay mineralogy dominated by montmorillonites.

  8. Color-Coded Super-Resolution Small-Molecule Imaging.

    PubMed

    Beuzer, Paolo; La Clair, James J; Cang, Hu

    2016-06-02

    Although the development of super-resolution microscopy dates back to 1994, its applications have been primarily focused on visualizing cellular structures and targets, including proteins, DNA and sugars. We now report on a system that allows both monitoring of the localization of exogenous small molecules in live cells at low resolution and subsequent super-resolution imaging by using stochastic optical reconstruction microscopy (STORM) on fixed cells. This represents a powerful new tool to understand the dynamics of subcellular trafficking associated with the mode and mechanism of action of exogenous small molecules.

  9. Classified JPEG coding of mixed document images for printing.

    PubMed

    Ramos, M G; de Queiroz, R L

    2000-01-01

    This paper presents a modified JPEG coder that is applied to the compression of mixed documents (containing text, natural images, and graphics) for printing purposes. The modified JPEG coder proposed in this paper takes advantage of the distinct perceptually significant regions in these documents to achieve higher perceptual quality than the standard JPEG coder. The region adaptivity is performed via classified thresholding while remaining fully compliant with the baseline standard. A computationally efficient classification algorithm is presented, and the improved performance of the classified JPEG coder is verified.

  10. Non-Uniform Contrast and Noise Correction for Coded Source Neutron Imaging

    SciTech Connect

    Santos-Villalobos, Hector J; Bingham, Philip R

    2012-01-01

    Since the first application of neutron radiography in the 1930s, the field of neutron radiography has matured enough to develop several applications. However, advances in the technology are far from concluded. In general, the resolution of scintillator-based detection systems is limited to the 10 μm range, and the relatively low neutron count rate of neutron sources compared to other illumination sources restricts time-resolved measurement. One path toward improved resolution is the use of magnification; however, to date neutron optics are inefficient, expensive, and difficult to develop. There is a clear demand for cost-effective scintillator-based neutron imaging systems that achieve resolutions of 1 μm or less. Such an imaging system would dramatically extend the application of neutron imaging. For this purpose a coded source imaging system is under development. The current challenge is to reduce artifacts in the reconstructed coded source images. Artifacts are generated by non-uniform illumination of the source, gamma rays, dark current at the imaging sensor, and system noise from the reconstruction kernel. In this paper, we describe how to pre-process the coded signal to reduce noise and non-uniform illumination, and how to reconstruct the coded signal with three reconstruction methods: correlation, maximum likelihood estimation, and the algebraic reconstruction technique. We illustrate our results with experimental examples.
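
    Of the three reconstruction methods named in this record, the correlation method is the simplest to illustrate. The sketch below shows generic correlation decoding of a coded image; the balanced decoder 2*mask - 1 is the usual choice for URA-style patterns, and none of the pre-processing described in the paper is included here.

        import numpy as np
        from scipy.signal import fftconvolve

        def correlation_reconstruction(detector_image, mask, decoder=None):
            """Correlation-based decoding of a coded-source / coded-aperture measurement.

            detector_image : recorded 2-D intensity pattern.
            mask           : binary aperture/source pattern (1 = open, 0 = opaque).
            decoder        : decoding array; by default 2*mask - 1 so that open and
                             closed elements contribute +1 and -1 respectively.
            """
            if decoder is None:
                decoder = 2.0 * mask - 1.0
            # correlation is implemented as convolution with the flipped decoder
            return fftconvolve(detector_image, decoder[::-1, ::-1], mode="same")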

  11. IRMA Code II: unique annotation of medical images for access and retrieval.

    PubMed

    Piesch, Tim-Christian; Müller, Henning; Kuhl, Christiane K; Deserno, Thomas M

    2012-01-01

    Content-based image retrieval (CBIR) provides novel options to access large repositories of medical images, in particular for storing, querying and reporting. This requires a revisit of nomenclatures for image classification such as DICOM, SNOMED, and RadLex. For instance, DICOM defines only about 20 concept terms for body regions, which partly overlap. This is insufficient to access the visual image characteristics. In 2002, the Image Retrieval in Medical Applications (IRMA) project proposed a mono-hierarchic, multi-axial coding scheme called the IRMA Code. It was used in the Cross Language Evaluation Forum (ImageCLEF) annotation tasks. Ten years of experience have revealed several weak points. In this paper, we propose eight axes of three levels in hierarchy for (A) anatomy, (B) biological system, (C) configuration, (D) direction, (E) equipment, (F) finding, (G) generation, and (H) human maneuver, as well as additional flags for age class, body side, contrast agent, ethnicity, finding certainty, gender, quality, and scanned film, which are captured in the form of another axis (I). Using a tag-based notation, IRMA Code II supports multiple selection coding within one axis, which is required for the new main categories.

  12. Large deformation image classification using generalized locality-constrained linear coding.

    PubMed

    Zhang, Pei; Wee, Chong-Yaw; Niethammer, Marc; Shen, Dinggang; Yap, Pew-Thian

    2013-01-01

    Magnetic resonance (MR) imaging has been demonstrated to be very useful for clinical diagnosis of Alzheimer's disease (AD). A common approach to using MR images for AD detection is to spatially normalize the images by non-rigid image registration, and then perform statistical analysis on the resulting deformation fields. Due to the high nonlinearity of the deformation field, recent studies suggest using the initial momentum instead, as it lies in a linear space and fully encodes the deformation field. In this paper we explore the use of initial momentum for image classification by focusing on the problem of AD detection. Experiments on the public ADNI dataset show that the initial momentum, together with a simple sparse coding technique, locality-constrained linear coding (LLC), can achieve a classification accuracy that is comparable to or even better than the state of the art. We also show that the performance of LLC can be greatly improved by introducing proper weights to the codebook.

  13. Coded aperture detector: an image sensor with sub 20-nm pixel resolution.

    PubMed

    Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick

    2014-08-11

    We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.

  14. Hierarchical prediction and context adaptive coding for lossless color image compression.

    PubMed

    Kim, Seyun; Cho, Nam Ik

    2014-01-01

    This paper presents a new lossless color image compression algorithm based on hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, it is first decorrelated by a reversible color transform, and then the Y component is encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for the pixel prediction, whereas conventional raster scan prediction methods use only upper and left pixels. An appropriate context model for the prediction error is also defined, and arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG2000 and JPEG-XR.
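
    The record mentions decorrelation by a reversible color transform. As one standard example of such an integer-reversible transform (the RCT used in JPEG 2000 lossless mode, which is not necessarily the transform the authors use), the following sketch shows the forward and exact inverse mappings on signed integer arrays.

        import numpy as np

        def rct_forward(r, g, b):
            """JPEG 2000-style reversible color transform (integer inputs assumed)."""
            y = (r + 2 * g + b) // 4      # floor((R + 2G + B) / 4)
            cb = b - g
            cr = r - g
            return y, cb, cr

        def rct_inverse(y, cb, cr):
            """Exact inverse of rct_forward."""
            g = y - (cb + cr) // 4
            r = cr + g
            b = cb + g
            return r, g, b

        # round-trip check on random 8-bit data (cast to signed ints first)
        rgb = np.random.randint(0, 256, size=(3, 8, 8)).astype(np.int32)
        assert all(np.array_equal(a, b) for a, b in zip(rct_inverse(*rct_forward(*rgb)), rgb))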

  15. Cross-indexing of binary SIFT codes for large-scale image search.

    PubMed

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost for storage. Besides, it benefits computational efficiency, since similarity can be efficiently measured by the Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new searching strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
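
    The efficiency argument in this record rests on Hamming distance between packed binary codes. The sketch below shows the standard bit-counting computation in NumPy; the 256-bit code length is illustrative and unrelated to the paper's FSB codes.

        import numpy as np

        def hamming_distance(codes_a, codes_b):
            """Hamming distance between binary codes packed into uint8 arrays.

            codes_a, codes_b : arrays of shape (..., n_bytes), e.g. 32 bytes for a
            256-bit binarized descriptor (the size here is illustrative).
            """
            xor = np.bitwise_xor(codes_a, codes_b)
            # unpackbits turns each byte into 8 bits; summing them counts differing bits
            return np.unpackbits(xor, axis=-1).sum(axis=-1)

        a = np.random.randint(0, 256, size=32, dtype=np.uint8)
        b = np.random.randint(0, 256, size=32, dtype=np.uint8)
        print(hamming_distance(a, b))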

  16. Analysis of image content recognition algorithm based on sparse coding and machine learning

    NASA Astrophysics Data System (ADS)

    Xiao, Yu

    2017-03-01

    This paper presents an image classification algorithm based on a spatial sparse coding model and random forests. First, SIFT features are extracted from the image; a visual vocabulary is then generated from the SIFT features using sparse coding theory, and the SIFT features are encoded as sparse vectors over this vocabulary. By combining regional pooling with the spatial sparse vectors, a fixed-dimensional sparse vector is obtained to represent the image. Finally, a random forest classifier is trained and tested on the image sparse vectors, using the standard Caltech-101 and Scene-15 test datasets. The experimental results show that the proposed algorithm can effectively represent the features of the image and improve classification accuracy. The paper also proposes an image recognition algorithm based on image segmentation, sparse coding and multi-instance learning. This algorithm introduces the concept of multi-instance learning: the image is treated as a multi-instance bag, with sparse SIFT-based features of image segments as instances; the visual vocabulary generated by the sparse coding model serves as the feature space, each bag is mapped into this feature space through statistics on the number of instances it contains, and a 1-norm SVM is then used to classify images and to generate sample weights for selecting important image features.

  17. Lensless coded-aperture imaging with separable Doubly-Toeplitz masks

    NASA Astrophysics Data System (ADS)

    DeWeert, Michael J.; Farm, Brian P.

    2015-02-01

    In certain imaging applications, conventional lens technology is constrained by the lack of materials which can effectively focus the radiation within a reasonable weight and volume. One solution is to use coded apertures: opaque plates perforated with multiple pinhole-like openings. If the openings are arranged in an appropriate pattern, then the images can be decoded and a clear image computed. Recently, computational imaging and the search for a means of producing programmable software-defined optics have revived interest in coded apertures. The former state-of-the-art masks, modified uniformly redundant arrays (MURAs), are effective for compact objects against uniform backgrounds, but have substantial drawbacks for extended scenes: (1) MURAs present an inherently ill-posed inversion problem that is unmanageable for large images, and (2) they are susceptible to diffraction: a diffracted MURA is no longer a MURA. We present a new class of coded apertures, separable Doubly-Toeplitz masks, which are efficiently decodable even for very large images, orders of magnitude faster than MURAs, and which remain decodable when diffracted. We implemented the masks using programmable spatial light modulators. Imaging experiments confirmed the effectiveness of separable Doubly-Toeplitz masks: images collected in natural light of extended outdoor scenes are rendered clearly.
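
    The efficiency of separable masks comes from the fact that, under an idealised separable forward model Y = A X Bᵀ, decoding reduces to two small one-dimensional inversions instead of one huge two-dimensional one. The sketch below illustrates that idea with pseudo-inverses; the matrices, regularisation, and function name are assumptions, and the paper's actual reconstruction may differ.

        import numpy as np

        def decode_separable(measurement, row_mask, col_mask, rcond=1e-3):
            """Decode a lensless measurement formed by a separable mask model.

            Assumes the idealised forward model  Y = A @ X @ B.T , where A and B are
            the Toeplitz-like row and column transfer matrices of the mask.
            """
            A_pinv = np.linalg.pinv(row_mask, rcond=rcond)
            B_pinv = np.linalg.pinv(col_mask, rcond=rcond)
            return A_pinv @ measurement @ B_pinv.T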

  18. A coded aperture compressive imaging array and its visual detection and tracking algorithms for surveillance systems.

    PubMed

    Chen, Jing; Wang, Yongtian; Wu, Hanxiao

    2012-10-29

    In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the required resolution of the coded mask and to facilitate the storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding moving-target detection and tracking algorithms, which work directly on the compressive sampling images, are developed. A mixture-of-Gaussians distribution is applied in the compressive image space to model the background image and perform foreground detection. For each moving target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is sparsely represented. An l1 optimization algorithm is used to solve for the sparse coefficients of the templates. Experimental results demonstrate that the low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results; however, the random Gaussian and Toeplitz phase masks achieve higher-resolution reconstructed images. Our tracking algorithm achieves a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.

  19. Image enhancement using MCNP5 code and MATLAB in neutron radiography.

    PubMed

    Tharwat, Montaser; Mohamed, Nader; Mongy, T

    2014-07-01

    This work presents a method that can be used to enhance the neutron radiography (NR) image for objects containing highly scattering materials like hydrogen, carbon and other light materials. The method uses the Monte Carlo code MCNP5 to simulate the NR process, obtain the flux distribution for each pixel of the image, and determine the scattered-neutron distribution that causes image blur; MATLAB is then used to subtract this scattered-neutron distribution from the initial image to improve its quality. This work was performed before the commissioning of the digital NR system in Jan. 2013. The MATLAB enhancement method is well suited to static film-based neutron radiography, while for the neutron imaging (NI) technique, image enhancement and quantitative measurement were performed efficiently using the ImageJ software. The enhanced image quality and quantitative measurements are presented in this work. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Thermodynamics of fractal universe

    NASA Astrophysics Data System (ADS)

    Sheykhi, Ahmad; Teimoori, Zeinab; Wang, Bin

    2013-01-01

    We investigate the thermodynamical properties of the apparent horizon in a fractal universe. We find that one can always rewrite the Friedmann equation of the fractal universe in the form of the entropy balance relation δQ = Th dSh, where δQ and Th are the energy flux and Unruh temperature seen by an accelerated observer just inside the apparent horizon. We find that the entropy Sh consists of two terms: the first obeys the usual area law, and the second is an entropy production term due to the nonequilibrium thermodynamics of the fractal universe. This shows that in a fractal universe, a treatment with nonequilibrium thermodynamics of spacetime may be needed. We also study the generalized second law (GSL) of thermodynamics in the framework of the fractal universe. When the temperature of the apparent horizon and the matter fields inside the horizon are equal, i.e. T = Th, the generalized second law of thermodynamics can be fulfilled provided the deceleration and equation-of-state parameters range either as -1 ⩽ q < 0, -1 ⩽ w < -1/3 or as q < -1, w < -1, which is consistent with recent observations. We also find that for Th = bT, with b < 1, the GSL of thermodynamics can be secured in a fractal universe by suitably choosing the fractal parameter β.

  1. SOC and Fractal Geometry

    NASA Astrophysics Data System (ADS)

    McAteer, R. T. J.

    2013-06-01

    When Mandelbrot, the father of modern fractal geometry, made this seemingly obvious statement he was trying to show that we should move out of our comfortable Euclidean space and adopt a fractal approach to geometry. The concepts and mathematical tools of fractal geometry provide insight into natural physical systems that Euclidean tools cannot. The benefit of applying fractal geometry to studies of Self-Organized Criticality (SOC) is even greater. SOC and fractal geometry share concepts of dynamic n-body interactions, apparent non-predictability, self-similarity, and an approach to global statistics in space and time that make these two areas naturally paired research techniques. Further, the iterative generation techniques used in both SOC models and in fractals mean they share common features and common problems. This chapter explores the strong historical connections between fractal geometry and SOC from both a mathematical and a conceptual understanding, explores modern-day interactions between these two topics, and discusses how this is likely to evolve into an even stronger link in the near future.

  2. Improved Fourier-based characterization of intracellular fractal features

    PubMed Central

    Xylas, Joanna; Quinn, Kyle P.; Hunter, Martin; Georgakoudi, Irene

    2012-01-01

    A novel Fourier-based image analysis method for measuring fractal features is presented which can significantly reduce artifacts due to non-fractal edge effects. The technique is broadly applicable to the quantitative characterization of internal morphology (texture) of image features with well-defined borders. In this study, we explore the capacity of this method for quantitative assessment of intracellular fractal morphology of mitochondrial networks in images of normal and diseased (precancerous) epithelial tissues. Using a combination of simulated fractal images and endogenous two-photon excited fluorescence (TPEF) microscopy, our method is shown to more accurately characterize the exponent of the high-frequency power spectral density (PSD) of these images in the presence of artifacts that arise due to cellular and nuclear borders. PMID:23188308
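
    The record characterizes the exponent of the high-frequency power spectral density. The sketch below shows a generic way to estimate such an exponent by radially averaging a 2-D periodogram and fitting a log-log slope; it omits the edge-artifact correction that is the paper's actual contribution, and the frequency range is illustrative.

        import numpy as np

        def psd_exponent(image):
            """Estimate the power-law exponent of a 2-D image's power spectral density."""
            img = image.astype(float) - image.mean()
            psd2d = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
            h, w = psd2d.shape
            y, x = np.indices((h, w))
            r = np.hypot(x - w // 2, y - h // 2).astype(int)   # integer radial frequency bins
            counts = np.bincount(r.ravel())
            radial = np.bincount(r.ravel(), weights=psd2d.ravel()) / np.maximum(counts, 1)
            freqs = np.arange(1, min(h, w) // 2)               # skip DC, stay below the Nyquist ring
            slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
            return slope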

  3. How important is tumour shape? Quantification of the epithelial-connective tissue interface in oral lesions using local connected fractal dimension analysis.

    PubMed

    Landini, G; Rippin, J W

    1996-06-01

    Quantification of the local complexity of the epithelial-connective tissue interface (ECTI) in normal mucosa, epithelial dysplasia, and squamous cell carcinoma of the floor of the mouth was investigated by estimating the local connected fractal dimension in tissue profiles from histological sections. The use of certain parameters of the distribution of the local connected fractal dimensions of the ECTI classifies the cases belonging to these three histopathological diagnoses with 85 per cent accuracy by means of linear discriminant analysis. The values of the local fractal dimension were also used to produce colour-coded dimensional images of the ECTI, to highlight locations with higher irregularity that may correlate with locally invasive 'higher-risk' areas.

  4. Improving the Calibration of Image Sensors Based on IOFBs, Using Differential Gray-Code Space Encoding

    PubMed Central

    Fernández, Pedro R.; Galilea, José Luis Lázaro; Vicente, Alfredo Gardel; Muñoz, Ignacio Bravo; Cano García, Ángel E.; Vázquez, Carlos Luna

    2012-01-01

    This paper presents a fast calibration method to determine the transfer function for spatial correspondences in image transmission devices with Incoherent Optical Fiber Bundles (IOFBs), by performing a scan of the input, using differential patterns generated from a Gray code (Differential Gray-Code Space Encoding, DGSE). The results demonstrate that this technique provides a noticeable reduction in processing time and better quality of the reconstructed image compared to other, previously employed techniques, such as point or fringe scanning, or even other known space encoding techniques. PMID:23012530
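
    For reference, binary-reflected Gray codes and the stripe patterns built from them are straightforward to generate; the sketch below shows a generic construction (the differential pairs described in the record would additionally project the logical negation of each pattern). The pattern sizes and function names are illustrative.

        import numpy as np

        def gray_code(n):
            """Binary-reflected Gray code of integer n (also works elementwise on arrays)."""
            return n ^ (n >> 1)

        def gray_stripe_patterns(width, n_bits):
            """Column-wise Gray-code stripe patterns for spatially encoding a scan line.

            Returns an (n_bits, width) boolean array; pattern k is bit k of the Gray
            code of each column index.
            """
            codes = gray_code(np.arange(width))
            return np.array([(codes >> k) & 1 for k in range(n_bits)], dtype=bool)

        patterns = gray_stripe_patterns(width=1024, n_bits=10)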

  5. Context-Aware and Locality-Constrained Coding for Image Categorization

    PubMed Central

    2014-01-01

    Improving the coding strategy for BOF (Bag-of-Features) based feature design has drawn increasing attention in recent image categorization works. However, the ambiguity in the coding procedure still impedes its further development. In this paper, we introduce a context-aware and locality-constrained coding (CALC) approach that uses context information for describing objects in a discriminative way. This is generally achieved by learning a word-to-word co-occurrence prior and imposing context information on locality-constrained coding. Firstly, the local context of each category is evaluated by learning a word-to-word co-occurrence matrix representing the spatial distribution of local features in a neighboring region. Then, the learned co-occurrence matrix is used for measuring the context distance between local features and code words. Finally, a coding strategy is proposed that simultaneously considers locality in feature space and in context space while introducing feature weights. This novel coding strategy not only semantically preserves information in coding, but also has the ability to alleviate the noise distortion of each class. Extensive experiments on several available datasets (Scene-15, Caltech101, and Caltech256) are conducted to validate the superiority of our algorithm by comparing it with baselines and recently published methods. Experimental results show that our method significantly improves the performance of the baselines and achieves comparable or even better performance than the state of the art. PMID:24977215

  6. Multiple description distributed image coding with side information for mobile wireless transmission

    NASA Astrophysics Data System (ADS)

    Wu, Min; Song, Daewon; Chen, Chang Wen

    2005-03-01

    Multiple description coding (MDC) is a source coding technique that involves coding the source information into multiple descriptions and then transmitting them over different channels in a packet network or error-prone wireless environment, to achieve graceful degradation if parts of the descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero-tree image coding system for mobile wireless transmission. We provide two innovations to achieve an excellent error-resilient capability. First, when MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. We consider using such correlation, as well as a potentially error-corrupted description, as side information in the decoding, formulating the MDC decoding as a Wyner-Ziv decoding problem. If only part of the descriptions is lost, their correlation information is still available, and the proposed Wyner-Ziv decoder can recover the lost description by using the correlation information and the error-corrupted description as side information. Secondly, in each description, single-bitstream wavelet zero-tree coding is very vulnerable to channel errors: the first bit error may cause the decoder to discard all subsequent bits, whether or not they are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with the multiple wavelet tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to the parent-child relationship and then code them separately with the SPIHT algorithm to form multiple bitstreams. Such decomposition reduces error propagation and therefore improves the error-correcting capability of the Wyner-Ziv decoder. Experimental results show that the proposed scheme not only exhibits an excellent error-resilient performance but also demonstrates graceful degradation over the packet

  7. A new pad-based neutron detector for stereo coded aperture thermal neutron imaging

    NASA Astrophysics Data System (ADS)

    Dioszegi, I.; Yu, B.; Smith, G.; Schaknowski, N.; Fried, J.; Vanier, P. E.; Salwen, C.; Forman, L.

    2014-09-01

    A new coded aperture thermal neutron imager system has been developed at Brookhaven National Laboratory. The cameras use a new type of position-sensitive 3He-filled ionization chamber, in which an anode plane is composed of an array of pads with independent acquisition channels. The charge is collected on each of the individual 5 x 5 mm2 anode pads (48 x 48 in total, corresponding to a 24 x 24 cm2 sensitive area) and read out by application-specific integrated circuits (ASICs). The new design has several advantages for coded-aperture imaging applications in the field, compared to the previous generation of wire-grid based neutron detectors. Among these are its rugged design, lighter weight and use of non-flammable stopping gas. The pad-based readout occurs in parallel circuits, making it capable of high count rates and also suitable for performing data analysis and imaging on an event-by-event basis. The spatial resolution of the detector can be better than the pixel size by using a charge sharing algorithm. In this paper we report on the development and performance of the new pad-based neutron camera, describe a charge sharing algorithm to achieve sub-pixel spatial resolution, and present the first stereoscopic coded aperture images of thermalized neutron sources using the new coded aperture thermal neutron imager system.

  8. Computed neutron tomography and coded aperture holography from real time neutron images

    NASA Astrophysics Data System (ADS)

    Sulcoski, Mark F.

    1986-10-01

    The uses of neutron tomography and holography for nondestructive evaluation applications are developed and investigated. A real-time neutron imaging system coupled with an image processing system is used to obtain neutron tomographs. Experiments utilized a Thomson-CSF neutron camera coupled to a computer-based image processing system, and included configuring a reactor neutron beam port for neutron imaging as well as developing and implementing a convolution-method tomographic algorithm suitable for neutron imaging. Results to date have demonstrated the proof of principle of this neutron tomography system. Coded aperture neutron holography is under investigation, using a cadmium Fresnel zone plate as the coded aperture and the real-time imaging system as the detection and holographic reconstruction system. Coded aperture imaging utilizes the zone plate to encode the scattered radiation; the pattern recorded at the detector is used as input data to a convolution algorithm which reconstructs the scattering source. This technique has not yet been successfully implemented and is still under development.

  9. Chirp-Coded Ultraharmonic Imaging with a Modified Clinical Intravascular Ultrasound System.

    PubMed

    Shekhar, Himanshu; Huntzicker, Steven; Awuor, Ivy; Doyley, Marvin M

    2016-11-01

    Imaging plaque microvasculature with contrast-enhanced intravascular ultrasound (IVUS) could help clinicians evaluate atherosclerosis and guide therapeutic interventions. In this study, we evaluated the performance of chirp-coded ultraharmonic imaging using a modified IVUS system (iLab™, Boston Scientific/Scimed) equipped with clinically available peripheral and coronary imaging catheters. Flow phantoms perfused with a phospholipid-encapsulated contrast agent were visualized using ultraharmonic imaging at 12 MHz and 30 MHz transmit frequencies. Flow channels with diameters as small as 0.8 mm and 0.5 mm were visualized using the peripheral and coronary imaging catheters. Radio-frequency signals were acquired at standard IVUS rotation speed, which resulted in a frame rate of 30 frames/s. Contrast-to-tissue ratios up to 17.9 ± 1.11 dB and 10.7 ± 2.85 dB were attained by chirp-coded ultraharmonic imaging at 12 MHz and 30 MHz transmit frequencies, respectively. These results demonstrate the feasibility of performing ultraharmonic imaging at standard frame rates with clinically available IVUS catheters using chirp-coded excitation.

  10. Improved coded exposure for enhancing imaging quality and detection accuracy of moving targets

    NASA Astrophysics Data System (ADS)

    Mao, Baoqi; Chen, Li; Han, Lin; Shen, Weimin

    2016-09-01

    The blur due to rapid relative motion between the scene and the camera during exposure has a well-known influence on the quality of the acquired image and, in turn, on target detection. An improved coded exposure is introduced in this paper to remove the image blur and obtain a high-quality image, so that the detection accuracy of surface defects and edge contours of moving objects can be enhanced. The improved exposure method takes advantage of a code look-up table to control the exposure process and image restoration. The restored images have higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) than traditional deblurring algorithms such as the Wiener and regularized filtering methods. The edge contours and defects of part samples, which move at constant speed relative to the industrial camera used in our experiment, are detected from the restored images with the Sobel operator. Experimental results verify that the improved coded exposure is better suited to imaging moving objects and detecting moving targets than the traditional approach.

  11. Context Tree-Based Image Contour Coding Using a Geometric Prior

    NASA Astrophysics Data System (ADS)

    Zheng, Amin; Cheung, Gene; Florencio, Dinei

    2017-02-01

    If object contours in images are coded efficiently as side information, then they can facilitate advanced image/video coding techniques, such as graph Fourier transform coding or motion prediction of arbitrarily shaped pixel blocks. In this paper, we study the problem of lossless and lossy compression of detected contours in images. Specifically, we first convert a detected object contour composed of contiguous between-pixel edges to a sequence of directional symbols drawn from a small alphabet. To encode the symbol sequence using arithmetic coding, we compute an optimal variable-length context tree (VCT) T via a maximum a posteriori (MAP) formulation to estimate the symbols' conditional probabilities. MAP prevents us from overfitting to a small training set X of past symbol sequences by identifying a VCT T that achieves both a high likelihood P(X|T) of observing X given T and a large geometric prior P(T) stating that image contours are more often straight than curvy. For the lossy case, we design efficient dynamic programming (DP) algorithms that optimally trade off the coding rate of an approximate contour, given a VCT T, against two notions of distortion of the approximate contour with respect to the original contour. To reduce the size of the DP tables, a total suffix tree is derived from a given VCT T for compact table entry indexing, reducing complexity. Experimental results show that for lossless contour coding, our proposed algorithm outperforms state-of-the-art context-based schemes consistently for both small and large training datasets. For lossy contour coding, our algorithms outperform comparable schemes in the literature in rate-distortion performance.
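
    The first step described in this record, converting a contour to a sequence of directional symbols, can be illustrated with a simple chain-code mapping. The sketch below uses a 4-connected absolute-direction alphabet for simplicity; the paper's alphabet (e.g. relative turns) may differ.

        # Map a step between adjacent contour points to a directional symbol.
        STEP_TO_SYMBOL = {(0, 1): 'E', (0, -1): 'W', (1, 0): 'S', (-1, 0): 'N'}

        def contour_to_symbols(points):
            """Convert an ordered list of (row, col) contour points to direction symbols."""
            symbols = []
            for (r0, c0), (r1, c1) in zip(points[:-1], points[1:]):
                symbols.append(STEP_TO_SYMBOL[(r1 - r0, c1 - c0)])
            return symbols

        print(contour_to_symbols([(0, 0), (0, 1), (1, 1), (1, 2)]))   # ['E', 'S', 'E']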

  12. Quantum image pseudocolor coding based on the density-stratified method

    NASA Astrophysics Data System (ADS)

    Jiang, Nan; Wu, Wenya; Wang, Luo; Zhao, Na

    2015-05-01

    Pseudocolor processing is a branch of image enhancement. It maps grayscale images to color images to make them more visually appealing or to highlight certain parts of the image. This paper proposes a quantum image pseudocolor coding scheme based on the density-stratified method, which defines a colormap and maps density values from gray to color in parallel according to that colormap. Firstly, two data structures, the quantum image representation GQIR and the quantum colormap QCR, are reviewed or proposed. Then, the quantum density-stratified algorithm is presented. Based on these, the quantum realization in the form of circuits is given. The main advantages of the quantum version of pseudocolor processing over the classical approach are that it needs less memory and can speed up the computation. Two kinds of examples help to describe the scheme further. Finally, future work is analyzed.

  13. Color-coded LED microscopy for multi-contrast and quantitative phase-gradient imaging

    PubMed Central

    Lee, Donghak; Ryu, Suho; Kim, Uihan; Jung, Daeseong; Joo, Chulmin

    2015-01-01

    We present a multi-contrast microscope based on color-coded illumination and computation. A programmable three-color light-emitting diode (LED) array illuminates a specimen, in which each color corresponds to a different illumination angle. A single color image sensor records light transmitted through the specimen, and images at each color channel are then separated and utilized to obtain bright-field, dark-field, and differential phase contrast (DPC) images simultaneously. Quantitative phase imaging is also achieved based on DPC images acquired with two different LED illumination patterns. The multi-contrast and quantitative phase imaging capabilities of our method are demonstrated by presenting images of various transparent biological samples. PMID:26713205
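
    The differential phase contrast (DPC) image mentioned in this record is commonly computed as a normalized difference of two images acquired under complementary half-pattern illumination. The sketch below shows that computation; which color channels correspond to which illumination angles is an assumption of the setup, not specified here.

        import numpy as np

        def dpc_image(i_a, i_b, eps=1e-6):
            """Differential phase contrast from two complementary-illumination images:
            (I_a - I_b) / (I_a + I_b), with a small epsilon to avoid division by zero."""
            i_a = i_a.astype(float)
            i_b = i_b.astype(float)
            return (i_a - i_b) / (i_a + i_b + eps)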

  14. Source reconstruction for neutron coded-aperture imaging: A sparse method.

    PubMed

    Wang, Dongming; Hu, Huasi; Zhang, Fengna; Jia, Qinggang

    2017-08-01

    Neutron coded-aperture imaging has been developed as an important diagnostic for inertial fusion studies in recent decades. It is used to measure the distribution of neutrons produced in deuterium-tritium plasma. Source reconstruction is an essential part of the coded-aperture imaging. In this paper, we applied a sparse reconstruction method to neutron source reconstruction. This method takes advantage of the sparsity of the source image. Monte Carlo neutron transport simulations were performed to obtain the system response. An interpolation method was used while obtaining the spatially variant point spread functions on each point of the source in order to reduce the number of point spread functions that needs to be calculated by the Monte Carlo method. Source reconstructions from simulated images show that the sparse reconstruction method can result in higher signal-to-noise ratio and less distortion at a relatively high statistical noise level.

  15. Hexagonal Uniformly Redundant Arrays (HURAs) for scintillator based coded aperture neutron imaging

    SciTech Connect

    Gamage, K.A.A.; Zhou, Q.

    2015-07-01

    A series of Monte Carlo simulations has been conducted, making use of the EJ-426 neutron scintillator detector, to investigate the potential of using hexagonal uniformly redundant arrays (HURAs) for scintillator-based coded aperture neutron imaging. This type of scintillator material has a low sensitivity to gamma rays and is therefore of particular use in a system whose source emits both neutrons and gamma rays. The simulations used an AmBe source; neutron images have been produced using different coded-aperture materials (boron-10, cadmium-113 and gadolinium-157), and the location error has also been estimated. In each case the neutron image clearly shows the location of the source with a relatively small location error. Neutron images with high resolution can readily be used to identify and locate nuclear materials precisely in nuclear security and nuclear decommissioning applications. (authors)

  16. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark.

    PubMed

    Zhang, Tiankui; Hu, Huasi; Jia, Qinggang; Zhang, Fengna; Chen, Da; Li, Zhenghong; Wu, Yuelei; Liu, Zhihua; Hu, Guang; Guo, Wei

    2012-11-01

    Monte Carlo simulation of neutron coded imaging, based on an encoding aperture, for a Z-pinch with a large 5 mm radius field of view has been investigated, and the coded image has been obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A "residual watermark," which emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation, owing to its amplification of statistical fluctuations, has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, and an identification of the equivalent radius of the aperture was provided. By using the equivalent radius, the reconstruction can also be accomplished without knowing the point spread function (PSF) of the actual aperture. The reconstruction result is close to that obtained by using the PSF of the actual aperture.

  17. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark

    SciTech Connect

    Zhang Tiankui; Hu Huasi; Jia Qinggang; Zhang Fengna; Liu Zhihua; Hu Guang; Guo Wei; Chen Da; Li Zhenghong; Wu Yuelei

    2012-11-15

    Monte Carlo simulation of neutron coded imaging, based on an encoding aperture, for a Z-pinch with a large 5 mm radius field of view has been investigated, and the coded image has been obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A 'residual watermark,' which emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation, owing to its amplification of statistical fluctuations, has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, and an identification of the equivalent radius of the aperture was provided. By using the equivalent radius, the reconstruction can also be accomplished without knowing the point spread function (PSF) of the actual aperture. The reconstruction result is close to that obtained by using the PSF of the actual aperture.

  18. Source reconstruction for neutron coded-aperture imaging: A sparse method

    NASA Astrophysics Data System (ADS)

    Wang, Dongming; Hu, Huasi; Zhang, Fengna; Jia, Qinggang

    2017-08-01

    Neutron coded-aperture imaging has been developed as an important diagnostic for inertial fusion studies in recent decades. It is used to measure the distribution of neutrons produced in deuterium-tritium plasma. Source reconstruction is an essential part of the coded-aperture imaging. In this paper, we applied a sparse reconstruction method to neutron source reconstruction. This method takes advantage of the sparsity of the source image. Monte Carlo neutron transport simulations were performed to obtain the system response. An interpolation method was used while obtaining the spatially variant point spread functions on each point of the source in order to reduce the number of point spread functions that needs to be calculated by the Monte Carlo method. Source reconstructions from simulated images show that the sparse reconstruction method can result in higher signal-to-noise ratio and less distortion at a relatively high statistical noise level.

  19. Fractal frontiers in cardiovascular magnetic resonance: towards clinical implementation.

    PubMed

    Captur, Gabriella; Karperien, Audrey L; Li, Chunming; Zemrak, Filip; Tobon-Gomez, Catalina; Gao, Xuexin; Bluemke, David A; Elliott, Perry M; Petersen, Steffen E; Moon, James C

    2015-09-07

    Many of the structures and parameters that are detected, measured and reported in cardiovascular magnetic resonance (CMR) have at least some properties that are fractal, meaning complex and self-similar at different scales. To date however, there has been little use of fractal geometry in CMR; by comparison, many more applications of fractal analysis have been published in MR imaging of the brain. This review explains the fundamental principles of fractal geometry, places the fractal dimension into a meaningful context within the realms of Euclidean and topological space, and defines its role in digital image processing. It summarises the basic mathematics, highlights strengths and potential limitations of its application to biomedical imaging, shows key current examples and suggests a simple route for its successful clinical implementation by the CMR community. By simplifying some of the more abstract concepts of deterministic fractals, this review invites CMR scientists (clinicians, technologists, physicists) to experiment with fractal analysis as a means of developing the next generation of intelligent quantitative cardiac imaging tools.

  20. Modeling Fractal Dynamics

    NASA Astrophysics Data System (ADS)

    West, Bruce J.

    The proper methodology for describing the dynamics of certain complex phenomena and fractal time series is the fractional calculus through the fractional Langevin equation discussed herein and applied in a biomedical context. We show that a fractional operator (derivative or integral) acting on a fractal function, yields another fractal function, allowing us to construct a fractional Langevin equation to describe the evolution of a fractal statistical process, for example, human gait and cerebral blood flow. The goal of this talk is to make clear how certain complex phenomena, such as those that are abundantly present in human physiology, can be faithfully described using dynamical models involving fractional differential stochastic equations. These models are tested against existing data sets and shown to describe time series from complex physiologic phenomena quite well.