Sample records for fractal image compression

  1. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

In this work, a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transform and a fractal coding algorithm. The decompressed image is subtracted from the original to obtain a residual image, which is coded with the Huffman algorithm. Simulation results show that the proposed scheme achieves an infinite peak signal-to-noise ratio (PSNR) at a higher compression ratio than a typical lossless method. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results for several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.

  2. A comparison of the fractal and JPEG algorithms

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Shahshahani, M.

    1991-01-01

    A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root mean square criterion and a peak signal to noise criterion.

  3. Perceptually lossless fractal image compression

    NASA Astrophysics Data System (ADS)

    Lin, Huawu; Venetsanopoulos, Anastasios N.

    1996-02-01

    According to the collage theorem, the encoding distortion for fractal image compression is directly related to the metric used in the encoding process. In this paper, we introduce a perceptually meaningful distortion measure based on the human visual system's nonlinear response to luminance and the visual masking effects. Blackwell's psychophysical raw data on contrast threshold are first interpolated as a function of background luminance and visual angle, and are then used as an error upper bound for perceptually lossless image compression. For a variety of images, experimental results show that the algorithm produces a compression ratio of 8:1 to 10:1 without introducing visual artifacts.
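The error-upper-bound idea can be sketched as follows; the linear threshold function is a hypothetical stand-in for the interpolated Blackwell contrast-threshold data, not the paper's actual model.

```python
# Perceptual acceptance test: a coded block is "perceptually lossless" when
# every pixel error stays under a contrast threshold that grows with the
# background luminance. The threshold function below is illustrative only.

def threshold(luminance):
    return 2.0 + 0.02 * luminance   # brighter background tolerates more error

def perceptually_lossless(original, coded):
    return all(abs(o - c) <= threshold(o) for o, c in zip(original, coded))

assert perceptually_lossless([10, 100, 200], [11, 103, 195])
assert not perceptually_lossless([10, 100, 200], [20, 103, 195])
```

In the paper's setting, this luminance-dependent bound replaces a fixed RMS criterion in the collage matching step.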

  4. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnosis-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was applied to the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method improves compression performance and achieves a balance between compression ratio and visual image quality. PMID:23049544
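The variable-block-size partitioning can be sketched as a quadtree recursion; plain variance below is a stand-in for the local fractal dimension the paper uses to measure block complexity.

```python
# Quadtree partitioning driven by a complexity measure: a block is split into
# four quadrants until it is simple enough (or small enough) to become one
# sub-block for vector quantization. Variance stands in for the LFD here.

def variance(block):
    flat = [v for row in block for v in row]
    mean = sum(flat) / len(flat)
    return sum((v - mean) ** 2 for v in flat) / len(flat)

def subblock(block, r, c, size):
    return [row[c:c + size] for row in block[r:r + size]]

def quadtree(block, x=0, y=0, threshold=100.0, min_size=2):
    n = len(block)
    if n <= min_size or variance(block) <= threshold:
        return [(x, y, n)]  # leaf: one sub-block to vector-quantize
    h = n // 2
    leaves = []
    for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
        leaves += quadtree(subblock(block, dr, dc, h),
                           x + dc, y + dr, threshold, min_size)
    return leaves

flat = [[10] * 4 for _ in range(4)]            # simple block: stays whole
assert quadtree(flat) == [(0, 0, 4)]
mixed = [[0, 0, 200, 200], [0, 0, 200, 200],
         [200, 200, 0, 0], [200, 200, 0, 0]]   # complex block: splits
assert len(quadtree(mixed)) == 4
```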

  6. Fractal-Based Image Compression

    DTIC Science & Technology

    1990-01-01

…the Ziv-Lempel-Welch (ZLW) compression algorithm [5] [4] was used for experiments and for software development. Additional thanks to Roger Boss, Bill… …and with the minimum number of maps. [5] J. Ziv and A. Lempel, Compression of Individual Sequences via Variable-Rate…, vol. 17, no. 6… …transient and should be discarded. 2.5 Collage Theorem… Deterministic Algorithm for IFS Attractor: for fast image compression the best…

  7. Fractal-Based Image Compression, II

    DTIC Science & Technology

    1990-06-01

…data for figure 3… 1. INTRODUCTION The need for data compression is not new. With humble beginnings such as the use of acronyms and abbreviations in spoken and written word, the methods for data compression became more advanced as the need for information grew. The Morse code, developed because of the need for faster telegraphy, was an early example of a data compression technique. Largely because of the…

  8. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  9. Intelligent fuzzy approach for fast fractal image compression

    NASA Astrophysics Data System (ADS)

    Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila

    2014-12-01

Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computations of FIC. In the first phase, range and domain blocks are arranged according to an edge property. In the second, an imperialist competitive algorithm (ICA) is applied to the classified blocks. To maintain the quality of the retrieved image and accelerate the algorithm, the solutions are divided into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The results exhibit better performance than genetic algorithm (GA)-based and full-search algorithms in terms of the number of MSE computations: the proposed algorithm was 463 times faster than the full-search algorithm, while the quality of the retrieved image did not change considerably.
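The MSE computations being counted come from the affine match between a range block and each candidate domain block. A minimal 1-D full-search sketch of that baseline cost (the one the ICA-based search reduces):

```python
# Full-search fractal encoding cost: for each range block, one least-squares
# affine fit (contrast s, brightness o) and one MSE evaluation per domain
# block. Blocks are 1-D lists here purely for brevity.

def affine_mse(range_blk, domain_blk):
    n = len(range_blk)
    sd, sr = sum(domain_blk), sum(range_blk)
    sdd = sum(d * d for d in domain_blk)
    sdr = sum(d * r for d, r in zip(domain_blk, range_blk))
    denom = n * sdd - sd * sd
    s = (n * sdr - sd * sr) / denom if denom else 0.0   # contrast scaling
    o = (sr - s * sd) / n                               # brightness offset
    return sum((s * d + o - r) ** 2
               for d, r in zip(domain_blk, range_blk)) / n

def full_search(range_blk, domain_pool):
    # one MSE computation per domain block -- the cost ICA/GA methods cut down
    return min(range(len(domain_pool)),
               key=lambda i: affine_mse(range_blk, domain_pool[i]))

pool = [[0, 1, 2, 3], [0, 1, 4, 9]]
assert full_search([0, 1, 4, 9], pool) == 1   # exact affine match wins
```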

  10. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
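The DPCM-before-entropy-coding effect described above is easy to demonstrate. In this sketch zlib stands in for the Huffman/LZW coders discussed (an assumption for illustration, not the article's tested software):

```python
import zlib

# DPCM: replace each pixel by its difference from the previous one. On smooth
# data the differences cluster near zero, which any entropy coder exploits.

def dpcm(pixels):
    prev, out = 0, []
    for p in pixels:
        out.append((p - prev) % 256)   # wrap into a byte; invertible mod 256
        prev = p
    return bytes(out)

def undo_dpcm(data):
    prev, out = 0, []
    for d in data:
        prev = (prev + d) % 256
        out.append(prev)
    return out

ramp = [i % 256 for i in range(4096)]          # smooth "image" row
plain = zlib.compress(bytes(ramp))
diffed = zlib.compress(dpcm(ramp))
assert undo_dpcm(dpcm(ramp)) == ramp           # the transform is lossless
assert len(diffed) < len(plain)                # differences compress better
```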

  11. Predicting beauty: fractal dimension and visual complexity in art.

    PubMed

    Forsythe, A; Nadal, M; Sheehy, N; Cela-Conde, C J; Sawey, M

    2011-02-01

Visual complexity has long been known to be a significant predictor of preference for artistic works. The first study reported here examines the extent to which perceived visual complexity in art can be successfully predicted using automated measures of complexity. Contrary to previous findings, the most successful predictor of visual complexity was GIF compression. The second study examined the extent to which fractal dimension could account for judgments of perceived beauty. The fractal dimension measure accounts for more of the variance in judgments of perceived beauty in visual art than measures of visual complexity alone, particularly for abstract and natural images. Results also suggest that when colour is removed from an artistic image, observers are unable to make meaningful judgments as to its beauty. ©2010 The British Psychological Society.
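The underlying idea, compressed size as a complexity proxy, can be sketched with zlib in place of GIF compression (an assumption for illustration, not the study's measure):

```python
import random
import zlib

# Compression-based complexity: more complex data compresses less, so the
# ratio of compressed size to raw size rises with visual complexity.

def complexity(data: bytes) -> float:
    return len(zlib.compress(data)) / len(data)

flat = bytes([128]) * 10000                    # uniform "image": simple
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10000))  # maximal detail
assert complexity(flat) < complexity(noisy)
```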

  12. A novel fractal image compression scheme with block classification and sorting based on Pearson's correlation coefficient.

    PubMed

    Wang, Jianji; Zheng, Nanning

    2013-09-01

Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from high computational complexity in encoding. Although many schemes have been published to speed up encoding, they do not easily satisfy the requirements on encoding time or reconstructed image quality. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks in each class with respect to their APCCs to a preset block, the matching domain block for a range block can be searched within the subset of domain blocks whose APCCs to the preset block are close to that of the range block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
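A minimal sketch of the APCC measure itself, which the scheme uses for classification and sorting before any MSE is computed:

```python
import math

# APCC: absolute Pearson correlation between two equal-sized blocks.
# Blocks related by an affine map (b = s*a + o) have APCC = 1, which is why
# high APCC identifies good affine-match candidates cheaply.

def apcc(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return abs(cov / (va * vb)) if va and vb else 0.0

range_blk = [1, 2, 3, 4]
good = [10, 20, 30, 40]     # affinely related: APCC = 1
bad = [5, 1, 4, 2]
assert abs(apcc(range_blk, good) - 1.0) < 1e-9
assert apcc(range_blk, good) > apcc(range_blk, bad)
```

Sorting domain blocks by APCC to a fixed preset block then narrows the search to near neighbors in APCC order.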

  13. Determination of Uniaxial Compressive Strength of Ankara Agglomerate Considering Fractal Geometry of Blocks

    NASA Astrophysics Data System (ADS)

    Coskun, Aycan; Sonmez, Harun; Ercin Kasapoglu, K.; Ozge Dinc, S.; Celal Tunusluoglu, M.

    2010-05-01

The uniaxial compressive strength (UCS) of rock material is a crucial parameter for the design of slopes, tunnels and foundations constructed in or on a geological medium. However, the preparation of high-quality cores from geological mixtures or fragmented rocks such as melanges, fault rocks, coarse pyroclastic rocks, breccias and sheared serpentinites is often extremely difficult. According to studies in the literature, this type of geological material may be grouped into welded and unwelded bimrocks. The success rate of core-sample preparation from welded bimrocks is slightly better than for unwelded ones. Therefore, several studies have been performed on welded bimrocks to understand the mechanical behavior of geological mixture materials composed of stronger and weaker components (Gokceoglu, 2002; Sonmez et al., 2004; Sonmez et al., 2006; Kahraman et al., 2008). The overall strength of bimrocks generally depends on the strength contrast between blocks and matrix; the type and strength of the matrix; the type, size, strength, shape and orientation of the blocks; and the volumetric block proportion. In previously proposed prediction models, the UCS of unwelded bimrocks may be determined by decreasing the UCS of the matrix according to the volumetric block proportion, while that of welded bimrocks can be predicted by considering the UCS of matrix and blocks together (Lindquist, 1994; Lindquist and Goodman, 1994; Sonmez et al., 2006; Sonmez et al., 2009). However, only a few attempts have addressed the effect of block shape and orientation on the strength of bimrocks (Lindquist, 1994; Kahraman et al., 2008). In this study, the Ankara agglomerate, which is composed of andesite blocks surrounded by a weak tuff matrix, was selected as the study material. Image analyses were performed on the bottom, top and side faces of cores to identify volumetric block proportions.
In addition to the image analyses, the andesite blocks on the bottom, top and side faces were digitized for the determination of fractal dimensions. To determine the fractal dimensions of more than a hundred andesite blocks in the cores, a computer program named FRACRUN was developed. Fractal geometry has been a practical and popular tool for describing particularly irregularly shaped bodies ever since the theory of fractals was developed by Mandelbrot (1967) (Hyslip and Vallejo, 1997; Kruhl and Nega, 1996; Bagde et al., 2002; Gulbin and Evangulova, 2003; Pardini, 2003; Kolay and Kayabali, 2006; Hamdi, 2008; Zorlu, 2009; Sezer, 2009). Although several methods exist for determining fractal dimensions, the square grid-cell count method for 2D and the segment count method for 1D were followed in the algorithm of FRACRUN. FRACRUN is capable of determining the fractal dimensions of many closed polygons on a single surface. In this study, a database composed of the uniaxial compressive strength, volumetric block proportion, fractal dimensions and number of blocks for each core was established. Finally, prediction models were developed by regression analyses and compared with the empirical equations proposed by Sonmez et al. (2006). Acknowledgement: this study is a product of an ongoing project supported by TUBITAK (The Scientific and Technological Research Council of Turkey - Project No: 108Y002). References: Bagde, M.N., Raina, A.K., Chakraborty, A.K., Jethwa, J.L., 2002. Rock mass characterization by fractal dimension. Engineering Geology 63, 141-155. Gokceoglu, C., 2002. A fuzzy triangular chart to predict the uniaxial compressive strength of the Ankara agglomerates from their petrographic composition. Engineering Geology 66 (1-2), 39-51. Gulbin, Y.L., Evangulova, E.B., 2003. Morphometry of quartz aggregates in granites: fractal images referring to nucleation and growth processes. Mathematical Geology 35 (7), 819-833. Hamdi, E., 2008. A fractal description of simulated 3D discontinuity networks.
Rock Mechanics and Rock Engineering 41, 587-599. Hyslip, J.P., Vallejo, L.E., 1997. Fractal analysis of the roughness and size distribution of granular materials. Engineering Geology 48, 231-244. Kahraman, S., Alber, M., Fener, M., Gunaydin, O., 2008. Evaluating the geomechanical properties of Misis fault breccia (Turkey). Int. J. Rock Mech. Min. Sci. 45 (8), 1469-1479. Kolay, E., Kayabali, K., 2006. Investigation of the effect of aggregate shape and surface roughness on the slake durability index using the fractal dimension approach. Engineering Geology 86, 271-294. Kruhl, J.H., Nega, M., 1996. The fractal shape of sutured quartz grain boundaries: application as a geothermometer. Geologische Rundschau 85, 38-43. Lindquist, E.S., 1994. The strength and deformation properties of melange. PhD thesis, University of California, Berkeley, 264p. Lindquist, E.S., Goodman, R.E., 1994. The strength and deformation properties of the physical model melange. In: Nelson, P.P., Laubach, S.E. (Eds.), Proceedings of the First North American Rock Mechanics Conference (NARMS), Austin, Texas. Rotterdam: A.A. Balkema. Pardini, G., 2003. Fractal scaling of surface roughness in artificially weathered smectite-rich soil regoliths. Geoderma 117, 157-167. Sezer, E., 2009. A computer program for fractal dimension (FRACEK) with application to the characterization of mass movement types. Computers and Geosciences (doi:10.1016/j.cageo.2009.04.006). Sonmez, H., Tuncay, E., Gokceoglu, C., 2004. Models to predict the uniaxial compressive strength and the modulus of elasticity for Ankara Agglomerate. Int. J. Rock Mech. Min. Sci. 41 (5), 717-729. Sonmez, H., Gokceoglu, C., Medley, E.W., Tuncay, E., Nefeslioglu, H.A., 2006. Estimating the uniaxial compressive strength of a volcanic bimrock. Int. J. Rock Mech. Min. Sci. 43 (4), 554-561. Zorlu, K., 2008. Description of the weathering states of building stones by fractal geometry and fuzzy inference system in the Olba ancient city (Southern Turkey). Engineering Geology 101, 124-133.
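The square grid-cell (box) count method mentioned for FRACRUN can be sketched as follows; the diagonal-line example is illustrative, not the study's data:

```python
import math

# Grid-cell (box) counting: count occupied cells N(s) at several grid sizes s
# and estimate the fractal dimension as the slope of log N(s) vs log(1/s).

def box_count(points, size):
    return len({(x // size, y // size) for x, y in points})

def fractal_dimension(points, sizes=(1, 2, 4, 8, 16)):
    logs = [(math.log(1 / s), math.log(box_count(points, s))) for s in sizes]
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    num = sum((x - mx) * (y - my) for x, y in logs)
    den = sum((x - mx) ** 2 for x, _ in logs)
    return num / den          # least-squares slope = estimated dimension

line = [(i, i) for i in range(256)]   # a straight line has dimension 1
d = fractal_dimension(line)
assert abs(d - 1.0) < 0.05
```

More irregular block outlines fill more cells at fine scales, pushing the slope above 1.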

  14. Fractal-Based Image Compression

    DTIC Science & Technology

    1989-09-01

6. A Mercedes-Benz symbol generated using an IFS code… 7. (a) U-A fern and (b) A-0 fern generated with RIFS codes… 8. Construction of the Mercedes-Benz symbol using RIFS… 9. The regenerated perfect image of the Mercedes-Benz symbol using RIFS… …quite often, it cannot be done with a reasonable number of transforms. As an example, the Mercedes-Benz symbol generated using an IFS code is illustrated…

  15. A Comparison of Some of the Most Current Methods of Image Compression

    DTIC Science & Technology

    1993-06-01

…found FrameMaker (available on the Sun) to be very flexible with imported files. It requires them to be in Raster format. …Access: a. Fractal… …account manager must set up a Sun account for access to FrameMaker, Sunvision, etc. Be sure to specify the utilities needed when signing up for an…

  16. Effect of Fractal Dimension on the Strain Behavior of Particulate Media

    NASA Astrophysics Data System (ADS)

    Altun, Selim; Sezer, Alper; Goktepe, A. Burak

    2016-12-01

In this study, the influence of several fractal identifiers of granular materials on the dynamic behavior of a flexible pavement structure, treated as a particulate stratum, is considered. Using experimental results as well as numerical methods, 15 sands with different grain shapes, obtained from 5 different sources, were analyzed as pavement base course materials. Image analyses were carried out with a stereomicroscope on the 15 samples to obtain quantitative particle-shape information. Furthermore, triaxial compression tests were conducted to determine the stress-strain and shear strength parameters of the sands. Additionally, the dynamic response of the particulate media to standard traffic loads was computed using the finite element modeling (FEM) technique. Using the area-perimeter, line-divider and box-counting methods, over a hundred grains of each sand type were subjected to fractal analysis. Relationships among fractal dimension descriptors and dynamic strain levels were established to assess the importance of shape descriptors of sands at various scales for the dynamic behavior. In this context, the ability of fractal geometry to describe irregular and fractured shapes was used to characterize the sands used as base course materials. Results indicated that fractal identifiers can be preferred for analyzing the effect of the shape properties of sands on the dynamic behavior of pavement base layers.

  17. Function representation with circle inversion map systems

    NASA Astrophysics Data System (ADS)

    Boreland, Bryson; Kunze, Herb

    2017-01-01

The fractals literature develops the now well-known concept of local iterated function systems (using affine maps) with grey-level maps (LIFSM) as an approach to function representation in terms of the associated fixed point of the so-called fractal transform. While originally explored as a method to achieve signal (and 2-D image) compression, more recent work has explored various aspects of signal and image processing using this machinery. In this paper, we develop a similar framework for function representation using circle inversion map systems. Given a circle C with centre o and radius r, inversion with respect to C transforms the point p to the point p′, such that p and p′ lie on the same radial half-line from o and d(o, p)d(o, p′) = r², where d is Euclidean distance. We demonstrate the results with an example.
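The inversion formula quoted above translates directly into code; a minimal sketch:

```python
# Circle inversion: p maps to p' on the same radial half-line from the centre
# o, with d(o, p) * d(o, p') = r^2. Scaling the offset vector by r^2 / d^2
# satisfies both conditions at once.

def invert(p, o, r):
    dx, dy = p[0] - o[0], p[1] - o[1]
    d2 = dx * dx + dy * dy          # d(o, p)^2; p must differ from o
    k = (r * r) / d2
    return (o[0] + k * dx, o[1] + k * dy)

o, r = (0.0, 0.0), 2.0
p = (4.0, 0.0)
q = invert(p, o, r)
assert q == (1.0, 0.0)              # 4 * 1 = r^2 = 4
assert invert(q, o, r) == p         # inversion is an involution
```

Points outside C map inside and vice versa, while points on C are fixed, which is what makes families of such maps usable as contractive-style systems in the paper's framework.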

  18. Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1993-01-01

Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a Hamming neural network, an edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, an automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, an optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, the wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on the WT, wavelet analysis of global warming, use of the WT for signal detection, perfect-reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.

  19. Fractal-Based Image Analysis In Radiological Applications

    NASA Astrophysics Data System (ADS)

    Dellepiane, S.; Serpico, S. B.; Vernazza, G.; Viviani, R.

    1987-10-01

    We present some preliminary results of a study aimed to assess the actual effectiveness of fractal theory and to define its limitations in the area of medical image analysis for texture description, in particular, in radiological applications. A general analysis to select appropriate parameters (mask size, tolerance on fractal dimension estimation, etc.) has been performed on synthetically generated images of known fractal dimensions. Moreover, we analyzed some radiological images of human organs in which pathological areas can be observed. Input images were subdivided into blocks of 6x6 pixels; then, for each block, the fractal dimension was computed in order to create fractal images whose intensity was related to the D value, i.e., texture behaviour. Results revealed that the fractal images could point out the differences between normal and pathological tissues. By applying histogram-splitting segmentation to the fractal images, pathological areas were isolated. Two different techniques (i.e., the method developed by Pentland and the "blanket" method) were employed to obtain fractal dimension values, and the results were compared; in both cases, the appropriateness of the fractal description of the original images was verified.

  20. Unification of two fractal families

    NASA Astrophysics Data System (ADS)

    Liu, Ying

    1995-06-01

Barnsley and Hurd classify fractal images into two families: iterated function system fractals (IFS fractals) and fractal transform fractals, or local iterated function system fractals (LIFS fractals). We will call IFS fractals class 2 fractals and LIFS fractals class 3 fractals. In this paper, we unify these two approaches plus another family of fractals, the class 5 fractals. The basic idea is as follows: a dynamical system can be represented by a digraph, and the nodes in a digraph can be divided into two parts, transient states and persistent states. For bilevel images, a persistent node is a black pixel and a transient node is a white pixel. For images with more than two gray levels, a stochastic digraph is used: a transient node is a pixel with an intensity of 0, and the intensity of a persistent node is determined by a relative frequency. In this way, the two families of fractals can be generated in a similar way. In this paper, we first present a classification of dynamical systems and introduce the transformation based on digraphs, then unify the two approaches for fractal binary images. We compare the decoding algorithms of the two families. Finally, we generalize the discussion to continuous-tone images.

  1. Trabecular Bone Mechanical Properties and Fractal Dimension

    NASA Technical Reports Server (NTRS)

    Hogan, Harry A.

    1996-01-01

    Countermeasures for reducing bone loss and muscle atrophy due to extended exposure to the microgravity environment of space are continuing to be developed and improved. An important component of this effort is finite element modeling of the lower extremity and spinal column. These models will permit analysis and evaluation specific to each individual and thereby provide more efficient and effective exercise protocols. Inflight countermeasures and post-flight rehabilitation can then be customized and targeted on a case-by-case basis. Recent Summer Faculty Fellowship participants have focused upon finite element mesh generation, muscle force estimation, and fractal calculations of trabecular bone microstructure. Methods have been developed for generating the three-dimensional geometry of the femur from serial section magnetic resonance images (MRI). The use of MRI as an imaging modality avoids excessive exposure to radiation associated with X-ray based methods. These images can also detect trabecular bone microstructure and architecture. The goal of the current research is to determine the degree to which the fractal dimension of trabecular architecture can be used to predict the mechanical properties of trabecular bone tissue. The elastic modulus and the ultimate strength (or strain) can then be estimated from non-invasive, non-radiating imaging and incorporated into the finite element models to more accurately represent the bone tissue of each individual of interest. Trabecular bone specimens from the proximal tibia are being studied in this first phase of the work. Detailed protocols and procedures have been developed for carrying test specimens through all of the steps of a multi-faceted test program. The test program begins with MRI and X-ray imaging of the whole bones before excising a smaller workpiece from the proximal tibia region. High resolution MRI scans are then made and the piece further cut into slabs (roughly 1 cm thick). 
The slabs are X-rayed again and also scanned using dual-energy X-ray absorptiometry (DEXA). Cube specimens are then cut from the slabs and tested mechanically in compression. Correlations between mechanical properties and fractal dimension will then be examined to assess and quantify the predictive capability of the fractal calculations.

  2. Application to recognition of ferrography image with fractal neural network

    NASA Astrophysics Data System (ADS)

    Tian, Xianzhong; Hu, Tongsen; Zhang, Jian

    2005-10-01

Because wear particles have fractal characteristics, it is useful to add fractal parameters when studying wear particles and diagnosing machine faults. This paper discusses the fractal parameters of wear particles, presents an algorithm for calculating fractal dimension, and constructs a fractal neural network that can recognize wear-particle images. Experiments prove that this fractal neural network can recognize some characteristics of wear-particle images and can also classify wear types.

  3. Pre-Service Teachers' Concept Images on Fractal Dimension

    ERIC Educational Resources Information Center

    Karakus, Fatih

    2016-01-01

    The analysis of pre-service teachers' concept images can provide information about their mental schema of fractal dimension. There is limited research on students' understanding of fractal and fractal dimension. Therefore, this study aimed to investigate the pre-service teachers' understandings of fractal dimension based on concept image. The…

  4. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    1999-01-01

Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada, taken four months apart, shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high-elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.

  6. Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis

    PubMed Central

    Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina

    2015-01-01

    AIM To investigate and quantify changes in the branching patterns of the retina vascular network in diabetes using the fractal analysis method. METHODS This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina, were examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. RESULTS It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using the fractal geometry. The average of fractal dimensions D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values of mild non-proliferative DR (NPDR) images (segmented and skeletonized versions). The average of fractal dimensions D for the normal images (segmented and skeletonized versions) is higher than the corresponding values of moderate NPDR images (segmented and skeletonized versions). The lowest values were found for the corresponding values of severe NPDR images (segmented and skeletonized versions). CONCLUSION The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantified by means of the fractal dimension. Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from healthy individuals. PMID:26309878

  8. Fractal Loop Heat Pipe Performance Comparisons of a Soda Lime Glass and Compressed Carbon Foam Wick

    NASA Technical Reports Server (NTRS)

    Myre, David; Silk, Eric A.

    2014-01-01

    This study compares the heat flux performance of a Loop Heat Pipe (LHP) wick structure fabricated from compressed carbon foam with that of a wick structure fabricated from sintered soda lime glass. Each wick was used in an LHP containing a fractal based evaporator. The Fractal Loop Heat Pipe (FLHP) was designed and manufactured by Mikros Manufacturing Inc. The compressed carbon foam wick structure was manufactured by ERG Aerospace Inc., and machined to specifications comparable to those of the initial soda lime glass wick structure. Machining of the compressed foam as well as performance testing was conducted at the United States Naval Academy. Performance testing with the sintered soda lime glass wick structures was conducted at NASA Goddard Space Flight Center. Heat input for both wick structures was supplied via cartridge heaters mounted in a copper block. The copper heater block was placed in contact with the FLHP evaporator, which had a circular cross-sectional area of 0.88 cm(sup 2). Twice distilled, deionized water was used as the working fluid in both sets of experiments. Thermal performance data were obtained for three different condenser/subcooler temperatures under degassed conditions. Both wicks demonstrated comparable heat flux performance, with a maximum of 75 W/cm(sup 2) observed for the soda lime glass wick and 70 W/cm(sup 2) for the compressed carbon foam wick.

  9. Fractal analysis and its impact factors on pore structure of artificial cores based on the images obtained using magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Wang, Heming; Liu, Yu; Song, Yongchen; Zhao, Yuechao; Zhao, Jiafei; Wang, Dayong

    2012-11-01

    Pore structure is one of the important factors affecting the properties of porous media, but its complexity is difficult to describe exactly. Fractal theory is an effective method for quantifying complex and irregular pore structure. In this paper, the fractal dimension calculated by the box-counting method was applied to characterize the pore structure of artificial cores. The microstructure, or pore distribution, in the porous material was obtained using nuclear magnetic resonance imaging (MRI). Three classical fractals and one sand packed bed model were selected as the experimental material to investigate the influence of box sizes, threshold value, and image resolution on fractal analysis. To avoid the influence of box sizes, a sequence of box sizes that are divisors of the image size was proposed and compared with two other algorithms (geometric sequence and arithmetic sequence); it partitions the image completely and yields the smallest fitting error. Comparison of manually and automatically selected threshold values showed that thresholding plays an important role in image binarization and that the minimum-error method can be used to obtain a reasonable value. Images obtained under different pixel matrices in MRI were used to analyze the influence of image resolution: higher resolution detects more of the pore structure and increases its measured irregularity. Taking these factors into account, fractal analysis of four kinds of artificial cores showed that the fractal dimension can distinguish the different kinds of artificial cores, and that the relationship between fractal dimension and porosity or permeability can be expressed by the model D = a - b ln(x + c).
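
    The divisor-based box-counting scheme described above can be sketched as follows. This is a minimal numpy illustration of the idea, not the authors' code; the function names are our own.

```python
import numpy as np

def divisor_box_sizes(n):
    """Box sizes that partition an n-pixel image edge exactly (divisors of n)."""
    return [s for s in range(2, n) if n % s == 0]

def box_counting_dimension(img):
    """Box-counting fractal dimension of a binary square image, using only
    box sizes that divide the image edge so every box lies fully inside."""
    n = img.shape[0]
    sizes, counts = [], []
    for s in divisor_box_sizes(n):
        # Count boxes of side s containing at least one foreground pixel.
        occupied = img.reshape(n // s, s, n // s, s).any(axis=(1, 3)).sum()
        sizes.append(s)
        counts.append(max(occupied, 1))  # guard log(0) for empty images
    # Fit log N(s) = -D log s + c; D is the magnitude of the slope.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# A completely filled square should come out close to D = 2.
print(round(box_counting_dimension(np.ones((64, 64), dtype=bool)), 2))
```

    Because every box size divides the image edge, no partially covered boxes are counted, which is exactly the property the proposed divisor sequence is meant to guarantee.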

  10. Investigation into How 8th Grade Students Define Fractals

    ERIC Educational Resources Information Center

    Karakus, Fatih

    2015-01-01

    The analysis of 8th grade students' concept definitions and concept images can provide information about their mental schema of fractals. There is limited research on students' understanding and definitions of fractals. Therefore, this study aimed to investigate the elementary students' definitions of fractals based on concept image and concept…

  11. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.

    1998-01-01

    Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. 
The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and information content contained within these data. A software package known as the Image Characterization and Modeling System (ICAMS) was used to explore how fractal dimension is related to surface texture and pattern. The ICAMS software was verified using simulated images of ideal fractal surfaces with specified dimensions. The fractal dimension for areas of homogeneous land cover in the vicinity of Huntsville, Alabama was measured to investigate the relationship between texture and resolution for different land covers.

  12. A spectrum fractal feature classification algorithm for agriculture crops with hyper spectrum image

    NASA Astrophysics Data System (ADS)

    Su, Junying

    2011-11-01

    A fractal dimension feature analysis method in the spectral domain is proposed for agricultural crop classification of hyperspectral images. Firstly, a spectral-domain fractal dimension calculation algorithm is presented, together with a fast fractal dimension calculation algorithm using the step measurement method. Secondly, the hyperspectral image classification algorithm and flowchart based on spectral-domain fractal dimension feature analysis are presented. Finally, experimental results are given for agricultural crop classification on the FCL1 hyperspectral image set using both the proposed method and SAM (spectral angle mapper). The results show that the proposed method obtains better classification results than traditional SAM feature analysis, making fuller use of the spectral information in hyperspectral images to achieve precise agricultural crop classification.
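
    The abstract does not specify its fast step-measurement algorithm in detail. As an illustration of a step-based fractal dimension estimator for a 1-D signal (here standing in for one pixel's reflectance spectrum), a standard choice is Higuchi's method, where the measured curve length L(k) at step size k scales as k**(-D); this is a hedged sketch, not the paper's algorithm.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Fractal dimension of a 1-D signal via Higuchi's method:
    the normalized curve length L(k) at step k scales as k**(-D)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)          # subsampled curve at step k
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / (len(idx) - 1) / k   # length normalization
            lk.append(dist * norm / k)
        lengths.append(np.mean(lk))
    ks = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope

# A straight-line "spectrum" has dimension 1; white noise approaches 2.
print(round(higuchi_fd(np.linspace(0.0, 1.0, 512)), 2))
```

    Applied per pixel, such a dimension becomes a single scalar feature summarizing the roughness of the reflectance curve, which is the role the spectral-domain fractal dimension plays in the classification flowchart above.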

  13. Applications of Fractal Analytical Techniques in the Estimation of Operational Scale

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Quattrochi, Dale A.

    2000-01-01

    The observational scale and the resolution of remotely sensed imagery are essential considerations in the interpretation process. Many atmospheric, hydrologic, and other natural and human-influenced spatial phenomena are inherently scale dependent and are governed by different physical processes at different spatial domains. This spatial and operational heterogeneity constrains the ability to compare interpretations of phenomena and processes observed in higher spatial resolution imagery to similar interpretations obtained from lower resolution imagery. This is a particularly acute problem, since long-term global change investigations will require high spatial resolution Earth Observing System (EOS), Landsat 7, or commercial satellite data to be combined with lower resolution imagery from older sensors such as Landsat TM and MSS. Fractal analysis is a useful technique for identifying the effects of scale changes on remotely sensed imagery. The fractal dimension of an image is a non-integer value between two and three which indicates the degree of complexity in the texture and shapes depicted in the image. A true fractal surface exhibits self-similarity, a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution, and the slope of the fractal dimension-resolution relationship would be zero. Most geographical phenomena, however, are not self-similar at all scales, but they can be modeled by a stochastic fractal in which the scaling properties of the image exhibit patterns that can be described by statistics such as area-perimeter ratios and autocovariances. 
Stochastic fractal sets relax the self-similarity assumption and measure many scales and resolutions to represent the varying form of a phenomenon as the pixel size is increased in a convolution process. We have observed that for images of homogeneous land covers, the fractal dimension varies linearly with changes in resolution or pixel size over the range of past, current, and planned space-borne sensors. This relationship differs significantly in images of agricultural, urban, and forest land covers, with urban areas retaining the same level of complexity, forested areas growing smoother, and agricultural areas growing more complex as small pixels are aggregated into larger, mixed pixels. Images of scenes having a mixture of land covers have fractal dimensions that exhibit a non-linear, complex relationship to pixel size. Measuring the fractal dimension of a difference image derived from two images of the same area obtained on different dates showed that the fractal dimension increased steadily, then exhibited a sharp decrease at increasing levels of pixel aggregation. This breakpoint of the fractal dimension/resolution plot is related to the spatial domain or operational scale of the phenomenon exhibiting the predominant visible difference between the two images (in this case, mountain snow cover). The degree to which an image departs from a theoretical ideal fractal surface provides clues as to how much information is altered or lost in the processes of rescaling and rectification. The measured fractal dimension of complex, composite land covers such as urban areas also provides a useful textural index that can assist image classification of complex scenes.
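
    The slope of the fractal dimension-resolution relation discussed above can be estimated with an ordinary least-squares fit; the pixel-size/dimension pairs below are hypothetical, chosen only to mimic the agricultural trend (growing complexity with pixel aggregation) described in the abstract.

```python
import numpy as np

def fd_resolution_slope(pixel_sizes, dims):
    """Slope of fractal dimension vs. log pixel size.
    A zero slope would indicate an ideally self-similar (monofractal) surface."""
    return np.polyfit(np.log(pixel_sizes), dims, 1)[0]

# Hypothetical measurements: D rises as 10 m pixels are aggregated to 80 m.
pixel_sizes = [10, 20, 40, 80]
dims = [2.45, 2.52, 2.61, 2.68]
print(round(fd_resolution_slope(pixel_sizes, dims), 3))
```

    A breakpoint in this relation (a sharp sign change of the local slope at some pixel size) is what the abstract uses to locate the operational scale of the dominant phenomenon.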

  14. Estimating Trabecular Bone Mechanical Properties From Non-Invasive Imaging

    NASA Technical Reports Server (NTRS)

    Hogan, Harry A.; Webster, Laurie

    1997-01-01

    An important component in developing countermeasures for maintaining musculoskeletal integrity during long-term space flight is an effective and meaningful method of monitoring skeletal condition. Magnetic resonance imaging (MRI) is an attractive non-invasive approach because it avoids the exposure to radiation associated with X-ray based imaging and also provides measures related to bone microstructure rather than just density. The purpose of the research for the 1996 Summer Faculty Fellowship period was to extend the usefulness of the MRI data to estimate the mechanical properties of trabecular bone. The main mechanical properties of interest are the elastic modulus and ultimate strength. Correlations are being investigated between these and fractal analysis parameters, MRI relaxation times, apparent densities, and bone mineral densities. Bone specimens from both human and equine donors were studied initially to ensure high-quality MR images. Specimens were prepared and scanned from human proximal tibia bones as well as the equine distal radius. The quality of the images from the human bone appeared compromised due to freezing artifact, so only equine bone was included in subsequent procedures, since these specimens could be acquired and imaged fresh before being frozen. MRI scans were made spanning a 3.6 cm length on each of 5 equine distal radius specimens. The images were then sent to Dr. Raj Acharya of the State University of New York at Buffalo for fractal analysis. Each piece was cut into 3 slabs approximately 1.2 cm thick, and high-resolution contact radiographs were made to provide images for comparing fractal analysis with MR images. Dual energy X-ray absorptiometry (DEXA) scans were also made of each slab for subsequent bone mineral density determination. Slabs were cut into cubes for mechanical testing using a slow-speed diamond blade wafering saw (Buehler Isomet). The dimensions and wet weights of each cube specimen were measured and recorded. Each specimen was labeled and marked to denote anatomic orientations, i.e. superior/inferior (S/I), medial/lateral (M/L), and anterior/posterior (A/P). The actual locations of each cube cut were documented and images distributed to define ROI locations for other analyses (to Raj Acharya for fractal analysis, to Jon Richardson at Baylor College of Medicine for DEXA, and to Chen Lin at Baylor College of Medicine for T2* MRI analysis). Quasistatic mechanical testing consisted of compressive loading in all three mutually perpendicular anatomic directions. Cyclic loading was applied for 10 cycles to precondition the specimen, and results were calculated for the eleventh. For one of the three directions tested on each specimen, the 10 cycles were followed with loading to failure. Testing is currently proceeding, and once completed the results will be correlated with data from the other analyses. One of the main points of interest is the relationship between fractal dimension and mechanical properties. Throughout preparation and testing all specimens were maintained hydrated with physiological saline and stored frozen when not being used.

  15. Performance assessment of methods for estimation of fractal dimension from scanning electron microscope images.

    PubMed

    Risović, Dubravko; Pavlović, Zivko

    2013-01-01

    Processing of gray scale images in order to determine the corresponding fractal dimension is very important due to the widespread use of imaging technologies and the application of fractal analysis in many areas of science, technology, and medicine. To this end, many methods for estimation of fractal dimension from gray scale images have been developed and are routinely used. Unfortunately, different methods (dimension estimators) often yield significantly different results in a manner that makes interpretation difficult. Here, we report results of a comparative assessment of the performance of several of the most frequently used algorithms/methods for estimation of fractal dimension. For that purpose, we used scanning electron microscope images of aluminum oxide surfaces with different fractal dimensions. The performance of the algorithms/methods was evaluated using the statistical Z-score approach. The differences between the performances of six methods are discussed and further compared with results obtained by electrochemical impedance spectroscopy (EIS) on the same samples. The analysis shows that the performance of the investigated algorithms varies considerably and that systematically erroneous fractal dimensions could be estimated using certain methods. The differential cube counting, triangulation, and box counting algorithms showed satisfactory performance over the whole investigated range of fractal dimensions. The difference statistic proved less reliable, generating 4% unsatisfactory results. The performances of the power spectrum, partitioning, and EIS methods were unsatisfactory in 29%, 38%, and 75% of estimations, respectively. The results of this study should be useful and provide guidelines to researchers attempting fractal analysis of images obtained by scanning microscopy or atomic force microscopy. © Wiley Periodicals, Inc.
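
    The Z-score screening described above can be sketched as follows. The reference dimension, its uncertainty, and the estimator outputs below are hypothetical placeholders; the paper's exact protocol is not reproduced.

```python
import numpy as np

def z_scores(estimates, reference, sigma):
    """Z-score of each estimator's output against a reference dimension.
    |Z| <= 2 is a common criterion for satisfactory agreement."""
    return (np.asarray(estimates) - reference) / sigma

# Hypothetical estimates of a surface with reference D = 2.30, sigma = 0.05.
est = {"cube counting": 2.28, "triangulation": 2.33, "power spectrum": 2.52}
z = z_scores(list(est.values()), reference=2.30, sigma=0.05)
for name, score in zip(est, z):
    print(name, "ok" if abs(score) <= 2 else "unsatisfactory")
```

    With these placeholder numbers the power spectrum estimate falls outside the two-sigma band, which mirrors how systematically biased estimators are flagged in the study.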

  16. Micro and MACRO Fractals Generated by Multi-Valued Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Banakh, T.; Novosad, N.

    2014-08-01

    Given a multi-valued function Φ : X ⊸ X on a topological space X, we study the properties of its fixed fractal ✠Φ, which is defined as the closure of the orbit Φ^ω(*Φ) = ⋃_{n∈ω} Φ^n(*Φ) of the set *Φ = {x ∈ X : x ∈ Φ(x)} of fixed points of Φ. Special attention is paid to the duality between micro-fractals and macro-fractals, which are the fixed fractals ✠Φ and ✠Φ^{-1} for a contracting compact-valued function Φ : X ⊸ X on a complete metric space X. With the help of algorithms (described in this paper) we generate various images of macro-fractals which are dual to some well-known micro-fractals like the fractal cross, the Sierpiński triangle, the Sierpiński carpet, the Koch curve, or the fractal snowflakes. The obtained images show that macro-fractals have a large-scale fractal structure, which becomes clearly visible after a suitable zooming.
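
    A minimal sketch of generating such an orbit numerically: iterating a set-valued map given by the union of three affine contractions, whose attractor is the Sierpiński triangle (one of the micro-fractals mentioned above). The maps and iteration count are illustrative and do not reproduce the authors' macro-fractal algorithm.

```python
import numpy as np

def iterate_ifs(points, maps, n_iter):
    """Iterate a set-valued map Phi (a union of contractions) on a point set;
    the closure of the orbit approximates the attractor (micro-fractal)."""
    for _ in range(n_iter):
        points = np.vstack([f(points) for f in maps])
    return points

# Three contractions whose joint attractor is the Sierpinski triangle.
maps = [
    lambda p: 0.5 * p,
    lambda p: 0.5 * p + np.array([0.5, 0.0]),
    lambda p: 0.5 * p + np.array([0.25, 0.5]),
]
pts = iterate_ifs(np.array([[0.0, 0.0]]), maps, 6)
print(len(pts))  # 3**6 = 729 points approximating the attractor
```

    The macro-fractal would instead be obtained from the inverse multi-valued map, expanding the point set outward rather than contracting it.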

  17. MORPH-II, a software package for the analysis of scanning-electron-micrograph images for the assessment of the fractal dimension of exposed stone surfaces

    USGS Publications Warehouse

    Mossotti, Victor G.; Eldeeb, A. Raouf

    2000-01-01

    Turcotte, 1997, and Barton and La Pointe, 1995, have identified many potential uses for the fractal dimension in physicochemical models of surface properties. The image-analysis program described in this report is an extension of the program set MORPH-I (Mossotti and others, 1998), which provided the fractal analysis of electron-microscope images of pore profiles (Mossotti and Eldeeb, 1992). MORPH-II, an integration of the modified kernel of the program MORPH-I with image calibration and editing facilities, was designed to measure the fractal dimension of the exposed surfaces of stone specimens as imaged in cross section in an electron microscope.

  18. Analysis of the fractal dimension of volcano geomorphology through Synthetic Aperture Radar (SAR) amplitude images acquired in C and X band.

    NASA Astrophysics Data System (ADS)

    Pepe, S.; Di Martino, G.; Iodice, A.; Manzo, M.; Pepe, A.; Riccio, D.; Ruello, G.; Sansosti, E.; Tizzani, P.; Zinno, I.

    2012-04-01

    In the last two decades several aspects relevant to volcanic activity have been analyzed in terms of fractal parameters that effectively describe the geometry of natural objects. More specifically, these studies have been aimed at identifying (1) the power laws that govern magma fragmentation processes, (2) the energy of explosive eruptions, and (3) the distribution of the associated earthquakes. In this paper, the study of volcano morphology via satellite images is dealt with; in particular, we use the complete forward model developed by some of the authors (Di Martino et al., 2012) that links the stochastic characterization of amplitude Synthetic Aperture Radar (SAR) images to the fractal dimension of the imaged surfaces, modelled via fractional Brownian motion (fBm) processes. Based on the inversion of such a model, a SAR image post-processing has been implemented (Di Martino et al., 2010) that allows retrieving the fractal dimension of the observed surfaces, dictating the distribution of the roughness over different spatial scales. The fractal dimension of volcanic structures has been related to the specific nature of materials and to the effects of active geodynamic processes. Hence, the possibility of estimating the fractal dimension from a single amplitude-only SAR image is of fundamental importance for the characterization of volcano structures and, moreover, can be very helpful for monitoring and crisis management activities in case of eruptions and other similar natural hazards. The implemented SAR image processing performs the extraction of the point-by-point fractal dimension of the scene observed by the sensor, providing - as an output product - the map of the fractal dimension of the area of interest. In this work, such an analysis is performed on Cosmo-SkyMed, ERS-1/2 and ENVISAT images relevant to active stratovolcanoes in different geodynamic contexts, such as Mt. Somma-Vesuvio, Mt.
Etna, Vulcano and Stromboli in Southern Italy, Shinmoe in Japan, Merapi in Indonesia. Preliminary results reveal that the fractal dimension of natural areas, being related only to the roughness of the observed surface, is very stable as the radar illumination geometry, the resolution and the wavelength change, thus holding a very unique property in SAR data inversion. Such a behavior is not verified in case of non-natural objects. As a matter of fact, when the fractal estimation is performed in the presence of either man-made objects or SAR image features depending on geometrical distortions due to the SAR system acquisition (i.e. layover, shadowing), fractal dimension (D) values outside the range of fractality of natural surfaces (2 < D < 3) are retrieved. These non-fractal characteristics show to be heavily dependent on sensor acquisition parameters (e.g. view angle, resolution). In this work, the behaviour of the maps generated starting from the C- and X- band SAR data, relevant to all the considered volcanoes, is analyzed: the distribution of the obtained fractal dimension values is investigated on different zones of the maps. In particular, it is verified that the fore-slope and back-slope areas of the image share a very similar fractal dimension distribution that is placed around the mean value of D=2.3. We conclude that, in this context, the fractal dimension could be considered as a signature of the identification of the volcano growth as a natural process. The COSMO-SkyMed data used in this study have been processed at IREA-CNR within the SAR4Volcanoes project under Italian Space Agency agreement n. I/034/11/0.
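
    For an fBm surface with Hurst exponent H, the fractal dimension is D = 3 - H and the isotropic power spectrum falls off as f^(-beta) with beta = 2H + 2, so D = (8 - beta)/2. The sketch below estimates D of a synthetic fBm-like surface from this spectral slope; it is a generic illustration of the fBm dimension range 2 < D < 3 mentioned above, not the authors' SAR inversion model.

```python
import numpy as np

def spectral_fd(surface):
    """Fractal dimension of an fBm-like surface from its isotropic power
    spectrum: P(f) ~ f**(-beta), with D = (8 - beta) / 2."""
    n = surface.shape[0]
    power = np.abs(np.fft.fft2(surface)) ** 2
    fx = np.fft.fftfreq(n)
    f = np.hypot(*np.meshgrid(fx, fx, indexing="ij"))
    mask = (f > 0) & (f < 0.4)   # drop DC and the noisy corner frequencies
    slope, _ = np.polyfit(np.log(f[mask]), np.log(power[mask]), 1)
    return (8.0 + slope) / 2.0   # fitted slope is -beta, so D = (8 - beta) / 2

# Synthesize a surface with target D = 2.3 (beta = 8 - 2*D = 3.4) by spectral
# shaping of random phases, then estimate the dimension back.
rng = np.random.default_rng(0)
n, beta = 256, 3.4
fx = np.fft.fftfreq(n)
f = np.hypot(*np.meshgrid(fx, fx, indexing="ij"))
amplitude = np.where(f > 0, f, 1.0) ** (-beta / 2)
amplitude[0, 0] = 0.0
spectrum = amplitude * np.exp(2j * np.pi * rng.random((n, n)))
surface = np.real(np.fft.ifft2(spectrum))
print(round(spectral_fd(surface), 2))
```

    Estimates well outside 2 < D < 3 then signal non-fractal content, which is how the abstract describes detecting man-made objects and geometric SAR distortions.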

  19. Fractal analysis as a potential tool for surface morphology of thin films

    NASA Astrophysics Data System (ADS)

    Soumya, S.; Swapna, M. S.; Raj, Vimal; Mahadevan Pillai, V. P.; Sankararaman, S.

    2017-12-01

    Fractal geometry, developed by Mandelbrot, has emerged as a potential tool for analyzing complex systems in the diversified fields of science, social science, and technology. Self-similar objects having the same details at different scales are referred to as fractals and are analyzed using the mathematics of non-Euclidean geometry. The present work is an attempt to use the fractal dimension for surface characterization by Atomic Force Microscopy (AFM). Taking AFM images of zinc sulphide (ZnS) thin films prepared by the pulsed laser deposition (PLD) technique under different annealing temperatures, the effect of annealing temperature and surface roughness on fractal dimension is studied. The annealing temperature and surface roughness show a strong correlation with fractal dimension. From the regression equation set, the surface roughness at a given annealing temperature can be calculated from the fractal dimension. The AFM images are processed using Photoshop and the fractal dimension is calculated by the box-counting method. The fractal dimension decreases from 1.986 to 1.633 while the surface roughness increases from 1.110 to 3.427 as the annealing temperature changes from 30 °C to 600 °C. The images are also analyzed by the power spectrum method to find the fractal dimension. The study reveals that the box-counting method gives better results compared to the power spectrum method.
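
    Only the endpoint values of fractal dimension and roughness are quoted in the abstract, so the fit below is a two-point sketch of the roughness-from-dimension idea, not the paper's actual regression equation set.

```python
import numpy as np

# Endpoint values quoted in the abstract (30 deg C and 600 deg C anneals).
fd = np.array([1.986, 1.633])
roughness = np.array([1.110, 3.427])

# Linear model roughness = a + b * D through the two reported points.
b, a = np.polyfit(fd, roughness, 1)
print(f"roughness ~= {a:.2f} + {b:.2f} * D")
```

    The negative slope reflects the reported inverse trend: rougher annealed films have a lower box-counting dimension in this study.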

  20. An Evaluation of Fractal Surface Measurement Methods for Characterizing Landscape Complexity from Remote-Sensing Imagery

    NASA Technical Reports Server (NTRS)

    Lam, Nina Siu-Ngan; Qiu, Hong-Lie; Quattrochi, Dale A.; Emerson, Charles W.; Arnold, James E. (Technical Monitor)

    2001-01-01

    The rapid increase in digital data volumes from new and existing sensors necessitates efficient analytical tools for extracting information. We developed an integrated software package called ICAMS (Image Characterization and Modeling System) to provide specialized spatial analytical functions for interpreting remote sensing data. This paper evaluates three fractal dimension measurement methods: isarithm, variogram, and triangular prism, along with the spatial autocorrelation measurement methods Moran's I and Geary's C, all of which have been implemented in ICAMS. A modified triangular prism method was proposed and implemented. Results from analyzing 25 simulated surfaces having known fractal dimensions show that both the isarithm and triangular prism methods can accurately measure a range of fractal surfaces. The triangular prism method is most accurate at estimating the fractal dimension of surfaces of higher spatial complexity, but it is sensitive to contrast stretching. The variogram method is a comparatively poor estimator for all of the surfaces, particularly those with higher fractal dimensions. Similar to the fractal techniques, the spatial autocorrelation techniques are found to be useful for measuring complex images but not images with low dimensionality. These fractal measurement methods can be applied directly to unclassified images and could serve as a tool for change detection and data mining.
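
    A sketch of the classic triangular prism estimator: at each step size s the grid is covered by cells whose four corner elevations, together with their mean at the cell centre, define four triangles; the total triangulated area A(s) scales as s**(2 - D). This illustrates the basic method, not the modified variant proposed in the paper.

```python
import numpy as np

def triangle_area(p, q, r):
    """Area of a 3-D triangle via the cross product."""
    return 0.5 * np.linalg.norm(np.cross(q - p, r - p), axis=-1)

def triangular_prism_fd(z):
    """Triangular-prism fractal dimension of a square elevation grid z,
    assuming the grid side is 2**k + 1 so step sizes tile it exactly."""
    n = z.shape[0]
    sizes, areas = [], []
    s = 1
    while 2 * s < n:
        total = 0.0
        for i in range(0, n - s, s):
            for j in range(0, n - s, s):
                a = np.array([i, j, z[i, j]], float)
                b = np.array([i + s, j, z[i + s, j]], float)
                c = np.array([i + s, j + s, z[i + s, j + s]], float)
                d = np.array([i, j + s, z[i, j + s]], float)
                e = (a + b + c + d) / 4.0   # prism apex at the cell centre
                total += sum(triangle_area(p, q, e)
                             for p, q in [(a, b), (b, c), (c, d), (d, a)])
        sizes.append(s)
        areas.append(total)
        s *= 2
    # A(s) ~ s**(2 - D), so D = 2 - slope of log A vs. log s.
    slope, _ = np.polyfit(np.log(sizes), np.log(areas), 1)
    return 2.0 - slope

# A flat plane recovers the topological surface dimension, D ~= 2.
print(round(triangular_prism_fd(np.zeros((65, 65))), 2))
```

    Rougher surfaces lose area faster as the step size grows, giving a more negative slope and hence a dimension above 2; the sensitivity to contrast stretching noted above enters because stretching rescales the elevations z.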

  1. Detection and classification of Breast Cancer in Wavelet Sub-bands of Fractal Segmented Cancerous Zones.

    PubMed

    Shirazinodeh, Alireza; Noubari, Hossein Ahmadi; Rabbani, Hossein; Dehnavi, Alireza Mehri

    2015-01-01

    Recent studies applying wavelet transforms and fractal modeling to mammograms for the detection of cancerous tissues indicate that microcalcifications and masses can be utilized for the study of the morphology and diagnosis of cancerous cases. It has been shown that fractal modeling, applied to a given image, can clearly discern cancerous zones from noncancerous areas. For fractal modeling, the original image is first segmented into appropriate fractal boxes, and the fractal dimension of each windowed section is identified using a computationally efficient two-dimensional box-counting algorithm. Furthermore, using appropriate wavelet sub-bands and image reconstruction based on modified wavelet coefficients, it is possible to arrive at enhanced features for the detection of cancerous zones. In this paper, we attempt to benefit from the advantages of both fractals and wavelets by introducing a new algorithm, named F1W2. The original image is first segmented into appropriate fractal boxes and the fractal dimension of each windowed section is extracted. By applying a maximum-level threshold on the matrix of fractal dimensions, the best segmented boxes are selected. In the next step, the candidate cancerous zones are decomposed using a standard orthogonal wavelet transform with the db2 wavelet at three resolution levels; after nullifying the wavelet coefficients of the image at the first scale and the low-frequency band of the third scale, the modified reconstructed image is used to detect breast cancer regions by applying an appropriate threshold. Our simulations indicate detection accuracies of 90.9% for masses and 88.99% for microcalcifications using the F1W2 method.
    For classification of detected microcalcifications into benign and malignant cases, eight features are identified and fed to a radial basis function neural network. Our simulation results indicate a classification accuracy of 92% using the F1W2 method.
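    The step of nullifying wavelet coefficients and reconstructing can be illustrated with a one-level 2-D Haar transform (the paper uses the db2 wavelet over three levels; Haar is used here purely for brevity, with NumPy only):

```python
import numpy as np

def haar2(x):
    """One level of a 2-D Haar transform; returns LL, LH, HL, HH sub-bands."""
    s2 = np.sqrt(2.0)
    lo = (x[:, 0::2] + x[:, 1::2]) / s2      # row-wise average
    hi = (x[:, 0::2] - x[:, 1::2]) / s2      # row-wise detail
    ll = (lo[0::2, :] + lo[1::2, :]) / s2
    lh = (lo[0::2, :] - lo[1::2, :]) / s2
    hl = (hi[0::2, :] + hi[1::2, :]) / s2
    hh = (hi[0::2, :] - hi[1::2, :]) / s2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Invert haar2, e.g. after nullifying chosen sub-bands."""
    s2 = np.sqrt(2.0)
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2, :], lo[1::2, :] = (ll + lh) / s2, (ll - lh) / s2
    hi[0::2, :], hi[1::2, :] = (hl + hh) / s2, (hl - hh) / s2
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = (lo + hi) / s2, (lo - hi) / s2
    return x
```

    Zeroing a sub-band (e.g. `ihaar2(ll, lh, hl, np.zeros_like(hh))`) reconstructs the image with that band of detail suppressed, which is the kind of modified reconstruction the abstract describes.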

  2. The analysis of the influence of fractal structure of stimuli on fractal dynamics in fixational eye movements and EEG signal

    NASA Astrophysics Data System (ADS)

    Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin

    2016-05-01

    One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of a visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to the ‘complex’ visual stimulus. We demonstrate that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain as the main part of the nervous system engaged in eye movements, we analyzed the electroencephalogram (EEG) signal recorded during fixation. We found that there is a coupling between the fractality of the image, the EEG, and the fixational eye movements. The capability observed in this research can be further investigated and applied to the treatment of different vision disorders.
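    The fractality of a 1-D record such as an eye-movement trace or a single EEG channel is commonly quantified with estimators like the Higuchi fractal dimension; the abstract does not name its estimator, so the following is a generic sketch:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal (1 = smooth, 2 = very rough)."""
    n = len(x)
    ks = np.arange(1, kmax + 1)
    mean_lengths = []
    for k in ks:
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)           # subsampled curve x[m], x[m+k], ...
            d = np.abs(np.diff(x[idx])).sum()  # its total variation
            # normalise for the number of points and the sampling interval k
            lengths.append(d * (n - 1) / ((len(idx) - 1) * k) / k)
        mean_lengths.append(np.mean(lengths))
    # FD is the slope of log L(k) against log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(mean_lengths), 1)
    return slope
```

    A straight line yields FD = 1 and white noise yields FD near 2, so lower values of this estimate correspond to the "lower fractality" of the eye-movement traces discussed above.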

  3. Fractal scaling of apparent soil moisture estimated from vertical planes of Vertisol pit images

    NASA Astrophysics Data System (ADS)

    Cumbrera, Ramiro; Tarquis, Ana M.; Gascó, Gabriel; Millán, Humberto

    2012-07-01

    Image analysis could be a useful tool for investigating the spatial patterns of apparent soil moisture at multiple resolutions. The objectives of the present work were (i) to define apparent soil moisture patterns from vertical planes of Vertisol pit images and (ii) to describe the scaling of the apparent soil moisture distribution using fractal parameters. Twelve soil pits (0.70 m long × 0.60 m wide × 0.30 m deep) were excavated on a bare Mazic Pellic Vertisol. Six of them were excavated in April 2011 and six pits were established in May 2011 after 3 days of a moderate rainfall event. Digital photographs were taken of each Vertisol pit using a Kodak™ digital camera. The mean image size was 1600 × 945 pixels, with one physical pixel ≈373 μm of the photographed soil pit. Each soil image was analyzed using two fractal scaling exponents, the box-counting (capacity) dimension (DBC) and the interface fractal dimension (Di), and three prefractal scaling coefficients: the total number of boxes intercepting the foreground pattern at a unit scale (A), fractal lacunarity at the unit scale (Λ1), and Shannon entropy at the unit scale (S1). All the scaling parameters identified significant differences between the two sets of spatial patterns. Fractal lacunarity was the best discriminator between apparent soil moisture patterns. Soil image interpretation with fractal exponents and prefractal coefficients can be incorporated within a site-specific agriculture toolbox. While fractal exponents convey information on the space-filling characteristics of the pattern, prefractal coefficients represent the investigated soil property as seen through a higher-resolution microscope. In spite of some computational and practical limitations, image analysis of apparent soil moisture patterns could be used in connection with traditional soil moisture sampling, which otherwise yields only point estimates.
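    Fractal lacunarity, the best discriminator above, can be sketched with the common gliding-box estimator (an illustration of the general technique, not necessarily the authors' exact formulation):

```python
import numpy as np

def gliding_box_lacunarity(binary, r):
    """Gliding-box lacunarity of a binary pattern at box size r."""
    h, w = binary.shape
    # mass of every r-by-r gliding box, via a 2-D cumulative sum (integral image)
    c = np.zeros((h + 1, w + 1))
    c[1:, 1:] = np.cumsum(np.cumsum(binary, axis=0), axis=1)
    mass = (c[r:, r:] - c[:-r, r:] - c[r:, :-r] + c[:-r, :-r]).ravel()
    # lacunarity = second moment / first moment squared = var/mean^2 + 1
    return mass.var() / mass.mean() ** 2 + 1.0
```

    A translation-invariant (uniform) pattern gives lacunarity 1; gappy, clumped patterns give larger values, which is why the measure separates the wet and dry pit images so well.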

  4. A Double-Minded Fractal

    ERIC Educational Resources Information Center

    Simoson, Andrew J.

    2009-01-01

    This article presents a fun activity of generating a double-minded fractal image for a linear algebra class once the ideas of rotation and scaling matrices are introduced. In particular, the fractal flip-flops between two words, depending on the level at which the image is viewed. (Contains 5 figures.)

  5. A Fractal Analysis of CT Liver Images for the Discrimination of Hepatic Lesions: A Comparative Study

    DTIC Science & Technology

    2001-10-25

    liver images in order to estimate their fractal dimension and to differentiate normal liver parenchyma from hepatocellular carcinoma. Four fractal...methods; thus discriminating up to 93% of the normal parenchyma and up to 82% of the hepatocellular carcinoma, correctly.

  6. Multispectral image fusion based on fractal features

    NASA Astrophysics Data System (ADS)

    Tian, Jie; Chen, Jie; Zhang, Chunhua

    2004-01-01

    Imagery sensors have become an indispensable part of detection and recognition systems. They are widely used in surveillance, navigation, control, and guidance, among other fields. However, different imagery sensors rely on different imaging mechanisms and work within different ranges of the spectrum. They also perform different functions and have different operating requirements. It is therefore impractical to accomplish the task of detection or recognition with a single imagery sensor under conditions of varying circumstances, backgrounds, and targets. Fortunately, multi-sensor image fusion has emerged as an important route to solve this problem, and image fusion has become one of the main technical routines used to detect and recognize objects from images. Loss of information is unavoidable during the fusion process, so a central concern of image fusion is how to preserve as much useful information as possible. That is to say, before designing a fusion scheme one should consider how to avoid the loss of useful information and how to preserve the features helpful to detection. In consideration of these issues, and of the fact that most detection problems amount to distinguishing man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aimed at the recognition of battlefield targets in complicated backgrounds. According to this algorithm, the source images are first orthogonally decomposed according to wavelet transform theory, and then fractal-based detection is applied to each decomposed image. At this step, natural background and man-made targets are distinguished by use of fractal models that imitate natural objects well. Special fusion operators are employed during the fusion of areas that contain man-made targets, so that useful information is preserved and target features are emphasized.
    The final fused image is reconstructed from the composition of the source pyramid images, so this fusion scheme is a multi-resolution analysis; the wavelet decomposition of an image can be considered a special pyramid decomposition. According to wavelet decomposition theory, the approximation of an image at resolution 2j+1 equals its orthogonal projection onto the corresponding approximation space (formula available in paper), where Ajf is the low-frequency approximation of the image f(x, y) at resolution 2j, and the detail images represent the vertical, horizontal, and diagonal wavelet coefficients, respectively, at resolution 2j. These coefficients describe the high-frequency information of the image in the vertical, horizontal, and diagonal directions. The approximation and detail components are independent and can be considered as images. In this paper J is set to 1, so the source image is decomposed to produce the son-images Af, D1f, D2f and D3f. To solve the problem of detecting artifacts, the concepts of vertical fractal dimension FD1, horizontal fractal dimension FD2, and diagonal fractal dimension FD3 are proposed in this paper. The vertical fractal dimension FD1 corresponds to the vertical wavelet coefficient image after the wavelet decomposition of the source image, the horizontal fractal dimension FD2 corresponds to the horizontal wavelet coefficients, and the diagonal fractal dimension FD3 to the diagonal ones. These definitions enrich the description of the source images and are therefore helpful in classifying the targets. The detection of artifacts in the decomposed images then becomes a pattern recognition problem in 4-D space. The combination of FD0, FD1, FD2 and FD3 forms a feature vector (FD0, FD1, FD2, FD3) of the studied image. All parts of the images are classified in the 4-D pattern space created by this vector so that areas containing man-made objects can be detected.
    This detection can be considered a coarse recognition; the significant areas in each son-image are then marked so that they can be treated with special rules. Various fusion rules have been developed, each aimed at a particular problem. These rules have different performance, so it is very important to select an appropriate rule when designing an image fusion system. Recent research indicates that the rule should be adjustable so that it remains suitable for emphasizing target features and preserving pixels of useful information. In this paper, since fractal dimension is one of the main features distinguishing man-made targets from natural objects, the fusion rule is defined as follows: if the studied region of the image contains a man-made target, the pixels of the source image whose fractal dimension is minimal are kept as the pixels of the fused image; otherwise, a weighted average operator is adopted to avoid loss of information. Because the main idea of this rule is to keep the pixels with low fractal dimensions, it is named the Minimal Fractal Dimension (MFD) fusion rule. This fractal-based algorithm is compared with a common weighted-average fusion algorithm, and an objective assessment is applied to the two fusion results. The criteria of Entropy, Cross-Entropy, Peak Signal-to-Noise Ratio (PSNR), and Standard Gray Scale Difference are defined in this paper. Instead of constructing an ideal image as the assessment reference, the source images themselves are selected as the reference. This assessment thus calculates how much the image quality has been enhanced and how much information has been gained when the fused image is compared with the source images. The experimental results imply that the fractal-based multi-spectral fusion algorithm can effectively preserve the information of man-made objects with high contrast.
    The algorithm preserves features of military targets well because battlefield targets are mostly man-made objects whose images differ markedly from fractal models. Furthermore, the fractal features are not sensitive to the imaging conditions or the movement of targets, so this fractal-based algorithm may be very practical.
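    The MFD fusion rule described above might be sketched as follows (function and parameter names are illustrative; the per-pixel fractal-dimension maps and the man-made-region mask are assumed to be precomputed by the earlier detection stage):

```python
import numpy as np

def mfd_fuse(img_a, img_b, fd_a, fd_b, manmade_mask, w=0.5):
    """Sketch of the Minimal Fractal Dimension (MFD) fusion rule."""
    # inside man-made regions, keep the source pixel with the lower fractal dimension
    pick_a = fd_a <= fd_b
    fused = np.where(pick_a, img_a, img_b)
    # elsewhere fall back to a weighted average to avoid loss of information
    return np.where(manmade_mask, fused, w * img_a + (1 - w) * img_b)
```

    Selecting the lower-dimension pixel favours the source in which the man-made object departs most from fractal (natural) behaviour, which is the rationale stated in the abstract.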

  7. Fractal analysis of bone structure with applications to osteoporosis and microgravity effects

    NASA Astrophysics Data System (ADS)

    Acharya, Raj S.; LeBlanc, Adrian; Shackelford, Linda; Swarnakar, Vivek; Krishnamurthy, Ram; Hausman, E.; Lin, Chin-Shoou

    1995-05-01

    We characterize the trabecular structure with the aid of fractal dimension. We use alternating sequential filters (ASF) to generate a nonlinear pyramid for fractal dimension computations. We do not make any assumptions of the statistical distributions of the underlying fractal bone structure. The only assumption of our scheme is the rudimentary definition of self-similarity. This allows us the freedom of not being constrained by statistical estimation schemes. With mathematical simulations, we have shown that the ASF methods outperform other existing methods for fractal dimension estimation. We have shown that the fractal dimension remains the same when computed with both the x-ray images and the MRI images of the patella. We have shown that the fractal dimension of osteoporotic subjects is lower than that of normal subjects. In animal models, we have shown that the fractal dimension of osteoporotic rats was lower than that of normal rats. In a 17 week bedrest study, we have shown that the subjects' prebedrest fractal dimension is higher than their postbedrest fractal dimension.

  8. Fractal analysis of bone structure with applications to osteoporosis and microgravity effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acharya, R.S.; Swarnarkar, V.; Krishnamurthy, R.

    1995-12-31

    The authors characterize the trabecular structure with the aid of fractal dimension. The authors use alternating sequential filters (ASF) to generate a nonlinear pyramid for fractal dimension computations. The authors do not make any assumptions of the statistical distributions of the underlying fractal bone structure. The only assumption of the scheme is the rudimentary definition of self-similarity. This allows them the freedom of not being constrained by statistical estimation schemes. With mathematical simulations, the authors have shown that the ASF methods outperform other existing methods for fractal dimension estimation. They have shown that the fractal dimension remains the same when computed with both the X-ray images and the MRI images of the patella. They have shown that the fractal dimension of osteoporotic subjects is lower than that of normal subjects. In animal models, the authors have shown that the fractal dimension of osteoporotic rats was lower than that of normal rats. In a 17 week bedrest study, they have shown that the subjects' prebedrest fractal dimension is higher than their postbedrest fractal dimension.

  9. Image characterization by fractal descriptors in variational mode decomposition domain: Application to brain magnetic resonance

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-08-01

    The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier-spectrum-based fractal descriptors are estimated at specific scales and directions to characterize the image. The support vector machine (SVM) is used to perform supervised classification. The empirical study is applied to the problem of distinguishing between normal and abnormal brain magnetic resonance images (MRI) affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, as well as those estimated directly from the original image.

  10. Analysis of fractal dimensions of rat bones from film and digital images

    NASA Technical Reports Server (NTRS)

    Pornprasertsuk, S.; Ludlow, J. B.; Webber, R. L.; Tyndall, D. A.; Yamauchi, M.

    2001-01-01

    OBJECTIVES: (1) To compare the effect of two different intra-oral image receptors on estimates of fractal dimension; and (2) to determine the variations in fractal dimension between the femur, tibia and humerus of the rat and between their proximal, middle and distal regions. METHODS: The left femur, tibia and humerus from 24 4-6-month-old Sprague-Dawley rats were radiographed using intra-oral film and a charge-coupled device (CCD). Films were digitized at a pixel density comparable to the CCD using a flat-bed scanner. Square regions of interest were selected from the proximal, middle, and distal regions of each bone. Fractal dimensions were estimated from the slope of regression lines fitted to plots of log power against log spatial frequency. RESULTS: The fractal dimension estimates from digitized films were significantly greater than those produced from the CCD (P=0.0008). Estimated fractal dimensions of the three types of bone were not significantly different (P=0.0544); however, the three regions of the bones were significantly different (P=0.0239). The fractal dimensions estimated from radiographs of the proximal and distal regions of the bones were lower than comparable estimates obtained from the middle region. CONCLUSIONS: Different types of image receptors significantly affect estimates of fractal dimension. There was no difference in the fractal dimensions of the different bones, but the three regions differed significantly.
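    The spectral estimator described in METHODS (slope of a regression line fitted to log power against log spatial frequency) can be sketched as follows; the log-spaced binning and the conversion from slope to dimension (D ≈ (8 − β)/2 for fBm-like surfaces with P(f) ∝ f^−β) are common conventions, not details taken from the paper:

```python
import numpy as np

def spectral_slope(img, nbins=20):
    """Slope of the log radially-averaged power spectrum vs log spatial frequency."""
    h, w = img.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    freq = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    # average power in logarithmically spaced radial frequency bins
    edges = np.geomspace(1.0 / max(h, w), 0.5, nbins + 1)
    logs_f, logs_p = [], []
    for lo_e, hi_e in zip(edges[:-1], edges[1:]):
        sel = (freq >= lo_e) & (freq < hi_e)
        if sel.any():
            logs_f.append(np.log(np.sqrt(lo_e * hi_e)))  # bin centre frequency
            logs_p.append(np.log(power[sel].mean()))
    slope, _ = np.polyfit(logs_f, logs_p, 1)
    return slope
```

    White noise has an approximately flat spectrum (slope near 0), while rougher, more correlated bone-texture surfaces have steeper negative slopes and hence lower estimated dimensions.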

  11. System considerations for efficient communication and storage of MSTI image data

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.

    1994-01-01

    The Ballistic Missile Defense Organization has been developing the capability to evaluate one or more high-rate sensor/hardware combinations by incorporating them as payloads on a series of Miniature Seeker Technology Insertion (MSTI) flights. This publication represents the final report of a 1993 study to analyze the potential impact of data compression and of related communication system technologies on post-MSTI 3 flights. Lossless compression is considered alone and in conjunction with various spatial editing modes. Additionally, JPEG and fractal algorithms are examined in order to bound the potential gains from the use of lossy compression, but lossless compression is clearly shown to better fit the goals of the MSTI investigations. Lossless compression factors of between 2:1 and 6:1 would provide significant benefits to both on-board mass memory and the downlink. For on-board mass memory, the savings could range from $5 million to $9 million. Such benefits should be possible by direct application of recently developed NASA VLSI microcircuits. It is shown that further downlink enhancements of 2:1 to 3:1 should be feasible through use of practical modifications to the existing modulation system and incorporation of Reed-Solomon channel coding. The latter enhancement could also be achieved by applying recently developed VLSI microcircuits.

  12. Crack image segmentation based on improved DBC method

    NASA Astrophysics Data System (ADS)

    Cao, Ting; Yang, Nan; Wang, Fengping; Gao, Ting; Wang, Weixing

    2017-11-01

    With the development of computer vision technology, crack detection based on digital image segmentation has attracted global attention among researchers and transportation ministries. Since cracks exhibit random shapes and complex textures, reliable crack detection remains a challenge. Therefore, a novel crack image segmentation method based on fractal DBC (differential box counting) is introduced in this paper. The proposed method estimates a fractal feature for every pixel based on neighborhood information, considering the contributions from all possible directions in the related block. The block moves one pixel at a time so that it covers all the pixels in the crack image. Unlike the classic DBC method, which only describes a fractal feature for the related region, this method achieves crack image segmentation according to the fractal feature of every pixel. Experiments show that the proposed method achieves satisfactory results in crack detection.
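    The classic DBC computation that the paper builds on can be sketched block-wise as follows (a common floor-based variant applied on a fixed grid, rather than the paper's per-pixel gliding version; 256 gray levels and a square image are assumed):

```python
import numpy as np

def dbc_dimension(gray, sizes=(2, 4, 8, 16)):
    """Differential box counting on a grayscale intensity surface."""
    m = gray.shape[0]                      # assume a square M-by-M image
    g = 256                                # number of gray levels
    counts = []
    for s in sizes:
        hbox = s * g / m                   # box height in gray-level units
        blocks = gray[:m - m % s, :m - m % s].reshape(m // s, s, m // s, s)
        mx = blocks.max(axis=(1, 3))
        mn = blocks.min(axis=(1, 3))
        # boxes needed to span the intensity range within each s-by-s block
        n = np.floor(mx / hbox) - np.floor(mn / hbox) + 1
        counts.append(n.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

    A flat image gives D = 2 and a very rough surface approaches 3, so high local DBC values flag the textured crack pixels against smoother pavement.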

  13. Singular spectrum decomposition of Bouligand-Minkowski fractal descriptors: an application to the classification of texture Images

    NASA Astrophysics Data System (ADS)

    Florindo, João Batista

    2018-04-01

    This work proposes the use of Singular Spectrum Analysis (SSA) for the classification of texture images, more specifically, to enhance the performance of the Bouligand-Minkowski fractal descriptors in this task. Fractal descriptors are known to be a powerful approach to model and, in particular, identify complex patterns in natural images. Nevertheless, the multiscale analysis involved in those descriptors makes them highly correlated. Although other attempts to address this point have been proposed in the literature, none of them investigated the relation between the fractal correlation and the well-established analyses employed for time series, of which SSA is one of the most powerful techniques. The proposed method was employed for the classification of benchmark texture images and the results were compared with those of other state-of-the-art classifiers, confirming the potential of this analysis in image classification.
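    A minimal SSA decomposition-reconstruction of a 1-D descriptor sequence might look like this (window length and component count are illustrative choices, not the paper's settings):

```python
import numpy as np

def ssa_components(series, window, k):
    """Reconstruct a 1-D sequence from its leading-k SSA components."""
    n = len(series)
    # trajectory (Hankel) matrix of lagged windows
    X = np.column_stack([series[i:i + window] for i in range(n - window + 1)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]       # rank-k approximation
    # diagonal averaging maps the matrix back to a series
    rec = np.zeros(n)
    cnt = np.zeros(n)
    for j in range(Xk.shape[1]):
        rec[j:j + window] += Xk[:, j]
        cnt[j:j + window] += 1
    return rec / cnt
```

    Applied to a correlated descriptor vector, keeping only the leading components removes the redundant, highly correlated part, which is the decorrelation effect the paper exploits.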

  14. Time Series Analysis OF SAR Image Fractal Maps: The Somma-Vesuvio Volcanic Complex Case Study

    NASA Astrophysics Data System (ADS)

    Pepe, Antonio; De Luca, Claudio; Di Martino, Gerardo; Iodice, Antonio; Manzo, Mariarosaria; Pepe, Susi; Riccio, Daniele; Ruello, Giuseppe; Sansosti, Eugenio; Zinno, Ivana

    2016-04-01

    The fractal dimension is a significant geophysical parameter describing natural surfaces; it represents the distribution of roughness over different spatial scales. In the case of volcanic structures, it has been related to the specific nature of materials and to the effects of active geodynamic processes. In this work, we present an analysis of the temporal behavior of fractal dimension estimates generated from multi-pass SAR images of the Somma-Vesuvio volcanic complex (South Italy). To this aim, we consider a Cosmo-SkyMed data set of 42 stripmap images acquired from ascending orbits between October 2009 and December 2012. Starting from these images, we generate a three-dimensional stack composed of the corresponding fractal maps (ordered according to the acquisition dates), after a proper co-registration. The time series of the pixel-by-pixel estimated fractal dimension values show that, over invariant natural areas, the fractal dimension does not reveal significant changes; on the contrary, over urban areas it assumes values outside the fractality range of natural surfaces and shows strong fluctuations. As a final result of our analysis, we generate a fractal map that includes only the areas where the fractal dimension is considered reliable and stable (i.e., whose standard deviation computed over the time series is reasonably small). The so-obtained fractal dimension map is then used to identify areas that are homogeneous from a fractal viewpoint. Indeed, the analysis of this map reveals the presence of two distinctive landscape units corresponding to the Mt. Vesuvio and Gran Cono. The comparison with the (simplified) geological map clearly shows the presence in these two areas of volcanic products of different ages.
    The presented fractal dimension map analysis demonstrates the ability to gauge the degree of evolution of the monitored volcanic edifice, and can profitably be extended in the future to other volcanic systems with very distinctive characteristics, with the aim of performing land classification, such as the identification of areas characterized by similar soil use, slopes and exposures.
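    The masking step described above, keeping only pixels whose fractal dimension is stable over the time series and plausible for a natural surface, reduces to a simple sketch (the threshold values are illustrative, not the paper's):

```python
import numpy as np

def stable_fractal_mask(stack, max_std, lo, hi):
    """Keep pixels whose fractal dimension is temporally stable and in-range."""
    mean = stack.mean(axis=0)   # stack: (time, rows, cols) fractal-dimension maps
    std = stack.std(axis=0)     # temporal fluctuation per pixel
    return (std <= max_std) & (mean >= lo) & (mean <= hi)
```

    Urban pixels fail either test (strong fluctuations or out-of-range values), so the surviving mask covers only the invariant natural areas used for the landscape-unit analysis.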

  15. Characterization of Atrophic Changes in the Cerebral Cortex Using Fractal Dimensional Analysis

    PubMed Central

    George, Anuh T.; Jeon, Tina; Hynan, Linda S.; Youn, Teddy S.; Kennedy, David N.; Dickerson, Bradford

    2010-01-01

    The purpose of this project is to apply a modified fractal analysis technique to high-resolution T1-weighted magnetic resonance images in order to quantify the alterations in the shape of the cerebral cortex that occur in patients with Alzheimer’s disease. Images were selected from the Alzheimer’s Disease Neuroimaging Initiative database (Control N=15, Mild-Moderate AD N=15). The images were segmented using a semi-automated analysis program. Four coronal and three axial profiles of the cerebral cortical ribbon were created. The fractal dimensions (Df) of the cortical ribbons were then computed using a box-counting algorithm. The mean Df of the cortical ribbons from AD patients was lower than that of age-matched controls on six of seven profiles. The fractal measure has regional variability which reflects local differences in brain structure. Fractal dimension is complementary to volumetric measures and may assist in identifying disease state or disease progression. PMID:20740072

  16. Hands-On Fractals and the Unexpected in Mathematics

    ERIC Educational Resources Information Center

    Gluchoff, Alan

    2006-01-01

    This article describes a hands-on project in which unusual fractal images are produced using only a photocopy machine and office supplies. The resulting images are an example of the contraction mapping principle.

  17. Edge detection of optical subaperture image based on improved differential box-counting method

    NASA Astrophysics Data System (ADS)

    Li, Yi; Hui, Mei; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-01-01

    Optical synthetic aperture imaging technology is an effective approach to improving imaging resolution. Compared with a monolithic mirror system, the image from an optical synthetic aperture system is often more complex at the edges, and the gaps between segments make stitching a difficult problem. It is therefore necessary to extract the edges of subaperture images to achieve effective stitching. Fractal dimension, as a measured feature, can describe image surface texture characteristics, which provides a new approach to edge detection. In our research, an improved differential box-counting method is used to calculate the fractal dimension of the image, and the obtained fractal dimension is then mapped to a grayscale image to detect edges. Compared with the original differential box-counting method, this method has two improvements: first, by modifying the box-counting mechanism, a box with a fixed height is replaced by a box with adaptive height, which solves the problem of over-counting the number of boxes covering the image intensity surface; second, an image reconstruction method based on a super-resolution convolutional neural network is used to enlarge small images, which solves the problem that fractal dimension cannot be calculated accurately for small images, while maintaining the scale invariance of the fractal dimension. The experimental results show that the proposed algorithm can effectively suppress noise and has a lower false detection rate than traditional edge detection algorithms. In addition, this algorithm maintains the integrity and continuity of image edges while retaining important edge information.

  18. [Modeling continuous scaling of NDVI based on fractal theory].

    PubMed

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effect is one of the most important scientific problems in remote sensing. The scale effect in quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe the scale-changing features of retrievals across an entire series of scales; meanwhile, they face serious parameter-correction issues (e.g., geometric correction, spectral correction) because imaging parameters vary between sensors. Using a single-sensor image, a fractal methodology was applied to solve these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme is proposed to model the continuous scaling of retrievals. The experimental results indicate that: (1) for NDVI, a scale effect exists and can be described by a fractal model of continuous scaling; and (2) the fractal method is suitable for validation of NDVI. These results suggest that fractal analysis is an effective methodology for studying scaling in quantitative remote sensing.
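    NDVI itself follows the standard formula NDVI = (NIR − Red) / (NIR + Red); a minimal sketch (the `eps` guard against zero denominators is an implementation convenience, not from the paper):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index from NIR and red band values."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

    Values range from -1 to 1, with dense vegetation near 1 and bare surfaces near 0; it is this per-pixel index whose scaling across resolutions the fractal model describes.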

  19. Fractal and Gray Level Cooccurrence Matrix Computational Analysis of Primary Osteosarcoma Magnetic Resonance Images Predicts the Chemotherapy Response.

    PubMed

    Djuričić, Goran J; Radulovic, Marko; Sopta, Jelena P; Nikitović, Marina; Milošević, Nebojša T

    2017-01-01

    The prediction of induction chemotherapy response at the time of diagnosis may improve outcomes in osteosarcoma by allowing for personalized tailoring of therapy. The aim of this study was thus to investigate the predictive potential of the so far unexploited computational analysis of osteosarcoma magnetic resonance (MR) images. Fractal and gray level cooccurrence matrix (GLCM) algorithms were employed in a retrospective analysis of MR images of primary osteosarcoma localized in the distal femur prior to OsteoSa induction chemotherapy. The predicted and actual chemotherapy response outcomes were then compared by means of receiver operating characteristic (ROC) analysis and accuracy calculation. Dbin, Λ, and SCN were the standard fractal and GLCM features that associated significantly with the chemotherapy outcome, but only in one of the analyzed planes. Our newly developed normalized fractal dimension, called the space-filling ratio (SFR), exerted an independent and much better predictive value, with significant prediction accomplished in two of the three imaging planes, an accuracy of 82%, and an area under the ROC curve of 0.20 (95% confidence interval 0-0.41). In conclusion, SFR as the newly designed fractal coefficient provided superior predictive performance in comparison to standard image analysis features, presumably by compensating for the tumor size variation in MR images.

  20. Method for non-referential defect characterization using fractal encoding and active contours

    DOEpatents

    Gleason, Shaun S [Knoxville, TN; Sari-Sarraf, Hamed [Lubbock, TX

    2007-05-15

    A method for identification of anomalous structures, such as defects, includes the steps of providing a digital image and applying fractal encoding to identify a location of at least one anomalous portion of the image. The method does not require a reference image to identify the location of the anomalous portion. The method can further include the step of initializing an active contour based on the location information obtained from the fractal encoding step and deforming an active contour to enhance the boundary delineation of the anomalous portion.

  1. Fractal morphology, imaging and mass spectrometry of single aerosol particles in flight (CXIDB ID 16)

    DOE Data Explorer

    Loh, N. Duane

    2012-06-20

    This deposition includes the aerosol diffraction images used for phasing, fractal morphology, and time-of-flight mass spectrometry. Files in this deposition are ordered in subdirectories that reflect the specifics.

  2. A Comparison of Local Variance, Fractal Dimension, and Moran's I as Aids to Multispectral Image Classification

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    2004-01-01

    The accuracy of traditional multispectral maximum-likelihood image classification is limited by the skewed statistical distributions of reflectances from the complex heterogeneous mixture of land cover types in urban areas. This work examines the utility of local variance, fractal dimension and Moran's I index of spatial autocorrelation in segmenting multispectral satellite imagery. Tools available in the Image Characterization and Modeling System (ICAMS) were used to analyze Landsat 7 imagery of Atlanta, Georgia. Although segmentation of panchromatic images is possible using indicators of spatial complexity, different land covers often yield similar values of these indices. Better results are obtained when a surface of local fractal dimension or spatial autocorrelation is combined as an additional layer in a supervised maximum-likelihood multispectral classification. The addition of fractal dimension measures is particularly effective at resolving land cover classes within urbanized areas, as compared to per-pixel spectral classification techniques.

  3. Multitemporal and Multiscaled Fractal Analysis of Landsat Satellite Data Using the Image Characterization and Modeling System (ICAMS)

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Emerson, Charles W.; Lam, Nina Siu-Ngan; Laymon, Charles A.

    1997-01-01

    The Image Characterization And Modeling System (ICAMS) is a public domain software package that is designed to provide scientists with innovative spatial analytical tools to visualize, measure, and characterize landscape patterns so that environmental conditions or processes can be assessed and monitored more effectively. In this study ICAMS has been used to evaluate how changes in fractal dimension, as a landscape characterization index, and resolution are related to differences in Landsat images collected at different dates for the same area. Landsat Thematic Mapper (TM) data obtained in May and August 1993 over a portion of the Great Basin Desert in eastern Nevada were used for analysis. These data represent contrasting periods of peak "green-up" and "dry-down" for the study area. The TM data sets were converted into Normalized Difference Vegetation Index (NDVI) images to expedite analysis of differences in fractal dimension between the two dates. These NDVI images were also resampled to resolutions of 60, 120, 240, 480, and 960 meters from the original 30 meter pixel size, to permit an assessment of how fractal dimension varies with spatial resolution. Tests of fractal dimension for the two dates at various pixel resolutions show that the D values in the August image become increasingly more complex as pixel size increases to 480 meters. The D values in the May image show an even more complex relationship to pixel size than that expressed in the August image. The fractal dimension of a difference image computed for the May and August dates increases with pixel size up to a resolution of 120 meters, and then declines with increasing pixel size. This means that the greatest complexity in the difference image occurs around a resolution of 120 meters, which is analogous to the operational domain of changes in vegetation and snow cover that constitute differences between the two dates.

  4. Effect of deformation on the thermal conductivity of granular porous media with rough grain surface

    NASA Astrophysics Data System (ADS)

    Askari, Roohollah; Hejazi, S. Hossein; Sahimi, Muhammad

    2017-08-01

    Heat transfer in granular porous media is an important phenomenon that is relevant to a wide variety of problems, including geothermal reservoirs and enhanced oil recovery by thermal methods. Resistance to the flow of heat in the contact area between the grains strongly influences the effective thermal conductivity of such porous media. Extensive experiments have indicated that the roughness of the grains' surface follows self-affine fractal stochastic functions, and thus the contact resistance cannot be accounted for by models based on smooth surfaces. Despite the significance of the rough contact area, the resistance has typically been accounted for by a fitting parameter in models of heat transfer. In this Letter we report on a study of conduction in a packing of particles that contains a fluid of a given conductivity, with each grain having a rough self-affine surface, under an external compressive pressure. The deformation of the contact area depends on the fractal dimension that characterizes the grains' rough surface, as well as on their Young's modulus. The simulations show that, as the compressive pressure increases, the thermal conductivity is enhanced more for grains with smoother surfaces and lower Young's modulus. Excellent qualitative agreement is obtained with experimental data.

  5. Single-Image Super-Resolution Based on Rational Fractal Interpolation.

    PubMed

    Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming

    2018-08-01

    This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.

  6. a New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution

    NASA Astrophysics Data System (ADS)

    Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin

    Fractal theory has been widely used to study the petrophysical properties of porous rocks over several decades, and the determination of fractal dimensions has long been the focus of research and applications of fractal-based methods. In this work, a new method for calculating the pore-space fractal dimension and the tortuosity fractal dimension of porous media is derived based on a fractal capillary model assumption. The presented work establishes a relationship between the fractal dimensions and the pore size distribution, which can be used directly to calculate the fractal dimensions. Published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions and are simultaneously compared with prediction results from the analytical expression. In addition, the proposed fractal dimension method is tested on micro-CT images of three sandstone cores and compared with fractal dimensions obtained by the box-counting algorithm. The test results also demonstrate a self-similar fractal range in sandstone once smaller pores are excluded.
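    The scaling relation behind such methods can be made concrete. Assuming the standard fractal model for pore systems, in which the cumulative number of pores of radius at least r scales as N(>r) ∝ (r_max/r)^Df, the fractal dimension is the negative slope of the log-log pore size distribution. The helper below is an illustrative sketch, not the authors' analytical expression.

    ```python
    import math

    def fractal_dimension_from_psd(radii, counts):
        """Estimate the pore-space fractal dimension D_f from a pore size
        distribution, assuming the fractal scaling N(>r) ~ (r_max/r)**D_f,
        i.e. a straight line of slope -D_f in log-log coordinates.
        `radii` are pore radii, `counts` the cumulative number of pores
        with radius >= r; a least-squares fit gives the log-log slope."""
        xs = [math.log(r) for r in radii]
        ys = [math.log(n) for n in counts]
        n = len(xs)
        mx = sum(xs) / n; my = sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return -slope

    # Synthetic PSD obeying N(>r) = (r_max/r)**1.6 recovers D_f = 1.6:
    r_max = 100.0
    radii = [r_max / (1.3 ** k) for k in range(10)]
    counts = [(r_max / r) ** 1.6 for r in radii]
    print(round(fractal_dimension_from_psd(radii, counts), 3))  # → 1.6
    ```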

  7. [Fractal research of neurite growth in immunofluorescent images].

    PubMed

    Tang, Min; Wang, Huinan

    2008-12-01

    Fractal dimension has been widely used in medical image processing and analysis. Neurite growth of cultured dorsal root ganglion (DRG) neurons treated with nerve regeneration factor (0.1, 0.5, 2.0 mg/L) was detected by fluorescent immunocytochemistry. A novel method based on the triangular prism surface area (TPSA) was introduced and adopted to calculate the fractal dimension of the two-dimensional immunofluorescent images. Experimental results demonstrate that this method is easy to understand and convenient to operate, and the quantitative results are concordant with the observational findings under the microscope. This method can serve as a guideline for analyzing and evaluating experimental results.
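    The TPSA idea, measuring the apparent surface area of the intensity landscape at several grid sizes and reading the fractal dimension off the log-log slope, can be sketched as follows. This is a hedged reimplementation of Clarke's triangular-prism construction, not the authors' code; the corner-to-centre geometry and the default step sizes are illustrative choices.

    ```python
    import math

    def tpsa_area(img, step):
        """Total triangular-prism surface area of a grayscale height field
        at grid size `step`: each cell contributes four triangles joining
        its corner heights to the centre height (mean of the corners)."""
        C = math.sqrt(0.5)  # horizontal corner-to-centre distance per unit step
        h, w = len(img), len(img[0])
        total = 0.0
        for i in range(0, h - step, step):
            for j in range(0, w - step, step):
                a, b = img[i][j], img[i][j + step]
                c, d = img[i + step][j], img[i + step][j + step]
                e = (a + b + c + d) / 4.0
                for p, q in ((a, b), (b, d), (d, c), (c, a)):
                    s1 = math.hypot(step, p - q)      # corner-to-corner edge
                    s2 = math.hypot(step * C, p - e)  # corner-to-centre edges
                    s3 = math.hypot(step * C, q - e)
                    s = (s1 + s2 + s3) / 2.0
                    # Heron's formula (guarded against tiny negatives)
                    total += math.sqrt(max(s * (s - s1) * (s - s2) * (s - s3), 0.0))
        return total

    def tpsa_dimension(img, steps=(1, 2, 4, 8)):
        """Fractal dimension from the slope of log(area) vs log(step):
        measured area scales as step**(2 - D), so D = 2 - slope."""
        xs = [math.log(s) for s in steps]
        ys = [math.log(tpsa_area(img, s)) for s in steps]
        n = len(xs); mx = sum(xs) / n; my = sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return 2.0 - slope
    ```

    A perfectly flat image gives D ≈ 2, the dimension of a smooth surface; rough neurite textures push D above 2.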

  8. Implicit Weight Bias in Children Age 9 to 11 Years.

    PubMed

    Skinner, Asheley Cockrell; Payne, Keith; Perrin, Andrew J; Panter, Abigail T; Howard, Janna B; Bardone-Cone, Anna; Bulik, Cynthia M; Steiner, Michael J; Perrin, Eliana M

    2017-07-01

    Assess implicit weight bias in children 9 to 11 years old. Implicit weight bias was measured in children ages 9 to 11 (N = 114) by using the Affect Misattribution Procedure. Participants were shown a test image of a child for 350 milliseconds followed by a meaningless fractal (200 milliseconds), and then they were asked to rate the fractal image as "good" or "bad." We used 9 image pairs matched on age, race, sex, and activity but differing by the weight of the child. Implicit bias was the difference between positive ratings for fractals preceded by an image of a healthy-weight child and positive ratings for fractals preceded by an image of an overweight child. On average, 64% of abstract fractals shown after pictures of healthy-weight children were rated as "good," compared with 59% of those shown after pictures of overweight children, reflecting an overall implicit bias rate of 5.4% against overweight children (P < .001). Healthy-weight participants showed greater implicit bias than over- and underweight participants (7.9%, 1.4%, and 0.3%, respectively; P = .049). Implicit bias toward overweight individuals is evident in children aged 9 to 11 years, with a magnitude of implicit bias (5.4%) similar to that in studies of implicit racial bias among adults. Copyright © 2017 by the American Academy of Pediatrics.

  9. Plant Identification Based on Leaf Midrib Cross-Section Images Using Fractal Descriptors.

    PubMed

    da Silva, Núbia Rosa; Florindo, João Batista; Gómez, María Cecilia; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez

    2015-01-01

    The correct identification of plants is a common necessity not only for researchers but also for the lay public. Recently, computational methods have been employed to facilitate this task; however, there are few studies that confront the wide diversity of plants occurring in the world. This study proposes to analyse images obtained from cross-sections of the leaf midrib using fractal descriptors. These descriptors are obtained from the fractal dimension of the object computed at a range of scales. In this way, they provide rich information regarding the spatial distribution of the analysed structure and, as a consequence, they measure the multiscale morphology of the object of interest. In biology, such morphology is of great importance because it is related to evolutionary aspects and is successfully employed to characterize and discriminate among different biological structures. Here, the fractal descriptors are used to identify plant species based on images of their leaves. A large number of samples is examined: 606 leaf samples of 50 species from the Brazilian flora. The results are compared to other imaging methods in the literature and demonstrate that fractal descriptors are precise and reliable in the taxonomic process of plant species identification.

  10. Texture segmentation of non-cooperative spacecrafts images based on wavelet and fractal dimension

    NASA Astrophysics Data System (ADS)

    Wu, Kanzhi; Yue, Xiaokui

    2011-06-01

    With the increase of on-orbit manipulations and space conflicts, missions such as tracking and capturing target spacecraft have arisen. Unlike cooperative spacecraft, fixing beacons or any other markers on the targets is impossible. Due to the unknown shape and geometry of a non-cooperative spacecraft, in order to localize the target and obtain its attitude, we need to segment the target image and recognize the target against the background. The data volume and errors in subsequent procedures such as feature extraction and matching can also be reduced. Multi-resolution analysis in wavelet theory reflects human recognition of images from low resolution to high resolution. In addition, the spacecraft is the only man-made object in the image compared to the natural background, so differences will certainly be observed between the fractal dimensions of target and background. Combining the wavelet transform and fractal dimension, in this paper we propose a new segmentation algorithm for images which contain complicated backgrounds such as the universe and planet surfaces. First, a Daubechies wavelet basis is applied to decompose the image along both the x axis and y axis, thus obtaining four sub-images. Then, the fractal dimensions of the four sub-images are calculated using different methods; after analyzing the results, we choose differential box counting in the low-resolution sub-image as the criterion to segment the texture which has the greatest divergence between different sub-images. This paper also presents experimental results obtained using the algorithm above. It is demonstrated that an accurate texture segmentation result can be obtained using the proposed technique.
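    The differential box counting ingredient of the pipeline above can be sketched independently of the wavelet stage. A minimal sketch in the style of Sarkar and Chaudhuri's method, assuming a square gray-level image whose side is divisible by each box size; the scale choices are illustrative.

    ```python
    import math

    def dbc_dimension(img, sizes=(2, 4, 8)):
        """Differential box counting: at each scale s, partition the image
        into s x s columns of boxes of height s*G/M (G gray levels, M the
        image side) and count the boxes spanned between each block's min
        and max gray level; the fractal dimension is the slope of
        log N(s) against log(1/s)."""
        M, G = len(img), 256.0
        xs, ys = [], []
        for s in sizes:
            h = s * G / M  # box height at this scale
            count = 0
            for i in range(0, M, s):
                for j in range(0, M, s):
                    block = [img[u][v] for u in range(i, i + s) for v in range(j, j + s)]
                    count += int(max(block) // h) - int(min(block) // h) + 1
            xs.append(math.log(1.0 / s))
            ys.append(math.log(count))
        n = len(xs); mx = sum(xs) / n; my = sum(ys) / n
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))
    ```

    A constant image yields D = 2 (each block spans a single box, so N(s) = (M/s)²); rough textures such as planet surfaces push D toward 3.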

  11. GENERATING FRACTAL PATTERNS BY USING p-CIRCLE INVERSION

    NASA Astrophysics Data System (ADS)

    Ramírez, José L.; Rubiano, Gustavo N.; Zlobec, Borut Jurčič

    2015-10-01

    In this paper, we introduce the p-circle inversion, which generalizes the classical inversion with respect to a circle (p = 2) and the taxicab inversion (p = 1). We study some basic properties and also show the inversive images of some basic curves. We apply this new transformation to well-known fractals such as the Sierpinski triangle, the Koch curve, the dragon curve, and the Fibonacci fractal, among others, thereby obtaining new fractal patterns. Moreover, we generalize the method called circle inversion fractal by means of the p-circle inversion.
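    As an illustration of the map itself (hedged: the paper's precise definition may differ in details), a natural formulation sends a point x to the point on the same ray from the centre c with ‖x′ − c‖_p · ‖x − c‖_p = r², recovering classical circle inversion at p = 2 and taxicab inversion at p = 1:

    ```python
    def p_norm(v, p):
        """l_p length of a 2-D vector."""
        return (abs(v[0]) ** p + abs(v[1]) ** p) ** (1.0 / p)

    def p_circle_inversion(x, centre=(0.0, 0.0), r=1.0, p=2.0):
        """Invert point x in the p-circle of radius r about `centre`:
        x maps along its ray so that the product of p-distances to the
        centre equals r**2. Like classical inversion, the map is an
        involution: applying it twice returns the original point."""
        d = (x[0] - centre[0], x[1] - centre[1])
        k = r * r / p_norm(d, p) ** 2
        return (centre[0] + k * d[0], centre[1] + k * d[1])

    # Classical case p = 2: (2, 0) inverts to (0.5, 0) in the unit circle.
    print(p_circle_inversion((2.0, 0.0)))
    ```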

  12. Fractal Image Filters for Specialized Image Recognition Tasks

    DTIC Science & Technology

    2010-02-11

    butter sets of fractal geometers, such as Sierpinski triangles, twin-dragons, Koch curves, Cantor sets, fractal ferns, and so on. The geometries and...K is a centrally symmetric convex body in R^m, then the function ‖x‖K defines a norm on R^m. Moreover, the set K is the unit ball with respect to the...positive numbers r and R such that K contains a ball of radius r and is contained in a ball of radius R, the following proposition is clear

  13. Exsanguinated blood volume estimation using fractal analysis of digital images.

    PubMed

    Sant, Sonia P; Fairgrieve, Scott I

    2012-05-01

    The estimation of bloodstain volume using fractal analysis of digital images of passive blood stains is presented. Binary digital photos of bloodstains of known volumes (ranging from 1 to 7 mL), dispersed over a defined area, were subjected to image analysis using FracLac V. 2.0 for ImageJ. The box-counting method was used to generate a fractal dimension for each trial. A positive correlation between the generated fractal number and the volume of blood was found (R² = 0.99). Regression equations were produced to estimate the volume of blood in blind trials. An error rate ranging from 78% for 1 mL to 7% for 6 mL demonstrated that as the volume increases, so does the accuracy of the volume estimation. This preliminary study proved that bloodstain patterns may be deconstructed into mathematical parameters, thus removing the subjective element inherent in other methods of volume estimation. © 2012 American Academy of Forensic Sciences.

  14. Non-US data compression and coding research. FASAC Technical Assessment Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, R.M.; Cohn, M.; Craver, L.W.

    1993-11-01

    This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years--outside or inside the United States--there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.

  15. Mathematical models used in segmentation and fractal methods of 2-D ultrasound images

    NASA Astrophysics Data System (ADS)

    Moldovanu, Simona; Moraru, Luminita; Bibicu, Dorin

    2012-11-01

    Mathematical models are widely used in biomedical computing. Data extracted from images using mathematical techniques are the "pillar" supporting scientific progress in experimental, clinical, biomedical, and behavioural research. This article deals with the representation of 2-D images and highlights the mathematical support for the segmentation operation and fractal analysis in ultrasound images. A large number of mathematical techniques are suitable for application during the image processing stage. The topics addressed cover edge-based segmentation, more precisely gradient-based edge detection and the active contour model, and region-based segmentation, namely the Otsu method. Another interesting mathematical approach consists of analyzing the images using the Box Counting Method (BCM) to compute the fractal dimension. The results of the paper provide explicit examples produced by various combinations of methods.
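    The Box Counting Method mentioned above admits a compact sketch. Assuming the segmented structure is given as a set of pixel coordinates, the fractal dimension is the slope of log N(s) against log(1/s), where N(s) counts the boxes of side s that the structure touches (an illustrative helper, not the paper's code):

    ```python
    import math

    def box_counting_dimension(points, scales=(1, 2, 4, 8)):
        """Classic box-counting dimension of a binary structure given as
        a set of (x, y) pixel coordinates: count occupied boxes N(s) for
        each box size s and fit the slope of log N(s) vs log(1/s)."""
        xs, ys = [], []
        for s in scales:
            boxes = {(x // s, y // s) for x, y in points}
            xs.append(math.log(1.0 / s))
            ys.append(math.log(len(boxes)))
        n = len(xs); mx = sum(xs) / n; my = sum(ys) / n
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))

    # Sanity checks: a pixel line has dimension 1, a filled square 2.
    line = {(x, 0) for x in range(64)}
    square = {(x, y) for x in range(64) for y in range(64)}
    ```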

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Jingli; Chen, Cun; Wang, Gang

    This study explores the temporal scaling behavior of the shear-branching structures induced at various temperatures and strain rates during plastic deformation of a Zr-based bulk metallic glass (BMG). Data analysis based on compression tests suggests that there are two states of shear-branching structure: a fractal structure with long-range order at an intermediate temperature of 223 K and a larger strain rate of 2.5 × 10^-2 s^-1, and a disordered structure that dominates at other temperatures and strain rates. It can be deduced from percolation theory that the compressive ductility, ec, reaches its maximum value at the intermediate temperature. Furthermore, a dynamical model involving temperature is given for depicting the shear-sliding process, showing that the plastic deformation has a fractal structure at a temperature of 223 K and a strain rate of 2.5 × 10^-2 s^-1.

  17. Buried mine detection using fractal geometry analysis to the LWIR successive line scan data image

    NASA Astrophysics Data System (ADS)

    Araki, Kan

    2012-06-01

    We have engaged in research on buried mine/IED detection by remote sensing using an LWIR camera. An IR image of ground containing buried objects can be regarded as a superimposed pattern including thermal scattering, which may depend on the ground surface roughness, vegetation canopy, and the effect of sunlight, and radiation due to various heat interactions caused by differences in the specific heat, size, and burial depth of the objects and the local temperature of their surrounding environment. In this cumbersome environment, we introduce fractal geometry for analyzing an IR image. Clutter patterns due to these complex elements often have a low-order fractal dimension in terms of the Hausdorff dimension. On the other hand, target patterns tend to have a higher-order fractal dimension in terms of the information dimension. The random shuffle surrogate method or the Fourier transform surrogate method is used to evaluate fractal statistics by shuffling the time-sequence data or the phase of the spectrum. Fractal interpolation of each line scan was also applied to improve signal processing performance, in order to avoid division by zero and enhance the information content of the data. Some results of target extraction using the relationship between low- and high-order fractal dimensions are presented.
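    The random shuffle surrogate idea can be sketched compactly: surrogates keep the amplitude distribution of a scan line but destroy its ordering, so a sequence-dependent statistic of the real data can be ranked against the surrogate ensemble. This is a generic illustration with lag-1 autocorrelation as the statistic, not the authors' pipeline; the ensemble size and seed are arbitrary choices.

    ```python
    import random

    def shuffle_surrogate_test(series, n_surrogates=100, seed=0):
        """Random-shuffle surrogate test: compare a sequence-dependent
        statistic (here lag-1 autocorrelation) of the data against an
        ensemble of shuffled copies, which preserve the value distribution
        but destroy temporal structure. Returns (statistic, one-sided
        fraction of surrogates scoring at least as high)."""
        def lag1(x):
            m = sum(x) / len(x)
            num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
            den = sum((a - m) ** 2 for a in x)
            return num / den
        rng = random.Random(seed)
        stat = lag1(series)
        exceed = 0
        for _ in range(n_surrogates):
            s = list(series)
            rng.shuffle(s)  # surrogate: same values, random order
            if lag1(s) >= stat:
                exceed += 1
        return stat, exceed / n_surrogates
    ```

    A strongly ordered signal (e.g. a monotone ramp) yields a high statistic that no shuffled surrogate matches, flagging genuine temporal structure.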

  18. Multifractal modeling, segmentation, prediction, and statistical validation of posterior fossa tumors

    NASA Astrophysics Data System (ADS)

    Islam, Atiq; Iftekharuddin, Khan M.; Ogg, Robert J.; Laningham, Fred H.; Sivakumar, Bhuvaneswari

    2008-03-01

    In this paper, we characterize the tumor texture in pediatric brain magnetic resonance images (MRIs) and exploit these features for automatic segmentation of posterior fossa (PF) tumors. We focus on PF tumors because of the prevalence of such tumors in pediatric patients. Due to their varying appearance in MRI, we propose to model the tumor texture with a multi-fractal process, such as multi-fractional Brownian motion (mBm). In mBm, the time-varying Hölder exponent provides flexibility in modeling irregular tumor texture. We develop a detailed mathematical framework for mBm in two dimensions and propose a novel algorithm to estimate the multi-fractal structure of tissue texture in brain MRI based on wavelet coefficients. This wavelet-based multi-fractal feature, along with MR image intensity and a regular fractal feature obtained using our existing piecewise-triangular-prism-surface-area (PTPSA) method, is fused in segmenting PF tumor and non-tumor regions in brain T1, T2, and FLAIR MR images, respectively. We also demonstrate a non-patient-specific automated tumor prediction scheme based on these image features. We experimentally show the tumor-discriminating power of our novel multi-fractal texture, along with intensity and fractal features, in automated tumor segmentation and statistical prediction. To evaluate the performance of our tumor prediction scheme, we obtain ROCs and demonstrate how sharply the curves reach a specificity of 1.0 while sacrificing minimal sensitivity. Experimental results show the effectiveness of our proposed techniques in automatic detection of PF tumors in pediatric MRIs.

  19. Pond fractals in a tidal flat.

    PubMed

    Cael, B B; Lambert, Bennett; Bisson, Kelsey

    2015-11-01

    Studies over the past decade have reported power-law distributions for the areas of terrestrial lakes and Arctic melt ponds, as well as fractal relationships between their areas and coastlines. Here we report similar fractal structure of ponds in a tidal flat, thereby extending the spatial and temporal scales on which such phenomena have been observed in geophysical systems. Images taken during low tide of a tidal flat in Damariscotta, Maine, reveal a well-resolved power-law distribution of pond sizes over three orders of magnitude with a consistent fractal area-perimeter relationship. The data are consistent with the predictions of percolation theory for unscreened perimeters and scale-free cluster size distributions and are robust to alterations of the image processing procedure. The small spatial and temporal scales of these data suggest this easily observable system may serve as a useful model for investigating the evolution of pond geometries, while emphasizing the generality of fractal behavior in geophysical surfaces.
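    The fractal area-perimeter relationship reported here is commonly fit as P ∝ A^(D/2), so the boundary dimension D is twice the slope of log P against log A. A minimal sketch (illustrative, not the authors' code):

    ```python
    import math

    def area_perimeter_dimension(areas, perimeters):
        """Fractal dimension D of boundaries from the area-perimeter
        relation P ~ A**(D/2): fit the log-log slope and double it.
        Smooth shapes give D = 1; convoluted shorelines push D toward 2."""
        xs = [math.log(a) for a in areas]
        ys = [math.log(p) for p in perimeters]
        n = len(xs); mx = sum(xs) / n; my = sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return 2.0 * slope

    # Sanity check with circles: P = 2*pi*r and A = pi*r**2 give D = 1.
    radii = [1.0, 2.0, 3.0, 4.0, 5.0]
    areas = [math.pi * r * r for r in radii]
    perims = [2.0 * math.pi * r for r in radii]
    ```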

  20. Pond fractals in a tidal flat

    NASA Astrophysics Data System (ADS)

    Cael, B. B.; Lambert, Bennett; Bisson, Kelsey

    2015-11-01

    Studies over the past decade have reported power-law distributions for the areas of terrestrial lakes and Arctic melt ponds, as well as fractal relationships between their areas and coastlines. Here we report similar fractal structure of ponds in a tidal flat, thereby extending the spatial and temporal scales on which such phenomena have been observed in geophysical systems. Images taken during low tide of a tidal flat in Damariscotta, Maine, reveal a well-resolved power-law distribution of pond sizes over three orders of magnitude with a consistent fractal area-perimeter relationship. The data are consistent with the predictions of percolation theory for unscreened perimeters and scale-free cluster size distributions and are robust to alterations of the image processing procedure. The small spatial and temporal scales of these data suggest this easily observable system may serve as a useful model for investigating the evolution of pond geometries, while emphasizing the generality of fractal behavior in geophysical surfaces.

  1. Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method.

    PubMed

    Huh, Kyung-Hoe; Baik, Jee-Seon; Yi, Won-Jin; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo

    2011-06-01

    This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm.

  2. Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method

    PubMed Central

    Huh, Kyung-Hoe; Baik, Jee-Seon; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo

    2011-01-01

    Purpose This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Materials and Methods Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. Results The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. Conclusion The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm. PMID:21977478

  3. FAST TRACK COMMUNICATION: Weyl law for fat fractals

    NASA Astrophysics Data System (ADS)

    Spina, María E.; García-Mata, Ignacio; Saraceno, Marcos

    2010-10-01

    It has been conjectured that for a class of piecewise linear maps the closure of the set of images of the discontinuity has the structure of a fat fractal, that is, a fractal with positive measure. An example of such maps is the sawtooth map in the elliptic regime. In this work we analyze this problem quantum mechanically in the semiclassical regime. We find that the fraction of states localized on the unstable set satisfies a modified fractal Weyl law, where the exponent is given by the exterior dimension of the fat fractal.

  4. Holographic Characterization of Colloidal Fractal Aggregates

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Cheong, Fook Chiong; Ruffner, David B.; Zhong, Xiao; Ward, Michael D.; Grier, David G.

    In-line holographic microscopy images of micrometer-scale fractal aggregates can be interpreted with the Lorenz-Mie theory of light scattering and an effective-sphere model to obtain each aggregate's size and the population-averaged fractal dimension. We demonstrate this technique experimentally using model fractal clusters of polystyrene nanoparticles and fractal protein aggregates composed of bovine serum albumin and bovine pancreas insulin. This technique can characterize several thousand aggregates in ten minutes and naturally distinguishes aggregates from contaminants such as silicone oil droplets. Work supported by the SBIR program of the NSF.

  5. Characteristics of Crushing Energy and Fractal of Magnetite Ore under Uniaxial Compression

    NASA Astrophysics Data System (ADS)

    Gao, F.; Gan, D. Q.; Zhang, Y. B.

    2018-03-01

    The crushing mechanism of magnetite ore is a critical theoretical problem for controlling energy dissipation and machine crushing quality in ore processing. Uniaxial crushing tests were carried out to investigate the deformation mechanism and the laws of energy evolution, on the basis of which the crushing mechanism of magnetite ore was explored. The compaction stage and the plasticity-and-damage stage are the two main compressive deformation stages; the main transitional forms from inner damage to fracture are plastic deformation and stick-slip. In the crushing process, the plasticity-and-damage stage is the key link in energy absorption, since the specimen tends toward a saturated energy state as it approaches the peak stress. The characteristics of specimen deformation and energy dissipation together reflect the state of pre-existing defects in the raw magnetite ore and the damage process during loading. The fast release of elastic energy and the work done by the press machine together break the raw magnetite ore thoroughly after the peak stress. Magnetite ore fragments exhibit statistical self-similarity and a size threshold for fractal characteristics under uniaxial squeezing crushing. The larger the ratio of releasable elastic energy to dissipated energy and the faster the energy change rate, the better the fractal properties and crushing quality of the magnetite ore under uniaxial crushing.

  6. Fractal aspects of the flow and shear behaviour of free-flowable particle size fractions of pharmaceutical directly compressible excipient sorbitol.

    PubMed

    Hurychová, Hana; Lebedová, Václava; Šklubalová, Zdenka; Dzámová, Pavlína; Svěrák, Tomáš; Stoniš, Jan

    Flowability of powder excipients is directly influenced by their size and shape, although the granulometric influence on the flow and shear behaviour of particulate matter is not studied frequently. In this work, the influence of particle size on the mass flow rate through the orifice of a conical hopper, and on the cohesion and flow function, was studied for four free-flowable size fractions of sorbitol for direct compression in the range of 0.080-0.400 mm. The particles were granulometrically characterized using optical microscopy; a boundary fractal dimension of 1.066 was estimated for regular sorbitol particles. In the particle size range studied, a non-linear relationship between the mean particle size and the mass flow rate Q10 (g/s) was detected, with a maximum at the 0.245 mm fraction. The best flow properties of this fraction were verified with a Jenike shear tester, owing to the highest value of the flow function and the lowest value of cohesion. The results of this work show the importance of the right choice of excipient particle size to achieve the best flow behaviour of particulate material. Key words: flowability, size fraction, sorbitol for direct compaction, Jenike shear tester, fractal dimension.

  7. Fractals, Fuzzy Sets And Image Representation

    NASA Astrophysics Data System (ADS)

    Dodds, D. R.

    1988-10-01

    This paper addresses some uses of fractals, fuzzy sets and image representation as they pertain to robotic grip planning and autonomous vehicle navigation (AVN). The robot/vehicle is assumed to be equipped with multimodal sensors, including an ultrashort-pulse imaging laser rangefinder. With a temporal resolution of 50 femtoseconds, a time-of-flight laser rangefinder can resolve distances within approximately half an inch, or 1.25 centimeters. (Fujimoto88)

  8. Fractal Dimensions of Umbral and Penumbral Regions of Sunspots

    NASA Astrophysics Data System (ADS)

    Rajkumar, B.; Haque, S.; Hrudey, W.

    2017-11-01

    The images of sunspots in 16 active regions taken at the University College of the Cayman Islands (UCCI) Observatory on Grand Cayman during June-November 2015 were used to determine their fractal dimensions using the perimeter-area method for the umbral and the penumbral regions. Scale-free fractal dimensions of 2.09 ± 0.42 and 1.72 ± 0.4 were found, respectively. These values are higher than the value determined by Chumak and Chumak (Astron. Astrophys. Trans. 10, 329, 1996), who used a similar method, but only for the penumbral region of their sample set. The umbral and penumbral fractal dimensions for the specific sunspots are positively correlated, with r = 0.58. Furthermore, a similar time-series analysis was performed on eight images of AR 12403, from 21 August 2015 to 28 August 2015, taken from the Debrecen Photoheliographic Data (DPD). The correlation between the umbral and penumbral fractal dimensions in the time series is r = 0.623, indicating that the morphological complexity of the umbra and that of the penumbra, as captured by the fractal dimension, follow each other in time as well.
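
    The perimeter-area method used here relates a region's boundary length P to its area A via P ∝ A^(D/2), where D is the fractal dimension of the boundary, so D is twice the slope of log P against log A. A minimal generic sketch (my own illustration, not the authors' code; the test shapes are synthetic):

```python
import numpy as np

def perimeter_area_dimension(perimeters, areas):
    """Fractal dimension from the perimeter-area relation P ~ A**(D/2)."""
    # Fit log P against log A; the slope equals D/2.
    slope, _ = np.polyfit(np.log(areas), np.log(perimeters), 1)
    return 2.0 * slope

# Smooth (non-fractal) test shapes: squares of side s have P = 4s and A = s**2,
# so the estimator should recover D close to 1.
sides = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
D = perimeter_area_dimension(4.0 * sides, sides**2)
print(round(D, 3))  # → 1.0
```

    For real sunspot images, the perimeters and areas would come from segmented umbral or penumbral regions across many spots, with D estimated from the ensemble.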

  9. New methodology for evaluating osteoclastic activity induced by orthodontic load

    PubMed Central

    ARAÚJO, Adriele Silveira; FERNANDES, Alline Birra Nolasco; MACIEL, José Vinicius Bolognesi; NETTO, Juliana de Noronha Santos; BOLOGNESE, Ana Maria

    2015-01-01

    Orthodontic tooth movement (OTM) is a dynamic process of bone modeling involving osteoclast-driven resorption on the compression side. Consequently, to estimate the influence of various situations on tooth movement, experimental studies need to analyze this cell. Objectives: The aim of this study was to test and validate a new method for evaluating osteoclastic activity stimulated by mechanical loading, based on fractal analysis of the periodontal ligament (PDL)-bone interface. Material and Methods: The mandibular right first molars of 14 rabbits were tipped mesially by a coil spring exerting a constant force of 85 cN. To evaluate the actual influence of osteoclasts on the fractal dimension of the bone surface, alendronate (3 mg/kg) was injected weekly in seven of those rabbits. After 21 days, the animals were killed and their jaws were processed for histological evaluation. Osteoclast counts and fractal analysis (by the box-counting method) of the PDL-bone interface were performed on histological sections of the right and left sides of the mandible. Results: An increase in the number of osteoclasts and in fractal dimension after OTM occurred only when alendronate was not administered. A strong correlation was found between the number of osteoclasts and fractal dimension. Conclusions: Our results suggest that osteoclastic activity leads to an increase in bone surface irregularity, which can be quantified by its fractal dimension. This makes fractal analysis by the box-counting method a potential tool for the assessment of osteoclastic activity on bone surfaces in microscopic examination. PMID:25760264
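
    The box-counting method mentioned above is simple to implement: cover a binary image with boxes of side s, count how many boxes N(s) contain part of the structure, and take minus the slope of log N(s) versus log s. A generic sketch (not the authors' code; the test images are my own):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Box-counting dimension of a binary image (True marks the structure)."""
    counts = []
    for s in sizes:
        h = -(-mask.shape[0] // s) * s  # pad dimensions up to a multiple of s
        w = -(-mask.shape[1] // s) * s
        padded = np.zeros((h, w), dtype=bool)
        padded[:mask.shape[0], :mask.shape[1]] = mask
        # Collapse each s-by-s box to one occupancy flag, then count occupied boxes.
        boxes = padded.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # N(s) ~ s**(-D), so D is minus the slope of log N(s) versus log s.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

filled = np.ones((64, 64), dtype=bool)  # a filled region: D should be ~2
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True                      # a straight boundary: D should be ~1
```

    Applied to a binarized PDL-bone interface, a rougher, more resorbed surface yields a higher D than a smooth one, which is the quantity the study correlates with osteoclast counts.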

  10. Experimental criteria for the determination of fractal parameters of premixed turbulent flames

    NASA Astrophysics Data System (ADS)

    Shepherd, I. G.; Cheng, Robert K.; Talbot, L.

    1992-10-01

    The influence of spatial resolution, digitization noise, the number of records used for averaging, and the method of analysis on the determination of the fractal parameters of a high Damköhler number, methane/air, premixed, turbulent stagnation-point flame is investigated in this paper. The flow exit velocity was 5 m/s and the turbulent Reynolds number was 70, based on an integral scale of 3 mm and a turbulence intensity of 7%. The light source was a copper vapor laser delivering 20 ns, 5 mJ pulses at 4 kHz, and tomographic cross-sections of the flame were recorded by a high-speed movie camera. The spatial resolution of the images is 155 × 121 μm/pixel with a field of view of 50 × 65 mm. The stepping-caliper technique for obtaining the fractal parameters is found to give the clearest indication of the cutoffs and of the effects of noise. It is necessary to ensemble-average the results from more than 25 statistically independent images to sufficiently reduce the scatter in the fractal parameters. The effects of reduced spatial resolution on fractal plots are estimated by artificially degrading the resolution of the digitized flame boundaries. The effect of pixel resolution, an apparent increase in flame length below the inner-scale rolloff, appears in the fractal plots when the measurement scale is less than approximately twice the pixel resolution. Although a clearer determination of fractal parameters is obtained by local averaging of the flame boundaries, which removes digitization noise, at low spatial resolution this technique can reduce the fractal dimension. The degree of fractal isotropy of the flame surface can have a significant effect on the estimation of the flame surface area, and hence the burning rate, from two-dimensional images. To estimate this isotropy, a determination of the outer cutoff is required, and three-dimensional measurements are probably also necessary.
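
    The stepping-caliper (divider) technique walks along the digitized boundary with a fixed opening ε and records the apparent length L(ε); for a fractal curve L(ε) ∝ ε^(1−D). A generic sketch of the idea (my own illustration, not the authors' implementation), exercised on a smooth line where D should come out close to 1:

```python
import numpy as np

def caliper_length(points, step):
    """Apparent length of an ordered polyline measured with a fixed caliper opening."""
    i, n_steps = 0, 0
    pos = points[0]
    while True:
        # Jump to the next vertex at least `step` away from the current position.
        d = np.linalg.norm(points[i:] - pos, axis=1)
        far = np.nonzero(d >= step)[0]
        if far.size == 0:
            break
        i += far[0]
        pos = points[i]
        n_steps += 1
    return n_steps * step

def caliper_dimension(points, steps):
    """L(eps) ~ eps**(1 - D), so D = 1 - slope of log L versus log eps."""
    lengths = [caliper_length(points, s) for s in steps]
    slope, _ = np.polyfit(np.log(steps), np.log(lengths), 1)
    return 1.0 - slope

# A straight boundary is not fractal: D should be close to 1.
line = np.column_stack([np.linspace(0.0, 100.0, 1001), np.zeros(1001)])
D = caliper_dimension(line, steps=[1.0, 2.0, 4.0, 8.0])
```

    On a real flame boundary, the log-log plot bends at the inner and outer cutoffs, and D is taken from the slope of the intermediate scaling range; the abstract's point is that noise and pixel resolution distort exactly this plot.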

  11. Fractality of pulsatile flow in speckle images

    NASA Astrophysics Data System (ADS)

    Nemati, M.; Kenjeres, S.; Urbach, H. P.; Bhattacharya, N.

    2016-05-01

    The scattering of coherent light from a system with underlying flow can be used to yield essential information about the dynamics of the process. In the case of pulsatile flow, there is a rapid change in the properties of the speckle images. This can be studied using the standard laser speckle contrast and also the fractality of the images. In this paper, we report the results of experiments performed to study pulsatile flow with speckle images under different experimental configurations, to verify the robustness of the techniques for applications. In order to study flow under various levels of complexity, the measurements were done for three in vitro phantoms and two in vivo situations. The pumping mechanisms varied, ranging from mechanical pumps to the human heart in the in vivo case. The speckle images were analyzed using the techniques of fractal dimension and speckle contrast analysis. The results of these techniques for the various experimental scenarios were compared. The fractal dimension is a more sensitive measure for capturing the complexity of the signal, though it was observed to be extremely sensitive to the properties of the scattering medium as well, and, unlike speckle contrast, it cannot recover the signal for thicker diffusers.
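
    Speckle contrast, one of the two measures compared above, is simply K = σ/μ of the intensity: fully developed static speckle gives K ≈ 1, and motion (flow) averages independent speckle patterns within the exposure, lowering K. A toy illustration with simulated exponential intensity statistics (my own sketch, not the authors' pipeline):

```python
import numpy as np

def speckle_contrast(intensity):
    """Global speckle contrast K = std / mean of the intensity image."""
    return intensity.std() / intensity.mean()

rng = np.random.default_rng(0)
# Fully developed static speckle has exponentially distributed intensity: K ~ 1.
static = rng.exponential(1.0, size=(512, 512))
# Flow blurs the pattern by averaging independent speckles during the exposure;
# averaging two independent patterns gives K ~ 1/sqrt(2) ~ 0.71.
blurred = (static + rng.exponential(1.0, size=(512, 512))) / 2.0
```

    In practice K is computed locally over small sliding windows so that a spatial flow map can be formed, but the statistic itself is the one shown here.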

  12. Fractal analysis of seafloor textures for target detection in synthetic aperture sonar imagery

    NASA Astrophysics Data System (ADS)

    Nabelek, T.; Keller, J.; Galusha, A.; Zare, A.

    2018-04-01

    Fractal analysis of an image is a mathematical approach to generating surface-related features from an image or image tile that can be applied to image segmentation and object recognition. In undersea target countermeasures, the targets of interest can appear as anomalies in a variety of contexts, i.e., visually different textures on the seafloor. In this paper, we evaluate the use of fractal dimension as a primary feature, and related characteristics as secondary features, to be extracted from synthetic aperture sonar (SAS) imagery for the purpose of target detection. We develop three separate methods for computing fractal dimension. Tiles with targets are compared to others from the same background textures without targets. The different fractal dimension feature methods are tested with respect to how well they can be used to distinguish targets from false alarms within the same contexts. These features are evaluated for utility using a set of image tiles extracted from a SAS data set generated by the U.S. Navy in conjunction with the Office of Naval Research. We find that all three methods perform well in the classification task, with a fractional Brownian motion model performing best among the individual methods. We also find that the secondary features are just as useful, if not more so, in classifying false alarms vs. targets. The best overall classification accuracy, in our experimentation, is found when the features from all three methods are combined into a single feature vector.

  13. [Recent progress of research and applications of fractal and its theories in medicine].

    PubMed

    Cai, Congbo; Wang, Ping

    2014-10-01

    Fractal, a mathematical concept, is used to describe objects with self-similarity and scale invariance. Some biological structures have been found to have fractal characteristics, such as the cerebral cortex surface, the retinal vessel structure, the cardiovascular network, and trabecular bone. It has been preliminarily confirmed that the three-dimensional structure of cells cultured in vitro can be significantly enhanced by a bionic fractal surface. Moreover, fractal theory in clinical research will help early diagnosis and treatment of diseases, reducing patients' pain and suffering. The development process of diseases in the human body can be expressed by the parameters of fractal theory. It is of considerable significance to retrospectively review the preparation and application of fractal surfaces and their diagnostic value in medicine. This paper reviews applications of fractals and fractal theory in medical science, based on the research achievements of our laboratory.

  14. MORPH-I (Ver 1.0) a software package for the analysis of scanning electron micrograph (binary formatted) images for the assessment of the fractal dimension of enclosed pore surfaces

    USGS Publications Warehouse

    Mossotti, Victor G.; Eldeeb, A. Raouf; Oscarson, Robert

    1998-01-01

    MORPH-I is a set of C-language computer programs for the IBM PC and compatible microcomputers. The programs in MORPH-I are used for the fractal analysis of scanning electron microscope and electron microprobe images of pore profiles exposed in cross-section. The program isolates and traces the cross-sectional profiles of exposed pores and computes the Richardson fractal dimension for each pore. Other programs in the set provide for image calibration, display, and statistical analysis of the computed dimensions for highly complex porous materials. Requirements: IBM PC or compatible; minimum 640 K RAM; math coprocessor; SVGA graphics board providing mode 103 display.

  15. The Calculation of Fractal Dimension in the Presence of Non-Fractal Clutter

    NASA Technical Reports Server (NTRS)

    Herren, Kenneth A.; Gregory, Don A.

    1999-01-01

    The area of information processing has grown dramatically over the last 50 years. In image processing and information storage, the technology requirements have far outpaced the ability of the community to meet demands. The need for faster recognition algorithms and more efficient storage of large quantities of data has forced the user to accept less-than-lossless retrieval of that data for analysis. In addition to clutter that is not the object of interest in the data set, throughput requirements often force the user to accept "noisy" data and to tolerate the clutter inherent in the data. It has been shown that some of this clutter, both the intentional clutter (clouds, trees, etc.) and the noise introduced into the data by processing requirements, can be modeled as fractal or fractal-like. Traditional methods using Fourier deconvolution on these sources of noise in frequency space lead to loss of signal and can, in many cases, completely eliminate the target of interest. The parameters that characterize fractal-like noise (predominantly the fractal dimension) have been investigated, and a technique to reduce or eliminate noise from real scenes has been developed. Examples of clutter-reduced images are presented.

  16. Self-Similarity of Plasmon Edge Modes on Koch Fractal Antennas.

    PubMed

    Bellido, Edson P; Bernasconi, Gabriel D; Rossouw, David; Butet, Jérémy; Martin, Olivier J F; Botton, Gianluigi A

    2017-11-28

    We investigate the plasmonic behavior of Koch snowflake fractal geometries and their possible application as broadband optical antennas. Lithographically defined planar silver Koch fractal antennas were fabricated and characterized with high spatial and spectral resolution using electron energy loss spectroscopy. The experimental data are supported by numerical calculations carried out with a surface integral equation method. Multiple surface plasmon edge modes supported by the fractal structures have been imaged and analyzed. Furthermore, by isolating and reproducing self-similar features in long silver strip antennas, the edge modes present in the Koch snowflake fractals are identified. We demonstrate that the fractal response can be obtained by the sum of basic self-similar segments called characteristic edge units. Interestingly, the plasmon edge modes follow a fractal-scaling rule that depends on these self-similar segments formed in the structure after a fractal iteration. As the size of a fractal structure is reduced, coupling of the modes in the characteristic edge units becomes relevant, and the symmetry of the fractal affects the formation of hybrid modes. This analysis can be utilized not only to understand the edge modes in other planar structures but also in the design and fabrication of fractal structures for nanophotonic applications.

  17. Transition of temporal scaling behavior in percolation assisted shear-branching structure during plastic deformation

    DOE PAGES

    Ren, Jingli; Chen, Cun; Wang, Gang; ...

    2017-03-22

    This study explores the temporal scaling behavior of the shear-branching structure in response to varying temperatures and strain rates during plastic deformation of a Zr-based bulk metallic glass (BMG). Data analysis based on the compression tests suggests that there are two states of shear-branching structure: a fractal structure with long-range order at an intermediate temperature of 223 K and a larger strain rate of 2.5 × 10⁻² s⁻¹, and a disordered structure that dominates at other temperatures and strain rates. It can be deduced from percolation theory that the compressive ductility, εc, reaches its maximum value at the intermediate temperature. Furthermore, a dynamical model involving temperature is given for depicting the shear-sliding process, reflecting that the plastic deformation has a fractal structure at a temperature of 223 K and a strain rate of 2.5 × 10⁻² s⁻¹.

  18. Fractal analysis of the susceptibility weighted imaging patterns in malignant brain tumors during antiangiogenic treatment: technical report on four cases serially imaged by 7 T magnetic resonance during a period of four weeks.

    PubMed

    Di Ieva, Antonio; Matula, Christian; Grizzi, Fabio; Grabner, Günther; Trattnig, Siegfried; Tschabitscher, Manfred

    2012-01-01

    The need for new and objective indexes for the neuroradiologic follow-up of brain tumors and for monitoring the effects of antiangiogenic strategies in vivo led us to perform a technical study on four patients, whose tumor-associated vasculature was analyzed computationally from ultra-high-field (7 T) magnetic resonance imaging (MRI). The image analysis involved the application of susceptibility weighted imaging (SWI) to evaluate vascular structures. Four patients affected by recurrent malignant brain tumors were enrolled in the present study. After the first 7-T SWI MRI procedure, the patients underwent antiangiogenic treatment with bevacizumab. The imaging was repeated every 2 weeks over a period of 4 weeks. The SWI patterns visualized in the three MRI temporal sequences were analyzed by means of a computer-aided fractal-based method to objectively quantify their geometric complexity. In two clinically deteriorating patients, we found an increase over time in the geometric complexity of the space-filling properties of the SWI patterns despite the antiangiogenic treatment. In one patient, who showed improvement with the therapy, the fractal dimension of the intratumoral structure decreased, whereas in the fourth patient no differences were found. The qualitative changes of the intratumoral SWI patterns over a period of 4 weeks were quantified with the fractal dimension. Because SWI patterns are also related to the presence of vascular structures, the quantification of their space-filling properties with the fractal dimension seems to be a valid tool for the in vivo neuroradiologic follow-up of brain tumors. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. The museum of unnatural form: a visual and tactile experience of fractals.

    PubMed

    Della-Bosca, D; Taylor, R P

    2009-01-01

    A remarkable computer technology is revolutionizing the world of design, allowing intricate patterns to be created with mathematical precision and then 'printed' as physical objects. Contour crafting is a fabrication process capable of assembling physical structures the size of houses, firing the imagination of a new generation of architects and artists (Khoshnevis, 2008). Daniel Della-Bosca has jumped at this opportunity to create the 'Museum of Unnatural Form' at Griffith University. Della-Bosca's museum is populated with fractal sculptures - his own versions of nature's complex objects - that have been printed with the new technology. His sculptures bridge the historical divide in fractal studies between the abstract images of mathematics and the physical objects of Nature (Mandelbrot, 1982). Four of his fractal images will be featured on the cover of NDPLS in 2009.

  20. Understanding soft glassy materials using an energy landscape approach

    NASA Astrophysics Data System (ADS)

    Hwang, Hyun Joo; Riggleman, Robert A.; Crocker, John C.

    2016-09-01

    Many seemingly different soft materials--such as soap foams, mayonnaise, toothpaste and living cells--display strikingly similar viscoelastic behaviour. A fundamental physical understanding of such soft glassy rheology and how it can manifest in such diverse materials, however, remains unknown. Here, by using a model soap foam consisting of compressible spherical bubbles, whose sizes slowly evolve and whose collective motion is simply dictated by energy minimization, we study the foam's dynamics as it corresponds to downhill motion on an energy landscape function spanning a high-dimensional configuration space. We find that these downhill paths, when viewed in this configuration space, are, surprisingly, fractal. The complex behaviour of our model, including power-law rheology and non-diffusive bubble motion and avalanches, stems directly from the fractal dimension and energy function of these paths. Our results suggest that ubiquitous soft glassy rheology may be a consequence of emergent fractal geometry in the energy landscapes of many complex fluids.

  1. Consideration of the method of image diagnosis with respect to frontal lobe atrophy

    NASA Astrophysics Data System (ADS)

    Sato, K.; Sugawara, K.; Narita, Y.; Namura, I.

    1996-12-01

    Proposes a segmentation method for quantitative image diagnosis as a means of realizing an objective diagnosis of frontal lobe atrophy. From the data obtained on the grade of membership, the fractal dimensions of the cerebral tissues [cerebrospinal fluid (CSF), gray matter, and white matter] and their contours are estimated. The mutual relationship between the degree of atrophy and the fractal dimension has been analyzed based on the estimated fractal dimensions. Using a sample of 42 male and female cases, ranging in age from the 50s to the 70s, it has been concluded that frontal lobe atrophy can be quantified by regarding it as an expansion of the CSF region on magnetic resonance imaging (MRI) of the brain. Furthermore, when the process of frontal lobe atrophy is separated into early and advanced stages, the volumetric change of CSF and white matter in the frontal lobe displays meaningful differences between the two stages, demonstrating that the fractal dimension of CSF rises with the progress of atrophy. Moreover, an interpolation method for three-dimensional (3-D) shape reconstruction of the region of diagnostic interest is proposed, and 3-D shape visualization, with respect to the degree and form of atrophy, is performed on the basis of the estimated fractal dimension of the segmented cerebral tissue.

  2. SU-D-BRA-04: Fractal Dimension Analysis of Edge-Detected Rectal Cancer CTs for Outcome Prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, H; Wang, J; Hu, W

    2015-06-15

    Purpose: To extract fractal dimension features from edge-detected rectal cancer CTs, and to examine the ability of fractal dimensions to predict outcomes of primary rectal cancer patients. Methods: Ninety-seven rectal cancer patients treated with neo-adjuvant chemoradiation were enrolled in this study. CT images were obtained before chemoradiotherapy. The primary lesions of the rectal cancer were delineated by experienced radiation oncologists. These images were extracted and filtered by six different Laplacian of Gaussian (LoG) filters with different filter values (0.5-3.0: from fine to coarse) to capture the primary lesions at different anatomical scales. Edges of the original images were found at zero-crossings of the filtered images. Three different fractal dimensions (box-counting dimension, Minkowski dimension, mass dimension) were calculated on the image slice with the largest cross-section of the primary lesion. The significance of these fractal dimensions for survival, recurrence and metastasis was examined by Student's t-test. Results: For a follow-up time of two years, 18 of 97 patients had experienced recurrence, 24 had metastasis, and 18 had died. Minkowski dimensions under large filter values (2.0, 2.5, 3.0) were significantly larger (p=0.014, 0.006, 0.015) in patients with recurrence than in those without. For metastasis, only box-counting dimensions under a single filter value (2.5) showed a difference (p=0.016) between patients with and without. For overall survival, box-counting dimensions (filter values = 0.5, 1.0, 1.5), Minkowski dimensions (filter values = 0.5, 1.5, 2.0, 2.5) and mass dimensions (filter values = 1.5, 2.0) were all significant (p<0.05). Conclusion: It is feasible to extract shape information by edge detection and fractal dimension analysis in neo-adjuvant rectal cancer patients. This information can be used for prognosis prediction.
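
    The LoG-filtering and zero-crossing step described in the Methods can be sketched generically: filter the image with a Laplacian of Gaussian at scale sigma, then mark pixels where the response changes sign between neighbours. This is a generic illustration, not the study's code; `gaussian_laplace` from SciPy stands in for the LoG filter, and the synthetic square is my stand-in for a delineated lesion slice:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_zero_crossing_edges(image, sigma):
    """Edge map from zero-crossings of the Laplacian-of-Gaussian response."""
    response = gaussian_laplace(image.astype(float), sigma)
    s = np.sign(response)
    edges = np.zeros(image.shape, dtype=bool)
    # A sign change between vertical or horizontal neighbours marks an edge.
    edges[:-1, :] |= (s[:-1, :] * s[1:, :]) < 0
    edges[:, :-1] |= (s[:, :-1] * s[:, 1:]) < 0
    return edges

lesion = np.zeros((64, 64))
lesion[16:48, 16:48] = 1.0  # synthetic "lesion" on a flat background
edges = log_zero_crossing_edges(lesion, sigma=2.0)
```

    Sweeping sigma from 0.5 to 3.0, as in the abstract, yields edge maps at increasingly coarse anatomical scales, on which the box-counting, Minkowski and mass dimensions can then be computed.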

  3. Fractal characteristic in the wearing of cutting tool

    NASA Astrophysics Data System (ADS)

    Mei, Anhua; Wang, Jinghui

    1995-11-01

    This paper studies cutting tool wear with fractal geometry. The wear image of the flank was collected by a machine vision system consisting of a CCD camera and a personal computer. After processing by edge-preserving smoothing, binarization and edge extraction, a clear boundary enclosing the worn area was obtained. The fractal dimension of the worn surface is calculated by the methods called 'Slit Island' and 'Profile'. The experiments and calculations lead to the conclusion that the worn surface is enclosed by an irregular boundary curve with a fractal dimension and characteristics of self-similarity. Furthermore, the relation between the cutting velocity and the fractal dimension of the worn region is presented. This paper presents a series of methods for processing and analyzing the fractal information in flank wear, which can be applied to study the projective relation between the fractal structure and the wear state, and to establish a fractal model of cutting tool wear.

  4. A tale of two fractals: The Hofstadter butterfly and the integral Apollonian gaskets

    NASA Astrophysics Data System (ADS)

    Satija, Indubala I.

    2016-11-01

    This paper unveils a mapping between a quantum fractal that describes a physical phenomenon and an abstract geometrical fractal. The quantum fractal is the Hofstadter butterfly, discovered in 1976 in an iconic condensed matter problem of electrons moving in a two-dimensional lattice in a transverse magnetic field. The geometric fractal is the integral Apollonian gasket, characterized in terms of a 300 BC problem of mutually tangent circles. Both of these fractals are made up of integers. In the Hofstadter butterfly, these integers encode the topological quantum numbers of quantum Hall conductivity. In the Apollonian gaskets, an infinite number of mutually tangent circles are nested inside each other, where each circle has integer curvature. The mapping between these two fractals reveals a hidden D3 symmetry embedded in the kaleidoscopic images that describe the asymptotic scaling properties of the butterfly. This paper also serves as a mini review of these fractals, emphasizing their hierarchical aspects in terms of Farey fractions.

  5. Contour fractal analysis of grains

    NASA Astrophysics Data System (ADS)

    Guida, Giulia; Casini, Francesca; Viggiani, Giulia MB

    2017-06-01

    Fractal analysis has been shown to be useful in image processing to characterise the shape and the grey-scale complexity in different applications spanning from electronic to medical engineering (e.g. [1]). Fractal analysis consists of several methods to assign a dimension and other fractal characteristics to a dataset describing geometric objects. Limited studies have been conducted on the application of fractal analysis to the classification of the shape characteristics of soil grains. The main objective of the work described in this paper is to obtain, from the results of systematic fractal analysis of artificial simple shapes, the characterization of the particle morphology at different scales. The long term objective of the research is to link the microscopic features of granular media with the mechanical behaviour observed in the laboratory and in situ.

  6. Digital video technologies and their network requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. P. Tsang; H. Y. Chen; J. M. Brandt

    1999-11-01

    Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit-rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications which use these coding schemes and their network requirements is the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering and network bandwidth to the various coding techniques as a function of the applications which use them. Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.

  7. Factors Affecting the Changes of Ice Crystal Form in Ice Cream

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Watanabe, Manabu; Suzuki, Toru

    In this study, the shape of ice crystals in ice cream was quantitatively evaluated by introducing fractal analysis. A small droplet of commercial ice cream mix was quickly cooled to about -30°C on the cold stage of a microscope. Subsequently, it was heated to -5°C or -10°C and then held for various holding times. Based on the images captured at each holding time, the cross-sectional area and the circumference of each ice crystal were measured to calculate the fractal dimension using image analysis software. The results showed that the ice crystals fell into two groups, simple-shape and complicated-shape, according to their fractal dimensions. The fractal dimension of the ice crystals became lower with increasing holding time and holding temperature. It was also indicated that the growth rate of complicated-shape ice crystals was relatively higher because of aggregation.

  8. Functional slit lamp biomicroscopy for imaging bulbar conjunctival microvasculature in contact lens wearers

    PubMed Central

    Jiang, Hong; Zhong, Jianguang; DeBuc, Delia Cabrera; Tao, Aizhu; Xu, Zhe; Lam, Byron L.; Liu, Che; Wang, Jianhua

    2014-01-01

    Purpose: To develop, test and validate functional slit lamp biomicroscopy (FSLB) for generating non-invasive bulbar conjunctival microvascular perfusion maps (nMPMs) and assessing morphometry and hemodynamics. Methods: FSLB was adapted from a traditional slit-lamp microscope by attaching a digital camera to image the bulbar conjunctiva, create nMPMs and measure venular blood flow hemodynamics. High-definition images with a large field of view were obtained on the temporal bulbar conjunctiva for creating nMPMs. A high imaging rate of 60 frames per second and a ~210× magnification, for imaging hemodynamics, were achieved using the camera's built-in high-speed setting and movie crop function. Custom software was developed to segment bulbar conjunctival nMPMs for further fractal analysis and to quantitatively measure blood vessel diameter, blood flow velocity and flow rate. Six human subjects were imaged before and after 6 hours of wearing contact lenses. Monofractal and multifractal analyses were performed to quantify the fractality of the nMPMs. Results: The mean bulbar conjunctival vessel diameter was 18.8 ± 2.7 μm at baseline and increased to 19.6 ± 2.4 μm after 6 hours of lens wear (P = 0.020). The blood flow velocity increased from 0.60 ± 0.12 mm/s to 0.88 ± 0.21 mm/s (P = 0.001). The blood flow rate also increased from 129.8 ± 59.9 pl/s to 207.2 ± 81.3 pl/s (P = 0.001). The bulbar conjunctival nMPMs showed the intricate details of the bulbar conjunctival microvascular network. At baseline, the fractal dimension was 1.63 ± 0.05 and 1.71 ± 0.03 as analyzed by monofractal and multifractal analysis, respectively. Significant increases in fractal dimensions were found after 6 hours of lens wear (P < 0.05). Conclusions: The fractality, morphometry and hemodynamics of the human bulbar conjunctival microvascular network can be measured easily and reliably using FSLB. The alterations of the fractal dimensions, morphometry and hemodynamics during contact lens wear may indicate ocular microvascular responses to contact lens wear. PMID:24444784

  9. Investigating the effect of suspensions nanostructure on the thermophysical properties of nanofluids

    NASA Astrophysics Data System (ADS)

    Tesfai, Waka; Singh, Pawan K.; Masharqa, Salim J. S.; Souier, Tewfik; Chiesa, Matteo; Shatilla, Youssef

    2012-12-01

    The effect of the fractal dimensions and Feret diameter of aggregated nanoparticles on predicting the thermophysical properties of nanofluids is demonstrated. The fractal dimensions and Feret diameter distributions of particle agglomerates are quantified from scanning electron and probe microscope imaging of yttria nanofluids. The results are compared with the fractal dimensions calculated by fitting the rheological properties of yttria nanofluids against the modified Krieger-Dougherty model. Nanofluids of less than 1 vol. % particle loading are found to have fractal dimensions below 1.8, which is typical for diffusion-controlled cluster formation. By contrast, an increase in the particle loading increases the fractal dimension to 2.0-2.2. The fractal dimensions obtained from both methods are employed to predict the thermal conductivity of the nanofluids using the modified Maxwell-Garnett (M-G) model. The prediction from rheology is found to be inadequate and may lead to errors of up to 8% in thermal conductivity for an improper choice of aspect ratio. Nevertheless, the prediction of the modified M-G model from imaging is found to agree well with the experimentally observed effective thermal conductivity of the nanofluids. In addition, this study opens a new window on the study of aggregate kinetics, which is critical in tuning the properties of multiphase systems.

  10. Wireless teleradiology and fax using cellular phones and notebook PCs for instant access to consultants.

    PubMed

    Yamamoto, L G

    1995-03-01

    The feasibility of wireless portable teleradiology and facsimile (fax) transmission using a pocket cellular phone and a notebook computer to obtain immediate access to consultants at any location was studied. Modems specially designed for data and fax communication via cellular systems were employed to provide a data communication interface between the cellular phone and the notebook computer. Computed tomography (CT) scans, X-rays, and electrocardiograms (ECGs) were transmitted to a wireless unit to measure performance characteristics. Data transmission rates ranged from 520 to 1100 bytes per second. Typical image transmission times ranged from 1 to 10 minutes; however, using Joint Photographic Experts Group (JPEG) or fractal image compression methods would shorten typical transmission times to less than one minute. This study showed that wireless teleradiology and fax over cellular communication systems are feasible with current technology. Routine immediate cellular faxing of ECGs to cardiologists may expedite thrombolytic therapy decisions in questionable cases. Routine immediate teleradiology of CT scans may reduce operating room preparation times in severe head trauma.
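
    The reported throughput makes the compression argument easy to check with back-of-the-envelope arithmetic. In the sketch below, the 512 × 512 8-bit image size and the 20:1 compression ratio are illustrative assumptions, not figures from the study:

```python
# Rough transmission-time estimate for the cellular link described above.
# Assumed: a 512x512, 8-bit greyscale image (262,144 bytes) and the reported
# 520-1100 bytes/s throughput; the 20:1 compression ratio is illustrative.

IMAGE_BYTES = 512 * 512            # uncompressed 8-bit image
for rate in (520, 1100):           # reported cellular modem throughput, bytes/s
    t_raw = IMAGE_BYTES / rate               # seconds, uncompressed
    t_compressed = t_raw / 20                # hypothetical 20:1 compression
    print(f"{rate} B/s: raw {t_raw / 60:.1f} min, compressed {t_compressed:.0f} s")
```

    At 520-1100 B/s the uncompressed image takes roughly 4-8 minutes, consistent with the reported 1-10 minute range, while any compression ratio above about 10:1 brings it under the one-minute mark cited in the abstract.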

  11. The fractal dimension of cell membrane correlates with its capacitance: A new fractal single-shell model

    PubMed Central

    Wang, Xujing; Becker, Frederick F.; Gascoyne, Peter R. C.

    2010-01-01

    The scale-invariant property of the cytoplasmic membrane of biological cells is examined by applying the Minkowski–Bouligand method to digitized scanning electron microscopy images of the cell surface. The membrane is found to exhibit fractal behavior, and the derived fractal dimension gives a good description of its morphological complexity. Furthermore, we found that this fractal dimension correlates well with the specific membrane dielectric capacitance derived from the electrorotation measurements. Based on these findings, we propose a new fractal single-shell model to describe the dielectrics of mammalian cells, and compare it with the conventional single-shell model (SSM). We found that while both models fit with experimental data well, the new model is able to eliminate the discrepancy between the measured dielectric property of cells and that predicted by the SSM. PMID:21198103

  12. Fractal-based wideband invisibility cloak

    NASA Astrophysics Data System (ADS)

    Cohen, Nathan; Okoro, Obinna; Earle, Dan; Salkind, Phil; Unger, Barry; Yen, Sean; McHugh, Daniel; Polterzycki, Stefan; Shelman-Cohen, A. J.

    2015-03-01

    A wideband invisibility cloak (IC) at microwave frequencies is described. Using fractal resonators in closely spaced (sub-wavelength) arrays as a minimal number of cylindrical layers (rings), the IC demonstrates that it is physically possible to attain a `see through' cloaking device with: (a) wideband coverage; (b) simple and attainable fabrication; (c) high-fidelity emulation of the free path; (d) minimal side scattering; (e) a near absence of shadowing in the scattering. Although not a practical device, this fractal-enabled technology demonstrator opens up new opportunities for diverted-image (DI) technology and use of fractals in wideband optical, infrared, and microwave applications.

  13. Fractal analysis of MRI data for the characterization of patients with schizophrenia and bipolar disorder.

    PubMed

    Squarcina, Letizia; De Luca, Alberto; Bellani, Marcella; Brambilla, Paolo; Turkheimer, Federico E; Bertoldo, Alessandra

    2015-02-21

    Fractal geometry can be used to analyze shape and patterns in brain images. With this study we use fractals to analyze T1 data of patients affected by schizophrenia or bipolar disorder, with the aim of distinguishing between healthy and pathological brains using the complexity of brain structure, in particular of grey matter, as a marker of disease. 39 healthy volunteers, 25 subjects affected by schizophrenia and 11 patients affected by bipolar disorder underwent an MRI session. We evaluated fractal dimension of the brain cortex and its substructures, calculated with an algorithm based on the box-count algorithm. We modified this algorithm, with the aim of avoiding the segmentation processing step and using all the information stored in the image grey levels. Moreover, to increase sensitivity to local structural changes, we computed a value of fractal dimension for each slice of the brain or of the particular structure. To have reference values in comparing healthy subjects with patients, we built a template by averaging fractal dimension values of the healthy volunteers data. Standard deviation was evaluated and used to create a confidence interval. We also performed a slice by slice t-test to assess the difference at slice level between the three groups. Consistent average fractal dimension values were found across all the structures in healthy controls, while in the pathological groups we found consistent differences, indicating a change in brain and structures complexity induced by these disorders.
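
    Computing a fractal dimension directly from grey levels, with no segmentation step, is commonly realized with differential box counting. Below is a minimal sketch of the generic Sarkar-Chaudhuri estimator, not the authors' modified per-slice algorithm; applied slice by slice, it yields the kind of per-slice dimension profile described above:

```python
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Differential box-counting FD of a 2-D grey-level image.

    Generic Sarkar-Chaudhuri sketch: works directly on grey levels, so no
    segmentation step is needed.  FD is the slope of log N(s) vs log(1/s).
    """
    img = np.asarray(img, dtype=float)
    M = min(img.shape)
    G = img.max() - img.min() + 1              # grey-level range
    counts = []
    for s in sizes:
        h = max(1.0, s * G / M)                # box height in grey levels
        n = 0
        for i in range(0, img.shape[0] - s + 1, s):
            for j in range(0, img.shape[1] - s + 1, s):
                block = img[i:i + s, j:j + s]
                # boxes needed to cover the grey-level span of this block
                n += int(np.ceil(block.max() / h) - np.ceil(block.min() / h)) + 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

    A flat image yields FD ≈ 2 (a smooth surface), while increasingly rough intensity surfaces push the estimate toward 3, which is what makes the value usable as a structural complexity marker.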

  14. Fractal analysis of MRI data for the characterization of patients with schizophrenia and bipolar disorder

    NASA Astrophysics Data System (ADS)

    Squarcina, Letizia; De Luca, Alberto; Bellani, Marcella; Brambilla, Paolo; Turkheimer, Federico E.; Bertoldo, Alessandra

    2015-02-01

    Fractal geometry can be used to analyze shape and patterns in brain images. With this study we use fractals to analyze T1 data of patients affected by schizophrenia or bipolar disorder, with the aim of distinguishing between healthy and pathological brains using the complexity of brain structure, in particular of grey matter, as a marker of disease. 39 healthy volunteers, 25 subjects affected by schizophrenia and 11 patients affected by bipolar disorder underwent an MRI session. We evaluated fractal dimension of the brain cortex and its substructures, calculated with an algorithm based on the box-count algorithm. We modified this algorithm, with the aim of avoiding the segmentation processing step and using all the information stored in the image grey levels. Moreover, to increase sensitivity to local structural changes, we computed a value of fractal dimension for each slice of the brain or of the particular structure. To have reference values in comparing healthy subjects with patients, we built a template by averaging fractal dimension values of the healthy volunteers data. Standard deviation was evaluated and used to create a confidence interval. We also performed a slice by slice t-test to assess the difference at slice level between the three groups. Consistent average fractal dimension values were found across all the structures in healthy controls, while in the pathological groups we found consistent differences, indicating a change in brain and structures complexity induced by these disorders.

  15. The fractal heart — embracing mathematics in the cardiology clinic

    PubMed Central

    Captur, Gabriella; Karperien, Audrey L.; Hughes, Alun D.; Francis, Darrel P.; Moon, James C.

    2017-01-01

    For clinicians grappling with quantifying the complex spatial and temporal patterns of cardiac structure and function (such as myocardial trabeculae, coronary microvascular anatomy, tissue perfusion, myocyte histology, electrical conduction, heart rate, and blood-pressure variability), fractal analysis is a powerful, but still underused, mathematical tool. In this Perspectives article, we explain some fundamental principles of fractal geometry and place it in a familiar medical setting. We summarize studies in the cardiovascular sciences in which fractal methods have successfully been used to investigate disease mechanisms, and suggest potential future clinical roles in cardiac imaging and time series measurements. We believe that clinical researchers can deploy innovative fractal solutions to common cardiac problems that might ultimately translate into advancements for patient care. PMID:27708281

  16. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  17. Fractal dimension of trabecular bone projection texture is related to three-dimensional microarchitecture.

    PubMed

    Pothuaud, L; Benhamou, C L; Porion, P; Lespessailles, E; Harba, R; Levitz, P

    2000-04-01

    The purpose of this work was to understand how fractal dimension of two-dimensional (2D) trabecular bone projection images could be related to three-dimensional (3D) trabecular bone properties such as porosity or connectivity. Two alteration processes were applied to trabecular bone images obtained by magnetic resonance imaging: a trabeculae dilation process and a trabeculae removal process. The trabeculae dilation process was applied from the 3D skeleton graph to the 3D initial structure with constant connectivity. The trabeculae removal process was applied from the initial structure to an altered structure having 99% of porosity, in which both porosity and connectivity were modified during this second process. Gray-level projection images of each of the altered structures were simply obtained by summation of voxels, and fractal dimension (Df) was calculated. Porosity (phi) and connectivity per unit volume (Cv) were calculated from the 3D structure. Significant relationships were found between Df, phi, and Cv. Df values increased when porosity increased (dilation and removal processes) and when connectivity decreased (only removal process). These variations were in accordance with all previous clinical studies, suggesting that fractal evaluation of trabecular bone projection has real meaning in terms of porosity and connectivity of the 3D architecture. Furthermore, there was a statistically significant linear dependence between Df and Cv when phi remained constant. Porosity is directly related to bone mineral density and fractal dimension can be easily evaluated in clinical routine. These two parameters could be associated to evaluate the connectivity of the structure.

  18. Fractal dimension and the navigational information provided by natural scenes.

    PubMed

    Shamsyeh Zahedi, Moosarreza; Zeil, Jochen

    2018-01-01

    Recent work on virtual reality navigation in humans has suggested that navigational success is inversely correlated with the fractal dimension (FD) of artificial scenes. Here we investigate the generality of this claim by analysing the relationship between the fractal dimension of natural insect navigation environments and a quantitative measure of the navigational information content of natural scenes. We show that the fractal dimension of natural scenes is in general inversely proportional to the information they provide to navigating agents on heading direction as measured by the rotational image difference function (rotIDF). The rotIDF determines the precision and accuracy with which the orientation of a reference image can be recovered or maintained and the range over which a gradient descent in image differences will find the minimum of the rotIDF, that is the reference orientation. However, scenes with similar fractal dimension can differ significantly in the depth of the rotIDF, because FD does not discriminate between the orientations of edges, while the rotIDF is mainly affected by edge orientation parallel to the axis of rotation. We present a new equation for the rotIDF relating navigational information to quantifiable image properties such as contrast to show (1) that for any given scene the maximum value of the rotIDF (its depth) is proportional to pixel variance and (2) that FD is inversely proportional to pixel variance. This contrast dependence, together with scene differences in orientation statistics, explains why there is no strict relationship between FD and navigational information. Our experimental data and their numerical analysis corroborate these results.
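
    The rotIDF described above can be computed directly: for each trial heading, circularly shift a panoramic image and take the RMS pixel difference against the reference view. A minimal NumPy sketch, where the panoramic-array layout (azimuth along columns) is an assumption of convenience:

```python
import numpy as np

def rot_idf(panorama):
    """Rotational image difference function of a panoramic image.

    panorama: 2-D array with azimuth along axis 1, so a rotation of the
    agent corresponds to a circular column shift.  Returns the RMS pixel
    difference for every possible shift; a sketch of the measure, not the
    authors' exact implementation.
    """
    p = np.asarray(panorama, dtype=float)
    n_cols = p.shape[1]
    idf = np.empty(n_cols)
    for shift in range(n_cols):
        rotated = np.roll(p, shift, axis=1)        # rotate view by `shift` pixels
        idf[shift] = np.sqrt(np.mean((p - rotated) ** 2))
    return idf
```

    The value at shift 0 is zero by construction (the reference orientation), and the depth of that minimum relative to the plateau, which for decorrelated shifts scales with the pixel standard deviation in this RMS formulation, is the navigational information measure the abstract relates to pixel variance.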

  19. Surface Fractal Analysis for Estimating the Fracture Energy Absorption of Nanoparticle Reinforced Composites

    PubMed Central

    Pramanik, Brahmananda; Tadepalli, Tezeswi; Mantena, P. Raju

    2012-01-01

    In this study, the fractal dimensions of failure surfaces of vinyl ester based nanocomposites are estimated using two classical methods, Vertical Section Method (VSM) and Slit Island Method (SIM), based on the processing of 3D digital microscopic images. Self-affine fractal geometry has been observed in the experimentally obtained failure surfaces of graphite platelet reinforced nanocomposites subjected to quasi-static uniaxial tensile and low velocity punch-shear loading. Fracture energy and fracture toughness are estimated analytically from the surface fractal dimensionality. Sensitivity studies show an exponential dependency of fracture energy and fracture toughness on the fractal dimensionality. Contribution of fracture energy to the total energy absorption of these nanoparticle reinforced composites is demonstrated. For the graphite platelet reinforced nanocomposites investigated, surface fractal analysis has depicted the probable ductile or brittle fracture propagation mechanism, depending upon the rate of loading. PMID:28817017

  20. Focusing behavior of the fractal vector optical fields designed by fractal lattice growth model.

    PubMed

    Gao, Xu-Zhen; Pan, Yue; Zhao, Meng-Dan; Zhang, Guan-Lin; Zhang, Yu; Tu, Chenghou; Li, Yongnan; Wang, Hui-Tian

    2018-01-22

    We introduce a general fractal lattice growth model, significantly expanding the application scope of the fractal in the realm of optics. This model can be applied to construct various kinds of fractal "lattices" and then to achieve the design of a great diversity of fractal vector optical fields (F-VOFs) combinating with various "bases". We also experimentally generate the F-VOFs and explore their universal focusing behaviors. Multiple focal spots can be flexibly enginnered, and the optical tweezers experiment validates the simulated tight focusing fields, which means that this model allows the diversity of the focal patterns to flexibly trap and manipulate micrometer-sized particles. Furthermore, the recovery performance of the F-VOFs is also studied when the input fields and spatial frequency spectrum are obstructed, and the results confirm the robustness of the F-VOFs in both focusing and imaging processes, which is very useful in information transmission.

  1. The importance of robust error control in data compression applications

    NASA Technical Reports Server (NTRS)

    Woolley, S. I.

    1993-01-01

    Data compression has become an increasingly popular option as advances in information technology have placed further demands on data storage capabilities. With compression ratios as high as 100:1 the benefits are clear; however, the inherent intolerance of many compression formats to error events should be given careful consideration. If we consider that efficiently compressed data will ideally contain no redundancy, then the introduction of a channel error must result in a change of understanding from that of the original source. While the prefix property of codes such as Huffman enables resynchronisation, this is not sufficient to arrest propagating errors in an adaptive environment. Arithmetic, Lempel-Ziv, discrete cosine transform (DCT) and fractal methods are similarly prone to error propagating behaviors. It is, therefore, essential that compression implementations provide sufficient combatant error control in order to maintain data integrity. Ideally, this control should be derived from a full understanding of the prevailing error mechanisms and their interaction with both the system configuration and the compression schemes in use.
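
    The resynchronisation behaviour of prefix codes mentioned above is easy to demonstrate: with a static Huffman-style table, a single flipped bit garbles a few symbols and the decoder then falls back into step. The four-symbol code table below is an illustrative example, not taken from the study:

```python
# How a single channel bit error propagates, then dies out, in a static
# prefix (Huffman-style) code.

CODE = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}   # complete prefix code
DECODE = {v: k for k, v in CODE.items()}

def encode(msg):
    return ''.join(CODE[ch] for ch in msg)

def decode(bits):
    out, buf = [], ''
    for bit in bits:
        buf += bit
        if buf in DECODE:          # prefix property: decode greedily
            out.append(DECODE[buf])
            buf = ''
    return ''.join(out)

msg = 'aabacdba'
bits = encode(msg)
# flip a single bit in the channel
corrupt = bits[:3] + ('1' if bits[3] == '0' else '0') + bits[4:]
print(decode(bits))      # the original message
print(decode(corrupt))   # garbled near the error, resynchronised afterwards
```

    The tail of the message is recovered intact once the decoder resynchronises. As the abstract notes, this safety net disappears in an adaptive environment: with adaptive Huffman, arithmetic, or Lempel-Ziv coding the error also corrupts the evolving model, so the damage propagates indefinitely.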

  2. A brief discussion about image quality and SEM methods for quantitative fractography of polymer composites.

    PubMed

    Hein, L R O; Campos, K A; Caltabiano, P C R O; Kostov, K G

    2013-01-01

    The methodology for fracture analysis of polymeric composites with scanning electron microscopes (SEM) is still under discussion. Many authors prefer to use sputter coating with a conductive material instead of applying low-voltage (LV) or variable-pressure (VP) methods, which preserve the original surfaces. The present work examines the effects of sputter coating with 25 nm of gold on the topography of carbon-epoxy composite fracture surfaces, using an atomic force microscope. Also, the influence of SEM imaging parameters on fractal measurements is evaluated for the VP-SEM and LV-SEM methods. It was observed that topographic measurements were not significantly affected by the gold coating at the tested scale. Moreover, changes in the SEM setup lead to nonlinear effects on texture parameters, such as fractal dimension and entropy values. For VP-SEM or LV-SEM, fractal dimension and entropy values did not present any evident relation to image quality parameters, but the resolution must be optimized with the imaging setup, accompanied by charge neutralization. © Wiley Periodicals, Inc.

  3. A Complex Story: Universal Preference vs. Individual Differences Shaping Aesthetic Response to Fractals Patterns.

    PubMed

    Street, Nichola; Forsythe, Alexandra M; Reilly, Ronan; Taylor, Richard; Helmy, Mai S

    2016-01-01

    Fractal patterns offer one way to represent the rough complexity of the natural world. Whilst they dominate many of our visual experiences in nature, little large-scale perceptual research has been done to explore how we respond aesthetically to these patterns. Previous research (Taylor et al., 2011) suggests that fractal patterns with mid-range fractal dimensions (FDs) have universal aesthetic appeal. Perceptual and aesthetic responses to visual complexity have been more varied, with findings suggesting both linear (Forsythe et al., 2011) and curvilinear (Berlyne, 1970) relationships. Individual differences have been found to account for many of the differences we see in aesthetic responses, but some, such as culture, have received little attention within the fractal and complexity research fields. This two-study article tests preference responses to FD and visual complexity, using a large cohort (N = 443) of participants from around the world so that universality claims can be tested. It explores the extent to which age, culture and gender can predict our preferences for fractally complex patterns. Following exploratory analysis that found strong correlations between FD and visual complexity, a series of linear mixed-effect models were implemented to explore whether each of the individual variables could predict preference. The first tested a linear complexity model (likelihood of selecting the more complex image from the pair of images) and the second a mid-range FD model (likelihood of selecting an image within the mid-range). Results show that individual differences can reliably predict preferences for complexity across culture, gender and age. However, in keeping with current findings, the mid-range models show greater consistency in preference, not mediated by gender, age or culture. This article supports the established theory that mid-range fractal patterns are a universal construct underlying preference, but also highlights the fragility of universal claims by demonstrating individual differences in preference for the interrelated concept of visual complexity. This highlights a current stalemate in the field of empirical aesthetics.

  4. Small angle x-ray scattering study on the conformation of polystyrene in toluene during adding anti-solvent CO2

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Chen, Dong-Feng; Wang, Hong-Li; Chen, Na; Li, Dan; Han, Bu-Xing; Rong, Li-Xia; Zhao, Hui; Wang, Jun; Dong, Bao-Zhong

    2002-10-01

    The conformation of polystyrene in the anti-solvent process of supercritical fluids (compressed CO2 + polystyrene + toluene) has been studied by small angle x-ray scattering with synchrotron radiation as an x-ray source. Coil-to-globule transformation of the polystyrene chain was observed with the increase of the anti-solvent CO2 pressure; i.e. polystyrene coiled at a pressure lower than the cloud point pressure (Pc) and turned into a globule with a uniform density at pressures higher than Pc. Fractal behaviour was also found in the chain contraction and the mass fractal dimension increased with increasing CO2 pressure.

  5. Hyper-Fractal Analysis: A visual tool for estimating the fractal dimension of 4D objects

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Grossu, I.; Felea, D.; Besliu, C.; Jipa, Al.; Esanu, T.; Bordeianu, C. C.; Stan, E.

    2013-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images and 3D objects (Grossu et al. (2010) [1]). The program was extended to work with four-dimensional objects stored in comma-separated values files. This might be of interest in biomedicine, for analyzing the evolution in time of three-dimensional images.
    New version program summary:
    Program title: Hyper-Fractal Analysis (Fractal Analysis v03)
    Catalogue identifier: AEEG_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 745761
    No. of bytes in distributed program, including test data, etc.: 12544491
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 100M
    Classification: 14
    Catalogue identifier of previous version: AEEG_v2_0
    Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 831-832
    Does the new version supersede the previous version? Yes
    Nature of problem: Estimating the fractal dimension of 4D images.
    Solution method: Optimized implementation of the 4D box-counting algorithm.
    Reasons for new version: Inspired by existing applications of 3D fractals in biomedicine [3], we extended the optimized version of the box-counting algorithm [1,2] to the four-dimensional case. This might be of interest in analyzing the evolution in time of 3D images. The box-counting algorithm was extended to support 4D objects stored in comma-separated values files. A new form was added for generating 2D, 3D, and 4D test data. The application was tested on 4D objects with known dimension, e.g. the Sierpinski hypertetrahedron gasket, Df = ln(5)/ln(2) ≈ 2.32 (Fig. 1). The algorithm could be extended, with minimum effort, to a higher number of dimensions. Easy integration with other applications is possible by using the very simple comma-separated values file format for storing multi-dimensional images. A χ2 test was implemented as a criterion for deciding whether an object is fractal or not. The application has a user-friendly graphical interface.
    Running time: In a first approximation, the algorithm is linear [2].
    References: [1] I.V. Grossu, D. Felea, C. Besliu, Al. Jipa, C.C. Bordeianu, E. Stan, T. Esanu, Computer Physics Communications 181 (2010) 831-832. [2] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Computer Physics Communications 180 (2009) 1999-2001. [3] J. Ruiz de Miras, J. Navas, P. Villoslada, F.J. Esteban, Computer Methods and Programs in Biomedicine 104(3) (2011) 452-460.
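
    The d-dimensional box-counting idea at the heart of the program generalises directly in NumPy. A minimal sketch for point sets of any dimensionality, such as 4D coordinates loaded from a CSV file; the published program's optimisations and χ2 fractality test are not reproduced:

```python
import numpy as np

def box_count_dimension(points, sizes=(1, 2, 4, 8, 16)):
    """Box-counting dimension of a point set in any number of dimensions.

    points: (N, d) array of coordinates.  Counts the occupied boxes at each
    box size and fits the log-log slope; a generic sketch, not the
    optimised algorithm of the published program.
    """
    pts = np.asarray(points, dtype=float)
    counts = []
    for s in sizes:
        # occupied boxes at scale s: distinct integer box indices
        boxes = np.unique(np.floor(pts / s), axis=0)
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

    The same function handles 2D, 3D, or 4D input unchanged, which mirrors the program's extension path; in practice the scale range must stay well inside the extent and resolution of the data for the log-log fit to be meaningful.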

  6. Density distribution function of a self-gravitating isothermal compressible turbulent fluid in the context of molecular clouds ensembles

    NASA Astrophysics Data System (ADS)

    Donkov, Sava; Stefanov, Ivan Z.

    2018-03-01

    We have set ourselves the task of obtaining the probability distribution function of the mass density of a self-gravitating isothermal compressible turbulent fluid from its physics. We have done this in the context of a new notion: the molecular clouds ensemble. We have applied a new approach that takes into account the fractal nature of the fluid. Using the medium equations, under the assumption of steady state, we show that the total energy per unit mass is an invariant with respect to the fractal scales. As a next step we obtain a non-linear integral equation for the dimensionless scale Q which is the third root of the integral of the probability distribution function. It is solved approximately up to the leading-order term in the series expansion. We obtain two solutions. They are power-law distributions with different slopes: the first one is -1.5 at low densities, corresponding to an equilibrium between all energies at a given scale, and the second one is -2 at high densities, corresponding to a free fall at small scales.

  7. Oxidation-Mediated Fingering in Liquid Metals

    NASA Astrophysics Data System (ADS)

    Eaker, Collin B.; Hight, David C.; O'Regan, John D.; Dickey, Michael D.; Daniels, Karen E.

    2017-10-01

    We identify and characterize a new class of fingering instabilities in liquid metals; these instabilities are unexpected due to the large interfacial tension of metals. Electrochemical oxidation lowers the effective interfacial tension of a gallium-based liquid metal alloy to values approaching zero, thereby inducing drastic shape changes, including the formation of fractals. The measured fractal dimension (D =1.3 ±0.05 ) places the instability in a different universality class than other fingering instabilities. By characterizing changes in morphology and dynamics as a function of droplet volume and applied electric potential, we identify the three main forces involved in this process: interfacial tension, gravity, and oxidative stress. Importantly, we find that electrochemical oxidation can generate compressive interfacial forces that oppose the tensile forces at a liquid interface. The surface oxide layer ultimately provides a physical and electrochemical barrier that halts the instabilities at larger positive potentials. Controlling the competition between interfacial tension and oxidative (compressive) stresses at the interface is important for the development of reconfigurable electronic, electromagnetic, and optical devices that take advantage of the metallic properties of liquid metals.

  8. Zone specific fractal dimension of retinal images as predictor of stroke incidence.

    PubMed

    Aliahmad, Behzad; Kumar, Dinesh Kant; Hao, Hao; Unnikrishnan, Premith; Che Azemin, Mohd Zulfaezal; Kawasaki, Ryo; Mitchell, Paul

    2014-01-01

    Fractal dimensions (FDs) are frequently used for summarizing the complexity of the retinal vasculature. However, previous techniques were not zone specific. A new methodology to measure the FD of a specific zone in retinal images has been developed and tested as a marker for stroke prediction. Higuchi's fractal dimension was measured in the circumferential direction (FDC) with respect to the optic disk (OD), in three concentric regions between the OD boundary and 1.5 OD diameters from its margin. The significance of its association with a future stroke event was tested using the Blue Mountains Eye Study (BMES) database and compared against the spectrum fractal dimension (SFD) and box-counting (BC) dimension. Kruskal-Wallis analysis revealed FDC as a better predictor of stroke (H = 5.80, P = 0.016, α = 0.05) compared with SFD (H = 0.51, P = 0.475, α = 0.05) and BC (H = 0.41, P = 0.520, α = 0.05), with an overall lower median value for the cases compared to the control group. This work has shown that there is a significant association between the zone-specific FDC of eye fundus images and a future stroke event, while the difference is not significant when other FD methods are employed.
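
    Higuchi's estimator, the basis of the FDC measure above, operates on a 1-D series; sampled along a circumferential path around the optic disk, the same code applies unchanged. A generic sketch of the estimator, with the circumferential sampling itself omitted:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal.

    Generic estimator: average curve length L(k) at delays k = 1..kmax,
    FD is the slope of log L(k) against log(1/k).
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    ks = np.arange(1, kmax + 1)
    L = []
    for k in ks:
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)                  # subsampled series
            if len(idx) < 2:
                continue
            length = np.sum(np.abs(np.diff(x[idx])))
            norm = (N - 1) / ((len(idx) - 1) * k)     # Higuchi normalisation
            Lk.append(length * norm / k)
        L.append(np.mean(Lk))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(L), 1)
    return slope
```

    A straight line yields FD ≈ 1 and white noise FD ≈ 2; intensity profiles around the optic disk fall between these extremes, with more complex vascular texture pushing the value higher.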

  9. Fractal analysis of Xylella fastidiosa biofilm formation

    NASA Astrophysics Data System (ADS)

    Moreau, A. L. D.; Lorite, G. S.; Rodrigues, C. M.; Souza, A. A.; Cotta, M. A.

    2009-07-01

    We have investigated the growth process of Xylella fastidiosa biofilms inoculated on a glass. The size and the distance between biofilms were analyzed by optical images; a fractal analysis was carried out using scaling concepts and atomic force microscopy images. We observed that different biofilms show similar fractal characteristics, although morphological variations can be identified for different biofilm stages. Two types of structural patterns are suggested from the observed fractal dimensions Df. In the initial and final stages of biofilm formation, Df is 2.73±0.06 and 2.68±0.06, respectively, while in the maturation stage, Df=2.57±0.08. These values suggest that the biofilm growth can be understood as an Eden model in the former case, while diffusion-limited aggregation (DLA) seems to dominate the maturation stage. Changes in the correlation length parallel to the surface were also observed; these results were correlated with the biofilm matrix formation, which can hinder nutrient diffusion and thus create conditions to drive DLA growth.

  10. Research on the generation of the background with sea and sky in infrared scene

    NASA Astrophysics Data System (ADS)

    Dong, Yan-zhi; Han, Yan-li; Lou, Shu-li

    2008-03-01

    It is important for scene generation to preserve the texture of infrared images in simulations of anti-ship infrared imaging guidance. We studied the fractal method and applied it to infrared scene generation. We adopted the method of horizontal-vertical (HV) partition to encode the original image. Based on the properties of infrared images with a sea-sky background, we took advantage of a Local Iterated Function System (LIFS) to decrease the computational complexity and enhance the processing rate. Some results are presented. They show that the fractal method preserves the texture of the infrared image well and can be widely used in infrared scene generation in the future.

  11. Using Fractal And Morphological Criteria For Automatic Classification Of Lung Diseases

    NASA Astrophysics Data System (ADS)

    Vehel, Jacques Levy

    1989-11-01

    Medical images are difficult to analyze by means of classical image processing tools because they are very complex and irregular. Such shapes are obtained, for instance, in nuclear medicine from the spatial distribution of activity for organs such as the lungs, liver, and heart. We have applied two different theories to these signals: - Fractal geometry deals with the analysis of complex irregular shapes which cannot be well described by classical Euclidean geometry. - Integral geometry treats sets globally and allows the introduction of robust measures. We have computed three parameters on three kinds of lung SPECT images (normal, pulmonary embolism, and chronic disease): - The commonly used fractal dimension (FD), which gives a measurement of the irregularity of the 3D shape. - The generalized lacunarity dimension (GLD), defined as the variance of the ratio of the local activity to the mean activity, which is sensitive only to the distribution and size of gaps in the surface. - The Favard length, which gives an approximation of the surface of a 3D shape. The results show that each slice of the lung, considered as a 3D surface, is fractal and that the fractal dimension is the same for each slice and for the three kinds of lungs; the lacunarity and Favard length, in contrast, are clearly different for normal lungs, pulmonary embolisms, and chronic diseases. These results indicate that automatic classification of lung SPECT images can be achieved, and that a quantitative measurement of the evolution of the disease could be made.

  12. Heterogeneity of Glucose Metabolism in Esophageal Cancer Measured by Fractal Analysis of Fluorodeoxyglucose Positron Emission Tomography Image: Correlation between Metabolic Heterogeneity and Survival.

    PubMed

    Tochigi, Toru; Shuto, Kiyohiko; Kono, Tsuguaki; Ohira, Gaku; Tohma, Takayuki; Gunji, Hisashi; Hayano, Koichi; Narushima, Kazuo; Fujishiro, Takeshi; Hanaoka, Toshiharu; Akutsu, Yasunori; Okazumi, Shinichi; Matsubara, Hisahiro

    2017-01-01

    Intratumoral heterogeneity is a well-recognized characteristic feature of cancer. The purpose of this study is to assess the heterogeneity of intratumoral glucose metabolism using fractal analysis, and to evaluate its prognostic value in patients with esophageal squamous cell carcinoma (ESCC). 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) studies of 79 patients who received curative surgery were evaluated. FDG-PET images were analyzed using fractal analysis software, in which the differential box-counting method was employed to calculate the fractal dimension (FD) of the tumor lesion. The maximum standardized uptake value (SUVmax) and FD were compared with overall survival (OS). The median SUVmax and FD of ESCCs in this cohort were 13.8 and 1.95, respectively. In univariate analysis performed using Cox's proportional hazards model, T stage and FD showed significant associations with OS (p = 0.04, p < 0.0001, respectively), while SUVmax did not (p = 0.1). In Kaplan-Meier analysis, low-FD tumors (<1.95) showed a significant association with favorable OS (p < 0.0001). In the multivariate analysis among TNM staging, serum tumor markers, FD, and SUVmax, FD was identified as the only independent prognostic factor for OS (p = 0.0006; hazard ratio 0.251, 95% CI 0.104-0.562). Metabolic heterogeneity measured by fractal analysis can be a novel imaging biomarker for survival in patients with ESCC. © 2016 S. Karger AG, Basel.
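    The differential box-counting method referenced above can be sketched as follows. This is a simplified Python illustration, not the study's fractal analysis software; the function name `dbc_fd` and the box-size choices are assumptions:

```python
import math

def dbc_fd(img, sizes=(2, 4, 8, 16)):
    """Differential box-counting fractal dimension of a square grayscale
    image (list of lists, intensities in 0..255, side divisible by sizes)."""
    m = len(img)
    g = 256.0                          # number of gray levels
    xs, ys = [], []
    for s in sizes:
        h = s * g / m                  # box height along the intensity axis
        count = 0
        for bi in range(0, m, s):
            for bj in range(0, m, s):
                block = [img[i][j] for i in range(bi, bi + s)
                         for j in range(bj, bj + s)]
                # boxes needed to cover the intensity range of this block
                count += int(max(block) // h) - int(min(block) // h) + 1
        xs.append(math.log(m / s))     # log(1/r)
        ys.append(math.log(count))     # log N(r)
    # Least-squares slope of log N(r) against log(1/r).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A featureless (constant) image behaves as a smooth 2D surface: FD = 2.
flat = [[100] * 32 for _ in range(32)]
print(round(dbc_fd(flat), 2))  # -> 2.0
```

    A heterogeneous intensity surface yields FD between 2 and 3, which is why FD serves as a heterogeneity index here.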

  13. Depth of focus enhancement of a modified imaging quasi-fractal zone plate.

    PubMed

    Zhang, Qinqin; Wang, Jingang; Wang, Mingwei; Bu, Jing; Zhu, Siwei; Gao, Bruce Z; Yuan, Xiaocong

    2012-10-01

    We propose a new parameter w for optimization of foci distribution of conventional fractal zone plates (FZPs) with a greater depth of focus (DOF) in imaging. Numerical simulations of DOF distribution on axis directions indicate that the values of DOF can be extended by a factor of 1.5 or more by a modified quasi-FZP. In experiments, we employ a simple object-lens-image-plane arrangement to pick up images at various positions within the DOF of a conventional FZP and a quasi-FZP, respectively. Experimental results show that the parameter w improves foci distribution of FZPs in good agreement with theoretical predictions.

  14. Depth of focus enhancement of a modified imaging quasi-fractal zone plate

    PubMed Central

    Zhang, Qinqin; Wang, Jingang; Wang, Mingwei; Bu, Jing; Zhu, Siwei; Gao, Bruce Z.; Yuan, Xiaocong

    2013-01-01

    We propose a new parameter w for optimization of foci distribution of conventional fractal zone plates (FZPs) with a greater depth of focus (DOF) in imaging. Numerical simulations of DOF distribution on axis directions indicate that the values of DOF can be extended by a factor of 1.5 or more by a modified quasi-FZP. In experiments, we employ a simple object–lens–image-plane arrangement to pick up images at various positions within the DOF of a conventional FZP and a quasi-FZP, respectively. Experimental results show that the parameter w improves foci distribution of FZPs in good agreement with theoretical predictions. PMID:24285908

  15. Classification of diabetic retinopathy using fractal dimension analysis of eye fundus image

    NASA Astrophysics Data System (ADS)

    Safitri, Diah Wahyu; Juniati, Dwi

    2017-08-01

    Diabetes mellitus (DM) is a metabolic disorder in which the pancreas produces inadequate insulin or the body resists insulin action, so the blood glucose level is high. One of the most common complications of diabetes mellitus is diabetic retinopathy, which can lead to vision problems. Diabetic retinopathy can be recognized by abnormalities in the eye fundus, characterized by microaneurysms, hemorrhages, hard exudates, cotton wool spots, and venous changes. Diabetic retinopathy is classified depending on the abnormalities present in the eye fundus: grade 1 if there is only a microaneurysm; grade 2 if there are a microaneurysm and a hemorrhage; and grade 3 if there are microaneurysms, hemorrhages, and neovascularization. This study proposed a method for processing eye fundus images to classify diabetic retinopathy using fractal analysis and K-Nearest Neighbors (KNN). The first phase was an image segmentation process using the green channel, CLAHE, morphological opening, a matched filter, masking, and morphological opening of the binary image. After segmentation, the fractal dimension was calculated using the box-counting method, and the fractal dimension values were analyzed to classify the diabetic retinopathy. Tests were carried out using k-fold cross validation with k=5. Each test used 10 different values of K for the KNN classifier. The best accuracy of this method was 89.17%, obtained with K=3 or K=4. Based on these results, it can be concluded that the classification of diabetic retinopathy using fractal analysis and KNN has good performance.
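    The box-counting step on the segmented (binary) vessel map can be sketched as below. This is a minimal Python illustration under the usual power-of-two grid assumption, not the paper's implementation; the KNN classifier would then operate on the resulting FD values:

```python
import math

def box_count_fd(pixels, size):
    """Box-counting dimension of a set of (row, col) foreground pixels in
    a size x size binary image; size is assumed to be a power of two."""
    xs, ys = [], []
    s = size // 2
    while s >= 1:
        boxes = {(r // s, c // s) for r, c in pixels}   # occupied boxes
        xs.append(math.log(size / s))                   # log(1/r)
        ys.append(math.log(len(boxes)))                 # log N(r)
        s //= 2
    # Least-squares slope of log N(r) against log(1/r).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Sanity check: a straight diagonal line is one-dimensional.
line = {(i, i) for i in range(64)}
print(round(box_count_fd(line, 64), 2))  # -> 1.0
```

    Retinal vessel trees typically produce FD values between 1 and 2, which then feed the KNN classification.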

  16. Fractal analysis of scatter imaging signatures to distinguish breast pathologies

    NASA Astrophysics Data System (ADS)

    Eguizabal, Alma; Laughney, Ashley M.; Krishnaswamy, Venkataramanan; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.; López-Higuera, José M.; Conde, Olga M.

    2013-02-01

    Fractal analysis combined with a label-free scattering technique is proposed for describing the pathological architecture of tumors. Clinicians and pathologists are conventionally trained to classify abnormal features such as structural irregularities or high indices of mitosis. The potential of fractal analysis lies in its being a morphometric measure of irregular structures, providing a measure of an object's complexity and self-similarity. As cancer is characterized by disorder and irregularity in tissues, this measure could be related to tumor growth. Fractal analysis has previously been explored in the understanding of the tumor vasculature network. This work addresses the feasibility of applying fractal analysis to the scattering power map (as a physical model) and principal components (as a statistical model) provided by a localized reflectance spectroscopic system. Disorder, irregularity, and cell size variation in tissue samples are translated into the scattering power and principal component magnitudes, and their fractal dimension is correlated with the pathologist's assessment of the samples. The fractal dimension is computed by applying the box-counting technique. Results show that fractal analysis of ex-vivo fresh tissue samples yields separated ranges of fractal dimension that could help a classifier that combines the fractal results with other morphological features. This contrast trend would help in the discrimination of tissues in the intraoperative context and may serve as a useful adjunct to surgeons.

  17. Applications of fractals in ecology.

    PubMed

    Sugihara, G; May, R M

    1990-03-01

    Fractal models describe the geometry of a wide variety of natural objects such as coastlines, island chains, coral reefs, satellite ocean-color images and patches of vegetation. Cast in the form of modified diffusion models, they can mimic natural and artificial landscapes having different types of complexity of shape. This article provides a brief introduction to fractals and reports on how they can be used by ecologists to answer a variety of basic questions about scale, measurement and hierarchy in ecological systems. Copyright © 1990. Published by Elsevier Ltd.

  18. [Lithology feature extraction of CASI hyperspectral data based on fractal signal algorithm].

    PubMed

    Tang, Chao; Chen, Jian-Ping; Cui, Jing; Wen, Bo-Tao

    2014-05-01

    Hyperspectral data are characterized by the combination of image and spectrum and by a large data volume; dimension reduction is the main research direction. Band selection and feature extraction are the primary methods used for this objective. In the present article, the authors tested methods for lithology feature extraction from hyperspectral data. Based on the self-similarity of hyperspectral data, the authors explored the application of a fractal algorithm to lithology feature extraction from CASI hyperspectral data. The "carpet method" was corrected and then applied to calculate the fractal value of every pixel in the hyperspectral data. The results show that the fractal information highlights the exposed bedrock lithology better than the original hyperspectral data. The fractal signal and characteristic scale are influenced by the spectral curve shape, the initial scale selection, and the iteration step. At present, research on the fractal signal of spectral curves is rare, implying the necessity of further quantitative analysis and investigation of its physical implications.

  19. The Plasma Membrane is Compartmentalized by a Self-Similar Cortical Actin Fractal

    NASA Astrophysics Data System (ADS)

    Sadegh, Sanaz; Higgin, Jenny; Mannion, Patrick; Tamkun, Michael; Krapf, Diego

    A broad range of membrane proteins display anomalous diffusion on the cell surface. Different methods provide evidence for obstructed subdiffusion and diffusion on a fractal space, but the underlying structure inducing anomalous diffusion has never been visualized due to experimental challenges. We addressed this problem by imaging the cortical actin at high resolution while simultaneously tracking individual membrane proteins in live mammalian cells. Our data show that actin introduces barriers leading to compartmentalization of the plasma membrane and that membrane proteins are transiently confined within actin fences. Furthermore, superresolution imaging shows that the cortical actin is organized into a self-similar fractal. These results present a hierarchical nanoscale picture of the plasma membrane and demonstrate direct interactions between the actin cortex and the cell surface.

  20. Jupiter Fractal Art

    NASA Image and Video Library

    2017-08-10

    See Jupiter's Great Red Spot as you've never seen it before in this new Jovian work of art. Artist Mik Petter created this unique, digital artwork using data from the JunoCam imager on NASA's Juno spacecraft. The art form, known as fractals, uses mathematical formulas to create art with an infinite variety of form, detail, color and light. The tumultuous atmospheric zones in and around the Great Red Spot are highlighted by the author's use of colorful fractals. Vibrant colors of various tints and hues, combined with the almost organic-seeming shapes, make this image seem to be a colorized and crowded petri dish of microorganisms, or a close-up view of microscopic and wildly-painted seashells. The original JunoCam image was taken on July 10, 2017 at 7:10 p.m. PDT (10:10 p.m. EDT), as the Juno spacecraft performed its seventh close flyby of Jupiter. The spacecraft captured the image from about 8,648 miles (13,917 kilometers) above the tops of the clouds of the planet at a latitude of -32.6 degrees. https://photojournal.jpl.nasa.gov/catalog/PIA21777

  1. Characterization of microgravity effects on bone structure and strength using fractal analysis

    NASA Technical Reports Server (NTRS)

    Acharya, Raj S.; Shackelford, Linda

    1995-01-01

    The effect of microgravity on the musculoskeletal system has been well studied. Significant changes in bone and muscle have been shown after long-term space flight. Similar changes have been demonstrated due to bed rest. Bone demineralization is particularly profound in weight-bearing bones. Many of the current techniques to monitor bone condition use bone mass measurements. However, bone mass measurements are not reliable for distinguishing osteoporotic from normal subjects. It has been shown that the overlap between normal and osteoporotic subjects is found for all of the bone mass measurement technologies: single and dual photon absorptiometry, quantitative computed tomography, and direct measurement of bone area/volume on biopsy, as well as radiogrammetry. A similar discordance is noted in the fact that it has not been regularly possible to find the expected correlation between the severity of osteoporosis and the degree of bone loss. Structural parameters such as trabecular connectivity have been proposed as features for assessing bone condition. In this report, we use fractal analysis to characterize bone structure. We show that the fractal dimensions computed from MRI images and X-ray images of the patella are the same. Preliminary experimental results show that the fractal dimension computed from MRI images of vertebrae of human subjects before bed rest is higher than during bed rest.

  2. Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Besliu, C.; Rusu, M. V.; Jipa, Al.; Bordeianu, C. C.; Felea, D.

    2009-10-01

    This work presents a new Visual Basic 6.0 application for estimating the fractal dimension of images, based on an optimized version of the box-counting algorithm. Following the attempt to separate the real information from "noise", we also considered the family of all band-pass filters with the same bandwidth (specified as a parameter). The fractal dimension can thus be represented as a function of the pixel color code. The program was used for the study of cracks in paintings, as an additional tool which can help a critic to decide whether an artistic work is original or not.
    Program summary
    Program title: Fractal Analysis v01
    Catalogue identifier: AEEG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 29 690
    No. of bytes in distributed program, including test data, etc.: 4 967 319
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 30M
    Classification: 14
    Nature of problem: Estimating the fractal dimension of images.
    Solution method: Optimized implementation of the box-counting algorithm. Use of a band-pass filter for separating the real information from "noise". User-friendly graphical interface.
    Restrictions: Although various file types can be used, the application was mainly conceived for the 8-bit grayscale Windows bitmap file format.
    Running time: To a first approximation, the algorithm is linear.

  3. Temporal and spatial variation of morphological descriptors for atmospheric aerosols collected in Mexico City

    NASA Astrophysics Data System (ADS)

    China, S.; Mazzoleni, C.; Dubey, M. K.; Chakrabarty, R. K.; Moosmuller, H.; Onasch, T. B.; Herndon, S. C.

    2010-12-01

    We present an analysis of morphological characteristics of atmospheric aerosol collected during the MILAGRO (Megacity Initiative: Local and Global Research Observations) field campaign that took place in Mexico City in March 2006. The sampler was installed on the Aerodyne mobile laboratory. The aerosol samples were collected on Nuclepore clear polycarbonate filters mounted in Costar pop-top membrane holders. More than one hundred filters were collected at ground sites with different atmospheric and geographical characteristics (urban, sub-urban, mountain-top, industrial, etc.) over a one-month period. Selected subsets of these filters were analyzed for aerosol morphology using a scanning electron microscope and image analysis techniques. In this study we investigate spatial and temporal variations of aerosol shape descriptors, morphological parameters, and fractal dimension. We also compare the morphological results with other aerosol measurements such as aerosol optical properties (scattering and absorption) and size distribution data. Atmospheric aerosols have different morphological characteristics depending on many parameters such as emission sources, atmospheric formation pathways, aging processes, and aerosol mixing state. The aerosol morphology influences aerosol chemical and mechanical interactions with the environment, physical properties, and radiative effects. In this study, ambient aerosol particles have been classified into different shape groups as spherical, irregularly shaped, and fractal-like aggregates. Different morphological parameters such as aspect ratio, roundness, Feret diameter, etc. have been estimated for irregularly shaped and spherical particles and for different kinds of soot particles, including fresh soot and collapsed and coated soot. Fractal geometry and image processing have been used to obtain morphological characteristics of the different soot particles. The number of monomers constituting each aggregate and their diameters were measured and used to estimate an ensemble three-dimensional (3-d) fractal dimension. One-dimensional (1-d) and two-dimensional (2-d) fractal geometries have been measured using a power-law scaling relationship between 1-d and 2-d properties of projected images. Temporal variations in the fractal dimension of soot-like aggregates have been observed at the mountain-top site, and spatial variations of the fractal dimension and other morphological descriptors of different shaped particles have been investigated for the different ground sites.

  4. A Complex Story: Universal Preference vs. Individual Differences Shaping Aesthetic Response to Fractal Patterns

    PubMed Central

    Street, Nichola; Forsythe, Alexandra M.; Reilly, Ronan; Taylor, Richard; Helmy, Mai S.

    2016-01-01

    Fractal patterns offer one way to represent the rough complexity of the natural world. Whilst they dominate many of our visual experiences in nature, little large-scale perceptual research has been done to explore how we respond aesthetically to these patterns. Previous research (Taylor et al., 2011) suggests that fractal patterns with mid-range fractal dimensions (FDs) have universal aesthetic appeal. Perceptual and aesthetic responses to visual complexity have been more varied, with findings suggesting both linear (Forsythe et al., 2011) and curvilinear (Berlyne, 1970) relationships. Individual differences have been found to account for many of the differences we see in aesthetic responses, but some, such as culture, have received little attention within the fractal and complexity research fields. This two-study article aims to test preference responses to FD and visual complexity, using a large cohort (N = 443) of participants from around the world to allow universality claims to be tested. It explores the extent to which age, culture and gender can predict our preferences for fractally complex patterns. Following exploratory analysis that found strong correlations between FD and visual complexity, a series of linear mixed-effect models were implemented to explore whether each of the individual variables could predict preference. The first tested a linear complexity model (likelihood of selecting the more complex image from a pair of images) and the second a mid-range FD model (likelihood of selecting an image within the mid-range). Results show that individual differences can reliably predict preferences for complexity across culture, gender and age. However, in keeping with current findings, the mid-range models show greater consistency in preference, not mediated by gender, age or culture. This article supports the established theory that mid-range fractal patterns appear to be a universal construct underlying preference, but it also highlights the fragility of universality claims by demonstrating individual differences in preference for the interrelated concept of visual complexity. This points to a current stalemate in the field of empirical aesthetics. PMID:27252634

  5. Turbulence Enhancement by Fractal Square Grids: Effects of the Number of Fractal Scales

    NASA Astrophysics Data System (ADS)

    Omilion, Alexis; Ibrahim, Mounir; Zhang, Wei

    2017-11-01

    Fractal square grids offer a unique solution for passive flow control, as they can produce wakes with a distinct turbulence intensity peak and a prolonged turbulence decay region at the expense of only minimal pressure drop. While previous studies have solidified this characteristic of fractal square grids, how the number of scales (or fractal iterations N) affects the production and decay of turbulence in the induced wake is still not well understood. The focus of this research is to determine the relationship between the fractal iteration N and the turbulence produced in the wake flow using well-controlled water-tunnel experiments. Particle Image Velocimetry (PIV) is used to measure the instantaneous velocity fields downstream of four different fractal grids with an increasing number of scales (N = 1, 2, 3, and 4) and a conventional single-scale grid. By comparing the turbulent scales and statistics of the wake, we are able to determine how each iteration affects the peak turbulence intensity and the production/decay of turbulence from the grid. In light of the ability of these fractal grids to increase turbulence intensity with low pressure drop, this work can potentially benefit a wide variety of applications where energy-efficient mixing or convective heat transfer is a key process.

  6. Fractal characterization and wettability of ion treated silicon surfaces

    NASA Astrophysics Data System (ADS)

    Yadav, R. P.; Kumar, Tanuj; Baranwal, V.; Vandana; Kumar, Manvendra; Priya, P. K.; Pandey, S. N.; Mittal, A. K.

    2017-02-01

    Fractal characterization of surface morphology can be useful as a tool for tailoring the wetting properties of solid surfaces. In this work, rippled surfaces of Si (100) are grown using 200 keV Ar+ ion beam irradiation at different ion doses. The relationship between the fractal and wetting properties of these surfaces is explored. The height-height correlation function extracted from atomic force microscopy images demonstrates an increase in the roughness exponent with increasing ion dose. A steep variation in contact angle values is found for low fractal dimensions. The roughness exponent and fractal dimension are found to be correlated with the static water contact angle measurement. It is observed that after a crossover of the roughness exponent, the surface morphology has a rippled structure. Larger values of the interface width indicate larger ripples on the surface. The contact angle of water drops on such surfaces is observed to be lowest. The autocorrelation function is used for the measurement of the ripple wavelength.
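    The roughness exponent above comes from the small-distance scaling of the height-height correlation function. A hedged 1D Python sketch of that extraction is below; the function name and fitting range are assumptions, and real AFM data would be a 2D, noisy height map rather than a clean profile:

```python
import math

def roughness_exponent(h, rmax=8):
    """Roughness exponent alpha from the height-difference correlation
    H(r) = <[h(x+r) - h(x)]**2> ~ r**(2*alpha) of a 1D height profile."""
    xs, ys = [], []
    for r in range(1, rmax + 1):
        diffs = [(h[i + r] - h[i]) ** 2 for i in range(len(h) - r)]
        xs.append(math.log(r))
        ys.append(math.log(sum(diffs) / len(diffs)))
    # Least-squares slope of log H(r) vs log r equals 2*alpha.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / (2.0 * den)

# A uniformly tilted profile h(x) = x has H(r) = r**2, i.e. alpha = 1.
ramp = [float(i) for i in range(200)]
print(round(roughness_exponent(ramp), 2))  # -> 1.0
```

    For a self-affine surface, alpha below 1 signals increasing jaggedness at small scales, which is the quantity correlated with the contact angle in the abstract.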

  7. A new set of wavelet- and fractals-based features for Gleason grading of prostate cancer histopathology images

    NASA Astrophysics Data System (ADS)

    Mosquera Lopez, Clara; Agaian, Sos

    2013-02-01

    Prostate cancer detection and staging is an important step towards patient treatment selection. Advancements in digital pathology allow the application of new quantitative image analysis algorithms for computer-assisted diagnosis (CAD) on digitized histopathology images. In this paper, we introduce a new set of features to automatically grade pathological images using the well-known Gleason grading system. The goal of this study is to classify biopsy images belonging to Gleason patterns 3, 4, and 5 by using a combination of wavelet and fractal features. For image classification we use pairwise coupling Support Vector Machine (SVM) classifiers. The accuracy of the system, which is close to 97%, is estimated through three different cross-validation schemes. The proposed system offers the potential for automating classification of histological images and supporting prostate cancer diagnosis.
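    Sub-band energies from a wavelet decomposition are a typical instance of the wavelet features described above. The single-level 2D Haar sketch below is illustrative Python only; the paper's actual feature set, wavelet family, and normalization are not specified here:

```python
def haar_energies(img):
    """Single-level 2D Haar decomposition of a square image (even side);
    returns the total energy in each sub-band [LL, LH, HL, HH]."""
    n = len(img)
    e = {"LL": 0.0, "LH": 0.0, "HL": 0.0, "HH": 0.0}
    for i in range(0, n, 2):
        for j in range(0, n, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            e["LL"] += ((a + b + c + d) / 4.0) ** 2   # approximation
            e["LH"] += ((a - b + c - d) / 4.0) ** 2   # one detail orientation
            e["HL"] += ((a + b - c - d) / 4.0) ** 2   # the other orientation
            e["HH"] += ((a - b - c + d) / 4.0) ** 2   # diagonal detail
    return [e[k] for k in ("LL", "LH", "HL", "HH")]

# A constant image has all of its energy in the LL (approximation) band.
print(haar_energies([[5.0] * 4 for _ in range(4)]))  # -> [100.0, 0.0, 0.0, 0.0]
```

    Such per-band energies, concatenated with fractal-dimension features, would then feed the pairwise SVM classifiers described in the abstract.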

  8. Design and validation of realistic breast models for use in multiple alternative forced choice virtual clinical trials

    NASA Astrophysics Data System (ADS)

    Elangovan, Premkumar; Mackenzie, Alistair; Dance, David R.; Young, Kenneth C.; Cooke, Victoria; Wilkinson, Louise; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Wells, Kevin

    2017-04-01

    A novel method has been developed for generating quasi-realistic voxel phantoms which simulate the compressed breast in mammography and digital breast tomosynthesis (DBT). The models are suitable for use in virtual clinical trials requiring realistic anatomy which use the multiple alternative forced choice (AFC) paradigm and patches from the complete breast image. The breast models are produced by extracting features of breast tissue components from DBT clinical images including skin, adipose and fibro-glandular tissue, blood vessels and Cooper’s ligaments. A range of different breast models can then be generated by combining these components. Visual realism was validated using a receiver operating characteristic (ROC) study of patches from simulated images calculated using the breast models and from real patient images. Quantitative analysis was undertaken using fractal dimension and power spectrum analysis. The average areas under the ROC curves for 2D and DBT images were 0.51  ±  0.06 and 0.54  ±  0.09 demonstrating that simulated and real images were statistically indistinguishable by expert breast readers (7 observers); errors represented as one standard error of the mean. The average fractal dimensions (2D, DBT) for real and simulated images were (2.72  ±  0.01, 2.75  ±  0.01) and (2.77  ±  0.03, 2.82  ±  0.04) respectively; errors represented as one standard error of the mean. Excellent agreement was found between power spectrum curves of real and simulated images, with average β values (2D, DBT) of (3.10  ±  0.17, 3.21  ±  0.11) and (3.01  ±  0.32, 3.19  ±  0.07) respectively; errors represented as one standard error of the mean. These results demonstrate that radiological images of these breast models realistically represent the complexity of real breast structures and can be used to simulate patches from mammograms and DBT images that are indistinguishable from patches from the corresponding real breast images. 
The method can generate about 500 radiological patches (~30 mm  ×  30 mm) per day for AFC experiments on a single workstation. This is the first study to quantitatively validate the realism of simulated radiological breast images using direct blinded comparison with real data via the ROC paradigm with expert breast readers.
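    The power-spectrum β reported above is the negative slope of the radially averaged 2D power spectrum on log-log axes. The sketch below (illustrative Python with NumPy, not the study's analysis code; the synthetic test image and binning choices are assumptions) shows one common way to fit it:

```python
import numpy as np

def spectral_beta(img, nbins=20):
    """Negative log-log slope (beta) of the radially averaged power
    spectrum of a square image, assuming P(f) ~ f**(-beta)."""
    n = img.shape[0]
    psd = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    yy, xx = np.indices(psd.shape)
    r = np.hypot(xx - n // 2, yy - n // 2)       # radial frequency index
    mask = (r >= 1) & (r <= n // 2)              # drop DC and corners
    bins = np.logspace(0, np.log10(n // 2), nbins + 1)
    f, p = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = mask & (r >= lo) & (r < hi)
        if sel.any():                            # skip empty annuli
            f.append(np.sqrt(lo * hi))           # geometric bin centre
            p.append(psd[sel].mean())
    return -np.polyfit(np.log(f), np.log(p), 1)[0]

# Self-check: synthesize an image whose power spectrum is exactly f**-3
# and confirm the fitted beta comes back close to 3.
n = 128
fr = np.hypot(np.fft.fftfreq(n)[:, None], np.fft.fftfreq(n)[None, :])
fr[0, 0] = 1.0                                   # dummy DC value
img = np.real(np.fft.ifft2(fr ** -1.5))          # amplitude ~ f**(-beta/2)
print(spectral_beta(img))                        # close to 3
```

    The β ≈ 3 values quoted in the abstract sit in the range typically reported for mammographic texture.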

  9. Fractal Dimensionality of Pore and Grain Volume of a Siliciclastic Marine Sand

    NASA Astrophysics Data System (ADS)

    Reed, A. H.; Pandey, R. B.; Lavoie, D. L.

    Three-dimensional (3D) spatial distributions of pore and grain volumes were determined from high-resolution computer tomography (CT) images of resin-impregnated marine sands. Using a linear gradient extrapolation method, cubic three-dimensional samples were constructed from two-dimensional CT images. Image porosity (0.37) was found to be consistent with the estimate of porosity by the water weight loss technique (0.36). Scaling of the pore volume (Vp) with the linear size (L), V ~ L^D, provides the fractal dimensionalities of the pore volume (D = 2.74 ± 0.02) and grain volume (D = 2.90 ± 0.02), typical for sedimentary materials.
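    The scaling fit V ~ L^D can be illustrated by counting pore voxels inside growing sub-cubes of a 3D binary volume and fitting the log-log slope. This is a minimal Python sketch, not the CT analysis pipeline; `mass_dimension` is a hypothetical helper:

```python
import math

def mass_dimension(voxels, lmax):
    """Fit D in V(L) ~ L**D, where V(L) counts pore voxels inside cubes
    of linear size L grown from the corner of the sample."""
    xs, ys = [], []
    for L in range(2, lmax + 1):
        v = sum(1 for (x, y, z) in voxels if x < L and y < L and z < L)
        xs.append(math.log(L))
        ys.append(math.log(v))
    # Least-squares slope of log V against log L.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

# A fully filled cube scales with its full embedding dimension, D = 3.
solid = {(x, y, z) for x in range(16) for y in range(16) for z in range(16)}
print(round(mass_dimension(solid, 16), 2))  # -> 3.0
```

    A fractal pore network fills space less efficiently than the solid cube, giving D below 3, as in the values quoted above.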

  10. Fractal evaluation of drug amorphicity from optical and scanning electron microscope images

    NASA Astrophysics Data System (ADS)

    Gavriloaia, Bogdan-Mihai G.; Vizireanu, Radu C.; Neamtu, Catalin I.; Gavriloaia, Gheorghe V.

    2013-09-01

    Amorphous materials are metastable and more reactive than crystalline ones, and have to be evaluated before pharmaceutical compound formulation. Amorphicity is interpreted as spatial chaos, and patterns of molecular aggregates of dexamethasone, D, were investigated in this paper using the fractal dimension, FD. Images of D at three magnifications taken with an optical microscope, OM, and at eight magnifications with a scanning electron microscope, SEM, were analyzed. The average FD of the pattern irregularities was 1.538 for the OM images and about 1.692 for the SEM images. The FDs of the two kinds of images are relatively insensitive to the threshold level. 3D plots are shown to illustrate the dependence of FD on threshold and magnification level. As a result, an OM image at a single scale is enough to characterize the amorphicity of the drug.

  11. Higuchi Dimension of Digital Images

    PubMed Central

    Ahammer, Helmut

    2011-01-01

    There exist several methods for calculating the fractal dimension of objects represented as 2D digital images. For example, box counting, Minkowski dilation or Fourier analysis can be employed. However, there appear to be some limitations. It is not possible to calculate the fractal dimension of only an irregular region of interest in an image, or to perform the calculations in a particular direction along a line at an arbitrary angle through the image. The calculations must be made for the whole image. In this paper, a new method to overcome these limitations is proposed. 2D images are appropriately prepared in order to apply 1D signal analyses, originally developed to investigate nonlinear time series. The Higuchi dimension of these 1D signals is calculated using Higuchi's algorithm, and it is shown that both regions of interest and directional dependencies can be evaluated independently of the whole picture. A thorough validation of the proposed technique and a comparison of the new method to the Fourier dimension, a common two-dimensional method for digital images, are given. The main result is that Higuchi's algorithm allows a direction-dependent as well as a direction-independent analysis. The resulting values of the fractal dimension are reliable, and an effective treatment of regions of interest is possible. Moreover, the proposed method is not restricted to Higuchi's algorithm, as any 1D method of analysis can be applied. PMID:21931854
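    Higuchi's algorithm for a 1D signal, as referenced above, can be sketched as follows. This is a compact Python rendering of the published formula, not the paper's software; the `kmax` choice is an assumption:

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1D signal (Higuchi, 1988):
    curve length L(k) ~ k**(-FD), fitted on log-log axes."""
    n = len(x)
    logk, logl = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                       # k interleaved sub-series
            npts = (n - 1 - m) // k              # increments in this series
            if npts < 1:
                continue
            dist = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                       for i in range(1, npts + 1))
            # Higuchi's normalization of the curve length at scale k.
            lengths.append(dist * (n - 1) / (npts * k) / k)
        logk.append(math.log(1.0 / k))
        logl.append(math.log(sum(lengths) / len(lengths)))
    # FD is the least-squares slope of log L(k) against log(1/k).
    nn = len(logk)
    mx, my = sum(logk) / nn, sum(logl) / nn
    num = sum((a - mx) * (b - my) for a, b in zip(logk, logl))
    den = sum((a - mx) ** 2 for a in logk)
    return num / den

# A smooth ramp is an ordinary 1D curve, so its FD comes out as 1.
print(round(higuchi_fd([float(i) for i in range(500)]), 2))  # -> 1.0
```

    Applying this to image rows, columns, or lines sampled at arbitrary angles is precisely what gives the method its direction-dependent character.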

  12. Mathematical morphology for automated analysis of remotely sensed objects in radar images

    NASA Technical Reports Server (NTRS)

    Daida, Jason M.; Vesecky, John F.

    1991-01-01

    A symbiosis of pyramidal segmentation and morphological transformation is described. The pyramidal segmentation portion of the symbiosis has resulted in a low (2.6 percent) misclassification error rate for a one-look simulation. Other simulations indicate lower error rates (1.8 percent for a four-look image). The morphological transformation portion has resulted in meaningful partitions with a minimal loss of fractal boundary information. An unpublished version of Thicken, suitable for watershed transformations of fractal objects, is also presented. It is demonstrated that the proposed symbiosis works with SAR (synthetic aperture radar) images: in this case, a four-look Seasat image of sea ice. It is concluded that the symbiotic forms of both segmentation and morphological transformation seem well suited for unsupervised geophysical analysis.

  13. Alpha-spectrometry and fractal analysis of surface micro-images for characterisation of porous materials used in manufacture of targets for laser plasma experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aushev, A A; Barinov, S P; Vasin, M G

    2015-06-30

    We present the results of employing the alpha-spectrometry method to determine the characteristics of porous materials used in targets for laser plasma experiments. It is shown that the energy spectrum of alpha-particles, after their passage through porous samples, allows one to determine the distribution of their path length in the foam skeleton. We describe the procedure of deriving such a distribution, excluding both the distribution broadening due to the statistical nature of the alpha-particle interaction with an atomic structure (straggling) and hardware effects. The fractal analysis of micro-images is applied to the same porous surface samples that have been studied by alpha-spectrometry. The fractal dimension and size distribution of the number of the foam skeleton grains are obtained. Using the data obtained, a distribution of the total foam skeleton thickness along a chosen direction is constructed. It roughly coincides with the path length distribution of alpha-particles within a range of larger path lengths. It is concluded that the combined use of the alpha-spectrometry method and fractal analysis of images will make it possible to determine the size distribution of foam skeleton grains (or pores). The results can be used as initial data in theoretical studies on propagation of laser and X-ray radiation in specific porous samples. (laser plasma)

  14. Assessment of changes in crystallization properties of pressurized milk fat.

    PubMed

    Staniewski, Bogusław; Smoczyński, Michał; Staniewska, Katarzyna; Baranowska, Maria; Kiełczewska, Katarzyna; Zulewska, Justyna

    2015-04-01

    The aim of the study was to demonstrate the use of fractal image analysis as a possible tool to monitor the effect of pressurization on the crystallization pattern of anhydrous milk fat. This approach can be useful when developing new products based on milk fat. The samples were subjected to different hydrostatic pressure (100, 200, 300, and 400 MPa) and temperature (10 and 40 °C) treatments. The crystallization microphotographs were taken with a scanning electron microscope, and image analysis of the photographs was done to determine a fractal dimension. Milk-fat pressurization under the applied parameters resulted in slight, but statistically significant, changes in the course of the crystallization curves, related to the triacylglycerol fraction crystallizing at the lowest temperature (exothermic effect I). These changes were dependent on the value of pressure but not on the temperature applied during the process of pressurization (either 10 or 40 °C). In turn, significant differences were observed in crystallization images of milk-fat samples subjected to this process compared with the control sample. Fractal analysis additionally demonstrated the highest degree of irregularity of the surface of the crystalline form for the nonpressurized sample and the samples pressurized at 200 and 300 MPa at 10 °C. The lowest value of fractal dimension, indicative of the least irregularity, was achieved for the fat samples pressurized at 400 MPa, 10 °C and at 100 MPa, 40 °C. The possibilities of wider application of fractal analysis for the evaluation of the effects of parameters of various technological processes on the crystallization properties of milk fat require further extensive investigation. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  15. Fractal Geometry Enables Classification of Different Lung Morphologies in a Model of Experimental Asthma

    NASA Astrophysics Data System (ADS)

    Obert, Martin; Hagner, Stefanie; Krombach, Gabriele A.; Inan, Selcuk; Renz, Harald

    2015-06-01

    Animal models represent the basis of our current understanding of the pathophysiology of asthma and are of central importance in the preclinical development of drug therapies. The characterization of irregular lung shapes is a major issue in radiological imaging of mice in these models. The aim of this study was to find out whether differences in lung morphology can be described by fractal geometry. Healthy and asthmatic mouse groups, before and after an acute asthma attack induced by methacholine, were studied. In vivo flat-panel-based high-resolution computed tomography (CT) was used for thorax imaging of the mice. The digital image data of the mice's lungs were segmented from the surrounding tissue. After that, the lungs were divided by image gray-level thresholds into two additional subsets. One subset contained essentially the air-transporting bronchial system; the other corresponded mainly to the blood vessel system. We estimated the fractal dimension of all sets of the different mouse groups using the mass-radius relation (mrr). We found that the air-transporting subset of the bronchial lung tissue enables a complete and significant differentiation between all four mouse groups (mean D of control mice before methacholine treatment: 2.64 ± 0.06; after treatment: 2.76 ± 0.03; asthma mice before methacholine treatment: 2.37 ± 0.16; after treatment: 2.71 ± 0.03; p < 0.05). We conclude that the concept of fractal geometry allows a well-defined, quantitative, numerical and objective differentiation of lung shapes, applicable most likely also in human asthma diagnostics.
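    The mass-radius relation (mrr) named in this abstract estimates a fractal dimension D from the scaling M(r) ~ r^D, where M(r) is the amount of structure within radius r of a reference point. A rough NumPy sketch under that definition, for a binary mask measured from its centroid (the function name, radius range and number of radii are illustrative assumptions; the paper's exact procedure may differ):

```python
import numpy as np

def mass_radius_fd(mask, n_radii=10):
    """Mass-radius fractal dimension of a binary pattern.

    M(r) counts foreground pixels within radius r of the pattern's
    centroid; the dimension D is the slope of log M(r) versus log r,
    following the scaling M(r) ~ r**D.
    """
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                 # centre of mass
    dist = np.hypot(ys - cy, xs - cx)
    # radii spaced logarithmically from a few pixels to the pattern extent
    radii = np.logspace(np.log10(4.0), np.log10(dist.max()), n_radii)
    mass = np.array([(dist <= r).sum() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(mass), 1)
    return slope
```

    For a filled disk, M(r) grows as pi*r**2, so the estimate comes out close to 2, which matches the near-space-filling dimensions (2.3 to 2.8) reported for the lung subsets.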

  16. Investigation of changes in fractal dimension from layered retinal structures of healthy and diabetic eyes with optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Zakharov, Valery P.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Artemyev, Dmitry N.; Kornilin, Dmitry V.

    2015-07-01

    Optical coherence tomography (OCT) is usually employed for the measurement of retinal thickness characterizing the structural changes of tissue. However, the fractal dimension (FD) can also characterize structural changes of tissue, and FD changes may therefore provide further information regarding cellular layers and early damage in ocular diseases. We investigated the possibility of OCT in detecting changes in fractal dimension from layered retinal structures. OCT images were obtained from diabetic patients without retinopathy (DM, n = 38 eyes), patients with mild diabetic retinopathy (MDR, n = 43 eyes) and normal healthy subjects (Controls, n = 74 eyes). Fractal dimension was calculated using the differential box-counting method. We evaluated the usefulness of quantifying the fractal dimension of layered structures in the detection of retinal damage. Generalized estimating equations considering within-subject inter-eye relations were used to test for differences between the groups. A modified p value of <0.001 was considered statistically significant. Receiver operating characteristic (ROC) curves were constructed to describe the ability of fractal dimension to discriminate between DM, MDR and healthy eyes. Significant decreases of fractal dimension were observed in all layers in the MDR eyes compared with controls, except in the inner nuclear layer (INL). Significant decreases of fractal dimension were also observed in all layers in the MDR eyes compared with DM eyes. The highest area under the receiver operating characteristic curve (AUROC) values estimated for fractal dimension were observed for the outer plexiform layer (OPL) and the outer segment photoreceptors (OS) when comparing MDR eyes with controls. The highest AUROC values were also observed for the retinal nerve fiber layer (RNFL) and OS when comparing MDR eyes with DM eyes. 
Our results suggest that fractal dimension of the intraretinal layers may provide useful information to differentiate pathological from healthy eyes. Further research is warranted to determine how this approach may be used to improve diagnosis of early retinal neurodegeneration.

  17. Assessment of disintegrant efficacy with fractal dimensions from real-time MRI.

    PubMed

    Quodbach, Julian; Moussavi, Amir; Tammer, Roland; Frahm, Jens; Kleinebudde, Peter

    2014-11-20

    An efficient disintegrant is capable of breaking a tablet up into the smallest possible particles in the shortest time. Until now, comparative data on the efficacy of different disintegrants have been based on dissolution studies or the disintegration time. Extending these approaches, this study introduces a method that defines the evolution of the fractal dimensions of tablets as a surrogate parameter for the available surface area. Fractal dimensions are a measure of the tortuosity of a line, in this case the upper surface of a disintegrating tablet. High-resolution real-time MRI was used to record videos of disintegrating tablets. The acquired video images were processed to depict the upper surface of the tablets, and a box-counting algorithm was used to estimate the fractal dimensions. The influence of six different disintegrants, of different relative tablet densities, and of increasing disintegrant concentration was investigated to evaluate the performance of the novel method. Changing relative densities hardly affect the progression of fractal dimensions, whereas an increase in disintegrant concentration causes higher fractal dimensions during disintegration, which are also reached more quickly. Different disintegrants display only minor differences in the maximal fractal dimension, yet the kinetics with which the maximum is reached allow a differentiation and classification of disintegrants. Copyright © 2014 Elsevier B.V. All rights reserved.
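    The box-counting estimate used here is the standard one: cover the binary image (for example, the extracted surface line of the tablet) with grids of shrinking box size s, count the boxes that contain foreground, and regress log N(s) against log(1/s). A minimal sketch of that procedure, not the authors' implementation:

```python
import numpy as np

def box_count_fd(mask):
    """Box-counting fractal dimension of a 2D boolean mask."""
    mask = np.asarray(mask, dtype=bool)
    size = min(mask.shape)
    # box sizes: powers of two, from half the image side down to 2
    sizes = 2 ** np.arange(int(np.log2(size)) - 1, 0, -1)
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        # count boxes of side s containing at least one foreground pixel
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope
```

    A smooth (Euclidean) surface line gives a dimension near 1; as the tablet surface becomes more tortuous during disintegration, the estimate rises toward 2, which is the behaviour the study tracks over time.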

  18. Seeing shapes in seemingly random spatial patterns: Fractal analysis of Rorschach inkblots

    PubMed Central

    Taylor, R. P.; Martin, T. P.; Montgomery, R. D.; Smith, J. H.; Micolich, A. P.; Boydston, C.; Scannell, B. C.; Fairbanks, M. S.; Spehar, B.

    2017-01-01

    Rorschach inkblots have had a striking impact on the worlds of art and science because of the remarkable variety of associations with recognizable and namable objects they induce. Originally adopted as a projective psychological tool to probe mental health, psychologists and artists have more recently interpreted the variety of induced images simply as a signature of the observers’ creativity. Here we analyze the relationship between the spatial scaling parameters of the inkblot patterns and the number of induced associations, and suggest that the perceived images are induced by the fractal characteristics of the blot edges. We discuss how this relationship explains the frequent observation of images in natural scenery. PMID:28196082

  19. Retinal vasculature classification using novel multifractal features

    NASA Astrophysics Data System (ADS)

    Ding, Y.; Ward, W. O. C.; Duan, Jinming; Auer, D. P.; Gowland, Penny; Bai, L.

    2015-11-01

    Retinal blood vessels have been implicated in a large number of diseases, including diabetic retinopathy and cardiovascular diseases, which cause damage to retinal blood vessels. The availability of retinal vessel imaging provides an excellent opportunity for monitoring and diagnosis of retinal diseases, and automatic analysis of retinal vessels will help with these processes. However, state-of-the-art vascular analysis methods, such as counting the number of branches or measuring the curvature and diameter of individual vessels, are unsuitable for the microvasculature. There has been published research using fractal analysis to calculate fractal dimensions of retinal blood vessels, but so far there has been no systematic research on extracting discriminant features from retinal vessels for classification. This paper introduces new methods for feature extraction from multifractal spectra of retinal vessels for classification. Two publicly available retinal vascular image databases are used for the experiments, and the proposed methods have produced accuracies of 85.5% and 77% for classification of healthy and diabetic retinal vasculatures. Experiments show that classification with multiple fractal features produces better rates than methods using a single fractal dimension value. In addition, experiments also show that classification accuracy can be affected by the accuracy of vessel segmentation algorithms.

  20. Image encryption based on fractal-structured phase mask in fractional Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Zhao, Meng-Dan; Gao, Xu-Zhen; Pan, Yue; Zhang, Guan-Lin; Tu, Chenghou; Li, Yongnan; Wang, Hui-Tian

    2018-04-01

    We present an optical encryption approach based on the combination of a fractal Fresnel lens (FFL) and the fractional Fourier transform (FrFT). Our approach is in fact a four-fold encryption scheme, combining random phase encoding produced by the Gerchberg–Saxton algorithm, a FFL, and two FrFTs. A FFL is composed of a Sierpinski-carpet fractal plate and a Fresnel zone plate. The security of our approach is enhanced by the larger key space, and the use of a FFL overcomes the optical-axis alignment problem in the optical system. Only with perfectly matched parameters of the FFL and the FrFT can the plaintext be recovered well. We present an image encryption algorithm in which two original images are recovered from the ciphertext by the FrFT with two different phase distribution keys, obtained by performing 100 iterations between the two plaintexts and the ciphertext, respectively. We test the sensitivity of our approach to various parameters, such as the wavelength of light, the focal length of the FFL, and the fractional orders of the FrFT. Our approach can resist various attacks.

  1. Microtopographic Inspection and Fractal Analysis of Skin Neoplasia

    NASA Astrophysics Data System (ADS)

    Costa, Manuel F. M.; Hipolito, Alberto Valencia; Gutierrez, Gustavo Fidel; Chanona, Jorge; Gallegos, Eva Ramón

    2008-04-01

    Early detection of skin cancer is fundamental to a successful treatment. Changes in the shape, including the relief, of skin lesions are an indicator of possible malignancy. Optical microtopographic inspection of skin lesions can be used to identify diagnostic patterns of benign and malignant skin lesions. Statistical parameters like the mean roughness (Ra) may allow the discrimination between different types of lesions and degrees of malignancy. Fractal analysis of bi-dimensional and 3D images of skin lesions can validate or complement that assessment by calculation of their fractal dimensions (FD). In the study reported herein, the microtopographic inspection of the skin lesions was performed using the optical-triangulation-based microtopographer developed at the Physics Department of the University of Minho, MICROTOP.03.MFC. The patients that participated in this research work were men and women older than 15 years with clinical and histopathology diagnoses of melanoma, basocellular carcinoma, epidermoide carcinoma, actinic keratosis, keratoacantosis and benign nevus. Latex impressions of the lesions were taken and microtopographically analyzed. Characteristic information for each type of studied lesion was obtained. For melanoma it was observed that on average these tumors present an increased roughness of around 67 percent compared to the roughness of the healthy skin. This feature allows the distinction from other tumors such as basocellular carcinoma (where the roughness increase was on average 49 percent) and benign lesions such as the epidermoide cyst (37 percent) or the seborrhea keratosis (4 percent). Tumor size and roughness are directly proportional to the grade of malignancy. The characterization of the fractal geometry of 2D (histological slides) and 3D images of skin lesions was performed by obtaining their FD evaluated by means of the box-counting method. Results obtained showed that the average fractal dimension of histological slide images (FDh) corresponding to some neoplasia is higher (1.334+/-0.072) than that for healthy skin (1.091+/-0.082). A significant difference between the fractal dimensions of neoplasia and healthy skin (p < 0.001) was registered. The FD of microtopography maps (FDm) can also distinguish between healthy and malignant tissue in general (2.277+/-0.070 to 2.309+/-0.040), but cannot discriminate between the different types of skin neoplasias. The combination of the rugometric evaluation and fractal geometry characterization provides valuable information about the malignancy of skin lesions and the type of lesion.

  2. Analysis and classification of commercial ham slice images using directional fractal dimension features.

    PubMed

    Mendoza, Fernando; Valous, Nektarios A; Allen, Paul; Kenny, Tony A; Ward, Paddy; Sun, Da-Wen

    2009-02-01

    This paper presents a novel and non-destructive approach to the appearance characterization and classification of commercial pork, turkey and chicken ham slices. Ham slice images were modelled using directional fractal (DF(0°;45°;90°;135°)) dimensions, and a minimum distance classifier was adopted to perform the classification task. Also, the role of different colour spaces and of the resolution level of the images on DF analysis was investigated. This approach was applied to 480 wafer-thin ham slices from four types of hams (120 slices per type): i.e., pork (cooked and smoked), turkey (smoked) and chicken (roasted). DF features were extracted from digitized intensity images in greyscale, and R, G, B, L(∗), a(∗), b(∗), H, S, and V colour components, at three image resolution levels (100%, 50%, and 25%). Simulation results show that in spite of the complexity and high variability in colour and texture appearance, modelling ham slice images with DF dimensions allows the capture of differentiating textural features between the four commercial ham types. Independent DF features entail better discrimination than using the average of the four directions. However, DF dimensions reveal a high sensitivity to colour channel, orientation and image resolution in the fractal analysis. The classification accuracy using six DF dimension features (a(90°)(∗),a(135°)(∗),H(0°),H(45°),S(0°),H(90°)) was 93.9% for training data and 82.2% for testing data.

  3. Wavelet-based 3D reconstruction of microcalcification clusters from two mammographic views: new evidence that fractal tumors are malignant and Euclidean tumors are benign.

    PubMed

    Batchelder, Kendra A; Tanenbaum, Aaron B; Albert, Seth; Guimond, Lyne; Kestener, Pierre; Arneodo, Alain; Khalil, Andre

    2014-01-01

    The 2D Wavelet-Transform Modulus Maxima (WTMM) method was used to detect microcalcifications (MC) in human breast tissue seen in mammograms and to characterize the fractal geometry of benign and malignant MC clusters. This was done in the context of a preliminary analysis of a small dataset, via a novel way to partition the wavelet-transform space-scale skeleton. For the first time, the estimated 3D fractal structure of a breast lesion was inferred by pairing the information from two separate 2D projected mammographic views of the same breast, i.e. the cranial-caudal (CC) and mediolateral-oblique (MLO) views. As a novelty, we define the "CC-MLO fractal dimension plot", where a "fractal zone" and "Euclidean zones" (non-fractal) are defined. 118 images (59 cases, 25 malignant and 34 benign) obtained from a digital databank of mammograms with known radiologist diagnostics were analyzed to determine which cases would be plotted in the fractal zone and which cases would fall in the Euclidean zones. 92% of malignant breast lesions studied (23 out of 25 cases) were in the fractal zone while 88% of the benign lesions were in the Euclidean zones (30 out of 34 cases). Furthermore, a Bayesian statistical analysis shows that, with 95% credibility, the probability that fractal breast lesions are malignant is between 74% and 98%. Alternatively, with 95% credibility, the probability that Euclidean breast lesions are benign is between 76% and 96%. These results support the notion that the fractal structure of malignant tumors is more likely to be associated with an invasive behavior into the surrounding tissue compared to the less invasive, Euclidean structure of benign tumors. Finally, based on indirect 3D reconstructions from the 2D views, we conjecture that all breast tumors considered in this study, benign and malignant, fractal or Euclidean, restrict their growth to 2-dimensional manifolds within the breast tissue.

  4. Feature extraction algorithm for space targets based on fractal theory

    NASA Astrophysics Data System (ADS)

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

    To extend the life of satellites and reduce launch and operating costs, satellite servicing, including on-orbit repairs, upgrades and refueling of spacecraft, will become much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking-reliability requirements of image tracking for space surveillance systems. Machine vision has been applied to the determination of the relative pose of spacecraft, and feature extraction is the basis of relative pose estimation. In this paper, a fractal-geometry-based edge extraction algorithm is presented that can be used for determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system. The method uses the Differential Box-Counting (DBC) approach of fractal theory to obtain a fractal-dimension map of the gray-level image and to restrain noise. After this, consecutive edges are detected using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target but also keeps the inner details. Meanwhile, edge extraction is processed only in the moving area, greatly reducing computation. Simulation results compare edge detection using the presented method with other detection methods. The results indicate that the presented algorithm is a valid method for solving relative pose problems for spacecraft.
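    Differential Box-Counting (DBC) extends ordinary box counting to gray-level images: the image is tiled into s-by-s blocks, and each block contributes the number of intensity boxes of height h spanned between its minimum and maximum gray value. A minimal sketch of the Sarkar-Chaudhuri formulation of DBC (illustrative only, not the authors' code; the grid sizes are arbitrary choices):

```python
import numpy as np

def dbc_fd(img, sizes=(2, 4, 8, 16, 32)):
    """Differential box-counting fractal dimension of a grayscale image."""
    img = np.asarray(img, dtype=float)
    M = min(img.shape)
    G = img.max() + 1                      # number of gray levels
    N = []
    for s in sizes:
        h = s * G / M                      # box height for this grid size
        hgt = (img.shape[0] // s) * s
        wid = (img.shape[1] // s) * s
        blocks = img[:hgt, :wid].reshape(hgt // s, s, wid // s, s)
        mx = blocks.max(axis=(1, 3))
        mn = blocks.min(axis=(1, 3))
        # boxes needed to cover the intensity range of each block
        n = np.floor(mx / h) - np.floor(mn / h) + 1
        N.append(n.sum())
    r = np.array(sizes) / M
    slope, _ = np.polyfit(np.log(1.0 / r), np.log(N), 1)
    return slope
```

    The estimate lies between 2 (a flat intensity surface) and 3 (maximally rough texture), so thresholding the local DBC map separates smooth background from textured target regions before the morphological edge step.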

  5. Fractal analysis of the ischemic transition region in chronic ischemic heart disease using magnetic resonance imaging.

    PubMed

    Michallek, Florian; Dewey, Marc

    2017-04-01

    To introduce a novel hypothesis and method to characterise pathomechanisms underlying myocardial ischemia in chronic ischemic heart disease by local fractal analysis (FA) of the ischemic myocardial transition region in perfusion imaging. Vascular mechanisms to compensate ischemia are regulated at various vascular scales, with their superimposed perfusion pattern being hypothetically self-similar. Dedicated FA software ("FraktalWandler") has been developed. Fractal dimensions during first-pass (FD_first-pass) and recirculation (FD_recirculation) are hypothesised to indicate the predominating pathomechanism and ischemic severity, respectively. Twenty-six patients with evidence of myocardial ischemia in 108 ischemic myocardial segments on magnetic resonance imaging (MRI) were analysed. The 40th and 60th percentiles of FD_first-pass were used for pathomechanical classification, assigning lesions with FD_first-pass ≤ 2.335 to predominating coronary microvascular dysfunction (CMD) and ≥ 2.387 to predominating coronary artery disease (CAD). The optimal classification point in ROC analysis was FD_first-pass = 2.358. FD_recirculation correlated moderately with per cent diameter stenosis in invasive coronary angiography in lesions classified CAD (r = 0.472, p = 0.001) but not CMD (r = 0.082, p = 0.600). The ischemic transition region may provide information on the pathomechanical composition and severity of myocardial ischemia. FA of this region is feasible and may improve diagnosis compared to traditional noninvasive myocardial perfusion analysis. • A novel hypothesis and method is introduced to pathophysiologically characterise myocardial ischemia. • The ischemic transition region appears a meaningful diagnostic target in perfusion imaging. • Fractal analysis may characterise the pathomechanical composition and severity of myocardial ischemia.

  6. Polymer Self-Assembly into Unique Fractal Nanostructures in Solution by a One-Shot Synthetic Procedure.

    PubMed

    Shin, Suyong; Gu, Ming-Long; Yu, Chin-Yang; Jeon, Jongseol; Lee, Eunji; Choi, Tae-Lim

    2018-01-10

    A fractal nanostructure having a high surface area is potentially useful in sensors, catalysts, functional coatings, and biomedical and electronic applications. Preparation of fractal nanostructures on solid substrates has been reported using various inorganic or organic compounds. However, achieving such a process using polymers in solution has been extremely challenging. Here, we report a simple one-shot preparation of polymer fractal nanostructures in solution via an unprecedented assembly mechanism controlled by polymerization and self-assembly kinetics. This was possible only because one monomer was significantly more reactive than the other, thereby easily forming a diblock copolymer microstructure. Then, the second insoluble block containing poly(p-phenylenevinylene) (PPV) without any side chains spontaneously underwent self-assembly during polymerization by an in situ nanoparticlization of conjugated polymers (INCP) method. The formation of fractal structures in solution was confirmed by various imaging techniques such as atomic force microscopy, transmission electron microscopy (TEM), and cryogenic TEM. The diffusion-limited aggregation theory was adopted to explain the branching patterns of the fractal nanostructures according to the changes in polymerization conditions such as the monomer concentration and the presence of additives. Finally, after detailed kinetic analyses, we proposed a plausible mechanism for the formation of unique fractal nanostructures, where the gradual formation and continuous growth of micelles in a chain-growth-like manner were accounted for.
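    The diffusion-limited aggregation (DLA) theory that the authors adopt to explain the branching patterns can be illustrated with a toy on-lattice simulation: random walkers launched from outside the cluster stick on first contact, and the screening of inner sites by outer branches produces the characteristic ramified, fractal morphology. All parameters below (lattice size, particle count, launch and escape radii) are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

def dla(n_particles=200, size=101, seed=0):
    """Toy on-lattice diffusion-limited aggregation.

    Walkers start on a circle just outside the current cluster and
    perform a random walk until a lattice neighbour is occupied,
    at which point they stick and enlarge the cluster.
    """
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=bool)
    c = size // 2
    grid[c, c] = True                    # seed particle at the centre
    r_max = 2.0                          # current cluster radius
    steps = ((0, 1), (0, -1), (1, 0), (-1, 0))

    def launch():
        ang = rng.uniform(0.0, 2.0 * np.pi)
        return (int(round(c + (r_max + 2) * np.sin(ang))),
                int(round(c + (r_max + 2) * np.cos(ang))))

    for _ in range(n_particles):
        y, x = launch()
        while True:
            if grid[y - 1:y + 2, x - 1:x + 2].any():   # neighbour occupied
                grid[y, x] = True                      # stick here
                r_max = max(r_max, float(np.hypot(y - c, x - c)))
                break
            dy, dx = steps[rng.integers(4)]
            ny, nx = y + dy, x + dx
            if (not (0 < ny < size - 1 and 0 < nx < size - 1)
                    or np.hypot(ny - c, nx - c) > r_max + 10):
                y, x = launch()          # drifted too far: relaunch
            else:
                y, x = ny, nx
    return grid
```

    Plotting the resulting mask shows the open, branched aggregate; adding a sticking probability below 1 or a drift term mimics the denser or looser patterns the authors obtain by varying monomer concentration and additives.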

  7. Fractal cometary dust - a window into the early Solar system

    NASA Astrophysics Data System (ADS)

    Mannel, T.; Bentley, M. S.; Schmied, R.; Jeszenszky, H.; Levasseur-Regourd, A. C.; Romstedt, J.; Torkar, K.

    2016-11-01

    The properties of dust in the protoplanetary disc are key to understanding the formation of planets in our Solar system. Many models of dust growth predict the development of fractal structures which evolve into non-fractal, porous dust pebbles representing the main component for planetesimal accretion. In order to understand comets and their origins, the Rosetta orbiter followed comet 67P/Churyumov-Gerasimenko for over two years and carried a dedicated instrument suite for dust analysis. One of these instruments, the MIDAS (Micro-Imaging Dust Analysis System) atomic force microscope, recorded the 3D topography of micro- to nanometre-sized dust. All particles analysed to date have been found to be hierarchical agglomerates. Most show compact packing; however, one is extremely porous. This paper contains a structural description of a compact aggregate and the outstanding porous one. Both particles are tens of micrometres in size and show rather narrow subunit size distributions with noticeably similar mean values of 1.48 (+0.13/-0.59) μm for the porous particle and 1.36 (+0.15/-0.59) μm for the compact one. The porous particle allows a fractal analysis, where a density-density correlation function yields a fractal dimension of Df = 1.70 ± 0.1. GIADA, another dust analysis instrument on board Rosetta, confirms the existence of a dust population with a similar fractal dimension. The fractal particles are interpreted as pristine agglomerates built in the protoplanetary disc and preserved in the comet. The similar subunits of both fractal and compact dust indicate a common origin which is, given the properties of the fractal, dominated by slow agglomeration of equally sized aggregates known as cluster-cluster agglomeration.

  8. Taxonomy of Individual Variations in Aesthetic Responses to Fractal Patterns

    PubMed Central

    Spehar, Branka; Walker, Nicholas; Taylor, Richard P.

    2016-01-01

    In two experiments, we investigate group and individual preferences in a range of different types of patterns with varying fractal-like scaling characteristics. In Experiment 1, we used 1/f filtered grayscale images as well as their thresholded (black and white) and edges only counterparts. Separate groups of observers viewed different types of images varying in slope of their amplitude spectra. Although with each image type, the groups exhibited the “universal” pattern of preference for intermediate amplitude spectrum slopes, we identified 4 distinct sub-groups in each case. Sub-group 1 exhibited a typical peak preference for intermediate amplitude spectrum slopes (“intermediate”; approx. 50%); sub-group 2 exhibited a linear increase in preference with increasing amplitude spectrum slope (“smooth”; approx. 20%), while sub-group 3 exhibited a linear decrease in preference as a function of the amplitude spectrum slope (“sharp”; approx. 20%). Sub-group 4 revealed no significant preference (“other”; approx. 10%). In Experiment 2, we extended the range of different image types and investigated preferences within the same observers. We replicate the results of our first experiment and show that individual participants exhibit stable patterns of preference across a wide range of image types. In both experiments, Q-mode factor analysis identified two principal factors that were able to explain more than 80% of interindividual variations in preference across all types of images, suggesting a highly similar dimensional structure of interindividual variations in preference for fractal-like scaling characteristics. PMID:27458365
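    Stimuli like the 1/f filtered grayscale images of Experiment 1 can be generated by shaping the amplitude spectrum of white noise in the Fourier domain; the exponent controls the amplitude-spectrum slope that observers' preferences were measured against. A minimal sketch under that standard construction (function name and defaults are our own; the study's exact stimulus-generation details may differ):

```python
import numpy as np

def one_over_f_image(n=256, alpha=1.5, seed=0):
    """Grayscale image whose amplitude spectrum falls off as 1/f**alpha,
    built by filtering white noise in the Fourier domain."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((n, n))
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fy, fx)                  # radial spatial frequency
    f[0, 0] = 1.0                         # avoid dividing by zero at DC
    spectrum = np.fft.fft2(white) / f ** alpha
    img = np.real(np.fft.ifft2(spectrum))
    # normalise to [0, 1] for display or thresholding
    return (img - img.min()) / (img.max() - img.min())
```

    Thresholding the normalised image at its median gives the black-and-white counterparts, and an edge filter gives the edges-only versions used as the other two image types.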

  9. Fractal design concepts for stretchable electronics.

    PubMed

    Fan, Jonathan A; Yeo, Woon-Hong; Su, Yewang; Hattori, Yoshiaki; Lee, Woosik; Jung, Sung-Young; Zhang, Yihui; Liu, Zhuangjian; Cheng, Huanyu; Falgout, Leo; Bajema, Mike; Coleman, Todd; Gregoire, Dan; Larsen, Ryan J; Huang, Yonggang; Rogers, John A

    2014-01-01

    Stretchable electronics provide a foundation for applications that exceed the scope of conventional wafer and circuit board technologies due to their unique capacity to integrate with soft materials and curvilinear surfaces. The range of possibilities is predicated on the development of device architectures that simultaneously offer advanced electronic function and compliant mechanics. Here we report that thin films of hard electronic materials patterned in deterministic fractal motifs and bonded to elastomers enable unusual mechanics with important implications in stretchable device design. In particular, we demonstrate the utility of Peano, Greek cross, Vicsek and other fractal constructs to yield space-filling structures of electronic materials, including monocrystalline silicon, for electrophysiological sensors, precision monitors and actuators, and radio frequency antennas. These devices support conformal mounting on the skin and have unique properties such as invisibility under magnetic resonance imaging. The results suggest that fractal-based layouts represent important strategies for hard-soft materials integration.

  10. Fractal design concepts for stretchable electronics

    NASA Astrophysics Data System (ADS)

    Fan, Jonathan A.; Yeo, Woon-Hong; Su, Yewang; Hattori, Yoshiaki; Lee, Woosik; Jung, Sung-Young; Zhang, Yihui; Liu, Zhuangjian; Cheng, Huanyu; Falgout, Leo; Bajema, Mike; Coleman, Todd; Gregoire, Dan; Larsen, Ryan J.; Huang, Yonggang; Rogers, John A.

    2014-02-01

    Stretchable electronics provide a foundation for applications that exceed the scope of conventional wafer and circuit board technologies due to their unique capacity to integrate with soft materials and curvilinear surfaces. The range of possibilities is predicated on the development of device architectures that simultaneously offer advanced electronic function and compliant mechanics. Here we report that thin films of hard electronic materials patterned in deterministic fractal motifs and bonded to elastomers enable unusual mechanics with important implications in stretchable device design. In particular, we demonstrate the utility of Peano, Greek cross, Vicsek and other fractal constructs to yield space-filling structures of electronic materials, including monocrystalline silicon, for electrophysiological sensors, precision monitors and actuators, and radio frequency antennas. These devices support conformal mounting on the skin and have unique properties such as invisibility under magnetic resonance imaging. The results suggest that fractal-based layouts represent important strategies for hard-soft materials integration.

  11. Fractal analysis of radiologists' visual scanning pattern in screening mammography

    NASA Astrophysics Data System (ADS)

    Alamudun, Folami T.; Yoon, Hong-Jun; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia

    2015-03-01

    Several researchers have investigated radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of the radiologists' visual scanning pattern when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied to the composite 4-view scanpaths. For each case, the complexity of each radiologist's scanpath was measured using fractal dimension estimated with the box counting method. The association between the fractal dimension of the radiologists' visual scanpath, case pathology, case density, and radiologist experience was evaluated using fixed effects ANOVA. ANOVA showed that the complexity of the radiologists' visual search pattern in screening mammography is dependent on case specific attributes (breast parenchyma density and case pathology) as well as on reader attributes, namely experience level. Visual scanning patterns are significantly different for benign and malignant cases than for normal cases. There is also substantial inter-observer variability which cannot be explained only by experience level.
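The box-counting estimator used in this record is straightforward to sketch. The following is an illustrative Python fragment, not the authors' code; the function name, box sizes, and test patterns are our own. It counts boxes containing at least one "on" pixel at several scales and fits the slope of log(count) against log(1/size):

```python
import numpy as np

def box_count_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a 2-D binary pattern by box counting.

    For each box size s, count the s x s boxes containing at least one 'on'
    pixel, then fit the slope of log(count) vs log(1/s).
    Assumes a square input for simplicity.
    """
    counts = []
    n = binary.shape[0]
    for s in sizes:
        # Partition the image into s x s boxes and count non-empty ones.
        trimmed = binary[: n - n % s, : n - n % s]
        boxes = trimmed.reshape((n - n % s) // s, s, -1, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)

# Sanity patterns: a filled square is 2-D, a straight line is 1-D.
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
```

On these exact power-law test patterns the fit recovers the dimension almost exactly; on real scanpath rasters the slope is an estimate over a limited scaling range.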

  12. Correlating brain blood oxygenation level dependent (BOLD) fractal dimension mapping with magnetic resonance spectroscopy (MRS) in Alzheimer's disease.

    PubMed

    Warsi, Mohammed A; Molloy, William; Noseworthy, Michael D

    2012-10-01

    To correlate temporal fractal structure of resting state blood oxygen level dependent (rsBOLD) functional magnetic resonance imaging (fMRI) with in vivo proton magnetic resonance spectroscopy ((1)H-MRS), in Alzheimer's disease (AD) and healthy age-matched normal controls (NC). High temporal resolution (4 Hz) rsBOLD signal and single voxel (left putamen) magnetic resonance spectroscopy data was acquired in 33 AD patients and 13 NC. The rsBOLD data was analyzed using two types of fractal dimension (FD) analysis based on relative dispersion and frequency power spectrum. Comparisons in FD were performed between AD and NC, and FD measures were correlated with (1)H-MRS findings. Temporal fractal analysis of rsBOLD, was able to differentiate AD from NC subjects (P = 0.03). Low FD correlated with markers of AD severity including decreased concentrations of N-acetyl aspartate (R = 0.44, P = 0.015) and increased myoinositol (mI) (R = -0.45, P = 0.012). Based on these results we suggest fractal analysis of rsBOLD could provide an early marker of AD.

  13. Synthetic Minority Oversampling Technique and Fractal Dimension for Identifying Multiple Sclerosis

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Dong; Zhang, Yin; Phillips, Preetha; Dong, Zhengchao; Wang, Shuihua

    Multiple sclerosis (MS) is a severe brain disease, and early detection can provide timely treatment. Fractal dimension provides a statistical index of how patterns change with scale in a given brain image. In this study, our team used the susceptibility weighted imaging technique to obtain 676 MS slices and 880 healthy slices. We used the synthetic minority oversampling technique to process the unbalanced dataset. Then, we used the Canny edge detector to extract distinguishing edges. The Minkowski-Bouligand dimension, a fractal dimension estimation method, was used to extract features from the edges. A single-hidden-layer neural network was used as the classifier. Finally, we proposed a three-segment representation biogeography-based optimization to train the classifier. Our method achieved a sensitivity of 97.78±1.29%, a specificity of 97.82±1.60% and an accuracy of 97.80±1.40%, and is superior to seven state-of-the-art methods in terms of sensitivity and accuracy.
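The oversampling step can be illustrated with a minimal SMOTE-style sketch. This is our own simplification, not the authors' implementation; the `smote` function and its parameters are hypothetical names. Synthetic minority-class samples are created by interpolating between a sample and one of its k nearest neighbours:

```python
import random

def smote(minority, k=3, n_new=5, seed=0):
    """Minimal SMOTE sketch: synthesize minority-class samples by linear
    interpolation between a random sample and one of its k nearest
    neighbours (Euclidean distance)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x, excluding x itself
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: sum((a - b) ** 2 for a, b in zip(p, x)))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # random position along the segment x -> nb
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
```

Because each synthetic point lies on a segment between two real samples, it stays inside the convex hull of the minority class, which is the property that makes the balanced dataset plausible.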

  14. Fractal analysis of phasic laser images of the myocardium for the purpose of diagnostics of acute coronary insufficiency

    NASA Astrophysics Data System (ADS)

    Wanchuliak, O. Y.; Bachinskyi, V. T.

    2011-09-01

    In this work, based on a Mueller-matrix description of optical anisotropy, the possibility of monitoring time changes in myocardium tissue birefringence is considered. An optical model of the polycrystalline networks of the myocardium is suggested. The results of investigating the interrelation between correlation parameters (correlation area, asymmetry coefficient, and autocorrelation function excess) and fractal parameters (dispersion of the logarithmic dependencies of power spectra) are presented. These parameters characterize the distributions of Mueller matrix elements at points of laser images of myocardium histological sections. Criteria for differentiating the causes of death are determined.

  15. Texture Classification by Texton: Statistical versus Binary

    PubMed Central

    Guo, Zhenhua; Zhang, Zhongcheng; Li, Xiu; Li, Qin; You, Jane

    2014-01-01

    Using statistical textons for texture classification has shown great success recently. The maximal response 8 (Statistical_MR8), image patch (Statistical_Joint) and locally invariant fractal (Statistical_Fractal) are typical statistical texton algorithms and state-of-the-art texture classification methods. However, these methods have two limitations. First, they need a training stage to build a texton library, so recognition accuracy depends heavily on the training samples; second, during feature extraction, each local feature is assigned to a texton by searching for the nearest texton in the whole library, which is time consuming when the library is large and the feature dimension is high. To address these two issues, this paper proposes three binary texton counterpart methods: Binary_MR8, Binary_Joint, and Binary_Fractal. These methods require no training step but encode local features directly into binary representations. Experimental results on the CUReT, UIUC and KTH-TIPS databases show that binary textons achieve sound results with fast feature extraction, especially when the images are not large and their quality is not poor. PMID:24520346
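The idea of encoding local features directly as binary words, with no texton library and no nearest-neighbour search, can be illustrated with a classic local binary pattern sketch. This is our own illustration; the paper's Binary_MR8, Binary_Joint and Binary_Fractal encodings differ in detail:

```python
import numpy as np

def lbp_codes(img):
    """Minimal local binary pattern sketch: each interior pixel's 8
    neighbours are thresholded against the centre value and packed into
    an 8-bit code, so every local feature becomes a binary word directly."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

flat = np.full((5, 5), 7)            # uniform patch: every bit set
ramp = np.tile(np.arange(5), (5, 1))  # horizontal gradient: one fixed code
```

A histogram of such codes over an image is the texture descriptor; no library lookup is needed, which is exactly the speed argument the abstract makes.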

  16. Numerical Approaches about the Morphological Description Parameters for the Manganese Deposits on the Magnesite Ore Surface

    NASA Astrophysics Data System (ADS)

    Bayirli, Mehmet; Ozbey, Tuba

    2013-07-01

    Black deposits usually found at the surface of magnesite ore or limestone, as well as red deposits in quartz veins, are known as natural manganese dendrites. Depending on their geometrical structure, they may take variable fractal shapes. The characteristic origins of these morphologies have rarely been studied by means of numerical analysis. Hence, digital images of the magnesite ore surface are acquired with a scanner. These images are then converted to 8-bit binary images in bitmap format. Next, the morphological description parameters of the manganese dendrites are computed by way of scaling methods such as occupied fractions, fractal dimensions, divergent ratios, and critical exponents of scaling. The fractal dimension and the scaling range depend on the fraction of the particles. Morphological description parameters can be determined from the geometrical evaluation of the natural manganese dendrites, which form independently of the process. The formation of manganese dendrites may also illuminate stochastic selection processes in nature. These results may therefore be useful for understanding the quartz-vein deposit parameters in geophysics.

  17. a New Improved Threshold Segmentation Method for Scanning Images of Reservoir Rocks Considering Pore Fractal Characteristics

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Li, Xizhe; Yang, Zhengming; Lin, Lijun; Xiong, Shengchun; Wang, Zhiyuan; Wang, Xiangyang; Xiao, Qianhua

    Based on the basic principle of the porosity method in image segmentation, and considering the relationship between rock porosity and the fractal characteristics of pore structures, a new improved image segmentation method is proposed that uses the calculated porosity of each core image as a constraint to obtain the best threshold. Comparative analysis shows that the porosity method can, in theory, segment images best, but its actual segmentation results deviate from the real situation. Because of core heterogeneity and isolated pores, the porosity method, which takes the experimentally measured porosity of the whole core as the criterion, cannot achieve the desired segmentation effect. The new improved method overcomes these shortcomings and produces a more reasonable binary segmentation of core grayscale images, segmenting each image according to its own calculated porosity. Moreover, basing segmentation on calculated rather than measured porosity also greatly saves manpower and material resources, especially for tight rocks.
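The porosity-constrained thresholding idea can be sketched as follows. This is an illustrative simplification, not the authors' algorithm; the function name and the synthetic test image are ours. Sweep all gray thresholds and keep the one whose resulting pore fraction best matches the target porosity:

```python
import numpy as np

def porosity_threshold(gray, target_porosity):
    """Pick the grayscale threshold whose resulting pore fraction is
    closest to a target porosity (pores assumed darker than grains)."""
    best_t, best_err = 0, float("inf")
    for t in range(1, 256):
        frac = np.mean(gray < t)  # fraction of pixels classified as pore
        err = abs(frac - target_porosity)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Synthetic patch: 20% dark "pore" pixels (value 40), 80% bright grains (200).
img = np.full((100, 100), 200, dtype=np.uint8)
img[:20, :] = 40
```

The same sweep works whether the target comes from a laboratory measurement (the original porosity method) or from a per-image calculated porosity (the improvement the abstract describes); only the constraint value changes.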

  18. Fractal analysis of INSAR and correlation with graph-cut based image registration for coastline deformation analysis: post seismic hazard assessment of the 2011 Tohoku earthquake region

    NASA Astrophysics Data System (ADS)

    Dutta, P. K.; Mishra, O. P.

    2012-04-01

    Satellite imagery of the 2011 earthquake off the Pacific coast of Tohoku has provided an opportunity to conduct image transformation analyses employing multi-temporal image retrieval techniques. In this study, we used a new image segmentation algorithm to image coastline deformation within a graph-cut energy minimization framework. Comprehensive analysis of available InSAR images using coastline deformation analysis helped extract disaster information for the affected region of the 2011 Tohoku tsunamigenic earthquake source zone. We attempted to correlate fractal analysis of seismic clustering behavior with image processing analogies, and our observations suggest that an increase in the fractal dimension distribution is associated with clustering of events that may determine the level of devastation of the region. The graph-cut based image registration technique helps detect devastation along the Tohoku coastline through changes in pixel intensity, providing a regional segmentation of the change in the coastal boundary after the tsunami. Transformation parameters are applied to the remotely sensed images by manually segmenting them to recover the translation parameter from two images that differ by rotation. Based on satellite image analysis through image segmentation, an area of 0.997 sq km in the Honshu region was found to be the maximum damage zone, localized in the coastal belt of the NE Japan forearc region. Analysis in MATLAB indicates that the proposed graph-cut algorithm is robust and more accurate than other image registration methods, and that it can give a realistic pixel-level estimate of recovered deformation fields corresponding to coastline change. This may help formulate assessment strategies, under disaster risk mitigation programs, for post-disaster need assessment in coastal belts damaged by strong shaking and tsunamis worldwide.

  19. Monitoring the soil degradation by Metastatistical Analysis

    NASA Astrophysics Data System (ADS)

    Oleschko, K.; Gaona, C.; Tarquis, A.

    2009-04-01

    The effectiveness of the fractal toolbox in capturing the critical behavior of soil structural patterns during chemical and physical degradation was documented by our numerous experiments (Oleschko et al., 2008 a; 2008 b). The spatio-temporal dynamics of these patterns was measured and mapped with high precision in terms of fractal descriptors. All tested fractal techniques were able to detect statistically significant differences in structure between the perfect spongy and massive patterns of uncultivated and sodium-saline agricultural soils, respectively. For instance, the Hurst exponent, extracted from Chernozem micromorphological images and from the time series of its physical and mechanical properties measured in situ, detected the roughness decrease (and therefore the increase in H from 0.17 to 0.30 for images) derived from the loss of original structure complexity. The combined use of different fractal descriptors brings statistical precision into the quantification of natural system degradation and provides a means for objective soil structure comparison (Oleschko et al., 2000). The ability of fractal parameters to capture critical behavior and phase transition was documented for contrasting situations ranging from Andosol deforestation and erosion to Vertisol fracturing and consolidation. The Hurst exponent is used to measure the type of persistence and degree of complexity of structure dynamics. We conclude that there is an urgent need to select and adopt a standardized toolbox for fractal analysis and complexity measures in the Earth Sciences. We propose to use second-order (meta-) statistics as subtle measures of complexity (Atmanspacher et al., 1997). A high degree of correlation was documented between the fractal descriptors and the high-order statistical descriptors (four central moments of stochastic variable distribution) used for system heterogeneity and variability analysis. 
We propose to call this combined fractal/statistical toolbox Metastatistical Analysis and recommend it for projects directed at soil degradation monitoring. References: 1. Oleschko, K., B.S. Figueroa, M.E. Miranda, M.A. Vuelvas and E.R. Solleiro, Soil & Till. Res. 55, 43 (2000). 2. Oleschko, K., Korvin, G., Figueroa S. B., Vuelvas, M.A., Balankin, A., Flores L., Carreño, D. Fractal radar scattering from soil. Physical Review E. 67, 041403, 2003. 3. Zamora-Castro S., Oleschko, K., Flores, L., Ventura, E. Jr., Parrot, J.-F., 2008. Fractal mapping of pore and solids attributes. Vadose Zone Journal, v. 7, Issue 2: 473-492. 4. Oleschko, K., Korvin, G., Muñoz, A., Velásquez, J., Miranda, M.E., Carreon, D., Flores, L., Martínez, M., Velásquez-Valle, M., Brambilla, F., Parrot, J.-F., Ronquillo, G., 2008. Fractal mapping of soil moisture content from remote sensed multi-scale data. Nonlinear Processes in Geophysics Journal, 15: 711-725. 5. Atmanspacher, H., Räth, Ch., Wiedenmann, G., 1997. Statistics and meta-statistics in the concept of complexity. Physica A, 234: 819-829.

  20. Fractal analysis of polyferric chloride-humic acid (PFC-HA) flocs in different topological spaces.

    PubMed

    Wang, Yili; Lu, Jia; Baiyu, Du; Shi, Baoyou; Wang, Dongsheng

    2009-01-01

    The fractal dimensions in different topological spaces of polyferric chloride-humic acid (PFC-HA) flocs, formed by flocculating different kinds of humic acid (HA) water at different initial pH (9.0, 7.0, 5.0) and PFC dosages, were calculated by effective density-maximum diameter, image analysis, and N2 adsorption-desorption methods, respectively. The mass fractal dimensions (Df) of PFC-HA flocs were calculated from the bi-logarithmic relation of effective density with maximum diameter and the Logan empirical equation. The Df value was more than 2.0 at an initial pH of 7.0, which was 11% and 13% higher than those at pH 9.0 and 5.0, respectively, indicating that the most compact flocs formed in flocculated HA water at an initial pH of 7.0. Image analysis of these flocs indicates that after flocculating HA water at initial pH greater than 7.0 with PFC flocculant, the fractal dimensions D2 (logA vs. logdL) and D3 (logVsphere vs. logdL) of the PFC-HA flocs decreased with increasing PFC dosage, and the flocs showed a gradually looser structure. At the optimum PFC dosage, the D2 (logA vs. logdL) values of the flocs differ by 14%-43% from their corresponding Df, and they even showed a different tendency with changing initial pH. However, the D2 values of the flocs formed at the three different initial pH values in HA solution followed the same tendency as the corresponding Df. Based on the fractal Frenkel-Halsey-Hill (FHH) adsorption and desorption equations, the pore surface fractal dimensions (Ds) for dried powders of PFC-HA flocs formed in HA water with initial pH 9.0 and 7.0 were all close to 2.9421, while the Ds values of flocs formed at initial pH 5.0 were less than 2.3746. This indicates that the pore surface fractal dimensions of the dried PFC-HA floc powders mainly reflect the irregularity of the mesopore- and macropore-size distributions.

  1. Connotations of pixel-based scale effect in remote sensing and the modified fractal-based analysis method

    NASA Astrophysics Data System (ADS)

    Feng, Guixiang; Ming, Dongping; Wang, Min; Yang, Jianyu

    2017-06-01

    Scale problems are a major source of concern in the field of remote sensing. Since remote sensing is a complex technological system, the connotations of scale and scale effect in remote sensing are not yet sufficiently understood. Thus, this paper first introduces the connotations of pixel-based scale and summarizes the general understanding of the pixel-based scale effect. Pixel-based scale effect analysis is essential for choosing appropriate remote sensing data and proper processing parameters. Fractal dimension is a useful measure for analyzing pixel-based scale. However, traditional fractal dimension calculation does not consider the impact of spatial resolution, so the change of the scale effect with spatial resolution cannot be clearly reflected. Therefore, this paper proposes using spatial resolution as the modified scale parameter of two fractal methods to further analyze the pixel-based scale effect. To verify the results of the two modified methods (MFBM, the Modified Windowed Fractal Brownian Motion method based on surface area, and MDBM, the Modified Windowed Double Blanket Method), the existing information entropy method of scale effect analysis is used for evaluation, with six sub-regions of building areas and farmland areas cut from QuickBird images as the experimental data. The results show that both the fractal dimension and the information entropy follow the same trend as spatial resolution decreases, with inflection points appearing at the same feature scales. Further analysis shows that these feature scales (corresponding to the inflection points) are related to the actual sizes of the geo-objects, which results in fewer mixed pixels in the image, and these inflection points are significantly indicative of the observed features. The experimental results therefore indicate that the modified fractal methods effectively reflect the pixel-based scale effect in remote sensing data and help analyze the observation scale from different aspects. This research will ultimately benefit remote sensing data selection and application.
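The information-entropy evaluation used for comparison in this record can be sketched in a few lines. This is illustrative only, not the paper's windowed implementation; the helper names and the noise-like test image are ours. Histogram entropy is computed at successively coarser resolutions (block averaging), and for a noise-like image it falls as resolution coarsens:

```python
import numpy as np

def shannon_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's grayscale histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

def downsample(img, factor):
    """Coarsen spatial resolution by averaging factor x factor blocks."""
    h, w = (s - s % factor for s in img.shape)
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128))  # noise-like test image
```

Real imagery does not decay this monotonically; the inflection points the abstract describes appear precisely where geo-object sizes interact with the pixel size.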

  2. A new version of Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.

    2010-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to bi-dimensional sets of points stored in bitmap files. The application was extended to work also with comma-separated values files and three-dimensional images. New version program summary: Program title: Fractal Analysis v02. Catalogue identifier: AEEG_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 9999. No. of bytes in distributed program, including test data, etc.: 4 366 783. Distribution format: tar.gz. Programming language: MS Visual Basic 6.0. Computer: PC. Operating system: MS Windows 98 or later. RAM: 30 M. Classification: 14. Catalogue identifier of previous version: AEEG_v1_0. Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999. Does the new version supersede the previous version?: Yes. Nature of problem: Estimating the fractal dimension of 2D and 3D images. Solution method: Optimized implementation of the box-counting algorithm. Reasons for new version: The previous version was limited to bitmap image files. The new application was extended to work with objects stored in comma-separated values (csv) files. The main advantages are: easier integration with other applications (csv is a widely used, simple text file format); fewer resources consumed and improved performance (only the information of interest, the "black points", is stored); higher resolution (the point coordinates are loaded into Visual Basic double variables [2]); and the possibility of storing three-dimensional objects (e.g. the 3D Sierpinski gasket). 
In this version the optimized box-counting algorithm [1] was extended to the three-dimensional case. Summary of revisions: The application interface was changed from SDI (single document interface) to MDI (multi-document interface). One form was added to provide a graphical user interface for the new functionality (fractal analysis of 2D and 3D images stored in csv files). Additional comments: User-friendly graphical interface; easy deployment mechanism. Running time: To a first approximation, the algorithm is linear. References: [1] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001. [2] F. Balena, Programming Microsoft Visual Basic 6.0, Microsoft Press, US, 1999.

  3. JPEG and wavelet compression of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and method of digital image compression that produces ophthalmic images of sufficient quality for transmission and diagnosis. Photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images, compressed with JPEG and wavelet methods to a range of sizes, and assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression for a given image size. Overall, images had to be compressed to below 2.5 percent (JPEG) or 1.7 percent (wavelet) of their original size before fine detail was lost or image quality became too poor for a reliable diagnosis.
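The objective criterion in this record, RMS error, and the related PSNR figure used elsewhere in this collection are simple to state in code. A minimal sketch (our own helper names, not the study's software):

```python
import numpy as np

def rms_error(original, compressed):
    """Root-mean-square error between two equally sized images."""
    diff = original.astype(float) - compressed.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio (dB), derived from the RMS error.
    Identical images give infinite PSNR (the 'lossless' case)."""
    rmse = rms_error(original, compressed)
    return float("inf") if rmse == 0.0 else 20.0 * np.log10(peak / rmse)

# Toy 8-bit images: an all-zero frame vs a frame offset by 3 gray levels.
a = np.zeros((4, 4))
b = np.full((4, 4), 3.0)
```

Comparing codecs "as a function of compressed image size", as the study does, means plotting this error against the byte count each codec needs, not against its nominal quality setting.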

  4. Skin cancer texture analysis of OCT images based on Haralick, fractal dimension and the complex directional field features

    NASA Astrophysics Data System (ADS)

    Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.

    2016-04-01

    Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the possibility of OCT in detecting such changes using a computer texture analysis method based on Haralick texture features, fractal dimension, and the complex directional field method. These features were used to identify spatial characteristics that differentiate healthy tissue from various skin cancers in cross-section OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing; here, an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in OCT images was used. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity, evaluated in different directions. A box-counting method is applied to compute the fractal dimension of the investigated tissues. Additionally, we used the complex directional field, calculated by the local gradient methodology, to increase the assessment quality of the diagnostic method. The complex directional field (as well as the "classical" directional field) can help describe an image as a set of directions. Given that malignant tissue grows anisotropically, principal grooves may be observed on dermoscopic images, suggesting the possible existence of principal directions on OCT images. Our results suggest that the described texture features may provide useful information to differentiate pathological from healthy tissue. The problem of distinguishing melanoma from nevi is addressed in this work using a large quantity of experimental data (143 OCT images of tumors, including basal cell carcinoma (BCC), malignant melanoma (MM), and nevi). We obtained a sensitivity of about 90% and a specificity of about 85%. Further research is warranted to determine how this approach may be used to select the regions of interest automatically.
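The Haralick features named in this record are all read off a gray-level co-occurrence matrix (GLCM). A minimal sketch for one pixel offset, assuming an 8-bit integer image (the helper names and quantization to 8 levels are our own choices, not the paper's):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset
    (assumes an 8-bit integer image, quantized to `levels` gray levels)."""
    q = (img * levels // 256).astype(int)
    h, w = q.shape
    # Pair each pixel with its neighbour at offset (dy, dx).
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)  # count co-occurring level pairs
    return m / m.sum()

def haralick(m):
    """Three of the classic Haralick features from a normalized GLCM."""
    i, j = np.indices(m.shape)
    return {"contrast": float(np.sum(m * (i - j) ** 2)),
            "energy": float(np.sum(m ** 2)),
            "homogeneity": float(np.sum(m / (1.0 + np.abs(i - j))))}

flat = np.full((8, 8), 100)  # uniform patch: all mass in one GLCM cell
```

Evaluating the GLCM "in different directions", as the abstract says, just means repeating this with offsets such as (dy, dx) = (0, 1), (1, 0), (1, 1), (1, -1).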

  5. Polarimetric Wavelet Fractal Remote Sensing Principles for Space Materials (Preprint)

    DTIC Science & Technology

    2012-06-04

    previously introduced 9-10, 28. The combination of polarimetry and wavelet-fractal analysis yields enhanced knowledge of the spatial-temporal-frequency...applications in situations that require analysis over very short time durations or where information is localized, and have been combined with polarimetry ...and D.B. Chenault, “Near Infrared Imaging Polarimetry ”, Proc. SPIE 4481, pp. 30-31, 2001. [8] A.B. Mahler, P. Smith, R. Chipman, G. Smith, N

  6. Fractal aggregates in tennis ball systems

    NASA Astrophysics Data System (ADS)

    Sabin, J.; Bandín, M.; Prieto, G.; Sarmiento, F.

    2009-09-01

    We present a new practical exercise to explain the mechanisms of aggregation of some colloids which are otherwise not easy to understand. We have used tennis balls to simulate, in a visual way, the aggregation of colloids under reaction-limited colloid aggregation (RLCA) and diffusion-limited colloid aggregation (DLCA) regimes. We have used the images of the cluster of balls, following Forrest and Witten's pioneering studies on the aggregation of smoke particles, to estimate their fractal dimension.

  7. Fractal dimension analysis of weight-bearing bones of rats during skeletal unloading

    NASA Technical Reports Server (NTRS)

    Pornprasertsuk, S.; Ludlow, J. B.; Webber, R. L.; Tyndall, D. A.; Sanhueza, A. I.; Yamauchi, M.

    2001-01-01

    Fractal analysis was used to quantify changes in trabecular bone induced through the use of a rat tail-suspension model to simulate microgravity-induced osteopenia. Fractal dimensions were estimated from digitized radiographs obtained from tail-suspended and ambulatory rats. Fifty 4-month-old male Sprague-Dawley rats were divided into groups of 24 ambulatory (control) and 26 suspended (test) animals. Rats of both groups were killed after periods of 1, 4, and 8 weeks. Femurs and tibiae were removed and radiographed with standard intraoral films and digitized using a flatbed scanner. Square regions of interest were cropped at proximal, middle, and distal areas of each bone. Fractal dimensions were estimated from slopes of regression lines fitted to circularly averaged plots of log power vs. log spatial frequency. The results showed that the computed fractal dimensions were significantly greater for images of trabecular bones from tail-suspended groups than for ambulatory groups (p < 0.01) at 1 week. Periods between 1 and 4 weeks likewise yielded significantly different estimates (p < 0.05), consistent with an increase in bone loss. In the tibiae, the proximal regions of the suspended group produced significantly greater fractal dimensions than other regions (p < 0.05), which suggests they were more susceptible to unloading. The data are consistent with other studies demonstrating osteopenia in microgravity environments and the regional response to skeletal unloading. Thus, fractal analysis could be a useful technique to evaluate the structural changes of bone.
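The spectral estimator in this record regresses log power against log spatial frequency after circular averaging. The following illustrative sketch (our own code and test patterns, not the study's software) computes that slope; the mapping from slope to fractal dimension depends on the convention adopted, so only the slope is shown:

```python
import numpy as np

def spectral_slope(img):
    """Slope of the radially averaged log power spectrum vs log spatial
    frequency for a square image; a fractal dimension estimate is then
    derived from this slope under the chosen fBm convention."""
    n = img.shape[0]
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - n // 2, x - n // 2).astype(int)  # integer radius bins
    radii = np.arange(1, n // 2)
    avg = np.array([power[r == k].mean() for k in radii])  # circular average
    slope, _ = np.polyfit(np.log(radii), np.log(avg), 1)
    return float(slope)

rng = np.random.default_rng(2)
noise = rng.normal(size=(64, 64))                      # flat spectrum: slope near 0
ramp = np.add.outer(np.arange(64.0), np.arange(64.0))  # smooth: steep negative slope
```

Rougher textures have flatter spectra (slope closer to zero) and thus higher estimated fractal dimension, which matches the study's finding of greater dimensions in the unloaded, more porous bone.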

  8. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
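The quantization idea in this record is easy to demonstrate: dividing by a step q and rounding to integers discards the noise bits below q, and the lossless stage then compresses far better. A minimal sketch using zlib in place of the paper's Rice coder (the helper name, step sizes, and synthetic data are our own assumptions):

```python
import zlib
import numpy as np

def quantize_compress(img, q):
    """Quantize floating-point pixels as scaled integers (discarding
    noise bits below the step q), then apply lossless zlib compression.
    Returns the compressed bytes and the integer array."""
    ints = np.round(img / q).astype(np.int32)
    return zlib.compress(ints.tobytes(), 9), ints

rng = np.random.default_rng(1)
noisy = rng.normal(1000.0, 1.0, (64, 64))  # flat signal with unit Gaussian noise

fine, _ = quantize_compress(noisy, 0.001)      # preserves ~10 bits of pure noise
coarse, ints = quantize_compress(noisy, 0.25)  # keeps only ~2 noise bits
```

The coarse quantization compresses to far fewer bytes while guaranteeing a maximum reconstruction error of q/2, which is the controlled trade-off the paper exploits for floating-point images.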

  9. Investigation of non-premixed flame combustion characters in GO2/GH2 shear coaxial injectors using non-intrusive optical diagnostics

    NASA Astrophysics Data System (ADS)

    Dai, Jian; Yu, NanJia; Cai, GuoBiao

    2015-12-01

Single-element combustor experiments are conducted for three shear coaxial geometry configuration injectors by using gaseous oxygen and gaseous hydrogen (GO2/GH2) as propellants. During the combustion process, several spatially and time-resolved non-intrusive optical techniques, such as OH planar laser induced fluorescence (PLIF), high speed imaging, and infrared imaging, are simultaneously employed to observe the OH radical concentration distribution, flame fluctuations, and temperature fields. The results demonstrate that the turbulent flow phenomenon of the non-premixed flame exhibits a remarkable periodicity, and the mixing ratio becomes a crucial factor influencing the combustion flame length. The high speed and infrared images show consistent temperature field trends. As for the OH-PLIF images, an intuitive local flame structure is revealed by single-shot instantaneous images. Furthermore, the means and standard deviations of OH radical intensity are acquired to provide statistical information regarding the flame, which may be helpful for validating numerical simulations in the future. Parameters of the structure configurations, such as impinging angle and oxygen post thickness, play an important role in the reaction zone distribution. Based on a flame contour extraction method combining non-linear anisotropic diffusive filtering with variational level-set evolution, a fractal analysis is implemented to describe the fractal characteristics of the non-premixed flame contour. As a result, the flame front cannot be regarded as a fractal object; however, this turbulent process presents a self-similarity characteristic.

  10. A multifractal approach to space-filling recovery for PET quantification.

    PubMed

    Willaime, Julien M Y; Aboagye, Eric O; Tsoumpas, Charalampos; Turkheimer, Federico E

    2014-11-01

    A new image-based methodology is developed for estimating the apparent space-filling properties of an object of interest in PET imaging without need for a robust segmentation step and used to recover accurate estimates of total lesion activity (TLA). A multifractal approach and the fractal dimension are proposed to recover the apparent space-filling index of a lesion (tumor volume, TV) embedded in nonzero background. A practical implementation is proposed, and the index is subsequently used with mean standardized uptake value (SUV mean) to correct TLA estimates obtained from approximate lesion contours. The methodology is illustrated on fractal and synthetic objects contaminated by partial volume effects (PVEs), validated on realistic (18)F-fluorodeoxyglucose PET simulations and tested for its robustness using a clinical (18)F-fluorothymidine PET test-retest dataset. TLA estimates were stable for a range of resolutions typical in PET oncology (4-6 mm). By contrast, the space-filling index and intensity estimates were resolution dependent. TLA was generally recovered within 15% of ground truth on postfiltered PET images affected by PVEs. Volumes were recovered within 15% variability in the repeatability study. Results indicated that TLA is a more robust index than other traditional metrics such as SUV mean or TV measurements across imaging protocols. The fractal procedure reported here is proposed as a simple and effective computational alternative to existing methodologies which require the incorporation of image preprocessing steps (i.e., partial volume correction and automatic segmentation) prior to quantification.

  11. Three-dimensional fractal analysis of 99mTc-MAA SPECT images in chronic thromboembolic pulmonary hypertension for evaluation of response to balloon pulmonary angioplasty: association with pulmonary arterial pressure.

    PubMed

    Maruoka, Yasuhiro; Nagao, Michinobu; Baba, Shingo; Isoda, Takuro; Kitamura, Yoshiyuki; Yamazaki, Yuzo; Abe, Koichiro; Sasaki, Masayuki; Abe, Kohtaro; Honda, Hiroshi

    2017-06-01

Balloon pulmonary angioplasty (BPA) is used for inoperable chronic thromboembolic pulmonary hypertension (CTEPH), but its effect cannot be evaluated noninvasively. We devised a noninvasive quantitative index of response to BPA using three-dimensional fractal analysis (3D-FA) of technetium-99m-macroaggregated albumin (Tc-MAA) single-photon emission computed tomography (SPECT). Forty CTEPH patients who underwent pulmonary perfusion scintigraphy and mean pulmonary arterial pressure (mPAP) measurement by right heart catheterization before and after BPA were studied. The total uptake volume (TUV) in bilateral lungs was determined from maximum intensity projection Tc-MAA SPECT images. Fractal dimension was assessed by 3D-FA. Parameters were compared before and after BPA, and between patients with post-BPA mPAP more than 30 mmHg and less than or equal to 30 mmHg. Receiver operating characteristic analysis was carried out. BPA significantly improved TUV (from 595±204 to 885±214 ml, P<0.001) and reduced the laterality of uptake (from 238±147 to 135±131 ml, P<0.001). Patients with poor therapeutic response (post-BPA mPAP≥30 mmHg, n=16) showed a significantly smaller TUV increase (P=0.044) and a significantly greater post-BPA fractal dimension (P<0.001) than the low-mPAP group. Fractal dimension correlated with mPAP values before and after BPA (P=0.013 and 0.001, respectively). A post-BPA fractal dimension threshold of 2.4 distinguished between BPA success and failure with 75% sensitivity, 79% specificity, 78% accuracy, and area under the curve of 0.85. 3D-FA using Tc-MAA SPECT pulmonary perfusion scintigraphy enables a noninvasive evaluation of the response of CTEPH patients to BPA.
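The reported 2.4 threshold works like any binary classifier cut-off. With entirely made-up fractal dimensions and outcomes (purely illustrative, not the study's data), sensitivity and specificity at a threshold are computed as:

```python
import numpy as np

# Hypothetical post-BPA fractal dimensions and outcomes for illustration only
# (1 = poor response, i.e. post-BPA mPAP >= 30 mmHg); not the study data.
fd = np.array([2.10, 2.20, 2.30, 2.50, 2.60, 2.45, 2.35, 2.15])
poor = np.array([0, 0, 0, 1, 1, 1, 0, 0])

pred = (fd > 2.4).astype(int)             # threshold of 2.4 from the abstract
sensitivity = pred[poor == 1].mean()      # fraction of poor responders flagged
specificity = 1 - pred[poor == 0].mean()  # fraction of good responders cleared
```

Sweeping the threshold over all observed values and plotting sensitivity against (1 − specificity) yields the ROC curve whose area the authors report as 0.85.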

  12. A fractal derivative model for the characterization of anomalous diffusion in magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Liang, Yingjie; Ye, Allen Q.; Chen, Wen; Gatto, Rodolfo G.; Colon-Perez, Luis; Mareci, Thomas H.; Magin, Richard L.

    2016-10-01

    Non-Gaussian (anomalous) diffusion is widespread in biological tissues where its effects modulate chemical reactions and membrane transport. When viewed using magnetic resonance imaging (MRI), anomalous diffusion is characterized by a persistent or 'long tail' behavior in the decay of the diffusion signal. Recent MRI studies have used the fractional derivative to describe diffusion dynamics in normal and post-mortem tissue by connecting the order of the derivative with changes in tissue composition, structure and complexity. In this study, we consider an alternative approach by introducing fractal time and space derivatives into Fick's second law of diffusion. This provides a more natural way to link sub-voxel tissue composition with the observed MRI diffusion signal decay following the application of a diffusion-sensitive pulse sequence. Unlike previous studies using fractional order derivatives, here the fractal derivative order is directly connected to the Hausdorff fractal dimension of the diffusion trajectory. The result is a simpler, computationally faster, and more direct way to incorporate tissue complexity and microstructure into the diffusional dynamics. Furthermore, the results are readily expressed in terms of spectral entropy, which provides a quantitative measure of the overall complexity of the heterogeneous and multi-scale structure of biological tissues. As an example, we apply this new model for the characterization of diffusion in fixed samples of the mouse brain. These results are compared with those obtained using the mono-exponential, the stretched exponential, the fractional derivative, and the diffusion kurtosis models. Overall, we find that the order of the fractal time derivative, the diffusion coefficient, and the spectral entropy are potential biomarkers to differentiate between the microstructure of white and gray matter. In addition, we note that the fractal derivative model has practical advantages over the existing models from the perspective of computational accuracy and efficiency.
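Fractal-derivative diffusion models are often associated with a stretched-exponential signal decay of the form S(b) = S0·exp(−(D·b)^β); the exact parameterization in the paper may differ, so the form and the ground-truth values below are assumptions. Fitting β by linearization on synthetic data:

```python
import numpy as np

# Synthetic decay with assumed ground truth (illustrative values only)
b = np.linspace(0.0, 3000.0, 16)          # b-values, s/mm^2
beta_true, D_true = 0.7, 1e-3
signal = np.exp(-(D_true * b) ** beta_true)

# Linearize: log(-log S) = beta*log(D) + beta*log(b); fit on b > 0 only
x = np.log(b[1:])
y = np.log(-np.log(signal[1:]))
beta_fit, intercept = np.polyfit(x, y, 1)
D_fit = np.exp(intercept / beta_fit)
```

On noiseless data the fit recovers β and D exactly; on real MRI data a nonlinear least-squares fit of the un-linearized model is usually preferred because the log-log transform amplifies noise at low b.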

  13. Robust estimation of fractal measures for characterizing the structural complexity of the human brain: optimization and reproducibility

    PubMed Central

    Goñi, Joaquín; Sporns, Olaf; Cheng, Hu; Aznárez-Sanado, Maite; Wang, Yang; Josa, Santiago; Arrondo, Gonzalo; Mathews, Vincent P; Hummer, Tom A; Kronenberger, William G; Avena-Koenigsberger, Andrea; Saykin, Andrew J.; Pastor, María A.

    2013-01-01

    High-resolution isotropic three-dimensional reconstructions of human brain gray and white matter structures can be characterized to quantify aspects of their shape, volume and topological complexity. In particular, methods based on fractal analysis have been applied in neuroimaging studies to quantify the structural complexity of the brain in both healthy and impaired conditions. The usefulness of such measures for characterizing individual differences in brain structure critically depends on their within-subject reproducibility in order to allow the robust detection of between-subject differences. This study analyzes key analytic parameters of three fractal-based methods that rely on the box-counting algorithm with the aim of maximizing within-subject reproducibility of the fractal characterizations of different brain objects, including the pial surface, the cortical ribbon volume, the white matter volume and the gray matter/white matter boundary. Two separate datasets originating from different imaging centers were analyzed, comprising 50 subjects with three and 24 subjects with four successive scanning sessions per subject, respectively. The reproducibility of fractal measures was statistically assessed by computing their intra-class correlations. Results reveal differences between different fractal estimators and allow the identification of several parameters that are critical for high reproducibility. Highest reproducibility, with intra-class correlations in the range of 0.9–0.95, is achieved with the correlation dimension. Further analyses of the fractal dimensions of parcellated cortical and subcortical gray matter regions suggest robustly estimated and region-specific patterns of individual variability. These results are valuable for defining appropriate parameter configurations when studying changes in fractal descriptors of human brain structure, for instance in studies of neurological diseases that do not allow repeated measurements or for disease-course longitudinal studies. PMID:23831414
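The box-counting algorithm that all three estimators rely on counts, at each box size s, how many boxes of side s contain any part of the object; the dimension is the negative slope of log(count) against log(s). A generic sketch for a 2-D binary mask (not the study's pipeline):

```python
import numpy as np

def box_counting_dimension(mask):
    """Box-counting fractal dimension of a square 2-D binary mask."""
    n = mask.shape[0]
    sizes, counts = [], []
    s = n // 2
    while s >= 1:
        m = n - n % s                       # crop so boxes tile evenly
        view = mask[:m, :m].reshape(m // s, s, m // s, s)
        counts.append(view.any(axis=(1, 3)).sum())  # occupied boxes of side s
        sizes.append(s)
        s //= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

A filled square yields a dimension of 2 and a straight line yields 1; fractal boundaries such as the pial surface fall in between.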

  14. On the fractal morphology of combustion-generated soot aggregates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koylu, U.O.

    1995-12-31

    The fractal properties of soot aggregates were investigated using ex-situ and in-situ experimental methods as well as computer simulations. Ex-situ experiments involved thermophoretic sampling and analysis by transmission electron microscopy (TEM), while in-situ measurements employed angular static light scattering and data inversion based on the Rayleigh-Debye-Gans (RDG) approximation. Computer simulations used a sequential algorithm which mimics mass fractal-like structures. Soot from a variety of hydrocarbon-fueled laminar and turbulent nonpremixed flame environments was considered in the present study. The TEM analysis of projected soot images sampled from fuel-rich conditions of buoyant and weakly-buoyant laminar flames indicated that the fractal dimension of soot was relatively independent of position in the flames, fuel type, and flame condition. These measurements yielded an average fractal dimension of 1.8, although other structure parameters such as the primary particle diameters and the number of primary particles in aggregates had a wide range of values. The fractal prefactor (lacunarity) was also measured for soot sampled from the fuel-lean conditions of turbulent flames, considering the actual morphology by tilting the samples during TEM analysis. These measurements yielded a fractal dimension of 1.65 and a lacunarity of 8.5, with experimental uncertainties (95% confidence) of 0.08 and 0.5, respectively. Relationships between the actual and projected structure properties of soot were also developed by combining TEM observations with numerical simulations. Practical approximate formulae were suggested to find the radius of gyration of an aggregate from its maximum dimension, and the number of primary particles in an aggregate from its projected area. Finally, the fractal dimension and lacunarity of soot were obtained using light scattering for the same conditions as the above TEM measurements.

  15. The Application of Fractal and Multifractal Theory in Hydraulic-Flow-Unit Characterization and Permeability Estimation

    NASA Astrophysics Data System (ADS)

    Chen, X.; Yao, G.; Cai, J.

    2017-12-01

    Pore structure characteristics, such as pore-throat ratio, pore connectivity, pore-size distribution, and wettability, are important factors influencing the fluid transport behavior of porous media. To accurately characterize the diversity of pore structure among hydraulic flow units (HFUs), five samples selected from different HFUs (with approximately equal porosities but widely varying permeability) were chosen for micro-computed tomography tests to acquire direct 3D images of pore geometries and for mercury injection experiments to obtain the pore volume-radius distribution. To characterize the complex and highly nonlinear pore structure of all samples, three classic fractal geometry models were applied. Results showed that each HFU has similar box-counting and generalized fractal dimensions in the number-area model, but there are significant differences in the multifractal spectra. In the radius-volume model, there are three obvious linear segments, corresponding to three fractal dimension values, of which the middle one is identified as the actual fractal dimension according to the maximum radius. In the number-radius model, the spherical-pore size distribution extracted by the maximum ball algorithm exhibits a decrease in the number of small pores relative to the fractal power law rather than the traditional linear law. Among the three models, only multifractal analysis can classify the HFUs accurately. Additionally, owing to the tightness and low permeability of the reservoir rocks, the connate water film on the inner surface of pore channels commonly forms bound water, and the conventional fractal permeability model (Yu-Cheng's model) proves largely inapplicable. Considering the effect of irreducible water saturation, an improved fractal permeability model was therefore derived theoretically. The comparison results showed that the improved model can be applied to calculate permeability directly and accurately in such unconventional rocks.

  16. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body scans and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) of the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measure of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
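The NMSE figure of merit can be sketched as follows; normalizing the squared error by the original image's energy is an assumed convention, as the dissertation's exact normalization is not given here.

```python
import numpy as np

def nmse(original, reconstructed):
    """Normalized mean-square error between an original image and its
    reconstruction: squared error normalized by the original's energy."""
    o = np.asarray(original, dtype=float)
    r = np.asarray(reconstructed, dtype=float)
    return np.sum((o - r) ** 2) / np.sum(o ** 2)
```

An identical reconstruction gives 0, and a reconstruction of all zeros gives 1, so NMSE is a dimensionless global quality score independent of image size.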

  17. Fractal lacunarity of trabecular bone and magnetic resonance imaging: New perspectives for osteoporotic fracture risk assessment

    PubMed Central

    Zaia, Annamaria

    2015-01-01

    Osteoporosis represents one major health condition for our growing elderly population. It accounts for severe morbidity and increased mortality in postmenopausal women and it is becoming an emerging health concern even in aging men. Screening of the population at risk for bone degeneration and treatment assessment of osteoporotic patients to prevent bone fragility fractures represent useful tools to improve quality of life in the elderly and to lighten the related socio-economic impact. Bone mineral density (BMD) estimate by means of dual-energy X-ray absorptiometry is normally used in clinical practice for osteoporosis diagnosis. Nevertheless, BMD alone does not represent a good predictor of fracture risk. From a clinical point of view, bone microarchitecture seems to be an intriguing aspect to characterize bone alteration patterns in aging and pathology. The widening into clinical practice of medical imaging techniques and the impressive advances in information technologies together with enhanced capacity of power calculation have promoted proliferation of new methods to assess changes of trabecular bone architecture (TBA) during aging and osteoporosis. Magnetic resonance imaging (MRI) has recently arisen as a useful tool to measure bone structure in vivo. In particular, high-resolution MRI techniques have introduced new perspectives for TBA characterization by non-invasive non-ionizing methods. However, texture analysis methods have not found favor with clinicians as they produce quite a few parameters whose interpretation is difficult. The introduction in biomedical field of paradigms, such as theory of complexity, chaos, and fractals, suggests new approaches and provides innovative tools to develop computerized methods that, by producing a limited number of parameters sensitive to pathology onset and progression, would speed up their application into clinical practice. Complexity of living beings and fractality of several physio-anatomic structures suggest fractal analysis as a promising approach to quantify morpho-functional changes in both aging and pathology. In this particular context, fractal lacunarity seems to be the proper tool to characterize TBA texture as it is able to describe both discontinuity of bone network and sizes of bone marrow spaces, whose changes are an index of bone fracture risk. In this paper, an original method of MRI texture analysis, based on TBA fractal lacunarity, is described and discussed in the light of new perspectives for early diagnosis of osteoporotic fractures. PMID:25793162

  18. Integrated quantitative fractal polarimetric analysis of monolayer lung cancer cells

    NASA Astrophysics Data System (ADS)

    Shrestha, Suman; Zhang, Lin; Quang, Tri; Farrahi, Tannaz; Narayan, Chaya; Deshpande, Aditi; Na, Ying; Blinzler, Adam; Ma, Junyu; Liu, Bo; Giakos, George C.

    2014-05-01

    Digital diagnostic pathology has become one of the most valuable and convenient advancements in technology over the past years. It allows us to acquire, store, and analyze pathological information from images of histological and immunohistochemical glass slides, which are scanned to create digital slides. In this study, efficient fractal, wavelet-based polarimetric techniques for histological analysis of monolayer lung cancer cells are introduced and different monolayer cancer lines are studied. The outcome of this study indicates that applying fractal, wavelet polarimetric principles to the analysis of squamous carcinoma and adenocarcinoma cancer cell lines may prove extremely useful in discriminating between healthy and lung cancer cells as well as differentiating among different lung cancer cells.

  19. Electromagnetic backscattering from one-dimensional drifting fractal sea surface II: Electromagnetic backscattering model

    NASA Astrophysics Data System (ADS)

    Tao, Xie; William, Perrie; Shang-Zhuo, Zhao; He, Fang; Wen-Jin, Yu; Yi-Jun, He

    2016-07-01

    Sea surface current has a significant influence on electromagnetic (EM) backscattering signals and may constitute a dominant synthetic aperture radar (SAR) imaging mechanism. An effective EM backscattering model for a one-dimensional drifting fractal sea surface is presented in this paper. This model is used to simulate EM backscattering signals from the drifting sea surface. Numerical results show that ocean currents have a significant influence on EM backscattering signals from the sea surface. The normalized radar cross section (NRCS) discrepancies between the model for a coupled wave-current fractal sea surface and the model for an uncoupled fractal sea surface increase with the increase of incidence angle, as well as with increasing ocean currents. Ocean currents that are parallel to the direction of the wave can weaken the EM backscattering signal intensity, while the EM backscattering signal is intensified by ocean currents propagating oppositely to the wave direction. The model presented in this paper can be used to study the SAR imaging mechanism for a drifting sea surface. Project supported by the National Natural Science Foundation of China (Grant No. 41276187), the Global Change Research Program of China (Grant No. 2015CB953901), the Priority Academic Program Development of Jiangsu Higher Education Institutions, China, the Program for the Innovation Research and Entrepreneurship Team in Jiangsu Province, China, the Canadian Program on Energy Research and Development, and the Canadian World Class Tanker Safety Service Program.

  20. Fractal Characteristics of the Pore Network in Diatomites Using Mercury Porosimetry and Image Analysis

    NASA Astrophysics Data System (ADS)

    Stańczak, Grażyna; Rembiś, Marek; Figarska-Warchoł, Beata; Toboła, Tomasz

    The complex pore space considerably affects the unique properties of diatomite and its significant potential for many industrial applications. The pore network in the diatomite from the Lower Miocene strata of the Skole nappe (the Jawornik deposit, SE Poland) has been investigated using a fractal approach. The fractal dimension of the pore-space volume was calculated using the Menger sponge as a model of a porous body and the mercury porosimetry data in a pore-throat diameter range between 10,000 and 10 nm. Based on digital analyses of two-dimensional images from thin sections taken under a scanning electron microscope in backscattered electron mode at different magnifications, the authors quantified the pore spaces of the diatomites using the box counting method. The results derived from the analyses of the pore-throat diameter distribution using mercury porosimetry reveal that the pore space of the diatomite has a bifractal structure in two separate ranges of pore-throat diameters considerably smaller than the pore-throat sizes corresponding to threshold pressures. Assuming that the fractal dimensions identified for the ranges of the smaller pore-throat diameters characterize the overall pore-throat network in the Jawornik diatomite, the distribution of the pore-throat volume (necks) and the pore volume can be set apart from the distribution of the pore-space volume (pores and necks together).
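For reference, the fractal dimension of the Menger sponge model used here follows directly from its construction: each iteration keeps 20 of the 27 sub-cubes, each at one-third scale.

```python
import math

# Self-similarity dimension D = log(N) / log(1/r):
# N = 20 sub-cubes kept per iteration, each scaled by r = 1/3
d_menger = math.log(20) / math.log(3)
print(round(d_menger, 4))  # ≈ 2.7268
```

Measured pore-space dimensions below this ideal value indicate a pore network that fills space less completely than the model solid.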

  1. Bridging Three Orders of Magnitude: Multiple Scattered Waves Sense Fractal Microscopic Structures via Dispersion

    NASA Astrophysics Data System (ADS)

    Lambert, Simon A.; Näsholm, Sven Peter; Nordsletten, David; Michler, Christian; Juge, Lauriane; Serfaty, Jean-Michel; Bilston, Lynne; Guzina, Bojan; Holm, Sverre; Sinkus, Ralph

    2015-08-01

    Wave scattering provides profound insight into the structure of matter. Typically, the ability to sense microstructure is determined by the ratio of scatterer size to probing wavelength. Here, we address the question of whether macroscopic waves can report back the presence and distribution of microscopic scatterers despite several orders of magnitude difference in scale between wavelength and scatterer size. In our analysis, monosized hard scatterers 5 μm in radius are immersed in lossless gelatin phantoms to investigate the effect of multiple reflections on the propagation of shear waves with millimeter wavelength. Steady-state monochromatic waves are imaged in situ via magnetic resonance imaging, enabling quantification of the phase velocity at a voxel size big enough to contain thousands of individual scatterers, but small enough to resolve the wavelength. We show in theory, experiments, and simulations that the resulting coherent superposition of multiple reflections gives rise to power-law dispersion at the macroscopic scale if the scatterer distribution exhibits apparent fractality over an effective length scale that is comparable to the probing wavelength. Since apparent fractality is naturally present in any random medium, microstructure can thereby leave its fingerprint on the macroscopically quantifiable power-law exponent. Our results are generic to wave phenomena and carry great potential for sensing microstructure that exhibits intrinsic fractality, such as, for instance, vasculature.

  2. Fracture Surface Morphology and Impact Strength of Cellulose/PLA Composites.

    PubMed

    Gao, Honghong; Qiang, Tao

    2017-06-07

    Polylactide (PLA)-based composite materials reinforced with ball-milled celluloses were manufactured by extrusion blending followed by injection molding. Their impact-fracture surface morphology was imaged with scanning electron microscopy (SEM) and investigated by calculating fractal dimensions. Linear regression was then used to explore the relationship between the fractal dimension and the impact strength of the resultant cellulose/PLA composite materials. The results show that filling the ball-milled celluloses into PLA can improve the impact toughness of PLA by a minimum of 38%. It was demonstrated that the fracture pattern of the cellulose/PLA composite materials differs from that of pristine PLA. For the resultant composite materials, the fractal dimension of the impact-fractured surfaces increased with increasing filler content and decreasing particle size of the ball-milled cellulose particles. There were highly positive correlations between the fractal dimension of the fractured surfaces and the impact strength of the cellulose/PLA composites. However, the linear relationships between fractal dimension and impact strength differed among the measurement methods, as reflected in their R-squared values. The approach presented in this work will help to understand the structure-property relationships of composite materials from a new perspective.
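The linear-regression step is straightforward to reproduce in principle. The (fractal dimension, impact strength) pairs below are hypothetical values for illustration only, not the measured data from the study, and the units are assumed.

```python
import numpy as np

# Hypothetical (fractal dimension, impact strength) pairs; illustration only
fd = np.array([2.05, 2.11, 2.18, 2.24, 2.31])
strength = np.array([18.2, 19.9, 21.4, 23.0, 24.6])   # kJ/m^2, assumed units

slope, intercept = np.polyfit(fd, strength, 1)        # least-squares line
r_squared = np.corrcoef(fd, strength)[0, 1] ** 2      # goodness of fit
```

A positive slope with a high R-squared value is what "highly positive correlation" amounts to quantitatively; comparing R-squared across fractal-dimension estimation methods is how the different linear relationships in the abstract would be ranked.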

  3. Fracture Surface Morphology and Impact Strength of Cellulose/PLA Composites

    PubMed Central

    Gao, Honghong; Qiang, Tao

    2017-01-01

    Polylactide (PLA)-based composite materials reinforced with ball-milled celluloses were manufactured by extrusion blending followed by injection molding. Their impact-fracture surface morphology was imaged with scanning electron microscopy (SEM) and investigated by calculating fractal dimensions. Linear regression was then used to explore the relationship between the fractal dimension and the impact strength of the resultant cellulose/PLA composite materials. The results show that filling the ball-milled celluloses into PLA can improve the impact toughness of PLA by a minimum of 38%. It was demonstrated that the fracture pattern of the cellulose/PLA composite materials differs from that of pristine PLA. For the resultant composite materials, the fractal dimension of the impact-fractured surfaces increased with increasing filler content and decreasing particle size of the ball-milled cellulose particles. There were highly positive correlations between the fractal dimension of the fractured surfaces and the impact strength of the cellulose/PLA composites. However, the linear relationships between fractal dimension and impact strength differed among the measurement methods, as reflected in their R-squared values. The approach presented in this work will help to understand the structure–property relationships of composite materials from a new perspective. PMID:28772983

  4. Fractality and growth of He bubbles in metals

    NASA Astrophysics Data System (ADS)

    Kajita, Shin; Ito, Atsushi M.; Ohno, Noriyasu

    2017-08-01

    Pinholes are formed on the surfaces of metals by exposure to helium plasmas, and they are regarded as the initial stage of the growth of fuzzy nanostructures. In this study, the number density of the pinholes is investigated in detail from scanning electron microscope (SEM) micrographs of tungsten and tantalum exposed to helium plasmas. A power law relation was identified between the number density and the size of the pinholes. From the slope and the region where the power law was satisfied, the fractal dimension D and smin, which characterize the SEM images, are deduced. Parametric dependences and the material dependence of D and smin are revealed. To explain the fractality, simple Monte-Carlo simulations including random walks of He atoms and absorption on bubbles were introduced. It is shown that the initial position of the random walk is one of the key factors in reproducing the fractality. The results indicated that new nucleations of bubbles are necessary to reproduce the number-density distribution of bubbles.
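The random-walk-plus-absorption ingredient of such a Monte-Carlo simulation can be sketched minimally as below. This is a generic single-bubble illustration, not the authors' simulation, which tracks many bubbles and nucleation events.

```python
import numpy as np

def absorbed(rng, start, bubble_center, bubble_radius, max_steps=10_000):
    """2-D lattice random walk of a He atom; returns True if the walker
    reaches the absorbing bubble within max_steps, else False."""
    pos = np.array(start, dtype=float)
    center = np.asarray(bubble_center, dtype=float)
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    for _ in range(max_steps):
        pos += moves[rng.integers(4)]           # one random lattice step
        if np.hypot(*(pos - center)) <= bubble_radius:
            return True
    return False
```

Running many walkers from different starting distributions and letting absorbed atoms grow the bubble reproduces the competition between growth and new nucleation that the abstract identifies as essential.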

  5. Microvascular fractal dimension predicts prognosis and response to chemotherapy in glioblastoma: an automatic image analysis study.

    PubMed

    Chen, Cong; He, Zhi-Cheng; Shi, Yu; Zhou, Wenchao; Zhang, Xia; Xiao, Hua-Liang; Wu, Hai-Bo; Yao, Xiao-Hong; Luo, Wan-Chun; Cui, You-Hong; Bao, Shideng; Kung, Hsiang-Fu; Bian, Xiu-Wu; Ping, Yi-Fang

    2018-05-15

    The microvascular profile has been included in the WHO glioma grading criteria. Nevertheless, microvessels in gliomas of the same WHO grade, e.g., WHO IV glioblastoma (GBM), exhibit heterogeneous and polymorphic morphology, whose possible clinical significance remains to be determined. In this study, we employed a fractal geometry-derived parameter, microvascular fractal dimension (mvFD), to quantify microvessel complexity and developed a home-made macro in ImageJ software to automatically determine mvFD from microvessel-stained immunohistochemical images of GBM. We found that mvFD effectively quantified the morphological complexity of the GBM microvasculature. Furthermore, high mvFD favored the survival of GBM patients as an independent prognostic indicator and predicted a better response to chemotherapy. When investigating the underlying relations between mvFD and tumor growth by deploying Ki67/mvFD as an index for microvasculature-normalized tumor proliferation, we discovered an inverse correlation between mvFD and Ki67/mvFD. Furthermore, mvFD inversely correlated with the expression of a glycolytic marker, LDHA, which indicated poor prognosis of GBM patients. In conclusion, we developed an automatic approach for mvFD measurement and demonstrated that mvFD could predict the prognosis and response to chemotherapy of GBM patients.

  6. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

Traditionally, image compression was optimized for pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. Recently, however, compressed images have been increasingly utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of compressed images neither at the pixel level nor at the perceptual level, but at the semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure that compares features extracted from the text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM at evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate savings compared with using PSNR or SSIM. Moreover, we performed a subjective test on text recognition from compressed images and observed that our ISQA measure is highly consistent with subjective recognizability. Our work explores new dimensions in image quality assessment and demonstrates a promising direction for achieving higher compression ratios for specific semantic analysis tasks.

  7. Mapping Transient Hyperventilation Induced Alterations with Estimates of the Multi-Scale Dynamics of BOLD Signal.

    PubMed

    Kiviniemi, Vesa; Remes, Jukka; Starck, Tuomo; Nikkinen, Juha; Haapea, Marianne; Silven, Olli; Tervonen, Osmo

    2009-01-01

Temporal blood oxygen level dependent (BOLD) contrast signals in functional MRI during rest may be characterized by power spectral distribution (PSD) trends of the form 1/f(alpha). Trends with 1/f characteristics comprise fractal properties with repeating oscillation patterns at multiple time scales. Estimates of the fractal properties enable the quantification of phenomena that may otherwise be difficult to measure, such as transient, non-linear changes. In this study it was hypothesized that the fractal metrics of 1/f BOLD signal trends can map changes related to dynamic, multi-scale alterations in cerebral blood flow (CBF) after a transient hyperventilation challenge. Twenty-three normal adults were imaged in a resting state before and after hyperventilation. Different variables (1/f trend constant alpha, fractal dimension D(f), and Hurst exponent H) characterizing the trends were measured from the BOLD signals. The results show that fractal metrics of the BOLD signal follow the fractional Gaussian noise model, even during the dynamic CBF change that follows hyperventilation. The most dominant effect on the fractal metrics was detected in grey matter, in line with previous hyperventilation vaso-reactivity studies. Alpha was also able to differentiate blood vessels from grey matter changes. D(f) was most sensitive to grey matter. H correlated with default mode network areas before hyperventilation, but this pattern vanished after hyperventilation due to a global increase in H. In the future, resting-state fMRI combined with fractal metrics of the BOLD signal may be used for analyzing multi-scale alterations of cerebral blood flow.
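The 1/f(alpha) exponent described above is typically estimated from the slope of the power spectrum on log-log axes (for fractional Gaussian noise, alpha is related to the Hurst exponent by the standard relation alpha = 2H - 1). A minimal illustration on a synthetic PSD; the paper's exact estimator is not specified here:

```python
import numpy as np

def psd_exponent(freqs, psd):
    """Estimate alpha in PSD ~ 1/f**alpha from the log-log slope of the spectrum."""
    slope, _ = np.polyfit(np.log(freqs), np.log(psd), 1)
    return -slope

# synthetic 1/f^0.8 power spectral distribution over a low-frequency band (Hz)
f = np.linspace(0.01, 0.1, 50)
psd = 2.0 * f ** -0.8
alpha = psd_exponent(f, psd)
```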

  8. Histomorphometric, fractal and lacunarity comparative analysis of sheep (Ovis aries), goat (Capra hircus) and roe deer (Capreolus capreolus) compact bone samples.

    PubMed

    Gudea, A I; Stefan, A C

    2013-08-01

Quantitative and qualitative studies dealing with the histomorphometry of bone tissue nowadays play a new role in modern legal/forensic medicine and archaeozoology. This study deals with the differences found between humerus and metapodial bones of recent sheep (Ovis aries), goat (Capra hircus) and roe deer (Capreolus capreolus) specimens, from a qualitative point of view but mainly from a quantitative perspective. A novel perspective given by fractal analysis performed on the digital histological images is approached. This study shows that qualitative assessment may not be reliable due to the close resemblance of the structures. From the quantitative perspective (several measurements performed on osteonal units and statistical processing of the data), some of the elements measured show significant differences among the three species (the primary osteonal diameter, etc.). The fractal analysis and the lacunarity of the images show a great deal of potential, proving that this type of analysis can be of great help in separating the material from this perspective.

  9. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT, discrete cosine transform), JPEG 2000 (DWT, discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of a decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and to compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regressed models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if the IQ is specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
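The three-step scenario (calibrate, regress, select) can be sketched as follows. The calibration numbers are hypothetical placeholders, and PSNR is regressed against the logarithm of the compression ratio purely for illustration:

```python
import numpy as np

# hypothetical calibration data: (compression ratios, measured PSNR in dB) per method
calib = {
    "jpeg":     ([10, 20, 50, 100], [42.0, 38.5, 33.0, 29.0]),
    "jpeg2000": ([10, 20, 50, 100], [43.5, 40.0, 35.5, 31.5]),
}

def fit_models(calib):
    """Regress PSNR against log(compression ratio) for each method."""
    return {m: np.polyfit(np.log(r), np.asarray(q, float), 1)
            for m, (r, q) in calib.items()}

def best_method(models, ratio):
    """At a specified ratio, pick the method whose model predicts the highest PSNR."""
    pred = {m: np.polyval(p, np.log(ratio)) for m, p in models.items()}
    return max(pred, key=pred.get)

models = fit_models(calib)
choice = best_method(models, 100)
```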

  10. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for performing discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
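When each DCT frequency's rate and distortion statistics are treated independently, the rate-distortion trade-off reduces to a per-frequency Lagrangian minimization; the patent itself optimizes over the full arrays with dynamic programming, so the following is only a simplified sketch with toy statistics:

```python
def optimal_qtable(dist, rate, lam, qvalues):
    """Per DCT frequency f, pick the quantizer value q minimizing
    dist[f][q] + lam * rate[f][q] (hypothetical per-frequency statistics)."""
    return [min(qvalues, key=lambda q: dist[f][q] + lam * rate[f][q])
            for f in range(len(dist))]

# toy statistics for two frequencies and three candidate quantizer values:
# larger q means more distortion but fewer bits
qvals = [1, 2, 4]
dist = [{1: 0.1, 2: 0.4, 4: 1.6}, {1: 0.2, 2: 0.8, 4: 3.2}]
rate = [{1: 8.0, 2: 5.0, 4: 2.0}, {1: 7.0, 2: 4.0, 4: 1.5}]
```

Sweeping the Lagrange multiplier lam traces out the rate-distortion curve: lam = 0 favors fidelity, large lam favors bit-rate.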

  11. High-quality JPEG compression history detection for fake uncompressed images

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan

    2017-05-01

Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have previously been compressed. To detect potential JPEG compression of an image, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local geometrical structure of the image. Since compression can alter the structure information, the tetrolet covering indexes may change if a compression has been performed on the test image. Such changes can provide valuable clues about the image's compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block by block to investigate whether the tetrolet covering index of each 4×4 block differs between them. The percentages of changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, the local minimum of which may indicate the potential compression. Our experimental results demonstrate the advantage of our method in detecting high-quality JPEG compression, even at the highest quality factors such as 98, 99, or 100 of standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.
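The final step, locating the local minimum of the p-curve, can be sketched as below; the quality factors and percentages are invented for illustration:

```python
def detect_compression_qf(p_curve, qualities):
    """Find an interior local minimum of the p-curve (fraction of changed
    tetrolet covering indexes per recompression quality factor); its position
    is a candidate for the original JPEG quality factor."""
    for i in range(1, len(p_curve) - 1):
        if p_curve[i] < p_curve[i - 1] and p_curve[i] < p_curve[i + 1]:
            return qualities[i]
    return None  # no interior local minimum: likely never compressed

qfs = [70, 80, 90, 95, 98, 99, 100]
p = [0.42, 0.35, 0.28, 0.25, 0.12, 0.30, 0.33]  # dip at QF 98
```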

  12. Research on cloud background infrared radiation simulation based on fractal and statistical data

    NASA Astrophysics Data System (ADS)

    Liu, Xingrun; Xu, Qingshan; Li, Xia; Wu, Kaifeng; Dong, Yanbing

    2018-02-01

Cloud is an important natural phenomenon, and its radiation causes serious interference to infrared detectors. Based on fractals and statistical data, a method is proposed to realize cloud background simulation, and the cloud infrared radiation data field is assigned using satellite radiation data of clouds. A cloud infrared radiation simulation model is established using MATLAB; it can generate cloud background infrared images for different cloud types (low, middle, and high cloud) in different months, wavebands, and sensor zenith angles.
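The abstract does not specify the fractal algorithm used; a common choice for synthesizing cloud-like background fields is midpoint displacement (diamond-square), sketched here as an illustrative stand-in rather than the authors' method:

```python
import numpy as np

def diamond_square(n, roughness=0.6, seed=0):
    """Generate a (2**n + 1)-square fractal field by midpoint displacement."""
    rng = np.random.default_rng(seed)
    size = 2 ** n + 1
    g = np.zeros((size, size))
    g[0, 0], g[0, -1], g[-1, 0], g[-1, -1] = rng.random(4)  # seed the corners
    step, scale = size - 1, 1.0
    while step > 1:
        half = step // 2
        # diamond step: centers of squares get the corner average plus noise
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (g[y - half, x - half] + g[y - half, x + half] +
                       g[y + half, x - half] + g[y + half, x + half]) / 4.0
                g[y, x] = avg + (rng.random() - 0.5) * scale
        # square step: edge midpoints get the average of their neighbors plus noise
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                vals = []
                if y - half >= 0: vals.append(g[y - half, x])
                if y + half < size: vals.append(g[y + half, x])
                if x - half >= 0: vals.append(g[y, x - half])
                if x + half < size: vals.append(g[y, x + half])
                g[y, x] = sum(vals) / len(vals) + (rng.random() - 0.5) * scale
        step, scale = half, scale * roughness  # halve the grid, shrink the noise
    return g

field = diamond_square(5)
```

The resulting field would still need to be mapped to radiance values, e.g. from the satellite cloud radiation statistics the abstract mentions.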

  13. Fusion of multiscale wavelet-based fractal analysis on retina image for stroke prediction.

    PubMed

    Che Azemin, M Z; Kumar, Dinesh K; Wong, T Y; Wang, J J; Kawasaki, R; Mitchell, P; Arjunan, Sridhar P

    2010-01-01

In this paper, we present a novel method of analyzing the retinal vasculature using the Fourier fractal dimension to extract the complexity of the retinal vasculature enhanced at different wavelet scales. Logistic regression was used as a fusion method to model the classifier for 5-year stroke prediction. The efficacy of this technique has been tested using standard pattern-recognition performance evaluation, receiver operating characteristic (ROC) analysis, and a medical prediction statistic, the odds ratio. A stroke prediction model was developed using the proposed system.

  14. Skin cancer texture analysis of OCT images based on Haralick, fractal dimension, Markov random field features, and the complex directional field features

    NASA Astrophysics Data System (ADS)

    Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.; Khramov, Alexander G.

    2016-10-01

In this paper, we examine the validity of OCT for identifying changes in skin using texture analysis compiled from Haralick texture features, fractal dimension, the Markov random field method, and complex directional field features from different tissues. The described features have been used to detect specific spatial characteristics that can differentiate healthy tissue from diverse skin cancers in cross-sectional OCT images (B- and/or C-scans). In this work, we used an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in the OCT images. The Haralick texture features contrast, correlation, energy, and homogeneity were calculated in various directions. A box-counting method was performed to evaluate the fractal dimension of the skin probes. Markov random fields were used to enhance classification quality. Additionally, we used the complex directional field, calculated by the local gradient methodology, to improve the assessment quality of the diagnostic method. Our results demonstrate that these texture features may provide helpful information for discriminating tumor from healthy tissue. The experimental data set contains 488 OCT images of normal skin and tumors, namely Basal Cell Carcinoma (BCC), Malignant Melanoma (MM) and Nevus. All images were acquired with our laboratory SD-OCT setup based on a broadband light source delivering an output power of 20 mW at a central wavelength of 840 nm with a bandwidth of 25 nm. We obtained a sensitivity of about 97% and a specificity of about 73% for the task of discriminating between MM and Nevus.
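The Haralick features named above (contrast, correlation, energy, homogeneity) are computed from a gray-level co-occurrence matrix (GLCM). A from-scratch sketch for a single horizontal offset, not the authors' implementation:

```python
import numpy as np

def glcm_features(img, levels):
    """GLCM for the horizontal (0-degree, distance 1) offset plus four
    Haralick features, computed from an integer-valued image."""
    glcm = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[i, j] += 1
    p = glcm / glcm.sum()                     # normalize to joint probabilities
    idx = np.arange(levels)
    ii, jj = np.meshgrid(idx, idx, indexing="ij")
    contrast = (p * (ii - jj) ** 2).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(ii - jj))).sum()
    mu_i, mu_j = (p * ii).sum(), (p * jj).sum()
    sd_i = np.sqrt((p * (ii - mu_i) ** 2).sum())
    sd_j = np.sqrt((p * (jj - mu_j) ** 2).sum())
    correlation = (p * (ii - mu_i) * (jj - mu_j)).sum() / (sd_i * sd_j)
    return contrast, energy, homogeneity, correlation

img = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [0, 2, 2, 2], [2, 2, 3, 3]])
c, e, h, r = glcm_features(img, 4)
```

A full implementation would accumulate GLCMs over several directions (0, 45, 90, 135 degrees), as the abstract indicates.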

  15. A multifractal approach to space-filling recovery for PET quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willaime, Julien M. Y., E-mail: julien.willaime@siemens.com; Aboagye, Eric O.; Tsoumpas, Charalampos

    2014-11-01

Purpose: A new image-based methodology is developed for estimating the apparent space-filling properties of an object of interest in PET imaging without need for a robust segmentation step, and is used to recover accurate estimates of total lesion activity (TLA). Methods: A multifractal approach and the fractal dimension are proposed to recover the apparent space-filling index of a lesion (tumor volume, TV) embedded in nonzero background. A practical implementation is proposed, and the index is subsequently used with the mean standardized uptake value (SUVmean) to correct TLA estimates obtained from approximate lesion contours. The methodology is illustrated on fractal and synthetic objects contaminated by partial volume effects (PVEs), validated on realistic 18F-fluorodeoxyglucose PET simulations, and tested for its robustness using a clinical 18F-fluorothymidine PET test–retest dataset. Results: TLA estimates were stable for a range of resolutions typical in PET oncology (4–6 mm). By contrast, the space-filling index and intensity estimates were resolution dependent. TLA was generally recovered within 15% of ground truth on postfiltered PET images affected by PVEs. Volumes were recovered within 15% variability in the repeatability study. Results indicated that TLA is a more robust index than other traditional metrics such as SUVmean or TV measurements across imaging protocols. Conclusions: The fractal procedure reported here is proposed as a simple and effective computational alternative to existing methodologies which require the incorporation of image preprocessing steps (i.e., partial volume correction and automatic segmentation) prior to quantification.

  16. Application of content-based image compression to telepathology

    NASA Astrophysics Data System (ADS)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  17. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://hesarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.

  18. Cell type classifiers for breast cancer microscopic images based on fractal dimension texture analysis of image color layers.

    PubMed

    Jitaree, Sirinapa; Phinyomark, Angkoon; Boonyaphiphat, Pleumjit; Phukpattaranont, Pornchai

    2015-01-01

    Having a classifier of cell types in a breast cancer microscopic image (BCMI), obtained with immunohistochemical staining, is required as part of a computer-aided system that counts the cancer cells in such BCMI. Such quantitation by cell counting is very useful in supporting decisions and planning of the medical treatment of breast cancer. This study proposes and evaluates features based on texture analysis by fractal dimension (FD), for the classification of histological structures in a BCMI into either cancer cells or non-cancer cells. The cancer cells include positive cells (PC) and negative cells (NC), while the normal cells comprise stromal cells (SC) and lymphocyte cells (LC). The FD feature values were calculated with the box-counting method from binarized images, obtained by automatic thresholding with Otsu's method of the grayscale images for various color channels. A total of 12 color channels from four color spaces (RGB, CIE-L*a*b*, HSV, and YCbCr) were investigated, and the FD feature values from them were used with decision tree classifiers. The BCMI data consisted of 1,400, 1,200, and 800 images with pixel resolutions 128 × 128, 192 × 192, and 256 × 256, respectively. The best cross-validated classification accuracy was 93.87%, for distinguishing between cancer and non-cancer cells, obtained using the Cr color channel with window size 256. The results indicate that the proposed algorithm, based on fractal dimension features extracted from a color channel, performs well in the automatic classification of the histology in a BCMI. This might support accurate automatic cell counting in a computer-assisted system for breast cancer diagnosis. © Wiley Periodicals, Inc.
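The box-counting estimate of fractal dimension used above can be sketched as follows; a filled square serves as a sanity check, since a plane-filling set should give a dimension close to 2:

```python
import numpy as np

def box_counting_dimension(binary, sizes):
    """Estimate fractal dimension as the slope of log N(s) versus log(1/s),
    where N(s) counts boxes of side s containing any foreground pixel."""
    counts = []
    for s in sizes:
        # pad each dimension up to a multiple of the box size
        r = (binary.shape[0] + s - 1) // s * s
        c = (binary.shape[1] + s - 1) // s * s
        padded = np.zeros((r, c), dtype=bool)
        padded[:binary.shape[0], :binary.shape[1]] = binary
        blocks = padded.reshape(r // s, s, c // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

square = np.ones((64, 64), dtype=bool)
fd = box_counting_dimension(square, [2, 4, 8, 16])
```

In the study this would be applied to Otsu-binarized color-channel images rather than a synthetic mask.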

  19. Rheological and fractal characteristics of unconditioned and conditioned water treatment residuals.

    PubMed

    Dong, Y J; Wang, Y L; Feng, J

    2011-07-01

The rheological and fractal characteristics of raw (unconditioned) and conditioned water treatment residuals (WTRs) were investigated in this study. Variations in the morphology, size, and image fractal dimensions of the flocs/aggregates in these WTR systems with increasing polymer doses were analyzed. The results showed that when the raw WTRs were conditioned with the polymer CZ8688, the optimum polymer dosage was observed at 24 kg/ton dry sludge. The average diameter of irregularly shaped flocs/aggregates in the WTR suspensions increased from 42.54 μm to several hundred micrometers with increasing polymer doses. Furthermore, the aggregates in the conditioned WTR system displayed boundary/surface and mass fractals. At the optimum polymer dosage, the aggregates formed had a volumetric average diameter of about 820.7 μm, with a one-dimensional fractal dimension of 1.01 and a mass fractal dimension of 2.74 on the basis of the image analysis. Rheological tests indicated that the conditioned WTRs at the optimum polymer dosage showed higher levels of shear-thinning behavior than the raw WTRs. Variations in the limiting viscosity (η(∞)) of the conditioned WTRs with sludge content could be described by a linear equation, which differs from the often-observed empirical exponential relationship for most municipal sludge. With increasing temperature, the η(∞) of the conditioned WTRs decreased more rapidly than that of the raw WTRs. Good fits of the lg η(∞) versus T relationships to the Arrhenius equation indicate that the WTRs had a much higher activation energy for viscosity, about 17.86-26.91 J/mol, compared with that of anaerobic granular sludge (2.51 J/mol) (Mu and Yu, 2006). In addition, the Bingham plastic model adequately described the rheological behavior of the conditioned WTRs, whereas the rheology of the raw WTRs fit the Herschel-Bulkley model well only at certain sludge contents. Considering the good power-law relationships between the limiting viscosity and sludge content of the conditioned WTRs, their mass fractal dimension was calculated through the models proposed by Shih et al. (1990), giving 2.48 for the conditioned WTR aggregates. The results demonstrate that conditioned WTRs behave like weak-link flocs/aggregates. Copyright © 2011 Elsevier Ltd. All rights reserved.
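For reference, the rheological models named above have the following standard forms (symbols follow common usage and are not necessarily the paper's notation: τ is shear stress, τ_y yield stress, γ̇ shear rate, η_p plastic viscosity, K the consistency index, n the flow index, E_a the activation energy):

```latex
\tau = \tau_y + \eta_p \dot{\gamma}            % Bingham plastic model
\tau = \tau_y + K \dot{\gamma}^{\,n}           % Herschel--Bulkley model
\eta_\infty = A \exp\!\left(\frac{E_a}{RT}\right) % Arrhenius fit of \lg\eta_\infty\ vs.\ T
```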

  20. Effect of angle of deposition on the Fractal properties of ZnO thin film surface

    NASA Astrophysics Data System (ADS)

    Yadav, R. P.; Agarwal, D. C.; Kumar, Manvendra; Rajput, Parasmani; Tomar, D. S.; Pandey, S. N.; Priya, P. K.; Mittal, A. K.

    2017-09-01

Zinc oxide (ZnO) thin films were prepared by atom beam sputtering at various deposition angles in the range 20-75°. The deposited thin films were examined by glancing-angle X-ray diffraction and atomic force microscopy (AFM). Scaling-law analysis was performed on the AFM images to show that the thin film surfaces are self-affine. The fractal dimension of each of the 256 vertical sections along the fast scan direction of a discretized surface, obtained from the AFM height data, was estimated using Higuchi's algorithm. The Hurst exponent was computed from the fractal dimension. The grain sizes, as determined by applying the self-correlation function to the AFM micrographs, varied with the deposition angle in the same manner as the Hurst exponent.
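Higuchi's algorithm estimates the fractal dimension of a 1-D profile from how the mean curve length L(k) scales with the lag k (L(k) ~ k^-D); for a self-affine profile the Hurst exponent then follows as H = 2 - D. A compact sketch:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension of a 1-D sequence: the negated log-log
    slope of mean curve length L(k) against lag k."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        lk = []
        for m in range(k):                      # one subsequence per start offset
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)  # length-normalization factor
            lk.append(dist * norm / k)
        lengths.append(np.mean(lk))
    ks = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(ks), np.log(lengths), 1)
    return -slope

# a straight line is smooth, so its Higuchi dimension should be close to 1
d_line = higuchi_fd(np.linspace(0.0, 1.0, 1000))
```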

  1. Spatiotemporal Characterization of a Fibrin Clot Using Quantitative Phase Imaging

    PubMed Central

    Gannavarpu, Rajshekhar; Bhaduri, Basanta; Tangella, Krishnarao; Popescu, Gabriel

    2014-01-01

Studying the dynamics of fibrin clot formation and its morphology is an important problem in biology and has significant impact for several scientific and clinical applications. We present a label-free technique based on quantitative phase imaging to address this problem. Using quantitative phase information, we characterized fibrin polymerization in real time and present a mathematical model describing the transition from liquid to gel state. By exploiting the inherent optical sectioning capability of our instrument, we measured the three-dimensional structure of the fibrin clot. From these data, we evaluated the fractal nature of the fibrin network and extracted the fractal dimension. Our non-invasive and speckle-free approach analyzes the clotting process without the need for external contrast agents. PMID:25386701

  2. Task-oriented lossy compression of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  3. X-ray imaging of aggregation in silica and zeolitic precursors

    NASA Astrophysics Data System (ADS)

    Morrison, Graeme R.; Browne, Michael T.; Beelen, Theo P. M.; van Garderen, Harold F.

    1993-01-01

    The resolution available in the King's College London scanning transmission x-ray microscope (STXM) can be exploited to study aggregate structures over a length scale from 100 nm to 10 micrometers that overlaps with and complements that available from small-angle x-ray scattering (SAXS) data. It is then possible to use these combined sets of data to test between different growth models for the aggregates, using the fractal dimension of the structures as a way of distinguishing the different models. In this paper we show some of the first transmission x-ray images taken of silica gels and zeolite precursors, materials that are of great practical and economic importance for certain selective catalytic processes in the chemical industry, and yet for which there is still only limited understanding of the complicated processes involved in their preparation. These images reveal clearly the fractal aggregates that are formed by the specimens.

  4. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
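The decimate, compress, decompress, interpolate, sharpen pipeline can be sketched as below, with the compression stage omitted (identity) and unsharp masking standing in for the unspecified edge-sharpening step:

```python
import numpy as np

def decimate2(img):
    """2x2 block average: the two-dimensional decimation (assumes even dims)."""
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def interpolate2(small):
    """Nearest-neighbour expansion back to the original array size."""
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

def sharpen(img, amount=1.0):
    """Unsharp masking: add back high-pass detail to enhance edges
    (interior pixels only; borders are left unchanged)."""
    blur = np.copy(img)
    blur[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                        img[1:-1, :-2] + img[1:-1, 2:] + img[1:-1, 1:-1]) / 5.0
    return img + amount * (img - blur)

img = np.outer(np.arange(8.0), np.ones(8))
# in the patent, a JPEG or wavelet codec sits between decimation and interpolation
recon = sharpen(interpolate2(decimate2(img)))
```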

  5. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
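The core idea, wavelet-transforming the image and keeping only significant coefficients, can be sketched with a one-level 2-D Haar transform (the patent does not mandate a particular wavelet; Haar is used here for brevity):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: approximation plus three detail bands."""
    a = (img[0::2] + img[1::2]) / 2.0   # row averages
    d = (img[0::2] - img[1::2]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def inverse_haar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    cols = ll.shape[1] * 2
    a = np.empty((ll.shape[0], cols))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((ll.shape[0] * 2, cols))
    out[0::2], out[1::2] = a + d, a - d
    return out

def compress(img, threshold):
    """Keep the approximation band; zero out detail coefficients below threshold."""
    ll, *details = haar2d(img)
    details = [np.where(np.abs(b) >= threshold, b, 0.0) for b in details]
    return (ll, *details)

img = np.arange(16.0).reshape(4, 4)
recon = inverse_haar2d(*compress(img, 0.0))  # threshold 0 keeps everything
```

Subsequent multivariate analysis then operates on the reduced set of significant coefficients instead of the full pixel grid.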

  6. Association of the Fractal Dimension of Retinal Arteries and Veins with Quantitative Brain MRI Measures in HIV-Infected and Uninfected Women

    PubMed Central

    Crystal, Howard A.; Holman, Susan; Lui, Yvonne W.; Baird, Alison E.; Yu, Hua; Klein, Ronald; Rojas-Soto, Diana Marcella; Gustafson, Deborah R.; Stebbins, Glenn T.

    2016-01-01

Objective The fractal dimension of retinal arteries and veins is a measure of the complexity of the vascular tree. We hypothesized that retinal fractal dimension would be associated with brain volume and white matter integrity in HIV-infected women. Design Nested case-control within longitudinal cohort study. Methods Women were recruited from the Brooklyn site of the Women’s Interagency HIV Study (WIHS); 34 HIV-infected and 21 HIV-uninfected women with analyzable MRIs and retinal photographs were included. Fractal dimension was determined using the SIVA software program on skeletonized retinal images. The relationships between predictors (retinal vascular measures) and outcomes (quantitative MRI measures) were analyzed with linear regression models. All models included age, intracranial volume, and both arterial and venous fractal dimension. Some models were adjusted for blood pressure, race/ethnicity, and HIV infection. Results The women were 45.6 ± 7.3 years of age. Higher arterial dimension was associated with larger cortical volumes, but higher venous dimension was associated with smaller cortical volumes. In fully adjusted models, venous dimension was significantly associated with fractional anisotropy (standardized β = -0.41, p = 0.009) and total gray matter volume (β = -0.24, p = 0.03), and arterial dimension with mean diffusivity (β = -0.33, p = 0.04) and fractional anisotropy (β = 0.34, p = 0.03). HIV infection was not associated with any retinal or MRI measure. Conclusions Higher venous fractal dimension was associated with smaller cortical volumes and lower fractional anisotropy, whereas higher arterial fractal dimension was associated with the opposite patterns. Longitudinal studies are needed to validate this finding. PMID:27158911

  7. A New Fractal Model of Chromosome and DNA Processes

    NASA Astrophysics Data System (ADS)

    Bouallegue, K.

    Dynamic chromosome structure remains unknown. Can fractals and chaos be used as new tools to model, identify and generate a structure of chromosomes? Fractals and chaos offer a rich environment for exploring and modeling the complexity of nature. In a sense, fractal geometry is used to describe, model, and analyze the complex forms found in nature. Fractals have also been widely used not only in biology but also in medicine. To this effect, a fractal is considered an object that displays self-similarity under magnification and can be constructed using a simple motif (an image repeated on ever-reduced scales). It is worth noting that the problem of identifying a chromosome has become a challenge of finding out which one of the models it belongs to. Nevertheless, several different models (a hierarchical coiling, a folded fiber, and radial loops) have been proposed for the mitotic chromosome but have not yet yielded a dynamic model. This paper is an attempt to solve topological problems involved in the modeling of chromosome and DNA processes. By combining the fractal Julia process and a numerical dynamical system, we have established four main points. First, we have developed not only a model of the chromosome but also a model of mitosis and one of meiosis. Equally important, we have identified the centromere position through the numerical model. More importantly, we have described the processes of cell division in both mitosis and meiosis. All in all, the results show that this work could have a strong impact on the welfare of humanity and may lead to cures for genetic diseases.
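
    The Julia-process building block mentioned in this record can be illustrated with a standard escape-time iteration of z → z*z + c; the parameter below is illustrative and not taken from the paper.

```python
def julia_escape_time(z: complex, c: complex, max_iter: int = 100) -> int:
    """Iteration count at which |z| exceeds 2 under z -> z*z + c, or
    max_iter if the orbit stays bounded (point treated as inside the set)."""
    for i in range(max_iter):
        if abs(z) > 2.0:
            return i
        z = z * z + c
    return max_iter

# c = -1 gives the "basilica" Julia set; the orbit of 0 cycles 0 -> -1 -> 0.
c = complex(-1, 0)
inside = julia_escape_time(complex(0, 0), c)    # stays bounded: returns max_iter
outside = julia_escape_time(complex(2, 2), c)   # |z| > 2 immediately: returns 0
```

    Scanning a grid of starting points z and rendering the escape times produces the familiar fractal images such models build on.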

  8. Fractal Analysis of Visual Search Activity for Mass Detection During Mammographic Screening

    DOE PAGES

    Alamudun, Folami T.; Yoon, Hong-Jun; Hudson, Kathy; ...

    2017-02-21

    Purpose: The objective of this study was to assess the complexity of human visual search activity during mammographic screening using fractal analysis and to investigate its relationship with case and reader characteristics. Methods: The study was performed for the task of mammographic screening with simultaneous viewing of four coordinated breast views as typically done in clinical practice. Eye-tracking data and diagnostic decisions collected for 100 mammographic cases (25 normal, 25 benign, 50 malignant) and 10 readers (three board-certified radiologists and seven radiology residents) formed the corpus for this study. The fractal dimension of the readers' visual scanning patterns was computed with the Minkowski–Bouligand box-counting method and used as a measure of gaze complexity. Individual-factor and group-based interaction ANOVA analysis was performed to study the association between fractal dimension, case pathology, breast density, and reader experience level. The consistency of the observed trends depending on gaze data representation was also examined. Results: Case pathology, breast density, reader experience level, and individual reader differences are all independent predictors of visual scanning pattern complexity when screening for breast cancer. No higher-order effects were found to be significant. Conclusions: Fractal characterization of visual search behavior during mammographic screening is dependent on case properties and image reader characteristics.
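
    The Minkowski–Bouligand box-counting method named above can be sketched in a few lines: cover the point set with boxes of decreasing size s, count occupied boxes N(s), and fit log N(s) against log s. The data and box sizes below are illustrative only.

```python
import math

def box_count_dimension(points, sizes):
    """Estimate fractal dimension by box counting: count occupied boxes N(s)
    at each box size s, then fit log N(s) ~ -D log s by least squares."""
    logs, logn = [], []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        logs.append(math.log(s))
        logn.append(math.log(len(boxes)))
    n = len(sizes)
    mean_x = sum(logs) / n
    mean_y = sum(logn) / n
    slope = (sum((lx - mean_x) * (ly - mean_y) for lx, ly in zip(logs, logn))
             / sum((lx - mean_x) ** 2 for lx in logs))
    return -slope

# A straight diagonal "scanpath" should have dimension close to 1;
# a space-filling scanning pattern would approach 2.
line = [(i / 1000.0, i / 1000.0) for i in range(1000)]
d = box_count_dimension(line, sizes=[0.5, 0.25, 0.125, 0.0625, 0.03125])
```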

  9. Diagenetic Changes in Macro- to Nano-Scale Porosity in the St. Peter Sandstone: An (Ultra) Small Angle Neutron Scattering and Backscattered Electron Imaging Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anovitz, Lawrence; Cole, David; Rother, Gernot

    2013-01-01

    Small- and Ultra-Small Angle Neutron Scattering (SANS and USANS) provide powerful tools for quantitative analysis of porous rocks, yielding bulk statistical information over a wide range of length scales. This study utilized (U)SANS to characterize shallowly buried quartz arenites from the St. Peter Sandstone. Backscattered electron imaging was also used to extend the data to larger scales. These samples contain significant volumes of large-scale porosity, modified by quartz overgrowths, and neutron scattering results show significant sub-micron porosity. While previous scattering data from sandstones suggest scattering is dominated by surface fractal behavior over many orders of magnitude, careful analysis of our data shows both fractal and pseudo-fractal behavior. The scattering curves are composed of subtle steps, modeled as polydispersed assemblages of pores with log-normal distributions. However, in some samples an additional surface-fractal overprint is present, while in others there is no such structure, and scattering can be explained by summation of non-fractal structures. Combined with our work on other rock types, these data suggest that microporosity is more prevalent, and may play a much more important role than previously thought in fluid/rock interactions.

  10. New patient-controlled abdominal compression method in radiography: radiation dose and image quality.

    PubMed

    Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan

    2018-05-01

    The radiation dose for patients can be reduced with many methods and one way is to use abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position. The aim was to compare radiation dose and image quality of patient-controlled compression with conventional and prone compression in general radiography, using an experimental design with a quantitative approach. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. The radiation doses were measured as dose-area product and analyzed with a paired t-test. The image quality was evaluated by visual grading analysis. Four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression. Prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression gave similar dose levels as conventional compression and lower than prone compression. Image quality was similar with both patient-controlled and conventional compression and was judged to be better than in the prone position.
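
    The paired t-test used here for dose-area product is simple to reproduce; below is a minimal sketch with fabricated dose readings, not the study's data.

```python
import math

def paired_t(x, y):
    """Paired t statistic for matched samples x and y (e.g. dose-area
    product for the same patients under two compression methods)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Illustrative (fabricated) dose-area products for 5 patients.
conventional = [2.1, 1.9, 2.4, 2.0, 2.2]
patient_controlled = [2.0, 1.8, 2.3, 2.1, 2.1]
t = paired_t(conventional, patient_controlled)
```

    The t statistic is then compared against the t distribution with n - 1 degrees of freedom to obtain a p-value.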

  11. Multiple scaling power in liquid gallium under pressure conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Renfeng; Wang, Luhong; Li, Liangliang

    Generally, a single scaling exponent, Df, can characterize the fractal structures of metallic glasses according to the scaling power law. However, when the scaling power law is applied to liquid gallium upon compression, the results show multiple scaling exponents whose values exceed 3 within the first four coordination spheres in real space, indicating that the power law fails to describe the fractal feature in liquid gallium. The increase in the first coordination number with pressure means that first coordination spheres at different pressures are not similar to each other in a geometrical sense. This multiple scaling power behavior is confined within a correlation length of ξ ≈ 14–15 Å at applied pressure, according to the decay of G(r) in liquid gallium. Beyond this length the liquid gallium system can roughly be viewed as homogeneous, as indicated by the scaling exponent, Ds, which is close to 3 beyond the first four coordination spheres.

  12. Multiscale Computer Simulation of Failure in Aerogels

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2008-01-01

    Aerogels have been of interest to the aerospace community primarily for their thermal properties, notably their low thermal conductivities. While such gels are typically fragile, recent advances in the application of conformal polymer layers to these gels have made them potentially useful as lightweight structural materials as well. We have previously performed computer simulations of aerogel thermal conductivity and tensile and compressive failure, with results that are in qualitative, and sometimes quantitative, agreement with experiment. However, recent experiments in our laboratory suggest that gels having similar densities may exhibit substantially different properties. In this work, we extend our original diffusion-limited cluster aggregation (DLCA) model for gel structure to incorporate additional variation in DLCA simulation parameters, with the aim of producing DLCA clusters of similar densities that nevertheless have different fractal dimension and secondary particle coordination. We perform particle statics simulations of gel strain on these clusters, and consider the effects of differing DLCA simulation conditions, and the resultant differences in fractal dimension and coordination, on gel strain properties.

  13. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence.

    PubMed

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-04-01

    This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other images. This method thus enables greater lossless compression than conventional methods. This novel method should improve the efficiency of handling the increasing volume of medical imaging data.
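
    The predict → residual → entropy-encode pipeline described above can be sketched with a previous-pixel predictor and a Huffman coder; this is a simplified stand-in, not the AIC neural-network predictor.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table {symbol: bitstring} from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def encode_residuals(pixels):
    """Previous-pixel prediction, then Huffman coding of the residuals."""
    residuals = [pixels[0]] + [pixels[i] - pixels[i - 1]
                               for i in range(1, len(pixels))]
    table = huffman_code(residuals)
    return "".join(table[r] for r in residuals), table

def decode_residuals(bits, table):
    """Invert the Huffman code, then undo the prediction (lossless)."""
    inv = {b: s for s, b in table.items()}
    residuals, cur = [], ""
    for bit in bits:
        cur += bit
        if cur in inv:
            residuals.append(inv[cur])
            cur = ""
    out = [residuals[0]]
    for r in residuals[1:]:
        out.append(out[-1] + r)
    return out

pixels = [100, 101, 101, 102, 103, 103, 103, 104]
bits, table = encode_residuals(pixels)
# Smooth signals give residuals concentrated near 0, hence short codes.
```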

  14. Image splitting and remapping method for radiological image compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
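
    The discrete cosine transform underlying the full-frame bit-allocation technique mentioned above is easy to state directly; below is a minimal orthonormal 1-D DCT-II and its inverse (real codecs use fast 2-D implementations).

```python
import math

def dct(x):
    """Orthonormal DCT-II, the transform used in DCT-domain image coding."""
    n = len(x)
    out = []
    for k in range(n):
        ck = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(ck * sum(v * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                            for i, v in enumerate(x)))
    return out

def idct(coeffs):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    n = len(coeffs)
    return [sum((math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n))
                * coeffs[k] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for k in range(n))
            for i in range(n)]

signal = [52, 55, 61, 66, 70, 61, 64, 73]
recovered = idct(dct(signal))   # matches the input up to rounding error
```

    Bit allocation then spends more bits on the large low-frequency coefficients and fewer on the small high-frequency ones.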

  15. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.

  16. Prediction of compression-induced image interpretability degradation

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Chen, Hua-Mei; Irvine, John M.; Wang, Zhonghai; Chen, Genshe; Nagy, James; Scott, Stephen

    2018-04-01

    Image compression is an important component in modern imaging systems as the volume of the raw data collected is increasing. To reduce the volume of data while collecting imagery useful for analysis, choosing the appropriate image compression method is desired. Lossless compression is able to preserve all the information, but it has limited reduction power. On the other hand, lossy compression, which may result in very high compression ratios, suffers from information loss. We model the compression-induced information loss in terms of the National Imagery Interpretability Rating Scale or NIIRS. NIIRS is a user-based quantification of image interpretability widely adopted by the Geographic Information System community. Specifically, we present the Compression Degradation Image Function Index (CoDIFI) framework that predicts the NIIRS degradation (i.e., a decrease of NIIRS level) for a given compression setting. The CoDIFI-NIIRS framework enables a user to broker the maximum compression setting while maintaining a specified NIIRS rating.

  17. Quantification of ultrasonic texture intra-heterogeneity via volumetric stochastic modeling for tissue characterization.

    PubMed

    Al-Kadi, Omar S; Chung, Daniel Y F; Carlisle, Robert C; Coussios, Constantin C; Noble, J Alison

    2015-04-01

    Intensity variations in image texture can provide powerful quantitative information about the physical properties of biological tissue. However, tissue patterns can vary according to the imaging system used and are intrinsically correlated to the scale of analysis. In the case of ultrasound, the Nakagami distribution is a general model of the ultrasonic backscattering envelope under various scattering conditions and densities, where it can be employed for characterizing image texture; but the subtle intra-heterogeneities within a given mass are difficult to capture via this model as it works at a single spatial scale. This paper proposes a locally adaptive 3D multi-resolution Nakagami-based fractal feature descriptor that extends Nakagami-based texture analysis to accommodate subtle speckle spatial frequency tissue intensity variability in volumetric scans. Local textural fractal descriptors, which are invariant to affine intensity changes, are extracted from volumetric patches at different spatial resolutions from voxel lattice-based generated shape and scale Nakagami parameters. Using ultrasound radio-frequency datasets, we found that after applying an adaptive fractal decomposition label transfer approach on top of the generated Nakagami voxels, tissue characterization results were superior to the state of the art. Experimental results on real 3D ultrasonic pre-clinical and clinical datasets suggest that describing tumor intra-heterogeneity via this descriptor may facilitate improved prediction of therapy response and disease characterization. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
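
    The Nakagami shape and scale parameters referred to above are commonly estimated from envelope samples by the method of moments; a minimal sketch follows (the moment estimator is a standard choice, not necessarily the paper's exact procedure).

```python
import math
import random

def nakagami_moments(samples):
    """Moment-based estimates of the Nakagami shape (m) and spread (omega)
    parameters from envelope samples: omega = E[x^2], m = omega^2 / Var(x^2)."""
    sq = [x * x for x in samples]
    omega = sum(sq) / len(sq)
    var = sum((v - omega) ** 2 for v in sq) / len(sq)
    return omega ** 2 / var, omega

# A Rayleigh envelope (Gaussian scatter in quadrature) corresponds to m = 1.
random.seed(0)
env = [math.hypot(random.gauss(0, 1), random.gauss(0, 1))
       for _ in range(20000)]
m, omega = nakagami_moments(env)
```

    In a volumetric descriptor, such (m, omega) pairs would be computed per local patch rather than globally.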

  18. Compressed domain indexing of losslessly compressed images

    NASA Astrophysics Data System (ADS)

    Schaefer, Gerald

    2001-12-01

    Image retrieval and image compression have been pursued separately in the past. Only little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images without the need to uncompress them first. In this paper methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed due to legal reasons. The algorithms in this paper are based on predictive coding methods where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on an understanding that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.

  19. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.
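
    Fidelity of a reconstructed image, as judged in this study, is commonly quantified by peak signal-to-noise ratio; a minimal sketch for 8-bit data:

```python
import math

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio in dB, a common fidelity measure."""
    mse = (sum((a - b) ** 2 for a, b in zip(original, reconstructed))
           / len(original))
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak * peak / mse)

orig = [10, 20, 30, 40]
recon = [11, 19, 31, 39]    # every pixel off by 1, so MSE = 1
quality = psnr(orig, recon)
```

    The compression ratio is simply original size divided by compressed size, e.g. the 5.3:1 figure quoted above.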

  20. Lossless medical image compression with a hybrid coder

    NASA Astrophysics Data System (ADS)

    Way, Jing-Dar; Cheng, Po-Yuen

    1998-10-01

    The volume of medical image data is expected to increase dramatically in the next decade due to the large use of radiological image for medical diagnosis. The economics of distributing the medical image dictate that data compression is essential. While there is lossy image compression, the medical image must be recorded and transmitted lossless before it reaches the users to avoid wrong diagnosis due to the image data lost. Therefore, a low complexity, high performance lossless compression schematic that can approach the theoretic bound and operate in near real-time is needed. In this paper, we propose a hybrid image coder to compress the digitized medical image without any data loss. The hybrid coder is constituted of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is compressed with the lossy wavelet coder first, and the residual image between the original and the compressed ones is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm is with higher compression ratio than run-length entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.
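
    The run-length stage of such a hybrid coder can be sketched directly; residual images are mostly zero, which is exactly where run-length coding pays off (the values below are illustrative).

```python
def rle_encode(data):
    """Run-length encode a sequence as (value, run_length) pairs."""
    runs = []
    for v in data:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Exact inverse of rle_encode (lossless)."""
    return [v for v, n in runs for _ in range(n)]

residual = [0, 0, 0, 0, 1, 0, 0, -2, 0, 0, 0]   # mostly-zero residual
runs = rle_encode(residual)
```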

  1. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially for network-based applications with bandwidth and storage constraints. Typical reports in the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) are of high performance but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery, and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
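
    The composite-compression-ratio idea (compress probe, gallery, and their mixture, then compare) is closely related to the normalized compression distance; below is a sketch using zlib on byte strings as a stand-in compressor, not the paper's exact CCR formula or JPEG pipeline.

```python
import zlib

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: small when a and b share structure,
    because compressing their concatenation adds little over compressing
    either alone."""
    ca = len(zlib.compress(a))
    cb = len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

# Illustrative stand-ins for probe and gallery "images".
probe = b"face pattern alpha " * 50
match = b"face pattern alpha " * 50
other = b"completely different content 123 " * 50
# The matching gallery item yields the smaller distance.
```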

  2. A database for assessment of effect of lossy compression on digital mammograms

    NASA Astrophysics Data System (ADS)

    Wang, Jiheng; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria

    2018-03-01

    With widespread use of screening digital mammography, efficient storage of the vast amounts of data has become a challenge. While lossless image compression causes no risk to the interpretation of the data, it does not allow for high compression rates. Lossy compression and the associated higher compression ratios are therefore more desirable. The U.S. Food and Drug Administration (FDA) currently interprets the Mammography Quality Standards Act as prohibiting lossy compression of digital mammograms for primary image interpretation, image retention, or transfer to the patient or her designated recipient. Previous work has used reader studies to determine proper usage criteria for evaluating lossy image compression in mammography, and utilized different measures and metrics to characterize medical image quality. The drawback of such studies is that they rely on a threshold on compression ratio as the fundamental criterion for preserving the quality of images. However, compression ratio is not a useful indicator of image quality. On the other hand, many objective image quality metrics (IQMs) have shown excellent performance for natural image content for consumer electronic applications. In this paper, we create a new synthetic mammogram database with several unique features. We compare and characterize the impact of image compression on several clinically relevant image attributes such as perceived contrast and mass appearance for different kinds of masses. We plan to use this database to develop a new objective IQM for measuring the quality of compressed mammographic images to help determine the allowed maximum compression for different kinds of breasts and masses in terms of visual and diagnostic quality.

  3. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1990-01-01

    A process is disclosed for x ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.

  4. Digital Data Registration and Differencing Compression System

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1996-01-01

    A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.

  5. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1992-01-01

    A process for x-ray registration and differencing that results in more efficient compression is discussed. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.
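
    The registration-and-differencing idea common to these patent records can be sketched end to end with byte arrays and zlib as the conventional compressor; the data below are synthetic, and the quoted 100:1 ratios depend on real imagery.

```python
import zlib

def diff_compress(subject, reference):
    """Compress the pixel-wise difference of a registered subject image
    against a reference image: near-identical images give a near-zero,
    highly compressible difference."""
    diff = bytes((s - r) % 256 for s, r in zip(subject, reference))
    return zlib.compress(diff)

def diff_decompress(blob, reference):
    """Invert diff_compress by adding the reference back (lossless)."""
    diff = zlib.decompress(blob)
    return bytes((d + r) % 256 for d, r in zip(diff, reference))

reference = bytes(range(256)) * 16        # 4096-byte stand-in "image"
subject = bytearray(reference)
subject[100] = 250                        # a few localized differences
subject[2000] = 7
subject = bytes(subject)

blob = diff_compress(subject, reference)  # mostly zeros: compresses very well
```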

  6. Optimization of calcium carbonate content on synthesis of aluminum foam and its compressive strength characteristic

    NASA Astrophysics Data System (ADS)

    Sutarno, Nugraha, Bagja; Kusharjanto

    2017-01-01

    One of the most important characteristics of aluminum foam is compressive strength, which is reflected by its impact energy and Young's modulus. In the present research, the calcium carbonate (CaCO3) content in synthesized aluminum foam was optimized to obtain the highest compressive strength. The results of this study will be used to set the CaCO3 content as a synthesis process parameter in pilot-plant-scale production of aluminum foam. The experiment was performed by varying the concentration of calcium carbonate, used as the foaming agent, at constant alumina concentration (1.5 wt%), added as a stabilizer, and constant temperature (725°C). It was found that 4 wt% CaCO3 gave the lowest relative density (0.15), the highest porosity (85.29%), and a compressive strength as high as 0.26 MPa. The pore morphology of the obtained aluminum foam at this condition was as follows: the average pore diameter was 4.42 mm, the minimum pore wall thickness was 83.24 µm, and the roundness of the pores was 0.91. Based on the fractal porosity, the compressive strength was inversely proportional to the porosity, following a power law with an exponent of 2.91.

  7. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on hyper-chaotic system and discrete fractional random transform, an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform and the resulting spectrum is compressed according to the method of spectrum cutting. The random matrix of the discrete fractional random transform is controlled by a chaotic sequence originated from the high dimensional hyper-chaotic system. Then the compressed spectrum is encrypted by the discrete fractional random transform. The order of DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signal, especially can encrypt multiple images once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and then the spectra are incised and spliced into a composite spectrum by Zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm is of high security and good compression performance.

  8. Spatial and radiometric characterization of multi-spectrum satellite images through multi-fractal analysis

    NASA Astrophysics Data System (ADS)

    Alonso, Carmelo; Tarquis, Ana M.; Zúñiga, Ignacio; Benito, Rosa M.

    2017-03-01

    Several studies have shown that vegetation indexes can be used to estimate root-zone soil moisture. Earth surface images obtained by high-resolution satellites presently give a lot of information on these indexes, based on data at several wavelengths. Because of the potential capacity for systematic observations at various scales, remote sensing technology extends the possible data archives from the present time to several decades back. Because of this advantage, enormous efforts have been made by researchers and application specialists to delineate vegetation indexes from local to global scale by applying remote sensing imagery. In this work, four band images have been considered, which are involved in these vegetation indexes and were taken by the Ikonos-2 and Landsat-7 satellites of the same geographic location, to study the effect of both spatial (pixel size) and radiometric (number of bits coding the image) resolution on these wavelength bands as well as on two vegetation indexes: the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI). To do so, a multi-fractal analysis was applied to each of these bands and to the two derived indexes. The results showed that spatial resolution has a similar scaling effect in the four bands, but radiometric resolution has a larger influence in the blue and green bands than in the red and near-infrared bands. The NDVI showed a higher sensitivity to radiometric resolution than the EVI; both were equally affected by spatial resolution. Of the two factors, spatial resolution has the major impact on the multi-fractal spectrum for all the bands and both vegetation indexes. This information should be taken into account when vegetation indexes based on different satellite sensors are obtained.
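
    The two vegetation indexes studied here have closed forms; a minimal sketch using the standard NDVI definition and the usual EVI coefficient set (the reflectance values below are illustrative).

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, g=2.5, c1=6.0, c2=7.5, l=1.0):
    """Enhanced Vegetation Index with its commonly used coefficients."""
    return g * (nir - red) / (nir + c1 * red - c2 * blue + l)

# Healthy vegetation reflects strongly in near-infrared and weakly in red.
v_ndvi = ndvi(0.5, 0.08)
v_evi = evi(0.5, 0.08, 0.04)
```

    Quantizing the band reflectances to fewer bits perturbs these ratios, which is how radiometric resolution propagates into the indexes.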

  9. Fractal Analysis of Radiologists Visual Scanning Pattern in Screening Mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alamudun, Folami T; Yoon, Hong-Jun; Hudson, Kathy

    2015-01-01

    Several investigators have studied radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of the radiologists' visual scanning pattern when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied to the derived scanpaths using the box counting method. For each case, the complexity of each radiologist's scanpath was estimated using fractal dimension. The association between gaze complexity, case pathology, case density, and radiologist experience was evaluated using a 3-factor fixed effects ANOVA. ANOVA showed that case pathology, breast density, and experience level are all independent predictors of visual scanning pattern complexity. Visual scanning patterns are significantly different for benign and malignant cases than for normal cases, as well as when breast parenchyma density changes.
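
    The box counting method named in this abstract can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code: the synthetic filled-square mask stands in for a real binarized scanpath, and the box sizes are arbitrary choices.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary 2-D pattern by box counting:
    for each box size s, count boxes that contain any part of the pattern,
    then fit log N(s) against log(1/s); the slope estimates the dimension."""
    counts = []
    n = mask.shape[0]                      # assumes a square mask
    for s in sizes:
        occupied = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if mask[i:i + s, j:j + s].any():
                    occupied += 1
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square should have dimension close to 2.
mask = np.zeros((64, 64), dtype=bool)
mask[8:56, 8:56] = True
d = box_counting_dimension(mask)
```

    A space-filling pattern yields an estimate near 2, while a sparse, thread-like scanpath yields a value closer to 1, which is what makes the dimension usable as a gaze-complexity score.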

  10. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
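
    The quantization-with-dithering step described above can be sketched as follows. This is a simplified illustration, not the fpack implementation: fpack regenerates the dither stream from a stored seed, whereas this sketch simply keeps the dither array, and the step size q is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(pixels, q, dither):
    """Scaled-integer quantization with subtractive dithering: a per-pixel
    uniform offset decorrelates the rounding error from the signal."""
    return np.round(pixels / q + dither).astype(np.int64)

def dequantize(levels, q, dither):
    # Subtract the same dither used at encode time, then rescale.
    return (levels - dither) * q

pixels = rng.normal(1000.0, 10.0, size=10000)   # synthetic float sky pixels
q = 0.5                                         # quantization step
dither = rng.random(pixels.shape)               # uniform in [0, 1)
levels = quantize(pixels, q, dither)
restored = dequantize(levels, q, dither)
max_err = np.abs(restored - pixels).max()       # bounded by q/2
```

    The integer `levels` array is what would then be handed to a lossless coder such as Rice; the per-pixel error is bounded by half the quantization step, which is how photometric precision is controlled.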

  11. Compression for radiological images

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.

  12. Evaluation of Yogurt Microstructure Using Confocal Laser Scanning Microscopy and Image Analysis.

    PubMed

    Skytte, Jacob L; Ghita, Ovidiu; Whelan, Paul F; Andersen, Ulf; Møller, Flemming; Dahl, Anders B; Larsen, Rasmus

    2015-06-01

    The microstructure of protein networks in yogurts defines important physical properties of the yogurt and thereby partly its quality. Imaging this protein network using confocal scanning laser microscopy (CSLM) has shown good results, and CSLM has become a standard measuring technique for fermented dairy products. When studying such networks, hundreds of images can be obtained, and here image analysis methods are essential for using the images in statistical analysis. Previously, methods including gray-level co-occurrence matrix analysis and fractal analysis have been used with success. However, a range of other image texture characterization methods exists. These methods describe an image by a frequency distribution of predefined image features (denoted textons). Our contribution is an investigation of the choice of image analysis methods by performing a comparative study of 7 major approaches to image texture description. Here, CSLM images from a yogurt fermentation study are investigated, where production factors including fat content, protein content, heat treatment, and incubation temperature are varied. The descriptors are evaluated through nearest neighbor classification, variance analysis, and cluster analysis. Our investigation suggests that the texton-based descriptors provide a fuller description of the images compared to gray-level co-occurrence matrix descriptors and fractal analysis, while still being as applicable and in some cases as easy to tune. © 2015 Institute of Food Technologists®

  13. Reversible Watermarking Surviving JPEG Compression.

    PubMed

    Zain, J; Clarke, M

    2005-01-01

    This paper will discuss the properties of watermarking medical images. We will also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We will then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8 bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is embedded in the least significant bits (LSB) of an 8x8 block in the Region of Non-Interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the extracted watermark will match the SHA-256 hash of the original image. The results show that the embedded watermark is robust to JPEG compression up to image quality 60 (~91% compressed).
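
    The embed-and-verify idea can be sketched as below. This is a deliberately simplified, fragile sketch, not the paper's reversible JPEG-robust scheme: a single 8x8 block holds only 64 LSBs, so only the first 64 bits of the hash are embedded here, the block position is arbitrary, and the synthetic array merely stands in for an ultrasound image.

```python
import hashlib
import numpy as np

def embed_hash(image, block=(0, 0)):
    """Embed SHA-256 of the image (computed with the carrier block's LSBs
    zeroed) into the LSBs of one 8x8 block -- a toy RONI embedding."""
    img = image.copy()
    r, c = block
    img[r:r+8, c:c+8] &= 0xFE                 # clear LSBs of the carrier block
    digest = hashlib.sha256(img.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    img[r:r+8, c:c+8] |= bits[:64].reshape(8, 8)   # 64 of the 256 hash bits
    return img

def verify(image, block=(0, 0)):
    r, c = block
    extracted = (image[r:r+8, c:c+8] & 1).flatten()
    img = image.copy()
    img[r:r+8, c:c+8] &= 0xFE                 # recompute over the same data
    digest = hashlib.sha256(img.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    return np.array_equal(extracted, bits[:64])

rng = np.random.default_rng(1)
us = rng.integers(0, 256, size=(600, 800), dtype=np.uint8)  # stand-in image
marked = embed_hash(us)
ok = verify(marked)                         # untouched image verifies
tampered = marked.copy()
tampered[100, 100] ^= 0xFF                  # flip one pixel outside the block
bad = verify(tampered)                      # tampering is detected
```

    Any change to a pixel outside the carrier block changes the recomputed hash, so verification fails; surviving actual JPEG requantization, as the paper reports, requires the more elaborate embedding it describes.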

  14. Mixed raster content (MRC) model for compound image compression

    NASA Astrophysics Data System (ADS)

    de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming

    1998-12-01

    This paper will describe the Mixed Raster Content (MRC) method for compressing compound images, containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
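
    The three-layer imaging model can be sketched as a mask plus foreground and background planes. This is an illustrative decomposition only, with a naive fixed threshold standing in for a real segmenter; in MRC each plane would then go to a coder suited to it (e.g. a binary coder for the mask, JPEG-style coders for the continuous-tone planes).

```python
import numpy as np

def mrc_layers(img, threshold=128):
    """Split a compound image into MRC-style layers: a binary selector
    mask plus foreground and background images."""
    mask = img < threshold                  # dark pixels -> "text" layer
    foreground = np.where(mask, img, 0).astype(np.uint8)
    background = np.where(mask, 255, img).astype(np.uint8)
    return mask, foreground, background

def mrc_compose(mask, foreground, background):
    # The decoder selects, per pixel, between the two continuous-tone planes.
    return np.where(mask, foreground, background)

rng = np.random.default_rng(10)
img = rng.integers(0, 256, size=(16, 16)).astype(np.uint8)
mask, fg, bg = mrc_layers(img)
rec = mrc_compose(mask, fg, bg)             # round-trips exactly
```

    Because the selector decides which plane each pixel comes from, the pixels of each plane that the mask hides are "don't care" values, which is precisely the freedom MRC exploits to make each layer easier to compress.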

  15. High bit depth infrared image compression via low bit depth codecs

    NASA Astrophysics Data System (ADS)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can achieve results similar to a 16 bit HEVC codec.
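
    The byte-splitting step of the scheme is straightforward to sketch; the codecs themselves are outside the scope of this illustration, which only shows the lossless MSB/LSB mapping and its inverse.

```python
import numpy as np

def split_bytes(img16):
    """Map a 16-bit image into two 8-bit images: the most significant
    bytes (MSB image) and the least significant bytes (LSB image)."""
    msb = (img16 >> 8).astype(np.uint8)
    lsb = (img16 & 0xFF).astype(np.uint8)
    return msb, lsb

def merge_bytes(msb, lsb):
    # Recombine the two 8-bit planes into the original 16-bit image.
    return (msb.astype(np.uint16) << 8) | lsb

rng = np.random.default_rng(2)
ir = rng.integers(0, 2**16, size=(4, 4), dtype=np.uint16)  # toy IR frame
msb, lsb = split_bytes(ir)
restored = merge_bytes(msb, lsb)
```

    Each plane now fits the 8 bits-per-pixel input format of a conventional VPU; the paper's contribution is how to apportion the bit budget between the two planes, since an error in the MSB plane costs 256 times more than the same error in the LSB plane.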

  16. Effect of quartz overgrowth precipitation on the multiscale porosity of sandstone: A (U)SANS and imaging analysis

    DOE PAGES

    Anovitz, Lawrence M.; Cole, David R.; Jackson, Andrew J.; ...

    2015-06-01

    We have performed a series of experiments to understand the effects of quartz overgrowths on nanometer to centimeter scale pore structures of sandstones. Blocks from two samples of St. Peter Sandstone with different initial porosities (5.8 and 18.3%) were reacted from 3 days to 7.5 months at 100 and 200 °C in aqueous solutions supersaturated with respect to quartz by reaction with amorphous silica. Porosity in the resultant samples was analyzed using small and ultrasmall angle neutron scattering and scanning electron microscope/backscattered electron (SEM/BSE)-based image processing techniques. Significant changes were observed in the multiscale pore structures. By three days much of the overgrowth in the low-porosity sample dissolved away. The reason for this is uncertain, but the overgrowths can be clearly distinguished from the original core grains in the BSE images. At longer times the larger pores are observed to fill with plate-like precipitates. As with the unreacted sandstones, porosity is a step function of size. Grain boundaries are typically fractal, but no evidence of mass fractal or fuzzy interface behavior was observed, suggesting a structural difference between chemical and clastic sediments. After the initial loss of the overgrowths, image scale porosity (>~1 cm) decreases with time. Submicron porosity (typically ~25% of the total) is relatively constant or slightly decreasing in absolute terms, but the percent change is significant. Fractal dimensions decrease at larger scales, and increase at smaller scales with increased precipitation.

  17. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  18. A new hyperspectral image compression paradigm based on fusion

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on board the satellite that carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, having area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first one is to spatially degrade the remotely sensed hyperspectral image to obtain a low resolution hyperspectral image. The second step is to spectrally degrade the remotely sensed hyperspectral image to obtain a high resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on-board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained corroborate the benefits of the proposed methodology.
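
    The two on-board degradation steps can be sketched as simple block and band averaging. This is only a sketch of the idea under simplifying assumptions (equal-width band groups, non-overlapping block means); the paper evaluates various degradation methodologies, and the ground-side fusion step is not shown.

```python
import numpy as np

def spatial_degrade(cube, factor):
    """Average non-overlapping factor x factor pixel blocks in each band,
    yielding the low-resolution hyperspectral image."""
    b, h, w = cube.shape
    return cube.reshape(b, h // factor, factor, w // factor, factor).mean(axis=(2, 4))

def spectral_degrade(cube, groups):
    """Average groups of adjacent bands, yielding the high-resolution
    multispectral image (equal-width band grouping assumed)."""
    b, h, w = cube.shape
    return cube.reshape(groups, b // groups, h, w).mean(axis=1)

rng = np.random.default_rng(3)
hsi = rng.random((32, 16, 16))               # 32 bands, 16x16 pixels
low_spatial = spatial_degrade(hsi, 4)        # 32 bands at 4x4 pixels
multispectral = spectral_degrade(hsi, 4)     # 4 bands at 16x16 pixels
ratio = hsi.size / (low_spatial.size + multispectral.size)
```

    Because the two degraded products have fixed, known sizes, the overall compression ratio (here about 5.3:1 before any entropy coding) is indeed fixed in advance, which is the property the abstract highlights.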

  19. Data Compression Techniques for Maps

    DTIC Science & Technology

    1989-01-01

    Lempel-Ziv compression is applied to the classified and unclassified images as well as to the output of the compression algorithms. The algorithms ... resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved ... using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image
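
    The contrast reported in this excerpt, strong compression on a classified map versus almost none on raw imagery, is easy to reproduce. As an illustrative stand-in, the sketch below uses zlib's DEFLATE (an LZ77-family coder, not the exact Lempel-Ziv variant of the report) on a blocky synthetic class map versus random data.

```python
import zlib
import numpy as np

rng = np.random.default_rng(11)

# "Classified" map: large uniform regions drawn from a few class labels.
classified = np.repeat(np.repeat(rng.integers(0, 4, (16, 16)), 16, axis=0),
                       16, axis=1).astype(np.uint8)        # 256x256
# "Unclassified" image: incompressible random pixel data.
unclassified = rng.integers(0, 256, (256, 256)).astype(np.uint8)

ratio_classified = classified.nbytes / len(zlib.compress(classified.tobytes()))
ratio_unclassified = unclassified.nbytes / len(zlib.compress(unclassified.tobytes()))
```

    Long runs of repeated labels give the dictionary coder plenty to work with, while noise-like data compresses to roughly its original size, mirroring the 7:1 versus 1.4:1 figures in the report.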

  20. Age-dependence of power spectral density and fractal dimension of bone mineralized matrix in atomic force microscope topography images: potential correlates of bone tissue age and bone fragility in female femoral neck trabeculae

    PubMed Central

    Milovanovic, Petar; Djuric, Marija; Rakocevic, Zlatko

    2012-01-01

    There is an increasing interest in bone nano-structure, the ultimate goal being to reveal the basis of age-related bone fragility. In this study, power spectral density (PSD) data and fractal dimensions of the mineralized bone matrix were extracted from atomic force microscope topography images of the femoral neck trabeculae. The aim was to evaluate age-dependent differences in the mineralized matrix of human bone and to consider whether these advanced nano-descriptors might be linked to decreased bone remodeling observed by some authors and age-related decline in bone mechanical competence. The investigated bone specimens belonged to a group of young adult women (n = 5, age: 20–40 years) and a group of elderly women (n = 5, age: 70–95 years) without bone diseases. PSD graphs showed the roughness density distribution in relation to spatial frequency. In all cases, there was a fairly linear decrease in magnitude of the power spectra with increasing spatial frequencies. The PSD slope was steeper in elderly individuals (−2.374 vs. −2.066), suggesting the dominance of larger surface morphological features. Fractal dimension of the mineralized bone matrix showed a significant negative trend with advanced age, declining from 2.467 in young individuals to 2.313 in the elderly (r = 0.65, P = 0.04). Higher fractal dimension in young women reflects domination of smaller mineral grains, which is compatible with the more freshly remodeled structure. In contrast, the surface patterns in elderly individuals were indicative of older tissue age. Lower roughness and reduced structural complexity (decreased fractal dimension) of the interfibrillar bone matrix in the elderly suggest a decline in bone toughness, which explains why aged bone is more brittle and prone to fractures. PMID:22946475
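
    The PSD-slope measurement used above can be sketched on a synthetic 1-D profile with a known power-law spectrum. This is an illustration of the fitting procedure only, not the authors' 2-D AFM analysis; the target slope of -2.4 simply echoes the value reported for the elderly group.

```python
import numpy as np

def psd_slope(profile):
    """Slope of the power spectral density on a log-log plot
    (periodogram; DC and Nyquist bins excluded from the fit)."""
    n = len(profile)
    psd = np.abs(np.fft.rfft(profile)) ** 2 / n
    freqs = np.fft.rfftfreq(n)
    slope, _ = np.polyfit(np.log(freqs[1:-1]), np.log(psd[1:-1]), 1)
    return slope

# Synthesize a rough profile with a power-law spectrum P(f) ~ f^-2.4.
rng = np.random.default_rng(4)
n = 4096
freqs = np.fft.rfftfreq(n)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-1.2)               # amplitude ~ f^(-beta/2), beta = 2.4
spectrum = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size))
profile = np.fft.irfft(spectrum, n)
beta = psd_slope(profile)                   # recovers a slope near -2.4
```

    A steeper (more negative) slope means power is concentrated at low spatial frequencies, i.e. large surface features dominate, which is the interpretation the abstract gives for the aged bone surfaces.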

  2. Fast Lossless Compression of Multispectral-Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew

    2006-01-01

    An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.
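
    The predictive idea behind such coders can be sketched with the simplest possible stand-in: predicting each band from the previous one and keeping only the residuals. The actual algorithm uses low-complexity adaptive filtering; this sketch only shows why inter-band prediction shrinks the data handed to the entropy coder.

```python
import numpy as np

def interband_residuals(cube):
    """Predict each band from the previous one (a crude stand-in for an
    adaptive-filtering predictor) and return residuals for entropy coding."""
    residuals = np.empty_like(cube)
    residuals[0] = cube[0]                  # first band sent as-is
    residuals[1:] = cube[1:] - cube[:-1]    # prediction errors
    return residuals

def restore(residuals):
    # Invert the prediction losslessly by cumulative summation over bands.
    return np.cumsum(residuals, axis=0)

rng = np.random.default_rng(5)
base = rng.integers(0, 256, size=(8, 8)).astype(np.int64)
# Strongly correlated bands: each is the previous plus a small perturbation.
cube = np.stack([base + k + rng.integers(-2, 3, size=(8, 8)) for k in range(4)])
rec = restore(interband_residuals(cube))    # exact round trip
```

    The residual bands have a far smaller dynamic range than the original bands, so a subsequent entropy coder achieves a much better rate while the scheme stays strictly lossless.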

  3. Fractal analysis of plaque border, a novel method for the quantification of atherosclerotic plaque contour irregularity, is associated with pro-atherogenic plasma lipid profile in subjects with non-obstructive carotid stenoses.

    PubMed

    Moroni, Francesco; Magnoni, Marco; Vergani, Vittoria; Ammirati, Enrico; Camici, Paolo G

    2018-01-01

    Plaque border irregularity is a known imaging characteristic of vulnerable plaques, but its evaluation relies heavily on subjective judgment and operator expertise. The aim of the present work is to propose a novel fractal-analysis based method for the quantification of atherosclerotic plaque border irregularity and to assess its relation with cardiovascular risk factors. Forty-two asymptomatic subjects with carotid stenosis underwent ultrasound evaluation and assessment of cardiovascular risk factors. Total, low-density lipoprotein (LDL), and high-density lipoprotein (HDL) plasma cholesterol and triglycerides concentrations were measured for each subject. Fractal analysis was performed in all the carotid segments affected by atherosclerosis, i.e. 147 segments. The resulting fractal dimension (FD) is a measure of irregularity of the plaque profile on the long axis view of the plaque. FD in the severest stenosis (main plaque FD, mFD) was 1.136±0.039. The average FD per patient (global FD, gFD) was 1.145±0.039. FD was independent of other plaque characteristics. mFD significantly correlated with plasma HDL (r = -0.367, p = 0.02) and the triglycerides-to-HDL ratio (r = 0.480, p = 0.002). Fractal analysis is a novel, readily available, reproducible and inexpensive technique for the quantitative measurement of plaque irregularity. The correlation between low HDL levels and plaque FD suggests a role for HDL in the acquisition of morphologic features of plaque instability. Further studies are needed to validate the prognostic value of fractal analysis in carotid plaque evaluation.

  4. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers, which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel-values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.

  5. Outer planet Pioneer imaging communications system study. [data compression

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods. These were: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform an acceptable outer planet mission at reduced downlink telemetry bit rates.

  6. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400
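
    The sparsity-exploiting reconstruction mentioned above can be illustrated with iterative soft-thresholding (ISTA) on a toy 1-D problem. All sizes, the random sensing matrix, and the regularization weight are illustrative assumptions; real CT/MRI systems use structured measurement operators and more elaborate solvers.

```python
import numpy as np

def ista(A, y, lam=0.05, steps=500):
    """Iterative soft-thresholding: approximately minimize
    ||Ax - y||^2 / 2 + lam * ||x||_1, the prototypical
    sparsity-exploiting reconstruction."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - (A.T @ (A @ x - y)) / L     # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # shrinkage
    return x

rng = np.random.default_rng(12)
n, m, k = 200, 80, 5                        # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))   # random sensing matrix
y = A @ x_true                              # 80 measurements of a length-200 signal
x_hat = ista(A, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

    Despite having well under half as many measurements as unknowns, the sparse signal is recovered to small relative error, which is the core promise the abstract describes.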

  7. Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Liu, Ti C.; Mitra, Sunanda

    1996-06-01

    Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work mainly focuses on a wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, entropy and run-length encoding/decoding, and K-means clustering of the invariant moments as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that WT outperforms JPEG in the high compression ratio region and the reconstructed fingerprint image yields proper classification.

  8. Artifacts in slab average-intensity-projection images reformatted from JPEG 2000 compressed thin-section abdominal CT data sets.

    PubMed

    Kim, Bohyoung; Lee, Kyoung Ho; Kim, Kil Joong; Mantiuk, Rafal; Kim, Hye-ri; Kim, Young Hoon

    2008-06-01

    The objective of our study was to assess the effects of compressing source thin-section abdominal CT images on final transverse average-intensity-projection (AIP) images. At reversible, 4:1, 6:1, 8:1, 10:1, and 15:1 Joint Photographic Experts Group (JPEG) 2000 compressions, we compared the artifacts in 20 matching compressed thin sections (0.67 mm), compressed thick sections (5 mm), and AIP images (5 mm) reformatted from the compressed thin sections. The artifacts were quantitatively measured with peak signal-to-noise ratio (PSNR) and a perceptual quality metric (High Dynamic Range Visual Difference Predictor [HDR-VDP]). By comparing the compressed and original images, three radiologists independently graded the artifacts as 0 (none, indistinguishable), 1 (barely perceptible), 2 (subtle), or 3 (significant). Friedman tests and exact tests for paired proportions were used. At irreversible compressions, the artifacts tended to increase in the order of AIP, thick-section, and thin-section images in terms of PSNR (p < 0.0001), HDR-VDP (p < 0.0001), and the readers' grading (p < 0.01 at 6:1 or higher compressions). At 6:1 and 8:1, distinguishable pairs (grades 1-3) tended to increase in the order of AIP, thick-section, and thin-section images. The visually lossless threshold for the compression varied between images but decreased in the order of AIP, thick-section, and thin-section images (p < 0.0001). Compression artifacts in thin sections are significantly attenuated in AIP images. On the premise that thin sections are typically reviewed using an AIP technique, it is justifiable to compress them to a compression level currently accepted for thick sections.
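
    The attenuation of artifacts by averaging can be sketched directly. This is an illustrative model only: identical synthetic slices with additive Gaussian noise stand in for thin sections with uncorrelated compression artifacts, and the slab thickness is an arbitrary choice.

```python
import numpy as np

def average_intensity_projection(thin, slab=8):
    """Average groups of `slab` contiguous thin sections into one thick
    AIP section; uncorrelated per-slice artifacts shrink ~ 1/sqrt(slab)."""
    z, h, w = thin.shape
    return thin[:z - z % slab].reshape(-1, slab, h, w).mean(axis=1)

rng = np.random.default_rng(6)
clean = np.tile(rng.random((1, 32, 32)), (64, 1, 1))     # identical slices
noisy = clean + rng.normal(0.0, 1.0, clean.shape)        # per-slice "artifacts"
aip = average_intensity_projection(noisy, slab=8)        # 8 thick sections
artifact_thin = (noisy - clean).std()                    # ~1.0
artifact_aip = (aip - clean[:8]).std()                   # ~1/sqrt(8)
```

    Averaging eight slices cuts the uncorrelated artifact energy by about a factor of 2.8 in standard deviation, which is consistent with the study's finding that AIP images tolerate coarser compression of the source thin sections.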

  9. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality produced by two popular JPEG2000 programs. Two medical image compression algorithms are both coded using JPEG2000, but they differ regarding the interface, convenience, speed of computation, and their characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs named Apollo and JJ2000 were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at various compression ratios ranging from 10:1 to 100:1. Following that, the quality of the reconstructed images was evaluated using five objective metrics. The Spearman rank correlation coefficients were measured under every metric in the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.
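
    PSNR, the most common of the objective metrics used in such comparisons, is simple to compute; the sketch below is a generic illustration (the synthetic image and noise level are arbitrary), not tied to either evaluated program.

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between an original image and a
    reconstructed (e.g. compressed-then-decompressed) image."""
    mse = np.mean((original.astype(np.float64) - distorted) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(7)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
reconstructed = img + rng.normal(0, 4, img.shape)   # simulated coding error
value = psnr(img, reconstructed)                    # roughly 36 dB here
```

    Identical images give infinite PSNR; comparing two codecs then amounts to comparing such scores across modalities and compression ratios, as the study does with five metrics and rank correlations.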

  10. Measuring surface topography by scanning electron microscopy. II. Analysis of three estimators of surface roughness in second dimension and third dimension.

    PubMed

    Bonetto, Rita Dominga; Ladaga, Juan Luis; Ponz, Ezequiel

    2006-04-01

    Scanning electron microscopy (SEM) is widely used in surface studies, and continuous efforts are carried out in the search for estimators of different surface characteristics. By using the variogram, we developed two such estimators to characterize surface roughness from the SEM image texture. One estimator is related to the crossover between the fractal region at low scale and the periodic region at high scale, whereas the other characterizes the periodic region. In this work, a full study of these estimators and the fractal dimension in two dimensions (2D) and three dimensions (3D) was carried out for emery papers. We show that the fractal dimension obtained with only one image is good enough to characterize the surface roughness because its behavior is similar to that obtained with 3D height data. We show also that the estimator that indicates the crossover is related to the minimum cell size in 2D and to the average particle size in 3D. The other estimator has different values for the three studied emery papers in 2D but does not have a clear meaning, and its values are similar for the studied samples in 3D. Nevertheless, it indicates the formation tendency of compound cells. The fractal dimension values from the variogram and from an area versus step log-log graph were studied with 3D data. The two methods yield different values, corresponding to different information from the samples.
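
    The variogram route to a fractal dimension can be sketched on a 1-D profile. This is an illustrative sketch, not the authors' 2-D texture analysis: for a profile with Hurst exponent H, the variogram scales as h^(2H) and D = 2 - H, so a Brownian-motion profile (H = 0.5) should give D near 1.5.

```python
import numpy as np

def variogram_dimension(profile, lags=np.arange(1, 17)):
    """Estimate the fractal dimension of a 1-D profile from the variogram:
    V(h) = mean (z(x+h) - z(x))^2 ~ h^(2H), and D = 2 - H."""
    v = [np.mean((profile[h:] - profile[:-h]) ** 2) for h in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)   # slope = 2H
    return 2.0 - slope / 2.0

# Brownian-motion profile: H = 0.5, so the estimate should be near 1.5.
rng = np.random.default_rng(8)
profile = np.cumsum(rng.normal(0.0, 1.0, 100000))
d = variogram_dimension(profile)
```

    The fractal regime named in the abstract is precisely the range of lags where this log-log plot is linear; beyond the crossover lag the variogram flattens or oscillates, which is what the authors' first estimator detects.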

  11. Comparison of caprock pore networks which potentially will be impacted by carbon sequestration projects.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCray, John; Navarre-Sitchler, Alexis; Mouzakis, Katherine

Injection of CO2 into underground rock formations can reduce atmospheric CO2 emissions. Caprocks present above potential storage formations are the main structural trap inhibiting CO2 from leaking into overlying aquifers or back to the Earth's surface. Dissolution and precipitation of caprock minerals resulting from reaction with CO2 may alter the pore network, where many pores are of the micrometer to nanometer scale, thus altering the structural trapping potential of the caprock. However, the distribution, geometry and volume of pores at these scales are poorly characterized. In order to evaluate the overall risk of leakage of CO2 from storage formations, a first critical step is understanding the distribution and shape of pores in a variety of different caprocks. As the caprock is often composed of mudstones, we analyzed samples from several mudstone formations with small angle neutron scattering (SANS) and high-resolution transmission electron microscopy (TEM) imaging to compare the pore networks. Mudstones were chosen from current or potential sites for carbon sequestration projects including the Marine Tuscaloosa Group, the Lower Tuscaloosa Group, the upper and lower shale members of the Kirtland Formation, and the Pennsylvanian Gothic shale. Expandable clay contents ranged from 10% to approximately 40% in the Gothic shale and Kirtland Formation, respectively. During SANS, neutrons effectively scatter from interfaces between materials with differing scattering length density (i.e., minerals and pores). The intensity of scattered neutrons, I(Q), where Q is the scattering vector, gives information about the volume and arrangement of pores in the sample. The slope of the scattering data when plotted as log I(Q) vs. log Q provides information about the fractality or geometry of the pore network. On such plots slopes from -2 to -3 represent mass fractals while slopes from -3 to -4 represent surface fractals.
Scattering data showed surface fractal dimensions for the Kirtland formation and one sample from the Tuscaloosa formation close to 3, indicating very rough surfaces. In contrast, scattering data for the Gothic shale formation exhibited mass fractal behavior. In one sample of the Tuscaloosa formation the data are described by a surface fractal at low Q (larger pores) and a mass fractal at high Q (smaller pores), indicating two pore populations contributing to the scattering behavior. These small angle neutron scattering results, combined with high-resolution TEM imaging, provided a means for both qualitative and quantitative analysis of the differences in pore networks between these various mudstones.
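
    The slope classification described above can be sketched as a least-squares fit in log-log space. A minimal sketch with NumPy, on synthetic power-law data (`classify_fractal` is an illustrative helper, not from the study):

```python
import numpy as np

def classify_fractal(Q, I):
    # Least-squares slope of log I(Q) vs. log Q over the fitting range.
    slope, _ = np.polyfit(np.log10(Q), np.log10(I), 1)
    if -3 < slope <= -2:
        kind = "mass fractal"       # mass fractal dimension D_m = -slope
    elif -4 <= slope <= -3:
        kind = "surface fractal"    # surface fractal dimension D_s = 6 + slope
    else:
        kind = "outside fractal scattering regime"
    return slope, kind

# Synthetic surface-fractal scattering, I(Q) ~ Q^-3.5 (i.e., D_s = 2.5)
Q = np.logspace(-3, -1, 50)
I = 1e-6 * Q ** -3.5
slope, kind = classify_fractal(Q, I)
```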

  12. The compression and storage method of the same kind of medical images: DPCM

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    Medical imaging has started to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) could take minutes--too slow for interactive teleradiology. This calls for image compression to significantly reduce the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, the lossless techniques, which allow perfect reconstruction of the original images, yield modest compression ratios, while the techniques that yield higher compression ratios are lossy; that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of devising compression algorithms that are lossless (for diagnostic and legal reasons) and yet have high compression ratios for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes that are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of medical practice. A method based on differential pulse code modulation (DPCM) is presented, which exploits correlations within the source signal to increase the signal-to-noise ratio (SNR).
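
    A first-order DPCM stage of the kind the abstract describes can be sketched as plain differencing against the left neighbour. A minimal, lossless sketch (the paper's actual predictor may differ):

```python
import numpy as np

def dpcm_encode(samples):
    # Residual of each sample against its left neighbour; the first
    # sample is differenced against 0, so it is stored verbatim.
    samples = np.asarray(samples, dtype=np.int32)
    return np.diff(samples, prepend=0)

def dpcm_decode(residuals):
    # Cumulative summation inverts the differencing exactly (lossless).
    return np.cumsum(residuals)

row = [100, 102, 101, 105, 110, 110, 108]
residuals = dpcm_encode(row)        # small residuals entropy-code well
restored = dpcm_decode(residuals)
```

    Because neighbouring pixels are correlated, the residuals cluster near zero and compress far better under an entropy coder than the raw samples do.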

  13. Subjective evaluation of compressed image quality

    NASA Astrophysics Data System (ADS)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears different depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we subjectively evaluated the quality of medical images compressed with two different methods: an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Analysis of variance was also used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is judged poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists read the 99 images (some were duplicates) compressed at four levels: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at a significance level of 0.05. Also, 10:1 compressed images produced with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  14. Relation between the Hurst Exponent and the Efficiency of Self-organization of a Deformable System

    NASA Astrophysics Data System (ADS)

    Alfyorova, E. A.; Lychagin, D. V.

    2018-04-01

    We have established the degree of self-organization of a system under plastic deformation at different scale levels. Using fractal analysis, we have determined the Hurst exponent and correlation lengths in the region of formation of a corrugated (wrinkled) structure in [111] nickel single crystals under compression. This has made it possible to single out two (micro- and meso-) levels of self-organization in the deformable system. A qualitative relation between the values of the Hurst exponent and the stages of the stress-strain curve has been established.
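
    A common way to estimate the Hurst exponent is from the scaling of increment standard deviations, std(x[t+lag] - x[t]) ~ lag**H. A minimal sketch on a synthetic Brownian walk (one standard estimator, not necessarily the method the authors used):

```python
import numpy as np

def hurst_exponent(x, lags=range(2, 64)):
    # Fit H from std(x[t+lag] - x[t]) ~ lag**H in log-log space.
    x = np.asarray(x, dtype=float)
    tau = [np.std(x[lag:] - x[:-lag]) for lag in lags]
    H, _ = np.polyfit(np.log(list(lags)), np.log(tau), 1)
    return H

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(10_000))   # Brownian walk, H ~ 0.5
H = hurst_exponent(walk)
```

    H > 0.5 indicates persistent (trend-reinforcing) behaviour, H < 0.5 anti-persistent behaviour, and H ~ 0.5 an uncorrelated walk.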

  15. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
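
    The DCT-plus-quantization-matrix pipeline can be sketched directly. A minimal example with an orthonormal 8x8 DCT and a flat, purely illustrative quantization matrix (the invention's visually weighted matrices are image-adapted, which is not modeled here):

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (C @ C.T == identity).
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C

C = dct_matrix()
block = np.add.outer(8.0 * np.arange(8), np.arange(8))   # smooth ramp block
coeffs = C @ block @ C.T                 # forward 2-D DCT
Q = np.full((8, 8), 16.0)                # flat quantization matrix (illustrative)
quantized = np.round(coeffs / Q)         # quantization discards fine detail
restored = C.T @ (quantized * Q) @ C     # dequantize + inverse DCT
```

    Larger entries in Q quantize a coefficient more coarsely; the invention chooses those entries per image so the quantization error stays below visibility thresholds.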

  16. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  17. Soil structure characterized using computed tomographic images

    Treesearch

    Zhanqi Cheng; Stephen H. Anderson; Clark J. Gantzer; J. W. Van Sambeek

    2003-01-01

    Fractal analysis of soil structure is a relatively new method for quantifying the effects of management systems on soil properties and quality. The objective of this work was to explore several methods of studying images to describe and quantify structure of soils under forest management. This research uses computed tomography and a topological method called Multiple...

  18. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
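
    The core idea behind combinatorial (enumerative) coding is to transmit a fixed-weight bit string as its lexicographic rank among all strings of that length and weight, which is optimal once the weight is known. A minimal sketch of that ranking idea (C4's actual combinatorial code is block-based and considerably more elaborate):

```python
from math import comb

def enum_rank(bits):
    # Lexicographic rank of a bit string among all strings with the
    # same length and number of ones.
    n, k, rank = len(bits), sum(bits), 0
    for i, b in enumerate(bits):
        if b:
            rank += comb(n - i - 1, k)   # strings with 0 here come first
            k -= 1
    return rank

def enum_unrank(n, k, rank):
    # Inverse: rebuild the bit string from (n, k, rank).
    bits = []
    for i in range(n):
        c = comb(n - i - 1, k)
        if rank >= c:
            bits.append(1)
            rank -= c
            k -= 1
        else:
            bits.append(0)
    return bits

bits = [0, 1, 1, 0, 1, 0, 0, 1]
code = enum_rank(bits)       # an integer in [0, comb(8, 4)), i.e. 7 bits
```

    Storing (weight, rank) costs about log2(comb(n, k)) bits, the entropy of a fixed-weight source, which is why this style of coding can match arithmetic coding in efficiency.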

  19. Cloud solution for histopathological image analysis using region of interest based compression.

    PubMed

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole-slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression to this region, and lossy compression to the empty regions, is proposed in this paper. The resulting compression ratio, together with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the cloud.
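
    The region-of-interest split can be sketched as lossless coding of the masked tissue pixels plus a packed bitmask, with the empty background simply zeroed out. A crude stand-in for the paper's method, assuming zlib in place of whatever entropy coder was actually used:

```python
import zlib
import numpy as np

def compress_roi(image, mask):
    # Losslessly compress only the tissue pixels selected by `mask`;
    # the background is represented by the mask alone.
    packed_mask = np.packbits(mask).tobytes()
    roi_bytes = image[mask].tobytes()
    return zlib.compress(packed_mask), zlib.compress(roi_bytes)

def decompress_roi(shape, mask_z, roi_z):
    mask = np.unpackbits(
        np.frombuffer(zlib.decompress(mask_z), dtype=np.uint8),
        count=shape[0] * shape[1]).reshape(shape).astype(bool)
    out = np.zeros(shape, dtype=np.uint8)   # background filled with 0
    out[mask] = np.frombuffer(zlib.decompress(roi_z), dtype=np.uint8)
    return out

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
tissue = np.zeros((64, 64), dtype=bool)
tissue[16:48, 16:48] = True              # hypothetical tissue region
m_z, r_z = compress_roi(img, tissue)
rec = decompress_roi((64, 64), m_z, r_z)
```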

  20. Compression of regions in the global advanced very high resolution radiometer 1-km data set

    NASA Technical Reports Server (NTRS)

    Kess, Barbara L.; Steinwand, Daniel R.; Reichenbach, Stephen E.

    1994-01-01

    The global advanced very high resolution radiometer (AVHRR) 1-km data set is a 10-band image produced at USGS' EROS Data Center for the study of the world's land surfaces. The image contains masked regions for non-land areas which are identical in each band but vary between data sets. They comprise over 75 percent of this 9.7 gigabyte image. The mask is compressed once and stored separately from the land data which is compressed for each of the 10 bands. The mask is stored in a hierarchical format for multi-resolution decompression of geographic subwindows of the image. The land for each band is compressed by modifying a method that ignores fill values. This multi-spectral region compression efficiently compresses the region data and precludes fill values from interfering with land compression statistics. Results show that the masked regions in a one-byte test image (6.5 Gigabytes) compress to 0.2 percent of the 557,756,146 bytes they occupy in the original image, resulting in a compression ratio of 89.9 percent for the entire image.

  1. A new efficient method for color image compression based on visual attention mechanism

    NASA Astrophysics Data System (ADS)

    Shao, Xiaoguang; Gao, Kun; Lv, Lily; Ni, Guoqiang

    2010-11-01

    One of the key procedures in color image compression is to extract its regions of interest (ROIs) and apply different compression ratios. A new non-uniform color image compression algorithm with high efficiency is proposed in this paper, using a biology-motivated selective attention model for the effective extraction of ROIs in natural images. Once the ROIs have been extracted and labeled in the image, the subsequent work is to encode the ROIs and the other regions with different compression ratios via the popular JPEG algorithm. Experimental results and quantitative and qualitative analysis show excellent performance compared with traditional color image compression approaches.

  2. Digitized hand-wrist radiographs: comparison of subjective and software-derived image quality at various compression ratios.

    PubMed

    McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R

    2007-05-01

    The objectives of this study were to assess the effect of JPEG 2000 compression of hand-wrist radiographs on observers' qualitative image quality ratings and to compare these with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who rated image quality on a scale of 1 to 5. A quantitative analysis was also performed by using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P ≤ .05). When we compared subjective indexes, JPEG compression greater than 60:1 significantly reduced image quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R² > 0.92) between qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.

  3. Data compression experiments with LANDSAT thematic mapper and Nimbus-7 coastal zone color scanner data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Ramapriyan, H. K.

    1989-01-01

    A case study is presented where an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. The SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image, the LCP is 1.89, though when only a cloud-free section of the image is considered, the LCP increases to 3.48. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using the Bayes' classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.
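
    Vector quantization of the kind SCC adapts can be sketched with a few k-means iterations over small pixel blocks, each block then being stored as a single codebook index. A toy sketch (SCC's spatial constraints and per-block codebooks are not modeled):

```python
import numpy as np

def vq_codebook(vecs, k=2, iters=10):
    # A few k-means iterations: assign each block vector to its nearest
    # codeword, then move each codeword to its cluster mean.
    codebook = vecs[:k].astype(float).copy()
    for _ in range(iters):
        dists = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = vecs[labels == j].mean(axis=0)
    return labels, codebook

# 2x2 pixel blocks drawn from two repeating patterns (flat and checker);
# each block is stored as one codebook index instead of 4 samples.
flat = np.array([10.0, 10.0, 10.0, 10.0])
checker = np.array([0.0, 255.0, 255.0, 0.0])
blocks = np.array([flat, checker, checker, flat, flat, checker])
labels, codebook = vq_codebook(blocks)
restored = codebook[labels]
```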

  4. Comparison of two SVD-based color image compression schemes.

    PubMed

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process to represent image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternions have become common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD on C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation amount, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers a higher CR and much less operation time, but a slightly smaller PSNR, than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows more prominent advantages in both operation time and PSNR.
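
    The real-SVD scheme can be sketched with NumPy by placing the three color planes side by side and truncating the SVD. The matrix layout used here is one plausible choice, not necessarily the paper's exact construction of C:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, k = 32, 32, 8
image = rng.random((h, w, 3))

# Place the R, G, B planes side by side in one real matrix.
C = np.hstack([image[:, :, c] for c in range(3)])   # shape (h, 3w)

U, s, Vt = np.linalg.svd(C, full_matrices=False)
C_k = (U[:, :k] * s[:k]) @ Vt[:k]                   # best rank-k approximation
approx = np.stack(np.hsplit(C_k, 3), axis=2)        # back to (h, w, 3)

stored = k * (h + 3 * w + 1)    # floats kept: k columns of U and V, k singular values
original = h * w * 3
```

    By the Eckart-Young theorem the truncated SVD is the best rank-k approximation in the Frobenius norm, so the PSNR/CR trade-off is controlled entirely by the number of singular values kept.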

  6. Compression of the Global Land 1-km AVHRR dataset

    USGS Publications Warehouse

    Kess, B. L.; Steinwand, D.R.; Reichenbach, S.E.

    1996-01-01

    Large datasets, such as the Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) Data Set (Eidenshink and Faundeen 1994), require compression methods that provide efficient storage and quick access to portions of the data. A method of lossless compression is described that provides multiresolution decompression within geographic subwindows of multi-spectral, global, 1-km, AVHRR images. The compression algorithm segments each image into blocks and compresses each block in a hierarchical format. Users can access the data by specifying either a geographic subwindow or the whole image and a resolution (1, 2, 4, 8, or 16 km). The Global Land 1-km AVHRR data are presented in the Interrupted Goode's Homolosine map projection. These images contain masked regions for non-land areas which comprise 80 per cent of the image. A quadtree algorithm is used to compress the masked regions. The compressed region data are stored separately from the compressed land data. Results show that the masked regions compress to 0.143 per cent of the bytes they occupy in the test image and the land areas are compressed to 33.2 per cent of their original size. The entire image is compressed hierarchically to 6.72 per cent of the original image size, reducing the data from 9.05 gigabytes to 623 megabytes. These results are compared to the first order entropy of the residual image produced with lossless Joint Photographic Experts Group predictors. Compression results are also given for Lempel-Ziv-Welch (LZW) and LZ77, the algorithms used by UNIX compress and GZIP respectively. In addition to providing multiresolution decompression of geographic subwindows of the data, the hierarchical approach and the use of quadtrees for storing the masked regions give a marked improvement over these popular methods.
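
    A quadtree encoder for a land/water mask can be sketched recursively: a uniform square block collapses to a single leaf, a mixed block splits into four quadrants. A minimal sketch (the hierarchical file layout itself is not modeled):

```python
def quadtree(mask, x=0, y=0, size=None):
    # Leaf (0 or 1) for a uniform block; otherwise recurse into the
    # four quadrants in top-left, top-right, bottom-left, bottom-right order.
    if size is None:
        size = len(mask)
    flat = [mask[y + dy][x + dx] for dy in range(size) for dx in range(size)]
    if all(flat) or not any(flat):
        return flat[0]
    half = size // 2
    return [quadtree(mask, x + dx, y + dy, half)
            for dy in (0, half) for dx in (0, half)]

# 8x8 land/water mask: land in the top-left quadrant plus one stray pixel.
mask = [[1 if (x < 4 and y < 4) or (x == 7 and y == 7) else 0
         for x in range(8)] for y in range(8)]
tree = quadtree(mask)
```

    Large uniform water regions collapse to single leaves near the root, which is why the masked 80 per cent of the image compresses to a tiny fraction of its raw size.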

  7. Edge-preserving image compression for magnetic-resonance images using dynamic associative neural networks (DANN)-based neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Tat C.; Kabuka, Mansur R.

    1994-05-01

    With the tremendous growth in imaging applications and the development of filmless radiology, compression techniques that can achieve high compression ratios with user-specified distortion rates become necessary. Boundaries and edges in the tissue structures are vital for detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user-specified distortion rates in an adaptive compression system well-suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for `simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.

  8. Multiple-image encryption via lifting wavelet transform and XOR operation based on compressive ghost imaging scheme

    NASA Astrophysics Data System (ADS)

    Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-03-01

    A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, which is based on a row scanning compressive ghost imaging scheme. In the encryption process, the scrambling operation is implemented for the sparse images transformed by LWT, then the XOR operation is performed on the scrambled images, and the resulting XOR images are compressed in the row scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, the participant who possesses his/her correct key-group, can successfully reconstruct the corresponding plaintext image by measurement key regeneration, compression algorithm reconstruction, XOR operation, sparse images recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.
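
    Only the XOR stage is sketched here; its exact reversibility is what decryption relies on (the LWT sparsification, scrambling, and ghost-imaging measurement are not modeled):

```python
import numpy as np

rng = np.random.default_rng(42)
img1 = rng.integers(0, 256, (8, 8), dtype=np.uint8)   # stand-in plaintext 1
img2 = rng.integers(0, 256, (8, 8), dtype=np.uint8)   # stand-in plaintext 2

# XOR is an involution (a ^ b ^ b == a), so a participant who can
# recover img1 through their key can undo the mixing exactly.
mixed = img1 ^ img2
recovered = mixed ^ img1
```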

  9. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.

  10. High efficient optical remote sensing images acquisition for nano-satellite-framework

    NASA Astrophysics Data System (ADS)

    Li, Feng; Xin, Lei; Liu, Yang; Fu, Jie; Liu, Yuhong; Guo, Yi

    2017-09-01

    It is more difficult and challenging to implement Nano-satellite (NanoSat) based optical Earth observation missions than conventional satellite missions because of the limitations on volume, weight and power consumption. In general, an image compression unit is a necessary onboard module to save data transmission bandwidth and disk space; it removes redundant information from the captured images. In this paper, a new image acquisition framework is proposed for NanoSat-based optical Earth observation applications. The entire image acquisition and compression process can be integrated into the photodetector array chip, so that the output data of the chip are already compressed. An extra image compression unit is therefore no longer needed, and the power, volume, and weight consumed by a conventional onboard image compression unit can be largely saved. The advantages of the proposed framework are: image acquisition and image compression are combined into a single step; it can easily be built in a CMOS architecture; a quick view can be provided without reconstruction; and, at a given compression ratio, the reconstructed image quality is much better than that of CS-based methods. The framework holds promise to be widely used in the future.

  11. Estimating JPEG2000 compression for image forensics using Benford's Law

    NASA Astrophysics Data System (ADS)

    Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.

    2010-05-01

    With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content is becoming increasingly important, and a growing concern to many government and commercial sectors. Image forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied to accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality compared to JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression of 1338 test images, the initial results indicate that the first-digit probabilities of DWT coefficients follow Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived, and verified with the help of a divergence factor, which quantifies the deviation between the observed probabilities and Benford's Law. Based on the 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than that for DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, which is much higher than for uncompressed DWT coefficients.
This result clearly indicates the presence of compression in the image. Moreover, we compare the first-digit probabilities and divergences among JPEG2000 compression rates of 0.1, 0.3, 0.5 and 0.9. The initial results show that the expected differences among them could be used for further analysis to estimate the unknown JPEG2000 compression rate.
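
    The first-digit test can be sketched by comparing an empirical leading-digit distribution against Benford's Law with a chi-square-style divergence. This divergence is an assumed form for illustration; the paper's divergence factor may be defined differently:

```python
import math
import numpy as np

BENFORD = np.array([math.log10(1 + 1 / d) for d in range(1, 10)])

def first_digit_probs(values):
    # Empirical distribution of leading digits 1-9 for positive integers.
    counts = np.zeros(9)
    for v in values:
        counts[int(str(v)[0]) - 1] += 1
    return counts / counts.sum()

def divergence(values):
    # Chi-square-style deviation of the observed first-digit
    # probabilities from Benford's Law.
    p = first_digit_probs(values)
    return float(((p - BENFORD) ** 2 / BENFORD).sum())

scale_free = [2 ** i for i in range(1, 300)]   # ~Benford-distributed
uniform = list(range(1, 1000))                 # uniform: far from Benford
```

    A small divergence indicates Benford-like (e.g., uncompressed-coefficient) statistics; compression perturbs the distribution and pushes the divergence up.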

  12. Predicting Bone Mechanical Properties of Cancellous Bone from DXA, MRI, and Fractal Dimensional Measurements

    NASA Technical Reports Server (NTRS)

    Harrigan, Timothy P.; Ambrose, Catherine G.; Hogan, Harry A.; Shackleford, Linda; Webster, Laurie; LeBlanc, Adrian; Lin, Chen; Evans, Harlan

    1997-01-01

    This project was aimed at making predictions of bone mechanical properties from non-invasive DXA and MRI measurements. Given the bone mechanical properties, stress calculations can be made to compare normal bone stresses to the stresses developed in exercise countermeasures against bone loss during space flight. These calculations in turn will be used to assess whether mechanical factors can explain bone loss in space. In this study we assessed the use of T2(sup *) MRI imaging, DXA, and fractal dimensional analysis to predict strength and stiffness in cancellous bone.

  13. Random-fractal Ansatz for the configurations of two-dimensional critical systems

    NASA Astrophysics Data System (ADS)

    Lee, Ching Hua; Ozaki, Dai; Matsueda, Hiroaki

    2016-12-01

    Critical systems have always intrigued physicists and precipitated the development of new techniques. Recently, there has been renewed interest in the information contained in the configurations of classical critical systems, whose computation does not require full knowledge of the wave function. Inspired by holographic duality, we investigated the entanglement properties of the classical configurations (snapshots) of the Potts model by introducing an Ansatz ensemble of random fractal images. By virtue of the central limit theorem, our Ansatz accurately reproduces the entanglement spectra of actual Potts snapshots without any fine tuning of parameters or artificial restrictions on ensemble choice. It provides a microscopic interpretation of the results of previous studies, which established a relation between the scaling behavior of snapshot entropy and the critical exponent. More importantly, it elucidates the role of ensemble disorder in restoring conformal invariance, an aspect previously ignored. Away from criticality, the breakdown of scale invariance leads to a renormalization of the parameter Σ in the random fractal Ansatz, whose variation can be used as an alternative determination of the critical exponent. We conclude by providing a recipe for the explicit construction of fractal unit cells consistent with a given scaling exponent.

  14. EUS elastography (strain ratio) and fractal-based quantitative analysis for the diagnosis of solid pancreatic lesions.

    PubMed

    Carrara, Silvia; Di Leo, Milena; Grizzi, Fabio; Correale, Loredana; Rahal, Daoud; Anderloni, Andrea; Auriemma, Francesco; Fugazza, Alessandro; Preatoni, Paoletta; Maselli, Roberta; Hassan, Cesare; Finati, Elena; Mangiavillano, Benedetto; Repici, Alessandro

    2018-06-01

    EUS elastography is useful in characterizing solid pancreatic lesions (SPLs), and fractal analysis-based technology has been used to evaluate geometric complexity in oncology. The aim of this study was to evaluate EUS elastography (strain ratio) and fractal analysis for the characterization of SPLs. Consecutive patients with SPLs were prospectively enrolled between December 2015 and February 2017. Elastographic evaluation included the parenchymal strain ratio (pSR) and the wall strain ratio (wSR) and was performed with a new compact US processor. Elastographic images were analyzed using a computer program to determine the 3-dimensional histogram fractal dimension. A composite cytology/histology/clinical reference standard was used to assess sensitivity, specificity, positive predictive value, negative predictive value, and area under the receiver operating characteristic curve. Overall, 102 SPLs from 100 patients were studied. At final diagnosis, 69 (68%) were malignant and 33 benign. At elastography, both pSR and wSR were significantly higher in malignant as compared with benign SPLs (pSR, 24.5 vs 6.4 [P < .001]; wSR, 56.6 vs 15.3 [P < .001]). When the best cut-off levels of pSR and wSR, at 9.10 and 16.2 respectively, were used, sensitivity, specificity, positive predictive value, negative predictive value, and area under the curve were 88.4%, 78.8%, 89.7%, 76.9%, and 86.7% and 91.3%, 69.7%, 86.5%, 80%, and 85.7%, respectively. Fractal analysis showed a statistically significant difference (P = .0087) between the mean surface fractal dimensions of malignant lesions (D = 2.66 ± 0.01) and neuroendocrine tumors (D = 2.73 ± 0.03), and a significant difference for all 3 color channels, red, green, and blue (P < .0001). EUS elastography with pSR and fractal-based analysis are useful in characterizing SPLs. (Clinical trial registration number: NCT02855151.) Copyright © 2018 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.

  15. Evaluation of image compression for computer-aided diagnosis of breast tumors in 3D sonography

    NASA Astrophysics Data System (ADS)

    Chen, We-Min; Huang, Yu-Len; Tao, Chi-Chuan; Chen, Dar-Ren; Moon, Woo-Kyung

    2006-03-01

    Medical imaging examinations form the basis for physicians diagnosing diseases, as evidenced by the increasing use of digital medical images in picture archiving and communication systems (PACS). However, with enlarged medical image databases and rapid growth of patients' case reports, PACS requires image compression to accelerate the image transmission rate and conserve disk space, diminishing implementation costs. For this purpose, JPEG and JPEG2000 have been accepted as legal formats for digital imaging and communications in medicine (DICOM), and high compression ratios are considered useful for medical imagery. This study therefore evaluates the compression ratios of the JPEG and JPEG2000 standards for computer-aided diagnosis (CAD) of breast tumors in 3-D medical ultrasound (US) images. The 3-D US data sets are compressed at various compression ratios using the two image compression standards. The reconstructed data sets are then diagnosed by a previously proposed CAD system, and the diagnostic accuracy is measured by receiver operating characteristic (ROC) analysis; that is, the ROC curves are used to compare the diagnostic performance of two or more reconstructed images. The analysis results enable a comparison of the compression ratios achieved by JPEG and JPEG2000 for 3-D US images, and indicate the bit rates at which JPEG and JPEG2000 remain usable for 3-D breast US images.
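
    The ROC comparison described above reduces to computing an area under the curve (AUC) for each reconstructed data set. A hedged sketch computing AUC via the Mann-Whitney statistic, which equals the area under the empirical ROC curve; the scores below are invented:

```python
# AUC as the probability that a randomly chosen positive (malignant) case
# receives a higher CAD score than a randomly chosen negative (benign)
# case; tied scores count 1/2.
def roc_auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative: CAD malignancy scores and ground-truth labels.
auc = roc_auc([0.95, 0.80, 0.60, 0.30], [True, True, False, False])  # 1.0
```

    Comparing this AUC across compression ratios is how one would judge where compression begins to degrade diagnostic performance.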

  16. Exhaled Aerosol Pattern Discloses Lung Structural Abnormality: A Sensitivity Study Using Computational Modeling and Fractal Analysis

    PubMed Central

    Xi, Jinxiang; Si, Xiuhua A.; Kim, JongWon; Mckee, Edward; Lin, En-Bing

    2014-01-01

    Background Exhaled aerosol patterns, also called aerosol fingerprints, provide clues to the health of the lung and can be used to detect disease-modified airway structures. The key is how to decode the exhaled aerosol fingerprints and retrieve the lung structural information for a non-invasive identification of respiratory diseases. Objective and Methods In this study, a CFD-fractal analysis method was developed to quantify exhaled aerosol fingerprints and applied to one benign and three diseased conditions: a tracheal carina tumor, a bronchial tumor, and asthma. Respiration of 1 µm tracer aerosols at a flow rate of 30 L/min was simulated, with exhaled distributions recorded at the mouth. Large eddy simulations and a Lagrangian tracking approach were used to simulate respiratory airflows and aerosol dynamics. Aerosol morphometric measures such as concentration disparity, spatial distributions, and fractal analysis were applied to distinguish various exhaled aerosol patterns. Findings Utilizing physiology-based modeling, we demonstrated substantial differences in exhaled aerosol distributions among normal and pathological airways, which were suggestive of the disease location and extent. With fractal analysis, we also demonstrated that exhaled aerosol patterns exhibited fractal behavior in both the entire image and selected regions of interest. Each exhaled aerosol fingerprint exhibited distinct pattern parameters such as spatial probability, fractal dimension, lacunarity, and multifractal spectrum. Furthermore, a correlation between the diseased location and the exhaled aerosol spatial distribution was established for asthma. Conclusion Aerosol-fingerprint-based breath tests disclose clues about the site and severity of lung diseases and appear to be sensitive enough to be a practical tool for diagnosis and prognosis of respiratory diseases with structural abnormalities. PMID:25105680
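
    The fractal-analysis step above can be approximated with a standard box-counting estimate of the fractal dimension of a binary deposition image. A minimal sketch; the grid sizes and test patterns are illustrative, not the study's:

```python
import numpy as np

# Hedged sketch: box-counting fractal dimension of a binary exhaled-aerosol
# deposition pattern. The dimension is the slope of log N(s) vs log(1/s),
# where N(s) counts occupied s x s boxes.

def box_count(img, size):
    """Count size x size boxes containing at least one occupied pixel."""
    h, w = img.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if img[i:i + size, j:j + size].any():
                count += 1
    return count

def fractal_dimension(img, sizes=(1, 2, 4, 8, 16)):
    counts = [box_count(img, s) for s in sizes]
    # Slope of log N(s) vs log(1/s) gives the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

fd_plane = fractal_dimension(np.ones((32, 32)))  # a filled plane gives ~2.0
```

    A deposition pattern filling a line-like region would score near 1, a space-filling one near 2, with disease-modified patterns expected somewhere in between.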

  17. A Brief Historical Introduction to Fractals and Fractal Geometry

    ERIC Educational Resources Information Center

    Debnath, Lokenath

    2006-01-01

    This paper deals with a brief historical introduction to fractals, fractal dimension and fractal geometry. Many fractals including the Cantor fractal, the Koch fractal, the Minkowski fractal, the Mandelbrot and Given fractal are described to illustrate self-similar geometrical figures. This is followed by the discovery of dynamical systems and…

  18. Interactions between a fractal tree-like object and hydrodynamic turbulence: flow structure and characteristic mixing length

    NASA Astrophysics Data System (ADS)

    Meneveau, C. V.; Bai, K.; Katz, J.

    2011-12-01

    The vegetation canopy has a significant impact on various physical and biological processes such as forest microclimate, rainfall evaporation distribution and climate change. Most scaled laboratory experimental studies have used canopy element models that consist of rigid vertical strips or cylindrical rods that can be typically represented through only one or a few characteristic length scales, for example the diameter and height for cylindrical rods. However, most natural canopies and vegetation are highly multi-scale with branches and sub-branches, covering a wide range of length scales. Fractals provide a convenient idealization of multi-scale objects, since their multi-scale properties can be described in simple ways (Mandelbrot 1982). While fractal aspects of turbulence have been studied in several works in the past decades, research on turbulence generated by fractal objects started more recently. We present an experimental study of boundary layer flow over fractal tree-like objects. Detailed Particle-Image-Velocimetry (PIV) measurements are carried out in the near-wake of a fractal-like tree. The tree is a pre-fractal with five generations, with three branches and a scale reduction factor 1/2 at each generation. Its similarity fractal dimension (Mandelbrot 1982) is D ~ 1.58. Detailed mean velocity and turbulence stress profiles are documented, as well as their downstream development. We then turn attention to the turbulence mixing properties of the flow, specifically to the question whether a mixing length-scale can be identified in this flow, and if so, how it relates to the geometric length-scales in the pre-fractal object. Scatter plots of mean velocity gradient (shear) and Reynolds shear stress exhibit good linear relation at all locations in the flow. Therefore, in the transverse direction of the wake evolution, the Boussinesq eddy viscosity concept is appropriate to describe the mixing. 
We find that the measured mixing length increases with increasing streamwise locations. Conversely, the measured eddy viscosity and mixing length decrease with increasing elevation, which differs from eddy viscosity and mixing length behaviors of traditional boundary layers or canopies studied before. In order to find an appropriate length for the flow, several models based on the notion of superposition of scales are proposed and examined. One approach is based on spectral distributions. Another more practical approach is based on length-scale distributions evaluated using fractal geometry tools. These proposed models agree well with the measured mixing length. The results indicate that information about multi-scale clustering of branches as it occurs in fractals has to be incorporated into models of the mixing length for flows through canopies with multiple scales. The research is supported by National Science Foundation grant ATM-0621396 and AGS-1047550.
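
    The Boussinesq fit described above can be sketched in a few lines: regress the Reynolds shear stress against the mean shear to obtain an eddy viscosity, then convert it to a Prandtl mixing length. A hedged Python sketch with synthetic profile values, not the PIV data:

```python
import numpy as np

# Hedged sketch of the eddy-viscosity closure discussed above: the
# Boussinesq model -<u'v'> = nu_t * dU/dy, so a zero-intercept least-squares
# fit of Reynolds shear stress against mean shear yields nu_t, and a mixing
# length follows from Prandtl's l_m = sqrt(nu_t / |dU/dy|).

def eddy_viscosity(shear, reynolds_stress):
    """Least-squares slope of -<u'v'> versus dU/dy (zero intercept)."""
    shear = np.asarray(shear)
    stress = -np.asarray(reynolds_stress)
    return float(np.dot(shear, stress) / np.dot(shear, shear))

def mixing_length(nu_t, shear_magnitude):
    return float(np.sqrt(nu_t / shear_magnitude))

# Synthetic profile: shear dU/dy and Reynolds stress <u'v'> at three points.
nu_t = eddy_viscosity([1.0, 2.0, 3.0], [-0.5, -1.0, -1.5])  # 0.5
```

    The scatter-plot linearity reported above is exactly what justifies reading off a single nu_t per location before comparing the resulting mixing length with the fractal length-scale models.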

  19. Learning random networks for compression of still and moving images

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.
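
    The figures of merit quoted above, compression ratio and signal-to-noise ratio, are computed as follows. This sketch uses the common peak-signal form of SNR, which may differ from the paper's exact definition; the numbers are illustrative:

```python
import math

# Hedged sketch of the two quality metrics in the abstract above:
# compression ratio (e.g. 120:1) and a PSNR-style signal-to-noise ratio.

def compression_ratio(original_bytes, compressed_bytes):
    return original_bytes / compressed_bytes

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit grey-level sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)
```

    A 1.2 MB frame coded to 10 kB gives a 120:1 ratio; whether the accompanying PSNR of 25-35 dB is acceptable depends on the application, as the abstract notes.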

  20. Wavelet-based compression of pathological images for telemedicine applications

    NASA Astrophysics Data System (ADS)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.

  1. An image assessment study of image acceptability of the Galileo low gain antenna mission

    NASA Technical Reports Server (NTRS)

    Chuang, S. L.; Haines, R. F.; Grant, T.; Gold, Yaron; Cheung, Kar-Ming

    1994-01-01

    This paper describes a study conducted by NASA Ames Research Center (ARC) in collaboration with the Jet Propulsion Laboratory (JPL), Pasadena, California on the image acceptability of the Galileo Low Gain Antenna mission. The primary objective of the study is to determine the impact of the Integer Cosine Transform (ICT) compression algorithm on Galilean images of atmospheric bodies, moons, asteroids and Jupiter's rings. The approach involved fifteen volunteer subjects representing twelve institutions involved with the Galileo Solid State Imaging (SSI) experiment. Four different experiment specific quantization tables (q-table) and various compression stepsizes (q-factor) to achieve different compression ratios were used. It then determined the acceptability of the compressed monochromatic astronomical images as evaluated by Galileo SSI mission scientists. Fourteen different images were evaluated. Each observer viewed two versions of the same image side by side on a high resolution monitor, each was compressed using a different quantization stepsize. They were requested to select which image had the highest overall quality to support them in carrying out their visual evaluations of image content. Then they rated both images using a scale from one to five on its judged degree of usefulness. Up to four pre-selected types of images were presented with and without noise to each subject based upon results of a previously administered survey of their image preferences. Fourteen different images in seven image groups were studied. The results showed that: (1) acceptable compression ratios vary widely with the type of images; (2) noisy images detract greatly from image acceptability and acceptable compression ratios; and (3) atmospheric images of Jupiter seem to have higher compression ratios of 4 to 5 times that of some clear surface satellite images.

  2. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.

  3. Crunchiness Loss and Moisture Toughening in Puffed Cereals and Snacks.

    PubMed

    Peleg, Micha

    2015-09-01

    Upon moisture uptake, dry cellular cereals and snacks lose their brittleness and become soggy. This familiar phenomenon is manifested in the smoothing of their compressive force-displacement curves. These curves' degree of jaggedness, expressed by their apparent fractal dimension, can serve as an instrumental measure of the particles' crunchiness. The relationship between the apparent fractal dimension and moisture content or water activity has a characteristic sigmoid shape. The relationship between the sensorily perceived crunchiness and moisture also has a sigmoid shape whose inflection point lies at about the same location. The transition between the brittle and soggy states, however, appears sharper in the plot of apparent fractal dimension versus moisture. Less familiar is the observation that at moderate levels of moisture content, while the particles' crunchiness is being lost, their stiffness actually rises, a phenomenon that can be dubbed "moisture toughening." We show this phenomenon in commercial Peanut Butter Crunch® (sweet starch-based cereal), Cheese Balls (salty starch-based snack), and Pork Rind, also known as "Chicharon" (salty deep-fried pork skin), 3 crunchy foods that have very different chemical compositions. We also show that in the first 2 foods, moisture toughening was perceived sensorily as increased "hardness." We have concluded that the partial plasticization, which caused the brittleness loss, also inhibited failure propagation, which allowed the solid matrix to sustain higher stresses. This can explain other published reports of the phenomenon in different foods and model systems. © 2015 Institute of Food Technologists®
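
    The "apparent fractal dimension" used above as a jaggedness measure can be estimated from a sampled force-displacement curve. The sketch below uses Higuchi's method, one common estimator for curve jaggedness (the authors' exact algorithm may differ); a smooth curve gives a value near 1 and a highly jagged one approaches 2.

```python
import numpy as np

# Hedged sketch: apparent fractal dimension of a 1-D curve via Higuchi's
# method. Curve length is measured at coarser and coarser samplings k;
# the dimension is the slope of log L(k) versus log(1/k).

def higuchi_fd(x, kmax=8):
    x = np.asarray(x, dtype=float)
    n = len(x)
    logL, logk = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # Curve length at scale k, normalized per Higuchi (1988).
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)
            lengths.append(dist * norm / k)
        logL.append(np.log(np.mean(lengths)))
        logk.append(np.log(1.0 / k))
    slope, _ = np.polyfit(logk, logL, 1)
    return float(slope)
```

    Applied to digitized compressive force-displacement records, a sigmoid drop of this estimate with moisture is the crunchiness-loss signature the abstract describes.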

  4. Beyond maximum entropy: Fractal pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, R. C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.

  5. Temporal fractal analysis of the rs-BOLD signal identifies brain abnormalities in autism spectrum disorder.

    PubMed

    Dona, Olga; Hall, Geoffrey B; Noseworthy, Michael D

    2017-01-01

    Brain connectivity in autism spectrum disorders (ASD) has proven difficult to characterize due to the heterogeneous nature of the spectrum. Connectivity in the brain occurs in a complex, multilevel and multi-temporal manner, driving the fluctuations observed in local oxygen demand. These fluctuations can be characterized as fractals, as they auto-correlate at different time scales. In this study, we propose a model-free complexity analysis based on the fractal dimension of the rs-BOLD signal, acquired with magnetic resonance imaging. The fractal dimension can be interpreted as measure of signal complexity and connectivity. Previous studies have suggested that reduction in signal complexity can be associated with disease. Therefore, we hypothesized that a detectable difference in rs-BOLD signal complexity could be observed between ASD patients and Controls. Anatomical and functional data from fifty-five subjects with ASD (12.7 ± 2.4 y/o) and 55 age-matched (14.1 ± 3.1 y/o) healthy controls were accessed through the NITRC database and the ABIDE project. Subjects were scanned using a 3T GE Signa MRI and a 32-channel RF-coil. Axial FSPGR-3D images were used to prescribe rs-BOLD (TE/TR = 30/2000ms) where 300 time points were acquired. Motion correction was performed on the functional data and anatomical and functional images were aligned and spatially warped to the N27 standard brain atlas. Fractal analysis, performed on a grey matter mask, was done by estimating the Hurst exponent in the frequency domain using a power spectral density approach and refining the estimation in the time domain with de-trended fluctuation analysis and signal summation conversion methods. Voxel-wise fractal dimension (FD) was calculated for every subject in the control group and in the ASD group to create ROI-based Z-scores for the ASD patients. Voxel-wise validation of FD normality across controls was confirmed, and non-Gaussian voxels were eliminated from subsequent analysis. 
To maintain a 95% confidence level, only regions where Z-score values were at least 2 standard deviations away from the mean (i.e. where |Z| > 2.0) were included in the analysis. We found that the main regions, where signal complexity significantly decreased among ASD patients, were the amygdala (p = 0.001), the vermis (p = 0.02), the basal ganglia (p = 0.01) and the hippocampus (p = 0.02). No regions reported significant increase in signal complexity in this study. Our findings were correlated with ADIR and ADOS assessment tools, reporting the highest correlation with the ADOS metrics. Brain connectivity is best modeled as a complex system. Therefore, a measure of complexity as the fractal dimension of fluctuations in brain oxygen demand and utilization could provide important information about connectivity issues in ASD. Moreover, this technique can be used in the characterization of a single subject, with respect to controls, without the need for group analysis. Our novel approach provides an ideal avenue for personalized diagnostics, thus providing unique patient specific assessment that could help in individualizing treatments.
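
    Of the Hurst-exponent estimators listed above, detrended fluctuation analysis (DFA) is the most self-contained to sketch: the scaling exponent is the slope of log fluctuation versus log window size, and for noise-like signals of this kind the fractal dimension then follows as FD = 2 - H. The window sizes and demo signal below are illustrative, not the study's settings:

```python
import numpy as np

# Hedged sketch of first-order detrended fluctuation analysis (DFA-1),
# one of the methods the study uses to refine the Hurst exponent of the
# rs-BOLD signal.

def dfa_hurst(x, scales=(4, 8, 16, 32)):
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())            # integrated profile
    flucts = []
    for s in scales:
        n_win = len(y) // s
        rms = []
        for w in range(n_win):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            # Remove the local linear trend within each window.
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return float(slope)
```

    White noise yields an exponent near 0.5, while strongly autocorrelated (random-walk-like) signals score higher; lower complexity in patients would show up as a shift in the derived fractal dimension.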

  6. High-performance compression of astronomical images

    NASA Technical Reports Server (NTRS)

    White, Richard L.

    1993-01-01

    Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
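
    The H-transform referenced above is a Haar-like transform built from sums and differences of 2x2 pixel blocks, which is why an integer implementation can be exactly reversible. A minimal single-block sketch; the full method recurses on the low-pass terms and entropy-codes the coefficients, which is omitted here:

```python
# Hedged sketch: one level of a 2x2 H-transform. Keeping raw integer sums
# and differences makes the step exactly reversible, matching the lossless
# mode described in the abstract; a real coder would also quantize noisy
# coefficients (lossy mode) and entropy-code the result.

def h_transform_block(a, b, c, d):
    """Forward transform of one 2x2 block of pixels."""
    h0 = a + b + c + d        # smoothed (low-pass) term
    hx = a + b - c - d        # vertical difference
    hy = a - b + c - d        # horizontal difference
    hc = a - b - c + d        # diagonal difference
    return h0, hx, hy, hc

def h_inverse_block(h0, hx, hy, hc):
    # Each sum below is an exact multiple of 4, so // is lossless.
    a = (h0 + hx + hy + hc) // 4
    b = (h0 + hx - hy - hc) // 4
    c = (h0 - hx + hy - hc) // 4
    d = (h0 - hx - hy + hc) // 4
    return a, b, c, d
```

    On a flat background the three difference terms are zero, which is what makes the transform effective on sky images that are mostly empty.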

  7. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide format printing industry, this problem becomes an important issue: e.g. a 1 m2 input color image, scanned at 600 dpi, exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied directly to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain, and is illustrated by examples for different halftone masks. A pre-sharpening operation applied to a JPEG-compressed low quality image is also described; it de-noises the image and enhances its contours.
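
    Screening itself is a simple threshold against a tiled dither mask; the paper's contribution is carrying out the equivalent threshold operation in the DCT domain of the JPEG data, which is beyond this sketch. A spatial-domain reference version, with an illustrative mask:

```python
# Hedged sketch: halftoning by screening in the spatial domain. Each pixel
# is compared against a periodically tiled threshold mask; the output is a
# binary (printable) image. Mask values are illustrative, not a standard.

BAYER_2X2 = [[64, 192],
             [128, 255]]   # illustrative 8-bit threshold mask

def screen_halftone(image, mask=BAYER_2X2):
    mh, mw = len(mask), len(mask[0])
    return [[1 if pixel >= mask[i % mh][j % mw] else 0
             for j, pixel in enumerate(row)]
            for i, row in enumerate(image)]
```

    Because thresholding is pixel-local and the mask is periodic, the operation has a DCT-domain counterpart, which is what allows the paper to skip the decompress-process-recompress cycle.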

  8. Deformation Failure Characteristics of Coal Body and Mining Induced Stress Evolution Law

    PubMed Central

    Wen, Zhijie; Wen, Jinhao; Shi, Yongkui; Jia, Chuanyang

    2014-01-01

    The results of the interaction between coal failure and the evolution of the mining-induced pressure field are presented. A mechanical model of the stope and its structural divisions is built, and the failure and deformation behavior of the coal body at different mining stages is demonstrated; in particular, the breaking arch and stress arch that influence the mining area are quantitatively calculated, and a systematic method for determining the stress field distribution is worked out. The results indicate that the pore distribution of a coal body at different compressed volumes has fractal character; the propagation range of the internal stress field varies linearly with the compressed volume of the coal body, while the range of outburst coal mass varies nonlinearly with the number of pores influenced by mining pressure. The results provide a theoretical reference for research on the range of mining-induced stress and the broken coal wall. PMID:24967438

  9. Development of ultrasound/endoscopy PACS (picture archiving and communication system) and investigation of compression method for cine images

    NASA Astrophysics Data System (ADS)

    Osada, Masakazu; Tsukui, Hideki

    2002-09-01

    Picture Archiving and Communication System (PACS) is a system which connects imaging modalities, image archives, and image workstations to reduce film handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging, because they produce large amounts of data, such as motion (cine) images at 30 frames per second, 640 x 480 in resolution, with 24-bit color, and they require sufficient image quality for clinical review. We have developed a PACS which is able to manage ultrasound and endoscopy cine images at the above resolution and frame rate, and investigated a suitable compression method and compression rate for clinical image review. Results show that clinicians require frame-by-frame forward and backward review of cine images, because they carefully look through motion images to find certain color patterns which may appear in only one frame. To satisfy this requirement, we chose motion JPEG, installed it, and confirmed that we could capture this specific pattern. As for the acceptable image compression rate, we performed a subjective evaluation. No subject could tell the difference between original non-compressed images and 1:10 lossy compressed JPEG images. One subject could tell the difference between original and 1:20 lossy compressed JPEG images, although the latter remained acceptable. Thus, ratios of 1:10 to 1:20 are acceptable to reduce data amount and cost while maintaining quality for clinical review.

  10. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
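
    Of the three classifiers compared above, minimum-distance is the simplest: each pixel vector is assigned to the class whose training-site mean is nearest in Euclidean distance. A hedged sketch with invented two-band class means:

```python
import math

# Hedged sketch of a minimum-distance classifier for multispectral pixels.
# Class names and mean vectors below are illustrative, not the study's.

def minimum_distance_classify(pixel, class_means):
    """Return the class whose mean vector is nearest to the pixel."""
    return min(class_means,
               key=lambda name: math.dist(pixel, class_means[name]))

class_means = {
    "water": (10.0, 5.0),
    "vegetation": (40.0, 90.0),
    "soil": (80.0, 60.0),
}
label = minimum_distance_classify((12.0, 6.0), class_means)  # "water"
```

    Because the decision depends only on distances to a few fixed means, this classifier degrades gracefully as compression smooths pixel values, consistent with maximum-likelihood (which also models class covariance) showing the poorest results above.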

  11. Reevaluation of JPEG image compression to digitalized gastrointestinal endoscopic color images: a pilot study

    NASA Astrophysics Data System (ADS)

    Kim, Christopher Y.

    1999-05-01

    Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. For that study, the results indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application on the WWW, a medium which would serve both the clinical and educational purposes of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximal tolerable image size for downloading on the WWW.

  12. Radiomics-based differentiation of lung disease models generated by polluted air based on X-ray computed tomography data.

    PubMed

    Szigeti, Krisztián; Szabó, Tibor; Korom, Csaba; Czibak, Ilona; Horváth, Ildikó; Veres, Dániel S; Gyöngyi, Zoltán; Karlinger, Kinga; Bergmann, Ralf; Pócsik, Márta; Budán, Ferenc; Máthé, Domokos

    2016-02-11

    Lung diseases (resulting from air pollution) require a widely accessible method for risk estimation and early diagnosis to ensure proper and responsive treatment. Radiomics-based fractal dimension analysis of X-ray computed tomography attenuation patterns in chest voxels of mice exposed to different air polluting agents was performed to model early stages of disease and establish differential diagnosis. To model different types of air pollution, BALBc/ByJ mouse groups were exposed to cigarette smoke combined with ozone, sulphur dioxide gas and a control group was established. Two weeks after exposure, the frequency distributions of image voxel attenuation data were evaluated. Specific cut-off ranges were defined to group voxels by attenuation. Cut-off ranges were binarized and their spatial pattern was associated with calculated fractal dimension, then abstracted by the fractal dimension -- cut-off range mathematical function. Nonparametric Kruskal-Wallis (KW) and Mann-Whitney post hoc (MWph) tests were used. Each cut-off range versus fractal dimension function plot was found to contain two distinctive Gaussian curves. The ratios of the Gaussian curve parameters are considerably significant and are statistically distinguishable within the three exposure groups. A new radiomics evaluation method was established based on analysis of the fractal dimension of chest X-ray computed tomography data segments. The specific attenuation patterns calculated utilizing our method may diagnose and monitor certain lung diseases, such as chronic obstructive pulmonary disease (COPD), asthma, tuberculosis or lung carcinomas.

  13. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and decompression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper of the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and the quality of the decompressed image; the latter is determined by rate-distortion data obtained from a database of realistic test images. The discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  14. Novel Near-Lossless Compression Algorithm for Medical Sequence Images with Adaptive Block-Based Spatial Prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2016-12-01

    To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, a novel near-lossless compression algorithm based on adaptive spatial prediction is proposed in this paper for medical sequence images intended for diagnostic use. The proposed method employs adaptive block-size-based spatial prediction to predict blocks directly in the spatial domain, and applies the Lossless Hadamard Transform before quantization to improve the quality of the reconstructed images. The block-based prediction breaks the pixel-neighborhood constraint and takes full advantage of the local spatial correlations found in medical images. The adaptive block size guarantees a more rational division of images and improved use of the local structure. The results indicate that the proposed algorithm can efficiently compress medical images and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.
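    The prediction-plus-quantization idea behind near-lossless coding can be sketched as below. This is only an illustration of the bounded-error principle, assuming a simple mean (DC) predictor; the paper's adaptive block sizing and Lossless Hadamard Transform stage are not reproduced here.

```python
import numpy as np

def near_lossless_block(img, block=8, delta=2):
    """Near-lossless block coding sketch: predict each block from its
    already-reconstructed neighbors, then uniformly quantize the residual
    with step 2*delta + 1 so the per-pixel error never exceeds delta."""
    img = img.astype(np.int32)
    rec = np.zeros_like(img)
    h, w = img.shape
    step = 2 * delta + 1
    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = img[y:y + block, x:x + block]
            # simple mean (DC) predictor from reconstructed left/top borders
            neigh = []
            if x > 0:
                neigh.append(rec[y:y + block, x - 1])
            if y > 0:
                neigh.append(rec[y - 1, x:x + block])
            pred = int(np.mean(np.concatenate(neigh))) if neigh else 128
            resid = blk - pred
            q = np.round(resid / step).astype(np.int32)  # entropy-coded in practice
            rec[y:y + block, x:x + block] = pred + q * step
    return rec
```

    Because the residual quantizer step is 2·delta + 1, the reconstruction error is bounded by delta regardless of how poor the prediction is; the prediction only affects the entropy of the quantized residuals, not the distortion bound.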

  15. Reliability of Using Retinal Vascular Fractal Dimension as a Biomarker in the Diabetic Retinopathy Detection

    PubMed Central

    Zhang, Jiong; Bekkers, Erik; Abbasi-Sureshjani, Samaneh

    2016-01-01

    The retinal fractal dimension (FD) is a measure of vasculature branching pattern complexity. FD has been considered as a potential biomarker for the detection of several diseases like diabetes and hypertension. However, conflicting findings have been reported in the literature regarding the association between this biomarker and diseases. In this paper, we examine the stability of the FD measurement with respect to (1) different vessel annotations obtained from human observers, (2) automatic segmentation methods, (3) various regions of interest, (4) accuracy of vessel segmentation methods, and (5) different imaging modalities. Our results demonstrate that the relative errors for the measurement of FD are significant and FD varies considerably according to the image quality, modality, and the technique used for measuring it. Automated and semiautomated methods for the measurement of FD are not stable enough, which makes FD a deceptive biomarker in quantitative clinical applications. PMID:27703803
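    The box-counting estimator is one common way to measure the FD studied above; the sketch below is a generic implementation of that estimator, not the authors' measurement pipeline, and it illustrates why FD estimates depend on image quality (the fit is over a finite range of box sizes).

```python
import numpy as np

def box_counting_dimension(binary_img):
    """Estimate the fractal dimension of a binary vessel map by box
    counting: count occupied boxes N(s) at dyadic box sizes s, then fit
    log N(s) = -D log s + c by least squares."""
    img = np.asarray(binary_img, dtype=bool)
    n = min(img.shape)
    sizes = [2 ** k for k in range(1, int(np.log2(n)))]
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s, img.shape[1] // s
        # tile the image into s-by-s boxes and mark boxes containing any pixel
        boxes = img[:h * s, :w * s].reshape(h, s, w, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

    Sanity checks: a filled square yields D ≈ 2 and a straight line yields D ≈ 1; real vessel maps fall in between, and segmentation errors shift the count at small box sizes, changing the fitted slope.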

  16. Reliability of Using Retinal Vascular Fractal Dimension as a Biomarker in the Diabetic Retinopathy Detection.

    PubMed

    Huang, Fan; Dashtbozorg, Behdad; Zhang, Jiong; Bekkers, Erik; Abbasi-Sureshjani, Samaneh; Berendschot, Tos T J M; Ter Haar Romeny, Bart M

    2016-01-01

    The retinal fractal dimension (FD) is a measure of vasculature branching pattern complexity. FD has been considered as a potential biomarker for the detection of several diseases like diabetes and hypertension. However, conflicting findings have been reported in the literature regarding the association between this biomarker and diseases. In this paper, we examine the stability of the FD measurement with respect to (1) different vessel annotations obtained from human observers, (2) automatic segmentation methods, (3) various regions of interest, (4) accuracy of vessel segmentation methods, and (5) different imaging modalities. Our results demonstrate that the relative errors for the measurement of FD are significant and FD varies considerably according to the image quality, modality, and the technique used for measuring it. Automated and semiautomated methods for the measurement of FD are not stable enough, which makes FD a deceptive biomarker in quantitative clinical applications.

  17. Using irreversible compression in digital radiology: a preliminary study of the opinions of radiologists

    NASA Astrophysics Data System (ADS)

    Seeram, Euclid

    2006-03-01

    The large volumes of digital images produced by digital imaging modalities in Radiology have provided the motivation for the development of picture archiving and communication systems (PACS) in an effort to provide an organized mechanism for digital image management. The development of more sophisticated methods of digital image acquisition (Multislice CT and Digital Mammography, for example), as well as the implementation and performance of PACS and Teleradiology systems in a health care environment, have created challenges in the area of image compression with respect to storing and transmitting digital images. Image compression can be reversible (lossless) or irreversible (lossy). While in the former there is no loss of information, the latter presents concerns since there is a loss of information. This loss of information from diagnostic medical images is of primary concern not only to radiologists, but also to patients and their physicians. In 1997, Goldberg pointed out that "there is growing evidence that lossy compression can be applied without significantly affecting the diagnostic content of images... there is growing consensus in the radiologic community that some forms of lossy compression are acceptable". The purpose of this study was to explore the opinions of expert radiologists and related professional organizations on the use of irreversible compression in routine practice. The opinions of notable radiologists in the US and Canada are varied, indicating no consensus on the use of irreversible compression in primary diagnosis; however, they are generally positive about the image storage and transmission advantages. Almost all radiologists are concerned with the litigation potential of an incorrect diagnosis based on irreversibly compressed images. A survey of several radiology professional and related organizations reveals that no professional practice standards exist for the use of irreversible compression. Currently, the only standard for image compression is stated in the ACR's Technical Standards for Teleradiology and Digital Image Management.

  18. Oblivious image watermarking combined with JPEG compression

    NASA Astrophysics Data System (ADS)

    Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice

    2003-06-01

    For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression of a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have relied heavily on simulation. It is desirable not only to measure the effect of compression on the embedded watermark, but also to control the embedding process so that the watermark survives lossy compression. In this paper, we focus on oblivious watermarking, assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme in which the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.
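    The abstract does not specify the embedding rule, but a standard way to make an oblivious watermark survive requantization is quantization-index modulation (QIM) on transform coefficients, with the lattice step chosen relative to the JPEG quantization step. The sketch below is that generic technique, not the authors' scheme.

```python
import numpy as np

def qim_embed(coeffs, bits, step):
    """Embed one bit per coefficient by snapping it onto one of two
    interleaved lattices: multiples of `step` for bit 0, or multiples
    offset by step/2 for bit 1."""
    offset = bits * (step / 2.0)
    return step * np.round((coeffs - offset) / step) + offset

def qim_extract(coeffs, step):
    """Recover each bit by testing which lattice the coefficient is
    closer to; correct as long as perturbations stay below step/4."""
    d0 = np.abs(coeffs - step * np.round(coeffs / step))
    shifted = coeffs - step / 2.0
    d1 = np.abs(shifted - step * np.round(shifted / step))
    return (d1 < d0).astype(int)
```

    Choosing `step` larger than twice the JPEG quantization step for a coefficient makes requantization behave like bounded noise below the step/4 decision margin, which is the intuition behind jointly designing the embedder and the expected compression.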

  19. Clinical utility of wavelet compression for resolution-enhanced chest radiography

    NASA Astrophysics Data System (ADS)

    Andriole, Katherine P.; Hovanes, Michael E.; Rowberg, Alan H.

    2000-05-01

    This study evaluates the usefulness of wavelet compression for resolution-enhanced storage phosphor chest radiographs in the detection of subtle interstitial disease, pneumothorax and other abnormalities. A wavelet compression technique, MrSID™ (LizardTech, Inc., Seattle, WA), is implemented which compresses the images from their original 2,000 by 2,000 (2K) matrix size, and then decompresses the image data for display at optimal resolution by matching the spatial frequency characteristics of image objects using a 4,000-square matrix. The 2K-matrix computed radiography (CR) chest images are magnified to a 4K matrix using wavelet series expansion. The magnified images are compared with the original uncompressed 2K radiographs and with two-times magnification of the original images. Preliminary results show radiologist preference for MrSID™ wavelet-based magnification over magnification of the original data, and suggest that the compressed/decompressed images may provide an enhancement to the original. Data collection for clinical trials of 100 chest radiographs, including subtle interstitial abnormalities and/or subtle pneumothoraces and normal cases, is in progress. Three experienced thoracic radiologists will view images side-by-side on calibrated softcopy workstations under controlled viewing conditions, and rank-order preference tests will be performed. This technique combines image compression with image enhancement, and suggests that compressed/decompressed images can actually improve on the originals.

  20. Pornographic image recognition and filtering using incremental learning in compressed domain

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of the network, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images on the Internet, which do great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images; (2) visual words are created from the LR image to represent the pornographic image; and (3) incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples, after the covering algorithm is utilized to train on and recognize the visual words in order to build the initial classification model of pornographic images. The experimental results show that the proposed pornographic image recognition method using incremental learning achieves a higher recognition rate and a shorter recognition time in the compressed domain.

  1. A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images

    NASA Astrophysics Data System (ADS)

    Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo

    2007-03-01

    Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages over other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding, and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluate image quality by using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images into benign and malignant cases, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios from lossless to lossy, used the CAD system to classify the cases at each compression ratio, and then compared the ROC curves of the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases as the compression ratio increases, with small fluctuations.

  2. A Framework of Hyperspectral Image Compression using Neural Networks

    DOE PAGES

    Masalmah, Yahya M.; Martínez Nieves, Christian; Rivera Soto, Rafael; ...

    2015-01-01

    Hyperspectral image analysis has gained great attention due to its wide range of applications. Hyperspectral images provide a vast amount of information about underlying objects in an image by using a large range of the electromagnetic spectrum for each pixel. However, since the same image is taken multiple times using distinct electromagnetic bands, the size of such images tends to be significant, which leads to greater processing requirements. The aim of this paper is to present a proposed framework for image compression and to study the possible effects of spatial compression on the quality of unmixing results. Image compression allows us to reduce the dimensionality of an image while still preserving most of the original information, which could lead to faster image processing. Lastly, this paper presents preliminary results of different training techniques used in an Artificial Neural Network (ANN) based compression algorithm.

  3. Two-dimensional compression of surface electromyographic signals using column-correlation sorting and image encoders.

    PubMed

    Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O

    2009-01-01

    We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
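    The preprocessing step above reorders signal segments so that neighboring columns of the 2-D arrangement are highly correlated, which is what still-image coders exploit. A minimal greedy sketch of that idea follows; it is illustrative only, not the authors' exact sorting algorithm.

```python
import numpy as np

def correlation_sort_columns(mat):
    """Greedy column reordering: start from column 0 and repeatedly
    append the unused column most correlated with the last one placed,
    so adjacent columns in the output resemble each other."""
    n = mat.shape[1]
    c = np.corrcoef(mat.T)  # pairwise correlation between columns
    order = [0]
    unused = set(range(1, n))
    while unused:
        last = order[-1]
        nxt = max(unused, key=lambda j: c[last, j])
        order.append(nxt)
        unused.remove(nxt)
    return mat[:, order], order
```

    After sorting, the matrix behaves more like a natural image (smooth horizontal structure), so an off-the-shelf coder such as JPEG2000 or an intraframe H.264 encoder spends fewer bits on column-to-column transitions; the permutation must be stored as side information to invert the process.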

  4. A block-based JPEG-LS compression technique with lossless region of interest

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

    The JPEG-LS lossless compression algorithm is used in many specialized applications that emphasize the attainment of high fidelity, for its lower complexity and better compression ratios than the lossless JPEG standard. However, it cannot prevent error diffusion, because of the context dependence of the algorithm, and it has a low compression ratio compared with lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. Then we adopt a block-based image compression technique to limit the range of error diffusion. We apply JPEG-LS lossless compression to the image blocks that include all or part of the region of interest (ROI), and JPEG-LS near-lossless compression to the image blocks contained in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.

  5. CWICOM: A Highly Integrated & Innovative CCSDS Image Compression ASIC

    NASA Astrophysics Data System (ADS)

    Poupat, Jean-Luc; Vitulli, Raffaele

    2013-08-01

    The space market is increasingly demanding in terms of image compression performance. The instrument resolution, agility, and swath of Earth-observation satellites are continuously increasing, multiplying by ten the volume of imagery acquired in one orbit. In parallel, satellite size and mass are decreasing, requiring innovative electronic technologies that reduce size, mass, and power consumption. Astrium, a market leader in combined compression and memory solutions for space applications, has developed a new image compression ASIC which is presented in this paper. CWICOM is a high-performance and innovative image compression ASIC developed by Astrium in the frame of ESA contract n°22011/08/NLL/LvH. The objective of this ESA contract is to develop a radiation-hardened ASIC that implements the CCSDS 122.0-B-1 Standard for Image Data Compression, has a SpaceWire interface for configuring and controlling the device, and is compatible with the Sentinel-2 interface and with similar Earth observation missions. CWICOM stands for CCSDS Wavelet Image COMpression ASIC. It is a large-dynamic, large-image, and very high speed image compression ASIC potentially relevant for compression of any 2D image with bi-dimensional data correlation, such as Earth observation and scientific data compression. The paper presents some of the main aspects of the CWICOM development, such as the algorithm and specification, the innovative memory organization, the validation approach, and the status of the project.

  6. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
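    The DCT-plus-quantization-matrix pipeline described above can be sketched as follows. The perceptually derived matrix itself (built from luminance and contrast masking) is the substance of the invention and is simply taken as an input here; the code shows only the standard transform and quantization round trip it plugs into.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def quantize_block(block, qmatrix):
    """Level-shift, forward DCT, then divide by the quantization matrix
    and round; larger entries discard more of a coefficient."""
    d = dct_matrix()
    coeffs = d @ (block - 128.0) @ d.T
    return np.round(coeffs / qmatrix)

def dequantize_block(q, qmatrix):
    """Rescale the quantized coefficients and invert the DCT."""
    d = dct_matrix()
    return d.T @ (q * qmatrix) @ d + 128.0
```

    Because the DCT is orthonormal, the spatial reconstruction error is controlled coefficient-by-coefficient by the quantization matrix, which is what lets a perceptual model allocate larger steps where the eye is less sensitive.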

  7. High-performance compression and double cryptography based on compressive ghost imaging with the fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma

    2016-11-01

    To solve the problem that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the Fast Fourier Transform (FFT), named FFT-CGI, is proposed. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is encrypted by the CGI system with a secret key. Then the receiver decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression performance of the proposed encryption scheme. The experiments suggest that the method can improve the quality of large images compared with conventional ghost imaging and achieve imaging of large-sized images; furthermore, the amount of transmitted data is greatly reduced by the combination of compressive sensing and the FFT, and the security level of ghost images is improved against ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack. This technique can be immediately applied to encryption and data storage, with the advantages of high security, fast transmission, and high quality of reconstructed information.

  8. Advances in active control and optimization in turbulence

    NASA Astrophysics Data System (ADS)

    Freeman, Aaron Paul

    The main objective of this research is to explore the effectiveness of pulsed plasma actuators for turbulence control. In particular, a pulsed plasma actuator is used in this research to implement active control, in the form of a localized body force, over turbulent separated shear layers. Applications of this research include controlling the formation and distribution of large-scale turbulent structures and optimizing turbulence-aberrated laser propagation. This research is primarily experimental, with the motivation for the work derived from theoretical analysis of a turbulent shear layer. The experimental work is considered within two primary flow regimes, compressible and incompressible. For both cases, a turbulent shear layer is generated and then forced with plasma which is introduced periodically at frequencies ranging between 1.0 kHz and 25.0 kHz. The Reynolds numbers, based on visual thickness, of the compressible and incompressible flows investigated in this research are 6.0×10⁶ and 8.0×10⁴, respectively. Experimental results for the compressible case, based on Shack-Hartmann profiling of turbulence-aberrated laser wavefronts, for laser propagation through forced and unforced shear flows show reductions in the laser aberrations of up to 27.5% with a pulsing frequency of 5.0 kHz as well as increases of up to 16.9% with a pulsing frequency of 1.0 kHz. Other pulsing frequencies within the specified range were experimentally analyzed and found to exhibit little or no significant change in the laser aberrations compared to the unforced case. The direct results from the Shack-Hartmann wavefront sensor are used to calculate the power spectra of the recorded Optical Path Difference profiles to verify the correlation between large aero-optical aberrations and propagation through large turbulent structures. Shadowgraph imaging of the compressible flow field was conducted to visually demonstrate the same.
The experimental procedure for the incompressible shear layer involves imaging the flow field using fog-Mie scattering. The analysis for the resulting incompressible shear layer images include investigations of the distribution of large scale structures and the associated effects that periodic forcing has on the shear layer relating to mixing enhancement and scalar geometry. The effects of periodic forcing on mixing will be determined based on the scalar probability density function and the scalar power spectrum. In addition, the geometry of the scalar interfaces will be examined in terms of the generalized fractal dimension to determine the effects that periodic forcing has on the scale dependency of self-similarity within the flow field. Results from the experiments for the incompressible shear layer show that mixing can be increased by up to 8.4% as determined based on increases within the intermediate scalar probability density function and decreased by as much as 30.8% at forcing frequencies of 25.0 kHz and 1.0 kHz respectively. Additionally, this research shows that the extent of the range of scales of geometrical self-similarity of iso-concentration interfaces extracted from the flow images can be increased by up to 75.0% or reduced by as much as 75.0% depending on the forcing frequency applied. These results show that aero-optical interactions in a compressible shear layer as well as both mixing and the interfacial geometry in incompressible shear layers can be substantially modified by the periodic forcing.

  9. Blind compressed sensing image reconstruction based on alternating direction method

    NASA Astrophysics Data System (ADS)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is found by an alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with strong adaptability. The experimental results show that the proposed image reconstruction algorithm based on blind compressed sensing can recover high-quality image signals under under-sampling conditions.
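    The alternating scheme described above can be sketched on a simplified, fully observed factorization model X ≈ S·D (sparse codes times dictionary). The update rules below, a thresholded least-squares code step and a least-squares dictionary step, are an illustration of alternating minimization under these assumptions, not the paper's exact algorithm, which also involves the compressive measurement operator.

```python
import numpy as np

def sparse_factorize(x, n_atoms=8, k=3, iters=20, seed=0):
    """Alternating minimization for X ~= S @ D with row-sparse codes S:
    (1) unconstrained least-squares codes, hard-thresholded to the k
    largest entries per row; (2) least-squares dictionary update."""
    rng = np.random.default_rng(seed)
    d = rng.standard_normal((n_atoms, x.shape[1]))
    s = np.zeros((x.shape[0], n_atoms))
    for _ in range(iters):
        # sparse-coding step: project rows of X onto the current atoms,
        # then keep only the k largest-magnitude coefficients per row
        s = x @ np.linalg.pinv(d)
        drop = np.argsort(np.abs(s), axis=1)[:, :-k]
        np.put_along_axis(s, drop, 0.0, axis=1)
        # dictionary step: optimal D for the fixed sparse codes
        d = np.linalg.pinv(s) @ x
    return s, d
```

    Each half-step solves a convex subproblem while the other factor is frozen, which is the general structure of alternating minimization for blind models; convergence to the true factors is not guaranteed in general and depends on initialization and the sparsity level.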

  10. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2017-03-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) at comparable compression ratios could be increased by 2.78% and 5.48%, respectively, compared with Joint Photographic Experts Group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while preserving encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.

  11. Effects of Image Compression on Automatic Count of Immunohistochemically Stained Nuclei in Digital Images

    PubMed Central

    López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín

    2008-01-01

    This study investigates the effects of digital image compression on automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and the other three groups contained the same images converted from TIFF to JPEG format with 3×, 23× and 46× compression. Counts from the TIFF-format images were compared with those of the other three groups. Overall, differences in the counts increased with the degree of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) had small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images with low complexity and could be an efficient method for storing these images. PMID:18755997

  12. IFSM fractal image compression with entropy and sparsity constraints: A sequential quadratic programming approach

    NASA Astrophysics Data System (ADS)

    Kunze, Herb; La Torre, Davide; Lin, Jianyi

    2017-01-01

    We consider the inverse problem associated with IFSM: given a target function f, find an IFSM such that its fixed point f̄ is sufficiently close to f in the Lᵖ distance. Forte and Vrscay [1] showed how to reduce this problem to a quadratic optimization model. In this paper, we extend the collage-based method developed by Kunze, La Torre and Vrscay ([2][3][4]) by proposing the minimization of the ℓ1-norm instead of the ℓ0-norm. In fact, optimization problems involving the ℓ0-norm are combinatorial in nature, and hence in general NP-hard. To overcome these difficulties, we introduce the ℓ1-norm and propose a Sequential Quadratic Programming algorithm to solve the corresponding inverse problem. As in Kunze, La Torre and Vrscay [3], in our formulation the minimization of collage error is treated as a multi-criteria problem that includes three different and conflicting criteria: collage error, entropy, and sparsity. This multi-criteria program is solved by means of a scalarization technique which reduces the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented.
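    The collage-based formulation above rests on the collage theorem: for a contractive operator T with contractivity factor c ∈ [0, 1) and fixed point f̄, the approximation error is bounded by the collage error. Written out (in generic notation, not necessarily the paper's symbols):

```latex
d\left(f,\bar{f}\right) \le \frac{1}{1-c}\, d\left(f, Tf\right),
\qquad
\min_{w}\ \lambda_1\,\Delta_{\mathrm{collage}}(w)
        + \lambda_2\, E(w)
        + \lambda_3\, \lVert w \rVert_{1}.
```

    Minimizing the collage distance d(f, Tf) therefore controls how far the IFSM fixed point can drift from the target, and the scalarized program on the right combines the three conflicting criteria (collage error, entropy E, and ℓ1 sparsity of the IFSM coefficients w) with trade-off weights λᵢ.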

  13. Optimal color coding for compression of true color images

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-11-01

    In this paper we present a method that improves lossy compression of true color or other multispectral images. The essence of the method is to project the initial color planes into the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do that, a new fast algorithm for true KL basis construction with low memory consumption is suggested, and our recently proposed scheme is used for finding the optimal losses of the KL functions during compression. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain from 0.2 to 2 dB at typical compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can work on common hardware.
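    The KL projection at the core of the method amounts to a PCA of the inter-channel covariance: the color planes are rotated into uncorrelated planes ordered by variance, which can then be compressed with unequal losses. A minimal sketch of that projection (not the paper's fast construction algorithm):

```python
import numpy as np

def kl_decorrelate(img):
    """Project the channels of an H x W x C image onto the
    Karhunen-Loeve (PCA) basis of the inter-channel covariance,
    producing decorrelated planes ordered by decreasing variance."""
    h, w, c = img.shape
    x = img.reshape(-1, c).astype(np.float64)
    mean = x.mean(axis=0)
    cov = np.cov((x - mean).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    basis = eigvecs[:, ::-1]  # eigh returns ascending eigenvalues
    planes = (x - mean) @ basis
    return planes.reshape(h, w, c), basis, mean
```

    Because the basis is orthogonal, the original image is recovered exactly by `planes @ basis.T + mean`; in a codec, most of the bit budget goes to the first (highest-variance) plane and the low-variance planes tolerate heavier quantization.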

  14. Simultaneous compression and encryption of closely resembling images: application to video sequences and polarimetric images.

    PubMed

    Aldossari, M; Alfalou, A; Brosseau, C

    2014-09-22

    This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene, but also to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or to store) is decomposed into several independent areas which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize their overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce an additional noise for reconstructing the images (encryption). Our results show not only that control of the spectral plane can increase the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels. Firstly, we add a specific encryption level which is related to the different areas of the spectral plane; then, we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is performed in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize the image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.

  15. Classification of microscopic images of breast tissue

    NASA Astrophysics Data System (ADS)

    Ballerini, Lucia; Franzen, Lennart

    2004-05-01

    Breast cancer is the most common form of cancer among women. The diagnosis is usually performed by a pathologist, who subjectively evaluates tissue samples. The aim of our research is to develop techniques for the automatic classification of cancerous tissue by analyzing histological samples of intact tissue taken with a biopsy. In our study, we considered 200 images presenting four different conditions: normal tissue, fibroadenosis, ductal cancer and lobular cancer. Methods to extract features have been investigated and described. One method is based on granulometries, which are size-shape descriptors widely used in mathematical morphology. Applications of granulometries lead to distribution functions whose moments are used as features. A second method is based on fractal geometry, which seems very well suited to quantifying biological structures. The fractal dimension of binary images has been computed using the Euclidean distance mapping. Image classification has then been performed using the extracted features as input to a back-propagation neural network. A new method that combines genetic algorithms and morphological filters has also been investigated. In this case, the classification is based on a correlation measure. Very encouraging results have been obtained in pilot experiments using a small subset of images as the training set. Experimental results indicate the effectiveness of the proposed methods. Cancerous tissue was correctly classified in 92.5% of the cases.
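
    The granulometric features described above can be illustrated with a short sketch. Below is a minimal numpy-only sketch of a binary granulometry (openings with square structuring elements of increasing size, whose normalized area differences form the pattern spectrum). The function names and the square structuring elements are illustrative assumptions, not the paper's exact configuration, and `np.roll` wraps at the image borders, so objects must stay away from the edges:

```python
import numpy as np

def erode(img, k):
    # binary erosion with a (2k+1)x(2k+1) square structuring element,
    # implemented as a minimum filter over shifted copies of the image
    out = img.copy()
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out = np.minimum(out, np.roll(np.roll(img, dy, axis=0), dx, axis=1))
    return out

def dilate(img, k):
    # binary dilation: the dual maximum filter
    out = img.copy()
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out = np.maximum(out, np.roll(np.roll(img, dy, axis=0), dx, axis=1))
    return out

def opening(img, k):
    # morphological opening removes objects smaller than the element
    return dilate(erode(img, k), k)

def pattern_spectrum(img, max_k=4):
    # area lost at each opening size, normalized by the original area;
    # the moments of this distribution serve as texture features
    areas = [opening(img, k).sum() for k in range(max_k + 1)]
    return [(areas[i] - areas[i + 1]) / areas[0] for i in range(max_k)]
```

    A 3x3 blob disappears at opening size 2 while a 10x10 blob survives, so the corresponding bin of the pattern spectrum records the small blob's area fraction.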

  16. Using fractal analysis of thermal signatures for thyroid disease evaluation

    NASA Astrophysics Data System (ADS)

    Gavriloaia, Gheorghe; Sofron, Emil; Gavriloaia, Mariuca-Roxana; Ghemigean, Adina-Mariana

    2010-11-01

    The skin is the largest organ of the body, and it protects against heat, light, injury and infection. Skin temperature is an important parameter for diagnosing diseases. Thermal analysis is non-invasive, painless, and relatively inexpensive, showing great research potential. Since the thyroid regulates the metabolic rate, it is intimately connected to body temperature; moreover, any modification of its function generates a specific thermal image on the neck skin. Thermal signatures are often irregular in size and shape. Euclidean geometry is not able to characterize their shape for different thyroid diseases, so fractal geometry is used in this paper. Different thyroid diseases generate different shapes, and their complexity is evaluated by a specific mathematical approach, fractal analysis, in order to evaluate self-similarity and lacunarity. Two kinds of thyroid diseases, hyperthyroidism and papillary cancer, are analyzed in this paper. The results are encouraging and support continued research on the use of thermal signatures in the early diagnosis of thyroid diseases.

  17. Digital mammography, cancer screening: Factors important for image compression

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria

    1993-01-01

    The use of digital mammography for breast cancer screening poses several novel problems, such as the development of digital sensors; computer-assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition; and compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets, and image compression methods will therefore play a significant role in the image processing and analysis performed by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments in digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community in this medical application and to identify possible dual-use technologies within the NASA centers.

  18. Wavelet/scalar quantization compression standard for fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.

    1996-06-12

    US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
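
    The wavelet/scalar quantization pipeline can be sketched in a few lines. The real FBI standard uses a 9/7 biorthogonal filter bank, a 64-subband decomposition, subband-dependent dead-zone quantizers, and Huffman entropy coding; the sketch below substitutes a one-level Haar transform and a single uniform quantizer purely to illustrate the structure:

```python
import numpy as np

def haar2d(x):
    # one level of a 2D Haar transform producing four subbands
    # (LL, LH, HL, HH); x must have even dimensions
    a = (x[0::2] + x[1::2]) / 2          # vertical average
    d = (x[0::2] - x[1::2]) / 2          # vertical detail
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def quantize(band, step):
    # uniform scalar quantization: the integer indices are what the
    # entropy coder actually encodes
    return np.round(band / step).astype(int)

def dequantize(idx, step):
    return idx * step
```

    The round-trip error per coefficient is bounded by half the quantizer step, which is how per-subband step sizes control the rate/quality trade-off.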

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alamudun, Folami T.; Yoon, Hong-Jun; Hudson, Kathy

    Purpose: The objective of this study was to assess the complexity of human visual search activity during mammographic screening using fractal analysis and to investigate its relationship with case and reader characteristics. Methods: The study was performed for the task of mammographic screening with simultaneous viewing of four coordinated breast views, as typically done in clinical practice. Eye-tracking data and diagnostic decisions collected for 100 mammographic cases (25 normal, 25 benign, 50 malignant) and 10 readers (three board-certified radiologists and seven radiology residents) formed the corpus of data for this study. The fractal dimension of the readers’ visual scanning patterns was computed with the Minkowski–Bouligand box-counting method and used as a measure of gaze complexity. Individual-factor and group-based interaction ANOVA analyses were performed to study the association between fractal dimension, case pathology, breast density, and reader experience level. The consistency of the observed trends depending on gaze data representation was also examined. Results: Case pathology, breast density, reader experience level, and individual reader differences are all independent predictors of visual scanning pattern complexity when screening for breast cancer. No higher-order effects were found to be significant. Conclusions: Fractal characterization of visual search behavior during mammographic screening is dependent on case properties and image reader characteristics.
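
    The Minkowski–Bouligand box-counting step named in the abstract is straightforward to sketch for a 2D point set such as a gaze scanpath. This is a generic numpy sketch; the function name and the choice of box sizes are mine, not the study's:

```python
import numpy as np

def box_counting_dimension(points, sizes=(1, 2, 4, 8, 16, 32)):
    # Minkowski-Bouligand dimension: count the boxes of side s occupied
    # by the point set, then fit the slope of log N(s) versus log(1/s)
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.min(axis=0)               # shift into the first quadrant
    counts = []
    for s in sizes:
        boxes = {tuple(b) for b in np.floor(pts / s).astype(int)}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

    Points sampled along a line give a dimension near 1 and a filled region near 2; a visual scanpath falls in between, which is what makes the slope usable as a gaze-complexity measure.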

  20. Application of fractal and grey level co-occurrence matrix analysis in evaluation of brain corpus callosum and cingulum architecture.

    PubMed

    Pantic, Igor; Dacic, Sanja; Brkic, Predrag; Lavrnja, Irena; Pantic, Senka; Jovanovic, Tomislav; Pekovic, Sanja

    2014-10-01

    The aim of this study was to assess the discriminatory value of fractal and grey level co-occurrence matrix (GLCM) analysis methods in standard microscopy analysis of two histologically similar brain white mass regions that have different nerve fiber orientation. A total of 160 digital micrographs of thionine-stained rat brain white mass were acquired using a Pro-MicroScan DEM-200 instrument. Eighty micrographs from the anterior corpus callosum and eighty from the anterior cingulum areas of the brain were analyzed. The micrographs were evaluated using the National Institutes of Health ImageJ software and its plugins. For each micrograph, seven parameters were calculated: angular second moment, inverse difference moment, GLCM contrast, GLCM correlation, GLCM variance, fractal dimension, and lacunarity. Using receiver operating characteristic (ROC) analysis, the highest discriminatory value was determined for the inverse difference moment (IDM): the area under the ROC curve equaled 0.925, and for the criterion IDM≤0.610 the sensitivity and specificity were 82.5% and 87.5%, respectively. Most of the other parameters also showed good sensitivity and specificity. The results indicate that GLCM and fractal analysis methods, when applied together in brain histology analysis, are highly capable of discriminating white mass structures that have different axonal orientation.
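
    Two of the parameters listed above, the co-occurrence matrix and the inverse difference moment, can be sketched directly. This is a minimal numpy sketch for a single pixel offset, without the symmetrization or direction averaging that ImageJ texture plugins typically add:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    # normalized grey-level co-occurrence matrix for one offset (dx, dy);
    # img must already be quantized to integers in [0, levels)
    p = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            p[img[y, x], img[y + dy, x + dx]] += 1
    return p / p.sum()

def inverse_difference_moment(p):
    # IDM (local homogeneity): sum_ij p(i, j) / (1 + (i - j)^2);
    # equals 1.0 for a perfectly uniform image, smaller for rough textures
    i, j = np.indices(p.shape)
    return float((p / (1.0 + (i - j) ** 2)).sum())
```

    A constant image puts all co-occurrence mass on the diagonal (IDM = 1), while a checkerboard puts it one level off the diagonal (IDM = 0.5), which is the sense in which IDM separates smooth from rough texture.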

  1. A Web platform for the interactive visualization and analysis of the 3D fractal dimension of MRI data.

    PubMed

    Jiménez, J; López, A M; Cruz, J; Esteban, F J; Navas, J; Villoslada, P; Ruiz de Miras, J

    2014-10-01

    This study presents a Web platform (http://3dfd.ujaen.es) for computing and analyzing the 3D fractal dimension (3DFD) from volumetric data in an efficient, visual and interactive way. The Web platform is specially designed for working with magnetic resonance images (MRIs) of the brain. The program estimates the 3DFD by calculating the 3D box-counting of the entire volume of the brain, and also of its 3D skeleton. All of this is done in a graphical, fast and optimized way by using novel technologies like CUDA and WebGL. The usefulness of the Web platform presented is demonstrated by its application in a case study where an analysis and characterization of groups of 3D MR images is performed for three neurodegenerative diseases: Multiple Sclerosis, Intrauterine Growth Restriction and Alzheimer's disease. To the best of our knowledge, this is the first Web platform that allows the users to calculate, visualize, analyze and compare the 3DFD from MRI images in the cloud.

  2. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, in which each pixel corresponding to an edge pixel has a value equal to the value of a pixel of the image array selected in response to that edge pixel, and each pixel not corresponding to an edge pixel has a value which is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques are also described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.

  3. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, in which each pixel corresponding to an edge pixel has a value equal to the value of a pixel of the image array selected in response to that edge pixel, and each pixel not corresponding to an edge pixel has a value which is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques are also described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.
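
    The filling step that both patent records describe — reconstructing non-edge pixels by solving Laplace's equation with the edge pixels held fixed — can be sketched with a simple iterative solver. The patent uses a multi-grid technique; plain Jacobi iteration is shown here only because it fits in a few lines (`np.roll` wraps at the borders, so the fixed pixels should enclose the region being filled):

```python
import numpy as np

def fill_laplace(values, mask, iters=3000):
    # values: pixel values, valid where mask is True (the edge pixels);
    # unknown pixels relax toward the average of their four neighbours,
    # the discrete fixed point of Laplace's equation
    f = np.where(mask, values, values[mask].mean())   # flat initial guess
    for _ in range(iters):
        avg = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
               np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
        f = np.where(mask, values, avg)               # keep edges pinned
    return f
```

    With the left column pinned to 0 and the right column to 4, the interior converges to a linear ramp, the 1D harmonic solution; a multi-grid solver reaches the same fixed point in far fewer sweeps.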

  4. Hyperspectral data compression using a Wiener filter predictor

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Stocker, Alan D.

    2013-09-01

    The application of compression to hyperspectral image data is a significant technical challenge. A primary bottleneck in disseminating data products to the tactical user community is the limited communication bandwidth between the airborne sensor and the ground station receiver. This report summarizes the newly-developed "Z-Chrome" algorithm for lossless compression of hyperspectral image data. A Wiener filter prediction framework is used as a basis for modeling new image bands from already-encoded bands. The resulting residual errors are then compressed using available state-of-the-art lossless image compression functions. Compression performance is demonstrated using a large number of test data collected over a wide variety of scene content from six different airborne and spaceborne sensors.
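
    The prediction framework described above reduces, per band, to fitting linear weights on already-encoded bands and entropy-coding only the residual. Below is a generic least-squares (Wiener-style) sketch in numpy; the actual Z-Chrome weight estimation and residual coder are not detailed in the abstract, so everything here is a stand-in:

```python
import numpy as np

def predict_band(prev_bands, target):
    # fit an affine least-squares predictor of the target band from
    # already-encoded bands; in a lossless codec only the residual
    # target - prediction is passed to the entropy coder
    X = np.stack([b.ravel() for b in prev_bands], axis=1)
    X = np.hstack([X, np.ones((X.shape[0], 1))])      # affine offset term
    w, *_ = np.linalg.lstsq(X, target.ravel(), rcond=None)
    return (X @ w).reshape(target.shape)
```

    Because adjacent spectral bands are strongly correlated, the residuals have far lower entropy than the raw band, which is where the lossless compression gain comes from.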

  5. Impact of lossy compression on diagnostic accuracy of radiographs for periapical lesions

    NASA Technical Reports Server (NTRS)

    Eraso, Francisco E.; Analoui, Mostafa; Watson, Andrew B.; Rebeschini, Regina

    2002-01-01

    OBJECTIVES: The purpose of this study was to evaluate lossy Joint Photographic Experts Group compression for endodontic pretreatment digital radiographs. STUDY DESIGN: Fifty clinical charge-coupled device-based digital radiographs depicting periapical areas were selected. Each image was compressed at compression ratios of 2, 4, 8, 16, 32, 48, and 64. One root per image was marked for examination. Images were randomized and viewed by four clinical observers under standardized viewing conditions. Each observer read the image set three times, with at least two weeks between each reading. Three pre-selected sites per image (mesial, distal, apical) were scored on a five-point confidence scale. A panel of three examiners scored the uncompressed images, with a consensus score for each site. The consensus score was used as the baseline for assessing the impact of lossy compression on the diagnostic values of the images. The mean absolute error between consensus and observer scores was computed for each observer, site, and reading session. RESULTS: Balanced one-way analysis of variance for all observers indicated that for compression ratios 48 and 64, there was a significant difference between the mean absolute error of uncompressed and compressed images (P <.05). After converting the five-point scores to two-level diagnostic values, the diagnostic accuracy was strongly correlated (R(2) = 0.91) with the compression ratio. CONCLUSION: The results of this study suggest that high compression ratios can have a severe impact on the diagnostic quality of digital radiographs for the detection of periapical lesions.

  6. Image Quality Assessment of JPEG Compressed Mars Science Laboratory Mastcam Images using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Kerner, H. R.; Bell, J. F., III; Ben Amor, H.

    2017-12-01

    The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
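
    The automatic labeling idea above rests on the joint entropy of a compressed/uncompressed image pair. This is a minimal numpy sketch of that statistic; the bin count and any mapping from entropy to a quality label are choices the abstract does not specify:

```python
import numpy as np

def joint_entropy(a, b, bins=16):
    # joint entropy (in bits) of co-occurring pixel values of two images;
    # similar images concentrate mass near the diagonal of the joint
    # histogram, so heavier compression distortion raises the entropy
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

    Computing this between each compressed image and its lossless counterpart yields a training label with no human in the loop, which is the point of the proposed labeling scheme.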

  7. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
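
    The remapping idea above — transform the image into a lower-entropy state before coding — is easy to demonstrate with the differential pulse code modulation variant mentioned in the abstract. A numpy sketch with a zeroth-order entropy estimate (the paper's actual predictors and coder are more elaborate):

```python
import numpy as np

def entropy(x):
    # zeroth-order entropy in bits per sample
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def dpcm_remap(img):
    # horizontal DPCM: keep the first column, replace every other pixel
    # by its difference from the left neighbour; smooth imagery maps to
    # a peaked (low-entropy) difference distribution, and the original
    # is recovered exactly by a cumulative sum along each row
    d = img.astype(int).copy()
    d[:, 1:] = img[:, 1:].astype(int) - img[:, :-1].astype(int)
    return d
```

    On a smooth horizontal ramp the raw pixels are uniformly distributed while the differences are almost all 1, so the entropy drops sharply, which is exactly the effect the adaptive scheme exploits before arithmetic coding.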

  8. The effect of JPEG compression on automated detection of microaneurysms in retinal images

    NASA Astrophysics Data System (ADS)

    Cree, M. J.; Jelinek, H. F.

    2008-02-01

    As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts introduced are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy) it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images of various qualities of JPEG compression and the ability to predict the presence of diabetic retinopathy based on the detected presence of microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. The negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes and these may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.

  9. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored or memorized. The input image is divided into 4 blocks to compress and encrypt, then the pixels of two adjacent blocks are exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling the original row vectors of the circulant matrices with a logistic map. And the random matrices used in random pixel exchanging are bound with the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm and its acceptable compression performance.
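
    The key idea above — a compact key expanding into a full measurement matrix — can be sketched with a logistic map generating the first row of a circulant matrix. The map parameter and the ±1 binarization below are my assumptions for illustration; the paper's exact construction may differ:

```python
import numpy as np

def logistic_sequence(x0, n, mu=3.99):
    # chaotic logistic map x_{k+1} = mu * x_k * (1 - x_k); the seed x0
    # (and mu) play the role of the small, easily shared key
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = mu * x * (1 - x)
        xs[k] = x
    return xs

def circulant_measurement_matrix(x0, m, n):
    # binarize the chaotic sequence into a +/-1 row, build the circulant
    # matrix of its cyclic shifts, and keep the first m of the n rows
    row = np.sign(logistic_sequence(x0, n) - 0.5)
    C = np.stack([np.roll(row, i) for i in range(n)])
    return C[:m] / np.sqrt(m)
```

    Both sides regenerate the identical matrix from the tiny key, while even a slightly wrong seed produces a different matrix, and hence a failed reconstruction, due to the sensitivity of the chaotic map.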

  10. Fractal Analysis of Brain Blood Oxygenation Level Dependent (BOLD) Signals from Children with Mild Traumatic Brain Injury (mTBI).

    PubMed

    Dona, Olga; Noseworthy, Michael D; DeMatteo, Carol; Connolly, John F

    2017-01-01

    Conventional imaging techniques are unable to detect abnormalities in the brain following mild traumatic brain injury (mTBI). Yet patients with mTBI typically show delayed response on neuropsychological evaluation. Because fractal geometry represents complexity, we explored its utility in measuring temporal fluctuations of the brain resting state blood oxygen level dependent (rs-BOLD) signal. We hypothesized that there could be a detectable difference in rs-BOLD signal complexity between healthy subjects and mTBI patients, based on previous studies that associated reduction in signal complexity with disease. Fifteen subjects (13.4 ± 2.3 y/o) and 56 age-matched (13.5 ± 2.34 y/o) healthy controls were scanned using a GE Discovery MR750 3T MRI and 32-channel RF coil. Axial FSPGR-3D images were used to prescribe rs-BOLD (TE/TR = 35/2000 ms), acquired over 6 minutes. Motion correction was performed, and anatomical and functional images were aligned and spatially warped to the N27 standard atlas. Fractal analysis, performed on grey matter, was done by estimating the Hurst exponent using de-trended fluctuation analysis and signal summation conversion methods. Voxel-wise fractal dimension (FD) was calculated for every subject in the control group to generate mean and standard deviation maps for regional Z-score analysis. Voxel-wise validation of FD normality across controls was confirmed, and non-Gaussian voxels (3.05% over the brain) were eliminated from subsequent analysis. For each mTBI patient, regions where Z-score values were at least 2 standard deviations away from the mean (i.e. where |Z| > 2.0) were identified. In individual patients the frequently affected regions were the amygdala (p = 0.02), vermis (p = 0.03), caudate head (p = 0.04), hippocampus (p = 0.03), and hypothalamus (p = 0.04), all previously reported as dysfunctional after mTBI, but based on group analysis. It is well known that the brain is best modeled as a complex system. Therefore, a measure of complexity using rs-BOLD signal FD could provide an additional method to grade and monitor mTBI. Furthermore, this approach can be personalized, thus providing unique patient-specific assessment.
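
    The Hurst-exponent estimation named above (de-trended fluctuation analysis) can be sketched compactly. This is a generic first-order DFA in numpy; the window sizes are my choice, and the study's second estimator (signal summation conversion) is not shown:

```python
import numpy as np

def dfa_hurst(signal, scales=(8, 16, 32, 64)):
    # de-trended fluctuation analysis: integrate the mean-removed series,
    # remove a linear trend in windows of each scale s, and estimate the
    # Hurst exponent as the slope of log F(s) versus log s
    y = np.cumsum(signal - np.mean(signal))
    fluct = []
    for s in scales:
        n_win = len(y) // s
        mse = 0.0
        for w in range(n_win):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            mse += np.mean((seg - trend) ** 2)
        fluct.append(np.sqrt(mse / n_win))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope
```

    Uncorrelated noise yields an exponent near 0.5; voxel-wise deviations from the control-group distribution of this value are what the Z-score maps in the study quantify.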

  11. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation

    NASA Technical Reports Server (NTRS)

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos

    2015-01-01

    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used Compressive Sensing, where the compression is performed by matrix multiplications on the satellite and the image is reconstructed on the ground. This paper explains our methodology and demonstrates the results of the scheme, which achieves high-quality image compression that is robust to noise and corruption.
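
    The scheme described above — measure on the satellite with a matrix multiply, reconstruct on the ground — can be sketched with a generic sparse-recovery solver. The abstract does not give COxSwAIN's measurement matrix or reconstruction algorithm, so the Gaussian matrix and the iterative soft-thresholding (ISTA) solver below are stand-ins:

```python
import numpy as np

def ista(Phi, y, lam=0.02, iters=1000):
    # iterative soft-thresholding for the LASSO problem
    #   min_x 0.5 * ||Phi @ x - y||^2 + lam * ||x||_1,
    # a basic reconstruction routine for compressive sensing
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = x - Phi.T @ (Phi @ x - y) / L  # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x
```

    On the satellite only the cheap product y = Phi @ x is computed and downlinked; the expensive iterative solve happens on the ground, which is what makes the approach attractive for low-power hardware.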

  12. Novel approach to multispectral image compression on the Internet

    NASA Astrophysics Data System (ADS)

    Zhu, Yanqiu; Jin, Jesse S.

    2000-10-01

    Still image coding techniques such as JPEG have traditionally been applied to intra-plane images. Coding fidelity is typically used to measure the performance of intra-plane coding methods. In many imaging applications, it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes for further compression of spectral planes. Moreover, a mechanism for incorporating the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlations among planes based on the human visual system. The scheme achieves a high degree of compactness in the data representation and compression.

  13. Efficient Imaging and Real-Time Display of Scanning Ion Conductance Microscopy Based on Block Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing

    2014-07-01

    Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), and it is widely used for imaging soft samples owing to its many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve scanning speed tremendously by breaking through the Shannon sampling theorem, but it still requires considerable time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the additional unique benefit of enabling real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation are proposed to build the block compressive sensing model, and several experiments were carried out to verify the superiority of block compressive sensing in reducing imaging time and enabling real-time display in SICM imaging.

  14. Compressed Sensing for Body MRI

    PubMed Central

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh

    2016-01-01

    The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notions of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664

  15. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.

  16. Detecting Lung Diseases from Exhaled Aerosols: Non-Invasive Lung Diagnosis Using Fractal Analysis and SVM Classification

    PubMed Central

    Xi, Jinxiang; Zhao, Weizhong; Yuan, Jiayao Eddie; Kim, JongWon; Si, Xiuhua; Xu, Xiaowei

    2015-01-01

    Background Each lung structure exhales a unique pattern of aerosols, which can be used to detect and monitor lung diseases non-invasively. The challenges are accurately interpreting the exhaled aerosol fingerprints and quantitatively correlating them to the lung diseases. Objective and Methods In this study, we presented a paradigm of an exhaled aerosol test that addresses the above two challenges and shows promise for detecting the site and severity of lung diseases. This paradigm consists of two steps: image feature extraction using sub-regional fractal analysis and data classification using a support vector machine (SVM). Numerical experiments were conducted to evaluate the feasibility of the breath test in four asthmatic lung models. A high-fidelity image-CFD approach was employed to compute the exhaled aerosol patterns under different disease conditions. Findings By employing the 10-fold cross-validation method, we achieved 100% classification accuracy among four asthmatic models using an ideal 108-sample dataset and 99.1% accuracy using a more realistic 324-sample dataset. The fractal-SVM classifier has been shown to be robust, highly sensitive to structural variations, and inherently suitable for investigating aerosol-disease correlations. Conclusion For the first time, this study quantitatively linked the exhaled aerosol patterns with their underlying diseases and set the stage for the development of a computer-aided diagnostic system for non-invasive detection of obstructive respiratory diseases. PMID:26422016

  17. Iris Recognition: The Consequences of Image Compression

    NASA Astrophysics Data System (ADS)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  18. Measurement of Full Field Strains in Filament Wound Composite Tubes Under Axial Compressive Loading by the Digital Image Correlation (DIC) Technique

    DTIC Science & Technology

    2013-05-01


  19. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Astrophysics Data System (ADS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-07-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec.
Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.

  20. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec.
Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.

  1. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary Jo; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  2. Real-Time Aggressive Image Data Compression

    DTIC Science & Technology

    1990-03-31

    Project Title: Real-Time Aggressive Image Data Compression. Principal Investigators: Dr. Yih-Fang Huang and Dr. Ruey-wen Liu. The objective of the proposed research is to develop reliable algorithms that can achieve aggressive image data compression, implemented with higher degrees of modularity, concurrency, and higher levels of machine intelligence, thereby providing higher data-throughput rates.

  3. Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.
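The decomposition stage can be sketched with a one-level separable 3-D Haar transform standing in for ICER-3D's actual wavelet filters (the Haar kernel and single decomposition level are simplifying assumptions), followed by the mean-subtraction and sign-magnitude steps the summary describes:

```python
import numpy as np

def haar_step(a, axis):
    # one separable averaging/differencing pass along the given axis
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / 2.0
    hi = (a[0::2] - a[1::2]) / 2.0
    return np.moveaxis(np.concatenate([lo, hi]), 0, axis)

cube = np.random.default_rng(0).random((8, 8, 16))   # toy hyperspectral cube (2 spatial axes, 1 spectral)
coeffs = haar_step(haar_step(haar_step(cube, 0), 1), 2)

# subtract the mean from each spatial plane of the spatially low-pass corner
low = coeffs[:4, :4, :]           # view into coeffs, so the subtraction updates coeffs in place
low -= low.mean(axis=(0, 1), keepdims=True)

# sign-magnitude form: separate sign bits from magnitudes before entropy coding
signs, mags = np.signbit(coeffs), np.abs(coeffs)
```

The progressive ordering of the compressed stream is an encoding detail beyond this sketch.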

  4. Morphology and Fractal Characterization of Multiscale Pore Structures for Organic-Rich Lacustrine Shale Reservoirs

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Wu, Caifang; Zhu, Yanming; Chen, Shangbin; Liu, Shimin; Zhang, Rui

    Lacustrine shale gas has received considerable attention and has been playing an important role in unconventional natural gas production in China. In this study, multiple techniques, including total organic carbon (TOC) analysis, X-ray diffraction (XRD) analysis, field emission scanning electron microscopy (FE-SEM), helium pycnometry and low-pressure N2 adsorption, have been applied to characterize the pore structure of lacustrine shale of the Upper Triassic Yanchang Formation from the Ordos Basin. The results show that organic matter (OM) pores are the most important type dominating the pore system, while interparticle (interP) pores, intraparticle (intraP) pores and microfractures are also usually observed between or within different minerals. The shapes of OM pores are less complex compared with the other two pore types based on the Image-Pro Plus software analysis. In addition, the specific surface area ranges from 2.76 m2/g to 10.26 m2/g and the pore volume varies between 0.52 cm3/100g and 1.31 cm3/100g. Two fractal dimensions D1 and D2 were calculated using the Frenkel-Halsey-Hill (FHH) method, with D1 varying between 2.510 and 2.632, and D2 varying between 2.617 and 2.814. Further investigation indicates that the fractal dimensions exhibit positive correlations with TOC contents, whereas there is no definite relationship observed between fractal dimensions and clay minerals. Meanwhile, the fractal dimensions increase with the increase in specific surface area, and are negatively correlated with the pore size.
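The two FHH dimensions reported above come from straight-line fits of ln V against ln(ln(p0/p)) over two relative-pressure windows of the N2 isotherm. A minimal sketch on synthetic isotherm data; the capillary-condensation relation D = 3 + slope and the pressure window used here are assumptions:

```python
import numpy as np

# synthetic N2 adsorption isotherm obeying the FHH law: ln V = C + A * ln(ln(p0/p))
D_true = 2.6
A = D_true - 3.0                      # capillary-condensation regime: slope A = D - 3
rel_p = np.linspace(0.45, 0.95, 30)   # relative pressure p/p0 in the high-pressure branch
x = np.log(np.log(1.0 / rel_p))
ln_v = 1.2 + A * x                    # C = 1.2 is an arbitrary constant

# fit the slope and recover the fractal dimension
slope, _ = np.polyfit(x, ln_v, 1)
D_est = 3.0 + slope
```

Fitting the two pressure windows separately gives the paper's D1 and D2; the alternative van der Waals regime uses D = 3 + 3*slope instead.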

  5. Definition of fractal topography to essential understanding of scale-invariance

    NASA Astrophysics Data System (ADS)

    Jin, Yi; Wu, Ying; Li, Hui; Zhao, Mengyu; Pan, Jienan

    2017-04-01

    Fractal behavior is scale-invariant and widely characterized by fractal dimension. However, the correspondence between them is that a fractal behavior uniquely determines a fractal dimension, while a fractal dimension can be related to many possible fractal behaviors. Therefore, fractal behavior is independent of the fractal generator and its geometries, spatial pattern, and statistical properties in addition to scale. To mathematically describe fractal behavior, we propose a novel concept of fractal topography defined by two scale-invariant parameters, scaling lacunarity (P) and scaling coverage (F). The scaling lacunarity is defined as the scale ratio between two successive fractal generators, whereas the scaling coverage is defined as the number ratio between them. Consequently, a strictly scale-invariant definition for self-similar fractals can be derived as D = log F / log P. To reflect the direction-dependence of fractal behaviors, we introduce another parameter Hxy, a general Hurst exponent, which is analytically expressed by Hxy = log Px / log Py, where Px and Py are the scaling lacunarities in the x and y directions, respectively. Thus, a unified definition of fractal dimension is proposed for arbitrary self-similar and self-affine fractals by averaging the fractal dimensions of all directions in a d-dimensional space. Our definitions provide a theoretical, mechanistic basis for understanding the essentials of the scale-invariant property, which reduces the complexity of modeling fractals.
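With these definitions, the dimensions of familiar self-similar fractals follow directly from D = log F / log P; for instance, the Sierpinski carpet has scaling lacunarity P = 3 (each generator step shrinks the scale by 3) and scaling coverage F = 8 (8 of the 9 sub-cells are kept). A quick check; the example fractals are illustrative and not taken from the paper:

```python
import math

def fractal_dimension(F, P):
    # D = log F / log P for a self-similar fractal (scaling coverage F, scaling lacunarity P)
    return math.log(F) / math.log(P)

print(fractal_dimension(8, 3))   # Sierpinski carpet, ≈ 1.893
print(fractal_dimension(4, 3))   # Koch curve, ≈ 1.262
print(fractal_dimension(3, 2))   # Sierpinski triangle, ≈ 1.585
```

The anisotropic case follows the same pattern: Hxy = log Px / log Py with the per-axis scaling lacunarities in place of a single P.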

  6. Advances in the Quantitative Characterization of the Shape of Ash-Sized Pyroclast Populations: Fractal Analyses Coupled to Micro- and Nano-Computed Tomography Techniques

    NASA Astrophysics Data System (ADS)

    Rausch, J.; Vonlanthen, P.; Grobety, B. H.

    2014-12-01

    The quantification of shape parameters in pyroclasts is fundamental to infer the dominant type of magma fragmentation (magmatic vs. phreatomagmatic), as well as the behavior of volcanic plumes and clouds in the atmosphere. In a case study aiming at reconstructing the fragmentation mechanisms triggering maar eruptions in two geologically and compositionally distinctive volcanic fields (West and East Eifel, Germany), the shapes of a large number of ash particle contours obtained from SEM images were analyzed by a dilation-based fractal method. Volcanic particle contours are pseudo-fractals showing mostly two distinct slopes in Richardson plots related to the fractal dimensions D1 (small-scale "textural" dimension) and D2 (large-scale "morphological" dimension). The validity of the data obtained from 2D sections was tested by analysing SEM micro-CT slices of one particle cut in different orientations and positions. Results for West Eifel maar particles yield large D1 values (> 1.023), resembling typical values of magmatic particles, which are characterized by a complex shape, especially at small scales. In contrast, the D1 values of ash particles from one East Eifel maar deposit are much smaller, coinciding with the fractal dimensions obtained from phreatomagmatic end-member particles. These quantitative morphological analyses suggest that the studied maar eruptions were triggered by two different fragmentation processes: phreatomagmatic in the East Eifel and magmatic in the West Eifel. The application of fractal analysis to quantitatively characterize the shape of pyroclasts and the linking of fractal dimensions to specific fragmentation processes has turned out to be a very promising tool for studying the fragmentation history of any volcanic eruption. The next step is to extend morphological analysis of volcanic particles to 3 dimensions. 
SEM micro-CT, already applied in this study, offers the required resolution, but is not suitable for the analysis of a large number of particles. Newly released nano-CT scanners, however, allow the simultaneous analysis of a statistically relevant number of particles (in the hundreds). Preliminary results of a first trial will be presented.

  7. Is the co-seismic slip distribution fractal?

    NASA Astrophysics Data System (ADS)

    Milliner, Christopher; Sammis, Charles; Allam, Amir; Dolan, James

    2015-04-01

    Co-seismic along-strike slip heterogeneity is widely observed for many surface-rupturing earthquakes as revealed by field and high-resolution geodetic methods. However, this co-seismic slip variability is currently a poorly understood phenomenon. Key unanswered questions include: What are the characteristics and underlying causes of along-strike slip variability? Do the properties of slip variability change from fault to fault, along strike, or at different scales? We cross-correlate optical, pre- and post-event air photos using the program COSI-Corr to measure the near-field, surface deformation pattern of the 1992 Mw 7.3 Landers and 1999 Mw 7.1 Hector Mine earthquakes at high resolution. We produce the co-seismic slip profiles of both events from over 1,000 displacement measurements and observe consistent along-strike slip variability. Although the observed slip heterogeneity seems apparently complex and disordered, a spectral analysis reveals that the slip distributions are indeed self-affine fractals; i.e., slip exhibits a consistent degree of irregularity at all observable length scales, with a 'short memory', and is not random. We find a fractal dimension of 1.58 and 1.75 for the Landers and Hector Mine earthquakes, respectively, indicating that slip is more heterogeneous for the Hector Mine event. Fractal slip is consistent with both dynamic and quasi-static numerical simulations that use non-planar faults, which in turn cause heterogeneous along-strike stress, and we attribute the observed fractal slip to fault surfaces of fractal roughness. As fault surfaces are known to smooth over geologic time due to abrasional wear and fracturing, we also test whether the fractal properties of slip distributions alter between earthquakes from immature to mature fault systems.
We will present results that test this hypothesis by using the optical image correlation technique to measure historic, co-seismic slip distributions of earthquakes from structurally mature, large cumulative displacement faults and compare these slip distributions to those from immature fault systems. Our results have fundamental implications for an understanding of slip heterogeneity and the behavior of the rupture process.
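The spectral analysis described above exploits the self-affine relation between a slip profile's power spectrum P(k) ∝ k^(−β) and its fractal dimension, commonly D = (5 − β)/2 for a 1-D profile. The spectral-synthesis construction and that particular D–β conversion are assumptions of this sketch, not the authors' exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta = 4096, 2.0                     # beta = 2 corresponds to D = 1.5 (a Brownian profile)

# spectral synthesis of a self-affine profile: amplitudes ~ k^(-beta/2), random phases
k = np.arange(1, n // 2 + 1)
spec = np.zeros(n // 2 + 1, dtype=complex)
spec[1:-1] = k[:-1] ** (-beta / 2) * np.exp(2j * np.pi * rng.random(n // 2 - 1))
spec[-1] = k[-1] ** (-beta / 2)         # Nyquist bin of a real signal must be real
slip = np.fft.irfft(spec, n)

# estimate beta from the periodogram slope, then convert to a fractal dimension
power = np.abs(np.fft.rfft(slip)[1:]) ** 2
beta_hat = -np.polyfit(np.log(k), np.log(power), 1)[0]
D = (5.0 - beta_hat) / 2.0
```

On measured slip profiles the periodogram is noisy, so the slope is usually fit over a restricted wavenumber band rather than the full range used here.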

  8. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA), in particular for the grayness functions of graytone images. Image-data compression based on the new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results alone and in combination with wavelets are discussed.…

  9. Performance evaluation of the multiple-image optical compression and encryption method by increasing the number of target images

    NASA Astrophysics Data System (ADS)

    Aldossari, M.; Alfalou, A.; Brosseau, C.

    2017-08-01

    In an earlier study [Opt. Express 22, 22349-22368 (2014)], a compression and encryption method that simultaneously compresses and encrypts closely resembling images was proposed and validated. This multiple-image optical compression and encryption (MIOCE) method is based on a special fusion of the spectra of the different target images in the spectral domain. To assess the capacity of the MIOCE method, we evaluate the influence of the number of target images, which allows us to determine the performance limitation of the method. To achieve this goal, we use a criterion based on the root mean square (RMS) [Opt. Lett. 35, 1914-1916 (2010)] and the compression ratio to determine the spectral-plane area. The different spectral areas are then merged in a single spectral plane. By choosing specific areas, we can compress together 38 images instead of the 26 possible with the classical MIOCE method. The quality of the reconstructed image is evaluated using the mean square error (MSE) criterion.

  10. Compression of multispectral fluorescence microscopic images based on a modified set partitioning in hierarchical trees

    NASA Astrophysics Data System (ADS)

    Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek

    2009-02-01

    Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital-photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; they optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques on 3D images. Herein we report our work on a new low-bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT, along with the correlation of insignificant wavelet coefficients, has been proposed to further exploit redundancy in high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship among the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands of the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.

  11. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    PubMed Central

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit-plane encoder (BPE). Finally, the BPE is merged into the Slepian-Wolf (SW) DSC strategy based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches. PMID:25110741

  12. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal can attempt the low-level authentication. The application of Orthogonal Matching Pursuit CS reconstruction, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform can result in the output of a remarkable peak in the central location of the nonlinear correlation coefficient distributions of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.
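The Orthogonal Matching Pursuit step can be sketched in its textbook form: greedily select the atom most correlated with the current residual, then refit by least squares on the selected support. The random sensing matrix and sparsity level below are assumptions, not the paper's parameters:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: approximate a k-sparse x from measurements y = A @ x."""
    residual, support = y.astype(float).copy(), []
    for _ in range(k):
        # greedy step: atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the selected support
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # random sensing matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]             # a 3-sparse signal standing in for the sparse certification data
x_hat = omp(A, A @ x_true, k=3)
```

With 40 measurements of a 3-sparse, 100-dimensional signal, OMP typically recovers the support exactly; the paper applies the same recovery inside its DWT/Arnold-transform pipeline.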

  13. Multispectral image compression based on DSC combined with CCSDS-IDC.

    PubMed

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit-plane encoder (BPE). Finally, the BPE is merged into the Slepian-Wolf (SW) DSC strategy based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches.

  14. Planning/scheduling techniques for VQ-based image compression

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of the data holdings of NASA are in the form of images, which will be accessed by users across the computer networks. Accessing the image data at its full resolution creates data traffic problems. Image browsing using a lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, VQ is most appropriate for this application since the decompression of VQ-compressed images is a table-lookup process that makes minimal additional demands on the user's computational resources. Lossy compression of image data requires expert-level knowledge in general and is not straightforward to use. This is especially true in the case of VQ. It involves the selection of appropriate codebooks for a given data set, vector dimensions for each compression ratio, etc. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.
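The abstract's point that VQ decompression is a mere table lookup is easy to see in a minimal sketch; the k-means codebook training (a stand-in for LBG codebook design) and the 2x2 block size are assumptions, not details of the described system:

```python
import numpy as np

def train_codebook(vectors, k, iters=15, seed=0):
    # plain k-means as a stand-in for LBG codebook design
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = dists.argmin(axis=1)
        for j in range(k):
            members = vectors[nearest == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

rng = np.random.default_rng(1)
img = rng.random((16, 16))
blocks = img.reshape(8, 2, 8, 2).swapaxes(1, 2).reshape(-1, 4)   # 64 blocks of 2x2 pixels

codebook = train_codebook(blocks, k=8)
indices = ((blocks[:, None, :] - codebook[None]) ** 2).sum(axis=2).argmin(axis=1)

# decompression is a pure table lookup, exactly the property the abstract highlights
decoded = codebook[indices].reshape(8, 8, 2, 2).swapaxes(1, 2).reshape(16, 16)
```

The encoder transmits one small index per block plus the codebook, while the browsing client only indexes into the table, which is why VQ suits low-resource consumers.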

  15. Automated selection of trabecular bone regions in knee radiographs.

    PubMed

    Podsiadlo, P; Wolski, M; Stachowiak, G W

    2008-05-01

    Osteoarthritic (OA) changes in knee joints can be assessed by analyzing the structure of trabecular bone (TB) in the tibia. This analysis is performed on TB regions selected manually by a human operator on x-ray images. Manual selection is time-consuming, tedious, and expensive. Even if a radiologist expert or highly trained person is available to select regions, high inter- and intraobserver variabilities are still possible. A fully automated image segmentation method was, therefore, developed to select the bone regions for numerical analyses of changes in bone structures. The newly developed method consists of image preprocessing, delineation of cortical bone plates (active shape model), and location of regions of interest (ROI). The method was trained on an independent set of 40 x-ray images. Automatically selected regions were compared to the "gold standard" that contains ROIs selected manually by a radiologist expert on 132 x-ray images. All images were acquired from subjects locked in a standardized standing position using a radiography rig. The size of each ROI is 12.8 x 12.8 mm. The automated method results showed a good agreement with the gold standard [similarity index (SI) = 0.83 (medial) and 0.81 (lateral); offset = [-1.78, 1.27] x [-0.65, 0.26] mm (medial) and [-2.15, 1.59] x [-0.58, 0.52] mm (lateral)]. Bland and Altman plots were constructed for fractal signatures, and changes of fractal dimensions (FD) with region offsets between the gold standard and automatically selected regions were calculated. The plots showed a random scatter, and the 95% confidence intervals were (-0.006, 0.008) and (-0.001, 0.011). The changes of FDs with region offsets were less than 0.035. Previous studies showed that differences in FDs between non-OA and OA bone regions were greater than 0.05. ROIs were also selected by a second radiologist and then evaluated. Results indicated that the newly developed method can replace a human operator and produces bone regions with an accuracy that is sufficient for fractal analyses of bone texture.
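The similarity index used above is commonly the Dice overlap, SI = 2|A ∩ B| / (|A| + |B|); that this study used exactly that form is an assumption here. A minimal check on two binary region masks:

```python
import numpy as np

def similarity_index(a, b):
    # Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# hypothetical 40x40-pixel ROIs: the automatic one is offset 4 rows from the gold standard
auto = np.zeros((100, 100), dtype=bool); auto[20:60, 20:60] = True
gold = np.zeros((100, 100), dtype=bool); gold[24:64, 20:60] = True
print(similarity_index(auto, gold))   # overlap = 36*40 px → SI = 2*1440/3200 = 0.9
```

An SI of 1 means perfect agreement; the study's 0.81-0.83 values correspond to small offsets of the kind quantified in its Bland-Altman analysis.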

  16. Effect of data compression on diagnostic accuracy in digital hand and chest radiography

    NASA Astrophysics Data System (ADS)

    Sayre, James W.; Aberle, Denise R.; Boechat, Maria I.; Hall, Theodore R.; Huang, H. K.; Ho, Bruce K. T.; Kashfian, Payam; Rahbar, Guita

    1992-05-01

    Image compression is essential to handle a large volume of digital images, including CT, MR, CR, and digitized films, in a digital radiology operation. The full-frame bit allocation using the cosine transform technique developed during the last few years has been proven to be an excellent irreversible image compression method. This paper describes the effect of using the hardware compression module on diagnostic accuracy in hand radiographs with subperiosteal resorption and chest radiographs with interstitial disease. Receiver operating characteristic analysis using 71 hand radiographs and 52 chest radiographs with five observers each demonstrates that there is no statistically significant difference in diagnostic accuracy between the original films and the compressed images with a compression ratio as high as 20:1.
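The full-frame cosine-transform method evaluated above can be sketched with an orthonormal DCT-II plus simple largest-coefficient truncation in place of the actual bit-allocation hardware; the frame size and the keep-top-5% rule (roughly a 20:1 ratio) are illustrative assumptions:

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis as an n x n matrix
    i = np.arange(n)[None, :]
    k = np.arange(n)[:, None]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] /= np.sqrt(2.0)
    return m

n = 64
C = dct_matrix(n)
img = np.random.default_rng(3).random((n, n))   # stand-in for a digitized radiograph

coeffs = C @ img @ C.T                          # full-frame 2-D DCT
thresh = np.quantile(np.abs(coeffs), 0.95)      # keep only the top 5% of coefficients (~20:1)
approx = C.T @ np.where(np.abs(coeffs) >= thresh, coeffs, 0.0) @ C
```

Real bit allocation quantizes the retained coefficients adaptively rather than zeroing the rest outright, but the irreversibility and the energy compaction it relies on are the same.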

  17. Fractals: To Know, to Do, to Simulate.

    ERIC Educational Resources Information Center

    Talanquer, Vicente; Irazoque, Glinda

    1993-01-01

    Discusses the development of fractal theory and suggests fractal aggregates as an attractive alternative for introducing fractal concepts. Describes methods for producing metallic fractals and a computer simulation for drawing fractals. (MVL)

  18. Architecture for one-shot compressive imaging using computer-generated holograms.

    PubMed

    Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D

    2016-09-10

    We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.

  19. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system, together with the adaptive priority controller, are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.

  20. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed and implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's image decorrelation and entropy coding performance, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance in terms of speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281
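
    The abstract names adaptive Golomb-Rice coding as the entropy stage. The core Rice code with a fixed parameter k can be sketched as follows (a minimal illustration; the adaptive selection of k from running residual statistics, and the interpolation predictor itself, are omitted):

```python
def rice_encode(values, k):
    """Golomb-Rice encode non-negative integers with parameter k.

    Each value n is split into a quotient q = n >> k (coded in unary as
    q ones plus a terminating zero) and the k low-order bits in binary.
    """
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.extend([1] * q + [0])                             # unary quotient
        bits.extend((r >> i) & 1 for i in reversed(range(k)))  # k-bit remainder
    return bits

def rice_decode(bits, k, count):
    """Invert rice_encode, reading `count` values from the bit list."""
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:        # read the unary quotient
            q += 1
            i += 1
        i += 1                     # skip the 0 terminator
        r = 0
        for _ in range(k):
            r = (r << 1) | bits[i]
            i += 1
        out.append((q << k) | r)
    return out

data = [3, 7, 0, 12, 5]
coded = rice_encode(data, k=2)
decoded = rice_decode(coded, k=2, count=len(data))
```

    The code is cheap in both directions (shifts and comparisons only), which is what makes it attractive for small-footprint onboard hardware.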

  1. JPEG vs. JPEG 2000: an objective comparison of image encoding quality

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Farzad; Chamik, Matthieu; Winkler, Stefan

    2004-11-01

    This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.
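
    PSNR, used here and throughout these records as a baseline quality metric, is a one-line function of the mean squared error. A minimal sketch:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float('inf')   # identical images, i.e. lossless reconstruction
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 5.0             # constant error of 5 grey levels
value = psnr(ref, noisy)      # 10*log10(255^2 / 25) ≈ 34.15 dB
```

    The "infinite PSNR" reported for lossless schemes in these records corresponds to the zero-MSE branch above; the paper's point is that PSNR correlates poorly with perceived quality at high compression, which motivates MOS prediction.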

  2. Integrating dynamic and distributed compressive sensing techniques to enhance image quality of the compressive line sensing system for unmanned aerial vehicles application

    NASA Astrophysics Data System (ADS)

    Ouyang, Bing; Hou, Weilin; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Gong, Cuiling

    2017-07-01

    The compressive line sensing imaging system adopts distributed compressive sensing (CS) to acquire data and reconstruct images. Dynamic CS uses Bayesian inference to capture the correlated nature of the adjacent lines. An image reconstruction technique that incorporates dynamic CS in the distributed CS framework was developed to improve the quality of reconstructed images. The effectiveness of the technique was validated using experimental data acquired in an underwater imaging test facility. Results that demonstrate contrast and resolution improvements will be presented. The improved efficiency is desirable for unmanned aerial vehicles conducting long-duration missions.

  3. Fast computational scheme of image compression for 32-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Kasperovich, Leonid

    1994-01-01

    This paper presents a new computational scheme for image compression based on the discrete cosine transform (DCT), which underlies the JPEG and MPEG International Standards. The algorithm for the 2-D DCT computation uses integer operations only (register shifts and additions/subtractions); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as part of the Mars-96 International Space Project. It is shown that a fast software solution for 32-bit microprocessors can compete with DCT-based image compression hardware.
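
    The integer shift-and-add scheme itself is not spelled out in the abstract, but what it approximates is the standard separable 2-D DCT-II. A floating-point reference version (not the integer approximation) can be sketched with two matrix products:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix used by JPEG-style coders."""
    k = np.arange(n)
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] *= 1.0 / np.sqrt(2)        # DC row normalization
    return c * np.sqrt(2.0 / n)

def dct2(block):
    """Separable 2-D DCT: a row transform followed by a column transform."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

block = np.full((8, 8), 128.0)
coeffs = dct2(block)
# For a flat block, all energy lands in the DC coefficient: 8 * 128 = 1024.
```

    Separability is what makes fast integer schemes possible: the 2-D transform reduces to 1-D passes over rows and columns, each of which can be factored into shifts and additions.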

  4. Psychophysical Comparisons in Image Compression Algorithms.

    DTIC Science & Technology

    1999-03-01

    Excerpts from the thesis front matter and cited references: Naval Postgraduate School, Monterey, California. Thesis: Psychophysical Comparisons in Image Compression Algorithms, by Christopher J. Bodine, March 1999. Cited works include Leister, M., "Lossy Lempel-Ziv Algorithm for Large Alphabet Sources and Applications to Image Compression," IEEE Proceedings, v. I, pp. 225-228, September ...; pp. 1623-1642, September 1990; and Sanford, M.A., An Analysis of Data Compression Algorithms Used in the Transmission of Imagery, Master's Thesis, Naval Postgraduate School.

  5. Roundness variation in JPEG images affects the automated process of nuclear immunohistochemical quantification: correction with a linear regression model.

    PubMed

    López, Carlos; Jaén Martinez, Joaquín; Lejeune, Marylène; Escrivà, Patricia; Salvadó, Maria T; Pons, Lluis E; Alvaro, Tomás; Baucells, Jordi; García-Rojo, Marcial; Cugat, Xavier; Bosch, Ramón

    2009-10-01

    The volume of digital image (DI) storage continues to be an important problem in computer-assisted pathology. DI compression enables the size of files to be reduced but with the disadvantage of loss of quality. Previous results indicated that the efficiency of computer-assisted quantification of immunohistochemically stained cell nuclei may be significantly reduced when compressed DIs are used. This study attempts to show, with respect to immunohistochemically stained nuclei, which morphometric parameters may be altered by the different levels of JPEG compression, and the implications of these alterations for automated nuclear counts, and further, develops a method for correcting this discrepancy in the nuclear count. For this purpose, 47 DIs from different tissues were captured in uncompressed TIFF format and converted to 1:3, 1:23 and 1:46 compression JPEG images. Sixty-five positive objects were selected from these images, and six morphological parameters were measured and compared for each object in TIFF images and those of the different compression levels using a set of previously developed and tested macros. Roundness proved to be the only morphological parameter that was significantly affected by image compression. Factors to correct the discrepancy in the roundness estimate were derived from linear regression models for each compression level, thereby eliminating the statistically significant differences between measurements in the equivalent images. These correction factors were incorporated in the automated macros, where they reduced the nuclear quantification differences arising from image compression. Our results demonstrate that it is possible to carry out unbiased automated immunohistochemical nuclear quantification in compressed DIs with a methodology that could be easily incorporated in different systems of digital image analysis.
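
    The abstract does not give its exact roundness definition, but the common morphometric one is 4*pi*A/P^2, and the correction described is an inverted linear fit of compressed against uncompressed measurements. A sketch under those assumptions (the calibration numbers below are hypothetical, not the paper's data):

```python
import numpy as np

def roundness(area, perimeter):
    """Shape roundness 4*pi*A/P^2: exactly 1.0 for a circle, below 1 otherwise."""
    return 4.0 * np.pi * area / perimeter ** 2

def correction_factors(tiff_vals, jpeg_vals):
    """Fit jpeg = a * tiff + b over a calibration set of matched objects."""
    a, b = np.polyfit(tiff_vals, jpeg_vals, 1)
    return a, b

def correct(jpeg_vals, a, b):
    """Map JPEG-derived roundness back toward its uncompressed value."""
    return (np.asarray(jpeg_vals) - b) / a

# Hypothetical calibration: compression inflates roundness slightly.
tiff = np.array([0.60, 0.70, 0.80, 0.90])
jpeg = 1.05 * tiff + 0.02
a, b = correction_factors(tiff, jpeg)
restored = correct(jpeg, a, b)
```

    Once a and b are estimated per compression level, the correction is a constant-time adjustment inside the counting macros.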

  6. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    PubMed

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.
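
    Compressed-sensing recovery of the sparsest representation, as described above, is typically posed as an l1-regularized least-squares problem. A minimal iterative soft-thresholding (ISTA) sketch follows; it is a generic solver, not the authors' adaptive wavelet-tree scheme, and the problem sizes are illustrative:

```python
import numpy as np

def ista(A, y, lam=0.05, steps=500):
    """Iterative soft-thresholding for y = A x with x sparse.

    Minimizes ||A x - y||^2 / 2 + lam * ||x||_1 by a gradient step on the
    quadratic term followed by elementwise shrinkage.
    """
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x - (A.T @ (A @ x - y)) / L                          # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)    # shrink
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)    # 40 measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -0.8, 0.6]          # 3-sparse signal
x_hat = ista(A, A @ x_true, lam=0.02, steps=800)
```

    The iteration count is the "computation time" cost the abstract targets: adaptivity reduces both the number of measurements (rows of A) and the work per reconstruction.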

  7. Adjustable lossless image compression based on a natural splitting of an image into drawing, shading, and fine-grained components

    NASA Technical Reports Server (NTRS)

    Novik, Dmitry A.; Tilton, James C.

    1993-01-01

    The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.

  8. JPEG2000 Image Compression on Solar EUV Images

    NASA Astrophysics Data System (ADS)

    Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke

    2017-01-01

    For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
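
    The MSSIM metric used above averages a local structural-similarity statistic over sliding windows. A windowless single-pass variant shows the structure of that statistic (a sketch only; real MSSIM uses Gaussian-weighted 11x11 windows):

```python
import numpy as np

def global_ssim(x, y, peak=255.0):
    """SSIM computed once over whole images (MSSIM averages local windows)."""
    x = np.asarray(x, np.float64)
    y = np.asarray(y, np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2   # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
degraded = img + rng.normal(0, 20, size=img.shape)
s = global_ssim(img, degraded)    # noticeably below 1 for a noisy copy
```

    Unlike PSNR, the statistic compares luminance, contrast, and structure separately, which is why the paper evaluates both metrics on the same compressed EUV images.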

  9. Tomographic Image Compression Using Multidimensional Transforms.

    ERIC Educational Resources Information Center

    Villasenor, John D.

    1994-01-01

    Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)

  10. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems carry security risks and suffer encrypted-data expansion when nonlinear transformations are adopted directly. To overcome these weaknesses and reduce the transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by the hyper-chaotic system. The cycle shift operation changes the values of the pixels efficiently. As a nonlinear encryption system, the proposed cryptosystem decreases the volume of data to be transmitted and simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
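
    As an illustration of the cycle-shift stage only: the sketch below drives per-row cyclic shifts from a simple logistic map rather than the paper's hyper-chaotic system, and omits the 2D compressive-sensing measurement entirely (all names and parameters are hypothetical):

```python
import numpy as np

def chaotic_shifts(key, n, burn=100):
    """Shift amounts from a logistic map x -> 4x(1-x) seeded by the key."""
    x = key
    for _ in range(burn):                # discard the transient
        x = 4.0 * x * (1.0 - x)
    shifts = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        shifts.append(int(x * 256) % 256)
    return shifts

def cycle_shift(img, key, decrypt=False):
    """Cyclically shift each row by a key-driven amount (sign reversed to undo)."""
    out = img.copy()
    for r, s in enumerate(chaotic_shifts(key, img.shape[0])):
        out[r] = np.roll(out[r], -s if decrypt else s)
    return out

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
enc = cycle_shift(img, key=0.3456)
dec = cycle_shift(enc, key=0.3456, decrypt=True)
```

    The operation is exactly invertible given the key, which is the property the scheme relies on for lossless decryption after the CS reconstruction stage.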

  12. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.

  13. Observer performance assessment of JPEG-compressed high-resolution chest images

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Maitz, Glenn S.; King, Jill L.; Gennari, Rose C.; Gur, David

    1999-05-01

    The JPEG compression algorithm was tested on a set of 529 chest radiographs that had been digitized at a spatial resolution of 100 micrometer and contrast sensitivity of 12 bits. Images were compressed using five fixed 'psychovisual' quantization tables which produced average compression ratios in the range 15:1 to 61:1, and were then printed onto film. Six experienced radiologists read all cases from the laser printed film, in each of the five compressed modes as well as in the non-compressed mode. For comparison purposes, observers also read the same cases with reduced pixel resolutions of 200 micrometer and 400 micrometer. The specific task involved detecting masses, pneumothoraces, interstitial disease, alveolar infiltrates and rib fractures. Over the range of compression ratios tested, for images digitized at 100 micrometer, we were unable to demonstrate any statistically significant decrease (p greater than 0.05) in observer performance as measured by ROC techniques. However, the observers' subjective assessments of image quality did decrease significantly as image resolution was reduced and suggested a decreasing, but nonsignificant, trend as the compression ratio was increased. The seeming discrepancy between our failure to detect a reduction in observer performance, and other published studies, is likely due to: (1) the higher resolution at which we digitized our images; (2) the higher signal-to-noise ratio of our digitized films versus typical CR images; and (3) our particular choice of an optimized quantization scheme.

  14. Image compression using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.

    2017-11-01

    We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage, so we often apply data compression techniques to reduce the storage space an image consumes. One approach is to apply Singular Value Decomposition (SVD) to the image matrix. In this method, the digital image is given to SVD, which refactors it into three matrices. The singular values are used to rebuild the image, and at the end of this process the image is represented with a smaller set of values, reducing the storage space required. The goal is to achieve compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary m × n matrix, square or rectangular, invertible or not. Compression ratio and Mean Square Error are used as performance metrics.
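
    The rank-k reconstruction described above is a few lines of NumPy; storing the truncated factors takes k(m + n + 1) values instead of m*n. A minimal sketch:

```python
import numpy as np

def svd_compress(img, k):
    """Keep the k largest singular values: a rank-k approximation of the image."""
    u, s, vt = np.linalg.svd(np.asarray(img, np.float64), full_matrices=False)
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k]

# A rank-1 test image is represented exactly by a single singular value.
img = np.outer(np.arange(1.0, 9.0), np.arange(1.0, 9.0))
approx = svd_compress(img, k=1)
```

    For natural images the singular values decay quickly, so small k already captures most of the energy; the Mean Square Error of the approximation is the sum of the squared discarded singular values divided by m*n.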

  15. Watermarking of ultrasound medical images in teleradiology using compressed watermark

    PubMed Central

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq

    2016-01-01

    Abstract. The open accessibility of Internet-based medical images in teleradiology faces security threats due to nonsecured communication media. This paper discusses spatial-domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts: the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as the watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). Lossless compression of the watermark and embedding at the LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression, and their performances were compared on the basis of bit reduction and compression ratio. LZW performed best and was used to develop the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI's performance was compared with and found to be better than that of other watermarking schemes. PMID:26839914
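
    The hash-plus-LSB embedding described above can be sketched as follows (the LZW compression of the watermark is omitted, and the ROI/RONI sizes are illustrative):

```python
import hashlib
import numpy as np

def embed_lsb(pixels, bits):
    """Write watermark bits into the least significant bit of successive pixels."""
    out = pixels.copy()
    flat = out.ravel()                 # view into the copy
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b
    return out

def extract_lsb(pixels, n):
    """Read the first n embedded bits back out."""
    return [int(p) & 1 for p in pixels.ravel()[:n]]

rng = np.random.default_rng(3)
roi = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)      # diagnostic region
roni = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)   # background region

# Watermark: the SHA-256 digest of the ROI bytes, as a 256-bit string.
digest = hashlib.sha256(roi.tobytes()).digest()
bits = [(byte >> i) & 1 for byte in digest for i in range(8)]
marked = embed_lsb(roni, bits)
recovered = extract_lsb(marked, len(bits))
```

    At verification time the extracted hash is recomputed from the received ROI; any mismatch localizes tampering to the ROI, while the LSB changes in the RONI alter each pixel by at most one grey level.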

  16. Efficient image acquisition design for a cancer detection system

    NASA Astrophysics Data System (ADS)

    Nguyen, Dung; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet

    2013-09-01

    Modern imaging modalities, such as Computed Tomography (CT), Digital Breast Tomosynthesis (DBT) and Magnetic Resonance Tomography (MRT), are able to acquire volumetric images with an isotropic resolution in the micrometer (um) to millimeter (mm) range. When used in interactive telemedicine applications, these raw images require large amounts of storage and, consequently, a high-bandwidth data communication link. To reduce the cost of transmission and enable archiving, especially for medical applications, image compression is performed. Recent advances in compression algorithms have produced a vast array of data compression techniques, but because of the characteristics of these images there are challenges to overcome to transmit them efficiently. In addition, recent studies have raised concerns about the radiation risk of low-dose mammography for high-risk patients. Our preliminary studies indicate that performing compression before the analog-to-digital conversion (ADC) stage is more efficient than applying compression techniques after the ADC. The linearity of compressed sensing and the ability to perform digital signal processing (DSP) during data conversion open up a new area of research regarding the roles of sparsity in medical image registration, medical image analysis (for example, automatic image processing algorithms to efficiently extract the relevant information for the clinician), further X-ray dose reduction for mammography, and contrast enhancement.

  17. Multifractal spectrum and lacunarity as measures of complexity of osseointegration.

    PubMed

    de Souza Santos, Daniel; Dos Santos, Leonardo Cavalcanti Bezerra; de Albuquerque Tavares Carvalho, Alessandra; Leão, Jair Carneiro; Delrieux, Claudio; Stosic, Tatijana; Stosic, Borko

    2016-07-01

    The goal of this study is to contribute to a better quantitative description of the early stages of osseointegration, by application of fractal, multifractal, and lacunarity analysis. Fractal, multifractal, and lacunarity analysis are performed on scanning electron microscopy (SEM) images of titanium implants that were first subjected to different treatment combinations of i) sand blasting, ii) acid etching, and iii) exposure to calcium phosphate, and were then submersed in a simulated body fluid (SBF) for 30 days. All three numerical techniques are applied to the implant SEM images before and after SBF immersion, in order to provide a comprehensive set of common quantitative descriptors. It is found that implants subjected to different physicochemical treatments before submersion in SBF exhibit a rather similar level of complexity, while the great variety of crystal forms after SBF submersion reveals rather different quantitative measures (reflecting complexity), for different treatments. In particular, it is found that acid treatment, in most combinations with the other considered treatments, leads to a higher fractal dimension (more uniform distribution of crystals), lower lacunarity (lesser variation in gap sizes), and narrowing of the multifractal spectrum (smaller fluctuations on different scales). The current quantitative description has shown the capacity to capture the main features of complex images of implant surfaces, for several different treatments. Such quantitative description should provide a fundamental tool for future large scale systematic studies, considering the large variety of possible implant treatments and their combinations. Quantitative description of early stages of osseointegration on titanium implants with different treatments should help develop a better understanding of this phenomenon, in general, and provide basis for further systematic experimental studies.
Clinical practice should benefit from such studies in the long term, by more ready access to implants of higher quality.

  18. Binary video codec for data reduction in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias

    2013-02-01

    Wireless Visual Sensor Networks (WVSN) are formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSN include environmental monitoring, health care, industrial process monitoring, and stadium/airport monitoring for security reasons, among many others. The energy budget in outdoor applications of WVSN is limited to batteries, and frequent battery replacement is usually undesirable, so both the processing and the communication energy consumption of the VSN need to be optimized so that the network remains functional for a longer duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing as well as wide communication bandwidth for transmitting the results. Image processing algorithms must therefore be designed to be computationally simple while providing a high compression rate. For some applications of WVSN, the captured images can be segmented into bi-level images, and bi-level image coding methods then efficiently reduce the amount of information in these segmented images; their compression rate, however, is limited by the underlying compression algorithm. Hence there is a need for other intelligent and efficient algorithms which are computationally less complex and provide a better compression rate than bi-level image coding. Change coding is one such algorithm: it is computationally simple (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. Detecting and coding the Regions of Interest (ROIs) in the change frame further reduces the amount of information in the change frame.
    However, if the number of objects in the change frames rises above a certain level, the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSN. We propose to implement all three compression techniques, i.e., image coding, change coding and ROI coding, at the VSN and then select the smallest bit stream among the three results. In this way the compression performance of BVC never becomes worse than that of image coding. We conclude that the compression efficiency of BVC is always better than that of change coding, and always better than or equal to that of ROI coding and image coding.
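
    The select-the-smallest-bitstream idea behind BVC can be sketched with run counts standing in for coded bitstream length (the helper names are hypothetical; a real node would use an actual bi-level coder and also try ROI coding):

```python
import numpy as np

def run_length(bits):
    """Run-length encode a flat binary array as (first bit, list of run lengths)."""
    bits = np.asarray(bits).ravel()
    breaks = np.flatnonzero(np.diff(bits)) + 1
    runs = np.diff(np.concatenate(([0], breaks, [bits.size])))
    return int(bits[0]), runs.tolist()

def bvc_choose(prev, curr):
    """Send whichever is cheaper: the frame itself or the XOR change frame."""
    change = np.bitwise_xor(prev, curr)      # change coding needs only XOR
    image_runs = len(run_length(curr)[1])
    change_runs = len(run_length(change)[1])
    return ('change', change) if change_runs < image_runs else ('image', curr)

# Busy bi-level background (checkerboard) with one changed pixel between frames.
prev = (np.indices((8, 8)).sum(axis=0) % 2).astype(np.uint8)
curr = prev.copy()
curr[2, 3] ^= 1
mode, payload = bvc_choose(prev, curr)
# The receiver undoes change coding with another XOR against the previous frame.
```

    Because the node always picks the minimum, the scheme degrades gracefully when many objects appear: it simply falls back to image coding.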

  19. Visual communications and image processing '92; Proceedings of the Meeting, Boston, MA, Nov. 18-20, 1992

    NASA Astrophysics Data System (ADS)

    Maragos, Petros

    The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)

  20. An Efficient, Lossless Database for Storing and Transmitting Medical Images

    NASA Technical Reports Server (NTRS)

    Fenstermacher, Marc J.

    1998-01-01

    This research aimed at creating new compression methods based on the central idea of Set Redundancy Compression (SRC). Set redundancy refers to the common information that exists in a set of similar images. SRC methods take advantage of this common information and can achieve improved compression of similar images by reducing their set redundancy. The current research resulted in the development of three new lossless SRC methods: MARS (Median-Aided Region Sorting), MAZE (Max-Aided Zero Elimination) and MaxGBA (Max-Guided Bit Allocation).
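
    The details of MARS, MAZE and MaxGBA are not given in the abstract, but the central SRC idea, removing the information a set of similar images has in common, can be sketched with a median reference image (a generic illustration, not any of the three named methods):

```python
import numpy as np

def set_redundancy_split(images):
    """Represent a set of similar images as a median image plus residuals.

    The median image captures what the set has in common; the residuals
    are near zero almost everywhere, so an entropy coder compresses them
    far better than the originals. Inverse: image = median + residual.
    """
    stack = np.stack([np.asarray(im, np.int32) for im in images])
    median = np.median(stack, axis=0).astype(np.int32)
    residuals = stack - median
    return median, residuals

rng = np.random.default_rng(4)
base = rng.integers(0, 256, size=(16, 16))
# Five "similar" images: the same scene plus tiny per-image variations.
images = [base + rng.integers(-2, 3, size=base.shape) for _ in range(5)]
median, residuals = set_redundancy_split(images)
```

    The split is exactly invertible, so the overall scheme stays lossless: only the statistics of what gets entropy-coded change.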

  1. Spectral compression algorithms for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2007-10-16

    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.

  2. Integer cosine transform compression for Galileo at Jupiter: A preliminary look

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.; Cheung, K.-M.

    1993-01-01

    The Galileo low-gain antenna mission has a severely rate-constrained channel over which we wish to send large amounts of information. Because of this link pressure, compression techniques for image and other data are being selected. The compression technique that will be used for images is the integer cosine transform (ICT). This article investigates the compression performance of Galileo's ICT algorithm as applied to Galileo images taken during the early portion of the mission and to images that simulate those expected from the encounter at Jupiter.
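
    The core idea of an integer cosine transform, replacing the real-valued DCT basis with scaled integers so that only integer arithmetic is needed, can be sketched as follows; the scaling factor and block size are assumptions, and this is not Galileo's actual ICT:

```python
import numpy as np

# Illustrative integer approximation of the DCT-II basis.
N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] = np.sqrt(1.0 / N)                    # orthonormal DCT-II matrix

scale = 64                                    # assumed scaling factor
C_int = np.round(scale * C).astype(np.int64)  # integer basis

x = np.arange(N, dtype=float)                 # a sample signal
y_exact = C @ x                               # floating-point DCT
y_int = (C_int @ x) / scale                   # integer transform, rescaled

print(np.max(np.abs(y_exact - y_int)))        # close to the exact DCT
```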

  3. A High Performance Image Data Compression Technique for Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack

    2003-01-01

    A high-performance image data compression technique is currently being developed for space science applications under the requirements of high-speed and pushbroom scanning. The technique is also applicable to frame-based imaging data. The algorithm combines a two-dimensional transform with bitplane encoding; this results in an embedded bit string with the exact compression rate specified by the user. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. It can also be applied to three-dimensional data cubes resulting from hyperspectral imaging instruments. Flight-qualifiable hardware implementations are in development; the implementation is being designed to compress data in excess of 20 Msamples/sec and to support quantization from 2 to 16 bits. This paper presents the algorithm, its applications, and the status of its development.
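
    The embedded-bit-string property of bitplane encoding can be illustrated with a toy example: transmitting bitplanes from most to least significant lets the decoder stop at any point with a progressively smaller error. The coefficient values here are synthetic, and the flight algorithm's transform and entropy coding are not reproduced:

```python
import numpy as np

# Toy bitplane transmission: reconstruct from MSB down, tracking error.
rng = np.random.default_rng(1)
coeffs = rng.integers(0, 256, size=64)     # stand-in transform magnitudes

errors = []
recon = np.zeros_like(coeffs)
for plane in range(7, -1, -1):             # most significant bitplane first
    recon |= coeffs & (1 << plane)         # add one more bitplane
    errors.append(int(np.abs(coeffs - recon).sum()))

print(errors)  # total error shrinks as each bitplane arrives
```

    Cutting the stream after any plane yields a valid lower-rate reconstruction, which is what makes the bit string "embedded".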

  4. Fractal Electronic Circuits Assembled From Nanoclusters

    NASA Astrophysics Data System (ADS)

    Fairbanks, M. S.; McCarthy, D.; Taylor, R. P.; Brown, S. A.

    2009-07-01

    Many patterns in nature can be described using fractal geometry. The effect of this fractal character is an array of properties that can include high internal connectivity, high dispersivity, and enhanced surface-area-to-volume ratios. These properties are often desirable in applications and, consequently, fractal geometry is increasingly employed in technologies ranging from antennas to storm barriers. In this paper, we explore the application of fractal geometry to electrical circuits, inspired by the pervasive fractal structure of neurons in the brain. We show that, under appropriate growth conditions, nanoclusters of Sb form islands on atomically flat substrates via a process close to diffusion-limited aggregation (DLA), establishing fractal islands that will form the basis of our fractal circuits. We perform fractal analysis of the islands to determine the spatial scaling properties (characterized by the fractal dimension, D) of the proposed circuits and demonstrate how varying growth conditions can affect D. We discuss fabrication approaches for establishing electrical contact to the fractal islands. Finally, we present fractal circuit simulations, which show that the fractal character of the circuit translates into novel, non-linear conduction properties determined by the circuit's D value.
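
    A minimal on-lattice simulation of diffusion-limited aggregation, the growth process the island formation is compared to, can be sketched as follows; the lattice size and particle count are purely illustrative:

```python
import random

# Toy DLA: walkers launched from the boundary random-walk until they
# touch the growing cluster, then stick. Parameters are illustrative.
random.seed(42)
L = 41                                    # lattice size (odd, seed at center)
grid = [[False] * L for _ in range(L)]
c = L // 2
grid[c][c] = True                         # seed particle
stuck = 1

def touches_cluster(x, y):
    return (grid[x + 1][y] or grid[x - 1][y] or
            grid[x][y + 1] or grid[x][y - 1])

for _ in range(80):                       # add 80 walkers
    landed = False
    while not landed:
        side = random.randrange(4)        # launch on the boundary ring
        pos = random.randrange(1, L - 1)
        x, y = ((1, pos), (L - 2, pos), (pos, 1), (pos, L - 2))[side]
        while 0 < x < L - 1 and 0 < y < L - 1:
            if touches_cluster(x, y):
                grid[x][y] = True         # walker sticks to the cluster
                stuck += 1
                landed = True
                break
            dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            x, y = x + dx, y + dy         # walker that exits is relaunched

print(stuck)  # 81: the seed plus all 80 walkers eventually aggregate
```

    The resulting cluster is ramified and branch-like; a box-counting fit over many more particles would give the familiar DLA dimension near 1.7.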

  5. Observer detection of image degradation caused by irreversible data compression processes

    NASA Astrophysics Data System (ADS)

    Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David

    1991-05-01

    Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this paired-film observer study was designed to test whether observers could detect the induced errors. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K x 5K) and 2K x 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer that can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results define conditions under which errors associated with irreversible compression cannot be detected by radiologic observers, and indicate that an observer can detect the errors introduced by this compression algorithm at compression ratios of 10:1 (1.2 bits/pixel) or higher.
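
    The error-characterization step, computing the power spectrum of the difference image, can be sketched as below; the crude low-pass "compression" stand-in and the image size are assumptions for illustration only:

```python
import numpy as np

# Characterize compression error by the power spectrum of the
# difference image. The "compression" is a low-pass stand-in.
rng = np.random.default_rng(2)
original = rng.standard_normal((64, 64))

spectrum = np.fft.fft2(original)
mask = np.zeros((64, 64))
mask[:16, :16] = mask[-16:, :16] = mask[:16, -16:] = mask[-16:, -16:] = 1
degraded = np.fft.ifft2(spectrum * mask).real   # keep only low frequencies

diff = original - degraded
power = np.abs(np.fft.fft2(diff)) ** 2          # power spectrum of the error

# Parseval: spectral power equals spatial error energy (up to the FFT norm),
# and the error here is concentrated at high spatial frequencies.
print(power.sum() / diff.size, (diff ** 2).sum())
```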

  6. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
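
    The benefit of inter-color decorrelation can be illustrated with a sketch in the spirit of IEP: predict one channel from another and entropy-code only the residual. The synthetic correlated channels and the histogram entropy estimate are assumptions:

```python
import numpy as np

# Inter-color decorrelation sketch: code the channel difference
# instead of the channel itself. Synthetic correlated channels.
rng = np.random.default_rng(3)
base = rng.integers(0, 200, size=(64, 64))
red = base + rng.integers(0, 8, size=(64, 64))     # strongly correlated
green = base + rng.integers(0, 8, size=(64, 64))   # color channels

def entropy_bits(values):
    """First-order entropy estimate in bits per symbol."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

direct = entropy_bits(green)                # code green on its own
residual = entropy_bits(green - red)        # code the inter-color error

print(direct, residual)  # the residual needs far fewer bits per pixel
```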

  7. Fractal dimension as an index of brain cortical changes throughout life.

    PubMed

    Kalmanti, Elina; Maris, Thomas G

    2007-01-01

    The fractal dimension (FD) of the cerebral cortex was measured in 93 individuals, aged from 3 months to 78 years, with normal brain MRIs, in order to compare the convolutions of the cerebral cortex between genders and age groups. ImageJ, an image-processing program, was used to skeletonize the cerebral cortex, and the box-counting method was applied. FDs on slices taken from the left and right hemispheres were calculated. Our results showed a significant degree of lateralization in the left hemisphere. It appears that basal ganglia development, mainly in the left hemisphere, is heavily dependent upon age until puberty. In addition, both left and right cortex development equally depend on age until puberty, while the corresponding right hemisphere convolutions continue to develop until a later stage. An increased developmental activity appears between the ages of 1 and 15 years, indicating a significant brain remodelling during childhood and adolescence. In infancy, only changes in basal ganglia are observed, while the right hemisphere continues to remodel in adulthood.
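
    The box-counting estimate of the fractal dimension can be sketched as follows, applied to a test object of known dimension (a straight line, D = 1); the grid sizes are illustrative and the skeletonization step is omitted:

```python
import numpy as np

# Box-counting dimension estimate on a binary image, here a
# diagonal line whose dimension should come out close to 1.
size = 256
img = np.zeros((size, size), dtype=bool)
rr = np.arange(size)
img[rr, rr] = True                         # a diagonal line

sizes, counts = [], []
for box in (2, 4, 8, 16, 32):
    s = size // box
    # tile the image into box*box blocks and count occupied blocks
    blocks = img.reshape(s, box, s, box).any(axis=(1, 3))
    sizes.append(box)
    counts.append(int(blocks.sum()))

# The slope of log(count) vs log(box) gives minus the dimension.
D = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]
print(D)  # close to 1 for a line
```

    Running the same fit on a skeletonized cortex slice, as in the study, yields a non-integer D reflecting the cortical folding.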

  8. Morphological characterization of diesel soot agglomerates based on the Beer-Lambert law

    NASA Astrophysics Data System (ADS)

    Lapuerta, Magín; Martos, Francisco J.; José Expósito, Juan

    2013-03-01

    A new method is proposed for the determination of the number of primary particles composing soot agglomerates emitted from diesel engines, as well as their individual fractal dimension. The method is based on the Beer-Lambert law and is applied to micrographs taken by high-resolution transmission electron microscopy. Differences in the grey levels of the images lead to a more accurate estimation of the geometry of the agglomerate (in this case, the radius of gyration) than methods based exclusively on the planar projections of the agglomerates. The method was validated by applying it to different images of the same agglomerate observed from different angles of incidence, showing that the effect of the angle of incidence is minor, contrary to other methods. Finally, comparisons with other methods showed that the size, number of primary particles and fractal dimension (the latter depending on the particle size) are usually underestimated when only planar projections of the agglomerates are considered.

  9. A fractal image analysis methodology for heat damage inspection in carbon fiber reinforced composites

    NASA Astrophysics Data System (ADS)

    Haridas, Aswin; Crivoi, Alexandru; Prabhathan, P.; Chan, Kelvin; Murukeshan, V. M.

    2017-06-01

    The use of carbon fiber-reinforced polymer (CFRP) composite materials in the aerospace industry has greatly improved the load-carrying properties and the design flexibility of aircraft structures. A high strength-to-weight ratio, low thermal conductivity, and a low thermal expansion coefficient give CFRP an edge for applications demanding stringent loading conditions. Specifically, this paper focuses on the behavior of CFRP composites under stringent thermal loads. The properties of composites are strongly affected by external thermal loads, especially loads beyond the glass transition temperature, Tg, of the composite. Beyond this point, the composites undergo prominent changes in mechanical and thermal properties, which may further lead to material decomposition. Furthermore, because thermal damage forms chaotically, a strict dimension cannot be associated with the formed damage. In this context, this paper compares multiple speckle image analysis algorithms for effectively characterizing thermal damage on CFRP specimens. This provides a fast method for quantifying the extent of heat damage in carbon composites, thus reducing the required inspection time. The image analysis methods used for the comparison include fractal dimensional analysis of the formed speckle pattern and analysis of the number and size of connecting elements in the binary image.

  10. Effect of frying temperature and time on image characterizations of pellet snacks.

    PubMed

    Mohammadi Moghaddam, Toktam; BahramParvar, Maryam; Razavi, Seyed M A

    2015-05-01

    The development of non-destructive methods for the evaluation of food properties has important advantages for the food processing industries. The aim of this study was therefore to evaluate the effects of frying temperature (150, 170, and 190 °C) and time (0.5, 1.5, 2.5, 3.5 and 4.5 min) on image properties (L*, a* and b*, fractal dimension, correlation, entropy, contrast and homogeneity) of pellet snacks. Textures were computed separately for eight channels (RGB, R, G, B, U, V, H and S). Increasing the frying time from 0.5 min to 2.5 min increased the fractal dimension, but a further increase from 2.5 min to 4.5 min did not expand the samples; the highest volume of pellet snacks was thus observed at 2.5 min. Features derived from the image texture contained better information than color features. The best results were for the U channel, which showed that increasing the frying time increased the contrast, entropy and correlation. Raising the frying temperature up to 170 °C decreased the contrast, entropy and correlation of the images; however, these factors increased when the frying temperature was 190 °C. The opposite trends were observed for homogeneity.

  11. Music and fractals

    NASA Astrophysics Data System (ADS)

    Wuorinen, Charles

    2015-03-01

    Any of the arts may produce exemplars that have fractal characteristics. There may be fractal painting, fractal poetry, and the like. But these will always be specific instances, not necessarily displaying intrinsic properties of the art-medium itself. Only music, I believe, of all the arts possesses an intrinsically fractal character, so that its very nature is fractally determined. Thus, it is reasonable to assert that any instance of music is fractal...

  12. Flow field topology of submerged jets with fractal generated turbulence

    NASA Astrophysics Data System (ADS)

    Cafiero, Gioacchino; Discetti, Stefano; Astarita, Tommaso

    2015-11-01

    Fractal grids (FGs) have recently been the object of numerous investigations, due to their interesting capability of generating turbulence at multiple scales, thus paving the way to tuning mixing and scalar transport. The flow field topology of a turbulent air jet equipped with a square FG is investigated by means of planar and volumetric particle image velocimetry. A comparison with the well-known features of a round jet without turbulence generators is also presented. The Reynolds number based on the nozzle exit section diameter for all the experiments is set to about 15,000. It is demonstrated that the presence of the grid enhances the entrainment rate and, as a consequence, the scalar transfer of the jet. Moreover, due to the effect of the jet external shear layer on the wake shed by the grid bars, the turbulence production region past the grid is significantly shortened with respect to the documented behavior of fractal grids in free-shear conditions. The organization of the large coherent structures in the FG case is also analyzed and discussed. In contrast to the well-known generation of toroidal vortices due to the growth of azimuthal disturbances within the jet shear layer, the fractal grid introduces cross-wise disturbances that produce streamwise vortices; these structures, although characterized by a lower energy content, have a deeper streamwise penetration than the ring vortices, thus enhancing the entrainment process.

  13. Hierarchical layered and semantic-based image segmentation using ergodicity map

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing

    2010-04-01

    Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior compared to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space-filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogeneous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment, where the segmented layered semantic objects include the basic-level objects (i.e. sky/land/water) and deeper-level objects in the sky/land/water surfaces. Experimental results demonstrate that the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.

  14. A generalized Benford's law for JPEG coefficients and its applications in image forensics

    NASA Astrophysics Data System (ADS)

    Fu, Dongdong; Shi, Yun Q.; Su, Wei

    2007-02-01

    In this paper, a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented. A parametric logarithmic law, i.e., the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are discussed in this paper, which include the detection of JPEG compression for images in bitmap format, the estimation of the JPEG compression Q-factor for JPEG-compressed bitmap images, and the detection of double-compressed JPEG images. The results of our extensive experiments demonstrate the effectiveness of the proposed statistical model.
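
    The parametric logarithmic law can be written down directly; here it is evaluated in its classical special case (N = 1, q = 1, s = 0), for which the first-digit probabilities are known to sum to one. The parameter names follow the generalized form p(d) = N·log10(1 + 1/(s + d^q)):

```python
import numpy as np

# Generalized Benford law, p(d) = N * log10(1 + 1 / (s + d**q)),
# which reduces to the classical Benford law for N=1, q=1, s=0.
def generalized_benford(d, N, q, s):
    return N * np.log10(1.0 + 1.0 / (s + d ** q))

digits = np.arange(1, 10)

# Classical Benford distribution as the special case.
p = generalized_benford(digits, N=1.0, q=1.0, s=0.0)
print(p.sum(), p[0])  # sums to 1; digit 1 appears ~30.1% of the time
```

    Fitting N, q, s to the observed first-digit histogram of block-DCT coefficients is what the paper exploits for forensic detection.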

  15. Effective degrees of freedom of a random walk on a fractal

    NASA Astrophysics Data System (ADS)

    Balankin, Alexander S.

    2015-12-01

    We argue that a non-Markovian random walk on a fractal can be treated as a Markovian process in a fractional dimensional space with a suitable metric. This allows us to define the fractional dimensional space allied to the fractal as the ν-dimensional space Fν equipped with the metric induced by the fractal topology. The relation between the number of effective spatial degrees of freedom of walkers on the fractal (ν) and fractal dimensionalities is deduced. The intrinsic time of random walk in Fν is inferred. The Laplacian operator in Fν is constructed. This allows us to map physical problems on fractals into the corresponding problems in Fν. In this way, essential features of physics on fractals are revealed. Particularly, subdiffusion on path-connected fractals is elucidated. The Coulomb potential of a point charge on a fractal embedded in the Euclidean space is derived. Intriguing attributes of some types of fractals are highlighted.

  16. Computational simulation of breast compression based on segmented breast and fibroglandular tissues on magnetic resonance images.

    PubMed

    Shih, Tzu-Ching; Chen, Jeon-Hor; Liu, Dongxu; Nie, Ke; Sun, Lizhi; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying

    2010-07-21

    This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed using triangular meshes by using the Avizo 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density needs further investigation. 
    Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities (such as MRI, mammography, whole-breast ultrasound and molecular imaging) that are performed using different body positions and under different compression conditions.

  17. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists.
In this presentation I will describe some of our preliminary explorations of the applications of our technology to the special problems of telemedicine.
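
    How a DCT quantization matrix is applied, and why its entries bound the per-coefficient error, can be sketched as follows; the matrix values are illustrative, not the psychophysically derived ones described above:

```python
import numpy as np

# Quantize/dequantize an 8x8 block of DCT coefficients with a
# quantization matrix Q; the error per coefficient is at most Q/2.
rng = np.random.default_rng(4)
coeffs = rng.uniform(-500, 500, size=(8, 8))       # stand-in DCT block

# Illustrative matrix: coarser steps at higher spatial frequencies.
Q = 16.0 + 4.0 * np.add.outer(np.arange(8), np.arange(8))

quantized = np.round(coeffs / Q)                   # what gets stored
restored = quantized * Q                           # decoder side

print(np.max(np.abs(coeffs - restored) / Q))       # never exceeds 0.5
```

    Choosing Q so that each step stays below the visual threshold for that frequency is what makes the compression visually, rather than mathematically, lossless.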

  18. Nonlinear pulse compression in pulse-inversion fundamental imaging.

    PubMed

    Cheng, Yun-Chien; Shen, Che-Chou; Li, Pai-Chi

    2007-04-01

    Coded excitation can be applied in ultrasound contrast agent imaging to enhance the signal-to-noise ratio with minimal destruction of the microbubbles. Although the axial resolution is usually compromised by the requirement for a long coded transmit waveform, it can be restored by using a compression filter to compress the received echo. However, nonlinear responses from microbubbles may cause difficulties in pulse compression and result in severe range side-lobe artifacts, particularly in pulse-inversion-based (PI) fundamental imaging. The efficacy of pulse compression in nonlinear contrast imaging was evaluated by investigating several factors relevant to PI fundamental generation, using both in-vitro experiments and simulations. The results indicate that the acoustic pressure and the bubble size can alter the nonlinear characteristics of microbubbles and change the performance of the compression filter. When nonlinear responses from contrast agents are enhanced by using a higher acoustic pressure, or when more microbubbles are near the resonance size of the transmit frequency, higher range side lobes are produced in both linear imaging and PI fundamental imaging. On the other hand, contrast detection in PI fundamental imaging significantly depends on the magnitude of the nonlinear responses of the bubbles, and thus the resultant contrast-to-tissue ratio (CTR) still increases with acoustic pressure and the nonlinear resonance of microbubbles. It should be noted, however, that the CTR in PI fundamental imaging after compression is consistently lower than that before compression, due to obvious side-lobe artifacts. Therefore, the use of coded excitation is not beneficial in PI fundamental contrast detection.
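
    The basic pulse-compression mechanism, correlating the received echo with the long coded transmit waveform, can be sketched for a linear-FM chirp; all waveform parameters are illustrative, and no microbubble nonlinearity is modeled:

```python
import numpy as np

# Matched-filter pulse compression of a linear-FM chirp: the long
# transmit pulse collapses to a short main lobe with range side lobes.
fs = 10e6                                  # sample rate [Hz]
T = 20e-6                                  # transmit pulse length [s]
t = np.arange(int(T * fs)) / fs
chirp = np.cos(2 * np.pi * (1e6 * t + (1e6 / T) * t ** 2))  # 1-3 MHz sweep

compressed = np.correlate(chirp, chirp, mode="full")        # matched filter
peak = int(np.argmax(np.abs(compressed)))

# Samples above half the peak magnitude: the main lobe is far
# narrower than the 200-sample transmit pulse.
above_half = np.abs(compressed) > 0.5 * np.abs(compressed[peak])
print(peak, int(above_half.sum()))
```

    When the echo is distorted by nonlinear bubble responses, it no longer matches the filter, which is the origin of the elevated side lobes the study reports.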

  19. An Approach to Study Elastic Vibrations of Fractal Cylinders

    NASA Astrophysics Data System (ADS)

    Steinberg, Lev; Zepeda, Mario

    2016-11-01

    This paper presents our study of the dynamics of fractal solids. Concepts of fractal continuum and fractal time have been used in the definitions of fractal body deformation and motion, the formulation of conservation of mass, the balance of momentum, and constitutive relationships. A linearized model, written in terms of fractal time and spatial derivatives, has been employed to study the elastic vibrations of fractal circular cylinders. Fractal differential equations for torsional, longitudinal and transverse waves have been obtained, and solution properties such as size and time dependence have been revealed.

  20. Elasticity of fractal materials using the continuum model with non-integer dimensional space

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2015-01-01

    Using a generalization of vector calculus for space with non-integer dimension, we consider elastic properties of fractal materials. Fractal materials are described by continuum models with non-integer dimensional space. A generalization of elasticity equations for non-integer dimensional space, and its solutions for the equilibrium case of fractal materials are suggested. Elasticity problems for fractal hollow ball and cylindrical fractal elastic pipe with inside and outside pressures, for rotating cylindrical fractal pipe, for gradient elasticity and thermoelasticity of fractal materials are solved.

  1. Fractal vector optical fields.

    PubMed

    Pan, Yue; Gao, Xu-Zhen; Cai, Meng-Qiang; Zhang, Guan-Lin; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian

    2016-07-15

    We introduce the concept of a fractal, which provides an alternative approach for flexibly engineering the optical fields and their focal fields. We propose, design, and create a new family of optical fields-fractal vector optical fields, which build a bridge between the fractal and vector optical fields. The fractal vector optical fields have polarization states exhibiting fractal geometry, and may also involve the phase and/or amplitude simultaneously. The results reveal that the focal fields exhibit self-similarity, and the hierarchy of the fractal has the "weeding" role. The fractal can be used to engineer the focal field.

  2. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm was further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS), with an RMS error of 15.8 pixels was 195:1 (0.41 bpp) and with an RMS error of 3.6 pixels was 18:1 (0.447 bpp). The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC-compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
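
    The vector-quantization step can be sketched with a few Lloyd (k-means) iterations over synthetic multi-channel pixel vectors; the codebook size, the data, and the iteration count are assumptions rather than the study's actual settings:

```python
import numpy as np

# Vector quantization: each pixel's 7-channel spectrum is a vector,
# and a small codebook of representative vectors is learned.
rng = np.random.default_rng(5)
pixels = rng.standard_normal((2000, 7))    # 2000 pixels, 7 channels

K = 16                                     # assumed codebook size
codebook = pixels[rng.choice(len(pixels), K, replace=False)].copy()

distortions = []
for _ in range(5):                         # a few Lloyd iterations
    d2 = ((pixels[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)            # nearest codeword per pixel
    distortions.append(float(d2[np.arange(len(pixels)), nearest].mean()))
    for k in range(K):                     # move codewords to cluster means
        members = pixels[nearest == k]
        if len(members):
            codebook[k] = members.mean(axis=0)

print(distortions)  # mean distortion never increases across iterations
```

    Each pixel is then stored as a codebook index (log2 K bits) instead of 7 full samples, which is the source of the compression.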

  3. Application of a Noise Adaptive Contrast Sensitivity Function to Image Data Compression

    NASA Astrophysics Data System (ADS)

    Daly, Scott J.

    1989-08-01

    The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display-observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error, a potential image quality degradation. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concepts of internal visual noise, and tests this model in the context of image compression with an observer study.

  4. Polarimetric and Indoor Imaging Fusion Based on Compressive Sensing

    DTIC Science & Technology

    2013-04-01


  5. Human physiological benefits of viewing nature: EEG responses to exact and statistical fractal patterns.

    PubMed

    Hagerhall, C M; Laike, T; Küller, M; Marcheschi, E; Boydston, C; Taylor, R P

    2015-01-01

    Psychological and physiological benefits of viewing nature have been extensively studied for some time. More recently it has been suggested that some of these positive effects can be explained by nature's fractal properties. Virtually all studies on human responses to fractals have used stimuli that represent the specific form of fractal geometry found in nature, i.e. statistical fractals, as opposed to fractal patterns which repeat exactly at different scales. This raises the question of whether human responses like preference and relaxation are being driven by fractal geometry in general or by the specific form of fractal geometry found in nature. In this study we consider both types of fractals (statistical and exact) and morph one type into the other. Based on the Koch curve, nine visual stimuli were produced in which curves of three different fractal dimensions evolve gradually from an exact to a statistical fractal. The patterns were shown for one minute each to thirty-five subjects while qEEG was continuously recorded. The results showed that the responses to statistical and exact fractals differ, and that the natural form of the fractal is important for inducing alpha responses, an indicator of a wakefully relaxed state and internalized attention.
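
    The exact-fractal stimuli are built on the Koch construction, which can be generated recursively; this sketch checks the defining scaling (4^n segments of length 3^-n, giving dimension log 4 / log 3 ≈ 1.26):

```python
import numpy as np

# Recursive Koch construction: replace every segment by four segments
# of one third the length, with an equilateral bump in the middle.
def koch(points, depth):
    if depth == 0:
        return points
    out = [points[0]]
    for a, b in zip(points[:-1], points[1:]):
        v = (b - a) / 3.0
        # rotate v by -60 degrees to form the bump
        rot = np.array([[0.5, np.sqrt(3) / 2], [-np.sqrt(3) / 2, 0.5]])
        out += [a + v, a + v + rot @ v, a + 2 * v, b]
    return koch(np.array(out), depth - 1)

start = np.array([[0.0, 0.0], [1.0, 0.0]])
curve = koch(start, 4)

segments = len(curve) - 1
length = float(np.sum(np.linalg.norm(np.diff(curve, axis=0), axis=1)))
print(segments, length)  # 4**4 = 256 segments, total length (4/3)**4
```

    The statistical counterparts used in the study perturb each replacement step randomly while preserving the same scaling exponent.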

  6. Compression of high-density EMG signals for trapezius and gastrocnemius muscles.

    PubMed

    Itiki, Cinthia; Furuie, Sergio S; Merletti, Roberto

    2014-03-10

New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article aims at the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also presents methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles, using image compression techniques. HD EMG signals were placed in image rows, according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals as well as their differences in time were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained, while keeping the signal-to-noise ratio (SNR) at 21.19 dB. For a similar FSR, higher contraction forces corresponded to higher SNR. Conclusions: The computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles.

  7. Compression of high-density EMG signals for trapezius and gastrocnemius muscles

    PubMed Central

    2014-01-01

Background: New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article aims at the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also presents methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles, using image compression techniques. Methods: HD EMG signals were placed in image rows, according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals as well as their differences in time were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Results: Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained, while keeping the signal-to-noise ratio (SNR) at 21.19 dB. For a similar FSR, higher contraction forces corresponded to higher SNR. Conclusions: The computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles. PMID:24612604

  8. JPEG2000 and dissemination of cultural heritage over the Internet.

    PubMed

    Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos

    2004-03-01

By applying the latest technologies in image compression for managing the storage of massive image data within cultural heritage databases, and by exploiting the universality of the Internet, we are now able not only to effectively digitize, record and preserve, but also to promote the dissemination of cultural heritage. In this work we present an application of the latest image compression standard, JPEG2000, to managing and browsing image databases, focusing on the image transmission aspect rather than database management and indexing. We combine JPEG2000 image compression with client-server socket connections and a client browser plug-in, so as to provide an all-in-one package for remote browsing of JPEG2000-compressed image databases, suitable for the effective dissemination of cultural heritage.

  9. [Quantitative analysis method based on fractal theory for medical imaging of normal brain development in infants].

    PubMed

    Li, Heheng; Luo, Liangping; Huang, Li

    2011-02-01

The present paper aims to study the fractal spectrum of cerebral computerized tomography in 158 normal infants of different age groups, based on calculations from chaos theory. The distribution range in the neonatal period was 1.88-1.90 (mean = 1.8913 +/- 0.0064); it reached a stable condition at the level of 1.89-1.90 during 1-12 months of age (mean = 1.8927 +/- 0.0045); the normal range for 1-2 year old infants was 1.86-1.90 (mean = 1.8863 +/- 0.0085); the quantitative value remained invariant within 1.88-1.91 (mean = 1.8958 +/- 0.0083) during 2-3 years of age. ANOVA indicated no significant difference between boys and girls (F = 0.243, P > 0.05), but the difference between age groups was significant (F = 8.947, P < 0.001). The fractal dimension of cerebral computerized tomography in normal infants, computed by box methods, was maintained at a stable level from 1.86 to 1.91. This indicates that there exist some attractor modes in pediatric brain development.

  10. Technology study of quantum remote sensing imaging

    NASA Astrophysics Data System (ADS)

    Bi, Siwen; Lin, Xuling; Yang, Song; Wu, Zhiqiang

    2016-02-01

Quantum remote sensing is proposed in response to the development of remote sensing science and technology and its application requirements. First, the background of quantum remote sensing is briefly introduced: quantum remote sensing theory, its information mechanism, imaging experiments, the principle prototype, and related research at home and abroad. We then expound the compression operator of the quantum remote sensing radiation field and the basic principles of the single-mode compression operator, the preparation of the quantum light field for remote sensing image compression experiments and optical imaging, and the quantum remote sensing imaging principle prototype. Quantum remote sensing spaceborne active imaging technology is put forward, mainly comprising the composition and working principle of the spaceborne active imaging system, the device for preparing and injecting compressed light, and the quantum noise amplification device. Finally, quantum remote sensing research over the past 15 years is summarized and future development directions are outlined.

  11. Joint image encryption and compression scheme based on IWT and SPIHT

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Tong, Xiaojun

    2017-03-01

A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT) via the addition of encryption in the SPIHT coding process has no effect on compression performance. A hyper-chaotic system, nonlinear inverse operation, Secure Hash Algorithm-256 (SHA-256), and plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.
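
The lossless property of IWT-based schemes rests on the transform being exactly invertible in integer arithmetic. A minimal sketch of one lifting level of the integer Haar (S) transform, standing in for whatever filter bank the paper actually uses:

```python
import numpy as np

def int_haar_fwd(x):
    """One lifting level of the integer Haar (S) transform.

    Predict: detail = odd - even.  Update: approx = even + floor(detail/2).
    Both steps are exactly invertible in integer arithmetic, which is
    what makes the transform usable for lossless coding.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    detail = odd - even                 # predict step
    approx = even + (detail >> 1)       # update step (arithmetic shift = floor)
    return approx, detail

def int_haar_inv(approx, detail):
    """Exact inverse: undo the update, then the predict step."""
    even = approx - (detail >> 1)
    odd = detail + even
    out = np.empty(even.size + odd.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([5, 7, 3, 1, 8, 8, 2, 9])
a, d = int_haar_fwd(x)
x_back = int_haar_inv(a, d)             # recovers x exactly
```

Because the same floor operation appears in both directions, no information is lost even though the update step divides by two.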

  12. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    PubMed

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at a low-energy consumption of Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides compressive encoder and real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but it also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
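
The decoder's low complexity comes from reconstruction being a single linear projection learned offline. A toy sketch of that idea, with an assumed random Gaussian measurement matrix and ridge-regularized least squares standing in for the paper's MMSE-trained projection (block size, measurement count, and training distribution are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 32                                   # 8x8 block, 32 measurements
phi = rng.standard_normal((m, n)) / np.sqrt(m)  # assumed measurement matrix

# Stand-in training blocks with decaying per-component energy (the paper
# would train on natural-image blocks).  Learn a linear decoder W that
# maps measurements back to blocks, via ridge regression.
train = rng.standard_normal((500, n)) * (1.0 / (1.0 + np.arange(n)))
y_train = train @ phi.T
lam = 1e-3
w = np.linalg.solve(y_train.T @ y_train + lam * np.eye(m), y_train.T @ train).T

# Encoding is one matrix-vector product; decoding is one more.
x = rng.standard_normal(n) * (1.0 / (1.0 + np.arange(n)))
x_hat = w @ (phi @ x)
rel_train = np.linalg.norm(train - y_train @ w.T) / np.linalg.norm(train)
```

Both encoder and decoder cost O(mn) per block, which is the source of the energy savings the abstract claims relative to iterative CS solvers.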

  13. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph

    2005-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.

  14. Wavelet compression of noisy tomographic images

    NASA Astrophysics Data System (ADS)

    Kappeler, Christian; Mueller, Stefan P.

    1995-09-01

3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D-PET is the large size of the data sets; a typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and therefore usually are evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate that they not induce image modifications exceeding those resulting from low-pass filtering. The standard we refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for quantifying image quality. The images to be compressed are transformed to a wavelet representation using Daubechies-12 wavelets and, after filtering, compressed by thresholding. We do not include further compression by quantization and coding here. Achievable compression factors at this level of processing are thirty to fifty.
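
The transform-and-threshold stage can be sketched with a single-level orthonormal Haar transform standing in for the Daubechies-12 wavelets used in the paper; small coefficients are zeroed, and the ratio of total to surviving coefficients gives a crude compression factor (the image, noise level, and threshold below are illustrative):

```python
import numpy as np

def haar_step(a):
    """Single-level orthonormal Haar analysis along the last axis."""
    s = (a[..., 0::2] + a[..., 1::2]) / np.sqrt(2.0)
    d = (a[..., 0::2] - a[..., 1::2]) / np.sqrt(2.0)
    return np.concatenate([s, d], axis=-1)

def ihaar_step(c):
    """Exact inverse of haar_step."""
    half = c.shape[-1] // 2
    s, d = c[..., :half], c[..., half:]
    out = np.empty_like(c)
    out[..., 0::2] = (s + d) / np.sqrt(2.0)
    out[..., 1::2] = (s - d) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(0)
# Smooth ramp plus photon-like noise as a stand-in for a PET slice.
img = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32)) \
    + 0.05 * rng.standard_normal((32, 32))

# 2D transform: rows, then columns.
coeffs = haar_step(haar_step(img).swapaxes(0, 1)).swapaxes(0, 1)

thresh = 0.1
kept = np.where(np.abs(coeffs) > thresh, coeffs, 0.0)   # zero small coeffs
ratio = coeffs.size / max(1, np.count_nonzero(kept))    # crude compression factor

# Inverse 2D transform: undo columns, then rows.
recon = ihaar_step(ihaar_step(kept.swapaxes(0, 1)).swapaxes(0, 1))
```

Because the transform is orthonormal, the RMS reconstruction error is bounded by the threshold, which is how a threshold can be chosen to stay below the distortion of the reference low-pass filter.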

  15. Analysis of Texture Using the Fractal Model

    NASA Technical Reports Server (NTRS)

    Navas, William; Espinosa, Ramon Vasquez

    1997-01-01

Properties such as the fractal dimension (FD) can be used for feature extraction and classification of regions within an image. The FD measures the degree of roughness of a surface, so this number is used to characterize a particular region and differentiate it from another. There are two basic approaches discussed in the literature to measure FD: the blanket method and the box counting method. Both attempt to measure FD by estimating the change in surface area with respect to the change in resolution. We tested both methods, but box counting proved computationally faster and gave better results. Differential Box Counting (DBC) was used to segment a collage containing three textures. The FD is independent of directionality and brightness, so five features derived from the original image were used to account for directionality and gray-level biases. FD cannot be measured at a single point, so we use a window that slides across the image, assigning the FD value to the pixel at the center of the window. Windowing blurs the boundaries of adjacent classes, so an edge-preserving, feature-smoothing algorithm is used to improve classification within segments and to sharpen the boundaries. Segmentation using DBC was 90.89% accurate.
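
Box counting estimates FD as the slope of log box count versus log inverse box size. A minimal sketch on a binary image, using plain box counting rather than the differential variant the paper applies to gray-level surfaces (box sizes are illustrative):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary image by box counting.

    For each box size s, count the s x s boxes containing at least one
    foreground pixel; the slope of log(count) vs log(1/s) estimates FD.
    """
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        h, w = mask.shape
        # Trim so the image tiles evenly into s x s boxes.
        t = mask[: h - h % s, : w - w % s]
        boxes = t.reshape(t.shape[0] // s, s, t.shape[1] // s, s)
        counts.append(int(boxes.any(axis=(1, 3)).sum()))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square is 2-dimensional, so the slope should be ~2.
filled = np.ones((64, 64), dtype=bool)
fd = box_counting_dimension(filled)
```

The sliding-window variant described in the abstract would call this on each window and assign the result to the center pixel.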

  16. VESGEN Software for Mapping and Quantification of Vascular Regulators

    NASA Technical Reports Server (NTRS)

    Parsons-Wingerter, Patricia A.; Vickerman, Mary B.; Keith, Patricia A.

    2012-01-01

VESsel GENeration (VESGEN) Analysis is automated software that maps and quantifies the effects of vascular regulators on vascular morphology by analyzing important vessel parameters. Quantification parameters include vessel diameter, length, branch points, density, and fractal dimension. For vascular trees, measurements are reported as dependent functions of vessel branching generation. VESGEN maps and quantifies vascular morphological events according to fractal-based vascular branching generation. It also relies on careful imaging of branching and networked vascular form. It was developed as a plug-in for ImageJ (National Institutes of Health, USA). VESGEN uses the image-processing concepts of 8-neighbor pixel connectivity, skeleton, and distance map to analyze 2D, black-and-white (binary) images of vascular trees, networks, and tree-network composites. VESGEN maps typically 5 to 12 (or more) generations of vascular branching, starting from a single parent vessel. These generations are tracked and measured for critical vascular parameters that include vessel diameter, length, density and number, and tortuosity per branching generation. The effects of vascular therapeutics and regulators on vascular morphology and branching tested in human clinical or laboratory animal experimental studies are quantified by comparing vascular parameters with control groups. VESGEN provides a user interface to both guide and allow control over the user's vascular analysis process. An option is provided to select a morphological tissue type of vascular trees, networks, or tree-network composites, which determines the general collection of algorithms, intermediate images, and output images and measurements that will be produced.

  17. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    NASA Astrophysics Data System (ADS)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can significantly enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise-suppression imaging technique will have great applications in remote sensing and security areas.

  18. Iterated Function Systems in the Classroom

    ERIC Educational Resources Information Center

    Waiveris, Charles

    2007-01-01

The title may appear daunting, but the exercises, which can be presented to students from middle school to graduate school, are not. The exercises center on creating fractal images in the xy-plane with free, easy-to-use software and questions appropriate to the level of the student.
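
A classic classroom rendering of an iterated function system is the "chaos game": repeatedly jump halfway toward a randomly chosen vertex, and the visited points fill in the Sierpinski triangle. A sketch (the vertices, starting point, and point count are illustrative choices):

```python
import random

def chaos_game(n_points=20000, seed=1):
    """Generate Sierpinski-triangle points via the chaos game.

    Each iteration applies one randomly chosen map of the IFS:
    (x, y) -> midpoint of (x, y) and a triangle vertex.
    """
    rng = random.Random(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
    x, y = 0.3, 0.3                      # any start inside the triangle works
    points = []
    for _ in range(n_points):
        vx, vy = rng.choice(vertices)
        x, y = (x + vx) / 2.0, (y + vy) / 2.0
        points.append((x, y))
    return points

pts = chaos_game()
```

Plotting `pts` as a scatter (e.g. with any graphing tool students already use) reveals the fractal; changing the contraction ratio or vertex set gives other IFS attractors to explore.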

  19. Combining local scaling and global methods to detect soil pore space

    NASA Astrophysics Data System (ADS)

    Martin-Sotoca, Juan Jose; Saa-Requejo, Antonio; Grau, Juan B.; Tarquis, Ana M.

    2017-04-01

The characterization of the spatial distribution of soil pore structures is essential for obtaining the parameters that feed models of water flow and/or microbial growth processes. The first step in pore structure characterization is obtaining soil images that best approximate reality. Over the last decade, major technological advances in X-ray computed tomography (CT) have allowed for the investigation and reconstruction of natural porous media architectures at very fine scales. The subsequent step is delimiting the pore structure (pore space) in the CT soil images by applying a threshold. CT-scan images often show low contrast at the solid-void interface, which makes this step difficult. Different delimitation methods can result in different spatial distributions of pores, influencing the parameters used in the models. Recently, a new local segmentation method using local greyscale value (GV) concentration variabilities, based on fractal concepts, was presented. This method creates singularity maps to measure the GV concentration at each point. The C-A method was combined with the singularity map approach (Singularity-CA method) to define local thresholds that can be applied to binarize CT images. Comparing this method with classical methods, such as Otsu and Maximum Entropy, we observed that more pores can be detected, mainly due to its ability to amplify anomalous concentrations. However, it also delineated many small pores that were incorrect. In this work, we present an improved version of the Singularity-CA method that avoids this problem by combining it with the classical global methods. References Martín-Sotoca, J.J., A. Saa-Requejo, J.B. Grau, A.M. Tarquis. New segmentation method based on fractal properties using singularity maps. Geoderma, 287, 40-53, 2017. Martín-Sotoca, J.J., A. Saa-Requejo, J.B. Grau, A.M. Tarquis.
Local 3D segmentation of soil pore space based on fractal properties using singularity maps. Geoderma, http://dx.doi.org/10.1016/j.geoderma.2016.11.029. Torre, Iván G., Juan C. Losada and A.M. Tarquis. Multiscaling properties of soil images. Biosystems Engineering, http://dx.doi.org/10.1016/j.biosystemseng.2016.11.006.

  20. Quantization Distortion in Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Boden, A. F.

    1995-01-01

The popular JPEG image compression standard is an example of a block transform-based compression scheme: the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. Compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.

  1. Remote Sensing Image Quality Assessment Experiment with Post-Processing

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.

    2018-04-01

This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are tested, and the digital images serving as image-processing input are produced by this imaging system with those same parameters. The gathered optical sampled images are processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different cores. The image quality assessment method used is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be found. The six JND subjective assessment data sets validate one another. Main conclusions: image post-processing can improve image quality, even in the presence of lossy compression, although image quality at higher compression ratios improves less than at lower ratios; and with our post-processing method, image quality is better when the camera MTF lies within a small range.

  2. Lossless Compression of Classification-Map Data

    NASA Technical Reports Server (NTRS)

    Hua, Xie; Klimesh, Matthew

    2009-01-01

A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification-map data more effectively than general-purpose image-data-compression algorithms do. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft (see figure) and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing images of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data: for example, a type of vegetation, a mineral, or a body of water at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.
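
Why prediction helps on classification maps can be seen with a toy example: maps consist of large homogeneous label regions, so predicting each pixel from its left neighbor turns most pixels into a single "prediction hit" symbol, sharply lowering the empirical entropy that the entropy coder sees. This sketch is a generic illustration of the idea, not the algorithm's actual predictor or context model:

```python
import math
from collections import Counter

def empirical_entropy(symbols):
    """Shannon entropy (bits/symbol) of a symbol stream."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Toy classification map: large homogeneous label regions.
cmap = [[1] * 16 + [2] * 16 for _ in range(8)] + [[3] * 32 for _ in range(8)]

raw = [v for row in cmap for v in row]

# Predict each pixel from its left neighbor (prediction resets per row);
# emit 0 on a hit, otherwise the label itself (labels here are >= 1).
residual = []
for row in cmap:
    prev = None
    for v in row:
        residual.append(0 if v == prev else v)
        prev = v

e_raw = empirical_entropy(raw)
e_res = empirical_entropy(residual)
```

On this toy map the residual stream is dominated by the hit symbol, so `e_res` is far below `e_raw`; the paper's context modeling refines the same effect by conditioning probabilities on neighboring labels.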

  3. Effective degrees of freedom of a random walk on a fractal.

    PubMed

    Balankin, Alexander S

    2015-12-01

    We argue that a non-Markovian random walk on a fractal can be treated as a Markovian process in a fractional dimensional space with a suitable metric. This allows us to define the fractional dimensional space allied to the fractal as the ν-dimensional space F(ν) equipped with the metric induced by the fractal topology. The relation between the number of effective spatial degrees of freedom of walkers on the fractal (ν) and fractal dimensionalities is deduced. The intrinsic time of random walk in F(ν) is inferred. The Laplacian operator in F(ν) is constructed. This allows us to map physical problems on fractals into the corresponding problems in F(ν). In this way, essential features of physics on fractals are revealed. Particularly, subdiffusion on path-connected fractals is elucidated. The Coulomb potential of a point charge on a fractal embedded in the Euclidean space is derived. Intriguing attributes of some types of fractals are highlighted.

  4. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding additional, more complex DWT levels. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.

  5. Breast compression in mammography: how much is enough?

    PubMed

    Poulos, Ann; McLean, Donald; Rickard, Mary; Heard, Robert

    2003-06-01

    The amount of breast compression that is applied during mammography potentially influences image quality and the discomfort experienced. The aim of this study was to determine the relationship between applied compression force, breast thickness, reported discomfort and image quality. Participants were women attending routine breast screening by mammography at BreastScreen New South Wales Central and Eastern Sydney. During the mammographic procedure, an 'extra' craniocaudal (CC) film was taken at a reduced level of compression ranging from 10 to 30 Newtons. Breast thickness measurements were recorded for both the normal and the extra CC film. Details of discomfort experienced, cup size, menstrual status, existing breast pain and breast problems were also recorded. Radiologists were asked to compare the image quality of the normal and manipulated film. The results indicated that 24% of women did not experience a difference in thickness when the compression was reduced. This is an important new finding because the aim of breast compression is to reduce breast thickness. If breast thickness is not reduced when compression force is applied then discomfort is increased with no benefit in image quality. This has implications for mammographic practice when determining how much breast compression is sufficient. Radiologists found a decrease in contrast resolution within the fatty area of the breast between the normal and the extra CC film, confirming a decrease in image quality due to insufficient applied compression force.

  6. Using a visual discrimination model for the detection of compression artifacts in virtual pathology images.

    PubMed

    Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S

    2011-02-01

    A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
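
Of the distortion metrics the study compares, PSNR is the simplest to state: it is the mean squared error expressed on a log scale relative to the signal's peak value. A minimal implementation (the 8-bit peak of 255 is an assumption appropriate for typical slide images):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a
    distorted image; higher means closer to the reference."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0                 # uniform offset of 10 gray levels -> MSE = 100
value = psnr(ref, noisy)           # 10*log10(255**2 / 100) ~ 28.13 dB
```

The study's point is that such pixel-wise metrics vary widely at the visually lossless threshold, whereas the VDM's JND metrics stay nearly constant, making them better compression-control signals.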

  7. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    PubMed

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver of the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings in 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at a -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of general estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals, with increasing detector counts and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.

  8. Locally adaptive vector quantization: Data compression with feature preservation

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Sayano, M.

    1992-01-01

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression and is fully adaptable to any data source and does not require a priori knowledge of the source statistics. Therefore, LAVQ is a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
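
The one-pass, source-adaptive character of such quantizers can be illustrated with a generic sketch (not Cheung and Sayano's actual LAVQ algorithm): search the current codebook for the nearest codeword, reuse and nudge it if it is close enough, otherwise grow the codebook. The `max_dist` and `alpha` parameters are illustrative:

```python
import numpy as np

def adaptive_vq_encode(vectors, max_dist=0.5, alpha=0.1):
    """One-pass adaptive vector quantization sketch.

    Emits one codebook index per input vector; the codebook is built
    on the fly, so no prior knowledge of source statistics is needed.
    """
    codebook, indices = [], []
    for v in vectors:
        if codebook:
            d = [np.linalg.norm(v - c) for c in codebook]
            j = int(np.argmin(d))
            if d[j] <= max_dist:
                indices.append(j)
                # Nudge the matched codeword toward the input (local adaptation).
                codebook[j] = (1 - alpha) * codebook[j] + alpha * v
                continue
        codebook.append(np.array(v, dtype=float))   # grow the codebook
        indices.append(len(codebook) - 1)
    return indices, codebook

rng = np.random.default_rng(2)
# Two well-separated clusters should yield a very small codebook.
data = np.concatenate([rng.normal(0, 0.05, (50, 2)),
                       rng.normal(3, 0.05, (50, 2))])
idx, cb = adaptive_vq_encode(data)
```

A real coder would follow this with lossless compression of the index stream, as the abstract describes.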

  9. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    PubMed

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  10. The wavelet/scalar quantization compression standard for digital fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.

  11. Mammographic compression in Asian women.

    PubMed

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women, based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area (p<0.0001). Compression parameters including compression force, compression pressure, CBT and breast contact area were widely variable between [relative standard deviation (RSD)≥21.0%] and within (p<0.0001) Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, increased MGD by 6.2-11.0%, and had no significant effect on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.

  12. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  13. Aesthetic Responses to Exact Fractals Driven by Physical Complexity

    PubMed Central

    Bies, Alexander J.; Blanc-Goldhammer, Daryn R.; Boydston, Cooper R.; Taylor, Richard P.; Sereno, Margaret E.

    2016-01-01

    Fractals are physically complex due to their repetition of patterns at multiple size scales. Whereas the statistical characteristics of the patterns repeat for fractals found in natural objects, computers can generate patterns that repeat exactly. Are these exact fractals processed differently, visually and aesthetically, than their statistical counterparts? We investigated the human aesthetic response to the complexity of exact fractals by manipulating fractal dimensionality, symmetry, recursion, and the number of segments in the generator. Across two studies, a variety of fractal patterns were visually presented to human participants to determine the typical response to exact fractals. In the first study, we found that preference ratings for exact midpoint displacement fractals can be described by a linear trend with preference increasing as fractal dimension increases. For the majority of individuals, preference increased with dimension. We replicated these results for other exact fractal patterns in a second study. In the second study, we also tested the effects of symmetry and recursion by presenting asymmetric dragon fractals, symmetric dragon fractals, and Sierpinski carpets and Koch snowflakes, which have radial and mirror symmetry. We found a strong interaction among recursion, symmetry and fractal dimension. Specifically, at low levels of recursion, the presence of symmetry was enough to drive high preference ratings for patterns with moderate to high levels of fractal dimension. Most individuals required a much higher level of recursion to recover this level of preference in a pattern that lacked mirror or radial symmetry, while others were less discriminating. This suggests that exact fractals are processed differently than their statistical counterparts. 
We propose a set of four factors that influence complexity and preference judgments in fractals that may extend to other patterns: fractal dimension, recursion, symmetry and the number of segments in a pattern. Conceptualizations such as Berlyne’s and Redies’ theories of aesthetics also provide a suitable framework for interpretation of our data with respect to the individual differences that we detect. Future studies that incorporate physiological methods to measure the human aesthetic response to exact fractal patterns would further elucidate our responses to such timeless patterns. PMID:27242475

  14. Paradigms of Complexity: Fractals and Structures in the Sciences

    NASA Astrophysics Data System (ADS)

    Novak, Miroslav M.

    The Table of Contents for the book is as follows: * Preface * The Origin of Complexity (invited talk) * On the Existence of Spatially Uniform Scaling Laws in the Climate System * Multispectral Backscattering: A Fractal-Structure Probe * Small-Angle Multiple Scattering on a Fractal System of Point Scatterers * Symmetric Fractals Generated by Cellular Automata * Bispectra and Phase Correlations for Chaotic Dynamical Systems * Self-Organized Criticality Models of Neural Development * Altered Fractal and Irregular Heart Rate Behavior in Sick Fetuses * Extract Multiple Scaling in Long-Term Heart Rate Variability * A Semi-Continuous Box Counting Method for Fractal Dimension Measurement of Short Single Dimension Temporal Signals - Preliminary Study * A Fractional Brownian Motion Model of Cracking * Self-Affine Scaling Studies on Fractography * Coarsening of Fractal Interfaces * A Fractal Model of Ocean Surface Superdiffusion * Stochastic Subsurface Flow and Transport in Fractal Conductivity Fields * Rendering Through Iterated Function Systems * The σ-Hull - The Hull Where Fractals Live - Calculating a Hull Bounded by Log Spirals to Solve the Inverse IFS-Problem by the Detected Orbits * On the Multifractal Properties of Passively Convected Scalar Fields * New Statistical Textural Transforms for Non-Stationary Signals: Application to Generalized Multifractal Analysis * Laplacian Growth of Parallel Needles: Their Mullins-Sekerka Instability * Entropy Dynamics Associated with Self-Organization * Fractal Properties in Economics (invited talk) * Fractal Approach to the Regional Seismic Event Discrimination Problem * Fractal and Topological Complexity of Radioactive Contamination * Pattern Selection: Nonsingular Saffman-Taylor Finger and Its Dynamic Evolution with Zero Surface Tension * A Family of Complex Wavelets for the Characterization of Singularities * Stabilization of Chaotic Amplitude Fluctuations in Multimode, Intracavity-Doubled Solid-State Lasers * Chaotic Dynamics of Elastic-Plastic Beams * The Riemann Non-Differentiable Function and Identities for the Gaussian Sums * Revealing the Multifractal Nature of Failure Sequence * The Fractal Nature of Wood Revealed by Drying * Squaring the Circle: Diffusion Volume and Acoustic Behaviour of a Fractal Structure * Relationship Between Acupuncture Holographic Units and Fetus Development; Fractal Features of Two Acupuncture Holographic Unit Systems * The Fractal Properties of the Large-Scale Magnetic Fields on the Sun * Fractal Analysis of Tide Gauge Data * Author Index

  15. Multi-fractal detrended texture feature for brain tumor classification

    NASA Astrophysics Data System (ADS)

    Reza, Syed M. S.; Mays, Randall; Iftekharuddin, Khan M.

    2015-03-01

    We propose a novel non-invasive brain tumor type classification using Multi-fractal Detrended Fluctuation Analysis (MFDFA) [1] in structural magnetic resonance (MR) images. This preliminary work investigates the efficacy of the MFDFA features, along with our novel texture feature known as multifractional Brownian motion (mBm) [2], in classifying (grading) brain tumors as High Grade (HG) and Low Grade (LG). Based on prior performance, Random Forest (RF) [3] is employed for tumor grading using two different datasets, BRATS-2013 [4] and BRATS-2014 [5]. Quantitative scores such as precision, recall and accuracy are obtained from the confusion matrix. On average, 90% precision and 85% recall from the inter-dataset cross-validation confirm the efficacy of the proposed method.
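    The MFDFA machinery behind these texture features follows a standard recipe: integrate the series into a profile, detrend it over windows of varying size, and read the generalized Hurst exponent h(q) off a log-log fit of the q-th-order fluctuation function. A minimal 1D sketch (the window sizes, linear detrending, and q=2 are illustrative choices; the paper's 2D texture variant is not reproduced here):

```python
import numpy as np

def mfdfa_h(x, scales, q=2.0):
    """Return the generalized Hurst exponent h(q) via a log-log fit."""
    y = np.cumsum(x - np.mean(x))            # profile (integrated series)
    Fq = []
    for s in scales:
        n_seg = len(y) // s
        sq_res = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)     # linear detrending per window
            res = seg - np.polyval(coef, t)
            sq_res.append(np.mean(res ** 2))
        sq_res = np.asarray(sq_res)
        Fq.append(np.mean(sq_res ** (q / 2.0)) ** (1.0 / q))
    slope, _ = np.polyfit(np.log(scales), np.log(Fq), 1)
    return slope
```

    For uncorrelated noise h(2) is about 0.5; long-range correlated (persistent) signals push it toward 1, and the spread of h(q) over q is what makes the feature "multi"-fractal.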

  16. Broadband Focusing Acoustic Lens Based on Fractal Metamaterials

    PubMed Central

    Song, Gang Yong; Huang, Bei; Dong, Hui Yuan; Cheng, Qiang; Cui, Tie Jun

    2016-01-01

    Acoustic metamaterials are artificial structures which can manipulate sound waves through their unconventional effective properties. Departing from the locally resonant elements proposed in earlier studies, we propose an alternative route to realizing acoustic metamaterials with both low loss and large refractive indices. We describe a new kind of acoustic metamaterial element with fractal geometry. Due to the self-similar properties of the proposed structure, broadband acoustic responses arise within a wide frequency range, making it a good candidate for a number of applications, such as super-resolution imaging and acoustic tunneling. A flat acoustic lens is designed and experimentally verified using this approach, showing excellent focusing ability between 2 kHz and 5 kHz in the measured results. PMID:27782216

  17. Effects of Gas Pressure on the Failure Characteristics of Coal

    NASA Astrophysics Data System (ADS)

    Xie, Guangxiang; Yin, Zhiqiang; Wang, Lei; Hu, Zuxiang; Zhu, Chuanqi

    2017-07-01

    Several experiments were conducted using self-developed equipment for visual gas-solid coupling mechanics. The raw coal specimens were stored in a container filled with gas (99% CH4) under different initial gas pressure conditions (0.0, 0.5, 1.0, and 1.5 MPa) for 24 h prior to testing. Then, the specimens were tested in a rock-testing machine, and the mechanical properties, surface deformation and failure modes were recorded using strain gauges, an acoustic emission (AE) system and a camera. An analysis of the fractals of fragments and dissipated energy was performed to understand the changes observed in the stress-strain and crack propagation behaviour of the gas-containing coal specimens. The results demonstrate that increased gas pressure leads to a reduction in the uniaxial compression strength (UCS) of gas-containing coal and the critical dilatancy stress. The AE, surface deformation and fractal analysis results show that the failure mode changes with the gas state. Notably, a higher initial gas pressure causes the damage cracks and failure of the gas-containing coal samples to become more severe. The dissipated-energy characteristic of the failure process of a gas-containing coal sample is analysed using a combination of fractal theory and energy principles. Based on fracture-mechanics analyses and calculations, the stress intensity factor at crack tips increases as the gas pressure increases, which is the main cause of the reduction in the UCS and critical dilatancy stress and explains the influence of gas on coal failure. More severe failure occurs in gas-containing coal under high gas pressure and low exterior load.

  18. A Posteriori Restoration of Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted to this application.

  19. Compressive hyperspectral and multispectral imaging fusion

    NASA Astrophysics Data System (ADS)

    Espitia, Óscar; Castillo, Sergio; Arguello, Henry

    2016-05-01

    Image fusion is a valuable framework that combines two or more images of the same scene from one or multiple sensors, improving resolution and increasing the interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images that involve large amounts of redundant data, ignoring the highly correlated structure of the datacube along the spatial and spectral dimensions. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing data redundancy through different sampling patterns. This work presents a compressed HS and MS image fusion approach that uses a high-dimensional joint sparse model, formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed by using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results: a reliable reconstruction of a high spectral and spatial resolution image can be achieved using as little as 50% of the datacube.

  20. Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-10-01

    Conventional compressive sensing is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of image reconstruction suffers. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for image sparsity is proposed based on the two-dimensional discrete cosine transform (2D DCT). Given an energy threshold, the DCT coefficients are energy-normalized and sorted in descending order, and the sparsity of the image is obtained as the proportion of dominant coefficients. Simulation results show that the method estimates image sparsity effectively and provides a practical basis for selecting the number of compressive observations. Because the number of observations is chosen from the sparsity estimated under the given energy threshold, the proposed method can ensure the quality of image reconstruction.
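    The sparsity-estimation step described above (2D DCT, energy normalization, descending sort, proportion of dominant coefficients needed to reach the energy threshold) can be sketched as follows; the orthonormal DCT-II construction and the 0.99 energy threshold are illustrative assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def estimate_sparsity(img, energy_threshold=0.99):
    """Fraction of DCT coefficients needed to capture the energy threshold."""
    d = dct_matrix(img.shape[0])
    coeffs = d @ img @ d.T                   # separable 2D DCT (square image)
    e = coeffs.ravel() ** 2
    e = np.sort(e)[::-1] / e.sum()           # normalize energy, sort descending
    k = int(np.searchsorted(np.cumsum(e), energy_threshold) + 1)
    return k / img.size
```

    A smooth image yields a small dominant-coefficient fraction (highly compressible), while white noise needs most of its coefficients, which would dictate many more compressive observations.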

  1. Verifying the Dependence of Fractal Coefficients on Different Spatial Distributions

    NASA Astrophysics Data System (ADS)

    Gospodinov, Dragomir; Marekova, Elisaveta; Marinov, Alexander

    2010-01-01

    A fractal distribution requires that the number of objects larger than a specific size r has a power-law dependence on that size: N(r) = C/r^D, i.e., N(r) ∝ r^-D, where D is the fractal dimension. Usually the correlation integral is calculated to estimate the correlation fractal dimension of epicentres. A `box-counting' procedure can also be applied, giving the `capacity' fractal dimension. The fractal dimension can be an integer, in which case it is equivalent to a Euclidean dimension (zero for a point, one for a segment, two for a square and three for a cube). In general, however, the fractal dimension is fractional, which is the origin of the term `fractal'. The use of a power law to statistically describe a set of events or phenomena reveals the lack of a characteristic length scale; that is, fractal objects are scale invariant. Scale invariance and chaotic behavior underlie many natural hazard phenomena. Many studies of earthquakes reveal that their occurrence exhibits scale-invariant properties, so the fractal dimension can characterize them. It was first confirmed that both aftershock rate decay in time and earthquake size distribution follow a power law. More recently, many other earthquake distributions have been found to be scale-invariant. The spatial distributions of both regional seismicity and aftershocks show fractal features. Earthquake spatial distributions are considered fractal, but indirectly. There are two possible models which result in fractal earthquake distributions. The first considers that a fractal distribution of faults leads to a fractal distribution of earthquakes, because each earthquake is characteristic of the fault on which it occurs. The second assumes that each fault has a fractal distribution of earthquakes. Observations strongly favour the first hypothesis.
    Fractal coefficient analysis offers important advantages in examining earthquake spatial distributions: it provides a simple way to quantify scale-invariant distributions of complex objects or phenomena with a small number of parameters, and it is becoming evident that the applicability of fractal distributions to geological problems may have a more fundamental basis, since chaotic behaviour could underlie geotectonic processes and the applicable statistics are often fractal. The application of fractal distribution analysis has, however, some specific difficulties: it is usually hard to give an adequate interpretation of the fractal coefficient values obtained for earthquake epicentre or hypocentre distributions. In this paper we therefore pursued a different goal: to verify how a fractal coefficient depends on different spatial distributions. We simulated earthquake spatial data by generating random points, first in a 3D space (a cube), then in a parallelepiped with one side progressively diminished, and continued this procedure into 2D and 1D space. For each simulated data set we calculated the points' fractal coefficient (the correlation fractal dimension of epicentres) and then checked for correlation between the coefficient values and the type of spatial distribution. In this way one can obtain a set of standard fractal coefficient values for varying spatial distributions, which can then be used when real earthquake data are analyzed, by comparing the real-data coefficients to the standard ones. Such an approach can help in interpreting fractal analysis results through different types of spatial distributions.
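    The simulation described above (random points in spaces of varying dimension, then estimation of the correlation fractal dimension) can be sketched as follows; the choice of distance thresholds r and the plain pair-counting estimator of the correlation integral are illustrative:

```python
import numpy as np

def correlation_dimension(points, r_values):
    """Estimate the correlation fractal dimension D.

    The correlation integral C(r), the fraction of point pairs closer
    than r, scales as C(r) ~ r^D, so D is the slope of log C vs. log r.
    """
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    pair_d = dist[np.triu_indices(len(points), k=1)]
    c = np.array([np.mean(pair_d < r) for r in r_values])
    slope, _ = np.polyfit(np.log(r_values), np.log(c), 1)
    return slope

rng = np.random.default_rng(1)
r = np.array([0.02, 0.04, 0.08, 0.16])
pts_2d = rng.random((1500, 2))            # uniform points in a unit square
d2 = correlation_dimension(pts_2d, r)     # should land near 2
```

    Repeating this for points confined to a segment or filling a cube recovers dimensions near 1 and 3, giving the "standard" coefficient values against which real epicentre data can be compared.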

  2. Quality of reconstruction of compressed off-axis digital holograms by frequency filtering and wavelets.

    PubMed

    Cheremkhin, Pavel A; Kurbatova, Ekaterina A

    2018-01-01

    Compression of digital holograms can significantly help with the storage and transmission of objects and data in 2D and 3D form, and with their reconstruction. Compression of standard images by wavelet-based methods allows high compression ratios (up to 20-50 times) with minimal loss of quality. In the case of digital holograms, applying wavelets directly does not achieve high compression; however, additional preprocessing and postprocessing can afford significant compression of holograms with acceptable quality of the reconstructed images. In this paper the application of wavelet transforms to compression of off-axis digital holograms is considered. A combined technique is investigated, based on zero- and twin-order elimination, wavelet compression of the amplitude and phase components of the obtained Fourier spectrum, and further compression of the wavelet coefficients by thresholding and quantization. Numerical experiments on reconstruction of images from the compressed holograms are performed, and a comparative analysis of the applicability of various wavelets and of methods for additional compression of wavelet coefficients is carried out. Optimum compression parameters for these methods can thereby be estimated. The volume of holographic data was reduced by up to a factor of 190.
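    The thresholding-and-quantization stage applied to the transform coefficients can be illustrated with a single-level 2D Haar transform (a stand-in for the wavelet families compared in the paper; the keep fraction and quantization step below are arbitrary assumptions):

```python
import numpy as np

def haar2d(a):
    """One level of the orthonormal 2D Haar transform."""
    def step(x):  # averages/details along the last axis
        s = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2)
        d = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2)
        return np.concatenate([s, d], axis=-1)
    return step(step(a).swapaxes(0, 1)).swapaxes(0, 1)

def ihaar2d(c):
    def istep(x):
        n = x.shape[-1] // 2
        s, d = x[..., :n], x[..., n:]
        out = np.empty_like(x)
        out[..., ::2] = (s + d) / np.sqrt(2)
        out[..., 1::2] = (s - d) / np.sqrt(2)
        return out
    return istep(istep(c.swapaxes(0, 1)).swapaxes(0, 1))

def compress(a, keep=0.1, step=0.05):
    """Hard-threshold all but the largest coefficients, then quantize."""
    c = haar2d(a)
    thr = np.quantile(np.abs(c), 1 - keep)
    c = np.where(np.abs(c) >= thr, c, 0.0)   # thresholding
    c = np.round(c / step) * step            # uniform quantization
    return ihaar2d(c)
```

    Zeroed coefficients cost almost nothing after entropy coding, which is where the bulk of the compression comes from.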

  3. 3-D in vivo brain tumor geometry study by scaling analysis

    NASA Astrophysics Data System (ADS)

    Torres Hoyos, F.; Martín-Landrove, M.

    2012-02-01

    A new method, based on scaling analysis, is used to calculate fractal dimension and local roughness exponents to characterize in vivo 3-D tumor growth in the brain. Image acquisition was made according to the standard protocol used for brain radiotherapy and radiosurgery, i.e., axial, coronal and sagittal magnetic resonance T1-weighted images, and comprising the brain volume for image registration. Image segmentation was performed by the application of the k-means procedure upon contrasted images. We analyzed glioblastomas, astrocytomas, metastases and benign brain tumors. The results show significant variations of the parameters depending on the tumor stage and histological origin.

  4. Imaging industry expectations for compressed sensing in MRI

    NASA Astrophysics Data System (ADS)

    King, Kevin F.; Kanwischer, Adriana; Peters, Rob

    2015-09-01

    Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. 
Algorithm concerns include the decision of which algorithms to implement as well as the problem of optimal setting of adjustable parameters. It will take imaging vendors several years to work through these challenges and provide solutions for a wide range of applications.

  5. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assume fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These assumptions no longer hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that can be accessed efficiently from JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.
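    The controlled-lossy idea mentioned for LERC, bounding the worst-case per-pixel error rather than the bit rate, can be sketched as error-bounded quantization (this shows only the core principle; the actual LERC block partitioning and bit packing are not reproduced):

```python
import numpy as np

def encode_block(values, max_error):
    """Error-bounded quantization sketch.

    Values are offset by the block minimum and quantized with step
    2*max_error, so every decoded value is within max_error of the
    original. The integer array is what a bit-packing stage would store.
    """
    base = values.min()
    step = 2.0 * max_error
    q = np.round((values - base) / step).astype(np.int64)
    return base, step, q

def decode_block(base, step, q):
    return base + q * step
```

    For smooth rasters such as elevation data the integers stay small, so they pack into few bits per pixel while the error guarantee still holds everywhere.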

  6. 3D Porous Architecture of Stacks of β-TCP Granules Compared with That of Trabecular Bone: A microCT, Vector Analysis, and Compression Study.

    PubMed

    Chappard, Daniel; Terranova, Lisa; Mallet, Romain; Mercier, Philippe

    2015-01-01

    The 3D arrangement of porous granular biomaterials usable to fill bone defects has received little study. Granular biomaterials occupy 3D space when packed together in a manner that creates a porosity suitable for the invasion of vascular and bone cells. Granules of beta-tricalcium phosphate (β-TCP) were prepared with either 12.5 or 25 g of β-TCP powder in the same volume of slurry. When the granules were placed in a test tube, this produced 3D stacks with a high (HP) or low (LP) porosity, respectively. Stacks of granules mimic the filling of a bone defect by a surgeon. The aim of this study was to compare the porosity of stacks of β-TCP granules with that of cores of trabecular bone. Biomechanical compression tests were done on the granule stacks. Bone cylinders prepared from calf tibia plateaus constituted the high-density (HD) blocks; low-density (LD) blocks were harvested from aged cadaver tibias. Microcomputed tomography was used on the β-TCP granule stacks and the trabecular bone cores to determine porosity and specific surface. A vector-projection algorithm was used to image porosity employing a frontal plane image, which was constructed line by line from all images of a microCT stack. Stacks of HP granules had porosity (75.3 ± 0.4%) and fractal lacunarity (0.043 ± 0.007) intermediate between those of HD (respectively 69.1 ± 6.4%, p < 0.05 and 0.087 ± 0.045, p < 0.05) and LD bone (respectively 88.8 ± 1.57% and 0.037 ± 0.014), but exhibited a higher surface density (5.56 ± 0.11 mm(2)/mm(3) vs. 2.06 ± 0.26 for LD, p < 0.05). LP granular arrangements created large pores coexisting with dense areas of material. Frontal plane analysis evidenced a more regular arrangement of β-TCP granules than of bone trabeculae. Stacks of HP granules represent a scaffold that resembles trabecular bone in its porous microarchitecture.

  7. New image compression scheme for digital angiocardiography application

    NASA Astrophysics Data System (ADS)

    Anastassopoulos, George C.; Lymberopoulos, Dimitris C.; Kotsopoulos, Stavros A.; Kokkinakis, George C.

    1993-06-01

    The present paper deals with the development and evaluation of a new compression scheme for angiocardiography images. This scheme provides considerable compression of the medical data file through two different stages: the first stage removes the redundancy within a single frame, while the second stage removes the redundancy among sequential frames. In both stages the data compression ratio can be easily adjusted according to the needs of the angiocardiography application, where still or moving (slow- or full-motion) images are handled. The developed scheme has been tailored to the real needs of diagnosis-oriented conferencing and teleworking processes, where Unified Image Viewing facilities are required.

  8. Multi-rate, real time image compression for images dominated by point sources

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.; Harris, Richard W.

    1993-01-01

    An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.
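    The front end of such a pipeline (mean removal followed by adaptive threshold truncation, so that point-source peaks survive exactly) can be sketched as follows; the mean-plus-k·sigma threshold is an illustrative choice, and the vector quantization and modified Huffman stages are omitted:

```python
import numpy as np

def truncate_peaks(block, k=3.0):
    """Keep samples that stand out from the background exactly; zero the rest.

    Returns the block mean plus a sparse residual: peak pixels are
    reproduced losslessly as mean + residual, background is discarded.
    """
    residual = block - block.mean()
    threshold = k * residual.std()   # adaptive: scales with block activity
    kept = np.where(residual > threshold, residual, 0.0)
    return block.mean(), kept
```

    A sparse residual like this compresses well, since only the positions and values of the peaks need to be coded losslessly.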

  9. Science-based Region-of-Interest Image Compression

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Castano, R.; Dolinar, S.; Klimesh, M.; Mukai, R.

    2004-01-01

    As the number of currently active space missions increases, so does competition for Deep Space Network (DSN) resources. Even given unbounded DSN time, power and weight constraints onboard the spacecraft limit the maximum possible data transmission rate. These factors highlight a critical need for very effective data compression schemes. Images tend to be the most bandwidth-intensive data, so image compression methods are particularly valuable. In this paper, we describe a method for prioritizing regions in an image based on their scientific value. Using a wavelet compression method that can incorporate priority information, we ensure that the highest priority regions are transmitted with the highest fidelity.

  10. Least Median of Squares Filtering of Locally Optimal Point Matches for Compressible Flow Image Registration

    PubMed Central

    Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel that lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. PMID:22797602
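
    The least-median-of-squares idea (choose the model minimising the median, not the sum, of squared residuals, so up to roughly half the point matches can be gross outliers) can be illustrated on a 2D line fit; the paper's actual model is a 3D compressible-flow displacement, and this toy is only an assumption-laden analogue.

```python
from itertools import combinations

def lmeds_line(points):
    # Least median of squares: among candidate lines through point pairs,
    # keep the one with the smallest MEDIAN squared residual, which is
    # robust to spatially inaccurate (outlier) point matches.
    best = None
    for (x1, y1), (x2, y2) in combinations(points, 2):
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        res = sorted((y - (a * x + b)) ** 2 for x, y in points)
        med = res[len(res) // 2]
        if best is None or med < best[0]:
            best = (med, a, b)
    return best[1], best[2]

# Points on y = 2x plus two gross outliers (erroneous point matches).
pts = [(0, 0), (1, 2), (2, 4), (3, 6), (4, 30), (5, -7)]
a, b = lmeds_line(pts)
inliers = [p for p in pts if abs(p[1] - (a * p[0] + b)) < 1.0]
```

    An ordinary least-squares fit on the same data would be dragged toward the outliers; the median criterion ignores them entirely.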

  11. Small angle X-ray scattering analysis of the effect of cold compaction of Al/MoO3 thermite composites.

    PubMed

    Hammons, Joshua A; Wang, Wei; Ilavsky, Jan; Pantoya, Michelle L; Weeks, Brandon L; Vaughn, Mark W

    2008-01-07

    Nanothermites composed of aluminum and molybdenum trioxide (MoO(3)) have a high energy density and are attractive energetic materials. To enhance the surface contact between the spherical Al nanoparticles and the sheet-like MoO(3) particles, the mixture can be cold-pressed into a pelleted composite. However, it was found that the burn rate of the pellets decreased as the density of the pellets increased, contrary to expectation. Ultra-small angle X-ray scattering (USAXS) data and scanning electron microscopy (SEM) were used to elucidate the internal structure of the Al nanoparticles, and nanoparticle aggregate in the composite. Results from both SEM imaging and USAXS analysis indicate that as the density of the pellet increased, a fraction of the Al nanoparticles are compressed into sintered aggregates. The sintered Al nanoparticles lost contrast after forming the larger aggregates and no longer scattered X-rays as individual particles. The sintered aggregates hinder the burn rate, since the Al nanoparticles that make them up can no longer diffuse freely as individual particles during combustion. Results suggest a qualitative relationship for the probability that nanoparticles will sinter, based on the particle sizes and the initial structure of their respective agglomerates, as characterized by the mass fractal dimension.

  12. Design of Restoration Method Based on Compressed Sensing and TwIST Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Fei; Piao, Yan

    2018-04-01

    In order to effectively improve the subjective and objective quality of degraded images at low sampling rates while saving storage space and reducing computational complexity, this paper proposes a joint restoration algorithm combining compressed sensing and two-step iterative shrinkage/thresholding (TwIST). The algorithm applies the TwIST algorithm, originally used in image restoration, to compressed sensing theory. A small amount of sparse high-frequency information is obtained in the frequency domain, and the TwIST algorithm based on compressed sensing theory is then used to accurately reconstruct the high-frequency image. The experimental results show that the proposed algorithm achieves better subjective visual effects and objective quality while accurately restoring degraded images.
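
    The shrinkage operator at the heart of TwIST-type methods can be sketched with a plain one-step iterative shrinkage-thresholding (ISTA) loop recovering a sparse vector from underdetermined measurements; TwIST proper adds a two-step (second-order) recursion on top of the same shrinkage step. The matrix, sizes, and parameters below are illustrative assumptions.

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def soft(v, t):
    # Soft thresholding: the proximal operator of the l1 norm and the
    # "shrinkage" step shared by ISTA- and TwIST-style algorithms.
    return [max(abs(x) - t, 0.0) * (1 if x > 0 else -1) for x in v]

def ista(A, y, lam=0.1, step=0.2, iters=200):
    # Gradient step on ||Ax - y||^2 followed by shrinkage.
    At = list(map(list, zip(*A)))
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [yi - ai for yi, ai in zip(y, matvec(A, x))]
        g = matvec(At, r)
        x = soft([xi + step * gi for xi, gi in zip(x, g)], step * lam)
    return x

A = [[1.0, 0.5, 0.0, 0.2],
     [0.0, 1.0, 0.3, 0.0],
     [0.4, 0.0, 1.0, 0.1]]   # 3 measurements of a 4-dimensional signal
x_true = [0.0, 3.0, 0.0, 0.0]
y = matvec(A, x_true)
x_hat = ista(A, y)           # sparse estimate close to x_true
```

    The l1 penalty biases the recovered coefficient slightly below its true value (here toward about 2.92 rather than 3.0); smaller `lam` reduces the bias at the cost of less aggressive sparsification.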

  13. The FBI compression standard for digitized fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
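
    The wavelet/scalar-quantization idea (transform, then uniformly quantize each subband with its own step size) can be shown in miniature with a one-level 1D Haar transform; the real WSQ standard uses a 64-subband 2D wavelet decomposition and Huffman coding, so the numbers here are purely illustrative.

```python
def haar_1d(v):
    # One level of the Haar wavelet: pairwise averages (low band)
    # and differences (high band).
    avg = [(a + b) / 2 for a, b in zip(v[0::2], v[1::2])]
    dif = [(a - b) / 2 for a, b in zip(v[0::2], v[1::2])]
    return avg, dif

def quantize(band, step):
    # Uniform scalar quantization: each subband gets its own step size,
    # coarser for high-frequency detail (the WSQ idea in miniature).
    return [round(x / step) for x in band]

def dequantize(q, step):
    return [i * step for i in q]

row = [100, 102, 110, 130, 90, 70, 100, 100]
low, high = haar_1d(row)
q_low = quantize(low, 2)     # fine step for the smooth band
q_high = quantize(high, 8)   # coarse step for the detail band
rec_low, rec_high = dequantize(q_low, 2), dequantize(q_high, 8)
rec = []                     # inverse Haar: a = low + high, b = low - high
for l, h in zip(rec_low, rec_high):
    rec += [l + h, l - h]
```

    Most of the bit savings come from the coarsely quantized detail band, while the reconstruction error stays within a couple of gray levels.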

  14. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  15. Fractal electrodynamics via non-integer dimensional space approach

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2015-09-01

    Using the recently suggested vector calculus for non-integer dimensional space, we consider electrodynamics problems in the isotropic case. This calculus allows us to describe fractal media in the framework of continuum models with non-integer dimensional space. We consider electric and magnetic fields of fractal media with charges and currents in the framework of continuum models with non-integer dimensional spaces. Applications of the fractal Gauss's law, the fractal Ampere's circuital law, the fractal Poisson equation for the electric potential, and an equation for the fractal stream of charges are suggested. Lorentz invariance and the speed of light in fractal electrodynamics are discussed. An expression for the effective refractive index of non-integer dimensional space is suggested.

  16. Fractal nematic colloids

    NASA Astrophysics Data System (ADS)

    Hashemi, S. M.; Jagodič, U.; Mozaffari, M. R.; Ejtehadi, M. R.; Muševič, I.; Ravnik, M.

    2017-01-01

    Fractals are remarkable examples of self-similarity where a structure or dynamic pattern is repeated over multiple spatial or time scales. However, little is known about how fractal stimuli such as fractal surfaces interact with their local environment if it exhibits order. Here we show geometry-induced formation of fractal defect states in Koch nematic colloids, exhibiting fractal self-similarity better than 90% over three orders of magnitude in length scale, from micrometers to nanometers. We produce polymer Koch-shaped hollow colloidal prisms of three successive fractal iterations by direct laser writing, and characterize their coupling with the nematic by polarization microscopy and numerical modelling. Explicit generation of topological defect pairs is found, with the number of defects following an exponential-law dependence and reaching a few hundred already at fractal iteration four. This work demonstrates a route for the generation of fractal topological defect states in responsive soft matter.

  17. Three-Dimensional Surface Parameters and Multi-Fractal Spectrum of Corroded Steel

    PubMed Central

    Shanhua, Xu; Songbo, Ren; Youde, Wang

    2015-01-01

    To study the multi-fractal behavior of corroded steel surfaces, a range of fractal surfaces of corroded Q235 steel were constructed using the Weierstrass-Mandelbrot method with high overall accuracy. The multi-fractal spectrum of the fractal surface of corroded steel was calculated to study the multi-fractal characteristics of the W-M corroded surface. Based on the shape of the multi-fractal spectrum of the corroded steel surface, the least squares method was applied to the quadratic fitting of the multi-fractal spectrum of the corroded surface. The fitting function was quantitatively analyzed to simplify the calculation of the multi-fractal characteristics of the corroded surface. The results showed that the multi-fractal spectrum of the corroded surface was fitted well by quadratic curve fitting, and the evolution rules and trends were forecast accurately. The findings can be applied to research on the mechanisms of corroded surface formation of steel and provide a new approach for the establishment of corrosion damage constitutive models of steel. PMID:26121468
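
    A 1D Weierstrass-Mandelbrot profile, the building block of the surfaces the paper constructs, can be evaluated directly from its series definition; the parameters below (fractal dimension D, frequency scaling gamma, number of terms) are illustrative choices, not the paper's.

```python
import math

def wm_profile(x, D=1.5, gamma=1.5, n_max=20):
    # Truncated Weierstrass-Mandelbrot fractal profile:
    #   z(x) = sum_n gamma**((D - 2) * n) * cos(2 * pi * gamma**n * x),
    # with fractal dimension D in (1, 2) for a 1D profile.
    return sum(gamma ** ((D - 2) * n) * math.cos(2 * math.pi * gamma ** n * x)
               for n in range(n_max))

# Sample a rough "corroded surface" height profile on [0, 1).
profile = [wm_profile(i / 256) for i in range(256)]
```

    Raising D toward 2 weights the high-frequency terms more heavily and produces a visibly rougher profile, which is how roughness is tuned when synthesizing corroded-surface models.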

  18. Three-Dimensional Surface Parameters and Multi-Fractal Spectrum of Corroded Steel.

    PubMed

    Shanhua, Xu; Songbo, Ren; Youde, Wang

    2015-01-01

    To study the multi-fractal behavior of corroded steel surfaces, a range of fractal surfaces of corroded Q235 steel were constructed using the Weierstrass-Mandelbrot method with high overall accuracy. The multi-fractal spectrum of the fractal surface of corroded steel was calculated to study the multi-fractal characteristics of the W-M corroded surface. Based on the shape of the multi-fractal spectrum of the corroded steel surface, the least squares method was applied to the quadratic fitting of the multi-fractal spectrum of the corroded surface. The fitting function was quantitatively analyzed to simplify the calculation of the multi-fractal characteristics of the corroded surface. The results showed that the multi-fractal spectrum of the corroded surface was fitted well by quadratic curve fitting, and the evolution rules and trends were forecast accurately. The findings can be applied to research on the mechanisms of corroded surface formation of steel and provide a new approach for the establishment of corrosion damage constitutive models of steel.

  19. Hyperspectral image compressing using wavelet-based method

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross-correlation between different bands and proposed an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix of the hyperspectral images between different bands. A wavelet-based algorithm is then applied to each subspace. Finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.

  20. Coil Compression for Accelerated Imaging with Cartesian Sampling

    PubMed Central

    Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael

    2012-01-01

    MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High quality compression of in-vivo 3D data from a 32 channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
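
    The core of SVD-based coil compression, reducing many physical channels to a few virtual coils that retain most of the signal energy, can be sketched as below on synthetic data. The paper's contribution goes further (per-location compression along fully sampled dimensions plus an alignment step), which this global sketch deliberately omits; all sizes and the data model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 32 physical channels whose signals live mostly in a
# 6-dimensional subspace (an illustrative stand-in for real coil data).
mixing = rng.standard_normal((32, 6))
sources = rng.standard_normal((6, 1000))
data = mixing @ sources + 0.01 * rng.standard_normal((32, 1000))

# SVD of the channel x sample matrix; the leading left singular vectors
# define virtual coils capturing most of the signal energy.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
n_virtual = 6
compress = U[:, :n_virtual].conj().T   # 6 x 32 compression matrix
virtual = compress @ data              # 6 virtual coils instead of 32

energy_kept = (s[:n_virtual] ** 2).sum() / (s ** 2).sum()
```

    Downstream reconstruction then operates on 6 channels instead of 32, cutting memory and computation roughly fivefold while discarding almost no signal.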

  1. Subjective evaluations of integer cosine transform compressed Galileo solid state imagery

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Gold, Yaron; Grant, Terry; Chuang, Sherry

    1994-01-01

    This paper describes a study conducted for the Jet Propulsion Laboratory, Pasadena, California, using 15 evaluators from 12 institutions involved in the Galileo Solid State Imaging (SSI) experiment. The objective of the study was to determine the impact of integer cosine transform (ICT) compression using specially formulated quantization (q) tables and compression ratios on acceptability of the 800 x 800 x 8 monochromatic astronomical images as evaluated visually by Galileo SSI mission scientists. Fourteen different images in seven image groups were evaluated. Each evaluator viewed two versions of the same image side by side on a high-resolution monitor; each was compressed using a different q level. First the evaluators selected the image with the highest overall quality to support them in their visual evaluations of image content. Next they rated each image using a scale from one to five indicating its judged degree of usefulness. Up to four preselected types of images with and without noise were presented to each evaluator.

  2. Computational Simulation of Breast Compression Based on Segmented Breast and Fibroglandular Tissues on Magnetic Resonance Images

    PubMed Central

    Shih, Tzu-Ching; Chen, Jeon-Hor; Liu, Dongxu; Nie, Ke; Sun, Lizhi; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying

    2010-01-01

    This study presents a finite element based computational model to simulate the three-dimensional deformation of the breast and the fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and the craniocaudal and mediolateral oblique compression as used in mammography was applied. The geometry of whole breast and the segmented fibroglandular tissues within the breast were reconstructed using triangular meshes by using the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the non-linear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in 4 cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these 4 cases at 60% compression ratio was in the range of 5-7 cm, which is the typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at 60% compression ratio was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on MRI, which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density measurements needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities – such as MRI, mammography, whole breast ultrasound, and molecular imaging – that are performed using different body positions and different compression conditions. PMID:20601773

  3. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
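
    The order-independence argument, always performing the globally best merge first rather than merging in scan order, can be illustrated on a 1D row of pixels; the mean-difference merge criterion and threshold are my assumptions for the sketch.

```python
def best_merge_segment(pixels, threshold):
    # "Globally best merge first" segmentation sketch: every pixel starts
    # as its own region; at each step the two ADJACENT regions with the
    # smallest mean difference are merged, so the result does not depend
    # on the order in which image portions are visited.
    regions = [[p] for p in pixels]
    while len(regions) > 1:
        means = [sum(r) / len(r) for r in regions]
        diffs = [abs(means[i] - means[i + 1]) for i in range(len(regions) - 1)]
        i = min(range(len(diffs)), key=diffs.__getitem__)
        if diffs[i] > threshold:
            break                # no sufficiently similar neighbours remain
        regions[i] = regions[i] + regions.pop(i + 1)
    return regions

row = [10, 11, 10, 50, 52, 51, 90, 91]
segments = best_merge_segment(row, threshold=5)
```

    The resulting segments (one region per homogeneous run) could then feed either a region-based coder for compression or object statistics for image analysis, as in the record above.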

  4. Depth to Curie temperature across the central Red Sea from magnetic data using the de-fractal method

    NASA Astrophysics Data System (ADS)

    Salem, Ahmed; Green, Chris; Ravat, Dhananjay; Singh, Kumar Hemant; East, Paul; Fairhead, J. Derek; Mogren, Saad; Biegert, Ed

    2014-06-01

    The central Red Sea rift is considered to be an embryonic ocean. It is characterised by high heat flow, with more than 90% of the heat flow measurements exceeding the world mean and high values extending to the coasts - providing good prospects for geothermal energy resources. In this study, we aim to map the depth to the Curie isotherm (580 °C) in the central Red Sea based on magnetic data. A modified spectral analysis technique, the “de-fractal spectral depth method” is developed and used to estimate the top and bottom boundaries of the magnetised layer. We use a mathematical relationship between the observed power spectrum due to fractal magnetisation and an equivalent random magnetisation power spectrum. The de-fractal approach removes the effect of fractal magnetisation from the observed power spectrum and estimates the parameters of depth to top and depth to bottom of the magnetised layer using iterative forward modelling of the power spectrum. We applied the de-fractal approach to 12 windows of magnetic data along a profile across the central Red Sea from onshore Sudan to onshore Saudi Arabia. The results indicate variable magnetic bottom depths ranging from 8.4 km in the rift axis to about 18.9 km in the marginal areas. Comparison of these depths with published Moho depths, based on seismic refraction constrained 3D inversion of gravity data, showed that the magnetic bottom in the rift area corresponds closely to the Moho, whereas in the margins it is considerably shallower than the Moho. Forward modelling of heat flow data suggests that depth to the Curie isotherm in the centre of the rift is also close to the Moho depth. Thus Curie isotherm depths estimated from magnetic data may well be imaging the depth to the Curie temperature along the whole profile. Geotherms constrained by the interpreted Curie isotherm depths have subsequently been calculated at three points across the rift - indicating the variation in the likely temperature profile with depth.
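
    The spectral relations behind the de-fractal step can be summarised as follows. This is the commonly used fractal-magnetisation form of the radially averaged power spectrum written from the abstract's description; the paper's exact formulation may differ, and the symbols (wavenumber k, layer top Z_t, layer bottom Z_b, fractal exponent β, constant C) are as conventionally defined.

```latex
% Radially averaged power spectrum of a layer magnetised between
% depths Z_t and Z_b, with fractal (scaling) magnetisation:
P(k) \;=\; C\, k^{-\beta}\, e^{-2 k Z_t}\,
           \bigl(1 - e^{-k\,(Z_b - Z_t)}\bigr)^{2}

% The de-fractal step multiplies the observed spectrum by k^{\beta},
% recovering the equivalent random-magnetisation spectrum whose
% parameters (Z_t, Z_b) are then estimated by iterative forward
% modelling:
P_{\mathrm{defractal}}(k) \;=\; k^{\beta}\, P(k)
\;=\; C\, e^{-2 k Z_t}\,\bigl(1 - e^{-k\,(Z_b - Z_t)}\bigr)^{2}
```

    The Curie depth is then read off as the fitted Z_b, the depth at which the layer loses its magnetisation.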

  5. Fractal-like kinetics of intracellular enzymatic reactions: a chemical framework of endotoxin tolerance and a possible non-specific contribution of macromolecular crowding to cross-tolerance.

    PubMed

    Vasilescu, Catalin; Olteanu, Mircea; Flondor, Paul; Calin, George A

    2013-09-14

    The response to endotoxin (LPS), and subsequent signal transduction, leads to the production of cytokines such as tumor necrosis factor-α (TNF-α) by innate immune cells. Cells or organisms pretreated with endotoxin enter into a transient state of hyporesponsiveness, referred to as endotoxin tolerance (ET), which represents a particular case of negative preconditioning. Despite recent progress in understanding the molecular basis of ET, there is no consensus yet on the primary mechanism responsible for ET and for the more complex cases of cross tolerance. In this study, we examined the consequences of macromolecular crowding (MMC) and of fractal-like kinetics (FLK) of intracellular enzymatic reactions on the LPS signaling machinery. We hypothesized that this particular type of enzyme kinetics may explain the development of the ET phenomenon. Our aim in the present study was to characterize the chemical kinetics framework in ET and determine whether fractal-like kinetics explains, at least in part, ET. We developed an ordinary differential equation (ODE) mathematical model that took into account the links between MMC and the LPS signaling machinery leading to ET. We proposed that the intracellular fractal environment (MMC) contributes to ET and developed two mathematical models of enzyme kinetics: one based on Kopelman's fractal-like kinetics framework and the other based on Savageau's power law model. Kopelman's model provides a good image of the potential influence of a fractal intracellular environment (MMC) on ET. The Savageau power law model also partially explains ET. The computer simulations supported the hypothesis that MMC and FLK may play a role in ET. The model highlights the links between the organization of the intracellular environment, MMC, and the LPS signaling machinery leading to ET. Our FLK-based model does not minimize the role of the numerous negative regulatory factors. It simply draws attention to the fact that macromolecular crowding can contribute significantly to the induction of ET by imposing geometric constraints and particular chemical kinetics on the intracellular reactions.
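
    The distinctive feature of Kopelman's fractal-like kinetics, a rate "constant" that decays in time as k(t) = k0·t^(-h) with 0 < h < 1, is easy to simulate; the rate parameters and the simple first-order decay used below are illustrative assumptions, not the paper's full ODE model.

```python
def fractal_kinetics(s0=1.0, k0=0.5, h=0.4, dt=1e-3, t_end=10.0):
    # Kopelman fractal-like kinetics sketch: in a crowded, fractal-like
    # medium the rate coefficient decays in time, k(t) = k0 * t**(-h)
    # with 0 < h < 1; classical kinetics is the special case h = 0.
    # Euler integration of d[S]/dt = -k(t) [S], starting at t = dt to
    # avoid the t = 0 singularity.
    s, t, traj = s0, dt, []
    while t <= t_end:
        s -= k0 * t ** (-h) * s * dt
        traj.append((t, s))
        t += dt
    return traj

classical = fractal_kinetics(h=0.0)   # ordinary first-order decay
crowded = fractal_kinetics(h=0.4)     # reaction slows as it proceeds
```

    Because k(t) shrinks with time, the crowded reaction is initially faster but leaves more substrate unconsumed at long times than the classical one, the kind of self-limiting behaviour the authors link to tolerance.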

  6. An ultra-low-power image compressor for capsule endoscope.

    PubMed

    Lin, Meng-Chun; Dung, Lan-Rong; Weng, Ping-Kuo

    2006-02-25

    Gastrointestinal (GI) endoscopy has been widely applied for the diagnosis of diseases of the alimentary canal, including Crohn's disease, celiac disease and other malabsorption disorders, benign and malignant tumors of the small intestine, vascular disorders, and medication-related small bowel injury. The wireless capsule endoscope has been successfully utilized to diagnose diseases of the small intestine and alleviate the discomfort and pain of patients. However, the resolution of the demosaicked image is still low, and some interesting spots may be unintentionally omitted. In particular, images are severely distorted when physicians zoom in for detailed diagnosis. Increasing the resolution would cause significant power consumption in the RF transmitter; hence, image compression is necessary to reduce the power dissipation of the RF transmitter. To overcome this drawback, we have been developing a new capsule endoscope, called GICam. We developed an ultra-low-power image compression processor for capsule endoscopes or swallowable imaging capsules. In applications of capsule endoscopy, it is imperative to consider battery life/performance trade-offs. Applying state-of-the-art video compression techniques may significantly reduce the image bit rate through their high compression ratios, but they all require intensive computation and consume much battery power. There are many fast compression algorithms for reducing the computation load; however, they may distort the original image, which is unacceptable in medical care. Thus, this paper first simplifies traditional video compression algorithms and proposes a scalable compression architecture. As a result, the developed video compressor costs only 31 K gates at 2 frames per second, consumes 14.92 mW, and reduces the video size by at least 75%.

  7. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    NASA Astrophysics Data System (ADS)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on prediction of noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the characteristics of the transformations the image will be subject to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and shown to provide a more than two-fold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.

  8. Image compression software for the SOHO LASCO and EIT experiments

    NASA Technical Reports Server (NTRS)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronograph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for the use of SOHO investigators who need to understand the results of SOHO compression in order to better allocate the transmission bits which they have been allocated.

  9. Design of a Lossless Image Compression System for Video Capsule Endoscopy and Its Performance in In-Vivo Trials

    PubMed Central

    Khan, Tareq H.; Wahid, Khan A.

    2014-01-01

    In this paper, a new low-complexity and lossless image compression system for capsule endoscopy (CE) is presented. The compressor consists of a low-cost YEF color space converter and a variable-length predictive coder combining Golomb-Rice and unary encoding. All of these components have been heavily optimized for low power and low cost, and are lossless in nature. As a result, the entire compression system does not incur any loss of image information. Unlike transform-based algorithms, the compressor can be interfaced with commercial image sensors which send pixel data in raster-scan fashion, eliminating the need for a large buffer memory. The compression algorithm is capable of working with white light imaging (WLI) and narrow band imaging (NBI), with average compression ratios of 78% and 84% respectively. Finally, a complete capsule endoscopy system is developed on a single, low-power, 65-nm field programmable gate array (FPGA) chip. The prototype is developed using circular PCBs having a diameter of 16 mm. Several in-vivo and ex-vivo trials using pig intestine have been conducted using the prototype to validate the performance of the proposed lossless compression algorithm. The results show that, compared with all other existing works, the proposed algorithm offers a solution to wireless capsule endoscopy with a lossless and yet acceptable level of compression. PMID:25375753
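
    A minimal Golomb-Rice codec, the kind of variable-length code the record describes for small prediction residuals, can be sketched as follows. A real predictive coder would first map signed residuals to non-negative indices (e.g. by zig-zag mapping) and pick k adaptively; both are omitted here, and k >= 1 is assumed.

```python
def rice_encode(n, k):
    # Golomb-Rice code with parameter k >= 1 for a non-negative integer:
    # quotient n >> k in unary (q '1' bits and a '0' terminator),
    # followed by the k-bit binary remainder.
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + format(r, '0{}b'.format(k))

def rice_decode(bits, k):
    q = bits.index('0')                  # count the leading ones
    r = int(bits[q + 1:q + 1 + k], 2)    # k remainder bits
    return (q << k) | r

# Small residuals (typical after good prediction) get short codewords.
codes = [rice_encode(n, k=2) for n in (0, 1, 5, 9)]
```

    Because small values dominate the residual distribution, the average codeword is much shorter than a fixed-length pixel, which is where the lossless compression comes from.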

  10. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    Wireless Visual Sensor Network (WVSN) is an emerging field which combines image sensor, on board computation unit, communication component and energy source. Compared to the traditional wireless sensor network, which operates on one dimensional data, such as temperature, pressure values etc., WVSN operates on two dimensional data (images) which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries, because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSN) and communication from VSN to server should consume as low energy as possible. Transmission of raw images wirelessly consumes a lot of energy and requires higher communication bandwidth. Data compression methods reduce data efficiently and hence will be effective in reducing communication cost in WVSN. In this paper, we have compared the compression efficiency and complexity of six well known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and their computational complexity is suitable for computational platform used in WVSNs. These results can be used as a road map for selection of compression methods for different sets of constraints in WVSN.
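
    As a concrete baseline for the kind of bi-level image compression the record compares, here is run-length encoding of a binary scan line; it is one simple candidate method, not one of the six the paper evaluates.

```python
def rle_encode(bits):
    # Run-length encoding for bi-level images: store the first pixel
    # value and the lengths of alternating runs. Long uniform runs
    # (common in bi-level imagery) compress very well, at very low
    # computational cost, which matters on an energy-constrained VSN.
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return bits[0], runs

def rle_decode(first, runs):
    out, val = [], first
    for n in runs:
        out += [val] * n
        val = 1 - val
    return out

scan_line = [0] * 20 + [1] * 5 + [0] * 40 + [1] * 3
first, runs = rle_encode(scan_line)    # (0, [20, 5, 40, 3])
```

    Here 68 pixels reduce to four run lengths plus one start bit; the energy trade-off in a WVSN is exactly this kind of balance between compression ratio and on-node computation.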

  11. High-grade video compression of echocardiographic studies: a multicenter validation study of selected motion pictures expert groups (MPEG)-4 algorithms.

    PubMed

    Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Celeste, Fabrizio; Gentile, Francesco; Mantero, Antonio; Montericcio, Vincenzo; Muratori, Manuela

    2007-05-01

    Large files produced by standard compression algorithms slow the spread of digital and tele-echocardiography. We validated high-grade compression of echocardiographic video with the new Motion Pictures Expert Groups (MPEG)-4 algorithms in a multicenter study. Seven expert cardiologists blindly scored (5-point scale) 165 uncompressed and compressed 2-dimensional and color Doppler video clips, based on combined diagnostic content and image quality (uncompressed files as references). One digital video codec and three MPEG-4 algorithms (WM9, MV2, and DivX) were used, the latter at three compression levels (0%, 35%, and 60%). Compressed file sizes decreased from 12-83 MB to 0.03-2.3 MB (reduction ratios of 1:26 to 1:1051). The mean SD of differences was 0.81 for intraobserver variability (uncompressed and digital video files). Compared with uncompressed files, only the DivX mean score at 35% (P = .04) and 60% (P = .001) compression was significantly reduced. At subcategory analysis, these differences remained significant for gray-scale and fundamental imaging but not for color or second-harmonic tissue imaging. Original image quality, session sequence, compression grade, and bitrate were all independent determinants of the mean score. Our study supports the use of MPEG-4 algorithms to greatly reduce echocardiographic file sizes, thus facilitating archiving and transmission. Quality evaluation studies should account for the many independent variables that affect image quality grading.

  12. On use of image quality metrics for perceptual blur modeling: image/video compression case

    NASA Astrophysics Data System (ADS)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems in which the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metric (IQM) methods, which heuristically estimate the perceived MTF, has supported the idea that an average perceived MTF can be used to model some types of degradation, such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  13. Laser speckle imaging for lesion detection on tooth

    NASA Astrophysics Data System (ADS)

    Gavinho, Luciano G.; Silva, João. V. P.; Damazio, João. H.; Sfalcin, Ravana A.; Araujo, Sidnei A.; Pinto, Marcelo M.; Olivan, Silvia R. G.; Prates, Renato A.; Bussadori, Sandra K.; Deana, Alessandro M.

    2018-02-01

    Computer vision technologies for diagnostic imaging of oral lesions, specifically carious lesions of the teeth, are in their early years of development. Dental caries is a public health concern worldwide, as it affects almost the entire population at least once in each individual's lifetime. The present work demonstrates current techniques for obtaining information about lesions on teeth by segmenting laser speckle images (LSI). A laser speckle image results from the reflection of laser light on a rough surface; although once regarded as noise, it has important features that carry information about the illuminated surface. Even though these are basic images, only a few works have analyzed them with computer vision methods. In this article, we present the latest results of our group, in which computer vision techniques were adapted to segment laser speckle images for diagnostic purposes. These methods are applied to segmenting tooth images into healthy and lesioned regions, and they have proven effective in diagnosing early-stage lesions that are often imperceptible to traditional diagnostic methods in clinical practice. The first method uses first-order statistical models, segmenting the image by comparing the mean and standard deviation of pixel intensities. The second method is based on the chi-square (χ²) distance between image histograms, bringing a significant improvement in diagnostic precision, while a third method introduces fractal geometry, using the fractal dimension to expose the difference between lesioned and healthy areas of a tooth more precisely than the other segmentation methods. So far, we observe efficient segmentation of the carious regions. A software application was developed to run these models and demonstrate their applicability.
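
    The first-order statistics of the first method and the chi-square histogram distance of the second can be sketched as follows. This is an illustrative NumPy version, not the authors' implementation; the normalization step and the epsilon guard against empty bins are assumptions:

```python
import numpy as np

def first_order_stats(patch):
    """First-order statistics (mean, standard deviation) of a pixel patch."""
    patch = np.asarray(patch, dtype=float)
    return patch.mean(), patch.std()

def chi_square_distance(h1, h2, eps=1e-12):
    """Chi-square distance between two histograms (normalized to sum to 1)."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    h1, h2 = h1 / h1.sum(), h2 / h2.sum()
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

    A segmentation loop would compare each patch's statistics (or histogram) against those of a reference healthy region and label patches whose distance exceeds a threshold as candidate lesions.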

  14. Determination of the spectral dependence of reduced scattering and quantitative second-harmonic generation imaging for detection of fibrillary changes in ovarian cancer

    NASA Astrophysics Data System (ADS)

    Campbell, Kirby R.; Tilbury, Karissa B.; Campagnola, Paul J.

    2015-03-01

    Here, we examine ovarian cancer extracellular matrix (ECM) modification by measuring the wavelength dependence of optical scattering and quantitative second-harmonic generation (SHG) imaging metrics in the range of 800-1100 nm, in order to determine fibrillary changes in ex vivo normal ovary and type I and type II ovarian cancer. The mass fractal character of the collagen fiber structure is analyzed with a power-law correlation function, using spectral measurements of the reduced scattering coefficient μs', where the mass fractal dimension is related to the power. Values of μs' are obtained from independent determinations of μs and g: μs by on-axis attenuation measurements using the Beer-Lambert law, and g by fitting the angular distribution of scattering to the Henyey-Greenstein phase function. Quantitative spectral SHG imaging on the same tissues determines FSHG/BSHG creation ratios, which are related to size and harmonophore distributions. Both techniques probe fibril packing order, but optical scattering probes structures of sizes from about 50 to 2000 nm, whereas SHG imaging, although only able to resolve individual fibers, builds contrast from the assembly of fibrils. Our findings suggest that type I ovarian tumors have the most ordered collagen fibers, followed by normal ovary, with type II tumors showing the least order.
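
    The two independent measurements combine as μs' = μs(1 - g). A minimal sketch of that arithmetic and of the on-axis Beer-Lambert step described above follows; the function names and the optional absorption term mu_a are illustrative assumptions, not the authors' pipeline:

```python
import math

def mu_s_from_attenuation(i0, i, thickness, mu_a=0.0):
    """Beer-Lambert on-axis attenuation: I = I0 * exp(-(mu_s + mu_a) * d).
    Returns mu_s in inverse units of `thickness` (absorption mu_a assumed known)."""
    return math.log(i0 / i) / thickness - mu_a

def reduced_scattering(mu_s, g):
    """mu_s' = mu_s * (1 - g), with g the scattering anisotropy, e.g. the
    mean cosine from a Henyey-Greenstein fit to the angular distribution."""
    return mu_s * (1.0 - g)
```

    Repeating this at each wavelength in the 800-1100 nm range yields the spectral dependence of μs', whose power-law fit carries the mass fractal dimension discussed in the abstract.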

  15. Imaging surface contacts: Power law contact distributions and contact stresses in quartz, calcite, glass and acrylic plastic

    USGS Publications Warehouse

    Dieterich, J.H.; Kilgore, B.D.

    1996-01-01

    A procedure has been developed to obtain microscope images of the regions of contact between roughened surfaces of transparent materials while the surfaces are subjected to static loads or undergoing frictional slip. Static loading experiments with quartz, calcite, soda-lime glass, and acrylic plastic at normal stresses up to 30 MPa yield power-law distributions of contact areas, from the smallest contacts that can be resolved (3.5 μm²) up to a limiting size that correlates with the grain size of the abrasive grit used to roughen the surfaces. In each material, increasing normal stress results in a roughly linear increase of the real area of contact. Contact area increases through growth of existing contacts, coalescence of contacts, and the appearance of new contacts. Mean contact stresses are consistent with the indentation strength of each material. Contact size distributions are insensitive to normal stress, indicating that the increase of contact area is approximately self-similar. The contact images and contact distributions are modeled using simulations of surfaces with random fractal topographies. The contact process for model fractal surfaces is represented by the simple expedient of removing material where surface irregularities overlap. Synthetic contact images created by this approach reproduce the observed characteristics of the contacts and demonstrate that the exponent in the power-law distributions depends on the scaling exponent used to generate the surface topography.
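
    A power-law size distribution such as the one reported for contact areas can be characterized by a log-log fit of the empirical complementary CDF. The sketch below is a generic illustration of that estimation step, not the procedure used in the paper:

```python
import numpy as np

def power_law_exponent(areas):
    """Estimate the exponent of a power-law size distribution from a
    least-squares fit of log(empirical CCDF) against log(size)."""
    areas = np.sort(np.asarray(areas, dtype=float))
    ccdf = 1.0 - np.arange(len(areas)) / len(areas)  # fraction of sizes >= area
    slope, _ = np.polyfit(np.log(areas), np.log(ccdf), 1)
    return -slope
```

    In practice the fit would be restricted to the scaling region between the resolution limit (3.5 μm² here) and the grit-controlled upper cutoff, since both ends deviate from the power law.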

  16. Fractal nematic colloids

    PubMed Central

    Hashemi, S. M.; Jagodič, U.; Mozaffari, M. R.; Ejtehadi, M. R.; Muševič, I.; Ravnik, M.

    2017-01-01

    Fractals are remarkable examples of self-similarity, in which a structure or dynamic pattern is repeated over multiple spatial or time scales. However, little is known about how fractal stimuli such as fractal surfaces interact with a local environment that exhibits order. Here we show geometry-induced formation of fractal defect states in Koch nematic colloids, exhibiting fractal self-similarity better than 90% over three orders of magnitude in length scale, from micrometres to nanometres. We produce polymer Koch-shaped hollow colloidal prisms of three successive fractal iterations by direct laser writing and characterize their coupling with the nematic by polarization microscopy and numerical modelling. Explicit generation of topological defect pairs is found, with the number of defects following an exponential-law dependence and reaching a few hundred already at fractal iteration four. This work demonstrates a route for the generation of fractal topological defect states in responsive soft matter. PMID:28117325
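
    The exponential growth with iteration number described above mirrors the Koch construction itself, where the number of edges multiplies by four at each step. The generic construction below generates the Koch snowflake polygon; it is a textbook geometry sketch, not the authors' laser-writing geometry:

```python
import math

def koch_iterate(points):
    """One Koch iteration on a closed polygon: each edge becomes four,
    with a triangular bump erected on its middle third."""
    out = []
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
        a = (x1 + dx, y1 + dy)              # 1/3 point of the edge
        b = (x1 + 2 * dx, y1 + 2 * dy)      # 2/3 point of the edge
        c = (a[0] + dx / 2 + dy * math.sqrt(3) / 2,
             a[1] + dy / 2 - dx * math.sqrt(3) / 2)  # apex of the bump
        out += [(x1, y1), a, c, b]
    return out

# Iteration zero: an equilateral triangle with unit sides.
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
```

    Each iteration multiplies the vertex count by four and the perimeter by 4/3, the same exponential-in-iteration scaling the defect counts follow.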

  17. Hierarchical socioeconomic fractality: The rich, the poor, and the middle-class

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Cohen, Morrel H.

    2014-05-01

    Since the seminal work of the Italian economist Vilfredo Pareto, the study of wealth and income has been a topic of active scientific exploration engaging researchers ranging from economics and political science to econophysics and complex systems. This paper investigates the intrinsic fractality of wealth and income. To that end we introduce and characterize three forms of socioeconomic scale invariance (poor fractality, rich fractality, and middle-class fractality) and construct hierarchical fractal approximations of general wealth and income distributions, based on the stitching together of these three forms of fractality. Intertwining the theoretical results with real-world empirical data, we then establish that the three forms of socioeconomic fractality, amalgamated into a composite hierarchical structure, underlie the distributions of wealth and income in human societies. We further establish that this hierarchical socioeconomic fractality of wealth and income is also displayed by empirical rank distributions observed across the sciences.

  18. Small-angle scattering from the Cantor surface fractal on the plane and the Koch snowflake

    NASA Astrophysics Data System (ADS)

    Cherny, Alexander Yu.; Anitas, Eugen M.; Osipov, Vladimir A.; Kuklin, Alexander I.

    The small-angle scattering (SAS) from the Cantor surface fractal on the plane and from the Koch snowflake is considered. We develop a construction algorithm for the Koch snowflake that makes a recurrence relation for the scattering amplitude possible. The surface fractals can be decomposed into a sum of surface mass fractals for an arbitrary fractal iteration, which enables various approximations for the scattering intensity. It is shown that for the Cantor fractal one can, to a good accuracy, neglect the correlations between the mass fractal amplitudes, while for the Koch snowflake these correlations are important. Nevertheless, the correlations can be built into the mass fractal amplitudes, which explains the decay of the scattering intensity $I(q)\sim q^{D_{\mathrm{s}}-4}$ with $1 < D_{\mathrm{s}} < 2$ being the fractal dimension of the perimeter. The curve $I(q)q^{4-D_{\mathrm{s}}}$ is found to be log-periodic in the fractal region, with a period equal to the scaling factor of the fractal. The log-periodicity arises from the self-similarity of the sizes of the basic structural units rather than from correlations between their distances. A recurrence relation is obtained for the radius of gyration of the Koch snowflake, which is solved in the limit of infinite iterations. The present analysis allows additional information to be obtained from SAS data, such as the edges of the fractal regions, the fractal iteration number, and the scaling factor.
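
    As a quick numeric sanity check of the quoted relations: the Koch perimeter is made of 4 self-similar copies scaled by 1/3, so its similarity dimension is log 4 / log 3, which indeed lies in the stated range 1 < D_s < 2 and fixes both the predicted decay exponent of I(q) and the log-period:

```python
import math

def similarity_dimension(copies, scale):
    """Similarity dimension of a self-similar fractal built from `copies`
    pieces, each scaled down by the factor `scale`."""
    return math.log(copies) / math.log(scale)

D_s = similarity_dimension(4, 3)   # Koch curve: 4 copies at scale 1/3
sas_slope = D_s - 4                # predicted power-law exponent of I(q)
log_period = math.log(3)           # period of I(q) * q**(4 - D_s) in log q
```

    The log-period equal to log 3 in log q is the "scaling factor of the fractal" that the abstract says can be read off SAS data.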

  19. Joint image encryption and compression scheme based on a new hyperchaotic system and curvelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Tong, Xiaojun

    2017-07-01

    This paper proposes a joint image encryption and compression scheme based on a new hyperchaotic system and the curvelet transform. A new five-dimensional hyperchaotic system based on the Rabinovich system is presented, and from it a new pseudorandom key stream generator is constructed. The algorithm performs encryption with a diffusion and confusion structure built on the key stream generator and the proposed hyperchaotic system; the key sequence used for image encryption is related to the plaintext. The image data are compressed by means of the second-generation curvelet transform, run-length coding, and Huffman coding. Compression and encryption are carried out jointly in a single process. Security tests indicate that the proposed methods have high security and good compression performance.
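
    The paper's five-dimensional hyperchaotic generator is not reproduced here. As a purely illustrative stand-in (an assumption, far weaker than the proposed system), the sketch below uses the one-dimensional logistic map to show the general keystream-plus-diffusion structure of such chaos-based ciphers:

```python
def logistic_keystream(x0, r, n, burn_in=100):
    """Byte keystream from the logistic map x <- r*x*(1-x), a simple
    stand-in for the paper's five-dimensional hyperchaotic system."""
    x = x0
    for _ in range(burn_in):       # discard the transient
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_diffuse(data, key):
    """Diffusion step: XOR each byte with the keystream and with the
    previous ciphertext byte, so changes propagate forward."""
    out, prev = [], 0
    for d, k in zip(data, key):
        c = d ^ k ^ prev
        out.append(c)
        prev = c
    return out

def xor_undiffuse(cipher, key):
    """Inverse of xor_diffuse."""
    out, prev = [], 0
    for c, k in zip(cipher, key):
        out.append(c ^ k ^ prev)
        prev = c
    return out
```

    Chaining each ciphertext byte into the next is what spreads a single plaintext change across the rest of the stream; the plaintext-dependent key sequence described in the abstract strengthens this further.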

  20. An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.

    PubMed

    Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim

    2015-10-01

    In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integrating half-Fourier reconstruction into the iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint shows superior performance both in the reconstruction of image details and in robustness against phase-estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint are reported; its seamless combination with parallel imaging and compressed sensing enables greater acceleration in 3D MR imaging.
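
    For background, half-Fourier methods exploit the conjugate symmetry X[-k] = conj(X[k]) that holds exactly for a real-valued (zero-phase) object. The sketch below fills a half-acquired 1-D k-space by that symmetry; it is a deliberately simplified explicit-fill illustration of the idea, not the paper's iterative L1-norm phase-constrained reconstruction:

```python
import numpy as np

def half_fourier_fill(kspace_half, n):
    """Fill a half-acquired 1-D k-space of full length n by conjugate
    symmetry; exact only for a real-valued (zero-phase) object."""
    k = np.zeros(n, dtype=complex)
    m = len(kspace_half)
    k[:m] = kspace_half
    for i in range(1, n - m + 1):
        k[n - i] = np.conj(k[i])   # X[n - i] = conj(X[i]) for real signals
    return k
```

    Real objects acquire slowly varying phase from field inhomogeneity and coils, which is why practical methods estimate a phase map from the symmetrically sampled center and enforce consistency with it, as in the L1-norm constraint above, rather than assuming zero phase.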

Top