Fractal image coding based on replaced domain pools
NASA Astrophysics Data System (ADS)
Harada, Masaki; Fujii, Toshiaki; Kimoto, Tadahiko; Tanimoto, Masayuki
1998-01-01
Fractal image coding based on iterated function system has been attracting much interest because of the possibilities of drastic data compression. It performs compression by using the self-similarity included in an image. In the conventional schemes, under the assumption of the self- similarity in the image, each block is mapped from that larger block which is considered as the most suitable block to approximate the range block. However, even if the exact self-similarity of an image is found at the encoder, it hardly holds at the decoder because a domain pool of the encoder is different from that of the decoder. In this paper, we prose a fractal image coding scheme by using domain pools replaced with decoded or transformed values to reduce the difference between the domain pools of the encoder and that of the decoder. The proposed scheme performs two-stage encoding. The domain pool is replaced with decoded non-contractive blocks first and then with transformed values for contractive blocks. It is expected that the proposed scheme reduces errors of contractive blocks in the reconstructed image while those of non- contractive blocks are kept unchanged. The experimental result show the effectiveness of the proposed scheme.
NASA Technical Reports Server (NTRS)
Barnsley, Michael F.; Sloan, Alan D.
1989-01-01
Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with less bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.
Fractal coding of wavelet image based on human vision contrast-masking effect
NASA Astrophysics Data System (ADS)
Wei, Hai; Shen, Lansun
2000-06-01
In this paper, a fractal-based compression approach of wavelet image is presented. The scheme tries to make full use of the sensitivity features of the human visual system. With the wavelet-based multi-resolution representation of image, detail vectors of each high frequency sub-image are constructed in accordance with its spatial orientation in order to grasp the edge information to which human observer is sensitive. Then a multi-level selection algorithm based on human vision's contrast masking effect is proposed to make the decision whether a detail vector is coded or not. Those vectors below the contrast threshold are discarded without introducing visual artifacts because of the ignorance of human vision. As for the redundancy of the retained vectors, different fractal- based methods are employed to decrease the correlation in single sub-image and between the different resolution sub- images with the same orientation. Experimental results suggest the efficiency of the proposed scheme. With the standard test image, our approach outperforms the EZW algorithm and the JPEG method.
Region-based fractal video coding
NASA Astrophysics Data System (ADS)
Zhu, Shiping; Belloulata, Kamel
2008-10-01
A novel video sequence compression scheme is proposed in order to realize the efficient and economical transmission of video sequence, and also the region-based functionality of MPEG-4. The CPM and NCIM fractal coding scheme is applied on each region independently by a prior image segmentation map (alpha plane) which is exactly the same as defined in MPEG-4. The first n frames of video sequence are encoded as a "set" using the Circular Prediction Mapping (CPM) and encode the remaining frames using the Non Contractive Interframe Mapping (NCIM). The CPM and NCIM accomplish the motion estimation and compensation, which can exploit the high temporal correlations between the adjacent frames of video sequence. The experimental results with the monocular video sequences provide promising performances at low bit rate coding, such as the application in video conference. We believe the proposed fractal video codec will be a powerful and efficient technique for the region-based video sequence coding.
Fractal images induce fractal pupil dilations and constrictions.
Moon, P; Muday, J; Raynor, S; Schirillo, J; Boydston, C; Fairbanks, M S; Taylor, R P
2014-09-01
Fractals are self-similar structures or patterns that repeat at increasingly fine magnifications. Research has revealed fractal patterns in many natural and physiological processes. This article investigates pupillary size over time to determine if their oscillations demonstrate a fractal pattern. We predict that pupil size over time will fluctuate in a fractal manner and this may be due to either the fractal neuronal structure or fractal properties of the image viewed. We present evidence that low complexity fractal patterns underlie pupillary oscillations as subjects view spatial fractal patterns. We also present evidence implicating the autonomic nervous system's importance in these patterns. Using the variational method of the box-counting procedure we demonstrate that low complexity fractal patterns are found in changes within pupil size over time in millimeters (mm) and our data suggest that these pupillary oscillation patterns do not depend on the fractal properties of the image viewed.
ERIC Educational Resources Information Center
Jurgens, Hartmut; And Others
1990-01-01
The production and application of images based on fractal geometry are described. Discussed are fractal language groups, fractal image coding, and fractal dialects. Implications for these applications of geometry to mathematics education are suggested. (CW)
Suboptimal fractal coding scheme using iterative transformation
NASA Astrophysics Data System (ADS)
Kang, Hyun-Soo; Chung, Jae-won
2001-05-01
This paper presents a new fractal coding scheme to find a suboptimal transformation by performing an iterative encoding process. The optimal transformation can be defined as the transformation generating the closest attractor to an original image. Unfortunately, it is impossible in practice to find the optimal transformation, due to the heavy computational burden. In this paper, however, by means of some new theorems related with contractive transformations and attractors. It is shown that for some specific cases the optimal or suboptimal transformations can be obtained. The proposed method obtains a suboptimal transformation by performing iterative processes as is done in decoding. Thus, it requires more computation than the conventional method, but it improves the image quality. For a simple case where the optimal transformation can actually be found, the proposed method is experimentally evaluated against both the optimal method and the conventional method. For a general case where the optimal transformation in unavailable due to heavy computational complexity, the proposed method is also evaluated in comparison with the conventional method.
A Novel Fractal Coding Method Based on M-J Sets
Sun, Yuanyuan; Xu, Rudan; Chen, Lina; Kong, Ruiqing; Hu, Xiaopeng
2014-01-01
In this paper, we present a novel fractal coding method with the block classification scheme based on a shared domain block pool. In our method, the domain block pool is called dictionary and is constructed from fractal Julia sets. The image is encoded by searching the best matching domain block with the same BTC (Block Truncation Coding) value in the dictionary. The experimental results show that the scheme is competent both in encoding speed and in reconstruction quality. Particularly for large images, the proposed method can avoid excessive growth of the computational complexity compared with the traditional fractal coding algorithm. PMID:25010686
Study on Huber fractal image compression.
Jeng, Jyh-Horng; Tseng, Chun-Chieh; Hsieh, Jer-Guang
2009-05-01
In this paper, a new similarity measure for fractal image compression (FIC) is introduced. In the proposed Huber fractal image compression (HFIC), the linear Huber regression technique from robust statistics is embedded into the encoding procedure of the fractal image compression. When the original image is corrupted by noises, we argue that the fractal image compression scheme should be insensitive to those noises presented in the corrupted image. This leads to a new concept of robust fractal image compression. The proposed HFIC is one of our attempts toward the design of robust fractal image compression. The main disadvantage of HFIC is the high computational cost. To overcome this drawback, particle swarm optimization (PSO) technique is utilized to reduce the searching time. Simulation results show that the proposed HFIC is robust against outliers in the image. Also, the PSO method can effectively reduce the encoding time while retaining the quality of the retrieved image.
Fractal-based image edge detection
NASA Astrophysics Data System (ADS)
Luo, Huiguo; Zhu, Yaoting; Zhu, Guang-Xi; Wan, Faguang; Zhang, Ping
1993-08-01
Image edge is an important feature of image. Usually, we use Laplacian or Sober operator to get an image edge. In this paper, we use fractal method to get the edge. After introducing Fractal Brownian Random (FBR) field, we give the definition of Discrete Fractal Brownian Increase Random (DFBIR) field and discuss its properties, then we apply the DFBIR field to detect the edge of an image. According to the parameters H and D of DFBIR, we give a measure M equals (alpha) H + (beta) D. From the M value of each pixel, we can detect the edge of image.
Fractal Image Informatics: from SEM to DEM
NASA Astrophysics Data System (ADS)
Oleschko, K.; Parrot, J.-F.; Korvin, G.; Esteves, M.; Vauclin, M.; Torres-Argüelles, V.; Salado, C. Gaona; Cherkasov, S.
2008-05-01
In this paper, we introduce a new branch of Fractal Geometry: Fractal Image Informatics, devoted to the systematic and standardized fractal analysis of images of natural systems. The methods of this discipline are based on the properties of multiscale images of selfaffine fractal surfaces. As proved in the paper, the image inherits the scaling and lacunarity of the surface and of its reflectance distribution [Korvin, 2005]. We claim that the fractal analysis of these images must be done without any smoothing, thresholding or binarization. Two new tools of Fractal Image Informatics, firmagram analysis (FA) and generalized lacunarity (GL), are presented and discussed in details. These techniques are applicable to any kind of image or to any observed positive-valued physical field, and can be used to correlate between images. It will be shown, by a modified Grassberger-Hentschel-Procaccia approach [Phys. Lett. 97A, 227 (1983); Physica 8D, 435 (1983)] that GL obeys the same scaling law as the Allain-Cloitre lacunarity [Phys. Rev. A 44, 3552 (1991)] but is free of the problems associated with gliding boxes. Several applications are shown from Soil Physics, Surface Science, and other fields.
Pre-Service Teachers' Concept Images on Fractal Dimension
ERIC Educational Resources Information Center
Karakus, Fatih
2016-01-01
The analysis of pre-service teachers' concept images can provide information about their mental schema of fractal dimension. There is limited research on students' understanding of fractal and fractal dimension. Therefore, this study aimed to investigate the pre-service teachers' understandings of fractal dimension based on concept image. The…
ERIC Educational Resources Information Center
Osler, Thomas J.
1999-01-01
Because fractal images are by nature very complex, it can be inspiring and instructive to create the code in the classroom and watch the fractal image evolve as the user slowly changes some important parameter or zooms in and out of the image. Uses programming language that permits the user to store and retrieve a graphics image as a disk file.…
MRI Image Processing Based on Fractal Analysis
Marusina, Mariya Y; Mochalina, Alexandra P; Frolova, Ekaterina P; Satikov, Valentin I; Barchuk, Anton A; Kuznetcov, Vladimir I; Gaidukov, Vadim S; Tarakanov, Segrey A
2017-01-01
Background: Cancer is one of the most common causes of human mortality, with about 14 million new cases and 8.2 million deaths reported in in 2012. Early diagnosis of cancer through screening allows interventions to reduce mortality. Fractal analysis of medical images may be useful for this purpose. Materials and Methods: In this study, we examined magnetic resonance (MR) images of healthy livers and livers containing metastases from colorectal cancer. The fractal dimension and the Hurst exponent were chosen as diagnostic features for tomographic imaging using Image J software package for image processings FracLac for applied for fractal analysis with a 120x150 pixel area. Calculations of the fractal dimensions of pathological and healthy tissue samples were performed using the box-counting method. Results: In pathological cases (foci formation), the Hurst exponent was less than 0.5 (the region of unstable statistical characteristics). For healthy tissue, the Hurst index is greater than 0.5 (the zone of stable characteristics). Conclusions: The study indicated the possibility of employing fractal rapid analysis for the detection of focal lesions of the liver. The Hurst exponent can be used as an important diagnostic characteristic for analysis of medical images.
Image Segmentation via Fractal Dimension
1987-12-01
statistical expectation K = a proportionality constant H = the Hurst exponent , in interval [0,1] (14:249) Eq (4) is a mathematical generalization of...ease, negatively correlated (24:16). The Hurst exponent is directly related to the fractal diment.ion of the process being modelled by the relation (24...24) DzE.I -H (5) where D = the fractal dimension E m the Euclidean dimension H = the Hurst exponent The effect of N1 on a typical trace can be seen
Pyramidal fractal dimension for high resolution images.
Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut
2016-07-01
Fractal analysis (FA) should be able to yield reliable and fast results for high-resolution digital images to be applicable in fields that require immediate outcomes. Triggered by an efficient implementation of FA for binary images, we present three new approaches for fractal dimension (D) estimation of images that utilize image pyramids, namely, the pyramid triangular prism, the pyramid gradient, and the pyramid differences method (PTPM, PGM, PDM). We evaluated the performance of the three new and five standard techniques when applied to images with sizes up to 8192 × 8192 pixels. By using artificial fractal images created by three different generator models as ground truth, we determined the scale ranges with minimum deviations between estimation and theory. All pyramidal methods (PM) resulted in reasonable D values for images of all generator models. Especially, for images with sizes ≥1024×1024 pixels, the PMs are superior to the investigated standard approaches in terms of accuracy and computation time. A measure for the possibility to differentiate images with different intrinsic D values did show not only that the PMs are well suited for all investigated image sizes, and preferable to standard methods especially for larger images, but also that results of standard D estimation techniques are strongly influenced by the image size. Fastest results were obtained with the PDM and PGM, followed by the PTPM. In terms of absolute D values best performing standard methods were magnitudes slower than the PMs. Concluding, the new PMs yield high quality results in short computation times and are therefore eligible methods for fast FA of high-resolution images.
A Lossless hybrid wavelet-fractal compression for welding radiographic images.
Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud
2016-01-01
In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using wavelet transformation and fractal coding algorithm. The decompressed image is removed from the original one to obtain a residual image which is coded by using Huffman algorithm. Simulation results show that with the proposed scheme, we achieve an infinite peak signal to noise ratio (PSNR) with higher compression ratio compared to typical lossless method. Moreover, the use of wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of Huffman coding algorithm.
Fresnel diffraction of fractal grating and self-imaging effect.
Wang, Junhong; Zhang, Wei; Cui, Yuwei; Teng, Shuyun
2014-04-01
Based on the self-similarity property of fractal, two types of fractal gratings are produced according to the production and addition operations of multiple periodic gratings. Fresnel diffractions of fractal grating are analyzed theoretically, and the general mathematic expressions of the diffraction intensity distributions of fractal grating are deduced. The gray-scale patterns of the 2D diffraction distributions of fractal grating are provided through numerical calculations. The diffraction patterns take on the periodicity along the longitude and transverse directions. The 1D diffraction distribution at some certain distances shows the same structure as the fractal grating. This indicates that the self-image of fractal grating is really formed in the Fresnel diffraction region. The experimental measurement of the diffraction intensity distribution of fractal grating with different fractal dimensions and different fractal levels is performed, and the self-images of fractal grating are obtained successfully in experiments. The conclusions of this paper are helpful for the development of the application of fractal grating.
Fractal image compression: A resolution independent representation for imagery
NASA Technical Reports Server (NTRS)
Sloan, Alan D.
1993-01-01
A deterministic fractal is an image which has low information content and no inherent scale. Because of their low information content, deterministic fractals can be described with small data sets. They can be displayed at high resolution since they are not bound by an inherent scale. A remarkable consequence follows. Fractal images can be encoded at very high compression ratios. This fern, for example is encoded in less than 50 bytes and yet can be displayed at resolutions with increasing levels of detail appearing. The Fractal Transform was discovered in 1988 by Michael F. Barnsley. It is the basis for a new image compression scheme which was initially developed by myself and Michael Barnsley at Iterated Systems. The Fractal Transform effectively solves the problem of finding a fractal which approximates a digital 'real world image'.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
1999-01-01
Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart show a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
1999-01-01
Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart show a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images of the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimensional-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
Smitha, K A; Gupta, A K; Jayasree, R S
2015-09-07
Glioma, the heterogeneous tumors originating from glial cells, generally exhibit varied grades and are difficult to differentiate using conventional MR imaging techniques. When this differentiation is crucial in the disease prognosis and treatment, even the advanced MR imaging techniques fail to provide a higher discriminative power for the differentiation of malignant tumor from benign ones. A powerful image processing technique applied to the imaging techniques is expected to provide a better differentiation. The present study focuses on the fractal analysis of fluid attenuation inversion recovery MR images, for the differentiation of glioma. For this, we have considered the most important parameters of fractal analysis, fractal dimension and lacunarity. While fractal analysis assesses the malignancy and complexity of a fractal object, lacunarity gives an indication on the empty space and the degree of inhomogeneity in the fractal objects. Box counting method with the preprocessing steps namely binarization, dilation and outlining was used to obtain the fractal dimension and lacunarity in glioma. Statistical analysis such as one-way analysis of variance and receiver operating characteristic (ROC) curve analysis helped to compare the mean and to find discriminative sensitivity of the results. It was found that the lacunarity of low and high grade gliomas vary significantly. ROC curve analysis between low and high grade glioma for fractal dimension and lacunarity yielded 70.3% sensitivity and 66.7% specificity and 70.3% sensitivity and 88.9% specificity, respectively. The study observes that fractal dimension and lacunarity increases with an increase in the grade of glioma and lacunarity is helpful in identifying most malignant grades.
Blind Detection of Region Duplication Forgery Using Fractal Coding and Feature Matching.
Jenadeleh, Mohsen; Ebrahimi Moghaddam, Mohsen
2016-05-01
Digital image forgery detection is important because of its wide use in applications such as medical diagnosis, legal investigations, and entertainment. Copy-move forgery is one of the famous techniques, which is used in region duplication. Many of the existing copy-move detection algorithms cannot effectively blind detect duplicated regions that are made by powerful image manipulation software like Photoshop. In this study, a new method is proposed for blind detecting manipulations in digital images based on modified fractal coding and feature vector matching. The proposed method not only detects typical copy-move forgery, but also finds multiple copied forgery regions for images that are subjected to rotation, scaling, reflection, and a mixture of these postprocessing operations. The proposed method is robust against tampered images undergoing attacks such as Gaussian blurring, contrast scaling, and brightness adjustment. The experimental results demonstrated the validity and efficiency of the method.
NASA Astrophysics Data System (ADS)
Tremberger, George, Jr.; Flamholz, A.; Cheung, E.; Sullivan, R.; Subramaniam, R.; Schneider, P.; Brathwaite, G.; Boteju, J.; Marchese, P.; Lieberman, D.; Cheung, T.; Holden, Todd
2007-09-01
The absorption effect of the back surface boundary of a diffuse layer was studied via laser generated reflection speckle pattern. The spatial speckle intensity provided by a laser beam was measured. The speckle data were analyzed in terms of fractal dimension (computed by NIH ImageJ software via the box counting fractal method) and weak localization theory based on Mie scattering. Bar code imaging was modeled as binary absorption contrast and scanning resolution in millimeter range was achieved for diffusive layers up to thirty transport mean free path thick. Samples included alumina, porous glass and chicken tissue. Computer simulation was used to study the effect of speckle spatial distribution and observed fractal dimension differences were ascribed to variance controlled speckle sizes. Fractal dimension suppressions were observed in samples that had thickness dimensions around ten transport mean free path. Computer simulation suggested a maximum fractal dimension of about 2 and that subtracting information could lower fractal dimension. The fractal dimension was shown to be sensitive to sample thickness up to about fifteen transport mean free paths, and embedded objects which modified 20% or more of the effective thickness was shown to be detectable. The box counting fractal method was supplemented with the Higuchi data series fractal method and application to architectural distortion mammograms was demonstrated. The use of fractals in diffusive analysis would provide a simple language for a dialog between optics experts and mammography radiologists, facilitating the applications of laser diagnostics in tissues.
Fractal dimension of cerebral surfaces using magnetic resonance images
Majumdar, S.; Prasad, R.R.
1988-11-01
The calculation of the fractal dimension of the surface bounded by the grey matter in the normal human brain using axial, sagittal, and coronal cross-sectional magnetic resonance (MR) images is presented. The fractal dimension in this case is a measure of the convolutedness of this cerebral surface. It is proposed that the fractal dimension, a feature that may be extracted from MR images, may potentially be used for image analysis, quantitative tissue characterization, and as a feature to monitor and identify cerebral abnormalities and developmental changes.
Embedded foveation image coding.
Wang, Z; Bovik, A C
2001-01-01
The human visual system (HVS) is highly space-variant in sampling, coding, processing, and understanding. The spatial resolution of the HVS is highest around the point of fixation (foveation point) and decreases rapidly with increasing eccentricity. By taking advantage of this fact, it is possible to remove considerable high-frequency information redundancy from the peripheral regions and still reconstruct a perceptually good quality image. Great success has been obtained previously by a class of embedded wavelet image coding algorithms, such as the embedded zerotree wavelet (EZW) and the set partitioning in hierarchical trees (SPIHT) algorithms. Embedded wavelet coding not only provides very good compression performance, but also has the property that the bitstream can be truncated at any point and still be decoded to recreate a reasonably good quality image. In this paper, we propose an embedded foveation image coding (EFIC) algorithm, which orders the encoded bitstream to optimize foveated visual quality at arbitrary bit-rates. A foveation-based image quality metric, namely, foveated wavelet image quality index (FWQI), plays an important role in the EFIC system. We also developed a modified SPIHT algorithm to improve the coding efficiency. Experiments show that EFIC integrates foveation filtering with foveated image coding and demonstrates very good coding performance and scalability in terms of foveated image quality measurement.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.
1998-01-01
Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and
Evaluation of Two Fractal Methods for Magnetogram Image Analysis
NASA Technical Reports Server (NTRS)
Stark, B.; Adams, M.; Hathaway, D. H.; Hagyard, M. J.
1997-01-01
Fractal and multifractal techniques have been applied to various types of solar data to study the fractal properties of sunspots as well as the distribution of photospheric magnetic fields and the role of random motions on the solar surface in this distribution. Other research includes the investigation of changes in the fractal dimension as an indicator for solar flares. Here we evaluate the efficacy of two methods for determining the fractal dimension of an image data set: the Differential Box Counting scheme and a new method, the Jaenisch scheme. To determine the sensitivity of the techniques to changes in image complexity, various types of constructed images are analyzed. In addition, we apply this method to solar magnetogram data from Marshall Space Flight Centers vector magnetograph.
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1990-01-01
All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
Imai, K; Ikeda, M; Enchi, Y; Niimi, T
2009-12-01
The purposes of our studies are to examine whether or not fractal-feature distance deduced from virtual volume method can simulate observer performance indices and to investigate the physical meaning of pseudo fractal dimension and complexity. Contrast-detail (C-D) phantom radiographs were obtained at various mAs values (0.5 - 4.0 mAs) and 140 kVp with a computed radiography system, and the reference image was acquired at 13 mAs. For all C-D images, fractal analysis was conducted using the virtual volume method that was devised with a fractional Brownian motion model. The fractal-feature distances between the considered and reference images were calculated using pseudo fractal dimension and complexity. Further, we have performed the C-D analysis in which ten radiologists participated, and compared the fractal-feature distances with the image quality figures (IQF). To clarify the physical meaning of the pseudo fractal dimension and complexity, contrast-to-noise ratio (CNR) and standard deviation (SD) of images noise were calculated for each mAs and compared with the pseudo fractal dimension and complexity, respectively. A strong linear correlation was found between the fractal-feature distance and IQF. The pseudo fractal dimensions became large as CNR increased. Further, a linear correlation was found between the exponential complexity and image noise SD.
Confocal coded aperture imaging
Tobin, Jr., Kenneth William; Thomas, Jr., Clarence E.
2001-01-01
A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and, reconstructing the shadow image into a 3-dimensional image of the every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.
Bingham, Philip R; Santos-Villalobos, Hector J
2011-01-01
Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 m) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100um and 10um aperture hole diameters show resolutions matching the hole diameters.
NASA Astrophysics Data System (ADS)
Bingham, Philip; Santos-Villalobos, Hector; Tobin, Ken
2011-03-01
Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100μm and 10μm aperture hole diameters show resolutions matching the hole diameters.
Person identification using fractal analysis of retina images
NASA Astrophysics Data System (ADS)
Ungureanu, Constantin; Corniencu, Felicia
2004-10-01
Biometric is automated method of recognizing a person based on physiological or behavior characteristics. Among the features measured are retina scan, voice, and fingerprint. A retina-based biometric involves the analysis of the blood vessels situated at the back of the eye. In this paper we present a method, which uses the fractal analysis to characterize the retina images. The Fractal Dimension (FD) of retina vessels was measured for a number of 20 images and have been obtained different values of FD for each image. This algorithm provides a good accuracy is cheap and easy to implement.
Fractal-based image texture analysis of trabecular bone architecture.
Jiang, C; Pitt, R E; Bertram, J E; Aneshansley, D J
1999-07-01
Fractal-based image analysis methods are investigated to extract textural features related to the anisotropic structure of trabecular bone from the X-ray images of cubic bone specimens. Three methods are used to quantify image textural features: power spectrum, Minkowski dimension and mean intercept length. The global fractal dimension is used to describe the overall roughness of the image texture. The anisotropic features formed by the trabeculae are characterised by a fabric ellipse, whose orientation and eccentricity reflect the textural anisotropy of the image. Tests of these methods with synthetic images of known fractal dimension show that the Minkowski dimension provides a more accurate and consistent estimation of global fractal dimension. Tests on bone x-ray (eccentricity range 0.25-0.80) images indicate that the Minkowski dimension is more sensitive to the changes in textural orientation. The results suggest that the Minkowski dimension is a better measure for characterising trabecular bone anisotropy in the x-ray images of thick specimens.
Trabecular architecture analysis in femur radiographic images using fractals.
Udhayakumar, G; Sujatha, C M; Ramakrishnan, S
2013-04-01
Trabecular bone is a highly complex anisotropic material that exhibits varying magnitudes of strength in compression and tension. Analysis of the trabecular architectural alteration that manifest as loss of trabecular plates and connection has been shown to yield better estimation of bone strength. In this work, an attempt has been made toward the development of an automated system for investigation of trabecular femur bone architecture using fractal analysis. Conventional radiographic femur bone images recorded using standard protocols are used in this study. The compressive and tensile regions in the images are delineated using preprocessing procedures. The delineated images are analyzed using Higuchi's fractal method to quantify pattern heterogeneity and anisotropy of trabecular bone structure. The results show that the extracted fractal features are distinct for compressive and tensile regions of normal and abnormal human femur bone. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.
Liver ultrasound image classification by using fractal dimension of edge
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Bibicu, Dorin; Moraru, Luminita
2012-08-01
Medical ultrasound image edge detection is an important component in increasing the number of application of segmentation, and hence it has been subject of many studies in the literature. In this study, we have classified the liver ultrasound images (US) combining Canny and Sobel edge detectors with fractal analysis in order to provide an indicator about of the US images roughness. We intend to provide a classification rule of the focal liver lesions as: cirrhotic liver, liver hemangioma and healthy liver. For edges detection the Canny and Sobel operators were used. Fractal analyses have been applied for texture analysis and classification of focal liver lesions according to fractal dimension (FD) determined by using the Box Counting method. To assess the performance and accuracy rate of the proposed method the contrast-to-noise (CNR) is analyzed.
Fractal image perception provides novel insights into hierarchical cognition.
Martins, M J; Fischmeister, F P; Puig-Waldmüller, E; Oh, J; Geissler, A; Robinson, S; Fitch, W T; Beisteiner, R
2014-08-01
Hierarchical structures play a central role in many aspects of human cognition, prominently including both language and music. In this study we addressed hierarchy in the visual domain, using a novel paradigm based on fractal images. Fractals are self-similar patterns generated by repeating the same simple rule at multiple hierarchical levels. Our hypothesis was that the brain uses different resources for processing hierarchies depending on whether it applies a "fractal" or a "non-fractal" cognitive strategy. We analyzed the neural circuits activated by these complex hierarchical patterns in an event-related fMRI study of 40 healthy subjects. Brain activation was compared across three different tasks: a similarity task, and two hierarchical tasks in which subjects were asked to recognize the repetition of a rule operating transformations either within an existing hierarchical level, or generating new hierarchical levels. Similar hierarchical images were generated by both rules and target images were identical. We found that when processing visual hierarchies, engagement in both hierarchical tasks activated the visual dorsal stream (occipito-parietal cortex, intraparietal sulcus and dorsolateral prefrontal cortex). In addition, the level-generating task specifically activated circuits related to the integration of spatial and categorical information, and with the integration of items in contexts (posterior cingulate cortex, retrosplenial cortex, and medial, ventral and anterior regions of temporal cortex). These findings provide interesting new clues about the cognitive mechanisms involved in the generation of new hierarchical levels as required for fractals.
Wideband fractal antennas for holographic imaging and rectenna applications
NASA Astrophysics Data System (ADS)
Bunch, Kyle J.; McMakin, Douglas L.; Sheen, David M.
2008-04-01
At Pacific Northwest National Laboratory, wideband antenna arrays have been successfully used to reconstruct three-dimensional images at microwave and millimeter-wave frequencies. Applications of this technology have included portal monitoring, through-wall imaging, and weapons detection. Fractal antennas have been shown to have wideband characteristics due to their self-similar nature (that is, their geometry is replicated at different scales). They further have advantages in providing good characteristics in a compact configuration. We discuss the application of fractal antennas for holographic imaging. Simulation results will be presented. Rectennas are a specific class of antennas in which a received signal drives a nonlinear junction and is retransmitted at either a harmonic frequency or a demodulated frequency. Applications include tagging and tracking objects with a uniquely-responding antenna. It is of interest to consider fractal rectenna because the self-similarity of fractal antennas tends to make them have similar resonance behavior at multiples of the primary resonance. Thus, fractal antennas can be suited for applications in which a signal is reradiated at a harmonic frequency. Simulations will be discussed with this application in mind.
Wideband Fractal Antennas for Holographic Imaging and Rectenna Applications
Bunch, Kyle J.; McMakin, Douglas L.; Sheen, David M.
2008-04-18
At Pacific Northwest National Laboratory, wideband antenna arrays have been successfully used to reconstruct three-dimensional images at microwave and millimeter-wave frequencies. Applications of this technology have included portal monitoring, through-wall imaging, and weapons detection. Fractal antennas have been shown to have wideband characteristics due to their self-similar nature (that is, their geometry is replicated at different scales). They further have advantages in providing good characteristics in a compact configuration. We discuss the application of fractal antennas for holographic imaging. Simulation results will be presented. Rectennas are a specific class of antennas in which a received signal drives a nonlinear junction and is retransmitted at either a harmonic frequency or a demodulated frequency. Applications include tagging and tracking objects with a uniquely-responding antenna. It is of interest to consider fractal rectenna because the self-similarity of fractal antennas tends to make them have similar resonance behavior at multiples of the primary resonance. Thus, fractal antennas can be suited for applications in which a signal is reradiated at a harmonic frequency. Simulations will be discussed with this application in mind.
Fractal analysis for reduced reference image quality assessment.
Xu, Yong; Liu, Delei; Quan, Yuhui; Le Callet, Patrick
2015-07-01
In this paper, multifractal analysis is adapted to reduced-reference image quality assessment (RR-IQA). A novel RR-QA approach is proposed, which measures the difference of spatial arrangement between the reference image and the distorted image in terms of spatial regularity measured by fractal dimension. An image is first expressed in Log-Gabor domain. Then, fractal dimensions are computed on each Log-Gabor subband and concatenated as a feature vector. Finally, the extracted features are pooled as the quality score of the distorted image using l1 distance. Compared with existing approaches, the proposed method measures image quality from the perspective of the spatial distribution of image patterns. The proposed method was evaluated on seven public benchmark data sets. Experimental results have demonstrated the excellent performance of the proposed method in comparison with state-of-the-art approaches.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Quattrochi, Dale A.; Luvall, Jeffrey C.
1997-01-01
Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely scale-independent. Self-similarity is a property of curves or surfaces where each part is indistinguishable from the whole. The fractal dimension D of remote sensing data yields quantitative insight on the spatial complexity and information content contained within these data. Analyses of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed(l0 to 80 meters). The forested scene behaves as one would expect-larger pixel sizes decrease the complexity of the image as individual clumps of trees are averaged into larger blocks. The increased complexity of the agricultural image with increasing pixel size results from the loss of homogeneous groups of pixels in the large fields to mixed pixels composed of varying combinations of NDVI values that correspond to roads and vegetation. The same process occur's in the urban image to some extent, but the lack of large, homogeneous areas in the high resolution NDVI image means the initially high D value is maintained as pixel size increases. The slope of the fractal dimension-resolution relationship provides indications of how image classification or feature identification will be affected by changes in sensor resolution.
Fractal descriptors for discrimination of microscopy images of plant leaves
NASA Astrophysics Data System (ADS)
Silva, N. R.; Florindo, J. B.; Gómez, M. C.; Kolb, R. M.; Bruno, O. M.
2014-03-01
This study proposes the application of fractal descriptors method to the discrimination of microscopy images of plant leaves. Fractal descriptors have demonstrated to be a powerful discriminative method in image analysis, mainly for the discrimination of natural objects. In fact, these descriptors express the spatial arrangement of pixels inside the texture under different scales and such arrangements are directly related to physical properties inherent to the material depicted in the image. Here, we employ the Bouligand-Minkowski descriptors. These are obtained by the dilation of a surface mapping the gray-level texture. The classification of the microscopy images is performed by the well-known Support Vector Machine (SVM) method and we compare the success rate with other literature texture analysis methods. The proposed method achieved a correctness rate of 89%, while the second best solution, the Co-occurrence descriptors, yielded only 78%. This clear advantage of fractal descriptors demonstrates the potential of such approach in the analysis of the plant microscopy images.
Fractal Image Filters for Specialized Image Recognition Tasks
2010-02-11
Mendoza , Cambridge University Press, Cambridge, 1995. [20] J . Kigami, Harmonic Calculus on p.c.f. Self-similar Sets, Trans. Amer. Math. Soc. 335 (1993) 721...Transformations between fractals, Progress in Probability, 61 (2009) 227-250. [4] M. F. Barnsley, J . Hutchinson, Ö. Sten�o, V-variable fractals: fractals with...convex body K R2 such that F (K) K. The associated Hilbert metric dH is de�ned on K by dH(x; y) = ln jR (x; y; a; b) j for all x; y 2 K with x 6
Beyond maximum entropy: Fractal Pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
Fractal analysis of high-resolution CT images as a tool for quantification of lung diseases
Uppaluri, R.; Mitsa, T.; Galvin, J.R.
1995-12-31
Fractal geometry is increasingly being used to model complex naturally occurring phenomena. There are two types of fractals in nature-geometric fractals and stochastic fractals. The pulmonary branching structure is a geometric fractal and the intensity of its grey scale image is a stochastic fractal. In this paper, the authors attempt to quantify the texture of CT lung images using properties of both types of fractals. A simple algorithm for detecting of abnormality in human lungs, based on 2-D and 3-D fractal dimensions, is presented. This method involves calculating the local fractal dimensions, based on intensities, in the 2-D slice to air edge enhancement. Following this, grey level thresholding is performed and a global fractal dimension, based on structure, for the entire data is estimated in 2-D and 3-D. High Resolution CT images of normal and abnormal lungs were analyzed. Preliminary results showed that classification of normal and abnormal images could be obtained based on the differences between their global fractal dimensions.
A Digital Image Steganography using Sierpinski Gasket Fractal and PLSB
NASA Astrophysics Data System (ADS)
Rupa, Ch.
2013-09-01
Information attacks are caused due to the weaknesses in the information security. These attacks affect the non-repudiation and integrity of the security services. In this paper, a novel security approach that can defend the message from information attacks is proposed. In this new approach the original message is encrypted by Fractal Sierpinski Gasket (FSG) cryptographic algorithm, that result is hidden into an image using penultimate and least significant bit (PLSB) embedding method. This method makes the security more robust than conventional approach. Important properties of fractals are sensitivity and self similarity. These can be exploited to produce avalanche effect. Stegoanalysis of the proposed method shows that it is resistant to various attacks and stronger than the existed steganographic approaches.
Fractal analysis of palmar electronographic images. Medical anthropological perspectives.
Guja, Cornelia; Voinea, V; Baciu, Adina; Ciuhuţa, M; Crişan, Daniela A
2008-01-01
The present paper brings to the medical specialists' attention a possibility of multivalent imagistic investigation--the palmar electrographic method submitted to a totally new analysis by the fractal method. Its support for information recording is the radiosensitive film. This makes it resemble the radiological investigation, which opened the way of correlating the shape of certain structures of the organism with their function. By the specific electromagnetic impressing of the ultra photosensitive film, palmar electrography has the advantage of catching the shape of certain radiative phenomena, generated by certain structures in their functional dynamics--at the level of the human palmar tegument. This makes it resemble the EEG, EKG and EMG investigations. The purpose of this presentation is to highlight a new modality of studying the states of the human organism in its permanent adaptation to the living environment, using a new anthropological, informational vision--by fractal processing and by the couple of concepts system / interface--much closer to reality than the present systemic thinking. The human palm, which has a special medial-anthropological relevance, is analysed as a complex adaptive biological and socio-cultural interface between the internal and external environment. The fractal phenomena recorded on the image are ubicuitary in nature and especially in the living world and their shapes may he described mathematically and used for decoding their informational laws. They may have very useful implications in the medical act. The paper presents a few introductory elements to the fractal theory, and, in the final part, the pursued objectives are concretely shown by grouping the EG images according to certain more important medical-anthropological themes.
MR imaging and osteoporosis: fractal lacunarity analysis of trabecular bone.
Zaia, Annamaria; Eleonori, Roberta; Maponi, Pierluigi; Rossi, Roberto; Murri, Roberto
2006-07-01
We develop a method of magnetic resonance (MR) image analysis able to provide parameter(s) sensitive to bone microarchitecture changes in aging, and to osteoporosis onset and progression. The method has been built taking into account fractal properties of many anatomic and physiologic structures. Fractal lacunarity analysis has been used to determine relevant parameter(s) to differentiate among three types of trabecular bone structure (healthy young, healthy perimenopausal, and osteoporotic patients) from lumbar vertebra MR images. In particular, we propose to approximate the lacunarity function by a hyperbola model function that depends on three coefficients, alpha, beta, and gamma, and to compute these coefficients as the solution of a least squares problem. This triplet of coefficients provides a model function that better represents the variation of mass density of pixels in the image considered. Clinical application of this preliminary version of our method suggests that one of the three coefficients, beta, may represent a standard for the evaluation of trabecular bone architecture and a potentially useful parametric index for the early diagnosis of osteoporosis.
SAR image post-processing for the estimation of fractal parameters
NASA Astrophysics Data System (ADS)
Di Martino, Gerardo; Riccio, Daniele; Ruello, Giuseppe; Zinno, Ivana
2011-11-01
In this paper a fractal based processing for the analysis of SAR images of natural surfaces is presented. Its definition is based on a complete direct imaging model developed by the authors. The application of this innovative algorithm to SAR images makes possible to obtain complete maps of the two key parameters of a fractal scene: the fractal dimension and the increment standard deviation. The fractal parameters extraction is based on the estimation of the power spectral density of the SAR amplitude image. From a theoretic point of view, the attention is focused on the retrieving procedure of the increment standard deviation, here presented for the first time. In the last section of the paper, the application of the introduced processing to high resolution SAR images is presented, with the relevant maps of the fractal dimension and of the increment standard deviation.
High compression image and image sequence coding
NASA Technical Reports Server (NTRS)
Kunt, Murat
1989-01-01
The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.
Wang, Jianji; Zheng, Nanning
2013-09-01
Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from the high computational complexity in encoding. Although many schemes are published to speed up encoding, they do not easily satisfy the encoding time or the reconstructed image quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched in the selected domain set in which these APCCs are closer to APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
Fractal analysis of AFM images of the surface of Bowman's membrane of the human cornea.
Ţălu, Ştefan; Stach, Sebastian; Sueiras, Vivian; Ziebarth, Noël Marysa
2015-04-01
The objective of this study is to further investigate the ultrastructural details of the surface of Bowman's membrane of the human cornea, using atomic force microscopy (AFM) images. One representative image acquired of Bowman's membrane of a human cornea was investigated. The three-dimensional (3-D) surface of the sample was imaged using AFM in contact mode, while the sample was completely submerged in optisol solution. Height and deflection images were acquired at multiple scan lengths using the MFP-3D AFM system software (Asylum Research, Santa Barbara, CA), based in IGOR Pro (WaveMetrics, Lake Oswego, OR). A novel approach, based on computational algorithms for fractal analysis of surfaces applied for AFM data, was utilized to analyze the surface structure. The surfaces revealed a fractal structure at the nanometer scale. The fractal dimension, D, provided quantitative values that characterize the scale properties of surface geometry. Detailed characterization of the surface topography was obtained using statistical parameters, in accordance with ISO 25178-2: 2012. Results obtained by fractal analysis confirm the relationship between the value of the fractal dimension and the statistical surface roughness parameters. The surface structure of Bowman's membrane of the human cornea is complex. The analyzed AFM images confirm a fractal nature of the surface, which is not taken into account by classical surface statistical parameters. Surface fractal dimension could be useful in ophthalmology to quantify corneal architectural changes associated with different disease states to further our understanding of disease evolution.
Fractal analysis of scatter imaging signatures to distinguish breast pathologies
NASA Astrophysics Data System (ADS)
Eguizabal, Alma; Laughney, Ashley M.; Krishnaswamy, Venkataramanan; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.; López-Higuera, José M.; Conde, Olga M.
2013-02-01
Fractal analysis combined with a label-free scattering technique is proposed for describing the pathological architecture of tumors. Clinicians and pathologists are conventionally trained to classify abnormal features such as structural irregularities or high indices of mitosis. The potential of fractal analysis lies in the fact of being a morphometric measure of the irregular structures providing a measure of the object's complexity and self-similarity. As cancer is characterized by disorder and irregularity in tissues, this measure could be related to tumor growth. Fractal analysis has been probed in the understanding of the tumor vasculature network. This work addresses the feasibility of applying fractal analysis to the scattering power map (as a physical modeling) and principal components (as a statistical modeling) provided by a localized reflectance spectroscopic system. Disorder, irregularity and cell size variation in tissue samples is translated into the scattering power and principal components magnitude and its fractal dimension is correlated with the pathologist assessment of the samples. The fractal dimension is computed applying the box-counting technique. Results show that fractal analysis of ex-vivo fresh tissue samples exhibits separated ranges of fractal dimension that could help classifier combining the fractal results with other morphological features. This contrast trend would help in the discrimination of tissues in the intraoperative context and may serve as a useful adjunct to surgeons.
Analysis of fractal dimensions of rat bones from film and digital images
NASA Technical Reports Server (NTRS)
Pornprasertsuk, S.; Ludlow, J. B.; Webber, R. L.; Tyndall, D. A.; Yamauchi, M.
2001-01-01
OBJECTIVES: (1) To compare the effect of two different intra-oral image receptors on estimates of fractal dimension; and (2) to determine the variations in fractal dimensions between the femur, tibia and humerus of the rat and between their proximal, middle and distal regions. METHODS: The left femur, tibia and humerus from 24 4-6-month-old Sprague-Dawley rats were radiographed using intra-oral film and a charge-coupled device (CCD). Films were digitized at a pixel density comparable to the CCD using a flat-bed scanner. Square regions of interest were selected from proximal, middle, and distal regions of each bone. Fractal dimensions were estimated from the slope of regression lines fitted to plots of log power against log spatial frequency. RESULTS: The fractal dimensions estimates from digitized films were significantly greater than those produced from the CCD (P=0.0008). Estimated fractal dimensions of three types of bone were not significantly different (P=0.0544); however, the three regions of bones were significantly different (P=0.0239). The fractal dimensions estimated from radiographs of the proximal and distal regions of the bones were lower than comparable estimates obtained from the middle region. CONCLUSIONS: Different types of image receptors significantly affect estimates of fractal dimension. There was no difference in the fractal dimensions of the different bones but the three regions differed significantly.
Intelligent fuzzy approach for fast fractal image compression
NASA Astrophysics Data System (ADS)
Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila
2014-12-01
Fractal image compression (FIC) is recognized as a NP-hard problem, and it suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm was proposed to reduce the MSE computation of FIC. In the first phase, based on edge property, range and domains are arranged. In the second one, imperialist competitive algorithm (ICA) is used according to the classified blocks. For maintaining the quality of the retrieved image and accelerating algorithm operation, we divided the solutions into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. Promising results thus achieved exhibit performance better than genetic algorithm (GA)-based and Full-search algorithms in terms of decreasing the number of MSE computations. The number of MSE computations was reduced by the proposed algorithm for 463 times faster compared to the Full-search algorithm, although the retrieved image quality did not have a considerable change.
Loh, N. Duane
2012-06-20
This deposition includes the aerosol diffraction images used for phasing, fractal morphology, and time-of-flight mass spectrometry. Files in this deposition are ordered in subdirectories that reflect the specifics.
NASA Astrophysics Data System (ADS)
Habibi, Ali
1993-01-01
The objective of this article is to present a discussion on the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and developments, the milestones in advancement of technology and the success of the upcoming commercial products in the market place which will be the main factors in establishing the future stage to image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in developments of theory, software, and hardware coupled with the future needs for use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, predict the future state of image coding. What seems to be certain today is the growing need for bandwidth compression. The television is using a technology which is half a century old and is ready to be replaced by high definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activities in development of theory, software, special purpose chips and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.
NASA Astrophysics Data System (ADS)
Alonso, Carmelo; Tarquis, Ana M.; Zúñiga, Ignacio; Benito, Rosa M.
2017-03-01
Several studies have shown that vegetation indexes can be used to estimate root zone soil moisture. Earth surface images, obtained by high-resolution satellites, presently give a lot of information on these indexes, based on the data of several wavelengths. Because of the potential capacity for systematic observations at various scales, remote sensing technology extends the possible data archives from the present time to several decades back. Because of this advantage, enormous efforts have been made by researchers and application specialists to delineate vegetation indexes from local scale to global scale by applying remote sensing imagery. In this work, four band images have been considered, which are involved in these vegetation indexes, and were taken by satellites Ikonos-2 and Landsat-7 of the same geographic location, to study the effect of both spatial (pixel size) and radiometric (number of bits coding the image) resolution on these wavelength bands as well as two vegetation indexes: the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI). In order to do so, a multi-fractal analysis of these multi-spectral images was applied in each of these bands and the two indexes derived. The results showed that spatial resolution has a similar scaling effect in the four bands, but radiometric resolution has a larger influence in blue and green bands than in red and near-infrared bands. The NDVI showed a higher sensitivity to the radiometric resolution than EVI. Both were equally affected by the spatial resolution. From both factors, the spatial resolution has a major impact in the multi-fractal spectrum for all the bands and the vegetation indexes. This information should be taken in to account when vegetation indexes based on different satellite sensors are obtained.
Border extrapolation using fractal attributes in remote sensing images
NASA Astrophysics Data System (ADS)
Cipolletti, M. P.; Delrieux, C. A.; Perillo, G. M. E.; Piccolo, M. C.
2014-01-01
In management, monitoring and rational use of natural resources the knowledge of precise and updated information is essential. Satellite images have become an attractive option for quantitative data extraction and morphologic studies, assuring a wide coverage without exerting negative environmental influence over the study area. However, the precision of such practice is limited by the spatial resolution of the sensors and the additional processing algorithms. The use of high resolution imagery (i.e., Ikonos) is very expensive for studies involving large geographic areas or requiring long term monitoring, while the use of less expensive or freely available imagery poses a limit in the geographic accuracy and physical precision that may be obtained. We developed a methodology for accurate border estimation that can be used for establishing high quality measurements with low resolution imagery. The method is based on the original theory by Richardson, taking advantage of the fractal nature of geographic features. The area of interest is downsampled at different scales and, at each scale, the border is segmented and measured. Finally, a regression of the dependence of the measured length with respect to scale is computed, which then allows for a precise extrapolation of the expected length at scales much finer than the originally available. The method is tested with both synthetic and satellite imagery, producing accurate results in both cases.
Simple fractal method of assessment of histological images for application in medical diagnostics
2010-01-01
We propose new method of assessment of histological images for medical diagnostics. 2-D image is preprocessed to form 1-D landscapes or 1-D signature of the image contour and then their complexity is analyzed using Higuchi's fractal dimension method. The method may have broad medical application, from choosing implant materials to differentiation between benign masses and malignant breast tumors. PMID:21134258
Local fuzzy fractal dimension and its application in medical image processing.
Zhuang, Xiaodong; Meng, Qingchun
2004-09-01
The local fuzzy fractal dimension (LFFD) is proposed to extract local fractal feature of medical images. The definition of LFFD is an extension of the pixel-covering method by incorporating the fuzzy set. Multi-feature edge detection is implemented with the LFFD and the Sobel operator. The LFFD can also serve as a characteristic of motion in medical image sequences. The experimental results show that the LFFD is an important feature of edge areas in medical images and can provide information for segmentation of echocardiogram image sequences.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-08-01
The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT) and the empirical mode decomposition (EMD), and; the newly introduced; variational mode decomposition mode (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier spectrum based fractal descriptors is estimated at specific scales and directions to characterize the image. The support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal and abnormal brain magnetic resonance images (MRI) affected with Alzheimer disease (AD). Our results demonstrate that fractal descriptors estimated in VMD domain outperform those estimated in DWT and EMD domains; and also those directly estimated from the original image.
Fractal scaling of apparent soil moisture estimated from vertical planes of Vertisol pit images
NASA Astrophysics Data System (ADS)
Cumbrera, Ramiro; Tarquis, Ana M.; Gascó, Gabriel; Millán, Humberto
2012-07-01
SummaryImage analysis could be a useful tool for investigating the spatial patterns of apparent soil moisture at multiple resolutions. The objectives of the present work were (i) to define apparent soil moisture patterns from vertical planes of Vertisol pit images and (ii) to describe the scaling of apparent soil moisture distribution using fractal parameters. Twelve soil pits (0.70 m long × 0.60 m width × 0.30 m depth) were excavated on a bare Mazic Pellic Vertisol. Six of them were excavated in April/2011 and six pits were established in May/2011 after 3 days of a moderate rainfall event. Digital photographs were taken from each Vertisol pit using a Kodak™ digital camera. The mean image size was 1600 × 945 pixels with one physical pixel ≈373 μm of the photographed soil pit. Each soil image was analyzed using two fractal scaling exponents, box counting (capacity) dimension (DBC) and interface fractal dimension (Di), and three prefractal scaling coefficients, the total number of boxes intercepting the foreground pattern at a unit scale (A), fractal lacunarity at the unit scale (Λ1) and Shannon entropy at the unit scale (S1). All the scaling parameters identified significant differences between both sets of spatial patterns. Fractal lacunarity was the best discriminator between apparent soil moisture patterns. Soil image interpretation with fractal exponents and prefractal coefficients can be incorporated within a site-specific agriculture toolbox. While fractal exponents convey information on space filling characteristics of the pattern, prefractal coefficients represent the investigated soil property as seen through a higher resolution microscope. In spite of some computational and practical limitations, image analysis of apparent soil moisture patterns could be used in connection with traditional soil moisture sampling, which always renders punctual estimates.
Image coding with geometric wavelets.
Alani, Dror; Averbuch, Amir; Dekel, Shai
2007-01-01
This paper describes a new and efficient method for low bit-rate image coding which is based on recent development in the theory of multivariate nonlinear piecewise polynomial approximation. It combines a binary space partition scheme with geometric wavelet (GW) tree approximation so as to efficiently capture curve singularities and provide a sparse representation of the image. The GW method successfully competes with state-of-the-art wavelet methods such as the EZW, SPIHT, and EBCOT algorithms. We report a gain of about 0.4 dB over the SPIHT and EBCOT algorithms at the bit-rate 0.0625 bits-per-pixels (bpp). It also outperforms other recent methods that are based on "sparse geometric representation." For example, we report a gain of 0.27 dB over the Bandelets algorithm at 0.1 bpp. Although the algorithm is computationally intensive, its time complexity can be significantely reduced by collecting a "global" GW n-term approximation to the image from a collection of GW trees, each constructed separately over tiles of the image.
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Sig-NganLam, Nina; Quattrochi, Dale A.
2004-01-01
The accuracy of traditional multispectral maximum-likelihood image classification is limited by the skewed statistical distributions of reflectances from the complex heterogenous mixture of land cover types in urban areas. This work examines the utility of local variance, fractal dimension and Moran's I index of spatial autocorrelation in segmenting multispectral satellite imagery. Tools available in the Image Characterization and Modeling System (ICAMS) were used to analyze Landsat 7 imagery of Atlanta, Georgia. Although segmentation of panchromatic images is possible using indicators of spatial complexity, different land covers often yield similar values of these indices. Better results are obtained when a surface of local fractal dimension or spatial autocorrelation is combined as an additional layer in a supervised maximum-likelihood multispectral classification. The addition of fractal dimension measures is particularly effective at resolving land cover classes within urbanized areas, as compared to per-pixel spectral classification techniques.
Document image retrieval through word shape coding.
Lu, Shijian; Li, Linlin; Tan, Chew Lim
2008-11-01
This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.
Efficiency of a model human image code
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1987-01-01
Hypothetical schemes for neural representation of visual information can be expressed as explicit image codes. Here, a code modeled on the simple cells of the primate striate cortex is explored. The Cortex transform maps a digital image into a set of subimages (layers) that are bandpass in spatial frequency and orientation. The layers are sampled so as to minimize the number of samples and still avoid aliasing. Samples are quantized in a manner that exploits the bandpass contrast-masking properties of human vision. The entropy of the samples is computed to provide a lower bound on the code size. Finally, the image is reconstructed from the code. Psychophysical methods are derived for comparing the original and reconstructed images to evaluate the sufficiency of the code. When each resolution is coded at the threshold for detection artifacts, the image-code size is about 1 bit/pixel.
Improved triangular prism methods for fractal analysis of remotely sensed images
NASA Astrophysics Data System (ADS)
Zhou, Yu; Fung, Tung; Leung, Yee
2016-05-01
Feature extraction has been a major area of research in remote sensing, and fractal feature is a natural characterization of complex objects across scales. Extending on the modified triangular prism (MTP) method, we systematically discuss three factors closely related to the estimation of fractal dimensions of remotely sensed images. They are namely the (F1) number of steps, (F2) step size, and (F3) estimation accuracy of the facets' areas of the triangular prisms. Differing from the existing improved algorithms that separately consider these factors, we simultaneously take all factors to construct three new algorithms, namely the modification of the eight-pixel algorithm, the four corner and the moving-average MTP. Numerical experiments based on 4000 generated images show their superior performances over existing algorithms: our algorithms not only overcome the limitation of image size suffered by existing algorithms but also obtain similar average fractal dimension with smaller standard deviation, only 50% for images with high fractal dimensions. In the case of real-life application, our algorithms more likely obtain fractal dimensions within the theoretical range. Thus, the fractal nature uncovered by our algorithms is more reasonable in quantifying the complexity of remotely sensed images. Despite the similar performance of these three new algorithms, the moving-average MTP can mitigate the sensitivity of the MTP to noise and extreme values. Based on the numerical and real-life case study, we check the effect of the three factors, (F1)-(F3), and demonstrate that these three factors can be simultaneously considered for improving the performance of the MTP method.
Fractal Analysis of Laplacian Pyramidal Filters Applied to Segmentation of Soil Images
de Castro, J.; Méndez, A.; Tarquis, A. M.
2014-01-01
The laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The required properties to the pyramidal filter produce a family of filters, which is unipara metrical in the case of the classical problem, when the length of the filter is 5. We pay attention to gaussian and fractal behaviour of these basis functions (or filters), and we determine the gaussian and fractal ranges in the case of single parameter a. These fractal filters loose less energy in every step of the laplacian pyramid, and we apply this property to get threshold values for segmenting soil images, and then evaluate their porosity. Also, we evaluate our results by comparing them with the Otsu algorithm threshold values, and conclude that our algorithm produce reliable test results. PMID:25114957
Buried mine detection using fractal geometry analysis to the LWIR successive line scan data image
NASA Astrophysics Data System (ADS)
Araki, Kan
2012-06-01
We have engaged in research on buried mine/IED detection by remote sensing method using LWIR camera. A IR image of a ground, containing buried objects can be assumed as a superimposed pattern including thermal scattering which may depend on the ground surface roughness, vegetation canopy, and effect of the sun light, and radiation due to various heat interaction caused by differences in specific heat, size, and buried depth of the objects and local temperature of their surrounding environment. In this cumbersome environment, we introduce fractal geometry for analyzing from an IR image. Clutter patterns due to these complex elements have oftentimes low ordered fractal dimension of Hausdorff Dimension. On the other hand, the target patterns have its tendency of obtaining higher ordered fractal dimension in terms of Information Dimension. Random Shuffle Surrogate method or Fourier Transform Surrogate method is used to evaluate fractional statistics by applying shuffle of time sequence data or phase of spectrum. Fractal interpolation to each line scan was also applied to improve the signal processing performance in order to evade zero division and enhance information of data. Some results of target extraction by using relationship between low and high ordered fractal dimension are to be presented.
ERIC Educational Resources Information Center
Barton, Ray
1990-01-01
Presented is an educational game called "The Chaos Game" which produces complicated fractal images. Two basic computer programs are included. The production of fractal images by the Sierpinski gasket and the Chaos Game programs is discussed. (CW)
Li, Hui-Ying; Du, Xiao-Ming; Yang, Bin; Wu, Bin; Xu, Zhu; Shi, Yi; Fang, Ji-Dun; Li, Fa-Sheng
2013-11-01
Dyes are frequently used to visualize fingering flow pathways, where the image process has an important role in the result analysis. The theory of fractal geometry is applied to give quantitative description of the stain patterns via image analysis, which is helpful for finger characterization and prediction. This description typically involves two parameters, a mass fractal dimension (D(m)) relative to the area, and a surface fractal dimension (D(s)) relative to the perimeter. This work detailed analyzes the influence of various choices during the thresholding step that transformed the origin color images to binary ones which are needed in the fractal analysis. One hundred and thirty images were obtained from laboratory two-dimension sand box infiltration experiments of four dyed non-aqueous phase liquids. Detailed comparisons of D(m) and D(s) were made respectively, considering a set of threshold algorithms and the filling of lakes. Results indicate that adjustments of the saturation threshold influence are less on both D(m) and D(s) in the laboratory experiments. The brightness threshold adjustments decrease the D(m) by 0.02 and increase the D(s) by 0.05. Filling lakes influence the D(m) less while the D(s) decrease by 0.10. Therefore the D(m) was recommended for further analysis to avoid subjective choices' influence in the image process.
Scalable still image coding based on wavelet
NASA Astrophysics Data System (ADS)
Yan, Yang; Zhang, Zhengbing
2005-02-01
The scalable image coding is an important objective of the future image coding technologies. In this paper, we present a kind of scalable image coding scheme based on wavelet transform. This method uses the famous EZW (Embedded Zero tree Wavelet) algorithm; we give a high-quality encoding to the ROI (region of interest) of the original image and a rough encoding to the rest. This method is applied well in limited memory space condition, and we encode the region of background according to the memory capacity. In this way, we can store the encoded image in limited memory space easily without losing its main information. Simulation results show it is effective.
Subband coding for image data archiving
NASA Technical Reports Server (NTRS)
Glover, Daniel; Kwatra, S. C.
1993-01-01
The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.
Subband coding for image data archiving
NASA Technical Reports Server (NTRS)
Glover, D.; Kwatra, S. C.
1992-01-01
The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband are given.
On the fractal geometry of DNA by the binary image analysis.
Cattani, Carlo; Pierro, Gaetano
2013-09-01
The multifractal analysis of binary images of DNA is studied in order to define a methodological approach to the classification of DNA sequences. This method is based on the computation of some multifractality parameters on a suitable binary image of DNA, which takes into account the nucleotide distribution. The binary image of DNA is obtained by a dot-plot (recurrence plot) of the indicator matrix. The fractal geometry of these images is characterized by fractal dimension (FD), lacunarity, and succolarity. These parameters are compared with some other coefficients such as complexity and Shannon information entropy. It will be shown that the complexity parameters are more or less equivalent to FD, while the parameters of multifractality have different values in the sense that sequences with higher FD might have lower lacunarity and/or succolarity. In particular, the genome of Drosophila melanogaster has been considered by focusing on the chromosome 3r, which shows the highest fractality with a corresponding higher level of complexity. We will single out some results on the nucleotide distribution in 3r with respect to complexity and fractality. In particular, we will show that sequences with higher FD also have a higher frequency distribution of guanine, while low FD is characterized by the higher presence of adenine.
NASA Astrophysics Data System (ADS)
Sau Fa, Kwok
2017-03-01
Recently, fractal and generalized Fokker–Planck equations have been the subject of considerable interest. In this work, the fractal and generalized Fokker–Planck equations connected with the Langevin equation and continuous time random walk model are considered. Descriptions and applications of these models to the fixed samples of the mouse brain using magnetic resonance imaging (MRI) are discussed.
Plant Identification Based on Leaf Midrib Cross-Section Images Using Fractal Descriptors.
da Silva, Núbia Rosa; Florindo, João Batista; Gómez, María Cecilia; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez
2015-01-01
The correct identification of plants is a common necessity not only to researchers but also to the lay public. Recently, computational methods have been employed to facilitate this task, however, there are few studies front of the wide diversity of plants occurring in the world. This study proposes to analyse images obtained from cross-sections of leaf midrib using fractal descriptors. These descriptors are obtained from the fractal dimension of the object computed at a range of scales. In this way, they provide rich information regarding the spatial distribution of the analysed structure and, as a consequence, they measure the multiscale morphology of the object of interest. In Biology, such morphology is of great importance because it is related to evolutionary aspects and is successfully employed to characterize and discriminate among different biological structures. Here, the fractal descriptors are used to identify the species of plants based on the image of their leaves. A large number of samples are examined, being 606 leaf samples of 50 species from Brazilian flora. The results are compared to other imaging methods in the literature and demonstrate that fractal descriptors are precise and reliable in the taxonomic process of plant species identification.
Plant Identification Based on Leaf Midrib Cross-Section Images Using Fractal Descriptors
da Silva, Núbia Rosa; Florindo, João Batista; Gómez, María Cecilia; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez
2015-01-01
The correct identification of plants is a common necessity not only to researchers but also to the lay public. Recently, computational methods have been employed to facilitate this task, however, there are few studies front of the wide diversity of plants occurring in the world. This study proposes to analyse images obtained from cross-sections of leaf midrib using fractal descriptors. These descriptors are obtained from the fractal dimension of the object computed at a range of scales. In this way, they provide rich information regarding the spatial distribution of the analysed structure and, as a consequence, they measure the multiscale morphology of the object of interest. In Biology, such morphology is of great importance because it is related to evolutionary aspects and is successfully employed to characterize and discriminate among different biological structures. Here, the fractal descriptors are used to identify the species of plants based on the image of their leaves. A large number of samples are examined, being 606 leaf samples of 50 species from Brazilian flora. The results are compared to other imaging methods in the literature and demonstrate that fractal descriptors are precise and reliable in the taxonomic process of plant species identification. PMID:26091501
Blockiness in JPEG-coded images
NASA Astrophysics Data System (ADS)
Meesters, Lydia; Martens, Jean-Bernard
1999-05-01
In two experiments, dissimilarity data and numerical scaling data were obtained to determine the underlying attributes of image quality in baseline sequential JPEG coded imags. Although several distortions were perceived, i.e., blockiness, ringing and blur, the subjective data for all attributes where highly correlated, so that image quality could approximately be described by one independent attribute. We therefore proceeded by developing an instrumental measure for one of these distortions, i.e., blockiness. In this paper a single-ended blockiness measure is proposed, i.e., one that uses only the coded image. Our approach is therefore fundamentally different from most image quality models that use both the original and the degraded image. The measure is based on detecting the low- amplitude edges that result from blocking and estimating the amplitudes. Because of the approximate 1D of the underlying psychological space, the proposed blockiness measure also predicts the image quality of sequential baseline coded JPEG images.
Linear distance coding for image classification.
Wang, Zilei; Feng, Jiashi; Yan, Shuicheng; Xi, Hongsheng
2013-02-01
The feature coding-pooling framework is shown to perform well in image classification tasks, because it can generate discriminative and robust image representations. The unavoidable information loss incurred by feature quantization in the coding process and the undesired dependence of pooling on the image spatial layout, however, may severely limit the classification. In this paper, we propose a linear distance coding (LDC) method to capture the discriminative information lost in traditional coding methods while simultaneously alleviating the dependence of pooling on the image spatial layout. The core of the LDC lies in transforming local features of an image into more discriminative distance vectors, where the robust image-to-class distance is employed. These distance vectors are further encoded into sparse codes to capture the salient features of the image. The LDC is theoretically and experimentally shown to be complementary to the traditional coding methods, and thus their combination can achieve higher classification accuracy. We demonstrate the effectiveness of LDC on six data sets, two of each of three types (specific object, scene, and general object), i.e., Flower 102 and PFID 61, Scene 15 and Indoor 67, Caltech 101 and Caltech 256. The results show that our method generally outperforms the traditional coding methods, and achieves or is comparable to the state-of-the-art performance on these data sets.
Advanced Imaging Optics Utilizing Wavefront Coding.
Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen
2015-06-01
Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.
An edge preserving differential image coding scheme
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1992-01-01
Differential encoding techniques are fast and easy to implement. However, a major problem with the use of differential encoding for images is the rapid edge degradation encountered when using such systems. This makes differential encoding techniques of limited utility, especially when coding medical or scientific images, where edge preservation is of utmost importance. A simple, easy to implement differential image coding system with excellent edge preservation properties is presented. The coding system can be used over variable rate channels, which makes it especially attractive for use in the packet network environment.
Code-excited linear predictive coding of multispectral MR images
NASA Astrophysics Data System (ADS)
Hu, Jian-Hong; Wang, Yao; Cahill, Patrick
1996-02-01
This paper reports a multispectral code excited linear predictive coding method for the compression of well-registered multispectral MR images. Different linear prediction models and the adaptation schemes have been compared. The method which uses forward adaptive autoregressive (AR) model has proven to achieve a good compromise between performance, complexity and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over non-overlapping square macroblocks. Each macro-block is further divided into several micro-blocks and, the best excitation signals for each microblock are determined through an analysis-by-synthesis procedure. To satisfy the high quality requirement for medical images, the error between the original images and the synthesized ones are further specified using a vector quantizer. The MFCELP method has been applied to 26 sets of clinical MR neuro images (20 slices/set, 3 spectral bands/slice, 256 by 256 pixels/image, 12 bits/pixel). It provides a significant improvement over the discrete cosine transform (DCT) based JPEG method, a wavelet transform based embedded zero-tree wavelet (EZW) coding method, as well as the MSARMA method we developed before.
The design of wavefront coded imaging system
NASA Astrophysics Data System (ADS)
Lan, Shun; Cen, Zhaofeng; Li, Xiaotong
2016-10-01
Wavefront Coding is a new method to extend the depth of field, which combines optical design and signal processing together. By using optical design software ZEMAX ,we designed a practical wavefront coded imaging system based on a conventional Cooke triplet system .Unlike conventional optical system, the wavefront of this new system is modulated by a specially designed phase mask, which makes the point spread function (PSF)of optical system not sensitive to defocus. Therefore, a series of same blurred images obtained at the image plane. In addition, the optical transfer function (OTF) of the wavefront coded imaging system is independent of focus, which is nearly constant with misfocus and has no regions of zeros. All object information can be completely recovered through digital filtering at different defocus positions. The focus invariance of MTF is selected as merit function in this design. And the coefficients of phase mask are set as optimization goals. Compared to conventional optical system, wavefront coded imaging system obtains better quality images under different object distances. Some deficiencies appear in the restored images due to the influence of digital filtering algorithm, which are also analyzed in this paper. The depth of field of the designed wavefront coded imaging system is about 28 times larger than initial optical system, while keeping higher optical power and resolution at the image plane.
Adaptive discrete cosine transform based image coding
NASA Astrophysics Data System (ADS)
Hu, Neng-Chung; Luoh, Shyan-Wen
1996-04-01
In this discrete cosine transform (DCT) based image coding, the DCT kernel matrix is decomposed into a product of two matrices. The first matrix is called the discrete cosine preprocessing transform (DCPT), whose kernels are plus or minus 1 or plus or minus one- half. The second matrix is the postprocessing stage treated as a correction stage that converts the DCPT to the DCT. On applying the DCPT to image coding, image blocks are processed by the DCPT, then a decision is made to determine whether the processed image blocks are inactive or active in the DCPT domain. If the processed image blocks are inactive, then the compactness of the processed image blocks is the same as that of the image blocks processed by the DCT. However, if the processed image blocks are active, a correction process is required; this is achieved by multiplying the processed image block by the postprocessing stage. As a result, this adaptive image coding achieves the same performance as the DCT image coding, and both the overall computation and the round-off error are reduced, because both the DCPT and the postprocessing stage can be implemented by distributed arithmetic or fast computation algorithms.
Ultrasound strain imaging using Barker code
NASA Astrophysics Data System (ADS)
Peng, Hui; Tie, Juhong; Guo, Dequan
2017-01-01
Ultrasound strain imaging is showing promise as a new way of imaging soft tissue elasticity in order to help clinicians detect lesions or cancers in tissues. In this paper, Barker code is applied to strain imaging to improve its quality. Barker code as a coded excitation signal can be used to improve the echo signal-to-noise ratio (eSNR) in ultrasound imaging system. For the Baker code of length 13, the sidelobe level of the matched filter output is -22dB, which is unacceptable for ultrasound strain imaging, because high sidelobe level will cause high decorrelation noise. Instead of using the conventional matched filter, we use the Wiener filter to decode the Barker-coded echo signal to suppress the range sidelobes. We also compare the performance of Barker code and the conventional short pulse in simulation method. The simulation results demonstrate that the performance of the Wiener filter is much better than the matched filter, and Baker code achieves higher elastographic signal-to-noise ratio (SNRe) than the short pulse in low eSNR or great depth conditions due to the increased eSNR with it.
Detecting abrupt dynamic change based on changes in the fractal properties of spatial images
NASA Astrophysics Data System (ADS)
Liu, Qunqun; He, Wenping; Gu, Bin; Jiang, Yundi
2016-08-01
Many abrupt climate change events often cannot be detected timely by conventional abrupt detection methods until a few years after these events have occurred. The reason for this lag in detection is that abundant and long-term observational data are required for accurate abrupt change detection by these methods, especially for the detection of a regime shift. So, these methods cannot help us understand and forecast the evolution of the climate system in a timely manner. Obviously, spatial images, generated by a coupled spatiotemporal dynamical model, contain more information about a dynamic system than a single time series, and we find that spatial images show the fractal properties. The fractal properties of spatial images can be quantitatively characterized by the Hurst exponent, which can be estimated by two-dimensional detrended fluctuation analysis (TD-DFA). Based on this, TD-DFA is used to detect an abrupt dynamic change of a coupled spatiotemporal model. The results show that the TD-DFA method can effectively detect abrupt parameter changes in the coupled model by monitoring the changing in the fractal properties of spatial images. The present method provides a new way for abrupt dynamic change detection, which can achieve timely and efficient abrupt change detection results.
Zheng, Xiuqing; Liao, Zhiwu; Hu, Shaoxiang; Li, Ming; Zhou, Jiliu
2013-01-01
NLMs is a state-of-art image denoising method; however, it sometimes oversmoothes anatomical features in low-dose CT (LDCT) imaging. In this paper, we propose a simple way to improve the spatial adaptivity (SA) of NLMs using pointwise fractal dimension (PWFD). Unlike existing fractal image dimensions that are computed on the whole images or blocks of images, the new PWFD, named pointwise box-counting dimension (PWBCD), is computed for each image pixel. PWBCD uses a fixed size local window centered at the considered image pixel to fit the different local structures of images. Then based on PWBCD, a new method that uses PWBCD to improve SA of NLMs directly is proposed. That is, PWBCD is combined with the weight of the difference between local comparison windows for NLMs. Smoothing results for test images and real sinograms show that PWBCD-NLMs with well-chosen parameters can preserve anatomical features better while suppressing the noises efficiently. In addition, PWBCD-NLMs also has better performance both in visual quality and peak signal to noise ratio (PSNR) than NLMs in LDCT imaging.
Foulsham, Tom; Kingstone, Alan
2010-04-07
The direction in which people tend to move their eyes when inspecting images can reveal the different influences on eye guidance in scene perception, and their time course. We investigated biases in saccade direction during a memory-encoding task with natural scenes and computer-generated fractals. Images were rotated to disentangle egocentric and image-based guidance. Saccades in fractals were more likely to be horizontal, regardless of orientation. In scenes, the first saccade often moved down and subsequent eye movements were predominantly vertical, relative to the scene. These biases were modulated by the distribution of visual features (saliency and clutter) in the scene. The results suggest that image orientation, visual features and the scene frame-of-reference have a rapid effect on eye guidance.
Alzheimer's Disease Detection in Brain Magnetic Resonance Images Using Multiscale Fractal Analysis
Lahmiri, Salim; Boukadoum, Mounir
2013-01-01
We present a new automated system for the detection of brain magnetic resonance images (MRI) affected by Alzheimer's disease (AD). The MRI is analyzed by means of multiscale analysis (MSA) to obtain its fractals at six different scales. The extracted fractals are used as features to differentiate healthy brain MRI from those of AD by a support vector machine (SVM) classifier. The result of classifying 93 brain MRIs consisting of 51 images of healthy brains and 42 of brains affected by AD, using leave-one-out cross-validation method, yielded 99.18% ± 0.01 classification accuracy, 100% sensitivity, and 98.20% ± 0.02 specificity. These results and a processing time of 5.64 seconds indicate that the proposed approach may be an efficient diagnostic aid for radiologists in the screening for AD. PMID:24967286
NASA Astrophysics Data System (ADS)
Ahammer, Helmut; DeVaney, Trevor T. J.
2004-03-01
The boundary of a fractal object, represented in a two-dimensional space, is theoretically a line with an infinitely small width. In digital images this boundary or contour is limited to the pixel resolution of the image and the width of the line commonly depends on the edge detection algorithm used. The Minkowski dimension was evaluated by using three different edge detection algorithms (Sobel, Roberts, and Laplace operator). These three operators were investigated because they are very widely used and because their edge detection result is very distinct concerning the line width. Very common fractals (Sierpinski carpet and Koch islands) were investigated as well as the binary images from a cancer invasion assay taken with a confocal laser scanning microscope. The fractal dimension is directly proportional to the width of the contour line and the fact, that in practice very often the investigated objects are fractals only within a limited resolution range is considered too.
Ahammer, Helmut; DeVaney, Trevor T J
2004-03-01
The boundary of a fractal object, represented in a two-dimensional space, is theoretically a line with an infinitely small width. In digital images this boundary or contour is limited to the pixel resolution of the image and the width of the line commonly depends on the edge detection algorithm used. The Minkowski dimension was evaluated by using three different edge detection algorithms (Sobel, Roberts, and Laplace operator). These three operators were investigated because they are very widely used and because their edge detection result is very distinct concerning the line width. Very common fractals (Sierpinski carpet and Koch islands) were investigated as well as the binary images from a cancer invasion assay taken with a confocal laser scanning microscope. The fractal dimension is directly proportional to the width of the contour line and the fact, that in practice very often the investigated objects are fractals only within a limited resolution range is considered too.
NASA Astrophysics Data System (ADS)
Liang, Yingjie; Ye, Allen Q.; Chen, Wen; Gatto, Rodolfo G.; Colon-Perez, Luis; Mareci, Thomas H.; Magin, Richard L.
2016-10-01
Non-Gaussian (anomalous) diffusion is wide spread in biological tissues where its effects modulate chemical reactions and membrane transport. When viewed using magnetic resonance imaging (MRI), anomalous diffusion is characterized by a persistent or 'long tail' behavior in the decay of the diffusion signal. Recent MRI studies have used the fractional derivative to describe diffusion dynamics in normal and post-mortem tissue by connecting the order of the derivative with changes in tissue composition, structure and complexity. In this study we consider an alternative approach by introducing fractal time and space derivatives into Fick's second law of diffusion. This provides a more natural way to link sub-voxel tissue composition with the observed MRI diffusion signal decay following the application of a diffusion-sensitive pulse sequence. Unlike previous studies using fractional order derivatives, here the fractal derivative order is directly connected to the Hausdorff fractal dimension of the diffusion trajectory. The result is a simpler, computationally faster, and more direct way to incorporate tissue complexity and microstructure into the diffusional dynamics. Furthermore, the results are readily expressed in terms of spectral entropy, which provides a quantitative measure of the overall complexity of the heterogeneous and multi-scale structure of biological tissues. As an example, we apply this new model for the characterization of diffusion in fixed samples of the mouse brain. These results are compared with those obtained using the mono-exponential, the stretched exponential, the fractional derivative, and the diffusion kurtosis models. Overall, we find that the order of the fractal time derivative, the diffusion coefficient, and the spectral entropy are potential biomarkers to differentiate between the microstructure of white and gray matter. In addition, we note that the fractal derivative model has practical advantages over the existing models from the
Multi-shot compressed coded aperture imaging
NASA Astrophysics Data System (ADS)
Shao, Xiaopeng; Du, Juan; Wu, Tengfei; Jin, Zhenhua
2013-09-01
The classical methods of compressed coded aperture (CCA) still require an optical sensor with high resolution, although the sampling rate has broken the Nyquist sampling rate already. A novel architecture of multi-shot compressed coded aperture imaging (MCCAI) using a low resolution optical sensor is proposed, which is mainly based on the 4-f imaging system, combining with two spatial light modulators (SLM) to achieve the compressive imaging goal. The first SLM employed for random convolution is placed at the frequency spectrum plane of the 4-f imaging system, while the second SLM worked as a selecting filter is positioned in front of the optical sensor. By altering the random coded pattern of the second SLM and sampling, a couple of observations can be obtained by a low resolution optical sensor easily, and these observations will be combined mathematically and used to reconstruct the high resolution image. That is to say, MCCAI aims at realizing the super resolution imaging with multiple random samplings by using a low resolution optical sensor. To improve the computational imaging performance, total variation (TV) regularization is introduced into the super resolution reconstruction model to get rid of the artifacts, and alternating direction method of multipliers (ADM) is utilized to solve the optimal result efficiently. The results show that the MCCAI architecture is suitable for super resolution computational imaging using a much lower resolution optical sensor than traditional CCA imaging methods by capturing multiple frame images.
Beyond maximum entropy: Fractal pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, R. C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.
Fractal evaluation of drug amorphicity from optical and scanning electron microscope images
NASA Astrophysics Data System (ADS)
Gavriloaia, Bogdan-Mihai G.; Vizireanu, Radu C.; Neamtu, Catalin I.; Gavriloaia, Gheorghe V.
2013-09-01
Amorphous materials are metastable, more reactive than the crystalline ones, and have to be evaluated before pharmaceutical compound formulation. Amorphicity is interpreted as a spatial chaos, and patterns of molecular aggregates of dexamethasone, D, were investigated in this paper by using fractal dimension, FD. Images having three magnifications of D were taken from an optical microscope, OM, and with eight magnifications, from a scanning electron microscope, SEM, were analyzed. The average FD for pattern irregularities of OM images was 1.538, and about 1.692 for SEM images. The FDs of the two kinds of images are less sensitive of threshold level. 3D images were shown to illustrate dependence of FD of threshold and magnification level. As a result, optical image of single scale is enough to characterize the drug amorphicity. As a result, the OM image at a single scale is enough to characterize the amorphicity of D.
Predictive depth coding of wavelet transformed images
NASA Astrophysics Data System (ADS)
Lehtinen, Joonas
1999-10-01
In this paper, a new prediction based method, predictive depth coding, for lossy wavelet image compression is presented. It compresses a wavelet pyramid composition by predicting the number of significant bits in each wavelet coefficient quantized by the universal scalar quantization and then by coding the prediction error with arithmetic coding. The adaptively found linear prediction context covers spatial neighbors of the coefficient to be predicted and the corresponding coefficients on lower scale and in the different orientation pyramids. In addition to the number of significant bits, the sign and the bits of non-zero coefficients are coded. The compression method is tested with a standard set of images and the results are compared with SFQ, SPIHT, EZW and context based algorithms. Even though the algorithm is very simple and it does not require any extra memory, the compression results are relatively good.
Image compression with embedded multiwavelet coding
NASA Astrophysics Data System (ADS)
Liang, Kai-Chieh; Li, Jin; Kuo, C.-C. Jay
1996-03-01
An embedded image coding scheme using the multiwavelet transform and inter-subband prediction is proposed in this research. The new proposed coding scheme consists of the following building components: GHM multiwavelet transform, prediction across subbands, successive approximation quantization, and adaptive binary arithmetic coding. Our major contribution is the introduction of a set of prediction rules to fully exploit the correlations between multiwavelet coefficients in different frequency bands. The performance of the proposed new method is comparable to that of state-of-the-art wavelet compression methods.
Computing Challenges in Coded Mask Imaging
NASA Technical Reports Server (NTRS)
Skinner, Gerald
2009-01-01
This slide presaentation reviews the complications and challenges in developing computer systems for Coded Mask Imaging telescopes. The coded mask technique is used when there is no other way to create the telescope, (i.e., when there are wide fields of view, high energies for focusing or low energies for the Compton/Tracker Techniques and very good angular resolution.) The coded mask telescope is described, and the mask is reviewed. The coded Masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position sensitive detectors used for the coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The Matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, comparison of the EXIST/HET with the SWIFT/BAT and details of the design of the EXIST/HET.
Image Coding Based on Address Vector Quantization.
NASA Astrophysics Data System (ADS)
Feng, Yushu
Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the code-word which produced the best match, the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A code-book of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means, or the generalized Lloyd algorithm. A review of different Vector Quantization techniques are given in chapter 1. Chapter 2 gives an overview of codebook design methods including the Kohonen neural network to design codebook. During the encoding process, the correlation of the address is considered and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ which includes static and dynamic processes is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as that of the normal VQ scheme but the bit rate is about 1/2 to 1/3 as that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ based on a probability transition matrix to select the best subcodebook to encode the image is developed. In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self -Organizing
Badea, Alexandru Florin; Lupsor Platon, Monica; Crisan, Maria; Cattani, Carlo; Badea, Iulia; Pierro, Gaetano; Sannino, Gianpaolo; Baciut, Grigore
2013-01-01
The geometry of some medical images of tissues, obtained by elastography and ultrasonography, is characterized in terms of complexity parameters such as the fractal dimension (FD). It is well known that in any image there are very subtle details that are not easily detectable by the human eye. However, in many cases like medical imaging diagnosis, these details are very important since they might contain some hidden information about the possible existence of certain pathological lesions like tissue degeneration, inflammation, or tumors. Therefore, an automatic method of analysis could be an expedient tool for physicians to give a faultless diagnosis. The fractal analysis is of great importance in relation to a quantitative evaluation of “real-time” elastography, a procedure considered to be operator dependent in the current clinical practice. Mathematical analysis reveals significant discrepancies among normal and pathological image patterns. The main objective of our work is to demonstrate the clinical utility of this procedure on an ultrasound image corresponding to a submandibular diffuse pathology. PMID:23762183
Badea, Alexandru Florin; Lupsor Platon, Monica; Crisan, Maria; Cattani, Carlo; Badea, Iulia; Pierro, Gaetano; Sannino, Gianpaolo; Baciut, Grigore
2013-01-01
The geometry of some medical images of tissues, obtained by elastography and ultrasonography, is characterized in terms of complexity parameters such as the fractal dimension (FD). It is well known that in any image there are very subtle details that are not easily detectable by the human eye. However, in many cases like medical imaging diagnosis, these details are very important since they might contain some hidden information about the possible existence of certain pathological lesions like tissue degeneration, inflammation, or tumors. Therefore, an automatic method of analysis could be an expedient tool for physicians to give a faultless diagnosis. The fractal analysis is of great importance in relation to a quantitative evaluation of "real-time" elastography, a procedure considered to be operator dependent in the current clinical practice. Mathematical analysis reveals significant discrepancies among normal and pathological image patterns. The main objective of our work is to demonstrate the clinical utility of this procedure on an ultrasound image corresponding to a submandibular diffuse pathology.
NASA Astrophysics Data System (ADS)
Athanasopoulou, Labrini; Athanasopoulos, Stavros; Karamanos, Kostas; Almirantis, Yannis
2010-11-01
Statistical methods, including block entropy based approaches, have already been used in the study of long-range features of genomic sequences seen as symbol series, either considering the full alphabet of the four nucleotides or the binary purine or pyrimidine character set. Here we explore the alternation of short protein-coding segments with long noncoding spacers in entire chromosomes, focusing on the scaling properties of block entropy. In previous studies, it has been shown that the sizes of noncoding spacers follow power-law-like distributions in most chromosomes of eukaryotic organisms from distant taxa. We have developed a simple evolutionary model based on well-known molecular events (segmental duplications followed by elimination of most of the duplicated genes) which reproduces the observed linearity in log-log plots. The scaling properties of block entropy H(n) have been studied in several works. Their findings suggest that linearity in semilogarithmic scale characterizes symbol sequences which exhibit fractal properties and long-range order, while this linearity has been shown in the case of the logistic map at the Feigenbaum accumulation point. The present work starts with the observation that the block entropy of the Cantor-like binary symbol series scales in a similar way. Then, we perform the same analysis for the full set of human chromosomes and for several chromosomes of other eukaryotes. A similar but less extended linearity in semilogarithmic scale, indicating fractality, is observed, while randomly formed surrogate sequences clearly lack this type of scaling. Genomic sequences always present entropy values much lower than their random surrogates. Symbol sequences produced by the aforementioned evolutionary model follow the scaling found in genomic sequences, thus corroborating the conjecture that “segmental duplication-gene elimination” dynamics may have contributed to the observed long rangeness in the coding or noncoding alternation in
Vector lifting schemes for stereo image coding.
Kaaniche, Mounir; Benazza-Benyahia, Amel; Pesquet-Popescu, Béatrice; Pesquet, Jean-Christophe
2009-11-01
Many research efforts have been devoted to the improvement of stereo image coding techniques for storage or transmission. In this paper, we are mainly interested in lossy-to-lossless coding schemes for stereo images allowing progressive reconstruction. The most commonly used approaches for stereo compression are based on disparity compensation techniques. The basic principle involved in this technique first consists of estimating the disparity map. Then, one image is considered as a reference and the other is predicted in order to generate a residual image. In this paper, we propose a novel approach, based on vector lifting schemes (VLS), which offers the advantage of generating two compact multiresolution representations of the left and the right views. We present two versions of this new scheme. A theoretical analysis of the performance of the considered VLS is also conducted. Experimental results indicate a significant improvement using the proposed structures compared with conventional methods.
Tomomitsu, Tatsushi; Mimura, Hiroaki; Murase, Kenya; Tamada, Tsutomu; Sone, Teruki; Fukunaga, Masao
2005-06-20
Many analyses of bone microarchitecture using three-dimensional images of micro CT (microCT) have been reported recently. However, as extirpated bone is the subject of measurement on microCT, various kinds of information are not available clinically. Our aim is to evaluate usefulness of fractal dimension as an index of bone strength different from bone mineral density in in-vivo, to which microCT could not be applied. In this fundamental study, the relation between pixel size and the slice thickness of images was examined when fractal analysis was applied to clinical images. We examined 40 lumbar spine specimens extirpated from 16 male cadavers (30-88 years; mean age, 60.8 years). Three-dimensional images of the trabeculae of 150 slices were obtained by a microCT system under the following conditions: matrix size, 512 x 512; slice thickness, 23.2 em; and pixel size, 18.6 em. Based on images of 150 slices, images of four different matrix sizes and nine different slice thicknesses were made using public domain software (NIH Image). The threshold value for image binarization, and the relation between pixel size and the slice thickness of an image used for two-dimensional and three-dimensional fractal analyses were studied. In addition, the box counting method was used for fractal analysis. One hundred forty-five in box counting was most suitable as the threshold value for image binarization on the 256 gray levels. The correlation coefficients between two-dimensional fractal dimensions of processed images and three-dimensional fractal dimensions of original images were more than 0.9 for pixel sizes < or =148.8 microm at a slice thickness of 1 mm, and < or =74.4 microm at one of 2 mm. In terms of the relation between the three-dimensional fractal dimension of processed images and three-dimensional fractal dimension of original images, when pixel size was less than 74.4 microm, a correlation coefficient of more than 0.9 was obtained even for the maximal slice thickness
Fractal and digital image processing to determine the degree of dispersion of carbon nanotubes
NASA Astrophysics Data System (ADS)
Liang, Xiao-ning; Li, Wei
2016-05-01
The degree of dispersion is an important parameter to quantitatively study properties of carbon nanotube composites. Among the many methods for studying dispersion, scanning electron microscopy, transmission electron microscopy, and atomic force microscopy are the most commonly used, intuitive, and convincing methods. However, they have the disadvantage of not being quantitative. To overcome this disadvantage, the fractal theory and digital image processing method can be used to provide a quantitative analysis of the morphology and properties of carbon nanotube composites. In this paper, the dispersion degree of carbon nanotubes was investigated using two fractal methods, namely, the box-counting method and the differential box-counting method. On the basis of the results, we propose a new method for the quantitative characterization of the degree of dispersion of carbon nanotubes. This hierarchical grid method can be used as a supplementary method, and can be combined with the fractal calculation method. Thus, the accuracy and effectiveness of the quantitative characterization of the dispersion degree of carbon nanotubes can be improved. (The outer diameter of the carbon nanotubes is about 50 nm; the length of the carbon nanotubes is 10-20 μm.)
Coded Access Optical Sensor (CAOS) Imager
NASA Astrophysics Data System (ADS)
Riza, N. A.; Amin, M. J.; La Torre, J. P.
2015-04-01
High spatial resolution, low inter-pixel crosstalk, high signal-to-noise ratio (SNR), adequate application dependent speed, economical and energy efficient design are common goals sought after for optical image sensors. In optical microscopy, overcoming the diffraction limit in spatial resolution has been achieved using materials chemistry, optimal wavelengths, precision optics and nanomotion-mechanics for pixel-by-pixel scanning. Imagers based on pixelated imaging devices such as CCD/CMOS sensors avoid pixel-by-pixel scanning as all sensor pixels operate in parallel, but these imagers are fundamentally limited by inter-pixel crosstalk, in particular with interspersed bright and dim light zones. In this paper, we propose an agile pixel imager sensor design platform called Coded Access Optical Sensor (CAOS) that can greatly alleviate the mentioned fundamental limitations, empowering smart optical imaging for particular environments. Specifically, this novel CAOS imager engages an application dependent electronically programmable agile pixel platform using hybrid space-time-frequency coded multiple-access of the sampled optical irradiance map. We demonstrate the foundational working principles of the first experimental electronically programmable CAOS imager using hybrid time-frequency multiple access sampling of a known high contrast laser beam irradiance test map, with the CAOS instrument based on a Texas Instruments (TI) Digital Micromirror Device (DMD). This CAOS instrument provides imaging data that exhibits 77 dB electrical SNR and the measured laser beam image irradiance specifications closely match (i.e., within 0.75% error) the laser manufacturer provided beam image irradiance radius numbers. The proposed CAOS imager can be deployed in many scientific and non-scientific applications where pixel agility via electronic programmability can pull out desired features in an irradiance map subject to the CAOS imaging operation.
Zaia, Annamaria
2015-01-01
Osteoporosis represents one major health condition for our growing elderly population. It accounts for severe morbidity and increased mortality in postmenopausal women and it is becoming an emerging health concern even in aging men. Screening of the population at risk for bone degeneration and treatment assessment of osteoporotic patients to prevent bone fragility fractures represent useful tools to improve quality of life in the elderly and to lighten the related socio-economic impact. Bone mineral density (BMD) estimate by means of dual-energy X-ray absorptiometry is normally used in clinical practice for osteoporosis diagnosis. Nevertheless, BMD alone does not represent a good predictor of fracture risk. From a clinical point of view, bone microarchitecture seems to be an intriguing aspect to characterize bone alteration patterns in aging and pathology. The widening into clinical practice of medical imaging techniques and the impressive advances in information technologies together with enhanced capacity of power calculation have promoted proliferation of new methods to assess changes of trabecular bone architecture (TBA) during aging and osteoporosis. Magnetic resonance imaging (MRI) has recently arisen as a useful tool to measure bone structure in vivo. In particular, high-resolution MRI techniques have introduced new perspectives for TBA characterization by non-invasive non-ionizing methods. However, texture analysis methods have not found favor with clinicians as they produce quite a few parameters whose interpretation is difficult. The introduction in biomedical field of paradigms, such as theory of complexity, chaos, and fractals, suggests new approaches and provides innovative tools to develop computerized methods that, by producing a limited number of parameters sensitive to pathology onset and progression, would speed up their application into clinical practice. Complexity of living beings and fractality of several physio-anatomic structures suggest
Zaia, Annamaria
2015-03-18
Osteoporosis represents one major health condition for our growing elderly population. It accounts for severe morbidity and increased mortality in postmenopausal women and it is becoming an emerging health concern even in aging men. Screening of the population at risk for bone degeneration and treatment assessment of osteoporotic patients to prevent bone fragility fractures represent useful tools to improve quality of life in the elderly and to lighten the related socio-economic impact. Bone mineral density (BMD) estimate by means of dual-energy X-ray absorptiometry is normally used in clinical practice for osteoporosis diagnosis. Nevertheless, BMD alone does not represent a good predictor of fracture risk. From a clinical point of view, bone microarchitecture seems to be an intriguing aspect to characterize bone alteration patterns in aging and pathology. The widening into clinical practice of medical imaging techniques and the impressive advances in information technologies together with enhanced capacity of power calculation have promoted proliferation of new methods to assess changes of trabecular bone architecture (TBA) during aging and osteoporosis. Magnetic resonance imaging (MRI) has recently arisen as a useful tool to measure bone structure in vivo. In particular, high-resolution MRI techniques have introduced new perspectives for TBA characterization by non-invasive non-ionizing methods. However, texture analysis methods have not found favor with clinicians as they produce quite a few parameters whose interpretation is difficult. The introduction in biomedical field of paradigms, such as theory of complexity, chaos, and fractals, suggests new approaches and provides innovative tools to develop computerized methods that, by producing a limited number of parameters sensitive to pathology onset and progression, would speed up their application into clinical practice. Complexity of living beings and fractality of several physio-anatomic structures suggest
Digital Image Analysis for DETCHIP® Code Determination
Lyon, Marcus; Wilson, Mark V.; Rouhier, Kerry A.; Symonsbergen, David J.; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E.
2013-01-01
DETECHIP® is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP® used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP®. Both a digital camera and flatbed desktop photo scanner were used to obtain Jpeg images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced in the clearest and more reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus pervious methods. PMID:25267940
Coded-aperture imaging in nuclear medicine
NASA Technical Reports Server (NTRS)
Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.
1989-01-01
Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.
Coded-aperture imaging in nuclear medicine
NASA Astrophysics Data System (ADS)
Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.
1989-11-01
Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.
Image coding compression based on DCT
NASA Astrophysics Data System (ADS)
Feng, Fei; Liu, Peixue; Jiang, Baohua
2012-04-01
With the development of computer science and communications, the digital image processing develops more and more fast. High quality images are loved by people, but it will waste more stored space in our computer and it will waste more bandwidth when it is transferred by Internet. Therefore, it's necessary to have an study on technology of image compression. At present, many algorithms about image compression is applied to network and the image compression standard is established. In this dissertation, some analysis on DCT will be written. Firstly, the principle of DCT will be shown. It's necessary to realize image compression, because of the widely using about this technology; Secondly, we will have a deep understanding of DCT by the using of Matlab, the process of image compression based on DCT, and the analysis on Huffman coding; Thirdly, image compression based on DCT will be shown by using Matlab and we can have an analysis on the quality of the picture compressed. It is true that DCT is not the only algorithm to realize image compression. I am sure there will be more algorithms to make the image compressed have a high quality. I believe the technology about image compression will be widely used in the network or communications in the future.
Fractal dimension of sparkles in automotive metallic coatings by multispectral imaging measurements.
Medina, José M; Díaz, José A; Vignolo, Carlos
2014-07-23
Sparkle in surface coatings is a property of mirror-like pigment particles that consists of remarkable bright spots over a darker surround under unidirectional illumination. We developed a novel nondestructive method to characterize sparkles based on the multispectral imaging technique, and we focused on automotive metallic coatings containing aluminum flake pigments. Multispectral imaging was done in the visible spectrum at different illumination angles around the test sample. Reflectance spectra at different spatial positions were mapped to color coordinates and visualized in different color spaces. Spectral analysis shows that sparkles exhibit higher reflectance spectra and narrower bandwidths. Colorimetric analysis indicates that sparkles present higher lightness values and are far apart from the bulk of color coordinates spanned by the surround. A box-counting procedure was applied to examine the fractal organization of color coordinates in the CIE 1976 L*a*b* color space. A characteristic noninteger exponent was found at each illumination position. The exponent was independent of the illuminant spectra. Together, these results demonstrate that sparkles are extreme deviations relative to the surround and that their spectral properties can be described as fractal patterns within the color space. Multispectral reflectance imaging provides a powerful, noninvasive method for spectral identification and classification of sparkles from metal flake pigments on the micron scale.
Angle closure glaucoma detection using fractal dimension index on SS-OCT images.
Ni, Soe Ni; Marzilianol, Pina; Wong, Hon-Tym
2014-01-01
Optical coherence tomography (OCT) is a high resolution, rapid and non-invasive screening tool for angle closure glaucoma. In this paper, we propose a new strategy for automatic and landmark invariant quantification of the anterior chamber angle of the eye using swept source optical coherence tomography (SS-OCT) images. Seven hundred and eight swept source optical coherence tomography SS-OCT images from 148 patients with average age of (59.48 ± 8.97) were analyzed in this study. The angle structure is measured by fractal dimension (FD) analysis to quantify the complexity or changes of angle recess. We evaluated the FD index with biometric parameters for classification of open angle and angle closure glaucoma. The proposed fractal dimension index gives a better representation of the angle configuration for capturing the nature of the angle dynamics involved in different forms of open and closed angle glaucoma (average FD (standard deviation): 1.944 (0.045) for open and 1.894 (0.043) for closed angle). It showed that the proposed approach has promising potential to become a computer aided diagnostic tool for angle closure glaucoma (ACG) disease.
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.; Emerson, Charles W.; Lam, Nina Siu-Ngan; Laymon, Charles A.
1997-01-01
The Image Characterization And Modeling System (ICAMS) is a public domain software package that is designed to provide scientists with innovative spatial analytical tools to visualize, measure, and characterize landscape patterns so that environmental conditions or processes can be assessed and monitored more effectively. In this study ICAMS has been used to evaluate how changes in fractal dimension, as a landscape characterization index, and resolution, are related to differences in Landsat images collected at different dates for the same area. Landsat Thematic Mapper (TM) data obtained in May and August 1993 over a portion of the Great Basin Desert in eastern Nevada were used for analysis. These data represent contrasting periods of peak "green-up" and "dry-down" for the study area. The TM data sets were converted into Normalized Difference Vegetation Index (NDVI) images to expedite analysis of differences in fractal dimension between the two dates. These NDVI images were also resampled to resolutions of 60, 120, 240, 480, and 960 meters from the original 30 meter pixel size, to permit an assessment of how fractal dimension varies with spatial resolution. Tests of fractal dimension for two dates at various pixel resolutions show that the D values in the August image become increasingly more complex as pixel size increases to 480 meters. The D values in the May image show an even more complex relationship to pixel size than that expressed in the August image. Fractal dimension for a difference image computed for the May and August dates increase with pixel size up to a resolution of 120 meters, and then decline with increasing pixel size. This means that the greatest complexity in the difference images occur around a resolution of 120 meters, which is analogous to the operational domain of changes in vegetation and snow cover that constitute differences between the two dates.
Dim target detection in IR image sequences based on fractal and rough set theory
NASA Astrophysics Data System (ADS)
Yan, Xiaoke; Shi, Caicheng; He, Peikun
2005-11-01
This paper addresses the problem of detecting small, moving, low amplitude in image sequences that also contain moving nuisance objects and background noise. Rough sets (RS) theory is applied in similarity relation instead of equivalence relation to solve clustering issue. We propose fractal-based texture analysis to describe texture coarseness and locally adaptive threshold technique to seek latent object point. Finally, according to temporal and spatial correlations between different frames, the singular points can be filtered. We demonstrate the effectiveness of the technique by applying it to real infrared image sequences containing targets of opportunity and evolving cloud clutter. The experimental results show that the algorithm can effectively increase detection probability and has robustness.
Fast-neutron, coded-aperture imager
NASA Astrophysics Data System (ADS)
Woolf, Richard S.; Phlips, Bernard F.; Hutcheson, Anthony L.; Wulf, Eric A.
2015-06-01
This work discusses a large-scale, coded-aperture imager for fast neutrons, building off a proof-of concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems, using coded-aperture imaging, for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring of the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2-d location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash digitizing electronics. Digital pulse shape discrimination (PSD) was performed to filter out the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with a 1.8-μCi and 13-μCi 252Cf neutron/γ source at three standoff distances of 9, 15 and 26 m (maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fastneutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. This shielding configuration led
NASA Astrophysics Data System (ADS)
Yang, Y.; Li, H. T.; Han, Y. S.; Gu, H. Y.
2015-06-01
Image segmentation is the foundation of further object-oriented image analysis, understanding and recognition. It is one of the key technologies in high resolution remote sensing applications. In this paper, a new fast image segmentation algorithm for high resolution remote sensing imagery is proposed, which is based on graph theory and fractal net evolution approach (FNEA). Firstly, an image is modelled as a weighted undirected graph, where nodes correspond to pixels, and edges connect adjacent pixels. An initial object layer can be obtained efficiently from graph-based segmentation, which runs in time nearly linear in the number of image pixels. Then FNEA starts with the initial object layer and a pairwise merge of its neighbour object with the aim to minimize the resulting summed heterogeneity. Furthermore, according to the character of different features in high resolution remote sensing image, three different merging criterions for image objects based on spectral and spatial information are adopted. Finally, compared with the commercial remote sensing software eCognition, the experimental results demonstrate that the efficiency of the algorithm has significantly improved, and the result can maintain good feature boundaries.
N'Diaye, Mambaye; Degeratu, Cristinel; Bouler, Jean-Michel; Chappard, Daniel
2013-05-01
Porous structures are becoming more and more important in biology and material science because they help in reducing the density of the grafted material. For biomaterials, porosity also increases the accessibility of cells and vessels inside the grafted area. However, descriptors of porosity are scanty. We have used a series of biomaterials with different types of porosity (created by various porogens: fibers, beads …). Blocks were studied by microcomputed tomography for the measurement of 3D porosity. 2D sections were re-sliced to analyze the microarchitecture of the pores and were transferred to image analysis programs: star volumes, interconnectivity index, Minkowski-Bouligand and Kolmogorov fractal dimensions were determined. Lacunarity and succolarity, two recently described fractal dimensions, were also computed. These parameters provided a precise description of porosity and pores' characteristics. Non-linear relationships were found between several descriptors e.g. succolarity and star volume of the material. A linear correlation was found between lacunarity and succolarity. These techniques appear suitable in the study of biomaterials usable as bone substitutes.
Mossotti, Victor G.; Eldeeb, A. Raouf
2000-01-01
Turcotte, 1997, and Barton and La Pointe, 1995, have identified many potential uses for the fractal dimension in physicochemical models of surface properties. The image-analysis program described in this report is an extension of the program set MORPH-I (Mossotti and others, 1998), which provided the fractal analysis of electron-microscope images of pore profiles (Mossotti and Eldeeb, 1992). MORPH-II, an integration of the modified kernel of the program MORPH-I with image calibration and editing facilities, was designed to measure the fractal dimension of the exposed surfaces of stone specimens as imaged in cross section in an electron microscope.
Dual-sided coded-aperture imager
Ziock, Klaus-Peter
2009-09-22
In a vehicle, a single detector plane simultaneously measures radiation coming through two coded-aperture masks, one on either side of the detector. To determine which side of the vehicle a source is, the two shadow masks are inverses of each other, i.e., one is a mask and the other is the anti-mask. All of the data that is collected is processed through two versions of an image reconstruction algorithm. One treats the data as if it were obtained through the mask, the other as though the data is obtained through the anti-mask.
Featured Image: Tests of an MHD Code
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-09-01
Creating the codes that are used to numerically model astrophysical systems takes a lot of work and a lot of testing! A new, publicly available moving-mesh magnetohydrodynamics (MHD) code, DISCO, is designed to model 2D and 3D orbital fluid motion, such as that of astrophysical disks. In a recent article, DISCO creator Paul Duffell (University of California, Berkeley) presents the code and the outcomes from a series of standard tests of DISCOs stability, accuracy, and scalability.From left to right and top to bottom, the test outputs shown above are: a cylindrical Kelvin-Helmholtz flow (showing off DISCOs numerical grid in 2D), a passive scalar in a smooth vortex (can DISCO maintain contact discontinuities?), a global look at the cylindrical Kelvin-Helmholtz flow, a Jupiter-mass planet opening a gap in a viscous disk, an MHD flywheel (a test of DISCOs stability), an MHD explosion revealing shock structures, an MHD rotor (a more challenging version of the explosion), a Flock 3D MRI test (can DISCO study linear growth of the magnetorotational instability in disks?), and a nonlinear 3D MRI test.Check out the gif below for a closer look at each of these images, or follow the link to the original article to see even more!CitationPaul C. Duffell 2016 ApJS 226 2. doi:10.3847/0067-0049/226/1/2
ERIC Educational Resources Information Center
Esbenshade, Donald H., Jr.
1991-01-01
Develops the idea of fractals through a laboratory activity that calculates the fractal dimension of ordinary white bread. Extends use of the fractal dimension to compare other complex structures as other breads and sponges. (MDH)
Jitaree, Sirinapa; Phinyomark, Angkoon; Boonyaphiphat, Pleumjit; Phukpattaranont, Pornchai
2015-01-01
Having a classifier of cell types in a breast cancer microscopic image (BCMI), obtained with immunohistochemical staining, is required as part of a computer-aided system that counts the cancer cells in such BCMI. Such quantitation by cell counting is very useful in supporting decisions and planning of the medical treatment of breast cancer. This study proposes and evaluates features based on texture analysis by fractal dimension (FD), for the classification of histological structures in a BCMI into either cancer cells or non-cancer cells. The cancer cells include positive cells (PC) and negative cells (NC), while the normal cells comprise stromal cells (SC) and lymphocyte cells (LC). The FD feature values were calculated with the box-counting method from binarized images, obtained by automatic thresholding with Otsu's method of the grayscale images for various color channels. A total of 12 color channels from four color spaces (RGB, CIE-L*a*b*, HSV, and YCbCr) were investigated, and the FD feature values from them were used with decision tree classifiers. The BCMI data consisted of 1,400, 1,200, and 800 images with pixel resolutions 128 × 128, 192 × 192, and 256 × 256, respectively. The best cross-validated classification accuracy was 93.87%, for distinguishing between cancer and non-cancer cells, obtained using the Cr color channel with window size 256. The results indicate that the proposed algorithm, based on fractal dimension features extracted from a color channel, performs well in the automatic classification of the histology in a BCMI. This might support accurate automatic cell counting in a computer-assisted system for breast cancer diagnosis.
NASA Astrophysics Data System (ADS)
Boychuk, T. M.; Bodnar, B. M.; Vatamanesku, L. I.
2011-09-01
For the first time the complex correlation and fractal analysis was used for the investigation of microscopic images of both tissue images and hemangioma liquids. It was proposed a physical model of description of phase distributions formation of coherent radiation, which was transformed by optical anisotropic biological structures. The phase maps of laser radiation in the boundary diffraction zone were used as the main information parameter. The results of investigating the interrelation between the values of correlation (correlation area, asymmetry coefficient and autocorrelation function excess) and fractal (dispersion of logarithmic dependencies of power spectra) parameters are presented. They characterize the coordinate distributions of phase shifts in the points of laser images of histological sections of hemangioma, hemangioma blood smears and blood plasma with vascular system pathologies. The diagnostic criteria of hemangioma nascency are determined.
NASA Astrophysics Data System (ADS)
Boychuk, T. M.; Bodnar, B. M.; Vatamanesku, L. I.
2012-01-01
For the first time the complex correlation and fractal analysis was used for the investigation of microscopic images of both tissue images and hemangioma liquids. It was proposed a physical model of description of phase distributions formation of coherent radiation, which was transformed by optical anisotropic biological structures. The phase maps of laser radiation in the boundary diffraction zone were used as the main information parameter. The results of investigating the interrelation between the values of correlation (correlation area, asymmetry coefficient and autocorrelation function excess) and fractal (dispersion of logarithmic dependencies of power spectra) parameters are presented. They characterize the coordinate distributions of phase shifts in the points of laser images of histological sections of hemangioma, hemangioma blood smears and blood plasma with vascular system pathologies. The diagnostic criteria of hemangioma nascency are determined.
Coherent diffractive imaging using randomly coded masks
Seaberg, Matthew H.; D'Aspremont, Alexandre; Turner, Joshua J.
2015-12-07
We experimentally demonstrate an extension to coherent diffractive imaging that encodes additional information through the use of a series of randomly coded masks, removing the need for typical object-domain constraints while guaranteeing a unique solution to the phase retrieval problem. Phase retrieval is performed using a numerical convex relaxation routine known as “PhaseCut,” an iterative algorithm known for its stability and for its ability to find the global solution, which can be found efficiently and which is robust to noise. The experiment is performed using a laser diode at 532.2 nm, enabling rapid prototyping for future X-ray synchrotron and even free electron laser experiments.
Coding depth perception from image defocus.
Supèr, Hans; Romeo, August
2014-12-01
As a result of the spider experiments in Nagata et al. (2012), it was hypothesized that the depth perception mechanisms of these animals should be based on how much images are defocused. In the present paper, assuming that relative chromatic aberrations or blur radii values are known, we develop a formulation relating the values of these cues to the actual depth distance. Taking into account the form of the resulting signals, we propose the use of latency coding from a spiking neuron obeying Izhikevich's 'simple model'. If spider jumps can be viewed as approximately parabolic, some estimates allow for a sensory-motor relation between the time to the first spike and the magnitude of the initial velocity of the jump.
NASA Astrophysics Data System (ADS)
Jia, Peng; Cai, Dongmei; Wang, Dong
2014-11-01
A parallel blind deconvolution algorithm is presented. The algorithm contains the constraints of the point spread function (PSF) derived from the physical process of the imaging. Additionally, in order to obtain an effective restored image, the fractal energy ratio is used as an evaluation criterion to estimate the quality of the image. This algorithm is fine-grained parallelized to increase the calculation speed. Results of numerical experiments and real experiments indicate that this algorithm is effective.
Two Fibonacci P-code based image scrambling algorithms
NASA Astrophysics Data System (ADS)
Zhou, Yicong; Agaian, Sos; Joyner, Valencia M.; Panetta, Karen
2008-02-01
Image scrambling is used to make images visually unrecognizable such that unauthorized users have difficulty decoding the scrambled image to access the original image. This article presents two new image scrambling algorithms based on Fibonacci p-code, a parametric sequence. The first algorithm works in spatial domain and the second in frequency domain (including JPEG domain). A parameter, p, is used as a security-key and has many possible choices to guarantee the high security of the scrambled images. The presented algorithms can be implemented for encoding/decoding both in full and partial image scrambling, and can be used in real-time applications, such as image data hiding and encryption. Examples of image scrambling are provided. Computer simulations are shown to demonstrate that the presented methods also have good performance in common image attacks such as cutting (data loss), compression and noise. The new scrambling methods can be implemented on grey level images and 3-color components in color images. A new Lucas p-code is also introduced. The scrambling images based on Fibonacci p-code are also compared to the scrambling results of classic Fibonacci number and Lucas p-code. This will demonstrate that the classical Fibonacci number is a special sequence of Fibonacci p-code and show the different scrambling results of Fibonacci p-code and Lucas p-code.
PynPoint code for exoplanet imaging
NASA Astrophysics Data System (ADS)
Amara, A.; Quanz, S. P.; Akeret, J.
2015-04-01
We announce the public release of PynPoint, a Python package that we have developed for analysing exoplanet data taken with the angular differential imaging observing technique. In particular, PynPoint is designed to model the point spread function of the central star and to subtract its flux contribution to reveal nearby faint companion planets. The current version of the package does this correction by using a principal component analysis method to build a basis set for modelling the point spread function of the observations. We demonstrate the performance of the package by reanalysing publicly available data on the exoplanet β Pictoris b, which consists of close to 24,000 individual image frames. We show that PynPoint is able to analyse this typical data in roughly 1.5 min on a Mac Pro, when the number of images is reduced by co-adding in sets of 5. The main computational work, the calculation of the Singular-Value-Decomposition, parallelises well as a result of a reliance on the SciPy and NumPy packages. For this calculation the peak memory load is 6 GB, which can be run comfortably on most workstations. A simpler calculation, by co-adding over 50, takes 3 s with a peak memory usage of 600 MB. This can be performed easily on a laptop. In developing the package we have modularised the code so that we will be able to extend functionality in future releases, through the inclusion of more modules, without it affecting the users application programming interface. We distribute the PynPoint package under GPLv3 licence through the central PyPI server, and the documentation is available online (http://pynpoint.ethz.ch).
Recent advances in CZT strip detectors and coded mask imagers
NASA Astrophysics Data System (ADS)
Matteson, J. L.; Gruber, D. E.; Heindl, W. A.; Pelling, M. R.; Peterson, L. E.; Rothschild, R. E.; Skelton, R. T.; Hink, P. L.; Slavis, K. R.; Binns, W. R.; Tumer, T.; Visser, G.
1999-09-01
The UCSD, WU, UCR and Nova collaboration has made significant progress on the necessary techniques for coded mask imaging of gamma-ray bursts: position sensitive CZT detectors with good energy resolution, ASIC readout, coded mask imaging, and background properties at balloon altitudes. Results on coded mask imaging techniques appropriate for wide field imaging and localization of gamma-ray bursts are presented, including a shadowgram and deconvolved image taken with a prototype detector/ASIC and MURA mask. This research was supported by NASA Grants NAG5-5111, NAG5-5114, and NGT5-50170.
Advanced technology development for image gathering, coding, and processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1990-01-01
Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.
Coded excitation for diverging wave cardiac imaging: a feasibility study
NASA Astrophysics Data System (ADS)
Zhao, Feifei; Tong, Ling; He, Qiong; Luo, Jianwen
2017-02-01
Diverging wave (DW) based cardiac imaging has gained increasing interest in recent years given its capacity to achieve ultrahigh frame rate. However, the signal-to-noise ratio (SNR), contrast, and penetration depth of the resulting B-mode images are typically low as DWs spread energy over a large region. Coded excitation is known to be capable of increasing the SNR and penetration for ultrasound imaging. The aim of this study was therefore to test the feasibility of applying coded excitation in DW imaging to improve the corresponding SNR, contrast and penetration depth. To this end, two types of codes, i.e. a linear frequency modulated chirp code and a set of complementary Golay codes were tested in three different DW imaging schemes, i.e. 1 angle DW transmit without compounding, 3 and 5 angles DW transmits with coherent compounding. The performances (SNR, contrast ratio (CR), contrast-to-noise ratio (CNR), and penetration) of different imaging schemes were investigated by means of simulations and in vitro experiments. As for benchmark, corresponding DW imaging schemes with regular pulsed excitation as well as the conventional focused imaging scheme were also included. The results showed that the SNR was improved by about 10 dB using coded excitation while the penetration depth was increased by 2.5 cm and 1.8 cm using chirp code and Golay codes, respectively. The CNR and CR gains varied with the depth for different DW schemes using coded excitations. Specifically, for non-compounded DW imaging schemes, the gain in the CR was about 5 dB and 3 dB while the gain in the CNR was about 4.5 dB and 3.5 dB at larger depths using chirp code and Golay codes, respectively. For compounded imaging schemes, using coded excitation, the gain in the penetration and contrast were relatively smaller compared to non-compounded ones. Overall, these findings indicated the feasibility of coded excitation in improving the image quality of DW imaging. Preliminary in vivo cardiac images
Coded excitation plane wave imaging for shear wave motion detection.
Song, Pengfei; Urban, Matthew W; Manduca, Armando; Greenleaf, James F; Chen, Shigao
2015-07-01
Plane wave imaging has greatly advanced the field of shear wave elastography thanks to its ultrafast imaging frame rate and the large field-of-view (FOV). However, plane wave imaging also has decreased penetration due to lack of transmit focusing, which makes it challenging to use plane waves for shear wave detection in deep tissues and in obese patients. This study investigated the feasibility of implementing coded excitation in plane wave imaging for shear wave detection, with the hypothesis that coded ultrasound signals can provide superior detection penetration and shear wave SNR compared with conventional ultrasound signals. Both phase encoding (Barker code) and frequency encoding (chirp code) methods were studied. A first phantom experiment showed an approximate penetration gain of 2 to 4 cm for the coded pulses. Two subsequent phantom studies showed that all coded pulses outperformed the conventional short imaging pulse by providing superior sensitivity to small motion and robustness to weak ultrasound signals. Finally, an in vivo liver case study on an obese subject (body mass index = 40) demonstrated the feasibility of using the proposed method for in vivo applications, and showed that all coded pulses could provide higher SNR shear wave signals than the conventional short pulse. These findings indicate that by using coded excitation shear wave detection, one can benefit from the ultrafast imaging frame rate and large FOV provided by plane wave imaging while preserving good penetration and shear wave signal quality, which is essential for obtaining robust shear elasticity measurements of tissue.
NASA Astrophysics Data System (ADS)
Pepe, S.; Di Martino, G.; Iodice, A.; Manzo, M.; Pepe, A.; Riccio, D.; Ruello, G.; Sansosti, E.; Tizzani, P.; Zinno, I.
2012-04-01
In the last two decades several aspects relevant to volcanic activity have been analyzed in terms of fractal parameters that effectively describe natural objects geometry. More specifically, these researches have been aimed at the identification of (1) the power laws that governed the magma fragmentation processes, (2) the energy of explosive eruptions, and (3) the distribution of the associated earthquakes. In this paper, the study of volcano morphology via satellite images is dealt with; in particular, we use the complete forward model developed by some of the authors (Di Martino et al., 2012) that links the stochastic characterization of amplitude Synthetic Aperture Radar (SAR) images to the fractal dimension of the imaged surfaces, modelled via fractional Brownian motion (fBm) processes. Based on the inversion of such a model, a SAR image post-processing has been implemented (Di Martino et al., 2010), that allows retrieving the fractal dimension of the observed surfaces, dictating the distribution of the roughness over different spatial scales. The fractal dimension of volcanic structures has been related to the specific nature of materials and to the effects of active geodynamic processes. Hence, the possibility to estimate the fractal dimension from a single amplitude-only SAR image is of fundamental importance for the characterization of volcano structures and, moreover, can be very helpful for monitoring and crisis management activities in case of eruptions and other similar natural hazards. The implemented SAR image processing performs the extraction of the point-by-point fractal dimension of the scene observed by the sensor, providing - as an output product - the map of the fractal dimension of the area of interest. In this work, such an analysis is performed on Cosmo-SkyMed, ERS-1/2 and ENVISAT images relevant to active stratovolcanoes in different geodynamic contexts, such as Mt. Somma-Vesuvio, Mt. Etna, Vulcano and Stromboli in Southern Italy, Shinmoe
Efficient image compression scheme based on differential coding
NASA Astrophysics Data System (ADS)
Zhu, Li; Wang, Guoyou; Liu, Ying
2007-11-01
Embedded zerotree (EZW) and Set Partitioning in Hierarchical Trees (SPIHT) coding, introduced by J.M. Shapiro and Amir Said, are very effective and being used in many fields widely. In this study, brief explanation of the principles of SPIHT was first provided, and then, some improvement of SPIHT algorithm according to experiments was introduced. 1) For redundancy among the coefficients in the wavelet region, we propose differential method to reduce it during coding. 2) Meanwhile, based on characteristic of the coefficients' distribution in subband, we adjust sorting pass and optimize differential coding, in order to reduce the redundancy coding in each subband. 3) The image coding result, calculated by certain threshold, shows that through differential coding, the rate of compression get higher, and the quality of reconstructed image have been get raised greatly, when bpp (bit per pixel)=0.5, PSNR (Peak Signal to Noise Ratio) of reconstructed image exceeds that of standard SPIHT by 0.2~0.4db.
Rank minimization code aperture design for spectrally selective compressive imaging.
Arguello, Henry; Arce, Gonzalo R
2013-03-01
A new code aperture design framework for multiframe code aperture snapshot spectral imaging (CASSI) system is presented. It aims at the optimization of code aperture sets such that a group of compressive spectral measurements is constructed, each with information from a specific subset of bands. A matrix representation of CASSI is introduced that permits the optimization of spectrally selective code aperture sets. Furthermore, each code aperture set forms a matrix such that rank minimization is used to reduce the number of CASSI shots needed. Conditions for the code apertures are identified such that a restricted isometry property in the CASSI compressive measurements is satisfied with higher probability. Simulations show higher quality of spectral image reconstruction than that attained by systems using Hadamard or random code aperture sets.
Practicing the Code of Ethics, finding the image of God.
Hoglund, Barbara A
2013-01-01
The Code of Ethics for Nurses gives a professional obligation to practice in a compassionate and respectful way that is unaffected by the attributes of the patient. This article explores the concept "made in the image of God" and the complexities inherent in caring for those perceived as exhibiting distorted images of God. While the Code provides a professional standard consistent with a biblical worldview, human nature impacts the ability to consistently act congruently with the Code. Strategies and nursing interventions that support development of practice from a biblical worldview and the Code of Ethics for Nurses are presented.
FRACTAL DIMENSION OF GALAXY ISOPHOTES
Thanki, Sandip; Rhee, George; Lepp, Stephen E-mail: grhee@physics.unlv.edu
2009-09-15
In this paper we investigate the use of the fractal dimension of galaxy isophotes in galaxy classification. We have applied two different methods for determining fractal dimensions to the isophotes of elliptical and spiral galaxies derived from CCD images. We conclude that fractal dimension alone is not a reliable tool but that combined with other parameters in a neural net algorithm the fractal dimension could be of use. In particular, we have used three parameters to segregate the ellipticals and lenticulars from the spiral galaxies in our sample. These three parameters are the correlation fractal dimension D {sub corr}, the difference between the correlation fractal dimension and the capacity fractal dimension D {sub corr} - D {sub cap}, and, thirdly, the B - V color of the galaxy.
Coded Aperture Imaging for Fluorescent X-rays-Biomedical Applications
Haboub, Abdel; MacDowell, Alastair; Marchesini, Stefano; Parkinson, Dilworth
2013-06-01
Employing a coded aperture pattern in front of a charge couple device pixilated detector (CCD) allows for imaging of fluorescent x-rays (6-25KeV) being emitted from samples irradiated with x-rays. Coded apertures encode the angular direction of x-rays and allow for a large Numerical Aperture x- ray imaging system. The algorithm to develop the self-supported coded aperture pattern of the Non Two Holes Touching (NTHT) pattern was developed. The algorithms to reconstruct the x-ray image from the encoded pattern recorded were developed by means of modeling and confirmed by experiments. Samples were irradiated by monochromatic synchrotron x-ray radiation, and fluorescent x-rays from several different test metal samples were imaged through the newly developed coded aperture imaging system. By choice of the exciting energy the different metals were speciated.
Fractals in biology and medicine
NASA Technical Reports Server (NTRS)
Havlin, S.; Buldyrev, S. V.; Goldberger, A. L.; Mantegna, R. N.; Ossadnik, S. M.; Peng, C. K.; Simons, M.; Stanley, H. E.
1995-01-01
Our purpose is to describe some recent progress in applying fractal concepts to systems of relevance to biology and medicine. We review several biological systems characterized by fractal geometry, with a particular focus on the long-range power-law correlations found recently in DNA sequences containing noncoding material. Furthermore, we discuss the finding that the exponent alpha quantifying these long-range correlations ("fractal complexity") is smaller for coding than for noncoding sequences. We also discuss the application of fractal scaling analysis to the dynamics of heartbeat regulation, and report the recent finding that the normal heart is characterized by long-range "anticorrelations" which are absent in the diseased heart.
MR image compression using a wavelet transform coding algorithm.
Angelidis, P A
1994-01-01
We present here a technique for MR image compression. It is based on a transform coding scheme using the wavelet transform and vector quantization. Experimental results show that the method offers high compression ratios with low degradation of the image quality. The technique is expected to be particularly useful wherever storing and transmitting large numbers of images is necessary.
ERIC Educational Resources Information Center
Simoson, Andrew J.
2009-01-01
This article presents a fun activity of generating a double-minded fractal image for a linear algebra class once the idea of rotation and scaling matrices are introduced. In particular the fractal flip-flops between two words, depending on the level at which the image is viewed. (Contains 5 figures.)
Radlinski, A.P.; Radlinska, E.Z.; Agamalian, M.; Wignall, G.D.; Lindner, P.; Randl, O.G.
1999-04-01
The analysis of small- and ultra-small-angle neutron scattering data for sedimentary rocks shows that the pore-rock fabric interface is a surface fractal (D{sub s}=2.82) over 3 orders of magnitude of the length scale and 10 orders of magnitude in intensity. The fractal dimension and scatterer size obtained from scanning electron microscopy image processing are consistent with neutron scattering data. {copyright} {ital 1999} {ital The American Physical Society}
Hybrid coded aperture and Compton imaging using an active mask
NASA Astrophysics Data System (ADS)
Schultz, L. J.; Wallace, M. S.; Galassi, M. C.; Hoover, A. S.; Mocko, M.; Palmer, D. M.; Tornga, S. R.; Kippen, R. M.; Hynes, M. V.; Toolin, M. J.; Harris, B.; McElroy, J. E.; Wakeford, D.; Lanza, R. C.; Horn, B. K. P.; Wehe, D. K.
2009-09-01
The trimodal imager (TMI) images gamma-ray sources from a mobile platform using both coded aperture (CA) and Compton imaging (CI) modalities. In this paper we will discuss development and performance of image reconstruction algorithms for the TMI. In order to develop algorithms in parallel with detector hardware we are using a GEANT4 [J. Allison, K. Amako, J. Apostolakis, H. Araujo, P.A. Dubois, M. Asai, G. Barrand, R. Capra, S. Chauvie, R. Chytracek, G. Cirrone, G. Cooperman, G. Cosmo, G. Cuttone, G. Daquino, et al., IEEE Trans. Nucl. Sci. NS-53 (1) (2006) 270] based simulation package to produce realistic data sets for code development. The simulation code incorporates detailed detector modeling, contributions from natural background radiation, and validation of simulation results against measured data. Maximum likelihood algorithms for both imaging methods are discussed, as well as a hybrid imaging algorithm wherein CA and CI information is fused to generate a higher fidelity reconstruction.
Perceptually lossless coding of digital monochrome ultrasound images
NASA Astrophysics Data System (ADS)
Wu, David; Tan, Damian M.; Griffiths, Tania; Wu, Hong Ren
2005-07-01
A preliminary investigation of encoding monochrome ultrasound images with a novel perceptually lossless coder is presented. Based on the JPEG 2000 coding framework, the proposed coder employs a vision model to identify and remove visually insignificant/irrelevant information. Current simulation results have shown coding performance gains over the JPEG compliant LOCO lossless and JPEG 2000 lossless coders without any perceivable distortion.
Multichannel Linear Predictive Coding of Color Images,
1984-01-01
single- An alternative may of =oeling z(n,n) wmul output AniM , as described in 11,21, at me be to autoregressively model each channel average...being minimum shoulders Image with well definte tao.r&. The phase, where*6* dt-* d .~ ,s #% terminenst of a binary image of Fig. 2(d). howver. rinws
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.
2016-04-01
Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the possibility of OCT in detecting changes using a computer texture analysis method based on Haralick texture features, fractal dimension and the complex directional field method from different tissues. These features were used to identify special spatial characteristics, which differ healthy tissue from various skin cancers in cross-section OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing. In this paper, an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in OCT images was used. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity evaluated in different directions. A box-counting method is applied to compute fractal dimension of investigated tissues. Additionally, we used the complex directional field calculated by the local gradient methodology to increase of the assessment quality of the diagnosis method. The complex directional field (as well as the "classical" directional field) can help describe an image as set of directions. Considering to a fact that malignant tissue grows anisotropically, some principal grooves may be observed on dermoscopic images, which mean possible existence of principal directions on OCT images. Our results suggest that described texture features may provide useful information to differentiate pathological from healthy patients. The problem of recognition melanoma from nevi is decided in this work due to the big quantity of experimental data (143 OCT-images include tumors as Basal Cell Carcinoma (BCC), Malignant Melanoma (MM) and Nevi). We have sensitivity about 90% and specificity about 85%. Further research is warranted to determine how this approach may be used to select the regions of interest automatically.
Coded Excitation Plane Wave Imaging for Shear Wave Motion Detection
Song, Pengfei; Urban, Matthew W.; Manduca, Armando; Greenleaf, James F.; Chen, Shigao
2015-01-01
Plane wave imaging has greatly advanced the field of shear wave elastography thanks to its ultrafast imaging frame rate and the large field-of-view (FOV). However, plane wave imaging also has decreased penetration due to lack of transmit focusing, which makes it challenging to use plane waves for shear wave detection in deep tissues and in obese patients. This study investigated the feasibility of implementing coded excitation in plane wave imaging for shear wave detection, with the hypothesis that coded ultrasound signals can provide superior detection penetration and shear wave signal-to-noise-ratio (SNR) compared to conventional ultrasound signals. Both phase encoding (Barker code) and frequency encoding (chirp code) methods were studied. A first phantom experiment showed an approximate penetration gain of 2-4 cm for the coded pulses. Two subsequent phantom studies showed that all coded pulses outperformed the conventional short imaging pulse by providing superior sensitivity to small motion and robustness to weak ultrasound signals. Finally, an in vivo liver case study on an obese subject (Body Mass Index = 40) demonstrated the feasibility of using the proposed method for in vivo applications, and showed that all coded pulses could provide higher SNR shear wave signals than the conventional short pulse. These findings indicate that by using coded excitation shear wave detection, one can benefit from the ultrafast imaging frame rate and large FOV provided by plane wave imaging while preserving good penetration and shear wave signal quality, which is essential for obtaining robust shear elasticity measurements of tissue. PMID:26168181
Garrison, J.R., Jr.; Pearn, W.C.; von Rosenberg, D. W. )
1992-01-01
In this paper reservoir rock/pore systems are considered natural fractal objects and modeled as and compared to the regular fractal Menger Sponge and Sierpinski Carpet. The physical properties of a porous rock are, in part, controlled by the geometry of the pore system. The rate at which a fluid or electrical current can travel through the pore system of a rock is controlled by the path along which it must travel. This path is a subset of the overall geometry of the pore system. Reservoir rocks exhibit self-similarity over a range of length scales suggesting that fractal geometry offers a means of characterizing these complex objects. The overall geometry of a rock/pore system can be described, conveniently and concisely, in terms of effective fractal dimensions. The rock/pore system is modeled as the fractal Menger Sponge. A cross section through the rock/pore system, such as an image of a thin-section of a rock, is modeled as the fractal Sierpinski Carpet, which is equivalent to the face of the Menger Sponge.
JPEG backward compatible coding of omnidirectional images
NASA Astrophysics Data System (ADS)
Řeřábek, Martin; Upenik, Evgeniy; Ebrahimi, Touradj
2016-09-01
Omnidirectional image and video, also known as 360 image and 360 video, are gaining in popularity with the recent growth in availability of cameras and displays that can cope with such type of content. As omnidirectional visual content represents a larger set of information about the scene, it typically requires a much larger volume of information. Efficient compression of such content is therefore important. In this paper, we review the state of the art in compression of omnidirectional visual content, and propose a novel approach to encode omnidirectional images in such a way that they are still viewable on legacy JPEG decoders.
Code-modulated interferometric imaging system using phased arrays
NASA Astrophysics Data System (ADS)
Chauhan, Vikas; Greene, Kevin; Floyd, Brian
2016-05-01
Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and bio- medical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promise to be extremely low cost. In this work, we present techniques which can allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power combine incoming signals prior to digitization, orthogonal code-modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Pro- vided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
Aggregating local image descriptors into compact codes.
Jégou, Hervé; Perronnin, Florent; Douze, Matthijs; Sánchez, Jorge; Pérez, Patrick; Schmid, Cordelia
2012-09-01
This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. Searching a 100 million image data set takes about 250 ms on one processor core.
Adaptive image coding based on cubic-spline interpolation
NASA Astrophysics Data System (ADS)
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
It has been investigated that at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H. 264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate for the sampling-based scheme to outperform the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on the cubic-spline interpolation (CSI) is proposed. This proposed algorithm adaptively selects the image coding method from CSI-based modified JPEG and standard JPEG under a given target bit rate utilizing the so called ρ-domain analysis. The experimental results indicate that compared with the standard JPEG, the proposed algorithm can show better performance at low bit rates and maintain the same performance at high bit rates.
Efficient coding of wavelet trees and its applications in image coding
NASA Astrophysics Data System (ADS)
Zhu, Bin; Yang, En-hui; Tewfik, Ahmed H.; Kieffer, John C.
1996-02-01
We propose in this paper a novel lossless tree coding algorithm. The technique is a direct extension of the bisection method, the simplest case of the complexity reduction method proposed recently by Kieffer and Yang, that has been used for lossless data string coding. A reduction rule is used to obtain the irreducible representation of a tree, and this irreducible tree is entropy-coded instead of the input tree itself. This reduction is reversible, and the original tree can be fully recovered from its irreducible representation. More specifically, we search for equivalent subtrees from top to bottom. When equivalent subtrees are found, a special symbol is appended to the value of the root node of the first equivalent subtree, and the root node of the second subtree is assigned to the index which points to the first subtree, an all other nodes in the second subtrees are removed. This procedure is repeated until it cannot be reduced further. This yields the irreducible tree or irreducible representation of the original tree. The proposed method can effectively remove the redundancy in an image, and results in more efficient compression. It is proved that when the tree size approaches infinity, the proposed method offers the optimal compression performance. It is generally more efficient in practice than direct coding of the input tree. The proposed method can be directly applied to code wavelet trees in non-iterative wavelet-based image coding schemes. A modified method is also proposed for coding wavelet zerotrees in embedded zerotree wavelet (EZW) image coding. Although its coding efficiency is slightly reduced, the modified version maintains exact control of bit rate and the scalability of the bit stream in EZW coding.
NASA Astrophysics Data System (ADS)
Liu, Yangyang; Lv, Qunbo; Li, Weiyan; Xiangli, Bin
2015-09-01
As a novel spectrum imaging technology was developed recent years, push-broom coded aperture spectral imaging (PCASI) has the advantages of high throughput, high SNR, high stability etc. This coded aperture spectral imaging utilizes fixed code templates and push-broom mode, which can realize the high-precision reconstruction of spatial and spectral information. But during optical lens designing, manufacturing and debugging, it is inevitably exist some minor coma errors. Even minor coma errors can reduce image quality. In this paper, we simulated the system optical coma error's influence to the quality of reconstructed image, analyzed the variant of the coded aperture in different optical coma effect, then proposed an accurate curve of image quality and optical coma quality in 255×255 size code template, which provide important references for design and development of push-broom coded aperture spectrometer.
Subband Image Coding with Jointly Optimized Quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith Mark J. T.
1995-01-01
An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.
Adaptive coded aperture imaging: progress and potential future applications
NASA Astrophysics Data System (ADS)
Gottesman, Stephen R.; Isser, Abraham; Gigioli, George W., Jr.
2011-09-01
Interest in Adaptive Coded Aperture Imaging (ACAI) continues to grow as the optical and systems engineering community becomes increasingly aware of ACAI's potential benefits in the design and performance of both imaging and non-imaging systems , such as good angular resolution (IFOV), wide distortion-free field of view (FOV), excellent image quality, and light weight construct. In this presentation we first review the accomplishments made over the past five years, then expand on previously published work to show how replacement of conventional imaging optics with coded apertures can lead to a reduction in system size and weight. We also present a trade space analysis of key design parameters of coded apertures and review potential applications as replacement for traditional imaging optics. Results will be presented, based on last year's work of our investigation into the trade space of IFOV, resolution, effective focal length, and wavelength of incident radiation for coded aperture architectures. Finally we discuss the potential application of coded apertures for replacing objective lenses of night vision goggles (NVGs).
NASA Astrophysics Data System (ADS)
Beech, M.
1989-02-01
The author discusses some of the more recent research on fractal astronomy and results presented in several astronomical studies. First, the large-scale structure of the universe is considered, while in another section one drops in scale to examine some of the smallest bodies in our solar system; the comets and meteoroids. The final section presents some thoughts on what influence the fractal ideology might have on astronomy, focusing particularly on the question recently raised by Kadanoff, "Fractals: where's the physics?"
Coded aperture imaging for fluorescent x-rays
Haboub, A.; MacDowell, A. A.; Marchesini, S.; Parkinson, D. Y.
2014-06-15
We employ a coded aperture pattern in front of a pixilated charge couple device detector to image fluorescent x-rays (6–25 KeV) from samples irradiated with synchrotron radiation. Coded apertures encode the angular direction of x-rays, and given a known source plane, allow for a large numerical aperture x-ray imaging system. The algorithm to develop and fabricate the free standing No-Two-Holes-Touching aperture pattern was developed. The algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of a ray tracing technique and confirmed by experiments on standard samples.
Imaging The Genetic Code of a Virus
NASA Astrophysics Data System (ADS)
Graham, Jenna; Link, Justin
2013-03-01
Atomic Force Microscopy (AFM) has allowed scientists to explore physical characteristics of nano-scale materials. However, the challenges that come with such an investigation are rarely expressed. In this research project a method was developed to image the well-studied DNA of the virus lambda phage. Through testing and integrating several sample preparations described in literature, a quality image of lambda phage DNA can be obtained. In our experiment, we developed a technique using the Veeco Autoprobe CP AFM and mica substrate with an appropriate absorption buffer of HEPES and NiCl2. This presentation will focus on the development of a procedure to image lambda phage DNA at Xavier University. The John A. Hauck Foundation and Xavier University
Target Detection Using Fractal Geometry
NASA Technical Reports Server (NTRS)
Fuller, J. Joseph
1991-01-01
The concepts and theory of fractal geometry were applied to the problem of segmenting a 256 x 256 pixel image so that manmade objects could be extracted from natural backgrounds. The two most important measurements necessary to extract these manmade objects were fractal dimension and lacunarity. Provision was made to pass the manmade portion to a lookup table for subsequent identification. A computer program was written to construct cloud backgrounds of fractal dimensions which were allowed to vary between 2.2 and 2.8. Images of three model space targets were combined with these backgrounds to provide a data set for testing the validity of the approach. Once the data set was constructed, computer programs were written to extract estimates of the fractal dimension and lacunarity on 4 x 4 pixel subsets of the image. It was shown that for clouds of fractal dimension 2.7 or less, appropriate thresholding on fractal dimension and lacunarity yielded a 64 x 64 edge-detected image with all or most of the cloud background removed. These images were enhanced by an erosion and dilation to provide the final image passed to the lookup table. While the ultimate goal was to pass the final image to a neural network for identification, this work shows the applicability of fractal geometry to the problems of image segmentation, edge detection and separating a target of interest from a natural background.
The transience of virtual fractals.
Taylor, R P
2012-01-01
Artists have a long and fruitful tradition of exploiting electronic media to convert static images into dynamic images that evolve with time. Fractal patterns serve as an example: computers allow the observer to zoom in on virtual images and so experience the endless repetition of patterns in a matter that cannot be matched using static images. This year's featured cover artist, Susan Lowedermilk, instead plans to employ persistence of human vision to bring virtual fractals to life. This will be done by incorporating her prints of fractal patterns into zoetropes and phenakistoscopes.
Compressive imaging using fast transform coding
NASA Astrophysics Data System (ADS)
Thompson, Andrew; Calderbank, Robert
2016-10-01
We propose deterministic sampling strategies for compressive imaging based on Delsarte-Goethals frames. We show that these sampling strategies result in multi-scale measurements which can be related to the 2D Haar wavelet transform. We demonstrate the effectiveness of our proposed strategies through numerical experiments.
Learning Short Binary Codes for Large-scale Image Retrieval.
Liu, Li; Yu, Mengyang; Shao, Ling
2017-03-01
Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms prove to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually the code length shorter than 100 b) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of data, MCR can generate one bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost-values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve comparative performance as the state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
Hybrid Compton camera/coded aperture imaging system
Mihailescu, Lucian [Livermore, CA; Vetter, Kai M [Alameda, CA
2012-04-10
A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
A model of PSF estimation for coded mask infrared imaging
NASA Astrophysics Data System (ADS)
Zhang, Ao; Jin, Jie; Wang, Qing; Yang, Jingyu; Sun, Yi
2014-11-01
The point spread function (PSF) of imaging system with coded mask is generally acquired by practical measure- ment with calibration light source. As the thermal radiation of coded masks are relatively severe than it is in visible imaging systems, which buries the modulation effects of the mask pattern, it is difficult to estimate and evaluate the performance of mask pattern from measured results. To tackle this problem, a model for infrared imaging systems with masks is presented in this paper. The model is composed with two functional components, the coded mask imaging with ideal focused lenses and the imperfection imaging with practical lenses. Ignoring the thermal radiation, the systems PSF can then be represented by a convolution of the diffraction pattern of mask with the PSF of practical lenses. To evaluate performances of different mask patterns, a set of criterion are designed according to different imaging and recovery methods. Furthermore, imaging results with inclined plane waves are analyzed to achieve the variation of PSF within the view field. The influence of mask cell size is also analyzed to control the diffraction pattern. Numerical results show that mask pattern for direct imaging systems should have more random structures, while more periodic structures are needed in system with image reconstruction. By adjusting the combination of random and periodic arrangement, desired diffraction pattern can be achieved.
IMPROVEMENTS IN CODED APERTURE THERMAL NEUTRON IMAGING.
VANIER,P.E.
2003-08-03
A new thermal neutron imaging system has been constructed, based on a 20-cm x 17-cm He-3 position-sensitive detector with spatial resolution better than 1 mm. New compact custom-designed position-decoding electronics are employed, as well as high-precision cadmium masks with Modified Uniformly Redundant Array patterns. Fast Fourier Transform algorithms are incorporated into the deconvolution software to provide rapid conversion of shadowgrams into real images. The system demonstrates the principles for locating sources of thermal neutrons by a stand-off technique, as well as visualizing the shapes of nearby sources. The data acquisition time could potentially be reduced two orders of magnitude by building larger detectors.
Efficient block error concealment code for image and video transmission
NASA Astrophysics Data System (ADS)
Min, Jungki; Chan, Andrew K.
1999-05-01
Image and video compression standards such as JPEG, MPEG, H.263 are highly sensitive to error during transmission. Among typical error propagation mechanisms in video compression schemes, loss of block synchronization produces the worst image degradation. Even an error of a single bit in block synchronization may result in data to be placed in wrong positions that is caused by spatial shifts. Our proposed efficient block error concealment code (EBECC) virtually guarantees block synchronization and it improves coding efficiency by several hundred folds over the error resilient entropy code (EREC), proposed by N. G. Kingsbury and D. W. Redmill, depending on the image format and size. In addition, the EBECC produces slightly better resolution on the reconstructed images or video frames than those from the EREC. Another important advantage of the EBECC is that it does not require redundancy contrasting to the EREC that requires 2-3 percent of redundancy. Our preliminary results show the EBECC is 240 times faster than EREC for encoding and 330 times for decoding based on the CIF format of H.263 video coding standard. The EBECC can be used on most of the popular image and video compression schemes such as JPEG, MPEG, and H.263. Additionally, it is especially useful to wireless networks in which the percentage of image and video data is high.
Barker-coded excitation in ophthalmological ultrasound imaging
Zhou, Sheng; Wang, Xiao-Chun; Yang, Jun; Ji, Jian-Jun; Wang, Yan-Qun
2014-01-01
High-frequency ultrasound is an attractive means to obtain fine-resolution images of biological tissues for ophthalmologic imaging. To solve the tradeoff between axial resolution and detection depth, existing in the conventional single-pulse excitation, this study develops a new method which uses 13-bit Barker-coded excitation and a mismatched filter for high-frequency ophthalmologic imaging. A novel imaging platform has been designed after trying out various encoding methods. The simulation and experiment result show that the mismatched filter can achieve a much higher out signal main to side lobe which is 9.7 times of the matched one. The coded excitation method has significant advantages over the single-pulse excitation system in terms of a lower MI, a higher resolution, and a deeper detection depth, which improve the quality of ophthalmic tissue imaging. Therefore, this method has great values in scientific application and medical market. PMID:25356093
NASA Astrophysics Data System (ADS)
Hakim, P. R.; Permala, R.
2017-01-01
LAPAN-A3/IPB satellite is the latest Indonesian experimental microsatellite with remote sensing and earth surveillance missions. The satellite has three optical payloads, which are multispectral push-broom imager, digital matrix camera and video camera. To increase data transmission efficiency, the multispectral imager data can be compressed using either lossy or lossless compression method. This paper aims to analyze Differential Pulse Code Modulation (DPCM) method and Huffman coding that are used in LAPAN-IPB satellite image lossless compression. Based on several simulation and analysis that have been done, current LAPAN-IPB lossless compression algorithm has moderate performance. There are several aspects that can be improved from current configuration, which are the type of DPCM code used, the type of Huffman entropy-coding scheme, and the use of sub-image compression method. The key result of this research shows that at least two neighboring pixels should be used for DPCM calculation to increase compression performance. Meanwhile, varying Huffman tables with sub-image approach could also increase the performance if on-board computer can support for more complicated algorithm. These results can be used as references in designing Payload Data Handling System (PDHS) for an upcoming LAPAN-A4 satellite.
Image coding by way of wavelets
NASA Technical Reports Server (NTRS)
Shahshahani, M.
1993-01-01
The application of two wavelet transforms to image compression is discussed. It is noted that the Haar transform, with proper bit allocation, has performance that is visually superior to an algorithm based on a Daubechies filter and to the discrete cosine transform based Joint Photographic Experts Group (JPEG) algorithm at compression ratios exceeding 20:1. In terms of the root-mean-square error, the performance of the Haar transform method is basically comparable to that of the JPEG algorithm. The implementation of the Haar transform can be achieved in integer arithmetic, making it very suitable for applications requiring real-time performance.
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
Orourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
Coded access optical sensor (CAOS) imager and applications
NASA Astrophysics Data System (ADS)
Riza, Nabeel A.
2016-04-01
Starting in 2001, we proposed and extensively demonstrated (using a DMD: Digital Micromirror Device) an agile pixel Spatial Light Modulator (SLM)-based optical imager based on single pixel photo-detection (also called a single pixel camera) that is suited for operations with both coherent and incoherent light across broad spectral bands. This imager design operates with the agile pixels programmed in a limited SNR operations starring time-multiplexed mode where acquisition of image irradiance (i.e., intensity) data is done one agile pixel at a time across the SLM plane where the incident image radiation is present. Motivated by modern day advances in RF wireless, optical wired communications and electronic signal processing technologies and using our prior-art SLM-based optical imager design, described using a surprisingly simple approach is a new imager design called Coded Access Optical Sensor (CAOS) that has the ability to alleviate some of the key prior imager fundamental limitations. The agile pixel in the CAOS imager can operate in different time-frequency coding modes like Frequency Division Multiple Access (FDMA), Code-Division Multiple Access (CDMA), and Time Division Multiple Access (TDMA). Data from a first CAOS camera demonstration is described along with novel designs of CAOS-based optical instruments for various applications.
Resolution scalable image coding with reversible cellular automata.
Cappellari, Lorenzo; Milani, Simone; Cruz-Reyes, Carlos; Calvagno, Giancarlo
2011-05-01
In a resolution scalable image coding algorithm, a multiresolution representation of the data is often obtained using a linear filter bank. Reversible cellular automata have been recently proposed as simpler, nonlinear filter banks that produce a similar representation. The original image is decomposed into four subbands, such that one of them retains most of the features of the original image at a reduced scale. In this paper, we discuss the utilization of reversible cellular automata and arithmetic coding for scalable compression of binary and grayscale images. In the binary case, the proposed algorithm that uses simple local rules compares well with the JBIG compression standard, in particular for images where the foreground is made of a simple connected region. For complex images, more efficient local rules based upon the lifting principle have been designed. They provide compression performances very close to or even better than JBIG, depending upon the image characteristics. In the grayscale case, and in particular for smooth images such as depth maps, the proposed algorithm outperforms both the JBIG and the JPEG2000 standards under most coding conditions.
Stadnitski, Tatjana
2012-01-01
WHEN INVESTIGATING FRACTAL PHENOMENA, THE FOLLOWING QUESTIONS ARE FUNDAMENTAL FOR THE APPLIED RESEARCHER: (1) What are essential statistical properties of 1/f noise? (2) Which estimators are available for measuring fractality? (3) Which measurement instruments are appropriate and how are they applied? The purpose of this article is to give clear and comprehensible answers to these questions. First, theoretical characteristics of a fractal pattern (self-similarity, long memory, power law) and the related fractal parameters (the Hurst coefficient, the scaling exponent α, the fractional differencing parameter d of the autoregressive fractionally integrated moving average methodology, the power exponent β of the spectral analysis) are discussed. Then, estimators of fractal parameters from different software packages commonly used by applied researchers (R, SAS, SPSS) are introduced and evaluated. Advantages, disadvantages, and constrains of the popular estimators ([Formula: see text] power spectral density, detrended fluctuation analysis, signal summation conversion) are illustrated by elaborate examples. Finally, crucial steps of fractal analysis (plotting time series data, autocorrelation, and spectral functions; performing stationarity tests; choosing an adequate estimator; estimating fractal parameters; distinguishing fractal processes from short-memory patterns) are demonstrated with empirical time series.
Stadnitski, Tatjana
2012-01-01
When investigating fractal phenomena, the following questions are fundamental for the applied researcher: (1) What are essential statistical properties of 1/f noise? (2) Which estimators are available for measuring fractality? (3) Which measurement instruments are appropriate and how are they applied? The purpose of this article is to give clear and comprehensible answers to these questions. First, theoretical characteristics of a fractal pattern (self-similarity, long memory, power law) and the related fractal parameters (the Hurst coefficient, the scaling exponent α, the fractional differencing parameter d of the autoregressive fractionally integrated moving average methodology, the power exponent β of the spectral analysis) are discussed. Then, estimators of fractal parameters from different software packages commonly used by applied researchers (R, SAS, SPSS) are introduced and evaluated. Advantages, disadvantages, and constrains of the popular estimators (d^ML, power spectral density, detrended fluctuation analysis, signal summation conversion) are illustrated by elaborate examples. Finally, crucial steps of fractal analysis (plotting time series data, autocorrelation, and spectral functions; performing stationarity tests; choosing an adequate estimator; estimating fractal parameters; distinguishing fractal processes from short-memory patterns) are demonstrated with empirical time series. PMID:22586408
ERIC Educational Resources Information Center
Gray, Shirley B.
1992-01-01
This article traces the historical development of fractal geometry from early in the twentieth century and offers an explanation of the mathematics behind the recursion formulas and their representations within computer graphics. Also included are the fundamentals behind programing for fractal graphics in the C Language with appropriate…
ERIC Educational Resources Information Center
Dewdney, A. K.
1991-01-01
Explores the subject of fractal geometry focusing on the occurrence of fractal-like shapes in the natural world. Topics include iterated functions, chaos theory, the Lorenz attractor, logistic maps, the Mandelbrot set, and mini-Mandelbrot sets. Provides appropriate computer algorithms, as well as further sources of information. (JJK)
Matsueda, Hiroaki; Lee, Ching Hua
2015-03-10
We examine singular value spectrum of a class of two-dimensional fractal images. We find that the spectra can be mapped onto entanglement spectra of free fermions in one dimension. This exact mapping tells us that the singular value decomposition is a way of detecting a holographic relation between classical and quantum systems.
Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding
Xiao, Rui; Gao, Junbin; Bossomaier, Terry
2016-01-01
A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to the “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling due to the different characteristics of HS images in their spectral and shape domain of panchromatic imagery compared to traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector for different spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit distribution of the known pixel vector, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as the additional reference band together with the immediate previous band when we apply the HEVC. Every spectral band of an HS image is treated like it is an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D micro-blocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographers Expert Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
Pyramidal Image-Processing Code For Hexagonal Grid
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.
1990-01-01
Algorithm based on processing of information on intensities of picture elements arranged in regular hexagonal grid. Called "image pyramid" because image information at each processing level arranged in hexagonal grid having one-seventh number of picture elements of next lower processing level, each picture element derived from hexagonal set of seven nearest-neighbor picture elements in next lower level. At lowest level, fine-resolution of elements of original image. Designed to have some properties of image-coding scheme of primate visual cortex.
New coding concept for fast ultrasound imaging using pulse trains
NASA Astrophysics Data System (ADS)
Misaridis, Thanasis; Jensen, Joergen A.
2002-04-01
Frame rate in ultrasound imaging can be increased by simultaneous transmission of multiple beams using coded waveforms. However, the achievable degree of orthogonality among coded waveforms is limited in ultrasound, and the image quality degrades unacceptably due to interbeam interference. In this paper, an alternative combined time-space coding approach is undertaken. In the new method all transducer elements are excited with short pulses and the high time-bandwidth (TB) product waveforms are generated acoustically. Each element transmits a short pulse spherical wave with a constant transmit delay from element to element, long enough to assure no pulse overlapping for all depths in the image. Frequency shift keying is used for per element coding. The received signals from a point scatterer are staggered pulse trains which are beamformed for all beam directions and further processed with a bank of matched filters (one for each beam direction). Filtering compresses the pulse train to a single pulse at the scatterer position with a number of spike axial sidelobes. Cancellation of the ambiguity spikes is done by applying additional phase modulation from one emission to the next and summing every two successive images. Simulation results presented for QLFM and Costas spatial encoding schemes show that the proposed method can yield images with range sidelobes down to -45 dB using only two emissions.
Fractal analysis for assessing tumour grade in microscopic images of breast tissue
NASA Astrophysics Data System (ADS)
Tambasco, Mauro; Costello, Meghan; Newcomb, Chris; Magliocco, Anthony M.
2007-03-01
In 2006, breast cancer is expected to continue as the leading form of cancer diagnosed in women, and the second leading cause of cancer mortality in this group. A method that has proven useful for guiding the choice of treatment strategy is the assessment of histological tumor grade. The grading is based upon the mitosis count, nuclear pleomorphism, and tubular formation, and is known to be subject to inter-observer variability. Since cancer grade is one of the most significant predictors of prognosis, errors in grading can affect patient management and outcome. Hence, there is a need to develop a breast cancer-grading tool that is minimally operator dependent to reduce variability associated with the current grading system, and thereby reduce uncertainty that may impact patient outcome. In this work, we explored the potential of a computer-based approach using fractal analysis as a quantitative measure of cancer grade for breast specimens. More specifically, we developed and optimized computational tools to compute the fractal dimension of low- versus high-grade breast sections and found them to be significantly different, 1.3+/-0.10 versus 1.49+/-0.10, respectively (Kolmogorov-Smirnov test, p<0.001). These results indicate that fractal dimension (a measure of morphologic complexity) may be a useful tool for demarcating low- versus high-grade cancer specimens, and has potential as an objective measure of breast cancer grade. Such prognostic value could provide more sensitive and specific information that would reduce inter-observer variability by aiding the pathologist in grading cancers.
Medical image registration using sparse coding of image patches.
Afzali, Maryam; Ghaffari, Aboozar; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid
2016-06-01
Image registration is a basic task in medical image processing applications like group analysis and atlas construction. Similarity measure is a critical ingredient of image registration. Intensity distortion of medical images is not considered in most previous similarity measures. Therefore, in the presence of bias field distortions, they do not generate an acceptable registration. In this paper, we propose a sparse based similarity measure for mono-modal images that considers non-stationary intensity and spatially-varying distortions. The main idea behind this measure is that the aligned image is constructed by an analysis dictionary trained using the image patches. For this purpose, we use "Analysis K-SVD" to train the dictionary and find the sparse coefficients. We utilize image patches to construct the analysis dictionary and then we employ the proposed sparse similarity measure to find a non-rigid transformation using free form deformation (FFD). Experimental results show that the proposed approach is able to robustly register 2D and 3D images in both simulated and real cases. The proposed method outperforms other state-of-the-art similarity measures and decreases the transformation error compared to the previous methods. Even in the presence of bias field distortion, the proposed method aligns images without any preprocessing.
Li, Heheng; Luo, Liangping; Huang, Li
2011-02-01
The present paper is aimed to study the fractal spectrum of the cerebral computerized tomography in 158 normal infants of different age groups, based on the calculation of chaotic theory. The distribution range of neonatal period was 1.88-1.90 (mean = 1.8913 +/- 0.0064); It reached a stable condition at the level of 1.89-1.90 during 1-12 months old (mean = 1.8927 +/- 0.0045); The normal range of 1-2 years old infants was 1.86-1.90 (mean = 1.8863 +/- 4 0.0085); It kept the invariance of the quantitative value among 1.88-1.91(mean = 1.8958 +/- 0.0083) during 2-3 years of age. ANOVA indicated there's no significant difference between boys and girls (F = 0.243, P > 0.05), but the difference of age groups was significant (F = 8.947, P < 0.001). The fractal dimension of cerebral computerized tomography in normal infants computed by box methods was maintained at an efficient stability from 1.86 to 1.91. It indicated that there exit some attractor modes in pediatric brain development.
Digital Image Analysis for DETCHIP(®) Code Determination.
Lyon, Marcus; Wilson, Mark V; Rouhier, Kerry A; Symonsbergen, David J; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E; Sikich, Sharmin M; Jackson, Abby
2012-08-01
DETECHIP(®) is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP(®) used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP(®). Both a digital camera and flatbed desktop photo scanner were used to obtain Jpeg images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced in the clearest and more reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus pervious methods.
Adaptive directional lifting-based wavelet transform for image coding.
Ding, Wenpeng; Wu, Feng; Wu, Xiaolin; Li, Shipeng; Li, Houqiang
2007-02-01
We present a novel 2-D wavelet transform scheme of adaptive directional lifting (ADL) in image coding. Instead of alternately applying horizontal and vertical lifting, as in present practice, ADL performs lifting-based prediction in local windows in the direction of high pixel correlation. Hence, it adapts far better to the image orientation features in local windows. The ADL transform is achieved by existing 1-D wavelets and is seamlessly integrated into the global wavelet transform. The predicting and updating signals of ADL can be derived even at the fractional pixel precision level to achieve high directional resolution, while still maintaining perfect reconstruction. To enhance the ADL performance, a rate-distortion optimized directional segmentation scheme is also proposed to form and code a hierarchical image partition adapting to local features. Experimental results show that the proposed ADL-based image coding technique outperforms JPEG 2000 in both PSNR and visual quality, with the improvement up to 2.0 dB on images with rich orientation features.
Sparse representation-based image restoration via nonlocal supervised coding
NASA Astrophysics Data System (ADS)
Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng
2016-10-01
Sparse representation (SR) and nonlocal technique (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain a faithful restoration results when they are used independently. To improve the performance, in this paper, a nonlocal supervised coding strategy-based NLT for image restoration is proposed. The novel method has three main contributions. First, to exploit the useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as the supervised weights among patches. Second, a novel objective function is proposed, which integrated the supervised weights learning and the nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergence, a numerical scheme based on iterative shrinkage thresholding is developed to solve the above underdetermined inverse problem. The extensive experiments validate the effectiveness of the proposed method.
Image coding based on energy-sorted wavelet packets
NASA Astrophysics Data System (ADS)
Kong, Lin-Wen; Lay, Kuen-Tsair
1995-04-01
The discrete wavelet transform performs multiresolution analysis, which effectively decomposes a digital image into components with different degrees of details. In practice, it is usually implemented in the form of filter banks. If the filter banks are cascaded and both the low-pass and the high-pass components are further decomposed, a wavelet packet is obtained. The coefficients of the wavelet packet effectively represent subimages in different resolution levels. In the energy-sorted wavelet- packet decomposition, all subimages in the packet are then sorted according to their energies. The most important subimages, as measured by the energy, are preserved and coded. By investigating the histogram of each subimage, it is found that the pixel values are well modelled by the Laplacian distribution. Therefore, the Laplacian quantization is applied to quantized the subimages. Experimental results show that the image coding scheme based on wavelet packets achieves high compression ratio while preserving satisfactory image quality.
Study on scalable coding algorithm for medical image.
Hongxin, Chen; Zhengguang, Liu; Hongwei, Zhang
2005-01-01
According to the characteristics of medical image and wavelet transform, a scalable coding algorithm is presented, which can be used in image transmission by network. Wavelet transform makes up for the weakness of DCT transform and it is similar to the human visual system. The second generation of wavelet transform, the lifting scheme, can be completed by integer form, which is divided into several steps, and they can be realized by calculation form integer to integer. Lifting scheme can simplify the computing process and increase transform precision. According to the property of wavelet sub-bands, wavelet coefficients are organized on the basis of the sequence of their importance, so code stream is formed progressively and it is scalable in resolution. Experimental results show that the algorithm can be used effectively in medical image compression and suitable to long-distance browse.
A novel fractal monocular and stereo video codec with object-based functionality
NASA Astrophysics Data System (ADS)
Zhu, Shiping; Li, Liyun; Wang, Zaikuo
2012-12-01
Based on the classical fractal video compression method, an improved monocular fractal compression method is proposed which includes using more effective macroblock partition scheme instead of classical quadtree partition scheme; using improved fast motion estimation to increase the calculation speed; using homo-I-frame like in H.264, etc. The monocular codec uses the motion compensated prediction (MCP) structure. And stereo fractal video coding is proposed which matches the macroblock with two reference frames in left and right views, and it results in increasing compression ratio and reducing bit rate/bandwidth when transmitting compressed video data. The stereo codec combines MCP and disparity compensated prediction. And a new method of object-based fractal video coding is proposed in which each object can be encoded and decoded independently with higher compression ratio and speed and less bit rate/bandwidth when transmitting compressed stereo video data greatly. Experimental results indicate that the proposed monocular method can raise compression ratio 3.6 to 7.5 times, speed up compression time 5.3 to 22.3 times, and improve the image quality 3.81 to 9.24 dB in comparison with circular prediction mapping and non-contractive interframe mapping. The PSNR of the proposed stereo video coding is about 0.17 dB higher than that of the proposed monocular video coding, and 0.69 dB higher than that of JMVC 4.0 on average. Comparing with the bit rate resulted by the proposed monocular video coding and JMVC 4.0, the proposed stereo video coding achieves, on average, 2.53 and 21.14 Kbps bit rate saving, respectively. The proposed object-based fractal monocular and stereo video coding methods are simple and effective, and they make the applications of fractal monocular and stereo video coding more flexible and practicable.
NASA Technical Reports Server (NTRS)
Jaggi, S.; Quattrochi, Dale A.; Lam, Nina S.-N.
1993-01-01
Fractal geometry is increasingly becoming a useful tool for modeling natural phenomena. As an alternative to Euclidean concepts, fractals allow for a more accurate representation of the nature of complexity in natural boundaries and surfaces. The purpose of this paper is to introduce and implement three algorithms in C code for deriving fractal measurement from remotely sensed data. These three methods are: the line-divider method, the variogram method, and the triangular prism method. Remote-sensing data acquired by NASA's Calibrated Airborne Multispectral Scanner (CAMS) are used to compute the fractal dimension using each of the three methods. These data were obtained as a 30 m pixel spatial resolution over a portion of western Puerto Rico in January 1990. A description of the three methods, their implementation in PC-compatible environment, and some results of applying these algorithms to remotely sensed image data are presented.
Comparative noise performance of a coded aperture spectral imager
NASA Astrophysics Data System (ADS)
Piper, Jonathan; Yuen, Peter; Godfree, Peter; Ding, Mengjia; Soori, Umair; Selvagumar, Senthurran; James, David
2016-10-01
Novel types of spectral sensors using coded apertures may offer various advantages over conventional designs, especially the possibility of compressive measurements that could exceed the expected spatial, temporal or spectral resolution of the system. However, the nature of the measurement process imposes certain limitations, especially on the noise performance of the sensor. This paper considers a particular type of coded-aperture spectral imager and uses analytical and numerical modelling to compare its expected noise performance with conventional hyperspectral sensors. It is shown that conventional sensors may have an advantage in conditions where signal levels are high, such as bright light or slow scanning, but that coded-aperture sensors may be advantageous in low-signal conditions.
Wavelet-based zerotree coding of aerospace images
NASA Astrophysics Data System (ADS)
Franques, Victoria T.; Jain, Vijay K.
1996-06-01
This paper presents a wavelet based image coding method achieving high levels of compression. A multi-resolution subband decomposition system is constructed using Quadrature Mirror Filters. Symmetric extension and windowing of the multi-scaled subbands are incorporated to minimize the boundary effects. Next, the Embedded Zerotree Wavelet coding algorithm is used for data compression method. Elimination of the isolated zero symbol, for certain subbands, leads to an improved EZW algorithm. Further compression is obtained with an adaptive arithmetic coder. We achieve a PSNR of 26.91 dB at a bit rate of 0.018, 35.59 dB at a bit rate of 0.149, and 43.05 dB at 0.892 bits/pixel for the aerospace image, Refuel.
Improved zerotree coding algorithm for wavelet image compression
NASA Astrophysics Data System (ADS)
Chen, Jun; Li, Yunsong; Wu, Chengke
2000-12-01
A listless minimum zerotree coding algorithm based on the fast lifting wavelet transform with lower memory requirement and higher compression performance is presented in this paper. Most state-of-the-art image compression techniques based on wavelet coefficients, such as EZW and SPIHT, exploit the dependency between the subbands in a wavelet transformed image. We propose a minimum zerotree of wavelet coefficients which exploits the dependency not only between the coarser and the finer subbands but also within the lowest frequency subband. And a ne listless significance map coding algorithm based on the minimum zerotree, using new flag maps and new scanning order different form Wen-Kuo Lin et al. LZC, is also proposed. A comparison reveals that the PSNR results of LMZC are higher than those of LZC, and the compression performance of LMZC outperforms that of SPIHT in terms of hard implementation.
Peak transform for efficient image representation and coding.
He, Zhihai
2007-07-01
In this work, we introduce a nonlinear geometric transform, called peak transform (PT), for efficient image representation and coding. The proposed PT is able to convert high-frequency signals into low-frequency ones, making them much easier to be compressed. Coupled with wavelet transform and subband decomposition, the PT is able to significantly reduce signal energy in high-frequency subbands and achieve a significant transform coding gain. This has important applications in efficient data representation and compression. To maximize the transform coding gain, we develop a dynamic programming solution for optimum PT design. Based on PT, we design an image encoder, called the PT encoder, for efficient image compression. Our extensive experimental results demonstrate that, in wavelet-based subband decomposition, the signal energy in high-frequency subbands can be reduced by up to 60% if a PT is applied. The PT image encoder outperforms state-of-the-art JPEG2000 and H.264 (INTRA) encoders by up to 2-3 dB in peak signal-to-noise ratio (PSNR), especially for images with a significant amount of high-frequency components. Our experimental results also show that the proposed PT is able to efficiently capture and preserve high-frequency image features (e.g., edges) and yields significantly improved visual quality. We believe that the concept explored in this work, designing a nonlinear transform to convert hard-to-compress signals into easy ones, is very useful. We hope this work would motivate more research work along this direction.
Waliszewski, Przemyslaw
2016-01-01
The subjective evaluation of tumor aggressiveness is a cornerstone of the contemporary tumor pathology. A large intra- and interobserver variability is a known limiting factor of this approach. This fundamental weakness influences the statistical deterministic models of progression risk assessment. It is unlikely that the recent modification of tumor grading according to Gleason criteria for prostate carcinoma will cause a qualitative change and improve significantly the accuracy. The Gleason system does not allow the identification of low aggressive carcinomas by some precise criteria. The ontological dichotomy implies the application of an objective, quantitative approach for the evaluation of tumor aggressiveness as an alternative. That novel approach must be developed and validated in a manner that is independent of the results of any subjective evaluation. For example, computer-aided image analysis can provide information about geometry of the spatial distribution of cancer cell nuclei. A series of the interrelated complexity measures characterizes unequivocally the complex tumor images. Using those measures, carcinomas can be classified into the classes of equivalence and compared with each other. Furthermore, those measures define the quantitative criteria for the identification of low- and high-aggressive prostate carcinomas, the information that the subjective approach is not able to provide. The co-application of those complexity measures in cluster analysis leads to the conclusion that either the subjective or objective classification of tumor aggressiveness for prostate carcinomas should comprise maximal three grades (or classes). Finally, this set of the global fractal dimensions enables a look into dynamics of the underlying cellular system of interacting cells and the reconstruction of the temporal-spatial attractor based on the Taken's embedding theorem. Both computer-aided image analysis and the subsequent fractal synthesis could be performed
Waliszewski, Przemyslaw
2016-01-01
The subjective evaluation of tumor aggressiveness is a cornerstone of the contemporary tumor pathology. A large intra- and interobserver variability is a known limiting factor of this approach. This fundamental weakness influences the statistical deterministic models of progression risk assessment. It is unlikely that the recent modification of tumor grading according to Gleason criteria for prostate carcinoma will cause a qualitative change and improve significantly the accuracy. The Gleason system does not allow the identification of low aggressive carcinomas by some precise criteria. The ontological dichotomy implies the application of an objective, quantitative approach for the evaluation of tumor aggressiveness as an alternative. That novel approach must be developed and validated in a manner that is independent of the results of any subjective evaluation. For example, computer-aided image analysis can provide information about geometry of the spatial distribution of cancer cell nuclei. A series of the interrelated complexity measures characterizes unequivocally the complex tumor images. Using those measures, carcinomas can be classified into the classes of equivalence and compared with each other. Furthermore, those measures define the quantitative criteria for the identification of low- and high-aggressive prostate carcinomas, the information that the subjective approach is not able to provide. The co-application of those complexity measures in cluster analysis leads to the conclusion that either the subjective or objective classification of tumor aggressiveness for prostate carcinomas should comprise maximal three grades (or classes). Finally, this set of the global fractal dimensions enables a look into dynamics of the underlying cellular system of interacting cells and the reconstruction of the temporal-spatial attractor based on the Taken’s embedding theorem. Both computer-aided image analysis and the subsequent fractal synthesis could be performed
Computational radiology and imaging with the MCNP Monte Carlo code
Estes, G.P.; Taylor, W.M.
1995-05-01
MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP`s current uses and capabilities, as well as envisioned improvements that would further enhance MCNP role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.
157km BOTDA with pulse coding and image processing
NASA Astrophysics Data System (ADS)
Qian, Xianyang; Wang, Zinan; Wang, Song; Xue, Naitian; Sun, Wei; Zhang, Li; Zhang, Bin; Rao, Yunjiang
2016-05-01
A repeater-less Brillouin optical time-domain analyzer (BOTDA) with 157.68km sensing range is demonstrated, using the combination of random fiber laser Raman pumping and low-noise laser-diode-Raman pumping. With optical pulse coding (OPC) and Non Local Means (NLM) image processing, temperature sensing with +/-0.70°C uncertainty and 8m spatial resolution is experimentally demonstrated. The image processing approach has been proved to be compatible with OPC, and it further increases the figure-of-merit (FoM) of the system by 57%.
Refined codebook for grayscale image coding based on vector quantization
NASA Astrophysics Data System (ADS)
Hu, Yu-Chen; Chen, Wu-Lin; Tsai, Pi-Yu
2015-07-01
Vector quantization (VQ) is a commonly used technique for image compression. Typically, the common codebooks (CCBs) that are designed by using multiple training images are used in VQ. The CCBs are stored in the public websites such that their storage cost can be omitted. In addition to the CCBs, the private codebooks (PCBs) that are designed by using the image to be compressed can be used in VQ. However, calculating the bit rates (BRs) of VQ includes the storage cost of the PCBs. It is observed that some codewords in the CCB are not used in VQ. The codebook refinement process is designed to generate the refined codebook (RCB) based on the CCB of each image. To cut down the BRs, the lossless index coding process and the two-stage lossless coding process are employed to encode the index table and the RCB, respectively. Experimental results reveal that the proposed scheme (PS) achieves better image qualities than VQ with the CCBs. In addition, the PS requires less BRs than VQ with the PCBs.
Superharmonic imaging with chirp coded excitation: filtering spectrally overlapped harmonics.
Harput, Sevan; McLaughlan, James; Cowell, David M J; Freear, Steven
2014-11-01
Superharmonic imaging improves the spatial resolution by using the higher order harmonics generated in tissue. The superharmonic component is formed by combining the third, fourth, and fifth harmonics, which have low energy content and therefore poor SNR. This study uses coded excitation to increase the excitation energy. The SNR improvement is achieved on the receiver side by performing pulse compression with harmonic matched filters. The use of coded signals also introduces new filtering capabilities that are not possible with pulsed excitation. This is especially important when using wideband signals. For narrowband signals, the spectral boundaries of the harmonics are clearly separated and thus easy to filter; however, the available imaging bandwidth is underused. Wideband excitation is preferable for harmonic imaging applications to preserve axial resolution, but it generates spectrally overlapping harmonics that are not possible to filter in time and frequency domains. After pulse compression, this overlap increases the range side lobes, which appear as imaging artifacts and reduce the Bmode image quality. In this study, the isolation of higher order harmonics was achieved in another domain by using the fan chirp transform (FChT). To show the effect of excitation bandwidth in superharmonic imaging, measurements were performed by using linear frequency modulated chirp excitation with varying bandwidths of 10% to 50%. Superharmonic imaging was performed on a wire phantom using a wideband chirp excitation. Results were presented with and without applying the FChT filtering technique by comparing the spatial resolution and side lobe levels. Wideband excitation signals achieved a better resolution as expected, however range side lobes as high as -23 dB were observed for the superharmonic component of chirp excitation with 50% fractional bandwidth. The proposed filtering technique achieved >50 dB range side lobe suppression and improved the image quality without
Probability Distribution Estimation for Autoregressive Pixel-Predictive Image Coding.
Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André
2016-03-01
Pixelwise linear prediction using backward-adaptive least-squares or weighted least-squares estimation of prediction coefficients is currently among the state-of-the-art methods for lossless image compression. While current research is focused on mean intensity prediction of the pixel to be transmitted, best compression requires occurrence probability estimates for all possible intensity values. Apart from common heuristic approaches, we show how prediction error variance estimates can be derived from the (weighted) least-squares training region and how a complete probability distribution can be built based on an autoregressive image model. The analysis of image stationarity properties further allows deriving a novel formula for weight computation in weighted least-squares proofing and generalizing ad hoc equations from the literature. For sparse intensity distributions in non-natural images, a modified image model is presented. Evaluations were done in the newly developed C++ framework volumetric, artificial, and natural image lossless coder (Vanilc), which can compress a wide range of images, including 16-bit medical 3D volumes or multichannel data. A comparison with several of the best available lossless image codecs proofs that the method can achieve very competitive compression ratios. In terms of reproducible research, the source code of Vanilc has been made public.
Lung cancer-a fractal viewpoint.
Lennon, Frances E; Cianci, Gianguido C; Cipriani, Nicole A; Hensing, Thomas A; Zhang, Hannah J; Chen, Chin-Tu; Murgu, Septimiu D; Vokes, Everett E; Vannier, Michael W; Salgia, Ravi
2015-11-01
Fractals are mathematical constructs that show self-similarity over a range of scales and non-integer (fractal) dimensions. Owing to these properties, fractal geometry can be used to efficiently estimate the geometrical complexity, and the irregularity of shapes and patterns observed in lung tumour growth (over space or time), whereas the use of traditional Euclidean geometry in such calculations is more challenging. The application of fractal analysis in biomedical imaging and time series has shown considerable promise for measuring processes as varied as heart and respiratory rates, neuronal cell characterization, and vascular development. Despite the advantages of fractal mathematics and numerous studies demonstrating its applicability to lung cancer research, many researchers and clinicians remain unaware of its potential. Therefore, this Review aims to introduce the fundamental basis of fractals and to illustrate how analysis of fractal dimension (FD) and associated measurements, such as lacunarity (texture) can be performed. We describe the fractal nature of the lung and explain why this organ is particularly suited to fractal analysis. Studies that have used fractal analyses to quantify changes in nuclear and chromatin FD in primary and metastatic tumour cells, and clinical imaging studies that correlated changes in the FD of tumours on CT and/or PET images with tumour growth and treatment responses are reviewed. Moreover, the potential use of these techniques in the diagnosis and therapeutic management of lung cancer are discussed.
Lung cancer—a fractal viewpoint
Lennon, Frances E.; Cianci, Gianguido C.; Cipriani, Nicole A.; Hensing, Thomas A.; Zhang, Hannah J.; Chen, Chin-Tu; Murgu, Septimiu D.; Vokes, Everett E.; W. Vannier, Michael; Salgia, Ravi
2016-01-01
Fractals are mathematical constructs that show self-similarity over a range of scales and non-integer (fractal) dimensions. Owing to these properties, fractal geometry can be used to efficiently estimate the geometrical complexity, and the irregularity of shapes and patterns observed in lung tumour growth (over space or time), whereas the use of traditional Euclidean geometry in such calculations is more challenging. The application of fractal analysis in biomedical imaging and time series has shown considerable promise for measuring processes as varied as heart and respiratory rates, neuronal cell characterization, and vascular development. Despite the advantages of fractal mathematics and numerous studies demonstrating its applicability to lung cancer research, many researchers and clinicians remain unaware of its potential. Therefore, this Review aims to introduce the fundamental basis of fractals and to illustrate how analysis of fractal dimension (FD) and associated measurements, such as lacunarity (texture) can be performed. We describe the fractal nature of the lung and explain why this organ is particularly suited to fractal analysis. Studies that have used fractal analyses to quantify changes in nuclear and chromatin FD in primary and metastatic tumour cells, and clinical imaging studies that correlated changes in the FD of tumours on CT and/or PET images with tumour growth and treatment responses are reviewed. Moreover, the potential use of these techniques in the diagnosis and therapeutic management of lung cancer are discussed. PMID:26169924
Color-coded visualization of magnetic resonance imaging multiparametric maps
NASA Astrophysics Data System (ADS)
Kather, Jakob Nikolas; Weidner, Anja; Attenberger, Ulrike; Bukschat, Yannick; Weis, Cleo-Aron; Weis, Meike; Schad, Lothar R.; Zöllner, Frank Gerrit
2017-01-01
Multiparametric magnetic resonance imaging (mpMRI) data are emergingly used in the clinic e.g. for the diagnosis of prostate cancer. In contrast to conventional MR imaging data, multiparametric data typically include functional measurements such as diffusion and perfusion imaging sequences. Conventionally, these measurements are visualized with a one-dimensional color scale, allowing only for one-dimensional information to be encoded. Yet, human perception places visual information in a three-dimensional color space. In theory, each dimension of this space can be utilized to encode visual information. We addressed this issue and developed a new method for tri-variate color-coded visualization of mpMRI data sets. We showed the usefulness of our method in a preclinical and in a clinical setting: In imaging data of a rat model of acute kidney injury, the method yielded characteristic visual patterns. In a clinical data set of N = 13 prostate cancer mpMRI data, we assessed diagnostic performance in a blinded study with N = 5 observers. Compared to conventional radiological evaluation, color-coded visualization was comparable in terms of positive and negative predictive values. Thus, we showed that human observers can successfully make use of the novel method. This method can be broadly applied to visualize different types of multivariate MRI data.
Coded-aperture Raman imaging for standoff explosive detection
NASA Astrophysics Data System (ADS)
McCain, Scott T.; Guenther, B. D.; Brady, David J.; Krishnamurthy, Kalyani; Willett, Rebecca
2012-06-01
This paper describes the design of a deep-UV Raman imaging spectrometer operating with an excitation wavelength of 228 nm. The designed system will provide the ability to detect explosives (both traditional military explosives and home-made explosives) from standoff distances of 1-10 meters with an interrogation area of 1 mm x 1 mm to 200 mm x 200 mm. This excitation wavelength provides resonant enhancement of many common explosives, no background fluorescence, and an enhanced cross-section due to the inverse wavelength scaling of Raman scattering. A coded-aperture spectrograph combined with compressive imaging algorithms will allow for wide-area interrogation with fast acquisition rates. Coded-aperture spectral imaging exploits the compressibility of hyperspectral data-cubes to greatly reduce the amount of acquired data needed to interrogate an area. The resultant systems are able to cover wider areas much faster than traditional push-broom and tunable filter systems. The full system design will be presented along with initial data from the instrument. Estimates for area scanning rates and chemical sensitivity will be presented. The system components include a solid-state deep-UV laser operating at 228 nm, a spectrograph consisting of well-corrected refractive imaging optics and a reflective grating, an intensified solar-blind CCD camera, and a high-efficiency collection optic.
Color-coded visualization of magnetic resonance imaging multiparametric maps
Kather, Jakob Nikolas; Weidner, Anja; Attenberger, Ulrike; Bukschat, Yannick; Weis, Cleo-Aron; Weis, Meike; Schad, Lothar R.; Zöllner, Frank Gerrit
2017-01-01
Multiparametric magnetic resonance imaging (mpMRI) data are emergingly used in the clinic e.g. for the diagnosis of prostate cancer. In contrast to conventional MR imaging data, multiparametric data typically include functional measurements such as diffusion and perfusion imaging sequences. Conventionally, these measurements are visualized with a one-dimensional color scale, allowing only for one-dimensional information to be encoded. Yet, human perception places visual information in a three-dimensional color space. In theory, each dimension of this space can be utilized to encode visual information. We addressed this issue and developed a new method for tri-variate color-coded visualization of mpMRI data sets. We showed the usefulness of our method in a preclinical and in a clinical setting: In imaging data of a rat model of acute kidney injury, the method yielded characteristic visual patterns. In a clinical data set of N = 13 prostate cancer mpMRI data, we assessed diagnostic performance in a blinded study with N = 5 observers. Compared to conventional radiological evaluation, color-coded visualization was comparable in terms of positive and negative predictive values. Thus, we showed that human observers can successfully make use of the novel method. This method can be broadly applied to visualize different types of multivariate MRI data. PMID:28112222
Investigation into How 8th Grade Students Define Fractals
ERIC Educational Resources Information Center
Karakus, Fatih
2015-01-01
The analysis of 8th grade students' concept definitions and concept images can provide information about their mental schema of fractals. There is limited research on students' understanding and definitions of fractals. Therefore, this study aimed to investigate the elementary students' definitions of fractals based on concept image and concept…
JND measurements and wavelet-based image coding
NASA Astrophysics Data System (ADS)
Shen, Day-Fann; Yan, Loon-Shan
1998-06-01
Two major issues in image coding are the effective incorporation of human visual system (HVS) properties and the effective objective measure for evaluating image quality (OQM). In this paper, we treat the two issues in an integrated fashion. We build a JND model based on the measurements of the JND (Just Noticeable Difference) property of HVS. We found that JND does not only depend on the background intensity but also a function of both spatial frequency and patten direction. Wavelet transform, due to its excellent simultaneous Time (space)/frequency resolution, is the best choice to apply the JND model. We mathematically derive an OQM called JND_PSNR that is based on the JND property and wavelet decomposed subbands. JND_PSNR is more consistent with human perception and is recommended as an alternative to the PSNR or SNR. With the JND_PSNR in mind, we proceed to propose a wavelet and JND based codec called JZW. JZW quantizes coefficients in each subband with proper step size according to the subband's importance to human perception. Many characteristics of JZW are discussed, its performance evaluated and compared with other famous algorithms such as EZW, SPIHT and TCCVQ. Our algorithm has 1 - 1.5 dB gain over SPIHT even when we use simple Huffman coding rather than the more efficient adaptive arithmetic coding.
Optical image encryption based on real-valued coding and subtracting with the help of QR code
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng
2015-08-01
A novel optical image encryption based on real-valued coding and subtracting is proposed with the help of quick response (QR) code. In the encryption process, the original image to be encoded is firstly transformed into the corresponding QR code, and then the corresponding QR code is encoded into two phase-only masks (POMs) by using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone collected results show that the method is feasible and has strong tolerance to noise, phase difference and ratio between intensities of the two decryption light beams.
Applications of fractals in ecology.
Sugihara, G; M May, R
1990-03-01
Fractal models describe the geometry of a wide variety of natural objects such as coastlines, island chains, coral reefs, satellite ocean-color images and patches of vegetation. Cast in the form of modified diffusion models, they can mimic natural and artificial landscapes having different types of complexity of shape. This article provides a brief introduction to fractals and reports on how they can be used by ecologists to answer a variety of basic questions, about scale, measurement and hierarchy in, ecological systems.
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, Vinay A.; Farvardin, Nariman
1990-01-01
The two dimensional block transform coding scheme based on the discrete cosine transform was studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source channel coding optimiaation of a scheme based on the 2-D block cosine transorm when the output of the encoder is to be transmitted via a memoryless design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which under certain convexity conditions on the performance of the channel optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noise channels were obtained and appropriate comparisons against a reference system designed for no channel error were rendered.
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two dimensional block transform coding scheme based on the discrete cosine transform was studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted via a memoryless design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which under certain convexity conditions on the performance of the channel optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained and appropriate comparisons against a reference system designed for no channel error were rendered.
Construction of fractal nanostructures based on Kepler-Shubnikov nets
Ivanov, V. V. Talanov, V. M.
2013-05-15
A system of information codes for deterministic fractal lattices and sets of multifractal curves is proposed. An iterative modular design was used to obtain a series of deterministic fractal lattices with generators in the form of fragments of 2D structures and a series of multifractal curves (based on some Kepler-Shubnikov nets) having Cantor set properties. The main characteristics of fractal structures and their lacunar spectra are determined. A hierarchical principle is formulated for modules of regular fractal structures.
Significance-linked connected component analysis for wavelet image coding.
Chai, B B; Vass, J; Zhuang, X
1999-01-01
Recent success in wavelet image coding is mainly attributed to a recognition of the importance of data organization and representation. There have been several very competitive wavelet coders developed, namely, Shapiro's (1993) embedded zerotree wavelets (EZW), Servetto et al.'s (1995) morphological representation of wavelet data (MRWD), and Said and Pearlman's (see IEEE Trans. Circuits Syst. Video Technol., vol.6, p.245-50, 1996) set partitioning in hierarchical trees (SPIHT). We develop a novel wavelet image coder called significance-linked connected component analysis (SLCCA) of wavelet coefficients that extends MRWD by exploiting both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. Extensive computer experiments on both natural and texture images show convincingly that the proposed SLCCA outperforms EZW, MRWD, and SPIHT. For example, for the Barbara image, at 0.25 b/pixel, SLCCA outperforms EZW, MRWD, and SPIHT by 1.41 dB, 0.32 dB, and 0.60 dB in PSNR, respectively. It is also observed that SLCCA works extremely well for images with a large portion of texture. For eight typical 256x256 grayscale texture images compressed at 0.40 b/pixel, SLCCA outperforms SPIHT by 0.16 dB-0.63 dB in PSNR. This performance is achieved without using any optimal bit allocation procedure. Thus both the encoding and decoding procedures are fast.
Fractal structure of asphaltene aggregates.
Rahmani, Nazmul H G; Dabros, Tadeusz; Masliyah, Jacob H
2005-05-15
A photographic technique coupled with image analysis was used to measure the size and fractal dimension of asphaltene aggregates formed in toluene-heptane solvent mixtures. First, asphaltene aggregates were examined in a Couette device and the fractal-like aggregate structures were quantified using boundary fractal dimension. The evolution of the floc structure with time was monitored. The relative rates of shear-induced aggregation and fragmentation/restructuring determine the steady-state floc structure. The average floc structure became more compact or more organized as the floc size distribution attained steady state. Moreover, the higher the shear rate is, the more compact the floc structure is at steady state. Second, the fractal dimensions of asphaltene aggregates were also determined in a free-settling test. The experimentally determined terminal settling velocities and characteristic lengths of the aggregates were utilized to estimate the 2D and 3D fractal dimensions. The size-density fractal dimension (D(3)) of the asphaltene aggregates was estimated to be in the range from 1.06 to 1.41. This relatively low fractal dimension suggests that the asphaltene aggregates are highly porous and very tenuous. The aggregates have a structure with extremely low space-filling capacity.
HD Photo: a new image coding technology for digital photography
NASA Astrophysics Data System (ADS)
Srinivasan, Sridhar; Tu, Chengjie; Regunathan, Shankar L.; Sullivan, Gary J.
2007-09-01
This paper introduces the HD Photo coding technology developed by Microsoft Corporation. The storage format for this technology is now under consideration in the ITU-T/ISO/IEC JPEG committee as a candidate for standardization under the name JPEG XR. The technology was developed to address end-to-end digital imaging application requirements, particularly including the needs of digital photography. HD Photo includes features such as good compression capability, high dynamic range support, high image quality capability, lossless coding support, full-format 4:4:4 color sampling, simple thumbnail extraction, embedded bitstream scalability of resolution and fidelity, and degradation-free compressed domain support of key manipulations such as cropping, flipping and rotation. HD Photo has been designed to optimize image quality and compression efficiency while also enabling low-complexity encoding and decoding implementations. To ensure low complexity for implementations, the design features have been incorporated in a way that not only minimizes the computational requirements of the individual components (including consideration of such aspects as memory footprint, cache effects, and parallelization opportunities) but results in a self-consistent design that maximizes the commonality of functional processing components.
Block-based embedded color image and video coding
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Pearlman, William A.; Islam, Asad
2004-01-01
Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparable to the best-known still grayscale image coders like EZW, SPIHT, JPEG2000 etc. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best known color coders while the perceptual quality of reconstruction is superior than SPIHT and JPEG2000. We then propose a moving picture based coding system called Motion-SPECK with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode where every frame is coded at the same bit-rate and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparable to Motion-JPEG2000 while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as High Quality Digital Video Recording System, Internet Video, Medical Imaging etc.
Coded-aperture Compton camera for gamma-ray imaging
NASA Astrophysics Data System (ADS)
Farber, Aaron M.
This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.
NASA Astrophysics Data System (ADS)
Aushev, A. A.; Barinov, S. P.; Vasin, M. G.; Drozdov, Yu M.; Ignat'ev, Yu V.; Izgorodin, V. M.; Kovshov, D. K.; Lakhtikov, A. E.; Lukovkina, D. D.; Markelov, V. V.; Morovov, A. P.; Shishlov, V. V.
2015-06-01
We present the results of employing the alpha-spectrometry method to determine the characteristics of porous materials used in targets for laser plasma experiments. It is shown that the energy spectrum of alpha-particles, after their passage through porous samples, allows one to determine the distribution of their path length in the foam skeleton. We describe the procedure of deriving such a distribution, excluding both the distribution broadening due to statistical nature of the alpha-particle interaction with an atomic structure (straggling) and hardware effects. The fractal analysis of micro-images is applied to the same porous surface samples that have been studied by alpha-spectrometry. The fractal dimension and size distribution of the number of the foam skeleton grains are obtained. Using the data obtained, a distribution of the total foam skeleton thickness along a chosen direction is constructed. It roughly coincides with the path length distribution of alpha-particles within a range of larger path lengths. It is concluded that the combined use of the alpha-spectrometry method and fractal analysis of images will make it possible to determine the size distribution of foam skeleton grains (or pores). The results can be used as initial data in theoretical studies on propagation of the laser and X-ray radiation in specific porous samples.
Aushev, A A; Barinov, S P; Vasin, M G; Drozdov, Yu M; Ignat'ev, Yu V; Izgorodin, V M; Kovshov, D K; Lakhtikov, A E; Lukovkina, D D; Markelov, V V; Morovov, A P; Shishlov, V V
2015-06-30
We present the results of employing the alpha-spectrometry method to determine the characteristics of porous materials used in targets for laser plasma experiments. It is shown that the energy spectrum of alpha-particles, after their passage through porous samples, allows one to determine the distribution of their path length in the foam skeleton. We describe the procedure of deriving such a distribution, excluding both the distribution broadening due to statistical nature of the alpha-particle interaction with an atomic structure (straggling) and hardware effects. The fractal analysis of micro-images is applied to the same porous surface samples that have been studied by alpha-spectrometry. The fractal dimension and size distribution of the number of the foam skeleton grains are obtained. Using the data obtained, a distribution of the total foam skeleton thickness along a chosen direction is constructed. It roughly coincides with the path length distribution of alpha-particles within a range of larger path lengths. It is concluded that the combined use of the alpha-spectrometry method and fractal analysis of images will make it possible to determine the size distribution of foam skeleton grains (or pores). The results can be used as initial data in theoretical studies on propagation of the laser and X-ray radiation in specific porous samples. (laser plasma)
Recursive time-varying filter banks for subband image coding
NASA Technical Reports Server (NTRS)
Smith, Mark J. T.; Chung, Wilson C.
1992-01-01
Filter banks and wavelet decompositions that employ recursive filters have been considered previously and are recognized for their efficiency in partitioning the frequency spectrum. This paper presents an analysis of a new infinite impulse response (IIR) filter bank in which these computationally efficient filters may be changed adaptively in response to the input. The filter bank is presented and discussed in the context of finite-support signals with the intended application in subband image coding. In the absence of quantization errors, exact reconstruction can be achieved and by the proper choice of an adaptation scheme, it is shown that IIR time-varying filter banks can yield improvement over conventional ones.
Coded aperture imaging with self-supporting uniformly redundant arrays
Fenimore, Edward E.
1983-01-01
A self-supporting uniformly redundant array pattern for coded aperture imaging. The present invention utilizes holes which are an integer times smaller in each direction than holes in conventional URA patterns. A balance correlation function is generated where holes are represented by 1's, nonholes are represented by -1's, and supporting area is represented by 0's. The self-supporting array can be used for low energy applications where substrates would greatly reduce throughput. The balance correlation response function for the self-supporting array pattern provides an accurate representation of the source of nonfocusable radiation.
A Brief Historical Introduction to Fractals and Fractal Geometry
ERIC Educational Resources Information Center
Debnath, Lokenath
2006-01-01
This paper deals with a brief historical introduction to fractals, fractal dimension and fractal geometry. Many fractals including the Cantor fractal, the Koch fractal, the Minkowski fractal, the Mandelbrot and Given fractal are described to illustrate self-similar geometrical figures. This is followed by the discovery of dynamical systems and…
Pippa, Natassa; Pispas, Stergios; Demetzos, Costas
2014-09-01
The major advance of mixed liposomes (the so-called chimeric systems) is to control the size, structure, and morphology of these nanoassemblies, and therefore, system colloidal properties, with the aid of a large variety of parameters, such as chemical architecture and composition. The goal of this study is to investigate the alterations of the physicochemical and morphological characteristics of chimeric dipalmitoylphosphatidylcholine (DPPC) liposomes, caused by the incorporation of block and gradient copolymers (different macromolecular architecture) with different chemical compositions (different amounts of hydrophobic component). Light scattering techniques were utilized in order to characterize physicochemically and to delineate the fractal morphology of chimeric liposomes. In this study, we also investigated the structural differences between the prepared chimeric liposomes as are visualized by scanning electron microscopy (SEM). It could be concluded that all the chimeric liposomes have regular structure, as SEM images revealed, while their fractal dimensionality was found to be dependent on the macromolecular architecture of the polymeric guest.
NASA Astrophysics Data System (ADS)
Boichuk, T. M.; Bachinskiy, V. T.; Vanchuliak, O. Ya.; Minzer, O. P.; Garazdiuk, M.; Motrich, A. V.
2014-08-01
This research presents the results of investigation of laser polarization fluorescence of biological layers (histological sections of the myocardium). The polarized structure of autofluorescence imaging layers of biological tissues was detected and investigated. Proposed the model of describing the formation of polarization inhomogeneous of autofluorescence imaging biological optically anisotropic layers. On this basis, analytically and experimentally tested to justify the method of laser polarimetry autofluorescent. Analyzed the effectiveness of this method in the postmortem diagnosis of infarction. The objective criteria (statistical moments) of differentiation of autofluorescent images of histological sections myocardium were defined. The operational characteristics (sensitivity, specificity, accuracy) of these technique were determined.
Sparse coding based feature representation method for remote sensing images
NASA Astrophysics Data System (ADS)
Oguslu, Ender
In this dissertation, we study sparse coding based feature representation method for the classification of multispectral and hyperspectral images (HSI). The existing feature representation systems based on the sparse signal model are computationally expensive, requiring to solve a convex optimization problem to learn a dictionary. A sparse coding feature representation framework for the classification of HSI is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary based upon the extracted sub-bands from the spectral representation of a pixel. In the encoding step, we utilize a soft threshold function to obtain sparse feature representations for HSI. Experimental results showed that a randomly selected dictionary could be as effective as a dictionary learned from optimization. The new representation usually has a very high dimensionality requiring a lot of computational resources. In addition, the spatial information of the HSI data has not been included in the representation. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) classifiers to discriminate different types of land cover. We evaluated the proposed algorithm on three well known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP) and image fusion and recursive filtering (IFRF). The results from the experiments showed that the proposed method can achieve better overall and average classification accuracies with a much more compact representation leading to more efficient sparse models for HSI classification. To further
Phase transfer function based method to alleviate image artifacts in wavefront coding imaging system
NASA Astrophysics Data System (ADS)
Mo, Xutao; Wang, Jinjiang
2013-09-01
Wavefront coding technique can extend the depth of filed (DOF) of the incoherent imaging system. Several rectangular separable phase masks (such as cubic type, exponential type, logarithmic type, sinusoidal type, rational type, et al) have been proposed and discussed, because they can extend the DOF up to ten times of the DOF of ordinary imaging system. But according to the research on them, researchers have pointed out that the images are damaged by the artifacts, which usually come from the non-linear phase transfer function (PTF) differences between the PTF used in the image restoration filter and the PTF related to real imaging condition. In order to alleviate the image artifacts in imaging systems with wavefront coding, an optimization model based on the PTF was proposed to make the PTF invariance with the defocus. Thereafter, an image restoration filter based on the average PTF in the designed depth of field was introduced along with the PTF-based optimization. The combination of the optimization and the image restoration proposed can alleviate the artifacts, which was confirmed by the imaging simulation of spoke target. The cubic phase mask (CPM) and exponential phase mask (EPM) were discussed as example.
NASA Astrophysics Data System (ADS)
Zhao, Hui; Zong, Caihui; Wei, Jingxuan; Xie, Xiaopeng
2016-10-01
Wave-front coding, proposed by Dowski and Cathey in 1995, is widely known to be capable of extending the depth of focus (DOF) of incoherent imaging systems. However, benefiting from its very large point spread function (PSF) generated by a suitably designed phase mask that is added to the aperture plane, wave-front coding could also be used to achieve super-resolution without replacing the current sensor with one of smaller pitch size. An image amplification based super-resolution reconstruction procedure has been specifically designed for wave-front coded imaging systems and its effectiveness has been tested by experiment. For instance, for a focal length of 50 mm and f-number 4.5, objects within the range [5 m, ∞] are clearly imaged with the help of wave-front coding, which indicates a DOF extension ratio of approximately 20. The proposed super-resolution reconstruction procedure produces at least 3× resolution improvement, with the quality of the reconstructed super-resolution image approaching the diffraction limit.
NASA Astrophysics Data System (ADS)
Wuorinen, Charles
2015-03-01
Any of the arts may produce exemplars that have fractal characteristics. There may be fractal painting, fractal poetry, and the like. But these will always be specific instances, not necessarily displaying intrinsic properties of the art-medium itself. Only music, I believe, of all the arts possesses an intrinsically fractal character, so that its very nature is fractally determined. Thus, it is reasonable to assert that any instance of music is fractal...
Fractal analysis of complex microstructure in castings
Lu, S.Z.; Lipp, D.C.; Hellawell, A.
1995-12-31
Complex microstructures in castings are usually characterized descriptively which often raises ambiguity and makes it difficult to relate the microstructure to the growth kinetics or mechanical properties in processing modeling. Combining the principle of fractal geometry and computer image processing techniques, it is feasible to characterize the complex microstructures numerically by the parameters of fractal dimension, D, and shape factor, a, without ambiguity. Procedures of fractal measurement and analysis are described, and a test case of its application to cast irons is provided. The results show that the irregular cast structures may all be characterized numerically by fractal analysis.
Fractal signatures in the aperiodic Fibonacci grating.
Verma, Rupesh; Banerjee, Varsha; Senthilkumaran, Paramasivam
2014-05-01
The Fibonacci grating (FbG) is an archetypal example of aperiodicity and self-similarity. While aperiodicity distinguishes it from a fractal, self-similarity identifies it with a fractal. Our paper investigates the outcome of these complementary features on the FbG diffraction profile (FbGDP). We find that the FbGDP has unique characteristics (e.g., no reduction in intensity with increasing generations), in addition to fractal signatures (e.g., a non-integer fractal dimension). These make the Fibonacci architecture potentially useful in image forming devices and other emerging technologies.
Biometric iris image acquisition system with wavefront coding technology
NASA Astrophysics Data System (ADS)
Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao
2013-09-01
Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry, iris texture and so on; the other scenario is dependent on the individual behavioral attributes, such as signature, keystroke, voice and gait style. Among these features, iris recognition is one of the most attractive approaches due to its nature of randomness, texture stability over a life time, high entropy density and non-invasive acquisition. While the performance of iris recognition on high quality image is well investigated, not too many studies addressed that how iris recognition performs subject to non-ideal image data, especially when the data is acquired in challenging conditions, such as long working distance, dynamical movement of subjects, uncontrolled illumination conditions and so on. There are three main contributions in this paper. Firstly, the optical system parameters, such as magnification and field of view, was optimally designed through the first-order optics. Secondly, the irradiance constraints was derived by optical conservation theorem. Through the relationship between the subject and the detector, we could estimate the limitation of working distance when the camera lens and CCD sensor were known. The working distance is set to 3m in our system with pupil diameter 86mm and CCD irradiance 0.3mW/cm2. Finally, We employed a hybrid scheme combining eye tracking with pan and tilt system, wavefront coding technology, filter optimization and post signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over 3m working distance with 400mm focal length and aperture F/6.3 optics. The simulation result as well as experiment validates the proposed code
Microbialites on Mars: a fractal analysis of the Athena's microscopic images
NASA Astrophysics Data System (ADS)
Bianciardi, G.; Rizzo, V.; Cantasano, N.
2015-10-01
The Mars Exploration Rovers investigated Martian plains where laminated sedimentary rocks are present. The Athena morphological investigation [1] showed microstructures organized in intertwined filaments of microspherules: a texture we have also found on samples of terrestrial (biogenic) stromatolites and other microbialites and not on pseudo-abiogenicstromatolites. We performed a quantitative image analysis in order to compare 50 microbialites images with 50 rovers (Opportunity and Spirit) ones (approximately 30,000/30,000 microstructures). Contours were extracted and morphometric indexes obtained: geometric and algorithmic complexities, entropy, tortuosity, minimum and maximum diameters. Terrestrial and Martian textures resulted multifractals. Mean values and confidence intervals from the Martian images overlapped perfectly with those from terrestrial samples. The probability of this occurring by chance was less than 1/28, p<0.004. Our work show the evidence of a widespread presence of microbialites in the Martian outcroppings: i.e., the presence of unicellular life on the ancient Mars, when without any doubt, liquid water flowed on the Red Planet.
Fractal Characterization of Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Qiu, Hon-Iie; Lam, Nina Siu-Ngan; Quattrochi, Dale A.; Gamon, John A.
1999-01-01
Two Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral images selected from the Los Angeles area, one representing urban and the other, rural, were used to examine their spatial complexity across their entire spectrum of the remote sensing data. Using the ICAMS (Image Characterization And Modeling System) software, we computed the fractal dimension values via the isarithm and triangular prism methods for all 224 bands in the two AVIRIS scenes. The resultant fractal dimensions reflect changes in image complexity across the spectral range of the hyperspectral images. Both the isarithm and triangular prism methods detect unusually high D values on the spectral bands that fall within the atmospheric absorption and scattering zones where signature to noise ratios are low. Fractal dimensions for the urban area resulted in higher values than for the rural landscape, and the differences between the resulting D values are more distinct in the visible bands. The triangular prism method is sensitive to a few random speckles in the images, leading to a lower dimensionality. On the contrary, the isarithm method will ignore the speckles and focus on the major variation dominating the surface, thus resulting in a higher dimension. It is seen where the fractal curves plotted for the entire bandwidth range of the hyperspectral images could be used to distinguish landscape types as well as for screening noisy bands.
Image coding using entropy-constrained residual vector quantization
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
The residual vector quantization (RVQ) structure is exploited to produce a variable length codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (ECRVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
The application of coded excitation technology in medical ultrasonic Doppler imaging
NASA Astrophysics Data System (ADS)
Li, Weifeng; Chen, Xiaodong; Bao, Jing; Yu, Daoyin
2008-03-01
Medical ultrasonic Doppler imaging is one of the most important domains of modern medical imaging technology. The application of coded excitation technology in medical ultrasonic Doppler imaging system has the potential of higher SNR and deeper penetration depth than conventional pulse-echo imaging system, it also improves the image quality, and enhances the sensitivity of feeble signal, furthermore, proper coded excitation is beneficial to received spectrum of Doppler signal. Firstly, this paper analyzes the application of coded excitation technology in medical ultrasonic Doppler imaging system abstractly, showing the advantage and bright future of coded excitation technology, then introduces the principle and the theory of coded excitation. Secondly, we compare some coded serials (including Chirp and fake Chirp signal, Barker codes, Golay's complementary serial, M-sequence, etc). Considering Mainlobe Width, Range Sidelobe Level, Signal-to-Noise Ratio and sensitivity of Doppler signal, we choose Barker codes as coded serial. At last, we design the coded excitation circuit. The result in B-mode imaging and Doppler flow measurement coincided with our expectation, which incarnated the advantage of application of coded excitation technology in Digital Medical Ultrasonic Doppler Endoscope Imaging System.
Texture analysis of diagnostic x-ray images by use of fractals
NASA Astrophysics Data System (ADS)
Lundahl, T.; Ohley, W. J.; Kay, S. M.; White, H.; Williams, D. O.; Most, A. S.
1986-11-01
In this work a discrete fractional Brownian motion (FBM) model is applied to xray images as a measure of the regional texture. FBM is a generalization of ordinary Wiener-Levy Brownian motion. A Parameter H is introduced which describes the roughness of the realizations. Using generated realizations, a Cramer-Rao bound for the variance of an estimate of Hwas evaluated using asymptotic statistics. The results show that the accuracy of the estimate is independent of the true H. A maximum likelihood estimator is derived for H and applied to the data sets The results were close to the C-R bound. The MLE is then applied to sequences of digital coronary angiograms. The results show that the H parameter is a useful index by which to segment vessels from background noise.
The two-dimensional code image recognition based on wavelet transform
NASA Astrophysics Data System (ADS)
Wan, Hao; Peng, Cheng
2017-01-01
With the development of information technology, two-dimensional code is more and more widely used. In the technology of two-dimensional code recognition, the noise reduction of the two-dimensional code image is very important. Wavelet transform is applied to the noise reduction of two-dimensional code, and the corresponding Matlab experiment and simulation are made. The results show that the wavelet transform is simple and fast in the noise reduction of two-dimensional code. And it can commendably protect the details of the two-dimensional code image.
NASA Astrophysics Data System (ADS)
Tozian, Cynthia
A coded aperture1 plate was employed on a conventional gamma camera for 3D single photon emission computed tomography (SPECT) imaging on small animal models. The coded aperture design was selected to improve the spatial resolution and decrease the minimum detectable activity (MDA) required to image plaque formation in the APoE (apolipoprotein E) gene deficient mouse model when compared to conventional SPECT techniques. The pattern that was tested was a no-two-holes-touching (NTHT) modified uniformly redundant array (MURA) having 1,920 pinholes. The number of pinholes combined with the thin sintered tungsten plate was designed to increase the efficiency of the imaging modality over conventional gamma camera imaging methods while improving spatial resolution and reducing noise in the image reconstruction. The MDA required to image the vulnerable plaque in a human cardiac-torso mathematical phantom was simulated with a Monte Carlo code and evaluated to determine the optimum plate thickness by a receiver operating characteristic (ROC) yielding the lowest possible MDA and highest area under the curve (AUC). A partial 3D expectation maximization (EM) reconstruction was developed to improve signal-to-noise ratio (SNR), dynamic range, and spatial resolution over the linear correlation method of reconstruction. This improvement was evaluated by imaging a mini hot rod phantom, simulating the dynamic range, and by performing a bone scan of the C-57 control mouse. Results of the experimental and simulated data as well as other plate designs were analyzed for use as a small animal and potentially human cardiac imaging modality for a radiopharmaceutical developed at Bristol-Myers Squibb Medical Imaging Company, North Billerica, MA, for diagnosing vulnerable plaques. If left untreated, these plaques may rupture causing sudden, unexpected coronary occlusion and death. The results of this research indicated that imaging and reconstructing with this new partial 3D algorithm improved
Segmentation of histological structures for fractal analysis
NASA Astrophysics Data System (ADS)
Dixon, Vanessa; Kouznetsov, Alexei; Tambasco, Mauro
2009-02-01
Pathologists examine histology sections to make diagnostic and prognostic assessments regarding cancer based on deviations in cellular and/or glandular structures. However, these assessments are subjective and exhibit some degree of observer variability. Recent studies have shown that fractal dimension (a quantitative measure of structural complexity) has proven useful for characterizing structural deviations and exhibits great potential for automated cancer diagnosis and prognosis. Computing fractal dimension relies on accurate image segmentation to capture the architectural complexity of the histology specimen. For this purpose, previous studies have used techniques such as intensity histogram analysis and edge detection algorithms. However, care must be taken when segmenting pathologically relevant structures since improper edge detection can result in an inaccurate estimation of fractal dimension. In this study, we established a reliable method for segmenting edges from grayscale images. We used a Koch snowflake, an object of known fractal dimension, to investigate the accuracy of various edge detection algorithms and selected the most appropriate algorithm to extract the outline structures. Next, we created validation objects ranging in fractal dimension from 1.3 to 1.9 imitating the size, structural complexity, and spatial pixel intensity distribution of stained histology section images. We applied increasing intensity thresholds to the validation objects to extract the outline structures and observe the effects on the corresponding segmentation and fractal dimension. The intensity threshold yielding the maximum fractal dimension provided the most accurate fractal dimension and segmentation, indicating that this quantitative method could be used in an automated classification system for histology specimens.
Coded tissue harmonic imaging with nonlinear chirp signals.
Song, Jaehee; Chang, Jin Ho; Song, Tai-kyong; Yoo, Yangmo
2011-05-01
Coded tissue harmonic imaging with pulse inversion (CTHI-PI) based on a linear chirp signal can improve the signal-to-noise ratio with minimizing the peak range sidelobe level (PRSL), which is the main advantage over CTHI with bandpass filtering (CTHI-BF). However, the CTHI-PI technique could suffer from motion artifacts due to decreasing frame rate caused by two firings of opposite phase signals for each scanline. In this paper, a new CTHI method based on a nonlinear chirp signal (CTHI-NC) is presented, which can improve the separation of fundamental and harmonic components without sacrificing frame rate. The nonlinear chirp signal is designed to minimize the PRSL value by optimizing its frequency sweep rate and time duration. The performance of the CTHI-NC method was evaluated by measuring the PRSL and mainlobe width after compression. From the in vitro experiments, the CTHI-NC provided the PRSL of -40.6 dB and the mainlobe width of 2.1 μs for the transmit quadratic nonlinear chirp signal with the center frequency of 2.1 MHz, the fractional bandwidth at -6 dB of 0.6 and the time duration of 15 μs. These results indicate that the proposed method could be used for improving frame rates in CTHI while providing comparable image quality to CTHI-PI.
Adaptive zero-tree structure for curved wavelet image coding
NASA Astrophysics Data System (ADS)
Zhang, Liang; Wang, Demin; Vincent, André
2006-02-01
We investigate the issue of efficient data organization and representation of the curved wavelet coefficients [curved wavelet transform (WT)]. We present an adaptive zero-tree structure that exploits the cross-subband similarity of the curved wavelet transform. In the embedded zero-tree wavelet (EZW) and the set partitioning in hierarchical trees (SPIHT), the parent-child relationship is defined in such a way that a parent has four children, restricted to a square of 2×2 pixels, the parent-child relationship in the adaptive zero-tree structure varies according to the curves along which the curved WT is performed. Five child patterns were determined based on different combinations of curve orientation. A new image coder was then developed based on this adaptive zero-tree structure and the set-partitioning technique. Experimental results using synthetic and natural images showed the effectiveness of the proposed adaptive zero-tree structure for encoding of the curved wavelet coefficients. The coding gain of the proposed coder can be up to 1.2 dB in terms of peak SNR (PSNR) compared to the SPIHT coder. Subjective evaluation shows that the proposed coder preserves lines and edges better than the SPIHT coder.
A CMOS Imager with Focal Plane Compression using Predictive Coding
NASA Technical Reports Server (NTRS)
Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.
2007-01-01
This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and it is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit, The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizerlcoder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 pm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm which includes an 80 X 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
A Probabilistic Analysis of Sparse Coded Feature Pooling and Its Application for Image Retrieval
Zhang, Yunchao; Chen, Jing; Huang, Xiujie; Wang, Yongtian
2015-01-01
Feature coding and pooling as a key component of image retrieval have been widely studied over the past several years. Recently sparse coding with max-pooling is regarded as the state-of-the-art for image classification. However there is no comprehensive study concerning the application of sparse coding for image retrieval. In this paper, we first analyze the effects of different sampling strategies for image retrieval, then we discuss feature pooling strategies on image retrieval performance with a probabilistic explanation in the context of sparse coding framework, and propose a modified sum pooling procedure which can improve the retrieval accuracy significantly. Further we apply sparse coding method to aggregate multiple types of features for large-scale image retrieval. Extensive experiments on commonly-used evaluation datasets demonstrate that our final compact image representation improves the retrieval accuracy significantly. PMID:26132080
Novel joint source-channel coding for wireless transmission of radiography images.
Watanabe, Katsuhiro; Takizawa, Kenichi; Ikegami, Tetsushi
2010-01-01
A wireless technology is required to realize robust transmission of medical images like a radiography image over noisy environment. The use of error correction technique is essential for realizing such a reliable communication, in which a suitable channel coding is introduced to correct erroneous bits caused by passing through a noisy channel. However, the use of a channel code decreases its efficiency because redundancy bits are also transmitted with information bits. This paper presents a joint source-channel coding which maintains the channel efficiency during transmission of medical images like a radiography image. As medical images under the test, we use typical radiography images in this paper. The joint coding technique enjoys correlations between pixels of the radiography image. The results show that the proposed joint coding provides capability to correcting erroneous bits without increasing the redundancy of the codeword.
Two-layer and Adaptive Entropy Coding Algorithms for H.264-based Lossless Image Coding
2008-04-01
adaptive binary arithmetic coding (CABAC) [7], and context-based adaptive variable length coding (CAVLC) [3], should be adaptively adopted for advancing...Sep. 2006. [7] H. Schwarz, D. Marpe and T. Wiegand, Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard, IEEE
Magnetohydrodynamics of fractal media
Tarasov, Vasily E.
2006-05-15
The fractal distribution of charged particles is considered. An example of this distribution is the charged particles that are distributed over the fractal. The fractional integrals are used to describe fractal distribution. These integrals are considered as approximations of integrals on fractals. Typical turbulent media could be of a fractal structure and the corresponding equations should be changed to include the fractal features of the media. The magnetohydrodynamics equations for fractal media are derived from the fractional generalization of integral Maxwell equations and integral hydrodynamics (balance) equations. Possible equilibrium states for these equations are considered.
Isherwood, Zoey J; Schira, Mark M; Spehar, Branka
2017-02-01
Natural scenes share a consistent distribution of energy across spatial frequencies (SF) known as the 1/f(α) amplitude spectrum (α≈0.8-1.5, mean 1.2). This distribution is scale-invariant, which is a fractal characteristic of natural scenes with statistically similar structure at different spatial scales. While the sensitivity of the visual system to the 1/f properties of natural scenes has been studied extensively using psychophysics, relatively little is known about the tuning of cortical responses to these properties. Here, we use fMRI and retinotopic mapping techniques to measure and analyze BOLD responses in early visual cortex (V1, V2, and V3) to synthetic noise images that vary in their 1/f(α) amplitude spectra (α=0.25 to 2.25, step size: 0.50) and contrast levels (10% and 30%) (Experiment 1). To compare the dependence of the BOLD response between the photometric (intensity based) and geometric (fractal) properties of our stimuli, in Experiment 2 we compared grayscale noise images to their binary (thresholded) counterparts, which contain only black and white regions. In both experiments, early visual cortex responded maximally to stimuli generated to have an input 1/f slope corresponding to natural 1/f(α) amplitude spectra, and lower BOLD responses were found for steeper or shallower 1/f slopes (peak modulation: 0.59% for 1.25 vs. 0.31% for 2.25). To control for changing receptive field sizes, responses were also analyzed across multiple eccentricity bands in cortical surface space. For most eccentricity bands, BOLD responses were maximal for natural 1/f(α) amplitude spectra, but importantly there was no difference in the BOLD response to grayscale stimuli and their corresponding thresholded counterparts. Since the thresholding of an image changes its measured 1/f slope (α) but not its fractal characteristics, this suggests that neuronal responses in early visual cortex are not strictly driven by spectral slope values (photometric properties) but
Application of Fractal Dimension on Palsar Data
NASA Astrophysics Data System (ADS)
Singh, Dharmendra; Pant, Triloki
Study of land cover is the primal task of remote sensing where microwave imaging plays an important role. As an alternate of optical imaging, microwave, in particular, Synthetic Aperture Radar (SAR) imaging is very popular. With the advancement of technology, multi-polarized images are now available, e.g., ALOS-PALSAR (Phased Array type L-band SAR), which are beneficial because each of the polarization channel shows different sensitivity to various land features. Further, using the textural features, various land classes can be classified on the basis of the textural measures. One of the textural measure is fractal dimension. It is noteworthy that fractal dimension is a measure of roughness and thus various land classes can be distinguished on the basis of their roughness. The value of fractal dimension for the surfaces lies between 2.0 and 3.0 where 2.0 represents a smooth surface while 3.0 represents drastically rough surface. The study area covers subset images lying between 2956'53"N, 7750'32"E and 2950'40"N, 7757'19"E. The PALSAR images of the year 2007 and 2009 are considered for the study. In present paper a fractal based classification of PALSAR images has been performed for identification of Water, Urban and Agricultural Area. Since fractals represent the image texture, hence the present study attempts to find the fractal properties of land covers to distinguish them from one another. For the purpose a context has been defined on the basis of a moving window, which is used to estimate the local fractal dimension and then moved over the whole image. The size of the window is an important issue for estimation of textural measures which is considered to be 55 in present study. This procedure, in response, produces a textural map called fractal map. The fractal map is constituted with the help of local fractal dimension values and can be used for contextual classification. In order to study the fractal properties of PALSAR images, the three polarization images
Mossotti, Victor G.; Eldeeb, A. Raouf; Oscarson, Robert
1998-01-01
MORPH-I is a set of C-language computer programs for the IBM PC and compatible minicomputers. The programs in MORPH-I are used for the fractal analysis of scanning electron microscope and electron microprobe images of pore profiles exposed in cross-section. The program isolates and traces the cross-sectional profiles of exposed pores and computes the Richardson fractal dimension for each pore. Other programs in the set provide for image calibration, display, and statistical analysis of the computed dimensions for highly complex porous materials. Requirements: IBM PC or compatible; minimum 640 K RAM; mathcoprocessor; SVGA graphics board providing mode 103 display.
Alotaibi, Ghazi S; Wu, Cynthia; Senthilselvan, Ambikaipakan; McMurtry, M Sean
2015-08-01
The purpose of this study was to evaluate the accuracy of using a combination of International Classification of Diseases (ICD) diagnostic codes and imaging procedure codes for identifying deep vein thrombosis (DVT) and pulmonary embolism (PE) within administrative databases. Information from the Alberta Health (AH) inpatients and ambulatory care administrative databases in Alberta, Canada was obtained for subjects with a documented imaging study result performed at a large teaching hospital in Alberta to exclude venous thromboembolism (VTE) between 2000 and 2010. In 1361 randomly-selected patients, the proportion of patients correctly classified by AH administrative data, using both ICD diagnostic codes and procedure codes, was determined for DVT and PE using diagnoses documented in patient charts as the gold standard. Of the 1361 patients, 712 had suspected PE and 649 had suspected DVT. The sensitivities for identifying patients with PE or DVT using administrative data were 74.83% (95% confidence interval [CI]: 67.01-81.62) and 75.24% (95% CI: 65.86-83.14), respectively. The specificities for PE or DVT were 91.86% (95% CI: 89.29-93.98) and 95.77% (95% CI: 93.72-97.30), respectively. In conclusion, when coupled with relevant imaging codes, VTE diagnostic codes obtained from administrative data provide a relatively sensitive and very specific method to ascertain acute VTE.
A blind dual color images watermarking based on IWT and state coding
NASA Astrophysics Data System (ADS)
Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu
2012-04-01
In this paper, a state-coding based blind watermarking algorithm is proposed to embed color image watermark to color host image. The technique of state coding, which makes the state code of data set be equal to the hiding watermark information, is introduced in this paper. When embedding watermark, using Integer Wavelet Transform (IWT) and the rules of state coding, these components, R, G and B, of color image watermark are embedded to these components, Y, Cr and Cb, of color host image. Moreover, the rules of state coding are also used to extract watermark from the watermarked image without resorting to the original watermark or original host image. Experimental results show that the proposed watermarking algorithm cannot only meet the demand on invisibility and robustness of the watermark, but also have well performance compared with other proposed methods considered in this work.
Fractals in art and nature: why do we like them?
NASA Astrophysics Data System (ADS)
Spehar, Branka; Taylor, Richard P.
2013-03-01
Fractals have experienced considerable success in quantifying the visual complexity exhibited by many natural patterns, and continue to capture the imagination of scientists and artists alike. Fractal patterns have also been noted for their aesthetic appeal, a suggestion further reinforced by the discovery that the poured patterns of the American abstract painter Jackson Pollock are also fractal, together with the findings that many forms of art resemble natural scenes in showing scale-invariant, fractal-like properties. While some have suggested that fractal-like patterns are inherently pleasing because they resemble natural patterns and scenes, the relation between the visual characteristics of fractals and their aesthetic appeal remains unclear. Motivated by our previous findings that humans display a consistent preference for a certain range of fractal dimension across fractal images of various types we turn to scale-specific processing of visual information to understand this relationship. Whereas our previous preference studies focused on fractal images consisting of black shapes on white backgrounds, here we extend our investigations to include grayscale images in which the intensity variations exhibit scale invariance. This scale-invariance is generated using a 1/f frequency distribution and can be tuned by varying the slope of the rotationally averaged Fourier amplitude spectrum. Thresholding the intensity of these images generates black and white fractals with equivalent scaling properties to the original grayscale images, allowing a direct comparison of preferences for grayscale and black and white fractals. We found no significant differences in preferences between the two groups of fractals. For both set of images, the visual preference peaked for images with the amplitude spectrum slopes from 1.25 to 1.5, thus confirming and extending the previously observed relationship between fractal characteristics of images and visual preference.
A new balanced modulation code for a phase-image-based holographic data storage system
NASA Astrophysics Data System (ADS)
John, Renu; Joseph, Joby; Singh, Kehar
2005-08-01
We propose a new balanced modulation code for coding data pages for phase-image-based holographic data storage systems. The new code addresses the coding subtleties associated with phase-based systems while performing a content-based search in a holographic database. The new code, which is a balanced modulation code, is a modification of the existing 8:12 modulation code, and removes the false hits that occur in phase-based content-addressable systems due to phase-pixel subtractions. We demonstrate the better performance of the new code using simulations and experiments in terms of discrimination ratio while content addressing through a holographic memory. The new code is compared with the conventional coding scheme to analyse the false hits due to subtraction of phase pixels.
Image sequence coding using 3D scene models
NASA Astrophysics Data System (ADS)
Girod, Bernd
1994-09-01
The implicit and explicit use of 3D models for image sequence coding is discussed. For implicit use, a 3D model can be incorporated into motion compensating prediction. A scheme that estimates the displacement vector field with a rigid body motion constraint by recovering epipolar lines from an unconstrained displacement estimate and then repeating block matching along the epipolar line is proposed. Experimental results show that an improved displacement vector field can be obtained with a rigid body motion constraint. As an example for explicit use, various results with a facial animation model for videotelephony are discussed. A 13 X 16 B-spline mask can be adapted automatically to individual faces and is used to generate facial expressions based on FACS. A depth-from-defocus range camera suitable for real-time facial motion tracking is described. Finally, the real-time facial animation system `Traugott' is presented that has been used to generate several hours of broadcast video. Experiments suggest that a videophone system based on facial animation might require a transmission bitrate of 1 kbit/s or below.
A novel approach to correct the coded aperture misalignment for fast neutron imaging
Zhang, F. N.; Hu, H. S. Wang, D. M.; Jia, J.; Zhang, T. K.; Jia, Q. G.
2015-12-15
Aperture alignment is crucial for the diagnosis of neutron imaging because it has significant impact on the coding imaging and the understanding of the neutron source. In our previous studies on the neutron imaging system with coded aperture for large field of view, “residual watermark,” certain extra information that overlies reconstructed image and has nothing to do with the source is discovered if the peak normalization is employed in genetic algorithms (GA) to reconstruct the source image. Some studies on basic properties of residual watermark indicate that the residual watermark can characterize coded aperture and can thus be used to determine the location of coded aperture relative to the system axis. In this paper, we have further analyzed the essential conditions for the existence of residual watermark and the requirements of the reconstruction algorithm for the emergence of residual watermark. A gamma coded imaging experiment has been performed to verify the existence of residual watermark. Based on the residual watermark, a correction method for the aperture misalignment has been studied. A multiple linear regression model of the position of coded aperture axis, the position of residual watermark center, and the gray barycenter of neutron source with twenty training samples has been set up. Using the regression model and verification samples, we have found the position of the coded aperture axis relative to the system axis with an accuracy of approximately 20 μm. Conclusively, a novel approach has been established to correct the coded aperture misalignment for fast neutron coded imaging.
Feltrin, Gian Pietro; Stramare, Roberto; Miotto, Diego; Giacomini, Dario; Saccavini, Claudio
2004-06-01
Fractal analysis is a quantitative method used to evaluate complex anatomic findings in their elementary component. Its application to biologic images, particularly to cancellous bones, has been well practiced within the past few years. The aims of these applications are to assess changes in bone and the loss of spongious architecture, indicate bone fragility, and to show the increased risk for fracture in primary or secondary osteoporosis. The applications are very promising to help complete the studies that can define bone density (bone mineral density by dual energy x-ray absorptiometry or quantitative computed tomography), and also have the capacity to distinguish the patients with a high or low risk for fracture. Their extension to the clinical fields, to define a test for fracture risk, is still limited by difficult application to the medical quantitative imaging of bones, between correct application at superficial bones and unreliable application to deep bones. The future evolution and validity do not depend upon fractal methods but upon well-detailed imaging of the bones in clinical conditions.
QR code based noise-free optical encryption and decryption of a gray scale image
NASA Astrophysics Data System (ADS)
Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-03-01
In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.
NASA Astrophysics Data System (ADS)
Orbach, R.
1986-02-01
Random structures often exhibit fractal geometry, defined in terms of the mass scaling exponent, D, the fractal dimension. The vibrational dynamics of fractal networks are expressed in terms of the exponent d double bar, the fracton dimensionality. The eigenstates on a fractal network are spatially localized for d double bar less than or equal to 2. The implications of fractal geometry are discussed for thermal transport on fractal networks. The electron-fracton interaction is developed, with a brief outline given for the time dependence of the electronic relaxation on fractal networks. It is suggested that amorphous or glassy materials may exhibit fractal properties at short length scales or, equivalently, at high energies. The calculations of physical properties can be used to test the fractal character of the vibrational excitations in these materials.
Study on GEANT4 code applications to dose calculation using imaging data
NASA Astrophysics Data System (ADS)
Lee, Jeong Ok; Kang, Jeong Ku; Kim, Jhin Kee; Kwon, Hyeong Cheol; Kim, Jung Soo; Kim, Bu Gil; Jeong, Dong Hyeok
2015-07-01
The use of the GEANT4 code has increased in the medical field. Various studies have calculated the patient dose distributions by users the GEANT4 code with imaging data. In present study, Monte Carlo simulations based on DICOM data were performed to calculate the dose absorb in the patient's body. Various visualization tools are installed in the GEANT4 code to display the detector construction; however, the display of DICOM images is limited. In addition, to displaying the dose distributions on the imaging data of the patient is difficult. Recently, the gMocren code, a volume visualization tool for GEANT4 simulation, was developed and has been used in volume visualization of image files. In this study, the imaging based on the dose distributions absorbed in the patients was performed by using the gMocren code. Dosimetric evaluations with were carried out by using thermo luminescent dosimeter and film dosimetry to verify the calculated results.
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.; Khramov, Alexander G.
2016-10-01
In this paper, we propose a report about our examining of the validity of OCT in identifying changes using a skin cancer texture analysis compiled from Haralick texture features, fractal dimension, Markov random field method and the complex directional features from different tissues. Described features have been used to detect specific spatial characteristics, which can differentiate healthy tissue from diverse skin cancers in cross-section OCT images (B- and/or C-scans). In this work, we used an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in OCT images. The Haralick texture features as contrast, correlation, energy, and homogeneity have been calculated in various directions. A box-counting method is performed to evaluate fractal dimension of skin probes. Markov random field have been used for the quality enhancing of the classifying. Additionally, we used the complex directional field calculated by the local gradient methodology to increase of the assessment quality of the diagnosis method. Our results demonstrate that these texture features may present helpful information to discriminate tumor from healthy tissue. The experimental data set contains 488 OCT-images with normal skin and tumors as Basal Cell Carcinoma (BCC), Malignant Melanoma (MM) and Nevus. All images were acquired from our laboratory SD-OCT setup based on broadband light source, delivering an output power of 20 mW at the central wavelength of 840 nm with a bandwidth of 25 nm. We obtained sensitivity about 97% and specificity about 73% for a task of discrimination between MM and Nevus.
Fractal antenna and fractal resonator primer
NASA Astrophysics Data System (ADS)
Cohen, Nathan
2015-03-01
Self-similarity and fractals have opened new and important avenues for antenna and electronic solutions over the last 25 years. This primer provides an introduction to the benefits provided by fractal geometry in antennas, resonators, and related structures. Such benefits include, among many, wider bandwidths, smaller sizes, part-less electronic components, and better performance. Fractals also provide a new generation of optimized design tools, first used successfully in antennas but applicable in a general fashion.
Snapshot 2D tomography via coded aperture x-ray scatter imaging
MacCabe, Kenneth P.; Holmgren, Andrew D.; Tornai, Martin P.; Brady, David J.
2015-01-01
This paper describes a fan beam coded aperture x-ray scatter imaging system which acquires a tomographic image from each snapshot. This technique exploits cylindrical symmetry of the scattering cross section to avoid the scanning motion typically required by projection tomography. We use a coded aperture with a harmonic dependence to determine range, and a shift code to determine cross-range. Here we use a forward-scatter configuration to image 2D objects and use serial exposures to acquire tomographic video of motion within a plane. Our reconstruction algorithm also estimates the angular dependence of the scattered radiance, a step toward materials imaging and identification. PMID:23842254
2012-03-01
Assessment of COMSCAN, A Compton Backscatter Imaging Camera , for the One-Sided Non-Destructive Inspection of Aerospace Compo- nents. Technical report...A PROGRAMMABLE LIQUID COLLIMATOR FOR BOTH CODED APERTURE ADAPTIVE IMAGING AND MULTIPLEXED COMPTON SCATTER TOMOGRAPHY THESIS Jack G. M. FitzGerald, 2d...LIQUID COLLIMATOR FOR BOTH CODED APERTURE ADAPTIVE IMAGING AND MULTIPLEXED COMPTON SCATTER TOMOGRAPHY THESIS Presented to the Faculty Department of
Coded illumination for motion-blur free imaging of cells on cell-phone based imaging flow cytometer
NASA Astrophysics Data System (ADS)
Saxena, Manish; Gorthi, Sai Siva
2014-10-01
Cell-phone based imaging flow cytometry can be realized by flowing cells through the microfluidic devices, and capturing their images with an optically enhanced camera of the cell-phone. Throughput in flow cytometers is usually enhanced by increasing the flow rate of cells. However, maximum frame rate of camera system limits the achievable flow rate. Beyond this, the images become highly blurred due to motion-smear. We propose to address this issue with coded illumination, which enables recovery of high-fidelity images of cells far beyond their motion-blur limit. This paper presents simulation results of deblurring the synthetically generated cell/bead images under such coded illumination.
Hashemi, S. M.; Jagodič, U.; Mozaffari, M. R.; Ejtehadi, M. R.; Muševič, I.; Ravnik, M.
2017-01-01
Fractals are remarkable examples of self-similarity where a structure or dynamic pattern is repeated over multiple spatial or time scales. However, little is known about how fractal stimuli such as fractal surfaces interact with their local environment if it exhibits order. Here we show geometry-induced formation of fractal defect states in Koch nematic colloids, exhibiting fractal self-similarity better than 90% over three orders of magnitude in the length scales, from micrometers to nanometres. We produce polymer Koch-shaped hollow colloidal prisms of three successive fractal iterations by direct laser writing, and characterize their coupling with the nematic by polarization microscopy and numerical modelling. Explicit generation of topological defect pairs is found, with the number of defects following exponential-law dependence and reaching few 100 already at fractal iteration four. This work demonstrates a route for generation of fractal topological defect states in responsive soft matter. PMID:28117325
Chaos, Fractals, and Polynomials.
ERIC Educational Resources Information Center
Tylee, J. Louis; Tylee, Thomas B.
1996-01-01
Discusses chaos theory; linear algebraic equations and the numerical solution of polynomials, including the use of the Newton-Raphson technique to find polynomial roots; fractals; search region and coordinate systems; convergence; and generating color fractals on a computer. (LRW)
Thamrin, Cindy; Stern, Georgette; Frey, Urs
2010-06-01
There is increasing interest in the study of fractals in medicine. In this review, we provide an overview of fractals, of techniques available to describe fractals in physiological data, and we propose some reasons why a physician might benefit from an understanding of fractals and fractal analysis, with an emphasis on paediatric respiratory medicine where possible. Among these reasons are the ubiquity of fractal organisation in nature and in the body, and how changes in this organisation over the lifespan provide insight into development and senescence. Fractal properties have also been shown to be altered in disease and even to predict the risk of worsening of disease. Finally, implications of a fractal organisation include robustness to errors during development, ability to adapt to surroundings, and the restoration of such organisation as targets for intervention and treatment.
NASA Astrophysics Data System (ADS)
Hashemi, S. M.; Jagodič, U.; Mozaffari, M. R.; Ejtehadi, M. R.; Muševič, I.; Ravnik, M.
2017-01-01
Fractals are remarkable examples of self-similarity where a structure or dynamic pattern is repeated over multiple spatial or time scales. However, little is known about how fractal stimuli such as fractal surfaces interact with their local environment if it exhibits order. Here we show geometry-induced formation of fractal defect states in Koch nematic colloids, exhibiting fractal self-similarity better than 90% over three orders of magnitude in the length scales, from micrometers to nanometres. We produce polymer Koch-shaped hollow colloidal prisms of three successive fractal iterations by direct laser writing, and characterize their coupling with the nematic by polarization microscopy and numerical modelling. Explicit generation of topological defect pairs is found, with the number of defects following exponential-law dependence and reaching few 100 already at fractal iteration four. This work demonstrates a route for generation of fractal topological defect states in responsive soft matter.
Hashemi, S M; Jagodič, U; Mozaffari, M R; Ejtehadi, M R; Muševič, I; Ravnik, M
2017-01-24
Fractals are remarkable examples of self-similarity where a structure or dynamic pattern is repeated over multiple spatial or time scales. However, little is known about how fractal stimuli such as fractal surfaces interact with their local environment if it exhibits order. Here we show geometry-induced formation of fractal defect states in Koch nematic colloids, exhibiting fractal self-similarity better than 90% over three orders of magnitude in the length scales, from micrometers to nanometres. We produce polymer Koch-shaped hollow colloidal prisms of three successive fractal iterations by direct laser writing, and characterize their coupling with the nematic by polarization microscopy and numerical modelling. Explicit generation of topological defect pairs is found, with the number of defects following exponential-law dependence and reaching few 100 already at fractal iteration four. This work demonstrates a route for generation of fractal topological defect states in responsive soft matter.
ERIC Educational Resources Information Center
Fraboni, Michael; Moller, Trisha
2008-01-01
Fractal geometry offers teachers great flexibility: It can be adapted to the level of the audience or to time constraints. Although easily explained, fractal geometry leads to rich and interesting mathematical complexities. In this article, the authors describe fractal geometry, explain the process of iteration, and provide a sample exercise.…
Fractal interpretation of intermittency
Hwa, R.C.
1991-12-01
Implication of intermittency in high-energy collisions is first discussed. Then follows a description of the fractal interpretation of intermittency. A basic quantity with asymptotic fractal behavior is introduced. It is then shown how the factorial moments and the G moments can be expressed in terms of it. The relationship between the intermittency indices and the fractal indices is made explicit.
Characterizing Hyperspectral Imagery (AVIRIS) Using Fractal Technique
NASA Technical Reports Server (NTRS)
Qiu, Hong-Lie; Lam, Nina Siu-Ngan; Quattrochi, Dale
1997-01-01
With the rapid increase in hyperspectral data acquired by various experimental hyperspectral imaging sensors, it is necessary to develop efficient and innovative tools to handle and analyze these data. The objective of this study is to seek effective spatial analytical tools for summarizing the spatial patterns of hyperspectral imaging data. In this paper, we (1) examine how fractal dimension D changes across spectral bands of hyperspectral imaging data and (2) determine the relationships between fractal dimension and image content. It has been documented that fractal dimension changes across spectral bands for the Landsat-TM data and its value [(D)] is largely a function of the complexity of the landscape under study. The newly available hyperspectral imaging data such as that from the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) which has 224 bands, covers a wider spectral range with a much finer spectral resolution. Our preliminary result shows that fractal dimension values of AVIRIS scenes from the Santa Monica Mountains in California vary between 2.25 and 2.99. However, high fractal dimension values (D > 2.8) are found only from spectral bands with high noise level and bands with good image quality have a fairly stable dimension value (D = 2.5 - 2.6). This suggests that D can also be used as a summary statistics to represent the image quality or content of spectral bands.
Wave-front coded optical readout for the MEMS-based uncooled infrared imaging system
NASA Astrophysics Data System (ADS)
Li, Tian; Zhao, Yuejin; Dong, Liquan; Liu, Xiaohua; Jia, Wei; Hui, Mei; Yu, Xiaomei; Gong, Cheng; Liu, Weiyu
2012-11-01
In the space limited infrared imaging system based MEMS, the adjustment of optical readout part is inconvenient. This paper proposed a method of wave-front coding to extend the depth of focus/field of the optical readout system, to solve the problem above, and to reduce the demanding for precision in processing and assemblage of the optical readout system itself as well. The wave-front coded imaging system consists of optical coding and digital decoding. By adding a CPM (Cubic Phase Mask) on the pupil plane, it becomes non-sensitive to defocussing within an extended range. The system has similar PSFs and almost equally blurred intermediate images can be obtained. Sharp images are supposed to be acquired based on image restoration algorithms, with the same PSF as a decoding core. We studied the conventional optical imaging system, which had the same optical performance with the wave-front coding one for comparing. Analogue imaging experiments were carried out. And one PSF was used as a simple direct inverse filter, for imaging restoration. Relatively sharp restored images were obtained. Comparatively, the analogue defocussing images of the conventional system were badly destroyed. Using the decrease of the MTF as a standard, we found the depth of focus/field of the wave-front coding system had been extended significantly.
Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging
NASA Astrophysics Data System (ADS)
Diaz, Nelson; Rueda, Hoover; Arguello, Henry
2016-05-01
Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms to yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. The proper design of the coded aperture entries leads to a good quality in the reconstruction. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, hence the design of coded apertures must consider saturation. The saturation errors in compressive measurements are unbounded and compressive sensing recovery algorithms only provide solutions for bounded noise or bounded with high probability. In this paper it is proposed the design of uniform adaptive grayscale coded apertures (UAGCA) to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements in the image reconstruction of the proposed method compared with grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA) in up to 10 dB.
Fast synchronization recovery for lossy image transmission with a suffix-rich Huffman code
NASA Astrophysics Data System (ADS)
Yang, Te-Chung; Kuo, C.-C. Jay
1998-10-01
A new entropy codec, which can recover quickly from the loss of synchronization due to the occurrence of transmission errors, is proposed and applied to wireless image transmission in this research. This entropy codec is designed based on the Huffman code with a careful choice of the assignment of 1's and 0's to each branch of the Huffman tree. The design satisfies the suffix-rich property, i.e. the number of a codeword to be the suffix of other codewords is maximized. After the Huffman coding tree is constructed, the source can be coded by using the traditional Huffman code. Thus, this coder does not introduce any overhead to sacrifice its coding efficiency. Statistically, the decoder can automatically recover the lost synchronization with the shortest error propagation length. Experimental results show that fast synchronization recovery reduces quality degradation on the reconstructed image while maintaining the same coding efficiency.
A comparison of the fractal and JPEG algorithms
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Shahshahani, M.
1991-01-01
A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root mean square criterion and a peak signal to noise criterion.
Quantitative evaluation of midpalatal suture maturation via fractal analysis
Kwak, Kyoung Ho; Kim, Yong-Il; Kim, Yong-Deok
2016-01-01
Objective The purpose of this study was to determine whether the results of fractal analysis can be used as criteria for midpalatal suture maturation evaluation. Methods The study included 131 subjects aged over 18 years of age (range 18.1–53.4 years) who underwent cone-beam computed tomography. Skeletonized images of the midpalatal suture were obtained via image processing software and used to calculate fractal dimensions. Correlations between maturation stage and fractal dimensions were calculated using Spearman's correlation coefficient. Optimal fractal dimension cut-off values were determined using a receiver operating characteristic curve. Results The distribution of maturation stages of the midpalatal suture according to the cervical vertebrae maturation index was highly variable, and there was a strong negative correlation between maturation stage and fractal dimension (−0.623, p < 0.001). Fractal dimension was a statistically significant indicator of dichotomous results with regard to maturation stage (area under curve = 0.794, p < 0.001). A test in which fractal dimension was used to predict the resulting variable that splits maturation stages into ABC and D or E yielded an optimal fractal dimension cut-off value of 1.0235. Conclusions There was a strong negative correlation between fractal dimension and midpalatal suture maturation. Fractal analysis is an objective quantitative method, and therefore we suggest that it may be useful for the evaluation of midpalatal suture maturation. PMID:27668195
[Fractal analysis in the diagnosis of breast tumors].
Crişan, D A; Lesaru, M; Dobrescu, R; Vasilescu, C
2007-01-01
Last years studies made by researchers from over the world show that fractal geometry is a viable alternative for image analysis. Fractal features of natural forms give to fractal analysis new valences in various fields, medical imaging being a very important one. This paper intend to prove that fractal dimension, as a way to characterize the complexity of a form, can be used for diagnosis of mammographic lesions classified BI-RADS 4, further investigations being not necessary. The experiments made on 30 cases classified BI-RADS 4 confirmed that 89% of benign lesions have an average fractal dimension under the threshold 1.4, meanwhile malign lesions are characterized, in a similar percentage, by an average fractal dimension over that threshold.
Oliveira, Naiara C; Silva, João H; Barros, Olga A; Pinheiro, Allysson P; Santana, William; Saraiva, Antônio A F; Ferreira, Odair P; Freire, Paulo T C; Paula, Amauri J
2015-10-06
We used here a scanning electron microscopy approach that detected backscattered electrons (BSEs) and X-rays (from ionization processes) along a large-field (LF) scan, applied on a Cretaceous fossil of a shrimp (area ∼280 mm(2)) from the Araripe Sedimentary Basin. High-definition LF images from BSEs and X-rays were essentially generated by assembling thousands of magnified images that covered the whole area of the fossil, thus unveiling morphological and compositional aspects at length scales from micrometers to centimeters. Morphological features of the shrimp such as pleopods, pereopods, and antennae located at near-surface layers (undetected by photography techniques) were unveiled in detail by LF BSE images and in calcium and phosphorus elemental maps (mineralized as hydroxyapatite). LF elemental maps for zinc and sulfur indicated a rare fossilization event observed for the first time in fossils from the Araripe Sedimentary Basin: the mineralization of zinc sulfide interfacing to hydroxyapatite in the fossil. Finally, a dimensional analysis of the phosphorus map led to an important finding: the existence of a fractal characteristic (D = 1.63) for the hydroxyapatite-matrix interface, a result of physical-geological events occurring with spatial scale invariance on the specimen, over millions of years.
NASA Astrophysics Data System (ADS)
Maitra, Sanjit; Gartley, Michael G.; Kerekes, John P.
2012-06-01
Polarimetric image classification is sensitive to object orientation and scattering properties. This paper is a preliminary step to bridge the gap between visible wavelength polarimetric imaging and polarimetric SAR (POLSAR) imaging scattering mechanisms. In visible wavelength polarimetric imaging, the degree of linear polarization (DOLP) is widely used to represent the polarized component of the wave scattered from the objects in the scene. For Polarimetric SAR image representation, the Pauli color coding is used, which is based on linear combinations of scattering matrix elements. This paper presents a relation between DOLP and the Pauli decomposition components from the color coded Pauli reconstructed image based on laboratory measurements and first principle physics based image simulations. The objects in the scene are selected in such a way that it captures the three major scattering mechanisms such as the single or odd bounce, double or even bounce and volume scattering. The comparison is done between visible passive polarimetric imaging, active visible polarimetric imaging and active radio frequency POLSAR. The DOLP images are compared with the Pauli Color coded image with |HH-VV|, |HV|, |HH +VV| as the RGB channels. From the images, it is seen that the regions with high DOLP values showed high values of the HH component. This means the Pauli color coded image showed comparatively higher value of HH component for higher DOLP compared to other polarimetric components implying double bounce reflection. The comparison of the scattering mechanisms will help to create a synergy between POLSAR and visible wavelength polarimetric imaging and the idea can be further extended for image fusion.
NASA Astrophysics Data System (ADS)
Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan
2015-10-01
In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with QR code and compressive sensing (CS) technique, named QR-CGI-OE scheme. N random phase screens, generated by Alice, is a secret key and be shared with its authorized user, Bob. The information is first encoded by Alice with QR code, and the QR-coded image is then encrypted with the aid of computational ghost imaging optical system. Here, measurement results from the GI optical system's bucket detector are the encrypted information and be transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerical simulated results show that the authorized users can recover completely the original image, whereas the eavesdroppers can not acquire any information about the image even the eavesdropping ratio (ER) is up to 60% at the given measurement times. For the proposed scheme, the number of bits sent from Alice to Bob are reduced considerably and the robustness is enhanced significantly. Meantime, the measurement times in GI system is reduced and the quality of the reconstructed QR-coded image is improved.
A Contourlet-Based Embedded Image Coding Scheme on Low Bit-Rate
NASA Astrophysics Data System (ADS)
Song, Haohao; Yu, Songyu
Contourlet transform (CT) is a new image representation method, which can efficiently represent contours and textures in images. However, CT is a kind of overcomplete transform with a redundancy factor of 4/3. If it is applied to image compression straightforwardly, the encoding bit-rate may increase to meet a given distortion. This fact baffles the coding community to develop CT-based image compression techniques with satisfactory performance. In this paper, we analyze the distribution of significant contourlet coefficients in different subbands and propose a new contourlet-based embedded image coding (CEIC) scheme on low bit-rate. The well-known wavelet-based embedded image coding (WEIC) algorithms such as EZW, SPIHT and SPECK can be easily integrated into the proposed scheme by constructing a virtual low frequency subband, modifying the coding framework of WEIC algorithms according to the structure of contourlet coefficients, and adopting a high-efficiency significant coefficient scanning scheme for CEIC scheme. The proposed CEIC scheme can provide an embedded bit-stream, which is desirable in heterogeneous networks. Our experiments demonstrate that the proposed scheme can achieve the better compression performance on low bit-rate. Furthermore, thanks to the contourlet adopted in the proposed scheme, more contours and textures in the coded images are preserved to ensure the superior subjective quality.
Restoring wavefront coded iris image through the optical parameter and regularization filter
NASA Astrophysics Data System (ADS)
Li, Yingjiao; He, Yuqing; Feng, Guangqin; Hou, Yushi; Liu, Yong
2011-11-01
Wavefront coding technology can extend the depth of field of the iris imaging system, but the iris image obtained through the system is coded and blurred and can't be used for the recognition algorithm directly. The paper presents a fuzzy iris image restoration method used in the wavefront coding system. After the restoration, the images can be used for the following processing. Firstly, the wavefront coded imaging system is simulated and the optical parameter is analyzed, through the simulation we can get the system's point spread function (PSF). Secondly, using the blurred iris image and PSF to do a blind restoration to estimate a appropriate PSFe. Finally, based on the return value PSFe of PSF, applying the regularization filter on the blurred image. Experimental results show that the proposed method is simple and has fast processing speed. Compared with the traditional restoration algorithms of Wiener filtering and Lucy-Richardson filtering, the recovery image that got through the regularization filtering is the most similar with the original iris image.
An improved coding technique for image encryption and key management
NASA Astrophysics Data System (ADS)
Wu, Xu; Ma, Jie; Hu, Jiasheng
2005-02-01
An improved chaotic algorithm for image encryption on the basis of conventional chaotic encryption algorithm is proposed. Two keys are presented in our technique. One is called private key, which is fixed and protected in the system. The other is named assistant key, which is public and transferred with the encrypted image together. For different original image, different assistant key should be chosen so that one could get different encrypted key. The updated encryption algorithm not only can resist a known-plaintext attack, but also offers an effective solution for key management. The analyses and the computer simulations show that the security is improved greatly, and can be easily realized with hardware.
Medical Image Compression Using a New Subband Coding Method
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug
1995-01-01
A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.
Edges of Saturn's rings are fractal.
Li, Jun; Ostoja-Starzewski, Martin
2015-01-01
The images recently sent by the Cassini spacecraft mission (on the NASA website http://saturn.jpl.nasa.gov/photos/halloffame/) show the complex and beautiful rings of Saturn. Over the past few decades, various conjectures were advanced that Saturn's rings are Cantor-like sets, although no convincing fractal analysis of actual images has ever appeared. Here we focus on four images sent by the Cassini spacecraft mission (slide #42 "Mapping Clumps in Saturn's Rings", slide #54 "Scattered Sunshine", slide #66 taken two weeks before the planet's Augus't 200'9 equinox, and slide #68 showing edge waves raised by Daphnis on the Keeler Gap) and one image from the Voyager 2' mission in 1981. Using three box-counting methods, we determine the fractal dimension of edges of rings seen here to be consistently about 1.63 ~ 1.78. This clarifies in what sense Saturn's rings are fractal.
Fractal analysis of DNA sequence data
Berthelsen, C.L.
1993-01-01
DNA sequence databases are growing at an almost exponential rate. New analysis methods are needed to extract knowledge about the organization of nucleotides from this vast amount of data. Fractal analysis is a new scientific paradigm that has been used successfully in many domains including the biological and physical sciences. Biological growth is a nonlinear dynamic process and some have suggested that to consider fractal geometry as a biological design principle may be most productive. This research is an exploratory study of the application of fractal analysis to DNA sequence data. A simple random fractal, the random walk, is used to represent DNA sequences. The fractal dimension of these walks is then estimated using the [open quote]sandbox method[close quote]. Analysis of 164 human DNA sequences compared to three types of control sequences (random, base-content matched, and dimer-content matched) reveals that long-range correlations are present in DNA that are not explained by base or dimer frequencies. The study also revealed that the fractal dimension of coding sequences was significantly lower than sequences that were primarily noncoding, indicating the presence of longer-range correlations in functional sequences. The multifractal spectrum is used to analyze fractals that are heterogeneous and have a different fractal dimension for subsets with different scalings. The multifractal spectrum of the random walks of twelve mitochondrial genome sequences was estimated. Eight vertebrate mtDNA sequences had uniformly lower spectra values than did four invertebrate mtDNA sequences. Thus, vertebrate mitochondria show significantly longer-range correlations than to invertebrate mitochondria. The higher multifractal spectra values for invertebrate mitochondria suggest a more random organization of the sequences. This research also includes considerable theoretical work on the effects of finite size, embedding dimension, and scaling ranges.
Compressive optical remote sensing via fractal classification
NASA Astrophysics Data System (ADS)
Sun, Quan-sen; Liu, Ji-xin
2015-11-01
High resolution and large field of view are two major development trends in optical remote sensing imaging. But these trends will cause the difficult problem of mass data processing and remote sensor design under the limitation of conventional sampling method. Therefore, we will propose a novel optical remote sensing imaging method based on compressed sensing theory and fractal feature extraction in this study. We could utilize the result of fractal classification to realize the selectable partitioned image recovery with undersampling measurement. The two experiments illustrate the availability and feasibility of this new method.
Hands-On Fractals and the Unexpected in Mathematics
ERIC Educational Resources Information Center
Gluchoff, Alan
2006-01-01
This article describes a hands-on project in which unusual fractal images are produced using only a photocopy machine and office supplies. The resulting images are an example of the contraction mapping principle.
Compressive Sampling based Image Coding for Resource-deficient Visual Communication.
Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen
2016-04-14
In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements and placed in the original spatial configuration. The advantages of local random measurements are two folds: 1) preserve high-frequency image features that are otherwise discarded by low-pass filtering; 2) remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy of larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image and therefore the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from received measurements in a framework of compressive sensing. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength of recovering fine details and sharp edges at low bit-rates.
NASA Astrophysics Data System (ADS)
Kozlova, A. S.
2016-02-01
Special features of an algorithm for coding-decoding of digital particle holograms and restoration of holographic particle images based on the Gabor wavelet are considered. The method involves the application of the decoded wavelet coefficients for the subsequent restoration of images from digital holograms. Results of approbation of the method to numerically calculated holograms and holograms of plankton particles are presented.
Image gathering and coding for digital restoration: Information efficiency and visual quality
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; John, Sarah; Mccormick, Judith A.; Narayanswamy, Ramkumar
1989-01-01
Image gathering and coding are commonly treated as tasks separate from each other and from the digital processing used to restore and enhance the images. The goal is to develop a method that allows us to assess quantitatively the combined performance of image gathering and coding for the digital restoration of images with high visual quality. Digital restoration is often interactive because visual quality depends on perceptual rather than mathematical considerations, and these considerations vary with the target, the application, and the observer. The approach is based on the theoretical treatment of image gathering as a communication channel (J. Opt. Soc. Am. A2, 1644(1985);5,285(1988). Initial results suggest that the practical upper limit of the information contained in the acquired image data range typically from approximately 2 to 4 binary information units (bifs) per sample, depending on the design of the image-gathering system. The associated information efficiency of the transmitted data (i.e., the ratio of information over data) ranges typically from approximately 0.3 to 0.5 bif per bit without coding to approximately 0.5 to 0.9 bif per bit with lossless predictive compression and Huffman coding. The visual quality that can be attained with interactive image restoration improves perceptibly as the available information increases to approximately 3 bifs per sample. However, the perceptual improvements that can be attained with further increases in information are very subtle and depend on the target and the desired enhancement.
Exploring Fractals in the Classroom.
ERIC Educational Resources Information Center
Naylor, Michael
1999-01-01
Describes an activity involving six investigations. Introduces students to fractals, allows them to study the properties of some famous fractals, and encourages them to create their own fractal artwork. Contains 14 references. (ASK)
Yang, Chunwei; Liu, Huaping; Wang, Shicheng; Liao, Shouyi
2016-03-18
Scene-level geographic image classification has been a very challenging problem and has become a research focus in recent years. This paper develops a supervised collaborative kernel coding method based on a covariance descriptor (covd) for scene-level geographic image classification. First, covd is introduced in the feature extraction process and, then, is transformed to a Euclidean feature by a supervised collaborative kernel coding model. Furthermore, we develop an iterative optimization framework to solve this model. Comprehensive evaluations on public high-resolution aerial image dataset and comparisons with state-of-the-art methods show the superiority and effectiveness of our approach.
Yang, Chunwei; Liu, Huaping; Wang, Shicheng; Liao, Shouyi
2016-01-01
Scene-level geographic image classification has been a very challenging problem and has become a research focus in recent years. This paper develops a supervised collaborative kernel coding method based on a covariance descriptor (covd) for scene-level geographic image classification. First, covd is introduced in the feature extraction process and, then, is transformed to a Euclidean feature by a supervised collaborative kernel coding model. Furthermore, we develop an iterative optimization framework to solve this model. Comprehensive evaluations on public high-resolution aerial image dataset and comparisons with state-of-the-art methods show the superiority and effectiveness of our approach. PMID:26999150
Fractals: To Know, to Do, to Simulate.
ERIC Educational Resources Information Center
Talanquer, Vicente; Irazoque, Glinda
1993-01-01
Discusses the development of fractal theory and suggests fractal aggregates as an attractive alternative for introducing fractal concepts. Describes methods for producing metallic fractals and a computer simulation for drawing fractals. (MVL)
NASA Astrophysics Data System (ADS)
Knutson, Paul; Dahlberg, E. Dan
2003-10-01
In examples of fractals such as moon craters, rivers,2 cauliflower,3 and bread,4 the actual growth process of the fractal object is missed. In the simple experiment described here, one can observe and record the growth of calcium carbonate crystals — a ubiquitous material found in marble and seashells — in real time. The video frames can be digitized and analyzed to determine the fractal dimension.
Fractal Geometry of Architecture
NASA Astrophysics Data System (ADS)
Lorenz, Wolfgang E.
In Fractals smaller parts and the whole are linked together. Fractals are self-similar, as those parts are, at least approximately, scaled-down copies of the rough whole. In architecture, such a concept has also been known for a long time. Not only architects of the twentieth century called for an overall idea that is mirrored in every single detail, but also Gothic cathedrals and Indian temples offer self-similarity. This study mainly focuses upon the question whether this concept of self-similarity makes architecture with fractal properties more diverse and interesting than Euclidean Modern architecture. The first part gives an introduction and explains Fractal properties in various natural and architectural objects, presenting the underlying structure by computer programmed renderings. In this connection, differences between the fractal, architectural concept and true, mathematical Fractals are worked out to become aware of limits. This is the basis for dealing with the problem whether fractal-like architecture, particularly facades, can be measured so that different designs can be compared with each other under the aspect of fractal properties. Finally the usability of the Box-Counting Method, an easy-to-use measurement method of Fractal Dimension is analyzed with regard to architecture.
Low-bit-rate subband image coding with matching pursuits
NASA Astrophysics Data System (ADS)
Rabiee, Hamid; Safavian, S. R.; Gardos, Thomas R.; Mirani, A. J.
1998-01-01
In this paper, a novel multiresolution algorithm for low bit-rate image compression is presented. High quality low bit-rate image compression is achieved by first decomposing the image into approximation and detail subimages with a shift-orthogonal multiresolution analysis. Then, at the coarsest resolution level, the coefficients of the transformation are encoded by an orthogonal matching pursuit algorithm with a wavelet packet dictionary. Our dictionary consists of convolutional splines of up to order two for the detail and approximation subbands. The intercorrelation between the various resolutions is then exploited by using the same bases from the dictionary to encode the coefficients of the finer resolution bands at the corresponding spatial locations. To further exploit the spatial correlation of the coefficients, the zero trees of wavelets (EZW) algorithm was used to identify the potential zero trees. The coefficients of the presentation are then quantized and arithmetic encoded at each resolution, and packed into a scalable bit stream structure. Our new algorithm is highly bit-rate scalable, and performs better than the segmentation based matching pursuit and EZW encoders at lower bit rates, based on subjective image quality and peak signal-to-noise ratio.
GENERATING FRACTAL PATTERNS BY USING p-CIRCLE INVERSION
NASA Astrophysics Data System (ADS)
Ramírez, José L.; Rubiano, Gustavo N.; Zlobec, Borut Jurčič
2015-10-01
In this paper, we introduce the p-circle inversion which generalizes the classical inversion with respect to a circle (p = 2) and the taxicab inversion (p = 1). We study some basic properties and we also show the inversive images of some basic curves. We apply this new transformation to well-known fractals such as Sierpinski triangle, Koch curve, dragon curve, Fibonacci fractal, among others. Then we obtain new fractal patterns. Moreover, we generalize the method called circle inversion fractal be means of the p-circle inversion.
Automatic detection of microcalcifications with multi-fractal spectrum.
Ding, Yong; Dai, Hang; Zhang, Hang
2014-01-01
For improving the detection of micro-calcifications (MCs), this paper proposes an automatic detection of MC system making use of multi-fractal spectrum in digitized mammograms. The approach of automatic detection system is based on the principle that normal tissues possess certain fractal properties which change along with the presence of MCs. In this system, multi-fractal spectrum is applied to reveal such fractal properties. By quantifying the deviations of multi-fractal spectrums between normal tissues and MCs, the system can identify MCs altering the fractal properties and finally locate the position of MCs. The performance of the proposed system is compared with the leading automatic detection systems in a mammographic image database. Experimental results demonstrate that the proposed system is statistically superior to most of the compared systems and delivers a superior performance.
Investigations of human EEG response to viewing fractal patterns.
Hagerhall, Caroline M; Laike, Thorbjörn; Taylor, Richard P; Küller, Marianne; Küller, Rikard; Martin, Theodore P
2008-01-01
Owing to the prevalence of fractal patterns in natural scenery and their growing impact on cultures around the world, fractals constitute a common feature of our daily visual experiences, raising an important question: what responses do fractals induce in the observer? We monitored subjects' EEG while they were viewing fractals with different fractal dimensions, and the results show that significant effects could be found in the EEG even by employing relatively simple silhouette images. Patterns with a fractal dimension of 1.3 elicited the most interesting EEG, with the highest alpha in the frontal lobes but also the highest beta in the parietal area, pointing to a complicated interplay between different parts of the brain when experiencing this pattern.
DCT/DST-based transform coding for intra prediction in image/video coding.
Saxena, Ankur; Fernandes, Felix C
2013-10-01
In this paper, we present a DCT/DST based transform scheme that applies either the conventional DCT or type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra prediction scheme in a codec that employs transforms along the horizontal and vertical direction separably. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of horizontal and vertical modes, the DST is the optimal transform with performance close to the KLT. Here, we prove that this is indeed the case for the other oblique modes. The optimal choice of using DCT or DST is based on intra-prediction modes and requires no additional signaling information or rate-distortion search. The DCT/DST scheme presented in this paper was adopted in the HEVC standardization in March 2011. Further simplifications, especially to reduce implementation complexity, which remove the mode-dependency between DCT and DST, and simply always use DST for the 4 × 4 intra luma blocks, were adopted in the HEVC standard in July 2012. Simulation results conducted for the DCT/DST algorithm are shown in the reference software for the ongoing HEVC standardization. Our results show that the DCT/DST scheme provides significant BD-rate improvement over the conventional DCT based scheme for intra prediction in video sequences.
Adaptive bit truncation and compensation method for EZW image coding
NASA Astrophysics Data System (ADS)
Dai, Sheng-Kui; Zhu, Guangxi; Wang, Yao
2003-09-01
The embedded zero-tree wavelet algorithm (EZW) is widely adopted to compress wavelet coefficients of images with the property that the bits stream can be truncated and produced anywhere. The lower bit plane of the wavelet coefficents is verified to be less important than the higher bit plane. Therefore it can be truncated and not encoded. Based on experiments, a generalized function, which can provide a glancing guide for EZW encoder to intelligently decide the number of low bit plane to be truncated, is deduced in this paper. In the EZW decoder, a simple method is presented to compensate for the truncated wavelet coefficients, and finally it can surprisingly enhance the quality of reconstructed image and spend scarcely any additional cost at the same time.
NASA Astrophysics Data System (ADS)
Chang, Kuo-En; Lin, Tang-Huang; Lien, Wei-Hung
2015-04-01
Anthropogenic pollutants or smoke from biomass burning contribute significantly to global particle aggregation emissions, yet their aggregate formation and resulting ensemble optical properties are poorly understood and parameterized in climate models. Particle aggregation refers to formation of clusters in a colloidal suspension. In clustering algorithms, many parameters, such as fractal dimension, number of monomers, radius of monomer, and refractive index real part and image part, will alter the geometries and characteristics of the fractal aggregation and change ensemble optical properties further. The cluster-cluster aggregation algorithm (CCA) is used to specify the geometries of soot and haze particles. In addition, the Generalized Multi-particle Mie (GMM) method is utilized to compute the Mie solution from a single particle to the multi particle case. This computer code for the calculation of the scattering by an aggregate of spheres in a fixed orientation and the experimental data have been made publicly available. This study for the model inputs of optical determination of the monomer radius, the number of monomers per cluster, and the fractal dimension is presented. The main aim in this study is to analyze and contrast several parameters of cluster aggregation aforementioned which demonstrate significant differences of optical properties using the GMM method finally. Keywords: optical properties, fractal aggregation, GMM, CCA
NASA Astrophysics Data System (ADS)
Uzol, Oguz; Hazaveh, Hooman Amiri
2016-11-01
The 3D flow structure within the near wake of a four-iteration fractal square turbulence grid is obtained by combining data from closely-spaced horizontal and vertical two-dimensional PIV measurement planes. For this purpose, the grid is placed inside the entrance of the test section of a suction type wind tunnel. The experiments are conducted at a freestream velocity of 8 m/s, corresponding to a Reynolds number of 9000 based on effective mesh size. The freestream turbulence intensity is about 0.5%. The measurement volume extends about 7 effective mesh sizes downstream of the grid. Within the measurement volume, 1000 vector maps are obtained on each one of the 220 horizontal and 220 vertical 2D PIV planes that are separated from each other by 500 microns, which is close to the in-plane vector spacing of 600 microns. This dataset allowed us to calculate all components of mean velocity, velocity gradient tensor, and vorticity as well as the Reynolds stress tensor except for (v'w') component. Using the analyzed three-dimensional mean flow field data, we are able to observe the three dimensional structure of the wakes of the largest bars that dominate the flow field and the corresponding turbulence generation characteristics. We focus on how the wake geometry, decay characteristics and stress-strain relations get impacted by the presence of smaller wakes surrounding the largest wake regions. We also investigate how 3D mean flow non-uniformities get generated downstream of the grid such as lateral contraction and bulging of mean wake shapes.
Fractal dimension and architecture of trabecular bone.
Fazzalari, N L; Parkinson, I H
1996-01-01
The fractal dimension of trabecular bone was determined for biopsies from the proximal femur of 25 subjects undergoing hip arthroplasty. The average age was 67.7 years. A binary profile of the trabecular bone in the biopsy was obtained from a digitized image. A program written for the Quantimet 520 performed the fractal analysis. The fractal dimension was calculated for each specimen, using boxes whose sides ranged from 65 to 1000 microns in length. The mean fractal dimension for the 25 subjects was 1.195 +/- 0.064 and shows that in Euclidean terms the surface extent of trabecular bone is indeterminate. The Quantimet 520 was also used to perform bone histomorphometric measurements. These were bone volume/total volume (BV/TV) (per cent) = 11.05 +/- 4.38, bone surface/total volume (BS/TV) (mm2/mm3) = 1.90 +/- 0.51, trabecular thickness (Tb.Th) (mm) = 0.12 +/- 0.03, trabecular spacing (Tb.Sp) (mm) = 1.03 +/- 0.36, and trabecular number (Tb.N) (number/mm) = 0.95 +/- 0.25. Pearsons' correlation coefficients showed a statistically significant relationship between the fractal dimension and all the histomorphometric parameters, with BV/TV (r = 0.85, P < 0.0001), BS/TV (r = 0.74, P < 0.0001), Tb.Th (r = 0.50, P < 0.02), Tb.Sp (r = -0.81, P < 0.0001), and Tb.N (r = 0.76, P < 0.0001). This method for calculating fractal dimension shows that trabecular bone exhibits fractal properties over a defined box size, which is within the dimensions of a structural unit for trabecular bone. Therefore, the fractal dimension of trabecular bone provides a measure which does not rely on Euclidean descriptors in order to describe a complex geometry.
NASA Astrophysics Data System (ADS)
Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin
2016-05-01
One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has been yet discovered between the structure of the visual stimulus, and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to the ‘complex’ visual stimulus. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain, as the main part of nervous system that is engaged in eye movements, we analyzed the governed Electroencephalogram (EEG) signal during fixation. We have found out that there is a coupling between fractality of image, EEG and fixational eye movements. The capability observed in this research can be further investigated and applied for treatment of different vision disorders.
Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin
2016-01-01
One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has been yet discovered between the structure of the visual stimulus, and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to the ‘complex’ visual stimulus. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain, as the main part of nervous system that is engaged in eye movements, we analyzed the governed Electroencephalogram (EEG) signal during fixation. We have found out that there is a coupling between fractality of image, EEG and fixational eye movements. The capability observed in this research can be further investigated and applied for treatment of different vision disorders. PMID:27217194
Non-Uniform Contrast and Noise Correction for Coded Source Neutron Imaging
Santos-Villalobos, Hector J; Bingham, Philip R
2012-01-01
Since the first application of neutron radiography in the 1930s, the field of neutron radiography has matured enough to develop several applications. However, advances in the technology are far from concluded. In general, the resolution of scintillator-based detection systems is limited to the $10\\mu m$ range, and the relatively low neutron count rate of neutron sources compared to other illumination sources restricts time resolved measurement. One path toward improved resolution is the use of magnification; however, to date neutron optics are inefficient, expensive, and difficult to develop. There is a clear demand for cost-effective scintillator-based neutron imaging systems that achieve resolutions of $1 \\mu m$ or less. Such imaging system would dramatically extend the application of neutron imaging. For such purposes a coded source imaging system is under development. The current challenge is to reduce artifacts in the reconstructed coded source images. Artifacts are generated by non-uniform illumination of the source, gamma rays, dark current at the imaging sensor, and system noise from the reconstruction kernel. In this paper, we describe how to pre-process the coded signal to reduce noise and non-uniform illumination, and how to reconstruct the coded signal with three reconstruction methods correlation, maximum likelihood estimation, and algebraic reconstruction technique. We illustrates our results with experimental examples.
Color-Coded Super-Resolution Small-Molecule Imaging.
Beuzer, Paolo; La Clair, James J; Cang, Hu
2016-06-02
Although the development of super-resolution microscopy dates back to 1994, its applications have been primarily focused on visualizing cellular structures and targets, including proteins, DNA and sugars. We now report on a system that allows both monitoring of the localization of exogenous small molecules in live cells at low resolution and subsequent super-resolution imaging by using stochastic optical reconstruction microscopy (STORM) on fixed cells. This represents a powerful new tool to understand the dynamics of subcellular trafficking associated with the mode and mechanism of action of exogenous small molecules.
Analysis of image content recognition algorithm based on sparse coding and machine learning
NASA Astrophysics Data System (ADS)
Xiao, Yu
2017-03-01
This paper presents an image classification algorithm based on spatial sparse coding model and random forest. Firstly, SIFT feature extraction of the image; and then use the sparse encoding theory to generate visual vocabulary based on SIFT features, and using the visual vocabulary of SIFT features into a sparse vector; through the combination of regional integration and spatial sparse vector, the sparse vector gets a fixed dimension is used to represent the image; at last random forest classifier for image sparse vectors for training and testing, using the experimental data set for standard test Caltech-101 and Scene-15. The experimental results show that the proposed algorithm can effectively represent the features of the image and improve the classification accuracy. In this paper, we propose an innovative image recognition algorithm based on image segmentation, sparse coding and multi instance learning. This algorithm introduces the concept of multi instance learning, the image as a multi instance bag, sparse feature transformation by SIFT images as instances, sparse encoding model generation visual vocabulary as the feature space is mapped to the feature space through the statistics on the number of instances in bags, and then use the 1-norm SVM to classify images and generate sample weights to select important image features.
Lensless coded-aperture imaging with separable Doubly-Toeplitz masks
NASA Astrophysics Data System (ADS)
DeWeert, Michael J.; Farm, Brian P.
2015-02-01
In certain imaging applications, conventional lens technology is constrained by the lack of materials which can effectively focus the radiation within a reasonable weight and volume. One solution is to use coded apertures-opaque plates perforated with multiple pinhole-like openings. If the openings are arranged in an appropriate pattern, then the images can be decoded and a clear image computed. Recently, computational imaging and the search for a means of producing programmable software-defined optics have revived interest in coded apertures. The former state-of-the-art masks, modified uniformly redundant arrays (MURAs), are effective for compact objects against uniform backgrounds, but have substantial drawbacks for extended scenes: (1) MURAs present an inherently ill-posed inversion problem that is unmanageable for large images, and (2) they are susceptible to diffraction: a diffracted MURA is no longer a MURA. We present a new class of coded apertures, separable Doubly-Toeplitz masks, which are efficiently decodable even for very large images-orders of magnitude faster than MURAs, and which remain decodable when diffracted. We implemented the masks using programmable spatial-light-modulators. Imaging experiments confirmed the effectiveness of separable Doubly-Toeplitz masks-images collected in natural light of extended outdoor scenes are rendered clearly.
Chen, Jing; Wang, Yongtian; Wu, Hanxiao
2012-10-29
In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance systems. A parallel coded aperture compressive imaging system is proposed to reduce the needed high resolution coded mask requirements and facilitate the storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding motion targets detection and tracking algorithms directly using the compressive sampling images are developed. A mixture of Gaussian distribution is applied in the compressive image space to model the background image and for foreground detection. For each motion target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noises templates is sparsely represented. An l(1) optimization algorithm is used to solve the sparse coefficient of templates. Experimental results demonstrate that low dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase mask, motion detection algorithms using a random binary phase mask can yield better detection results. However using random Gaussian and Toeplitz phase mask can achieve high resolution reconstructed image. Our tracking algorithm can achieve a real time speed that is up to 10 times faster than that of the l(1) tracker without any optimization.
Cross-indexing of binary SIFT codes for large-scale image search.
Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi
2014-05-01
In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost for storage. Besides, it benefits the computational efficiency since the computation of similarity can be efficiently measured by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of SIFT descriptor. It is unsupervised and the generated binary codes are demonstrated to be dispreserving. Besides, we propose a new searching strategy to find target features based on the cross-indexing in the binary SIFT space and original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on large-scale partial duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
Image enhancement using MCNP5 code and MATLAB in neutron radiography.
Tharwat, Montaser; Mohamed, Nader; Mongy, T
2014-07-01
This work presents a method that can be used to enhance the neutron radiography (NR) image for objects with high scattering materials like hydrogen, carbon and other light materials. This method used Monte Carlo code, MCNP5, to simulate the NR process and get the flux distribution for each pixel of the image and determines the scattered neutron distribution that caused image blur, and then uses MATLAB to subtract this scattered neutron distribution from the initial image to improve its quality. This work was performed before the commissioning of digital NR system in Jan. 2013. The MATLAB enhancement method is quite a good technique in the case of static based film neutron radiography, while in neutron imaging (NI) technique, image enhancement and quantitative measurement were efficient by using ImageJ software. The enhanced image quality and quantitative measurements were presented in this work.
NASA Astrophysics Data System (ADS)
Warchalowski, Wiktor; Krawczyk, Malgorzata J.
2017-03-01
We found the Lindenmayer systems for line graphs built on selected fractals. We show that the fractal dimension of such obtained graphs in all analysed cases is the same as for their original graphs. Both for the original graphs and for their line graphs we identified classes of nodes which reflect symmetry of the graph.
Fractal structures and processes
Bassingthwaighte, J.B.; Beard, D.A.; Percival, D.B.; Raymond, G.M.
1996-06-01
Fractals and chaos are closely related. Many chaotic systems have fractal features. Fractals are self-similar or self-affine structures, which means that they look much of the same when magnified or reduced in scale over a reasonably large range of scales, at least two orders of magnitude and preferably more (Mandelbrot, 1983). The methods for estimating their fractal dimensions or their Hurst coefficients, which summarize the scaling relationships and their correlation structures, are going through a rapid evolutionary phase. Fractal measures can be regarded as providing a useful statistical measure of correlated random processes. They also provide a basis for analyzing recursive processes in biology such as the growth of arborizing networks in the circulatory system, airways, or glandular ducts. {copyright} {ital 1996 American Institute of Physics.}
Fernández, Pedro R.; Galilea, José Luis Lázaro; Vicente, Alfredo Gardel; Muñoz, Ignacio Bravo; Cano García, Ángel E.; Vázquez, Carlos Luna
2012-01-01
This paper presents a fast calibration method to determine the transfer function for spatial correspondences in image transmission devices with Incoherent Optical Fiber Bundles (IOFBs), by performing a scan of the input, using differential patterns generated from a Gray code (Differential Gray-Code Space Encoding, DGSE). The results demonstrate that this technique provides a noticeable reduction in processing time and better quality of the reconstructed image compared to other, previously employed techniques, such as point or fringe scanning, or even other known space encoding techniques. PMID:23012530
Chirp-Coded Ultraharmonic Imaging with a Modified Clinical Intravascular Ultrasound System.
Shekhar, Himanshu; Huntzicker, Steven; Awuor, Ivy; Doyley, Marvin M
2016-11-01
Imaging plaque microvasculature with contrast-enhanced intravascular ultrasound (IVUS) could help clinicians evaluate atherosclerosis and guide therapeutic interventions. In this study, we evaluated the performance of chirp-coded ultraharmonic imaging using a modified IVUS system (iLab™, Boston Scientific/Scimed) equipped with clinically available peripheral and coronary imaging catheters. Flow phantoms perfused with a phospholipid-encapsulated contrast agent were visualized using ultraharmonic imaging at 12 MHz and 30 MHz transmit frequencies. Flow channels with diameters as small as 0.8 mm and 0.5 mm were visualized using the peripheral and coronary imaging catheters. Radio-frequency signals were acquired at standard IVUS rotation speed, which resulted in a frame rate of 30 frames/s. Contrast-to-tissue ratios up to 17.9 ± 1.11 dB and 10.7 ± 2.85 dB were attained by chirp-coded ultraharmonic imaging at 12 MHz and 30 MHz transmit frequencies, respectively. These results demonstrate the feasibility of performing ultraharmonic imaging at standard frame rates with clinically available IVUS catheters using chirp-coded excitation.
Improved coded exposure for enhancing imaging quality and detection accuracy of moving targets
NASA Astrophysics Data System (ADS)
Mao, Baoqi; Chen, Li; Han, Lin; Shen, Weimin
2016-09-01
The blur due to the rapidly relative motion between scene and camera during exposure has the well-known influence on the quality of acquired image and then target detection. An improved coded exposure is introduced in this paper to remove the image blur and obtain high quality image, so that the test accuracy of the surface defect and edge contour of motion objects can be enhanced. The improved exposure method takes advantage of code look-up table to control exposure process and image restoration. The restored images have higher Peak Signal-to-Noise Ratio (PSNR) and Structure SIMilarity (SSIM) than traditional deblur algorithm such as Wiener and regularization filter methods. The edge contour and defect of part samples, which move at constant speed relative to the industry camera used in our experiment, are detected with Sobel operator from the restored images. Experimental results verify that the improved coded exposure is better suitable for imaging moving object and detecting moving target than the traditional.
A new pad-based neutron detector for stereo coded aperture thermal neutron imaging
NASA Astrophysics Data System (ADS)
Dioszegi, I.; Yu, B.; Smith, G.; Schaknowski, N.; Fried, J.; Vanier, P. E.; Salwen, C.; Forman, L.
2014-09-01
A new coded aperture thermal neutron imager system has been developed at Brookhaven National Laboratory. The cameras use a new type of position-sensitive 3He-filled ionization chamber, in which an anode plane is composed of an array of pads with independent acquisition channels. The charge is collected on each of the individual 5x5 mm2 anode pads, (48x48 in total, corresponding to 24x24 cm2 sensitive area) and read out by application specific integrated circuits (ASICs). The new design has several advantages for coded-aperture imaging applications in the field, compared to the previous generation of wire-grid based neutron detectors. Among these are its rugged design, lighter weight and use of non-flammable stopping gas. The pad-based readout occurs in parallel circuits, making it capable of high count rates, and also suitable to perform data analysis and imaging on an event-by-event basis. The spatial resolution of the detector can be better than the pixel size by using a charge sharing algorithm. In this paper we will report on the development and performance of the new pad-based neutron camera, describe a charge sharing algorithm to achieve sub-pixel spatial resolution and present the first stereoscopic coded aperture images of thermalized neutron sources using the new coded aperture thermal neutron imager system.
Multiple description distributed image coding with side information for mobile wireless transmission
NASA Astrophysics Data System (ADS)
Wu, Min; Song, Daewon; Chen, Chang Wen
2005-03-01
Multiple description coding (MDC) is a source coding technique that involves coding the source information into multiple descriptions, and then transmitting them over different channels in packet network or error-prone wireless environment to achieve graceful degradation if parts of descriptions are lost at the receiver. In this paper, we proposed a multiple description distributed wavelet zero tree image coding system for mobile wireless transmission. We provide two innovations to achieve an excellent error resilient capability. First, when MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. We consider using such a correlation as well as potentially error corrupted description as side information in the decoding to formulate the MDC decoding as a Wyner Ziv decoding problem. If only part of descriptions is lost, however, their correlation information is still available, the proposed Wyner Ziv decoder can recover the description by using the correlation information and the error corrupted description as side information. Secondly, in each description, single bitstream wavelet zero tree coding is very vulnerable to the channel errors. The first bit error may cause the decoder to discard all subsequent bits whether or not the subsequent bits are correctly received. Therefore, we integrate the multiple description scalar quantization (MDSQ) with the multiple wavelet tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to parent-child relationship and then code them separately by SPIHT algorithm to form multiple bitstreams. Such decomposition is able to reduce error propagation and therefore improve the error correcting capability of Wyner Ziv decoder. Experimental results show that the proposed scheme not only exhibits an excellent error resilient performance but also demonstrates graceful degradation over the packet
Context Tree-Based Image Contour Coding Using a Geometric Prior
NASA Astrophysics Data System (ADS)
Zheng, Amin; Cheung, Gene; Florencio, Dinei
2017-02-01
If object contours in images are coded efficiently as side information, then they can facilitate advanced image / video coding techniques, such as graph Fourier transform coding or motion prediction of arbitrarily shaped pixel blocks. In this paper, we study the problem of lossless and lossy compression of detected contours in images. Specifically, we first convert a detected object contour composed of contiguous between-pixel edges to a sequence of directional symbols drawn from a small alphabet. To encode the symbol sequence using arithmetic coding, we compute an optimal variable-length context tree (VCT) $\\mathcal{T}$ via a maximum a posterior (MAP) formulation to estimate symbols' conditional probabilities. MAP prevents us from overfitting given a small training set $\\mathcal{X}$ of past symbol sequences by identifying a VCT $\\mathcal{T}$ that achieves a high likelihood $P(\\mathcal{X}|\\mathcal{T})$ of observing $\\mathcal{X}$ given $\\mathcal{T}$, and a large geometric prior $P(\\mathcal{T})$ stating that image contours are more often straight than curvy. For the lossy case, we design efficient dynamic programming (DP) algorithms that optimally trade off coding rate of an approximate contour $\\hat{\\mathbf{x}}$ given a VCT $\\mathcal{T}$ with two notions of distortion of $\\hat{\\mathbf{x}}$ with respect to the original contour $\\mathbf{x}$. To reduce the size of the DP tables, a total suffix tree is derived from a given VCT $\\mathcal{T}$ for compact table entry indexing, reducing complexity. Experimental results show that for lossless contour coding, our proposed algorithm outperforms state-of-the-art context-based schemes consistently for both small and large training datasets. For lossy contour coding, our algorithms outperform comparable schemes in the literature in rate-distortion performance.
Fractal Metrology for biogeosystems analysis
NASA Astrophysics Data System (ADS)
Torres-Argüelles, V.; Oleschko, K.; Tarquis, A. M.; Korvin, G.; Gaona, C.; Parrot, J.-F.; Ventura-Ramos, E.
2010-11-01
The solid-pore distribution pattern plays an important role in soil functioning being related with the main physical, chemical and biological multiscale and multitemporal processes of this complex system. In the present research, we studied the aggregation process as self-organizing and operating near a critical point. The structural pattern is extracted from the digital images of three soils (Chernozem, Solonetz and "Chocolate" Clay) and compared in terms of roughness of the gray-intensity distribution quantified by several measurement techniques. Special attention was paid to the uncertainty of each of them measured in terms of standard deviation. Some of the applied methods are known as classical in the fractal context (box-counting, rescaling-range and wavelets analyses, etc.) while the others have been recently developed by our Group. The combination of these techniques, coming from Fractal Geometry, Metrology, Informatics, Probability Theory and Statistics is termed in this paper Fractal Metrology (FM). We show the usefulness of FM for complex systems analysis through a case study of the soil's physical and chemical degradation applying the selected toolbox to describe and compare the structural attributes of three porous media with contrasting structure but similar clay mineralogy dominated by montmorillonites.
Fractal metrology for biogeosystems analysis
NASA Astrophysics Data System (ADS)
Torres-Argüelles, V.; Oleschko, K.; Tarquis, A. M.; Korvin, G.; Gaona, C.; Parrot, J.-F.; Ventura-Ramos, E.
2010-06-01
The solid-pore distribution pattern plays an important role in soil functioning being related with the main physical, chemical and biological multiscale and multitemporal processes. In the present research, this pattern is extracted from the digital images of three soils (Chernozem, Solonetz and "Chocolate'' Clay) and compared in terms of roughness of the gray-intensity distribution (the measurand) quantified by several measurement techniques. Special attention was paid to the uncertainty of each of them and to the measurement function which best fits to the experimental results. Some of the applied techniques are known as classical in the fractal context (box-counting, rescaling-range and wavelets analyses, etc.) while the others have been recently developed by our Group. The combination of all these techniques, coming from Fractal Geometry, Metrology, Informatics, Probability Theory and Statistics is termed in this paper Fractal Metrology (FM). We show the usefulness of FM through a case study of soil physical and chemical degradation applying the selected toolbox to describe and compare the main structural attributes of three porous media with contrasting structure but similar clay mineralogy dominated by montmorillonites.
Color-coded LED microscopy for multi-contrast and quantitative phase-gradient imaging
Lee, Donghak; Ryu, Suho; Kim, Uihan; Jung, Daeseong; Joo, Chulmin
2015-01-01
We present a multi-contrast microscope based on color-coded illumination and computation. A programmable three-color light-emitting diode (LED) array illuminates a specimen, in which each color corresponds to a different illumination angle. A single color image sensor records light transmitted through the specimen, and images at each color channel are then separated and utilized to obtain bright-field, dark-field, and differential phase contrast (DPC) images simultaneously. Quantitative phase imaging is also achieved based on DPC images acquired with two different LED illumination patterns. The multi-contrast and quantitative phase imaging capabilities of our method are demonstrated by presenting images of various transparent biological samples. PMID:26713205
Quantum image pseudocolor coding based on the density-stratified method
NASA Astrophysics Data System (ADS)
Jiang, Nan; Wu, Wenya; Wang, Luo; Zhao, Na
2015-05-01
Pseudocolor processing is a branch of image enhancement. It dyes grayscale images to color images to make the images more beautiful or to highlight some parts on the images. This paper proposes a quantum image pseudocolor coding scheme based on the density-stratified method which defines a colormap and changes the density value from gray to color parallel according to the colormap. Firstly, two data structures: quantum image GQIR and quantum colormap QCR are reviewed or proposed. Then, the quantum density-stratified algorithm is presented. Based on them, the quantum realization in the form of circuits is given. The main advantages of the quantum version for pseudocolor processing over the classical approach are that it needs less memory and can speed up the computation. Two kinds of examples help us to describe the scheme further. Finally, the future work are analyzed.
Zhang, Tiankui; Hu, Huasi; Jia, Qinggang; Zhang, Fengna; Chen, Da; Li, Zhenghong; Wu, Yuelei; Liu, Zhihua; Hu, Guang; Guo, Wei
2012-11-01
Monte-Carlo simulation of neutron coded imaging based on encoding aperture for Z-pinch of large field-of-view with 5 mm radius has been investigated, and then the coded image has been obtained. Reconstruction method of source image based on genetic algorithms (GA) has been established. "Residual watermark," which emerges unavoidably in reconstructed image, while the peak normalization is employed in GA fitness calculation because of its statistical fluctuation amplification, has been discovered and studied. Residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, while the identification on equivalent radius of aperture was provided. By using the equivalent radius, the reconstruction can also be accomplished without knowing the point spread function (PSF) of actual aperture. The reconstruction result is close to that by using PSF of the actual aperture.
Hexagonal Uniformly Redundant Arrays (HURAs) for scintillator based coded aperture neutron imaging
Gamage, K.A.A.; Zhou, Q.
2015-07-01
A series of Monte Carlo simulations have been conducted, making use of the EJ-426 neutron scintillator detector, to investigate the potential of using hexagonal uniformly redundant arrays (HURAs) for scintillator based coded aperture neutron imaging. This type of scintillator material has a low sensitivity to gamma rays, therefore, is of particular use in a system with a source that emits both neutrons and gamma rays. The simulations used an AmBe source, neutron images have been produced using different coded-aperture materials (boron- 10, cadmium-113 and gadolinium-157) and location error has also been estimated. In each case the neutron image clearly shows the location of the source with a relatively small location error. Neutron images with high resolution can be easily used to identify and locate nuclear materials precisely in nuclear security and nuclear decommissioning applications. (authors)
Zhang Tiankui; Hu Huasi; Jia Qinggang; Zhang Fengna; Liu Zhihua; Hu Guang; Guo Wei; Chen Da; Li Zhenghong; Wu Yuelei
2012-11-15
Monte-Carlo simulation of neutron coded imaging based on encoding aperture for Z-pinch of large field-of-view with 5 mm radius has been investigated, and then the coded image has been obtained. Reconstruction method of source image based on genetic algorithms (GA) has been established. 'Residual watermark,' which emerges unavoidably in reconstructed image, while the peak normalization is employed in GA fitness calculation because of its statistical fluctuation amplification, has been discovered and studied. Residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, while the identification on equivalent radius of aperture was provided. By using the equivalent radius, the reconstruction can also be accomplished without knowing the point spread function (PSF) of actual aperture. The reconstruction result is close to that by using PSF of the actual aperture.
Wang, Xiaogang; Chen, Wen; Chen, Xudong
2015-03-09
In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with QR code. We compare this technique to the other two methods proposed in literature, i.e., Fresnel domain information authentication based on the classical DRPE with holographic technique and information authentication based on DRPE and phase retrieval algorithm. Simulation results show that QR codes are effective on improving the security and data sparsity of optical information encryption and authentication system.
Projection based image restoration, super-resolution and error correction codes
NASA Astrophysics Data System (ADS)
Bauer, Karl Gregory
Super-resolution is the ability of a restoration algorithm to restore meaningful spatial frequency content beyond the diffraction limit of the imaging system. The Gerchberg-Papoulis (GP) algorithm is one of the most celebrated algorithms for super-resolution. The GP algorithm is conceptually simple and demonstrates the importance of using a priori information in the formation of the object estimate. In the first part of this dissertation the continuous GP algorithm is discussed in detail and shown to be a projection on convex sets algorithm. The discrete GP algorithm is shown to converge in the exactly-, over- and under-determined cases. A direct formula for the computation of the estimate at the kth iteration and at convergence is given. This analysis of the discrete GP algorithm sets the stage to connect super-resolution to error-correction codes. Reed-Solomon codes are used for error-correction in magnetic recording devices, compact disk players and by NASA for space communications. Reed-Solomon codes have a very simple description when analyzed with the Fourier transform. This signal processing approach to error- correction codes allows the error-correction problem to be compared with the super-resolution problem. The GP algorithm for super-resolution is shown to be equivalent to the correction of errors with a Reed-Solomon code over an erasure channel. The Restoration from Magnitude (RFM) problem seeks to recover a signal from the magnitude of the spectrum. This problem has applications to imaging through a turbulent atmosphere. The turbulent atmosphere causes localized changes in the index of refraction and introduces different phase delays in the data collected. Synthetic aperture radar (SAR) and hyperspectral imaging systems are capable of simultaneously recording multiple images of different polarizations or wavelengths. Each of these images will experience the same turbulent atmosphere and have a common phase distortion. A projection based restoration
NASA Astrophysics Data System (ADS)
Li, Dunling; Loew, Murray H.
2004-05-01
This paper provides a theoretical foundation for the closed-form expression of model observers on compressed images. In medical applications, model observers, especially the channelized Hotelling observer, have been successfully used to predict human observer performance and to evaluate image quality for detection tasks in various backgrounds. To use model observers, however, requires knowledge of noise statistics. This paper first identifies quantization noise as the sole distortion source in transform coding, one of the most commonly used methods for image compression. Then, it represents transform coding as a 1-D block-based matrix expression, it further derives first and second moments, and the probability density function (pdf) of the compression noise at pixel, block and image levels. The compression noise statistics depend on the transform matrix and the quantization matrix in the transform coding algorithm. Compression noise is jointly normally distributed when the dimension of the transform (the block size) is typical and the contents of image sets vary randomly. Moreover, this paper uses JPEG as a test example to verify the derived statistics. The test simulation results show that the closed-form expression of JPEG quantization and compression noise statistics correctly predicts the estimated ones from actual images.
Joint sparse coding based spatial pyramid matching for classification of color medical image.
Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin
2015-04-01
Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification.
Wavelet Kernels on a DSP: A Comparison between Lifting and Filter Banks for Image Coding
NASA Astrophysics Data System (ADS)
Gnavi, Stefano; Penna, Barbara; Grangetto, Marco; Magli, Enrico; Olmo, Gabriella
2002-12-01
We develop wavelet engines on a digital signal processors (DSP) platform, the target application being image and intraframe video compression by means of the forthcoming JPEG2000 and Motion-JPEG2000 standards. We describe two implementations, based on the lifting scheme and the filter bank scheme, respectively, and we present experimental results on code profiling. In particular, we address the following problems: (1) evaluating the execution speed of a wavelet engine on a modern DSP; (2) comparing the actual execution speed of the lifting scheme and the filter bank scheme with the theoretical results; (3) using the on-board direct memory access (DMA) to possibly optimize the execution speed. The results allow to assess the performance of a modern DSP in the image coding task, as well as to compare the lifting and filter bank performance in a realistic application scenario. Finally, guidelines for optimizing the code efficiency are provided by investigating the possible use of the on-board DMA.
NASA Technical Reports Server (NTRS)
Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron
2011-01-01
A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, and is adapted to better overcome issues of onboard scenarios. In this paper, we present a review of the state of the art in this field, and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.
Ultrasonic imaging of human tooth using chirp-coded nonlinear time reversal acoustics
NASA Astrophysics Data System (ADS)
Santos, Serge Dos; Domenjoud, Mathieu; Prevorovsky, Zdenek
2010-01-01
We report in this paper the first use of TR-NEWS, included chirp-coded excitation and applied for ultrasonic imaging of human tooth. Feasibility of the focusing of ultrasound at the surface of the human tooth is demonstrated and potentiality of a new echodentography of the dentine-enamel interface using TR-NEWS is discussed.
Fast-neutron coded-aperture imaging of special nuclear material configurations
P. A. Hausladen; M. A. Blackston; E. Brubaker; D. L. Chichester; P. Marleau; R. J. Newby
2012-07-01
In the past year, a prototype fast-neutron coded-aperture imager has been developed that has sufficient efficiency and resolution to make the counting of warheads for possible future treaty confirmation scenarios via their fission-neutron emissions practical. The imager is constructed from custom-built pixelated liquid scintillator detectors. The liquid scintillator detectors enable neutron-gamma discrimination via pulse shape, and the pixelated construction enables a sufficient number of pixels for imaging in a compact detector with a manageable number of channels of readout electronics. The imager has been used to image neutron sources at ORNL, special nuclear material (SNM) sources at the Idaho National Laboratory (INL) Zero Power Physics Reactor (ZPPR) facility, and neutron source and shielding configurations at Sandia National Laboratories. This paper reports on the design and construction of the imager, characterization measurements with neutron sources at ORNL, and measurements with SNM at the INL ZPPR facility.
Designing an efficient LT-code with unequal error protection for image transmission
NASA Astrophysics Data System (ADS)
S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.
2015-10-01
The use of images from earth observation satellites is spread over different applications, such as a car navigation systems and a disaster monitoring. In general, those images are captured by on board imaging devices and must be transmitted to the Earth using a communication system. Even though a high resolution image can produce a better Quality of Service, it leads to transmitters with high bit rate which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In a remote sensing satellite image transmission, this efficiency is achieved by using an image compressor, to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bit. Actually, a channel code can be used to reduce the effect of this failure. In 2002, the Luby Transform code (LT-code) was introduced and it was shown that it was very efficient when the binary erasure channel model was used. Since the effect of the bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many e orts have been made to develop LT-code with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used in images compressed by SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the algorithm for image compression
Fractal frontiers in cardiovascular magnetic resonance: towards clinical implementation.
Captur, Gabriella; Karperien, Audrey L; Li, Chunming; Zemrak, Filip; Tobon-Gomez, Catalina; Gao, Xuexin; Bluemke, David A; Elliott, Perry M; Petersen, Steffen E; Moon, James C
2015-09-07
Many of the structures and parameters that are detected, measured and reported in cardiovascular magnetic resonance (CMR) have at least some properties that are fractal, meaning complex and self-similar at different scales. To date however, there has been little use of fractal geometry in CMR; by comparison, many more applications of fractal analysis have been published in MR imaging of the brain.This review explains the fundamental principles of fractal geometry, places the fractal dimension into a meaningful context within the realms of Euclidean and topological space, and defines its role in digital image processing. It summarises the basic mathematics, highlights strengths and potential limitations of its application to biomedical imaging, shows key current examples and suggests a simple route for its successful clinical implementation by the CMR community.By simplifying some of the more abstract concepts of deterministic fractals, this review invites CMR scientists (clinicians, technologists, physicists) to experiment with fractal analysis as a means of developing the next generation of intelligent quantitative cardiac imaging tools.
Experimental design and analysis of JND test on coded image/video
NASA Astrophysics Data System (ADS)
Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay
2015-09-01
The visual Just-Noticeable-Difference (JND) metric is characterized by the detectable minimum amount of two visual stimuli. Conducting the subjective JND test is a labor-intensive task. In this work, we present a novel interactive method in performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrates by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary-forced choice, which is often adopted to discriminate the difference in perception in a psychophysical experiment. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We will demonstrate the efficiency of the proposed JND test, report experimental results on the image and video JND tests.
Model-based coding of facial images based on facial muscle motion through isodensity maps
NASA Astrophysics Data System (ADS)
So, Ikken; Nakamura, Osamu; Minami, Toshi
1991-11-01
A model-based coding system has come under serious consideration for the next generation of image coding schemes, aimed at greater efficiency in TV telephone and TV conference systems. In this model-based coding system, the sender's model image is transmitted and stored at the receiving side before the start of the conversation. During the conversation, feature points are extracted from the facial image of the sender and are transmitted to the receiver. The facial expression of the sender facial is reconstructed from the feature points received and a wireframed model constructed at the receiving side. However, the conventional methods have the following problems: (1) Extreme changes of the gray level, such as in wrinkles caused by change of expression, cannot be reconstructed at the receiving side. (2) Extraction of stable feature points from facial images with irregular features such as spectacles or facial hair is very difficult. To cope with the first problem, a new algorithm based on isodensity lines which can represent detailed changes in expression by density correction has already been proposed and good results obtained. As for the second problem, we propose in this paper a new algorithm to reconstruct facial images by transmitting other feature points extracted from isodensity maps.
Preliminary study of nuclear fuel element testing based on coded source neutron imaging
Sheng Wang; Hang Li; Chao Cao; Yang Wu; Heyong Huo; Bin Tang
2015-07-01
Neutron radiography (NR) is one of the most important nondestructive testing methods, which is sensitive to low density materials. Especially, Neutron transfer imaging method could be used to test radioactivity materials refraining from γ effect, but it is difficult to realize tomography. Coded source neutron imaging (CSNI) is a newly NR method developed fast in the last several years. The distance between object and detector is much longer than traditional NR, which could be used to test radioactivity materials. With pre-reconstruction process from fold-cover projections, CSNI could easily realize tomography. This thesis carries out preliminary study on the nuclear fuel element testing by coded source neutron imaging. We calculate different enrichment, flaws and activity in nuclear fuel elements tested by CSNI with Monte-Carlo simulation. The results show that CSNI could be a useful testing method for nuclear fuel element testing. (authors)
Boulgouris, N V; Tzovaras, D; Strintzis, M G
2001-01-01
The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
A coded structured light system based on primary color stripe projection and monochrome imaging.
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2013-10-14
Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.
NASA Astrophysics Data System (ADS)
Strichartz, Robert S.; Usher, Michael
2000-09-01
A general theory of piecewise multiharmonic splines is constructed for a class of fractals (post-critically finite) that includes the familiar Sierpinski gasket, based on Kigami's theory of Laplacians on these fractals. The spline spaces are the analogues of the spaces of piecewise Cj polynomials of degree 2j + 1 on an interval, with nodes at dyadic rational points. We give explicit algorithms for effectively computing multiharmonic functions (solutions of [Delta]j+1u = 0) and for constructing bases for the spline spaces (for general fractals we need to assume that j is odd), and also for computing inner products of these functions. This enables us to give a finite element method for the approximate solution of fractal differential equations. We give the analogue of Simpson's method for numerical integration on the Sierpinski gasket. We use splines to approximate functions vanishing on the boundary by functions vanishing in a neighbourhood of the boundary.
ERIC Educational Resources Information Center
Clark, Garry
1999-01-01
Reports on a mathematical investigation of fractals and highlights the thinking involved, problem solving strategies used, generalizing skills required, the role of technology, and the role of mathematics. (ASK)
ERIC Educational Resources Information Center
Bannon, Thomas J.
1991-01-01
Discussed are several different transformations based on the generation of fractals including self-similar designs, the chaos game, the koch curve, and the Sierpinski Triangle. Three computer programs which illustrate these concepts are provided. (CW)
NASA Astrophysics Data System (ADS)
West, Bruce J.
The proper methodology for describing the dynamics of certain complex phenomena and fractal time series is the fractional calculus through the fractional Langevin equation discussed herein and applied in a biomedical context. We show that a fractional operator (derivative or integral) acting on a fractal function, yields another fractal function, allowing us to construct a fractional Langevin equation to describe the evolution of a fractal statistical process, for example, human gait and cerebral blood flow. The goal of this talk is to make clear how certain complex phenomena, such as those that are abundantly present in human physiology, can be faithfully described using dynamical models involving fractional differential stochastic equations. These models are tested against existing data sets and shown to describe time series from complex physiologic phenomena quite well.
Dynamic optical aberration correction with adaptive coded apertures techniques in conformal imaging
NASA Astrophysics Data System (ADS)
Li, Yan; Hu, Bin; Zhang, Pengbin; Zhang, Binglong
2015-02-01
Conformal imaging systems are confronted with dynamic aberration in optical design processing. In classical optical designs, for combination high requirements of field of view, optical speed, environmental adaption and imaging quality, further enhancements can be achieved only by the introduction of increased complexity of aberration corrector. In recent years of computational imaging, the adaptive coded apertures techniques which has several potential advantages over more traditional optical systems is particularly suitable for military infrared imaging systems. The merits of this new concept include low mass, volume and moments of inertia, potentially lower costs, graceful failure modes, steerable fields of regard with no macroscopic moving parts. Example application for conformal imaging system design where the elements of a set of binary coded aperture masks are applied are optimization designed is presented in this paper, simulation results show that the optical performance is closely related to the mask design and the reconstruction algorithm optimization. As a dynamic aberration corrector, a binary-amplitude mask located at the aperture stop is optimized to mitigate dynamic optical aberrations when the field of regard changes and allow sufficient information to be recorded by the detector for the recovery of a sharp image using digital image restoration in conformal optical system.
Coded aperture x-ray diffraction imaging with transmission computed tomography side-information
NASA Astrophysics Data System (ADS)
Odinaka, Ikenna; Greenberg, Joel A.; Kaganovsky, Yan; Holmgren, Andrew; Hassan, Mehadi; Politte, David G.; O'Sullivan, Joseph A.; Carin, Lawrence; Brady, David J.
2016-03-01
Coded aperture X-ray diffraction (coherent scatter spectral) imaging provides fast and dose-efficient measurements of the molecular structure of an object. The information provided is spatially-dependent and material-specific, and can be utilized in medical applications requiring material discrimination, such as tumor imaging. However, current coded aperture coherent scatter spectral imaging system assume a uniformly or weakly attenuating object, and are plagued by image degradation due to non-uniform self-attenuation. We propose accounting for such non-uniformities in the self-attenuation by utilizing an X-ray computed tomography (CT) image (reconstructed attenuation map). In particular, we present an iterative algorithm for coherent scatter spectral image reconstruction, which incorporates the attenuation map, at different stages, resulting in more accurate coherent scatter spectral images in comparison to their uncorrected counterpart. The algorithm is based on a spectrally grouped edge-preserving regularizer, where the neighborhood edge weights are determined by spatial distances and attenuation values.
Park, Jinhyoung; Lee, Jungwoo; Lau, Sien Ting; Lee, Changyang; Huang, Ying; Lien, Ching-Ling; Kirk Shung, K
2012-04-01
Acoustic radiation force impulse (ARFI) imaging has been developed as a non-invasive method for quantitative illustration of tissue stiffness or displacement. Conventional ARFI imaging (2-10 MHz) has been implemented in commercial scanners for illustrating elastic properties of several organs. The image resolution, however, is too coarse to study mechanical properties of micro-sized objects such as cells. This article thus presents a high-frequency coded excitation ARFI technique, with the ultimate goal of displaying elastic characteristics of cellular structures. Tissue mimicking phantoms and zebrafish embryos are imaged with a 100-MHz lithium niobate (LiNbO₃) transducer, by cross-correlating tracked RF echoes with the reference. The phantom results show that the contrast of ARFI image (14 dB) with coded excitation is better than that of the conventional ARFI image (9 dB). The depths of penetration are 2.6 and 2.2 mm, respectively. The stiffness data of the zebrafish demonstrate that the envelope is harder than the embryo region. The temporal displacement change at the embryo and the chorion is as large as 36 and 3.6 μm. Consequently, this high-frequency ARFI approach may serve as a remote palpation imaging tool that reveals viscoelastic properties of small biological samples.
Milovanovic, Petar; Djuric, Marija; Rakocevic, Zlatko
2012-01-01
There is an increasing interest in bone nano-structure, the ultimate goal being to reveal the basis of age-related bone fragility. In this study, power spectral density (PSD) data and fractal dimensions of the mineralized bone matrix were extracted from atomic force microscope topography images of the femoral neck trabeculae. The aim was to evaluate age-dependent differences in the mineralized matrix of human bone and to consider whether these advanced nano-descriptors might be linked to decreased bone remodeling observed by some authors and age-related decline in bone mechanical competence. The investigated bone specimens belonged to a group of young adult women (n = 5, age: 20–40 years) and a group of elderly women (n = 5, age: 70–95 years) without bone diseases. PSD graphs showed the roughness density distribution in relation to spatial frequency. In all cases, there was a fairly linear decrease in magnitude of the power spectra with increasing spatial frequencies. The PSD slope was steeper in elderly individuals (−2.374 vs. −2.066), suggesting the dominance of larger surface morphological features. Fractal dimension of the mineralized bone matrix showed a significant negative trend with advanced age, declining from 2.467 in young individuals to 2.313 in the elderly (r = 0.65, P = 0.04). Higher fractal dimension in young women reflects domination of smaller mineral grains, which is compatible with the more freshly remodeled structure. In contrast, the surface patterns in elderly individuals were indicative of older tissue age. Lower roughness and reduced structural complexity (decreased fractal dimension) of the interfibrillar bone matrix in the elderly suggest a decline in bone toughness, which explains why aged bone is more brittle and prone to fractures. PMID:22946475
NASA Astrophysics Data System (ADS)
Turcotte, Donald L.
Tectonic processes build landforms that are subsequently destroyed by erosional processes. Landforms exhibit fractal statistics in a variety of ways; examples include (1) lengths of coast lines; (2) number-size statistics of lakes and islands; (3) spectral behavior of topography and bathymetry both globally and locally; and (4) branching statistics of drainage networks. Erosional processes are dominant in the development of many landforms on this planet, but similar fractal statistics are also applicable to the surface of Venus where minimal erosion has occurred. A number of dynamical systems models for landforms have been proposed, including (1) cellular automata; (2) diffusion limited aggregation; (3) self-avoiding percolation; and (4) advective-diffusion equations. The fractal statistics and validity of these models will be discussed. Earthquakes also exhibit fractal statistics. The frequency-magnitude statistics of earthquakes satisfy the fractal Gutenberg-Richter relation both globally and locally. Earthquakes are believed to be a classic example of self-organized criticality. One model for earthquakes utilizes interacting slider-blocks. These slider block models have been shown to behave chaotically and to exhibit self-organized criticality. The applicability of these models will be discussed and alternative approaches will be presented. Fragmentation has been demonstrated to produce fractal statistics in many cases. Comminution is one model for fragmentation that yields fractal statistics. It has been proposed that comminution is also responsible for much of the deformation in the earth's crust. The brittle disruption of the crust and the resulting earthquakes present an integrated problem with many fractal aspects.
Baish, J W; Jain, R K
2000-07-15
Recent studies have shown that fractal geometry, a vocabulary of irregular shapes, can be useful for describing the pathological architecture of tumors and, perhaps more surprisingly, for yielding insights into the mechanisms of tumor growth and angiogenesis that complement those obtained by modern molecular methods. This article outlines the basic methods of fractal geometry and discusses the value and limitations of applying this new tool to cancer research.
Increasing average power in medical ultrasonic endoscope imaging system by coded excitation
NASA Astrophysics Data System (ADS)
Chen, Xiaodong; Zhou, Hao; Wen, Shijie; Yu, Daoyin
2008-12-01
Medical ultrasonic endoscope is the combination of electronic endoscope and ultrasonic sensor technology. Ultrasonic endoscope sends the ultrasonic probe into coelom through biopsy channel of electronic endoscope and rotates it by a micro pre-motor, which requires that the length of ultrasonic probe is no more than 14mm and the diameter is no more than 2.2mm. As a result, the ultrasonic excitation power is very low and it is difficult to obtain a sharp image. In order to increase the energy and SNR of ultrasonic signal, we introduce coded excitation into the ultrasonic imaging system, which is widely used in radar system. Coded excitation uses a long coded pulse to drive ultrasonic transducer, which can increase the average transmitting power accordingly. In this paper, in order to avoid the overlapping between adjacent echo, we used a four-figure Barker code to drive the ultrasonic transducer, which is modulated at the operating frequency of transducer to improve the emission efficiency. The implementation of coded excitation is closely associated with the transient operating characteristic of ultrasonic transducer. In this paper, the transient operating characteristic of ultrasonic transducer excited by a shock pulse δ(t) is firstly analyzed, and then the exciting pulse generated by special ultrasonic transmitting circuit composing of MD1211 and TC6320. In the final part of the paper, we designed an experiment to validate the coded excitation with transducer operating at 5MHz and a glass filled with ultrasonic coupling liquid as the object. Driven by a FPGA, the ultrasonic transmitting circuit output a four-figure Barker excitation pulse modulated at 5MHz, +/-20 voltage and is consistent with the transient operating characteristic of ultrasonic transducer after matched by matching circuit. The reflected echo from glass possesses coded character, which is identical with the simulating result by Matlab. Furthermore, the signal's amplitude is higher.
Rheological and fractal hydrodynamics of aerobic granules.
Tijani, H I; Abdullah, N; Yuzir, A; Ujang, Zaini
2015-06-01
The structural and hydrodynamic features for granules were characterized using settling experiments, predefined mathematical simulations and ImageJ-particle analyses. This study describes the rheological characterization of these biologically immobilized aggregates under non-Newtonian flows. The second order dimensional analysis defined as D2=1.795 for native clusters and D2=1.099 for dewatered clusters and a characteristic three-dimensional fractal dimension of 2.46 depicts that these relatively porous and differentially permeable fractals had a structural configuration in close proximity with that described for a compact sphere formed via cluster-cluster aggregation. The three-dimensional fractal dimension calculated via settling-fractal correlation, U∝l(D) to characterize immobilized granules validates the quantitative measurements used for describing its structural integrity and aggregate complexity. These results suggest that scaling relationships based on fractal geometry are vital for quantifying the effects of different laminar conditions on the aggregates' morphology and characteristics such as density, porosity, and projected surface area.
Context Tree-Based Image Contour Coding Using a Geometric Prior.
Zheng, Amin; Cheung, Gene; Florencio, Dinei
2017-02-01
Efficient encoding of object contours in images can facilitate advanced image/video compression techniques, such as shape-adaptive transform coding or motion prediction of arbitrarily shaped pixel blocks. We study the problem of lossless and lossy compression of detected contours in images. Specifically, we first convert a detected object contour into a sequence of directional symbols drawn from a small alphabet. To encode the symbol sequence using arithmetic coding, we compute an optimal variable-length context tree (VCT) T via a maximum a posterior (MAP) formulation to estimate symbols' conditional probabilities. MAP can avoid overfitting given a small training set X of past symbol sequences by identifying a VCT T with high likelihood P(X|T) of observing X given T , using a geometric prior P(T) stating that image contours are more often straight than curvy. For the lossy case, we design fast dynamic programming (DP) algorithms that optimally trade off coding rate of an approximate contour [Formula: see text] given a VCT T with two notions of distortion of [Formula: see text] with respect to the original contour x. To reduce the size of the DP tables, a total suffix tree is derived from a given VCT T for compact table entry indexing, reducing complexity. Experimental results show that for lossless contour coding, our proposed algorithm outperforms state-of-the-art context-based schemes consistently for both small and large training datasets. For lossy contour coding, our algorithms outperform comparable schemes in the literature in rate-distortion performance.
Retinal fractals and acute lacunar stroke.
Cheung, Ning; Liew, Gerald; Lindley, Richard I; Liu, Erica Y; Wang, Jie Jin; Hand, Peter; Baker, Michelle; Mitchell, Paul; Wong, Tien Y
2010-07-01
This study aimed to determine whether retinal fractal dimension, a quantitative measure of microvascular branching complexity and density, is associated with lacunar stroke. A total of 392 patients presenting with acute ischemic stroke had retinal fractal dimension measured from digital photographs, and lacunar infarct ascertained from brain imaging. After adjusting for age, gender, and vascular risk factors, higher retinal fractal dimension (highest vs lowest quartile and per standard deviation increase) was independently and positively associated with lacunar stroke (odds ratio [OR], 4.27; 95% confidence interval [CI], 1.49-12.17 and OR, 1.85; 95% CI, 1.20-2.84, respectively). Increased retinal microvascular complexity and density is associated with lacunar stroke.
Fractal characterization of wear-erosion surfaces
Rawers, James C.; Tylczak, Joseph H.
1999-12-01
Wear erosion is a complex phenomenon resulting in highly distorted and deformed surface morphologies. Most wear surface features have been described only qualitatively. In this study wear surfaces features were quantified using fractal analysis. The ability to assign numerical values to wear-erosion surfaces makes possible mathematical expressions that will enable wear mechanisms to be predicted and understood. Surface characterization came from wear-erosion experiments that included varying the erosive materials, the impact velocity, and the impact angle. Seven fractal analytical techniques were applied to micrograph images of wear-erosion surfaces. Fourier analysis was the most promising. Fractal values obtained were consistent with visual observations and provided a unique wear-erosion parameter unrelated to wear rate.
Fractal characterization of wear-erosion surfaces
Rawers, J.; Tylczak, J.
1999-12-01
Wear erosion is a complex phenomenon resulting in highly distorted and deformed surface morphologies. Most wear surface features have been described only qualitatively. In this study wear surfaces features were quantified using fractal analysis. The ability to assign numerical values to wear-erosion surfaces makes possible mathematical expressions that will enable wear mechanisms to be predicted and understood. Surface characterization came from wear-erosion experiments that included varying the erosive materials, the impact velocity, and the impact angle. Seven fractal analytical techniques were applied to micrograph images of wear-erosion surfaces. Fourier analysis was the most promising. Fractal values obtained were consistent with visual observations and provided a unique wear-erosion parameter unrelated to wear rate. In this study stainless steel was evaluated as a function of wear erosion conditions.
``the Human BRAIN & Fractal quantum mechanics''
NASA Astrophysics Data System (ADS)
Rosary-Oyong, Se, Glory
In mtDNA ever retrieved from Iman Tuassoly, et.al:Multifractal analysis of chaos game representation images of mtDNA''.Enhances the price & valuetales of HE. Prof. Dr-Ing. B.J. HABIBIE's N-219, in J. Bacteriology, Nov 1973 sought:'' 219 exist as separate plasmidDNA species in E.coli & Salmonella panama'' related to ``the brain 2 distinct molecular forms of the (Na,K)-ATPase..'' & ``neuron maintains different concentration of ions(charged atoms'' thorough Rabi & Heisenber Hamiltonian. Further, after ``fractal space time are geometric analogue of relativistic quantum mechanics''[Ord], sought L.Marek Crnjac: ``Chaotic fractals at the root of relativistic quantum physics''& from famous Nottale: ``Scale relativity & fractal space-time:''Application to Quantum Physics , Cosmology & Chaotic systems'',1995. Acknowledgements to HE. Mr. H. TUK SETYOHADI, Jl. Sriwijaya Raya 3, South-Jakarta, INDONESIA.
Trabecular Pattern Analysis Using Fractal Dimension
NASA Astrophysics Data System (ADS)
Ishida, Takayuki; Yamashita, Kazuya; Takigawa, Atsushi; Kariya, Komyo; Itoh, Hiroshi
1993-04-01
Feature extraction from a digitized image is advantageous for the detection of signs of disease. In this work, we attempted to evaluate bone trabecular pattern changes in osteoporosis using the fractal dimension and the root mean square (RMS) values. The relationship between the fractal dimension and the 1st moment of the power spectrum is explored, and we investigated the relationship between the results of this analysis and the bone mineral density (BMD) value which was measured using dual-energy X-ray absorptiometry (DEXA). As a result, we were able to extract useful information, using the fractal dimension and the RMS value of the radiographs (lateral view of the lumbar vertebrae), for the diagnosis of osteoporosis. Abnormal clinical cases were separated from normal cases based on the evaluation values. Negligible correlation between the BMD value and these indexes was observed.
Fractal design concepts for stretchable electronics.
Fan, Jonathan A; Yeo, Woon-Hong; Su, Yewang; Hattori, Yoshiaki; Lee, Woosik; Jung, Sung-Young; Zhang, Yihui; Liu, Zhuangjian; Cheng, Huanyu; Falgout, Leo; Bajema, Mike; Coleman, Todd; Gregoire, Dan; Larsen, Ryan J; Huang, Yonggang; Rogers, John A
2014-01-01
Stretchable electronics provide a foundation for applications that exceed the scope of conventional wafer and circuit board technologies due to their unique capacity to integrate with soft materials and curvilinear surfaces. The range of possibilities is predicated on the development of device architectures that simultaneously offer advanced electronic function and compliant mechanics. Here we report that thin films of hard electronic materials patterned in deterministic fractal motifs and bonded to elastomers enable unusual mechanics with important implications in stretchable device design. In particular, we demonstrate the utility of Peano, Greek cross, Vicsek and other fractal constructs to yield space-filling structures of electronic materials, including monocrystalline silicon, for electrophysiological sensors, precision monitors and actuators, and radio frequency antennas. These devices support conformal mounting on the skin and have unique properties such as invisibility under magnetic resonance imaging. The results suggest that fractal-based layouts represent important strategies for hard-soft materials integration.
Fractal design concepts for stretchable electronics
NASA Astrophysics Data System (ADS)
Fan, Jonathan A.; Yeo, Woon-Hong; Su, Yewang; Hattori, Yoshiaki; Lee, Woosik; Jung, Sung-Young; Zhang, Yihui; Liu, Zhuangjian; Cheng, Huanyu; Falgout, Leo; Bajema, Mike; Coleman, Todd; Gregoire, Dan; Larsen, Ryan J.; Huang, Yonggang; Rogers, John A.
2014-02-01
Stretchable electronics provide a foundation for applications that exceed the scope of conventional wafer and circuit board technologies due to their unique capacity to integrate with soft materials and curvilinear surfaces. The range of possibilities is predicated on the development of device architectures that simultaneously offer advanced electronic function and compliant mechanics. Here we report that thin films of hard electronic materials patterned in deterministic fractal motifs and bonded to elastomers enable unusual mechanics with important implications in stretchable device design. In particular, we demonstrate the utility of Peano, Greek cross, Vicsek and other fractal constructs to yield space-filling structures of electronic materials, including monocrystalline silicon, for electrophysiological sensors, precision monitors and actuators, and radio frequency antennas. These devices support conformal mounting on the skin and have unique properties such as invisibility under magnetic resonance imaging. The results suggest that fractal-based layouts represent important strategies for hard-soft materials integration.
A Digital Camcorder Image Stabilizer Based on Gray Coded Bit-Plane Block Matching
2000-07-01
Matching", IEEE 1998. [9] Sung- Hee Lee, Seung -Won Jeon, Eui-Sung Kang and Sung-Jea Ko, "Fast Digital Stabilizer based on Gray Coded Bit- Plane Matching...Jea Ko and Sung- Hee Lee adopted bit-plane or gray-coded bit-plane block matching to greatly reduce the computational complexity. However, their...Yong Chul Park, and Dong Wook Kim , "An Adaptive Motion Decision System for Digital Image Stabilizer Based on Edge Pattern Matching", IEEE Trans. on
In Vivo Imaging Reveals Composite Coding for Diagonal Motion in the Drosophila Visual System
Zhou, Wei; Chang, Jin
2016-01-01
Understanding information coding is important for resolving the functions of visual neural circuits. The motion vision system is a classic model for studying information coding as it contains a concise and complete information-processing circuit. In Drosophila, the axon terminals of motion-detection neurons (T4 and T5) project to the lobula plate, which comprises four regions that respond to the four cardinal directions of motion. The lobula plate thus represents a topographic map on a transverse plane. This enables us to study the coding of diagonal motion by investigating its response pattern. By using in vivo two-photon calcium imaging, we found that the axon terminals of T4 and T5 cells in the lobula plate were activated during diagonal motion. Further experiments showed that the response to diagonal motion is distributed over the following two regions compared to the cardinal directions of motion—a diagonal motion selective response region and a non-selective response region—which overlap with the response regions of the two vector-correlated cardinal directions of motion. Interestingly, the sizes of the non-selective response regions are linearly correlated with the angle of the diagonal motion. These results revealed that the Drosophila visual system employs a composite coding for diagonal motion that includes both independent coding and vector decomposition coding. PMID:27695103
Fractal characteristics for binary noise radar waveform
NASA Astrophysics Data System (ADS)
Li, Bing C.
2016-05-01
Noise radars have many advantages over conventional radars and receive great attentions recently. The performance of a noise radar is determined by its waveforms. Investigating characteristics of noise radar waveforms has significant value for evaluating noise radar performance. In this paper, we use binomial distribution theory to analyze general characteristics of binary phase coded (BPC) noise waveforms. Focusing on aperiodic autocorrelation function, we demonstrate that the probability distributions of sidelobes for a BPC noise waveform depend on the distances of these sidelobes to the mainlobe. The closer a sidelobe to the mainlobe, the higher the probability for this sidelobe to be a maximum sidelobe. We also develop Monte Carlo framework to explore the characteristics that are difficult to investigate analytically. Through Monte Carlo experiments, we reveal the Fractal relationship between the code length and the maximum sidelobe value for BPC waveforms, and propose using fractal dimension to measure noise waveform performance.
Error-Correcting Output Codes in Classification of Human Induced Pluripotent Stem Cell Colony Images
Haponen, Markus; Rasku, Jyrki
2016-01-01
The purpose of this paper is to examine how well the human induced pluripotent stem cell (hiPSC) colony images can be classified using error-correcting output codes (ECOC). Our image dataset includes hiPSC colony images from three classes (bad, semigood, and good) which makes our classification task a multiclass problem. ECOC is a general framework to model multiclass classification problems. We focus on four different coding designs of ECOC and apply to each one of them k-Nearest Neighbor (k-NN) searching, naïve Bayes, classification tree, and discriminant analysis variants classifiers. We use Scaled Invariant Feature Transformation (SIFT) based features in classification. The best accuracy (62.4%) is obtained with ternary complete ECOC coding design and k-NN classifier (standardized Euclidean distance measure and inverse weighting). The best result is comparable with our earlier research. The quality identification of hiPSC colony images is an essential problem to be solved before hiPSCs can be used in practice in large-scale. ECOC methods examined are promising techniques for solving this challenging problem. PMID:27847810
Optimised Post-Exposure Image Sharpening Code for L3-CCD Detectors
Harding, Leon K.; Butler, Raymond F.; Redfern, R. Michael; Sheehan, Brendan J.; McDonald, James
2008-02-22
As light from celestial bodies traverses Earth's atmosphere, the wavefronts are distorted by atmospheric turbulence, thereby lowering the angular resolution of ground-based imaging. Rapid time-series imaging enables Post-Exposure Image Sharpening (PEIS) techniques, which employ shift-and-add frame registration to remove the tip-tilt component of the wavefront error--as well as telescope wobble, thus benefiting all observations. Further resolution gains are possible by selecting only frames with the best instantaneous seeing--a technique sometimes calling 'Lucky Imaging'. We implemented these techniques in the 1990s, with the TRIFFID imaging photon-counting camera, and its associated data reduction software. The software was originally written for time-tagged photon-list data formats, recorded by detectors such as the MAMA. This paper describes our deep re-structuring of the software to handle the 2-d FITS images produced by Low Light Level CCD (L3-CCD) cameras, which have sufficient time-series resolution (>30 Hz) for PEIS. As before, our code can perform straight frame co-addition, use composite reference stars, perform PEIS under several different algorithms to determine the tip/tilt shifts, store 'quality' and shift information for each frame, perform frame selection, and generate exposure-maps for photometric correction. In addition, new code modules apply all 'static' calibrations (bias subtraction, dark subtraction and flat-fielding) to the frames immediately prior to the other algorithms. A unique feature of our PEIS/Lucky Imaging code is the use of bidirectional wiener-filtering. Coupled with the far higher sensitivity of the L3-CCD over the previous TRIFFID detectors, much fainter reference stars and much narrower time windows can be used.
The fractal structure of the mitochondrial genomes
NASA Astrophysics Data System (ADS)
Oiwa, Nestor N.; Glazier, James A.
2002-08-01
The mitochondrial DNA genome has a definite multifractal structure. We show that loops, hairpins and inverted palindromes are responsible for this self-similarity. We can thus establish a definite relation between the function of subsequences and their fractal dimension. Intriguingly, protein coding DNAs also exhibit palindromic structures, although they do not appear in the sequence of amino acids. These structures may reflect the stabilization and transcriptional control of DNA or the control of posttranscriptional editing of mRNA.
Filter-Based Coded-Excitation System for High-Speed Ultrasonic Imaging
Shen, Jian
2010-01-01
We have recently presented a new algorithm for high-speed parallel processing of ultrasound pulse-echo data for real-time three-dimensional (3-D) imaging. The approach utilizes a discretized linear model of the echo data received from the region of interest (ROI) using a conventional beam former. The transmitter array elements are fed with binary codes designed to produce distinct impulse responses from different directions in ROI. Image reconstruction in ROI is achieved with a regularized pseudoinverse operator derived from the linear receive signal model. The reconstruction operator can be implemented using a transversal filter bank with every filter in the bank designed to extract echoes from a specific direction in the ROI. The number of filters in the bank determines the number of image lines acquired simultaneously. In this paper, we present images of a cyst phantom reconstructed based on our formulation. A number of issues of practical significance in image reconstruction are addressed. Specifically, an augmented model is introduced to account for imperfect blocking of echoes from outside the ROI. We have also introduced a column-weighting algorithm for minimizing the number of filter coefficients. In addition, a detailed illustration of a full image reconstruction using subimage acquisition and compounding is given. Experimental results have shown that the new approach is valid for phased-array pulse-echo imaging of speckle-generating phantoms typically used in characterizing medical imaging systems. Such coded-excitation-based image reconstruction from speckle-generating phantoms, to the best of our knowledge, have not been reported previously. PMID:10048849
Measurements with Pinhole and Coded Aperture Gamma-Ray Imaging Systems
Raffo-Caiado, Ana Claudia; Solodov, Alexander A; Abdul-Jabbar, Najeb M; Hayward, Jason P; Ziock, Klaus-Peter
2010-01-01
From a safeguards perspective, gamma-ray imaging has the potential to reduce manpower and cost for effectively locating and monitoring special nuclear material. The purpose of this project was to investigate the performance of pinhole and coded aperture gamma-ray imaging systems at Oak Ridge National Laboratory (ORNL). With the aid of the European Commission Joint Research Centre (JRC), radiometric data will be combined with scans from a three-dimensional design information verification (3D-DIV) system. Measurements were performed at the ORNL Safeguards Laboratory using sources that model holdup in radiological facilities. They showed that for situations with moderate amounts of solid or dense U sources, the coded aperture was able to predict source location and geometry within ~7% of actual values while the pinhole gave a broad representation of source distributions
Santos-Villalobos, Hector J; Bingham, Philip R; Gregor, Jens
2013-01-01
The limitations in neutron flux and resolution (L/D) of current neutron imaging systems can be addressed with a Coded Source Imaging system with magnification (xCSI). More precisely, the multiple sources in an xCSI system can exceed the flux of a single pinhole system for several orders of magnitude, while maintaining a higher L/D with the small sources. Moreover, designing for an xCSI system reduces noise from neutron scattering, because the object is placed away from the detector to achieve magnification. However, xCSI systems are adversely affected by correlated noise such as non-uniform illumination of the neutron source, incorrect sampling of the coded radiograph, misalignment of the coded masks, mask transparency, and the imperfection of the system Point Spread Function (PSF). We argue that a model-based reconstruction algorithm can overcome these problems and describe the implementation of a Simultaneous Iterative Reconstruction Technique algorithm for coded sources. Design pitfalls that preclude a satisfactory reconstruction are documented.
An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images
Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush
2009-01-01
A novel adaptive source-channel coding with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control as in most of the cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly since the selection of the RoI, its size, overall code rate, and a number of test features such as noise level can be set by the users in both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both binary symmetric channel (BSC) and Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770
High speed, low-complexity image coding for IP-transport with JPEG XS
NASA Astrophysics Data System (ADS)
Richter, Thomas; Fößel, Siegfried; Keinert, Joachim; Descampe, Antonin
2016-09-01
The JPEG committee (formally, ISO/IEC SC 29 WG 01) is currently investigating a new work item on near lossless low complexity coding for IP streaming of moving images. This article discusses the requirements and use cases of this work item, gives some insight into the anchors that are used for the purpose of standardization, and provides a short update on the current proposals that reached the committee.
Coded aperture imaging with self-supporting uniformly redundant arrays. [Patent application
Fenimore, E.E.
1980-09-26
A self-supporting uniformly redundant array pattern for coded aperture imaging. The invention utilizes holes which are an integer times smaller in each direction than holes in conventional URA patterns. A balance correlation function is generated where holes are represented by 1's, nonholes are represented by -1's, and supporting area is represented by 0's. The self-supporting array can be used for low energy applications where substrates would greatly reduce throughput.
LineCast: line-based distributed coding and transmission for broadcasting satellite images.
Wu, Feng; Peng, Xiulian; Xu, Jizheng
2014-03-01
In this paper, we propose a novel coding and transmission scheme, called LineCast, for broadcasting satellite images to a large number of receivers. The proposed LineCast matches perfectly with the line scanning cameras that are widely adopted in orbit satellites to capture high-resolution images. On the sender side, each captured line is immediately compressed by a transform-domain scalar modulo quantization. Without syndrome coding, the transmission power is directly allocated to quantized coefficients by scaling the coefficients according to their distributions. Finally, the scaled coefficients are transmitted over a dense constellation. This line-based distributed scheme features low delay, low memory cost, and low complexity. On the receiver side, our proposed line-based prediction is used to generate side information from previously decoded lines, which fully utilizes the correlation among lines. The quantized coefficients are decoded by the linear least square estimator from the received data. The image line is then reconstructed by the scalar modulo dequantization using the generated side information. Since there is neither syndrome coding nor channel coding, the proposed LineCast can make a large number of receivers reach the qualities matching their channel conditions. Our theoretical analysis shows that the proposed LineCast can achieve Shannon's optimum performance by using a high-dimensional modulo-lattice quantization. Experiments on satellite images demonstrate that it achieves up to 1.9-dB gain over the state-of-the-art 2D broadcasting scheme and a gain of more than 5 dB over JPEG 2000 with forward error correction.
Image sensor system with bio-inspired efficient coding and adaptation.
Okuno, Hirotsugu; Yagi, Tetsuya
2012-08-01
We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
NASA Astrophysics Data System (ADS)
Qin, Yi; Wang, Hongjuan; Wang, Zhipeng; Gong, Qiong; Wang, Danchen
2016-09-01
In optical interference-based encryption (IBE) scheme, the currently available methods have to employ the iterative algorithms in order to encrypt two images and retrieve cross-talk free decrypted images. In this paper, we shall show that this goal can be achieved via an analytical process if one of the two images is QR code. For decryption, the QR code is decrypted in the conventional architecture and the decryption has a noisy appearance. Nevertheless, the robustness of QR code against noise enables the accurate acquisition of its content from the noisy retrieval, as a result of which the primary QR code can be exactly regenerated. Thereafter, a novel optical architecture is proposed to recover the grayscale image by aid of the QR code. In addition, the proposal has totally eliminated the silhouette problem existing in the previous IBE schemes, and its effectiveness and feasibility have been demonstrated by numerical simulations.
Fractal Patterns and Chaos Games
ERIC Educational Resources Information Center
Devaney, Robert L.
2004-01-01
Teachers incorporate the chaos game and the concept of a fractal into various areas of the algebra and geometry curriculum. The chaos game approach to fractals provides teachers with an opportunity to help students comprehend the geometry of affine transformations.
Building Fractal Models with Manipulatives.
ERIC Educational Resources Information Center
Coes, Loring
1993-01-01
Uses manipulative materials to build and examine geometric models that simulate the self-similarity properties of fractals. Examples are discussed in two dimensions, three dimensions, and the fractal dimension. Discusses how models can be misleading. (Contains 10 references.) (MDH)
Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa
2009-01-01
Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered subset expectation and maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769
Efficient postprocessing scheme for block-coded images based on multiscale edge detection
NASA Astrophysics Data System (ADS)
Wu, Shuanhu; Yan, Hong; Tan, Zheng
2001-09-01
The block discrete cosine transform (BDCT) is the most widely used technique for the compression of both still and moving images, a major problem related with the BDCT techniques is that the decoded images, especially at low bit rate, exhibit visually annoying blocking effects. In this paper, based on Mallets multiscale edge detection, we proposed an efficient deblocking algorithm to further improved the coding performance. The advantage of our algorithm is that it can efficiently preserve texture structure in the original decompressed images. Our method is similar to that of Z. Xiong's, where the Z.Xiong's method is not suitable for images with a large portion of texture; for instance, the Barbara Image. The difference of our method and the Z.Xiong's is that our method adopted a new thresholding scheme for multi-scale edge detection instead of exploiting cross-scale correlation for edge detection. Numerical experiment results show that our scheme not only outperforms Z.Xiong's for various images in the case of the same computational complexity, but also preserve texture structure in the decompressed images at the same time. Compared with the best iterative-based method (POCS) reported in the literature, our algorithm can achieve the same peak signal-to-noise ratio (PSNR) improvement and give visually very pleasing images as well.
NASA Astrophysics Data System (ADS)
Oleshko, Klaudia; de Jesús Correa López, María; Romero, Alejandro; Ramírez, Victor; Pérez, Olga
2016-04-01
The effectiveness of fractal toolbox to capture the scaling or fractal probability distribution, and simply fractal statistics of main hydrocarbon reservoir attributes, was highlighted by Mandelbrot (1995) and confirmed by several researchers (Zhao et al., 2015). Notwithstanding, after more than twenty years, it's still common the opinion that fractals are not useful for the petroleum engineers and especially for Geoengineering (Corbett, 2012). In spite of this negative background, we have successfully applied the fractal and multifractal techniques to our project entitled "Petroleum Reservoir as a Fractal Reactor" (2013 up to now). The distinguishable feature of Fractal Reservoir is the irregular shapes and rough pore/solid distributions (Siler, 2007), observed across a broad range of scales (from SEM to seismic). At the beginning, we have accomplished the detailed analysis of Nelson and Kibler (2003) Catalog of Porosity and Permeability, created for the core plugs of siliciclastic rocks (around ten thousand data were compared). We enriched this Catalog by more than two thousand data extracted from the last ten years publications on PoroPerm (Corbett, 2012) in carbonates deposits, as well as by our own data from one of the PEMEX, Mexico, oil fields. The strong power law scaling behavior was documented for the major part of these data from the geological deposits of contrasting genesis. Based on these results and taking into account the basic principles and models of the Physics of Fractals, introduced by Per Back and Kan Chen (1989), we have developed new software (Muukíl Kaab), useful to process the multiscale geological and geophysical information and to integrate the static geological and petrophysical reservoir models to dynamic ones. The new type of fractal numerical model with dynamical power law relations among the shapes and sizes of mesh' cells was designed and calibrated in the studied area. The statistically sound power law relations were established
Fractal dynamics of earthquakes
Bak, P.; Chen, K.
1995-05-01
Many objects in nature, from mountain landscapes to electrical breakdown and turbulence, have a self-similar fractal spatial structure. It seems obvious that to understand the origin of self-similar structures, one must understand the nature of the dynamical processes that created them: temporal and spatial properties must necessarily be completely interwoven. This is particularly true for earthquakes, which have a variety of fractal aspects. The distribution of energy released during earthquakes is given by the Gutenberg-Richter power law. The distribution of epicenters appears to be fractal with dimension D {approx} 1--1.3. The number of after shocks decay as a function of time according to the Omori power law. There have been several attempts to explain the Gutenberg-Richter law by starting from a fractal distribution of faults or stresses. But this is a hen-and-egg approach: to explain the Gutenberg-Richter law, one assumes the existence of another power-law--the fractal distribution. The authors present results of a simple stick slip model of earthquakes, which evolves to a self-organized critical state. Emphasis is on demonstrating that empirical power laws for earthquakes indicate that the Earth`s crust is at the critical state, with no typical time, space, or energy scale. Of course the model is tremendously oversimplified; however in analogy with equilibrium phenomena they do not expect criticality to depend on details of the model (universality).
NASA Astrophysics Data System (ADS)
Marks-Tarlow, Terry
Linear concepts of time plus the modern capacity to track history emerged out of circular conceptions characteristic of ancient and traditional cultures. A fractal concept of time lies implicitly within the analog clock, where each moment is treated as unique. With fractal geometry the best descriptor of nature, qualities of self-similarity and scale invariance easily model her endless variety and recursive patterning, both in time and across space. To better manage temporal aspects of our lives, a fractal concept of time is non-reductive, based more on the fullness of being than on fragments of doing. By using a fractal concept of time, each activity or dimension of life is multiply and vertically nested. Each nested cycle remains simultaneously present, operating according to intrinsic dynamics and time scales. By adding the vertical axis of simultaneity to the horizontal axis of length, time is already full and never needs to be filled. To attend to time's vertical dimension is to tap into the imaginary potential for infinite depth. To switch from linear to fractal time allows us to relax into each moment while keeping in mind the whole.
Pond fractals in a tidal flat.
Cael, B B; Lambert, Bennett; Bisson, Kelsey
2015-11-01
Studies over the past decade have reported power-law distributions for the areas of terrestrial lakes and Arctic melt ponds, as well as fractal relationships between their areas and coastlines. Here we report similar fractal structure of ponds in a tidal flat, thereby extending the spatial and temporal scales on which such phenomena have been observed in geophysical systems. Images taken during low tide of a tidal flat in Damariscotta, Maine, reveal a well-resolved power-law distribution of pond sizes over three orders of magnitude with a consistent fractal area-perimeter relationship. The data are consistent with the predictions of percolation theory for unscreened perimeters and scale-free cluster size distributions and are robust to alterations of the image processing procedure. The small spatial and temporal scales of these data suggest this easily observable system may serve as a useful model for investigating the evolution of pond geometries, while emphasizing the generality of fractal behavior in geophysical surfaces.
A tale of two fractals: The Hofstadter butterfly and the integral Apollonian gaskets
NASA Astrophysics Data System (ADS)
Satija, Indubala I.
2016-11-01
This paper unveils a mapping between a quantum fractal that describes a physical phenomena, and an abstract geometrical fractal. The quantum fractal is the Hofstadter butterfly discovered in 1976 in an iconic condensed matter problem of electrons moving in a two-dimensional lattice in a transverse magnetic field. The geometric fractal is the integer Apollonian gasket characterized in terms of a 300 BC problem of mutually tangent circles. Both of these fractals are made up of integers. In the Hofstadter butterfly, these integers encode the topological quantum numbers of quantum Hall conductivity. In the Apollonian gaskets an infinite number of mutually tangent circles are nested inside each other, where each circle has integer curvature. The mapping between these two fractals reveals a hidden D3 symmetry embedded in the kaleidoscopic images that describe the asymptotic scaling properties of the butterfly. This paper also serves as a mini review of these fractals, emphasizing their hierarchical aspects in terms of Farey fractions.
[Recent progress of research and applications of fractal and its theories in medicine].
Cai, Congbo; Wang, Ping
2014-10-01
Fractal, a mathematics concept, is used to describe an image of self-similarity and scale invariance. Some organisms have been discovered with the fractal characteristics, such as cerebral cortex surface, retinal vessel structure, cardiovascular network, and trabecular bone, etc. It has been preliminarily confirmed that the three-dimensional structure of cells cultured in vitro could be significantly enhanced by bionic fractal surface. Moreover, fractal theory in clinical research will help early diagnosis and treatment of diseases, reducing the patient's pain and suffering. The development process of diseases in the human body can be expressed by the fractal theories parameter. It is of considerable significance to retrospectively review the preparation and application of fractal surface and its diagnostic value in medicine. This paper gives an application of fractal and its theories in the medical science, based on the research achievements in our laboratory.
Fractal radar scattering from soil.
Oleschko, Klaudia; Korvin, Gabor; Figueroa, Benjamin; Vuelvas, Marco Antonio; Balankin, Alexander S; Flores, Lourdes; Carreón, Dora
2003-04-01
A general technique is developed to retrieve the fractal dimension of self-similar soils through microwave (radar) scattering. The technique is based on a mathematical model relating the fractal dimensions of the georadargram to that of the scattering structure. Clear and different fractal signatures have been observed over four geosystems (soils and sediments) compared in this work.
SU-C-201-03: Coded Aperture Gamma-Ray Imaging Using Pixelated Semiconductor Detectors
Joshi, S; Kaye, W; Jaworski, J; He, Z
2015-06-15
Purpose: Improved localization of gamma-ray emissions from radiotracers is essential to the progress of nuclear medicine. Polaris is a portable, room-temperature operated gamma-ray imaging spectrometer composed of two 3×3 arrays of thick CdZnTe (CZT) detectors, which detect gammas between 30keV and 3MeV with energy resolution of <1% FWHM at 662keV. Compton imaging is used to map out source distributions in 4-pi space; however, is only effective above 300keV where Compton scatter is dominant. This work extends imaging to photoelectric energies (<300keV) using coded aperture imaging (CAI), which is essential for localization of Tc-99m (140keV). Methods: CAI, similar to the pinhole camera, relies on an attenuating mask, with open/closed elements, placed between the source and position-sensitive detectors. Partial attenuation of the source results in a “shadow” or count distribution that closely matches a portion of the mask pattern. Ideally, each source direction corresponds to a unique count distribution. Using backprojection reconstruction, the source direction is determined within the field of view. The knowledge of 3D position of interaction results in improved image quality. Results: Using a single array of detectors, a coded aperture mask, and multiple Co-57 (122keV) point sources, image reconstruction is performed in real-time, on an event-by-event basis, resulting in images with an angular resolution of ∼6 degrees. Although material nonuniformities contribute to image degradation, the superposition of images from individual detectors results in improved SNR. CAI was integrated with Compton imaging for a seamless transition between energy regimes. Conclusion: For the first time, CAI has been applied to thick, 3D position sensitive CZT detectors. Real-time, combined CAI and Compton imaging is performed using two 3×3 detector arrays, resulting in a source distribution in space. This system has been commercialized by H3D, Inc. and is being acquired for
Design and implementation of a Cooke triplet based wave-front coded super-resolution imaging system
NASA Astrophysics Data System (ADS)
Zhao, Hui; Wei, Jingxuan
2015-09-01
Wave-front coding is a powerful technique that could be used to extend the DOF (depth of focus) of incoherent imaging system. It is the suitably designed phase mask that makes the system defocus invariant and it is the de-convolution algorithm that generates the clear image with large DOF. Compared with the traditional imaging system, the point spread function (PSF) in wave-front coded imaging system has quite a large support size and this characteristic makes wave-front coding be capable of realizing super-resolution imaging without replacing the current sensor with one of smaller pitch size. An amplification based single image super-resolution reconstruction procedure has been specifically designed for wave-front coded imaging system and its effectiveness has been demonstrated experimentally. A Cooke Triplet based wave-front coded imaging system is established. For a focal length of 50 mm and f-number 4.5, objects within the range [5 m, ∞] could be clearly imaged, which indicates a DOF extension ratio of approximately 20. At the same time, the proposed processing procedure could produce at least 3× resolution improvement, with the quality of the reconstructed super-resolution image approaching the diffraction limit.
Tong, Tong; Wolz, Robin; Coupé, Pierrick; Hajnal, Joseph V; Rueckert, Daniel
2013-08-01
We propose a novel method for the automatic segmentation of brain MRI images by using discriminative dictionary learning and sparse coding techniques. In the proposed method, dictionaries and classifiers are learned simultaneously from a set of brain atlases, which can then be used for the reconstruction and segmentation of an unseen target image. The proposed segmentation strategy is based on image reconstruction, which is in contrast to most existing atlas-based labeling approaches that rely on comparing image similarities between atlases and target images. In addition, we propose a Fixed Discriminative Dictionary Learning for Segmentation (F-DDLS) strategy, which can learn dictionaries offline and perform segmentations online, enabling a significant speed-up in the segmentation stage. The proposed method has been evaluated for the hippocampus segmentation of 80 healthy ICBM subjects and 202 ADNI images. The robustness of the proposed method, especially of our F-DDLS strategy, was validated by training and testing on different subject groups in the ADNI database. The influence of different parameters was studied and the performance of the proposed method was also compared with that of the nonlocal patch-based approach. The proposed method achieved a median Dice coefficient of 0.879 on 202 ADNI images and 0.890 on 80 ICBM subjects, which is competitive compared with state-of-the-art methods.
Imaging of human tooth using ultrasound based chirp-coded nonlinear time reversal acoustics.
Dos Santos, Serge; Prevorovsky, Zdenek
2011-08-01
Human tooth imaging sonography is investigated experimentally with an acousto-optic noncoupling set-up based on the chirp-coded nonlinear time reversal acoustic concept. The complexity of the tooth internal structure (enamel-dentine interface, cracks between internal tubules) is analyzed by adapting the nonlinear elastic wave spectroscopy (NEWS) with the objective of the tomography of damage. Optimization of excitations using intrinsic symmetries, such as time reversal (TR) invariance, reciprocity, correlation properties are then proposed and implemented experimentally. The proposed medical application of this TR-NEWS approach is implemented on a third molar human tooth and constitutes an alternative of noncoupling echodentography techniques. A 10 MHz bandwidth ultrasonic instrumentation has been developed including a laser vibrometer and a 20 MHz contact piezoelectric transducer. The calibrated chirp-coded TR-NEWS imaging of the tooth is obtained using symmetrized excitations, pre- and post-signal processing, and the highly sensitive 14 bit resolution TR-NEWS instrumentation previously calibrated. Nonlinear signature coming from the symmetry properties is observed experimentally in the tooth using this bi-modal TR-NEWS imaging after and before the focusing induced by the time-compression process. The TR-NEWS polar B-scan of the tooth is described and suggested as a potential application for modern echodentography. It constitutes the basis of the self-consistent harmonic imaging sonography for monitoring cracks propagation in the dentine, responsible of human tooth structural health.
Embedded morphological dilation coding for 2D and 3D images
NASA Astrophysics Data System (ADS)
Lazzaroni, Fabio; Signoroni, Alberto; Leonardi, Riccardo
2002-01-01
Current wavelet-based image coders obtain high performance thanks to the identification and the exploitation of the statistical properties of natural images in the transformed domain. Zerotree-based algorithms, as Embedded Zerotree Wavelets (EZW) and Set Partitioning In Hierarchical Trees (SPIHT), offer high Rate-Distortion (RD) coding performance and low computational complexity by exploiting statistical dependencies among insignificant coefficients on hierarchical subband structures. Another possible approach tries to predict the clusters of significant coefficients by means of some form of morphological dilation. An example of a morphology-based coder is the Significance-Linked Connected Component Analysis (SLCCA) that has shown performance which are comparable to the zerotree-based coders but is not embedded. A new embedded bit-plane coder is proposed here based on morphological dilation of significant coefficients and context based arithmetic coding. The algorithm is able to exploit both intra-band and inter-band statistical dependencies among wavelet significant coefficients. Moreover, the same approach is used both for two and three-dimensional wavelet-based image compression. Finally we the algorithms are tested on some 2D images and on a medical volume, by comparing the RD results to those obtained with the state-of-the-art wavelet-based coders.
Backbone fractal dimension and fractal hybrid orbital of protein structure
NASA Astrophysics Data System (ADS)
Peng, Xin; Qi, Wei; Wang, Mengfan; Su, Rongxin; He, Zhimin
2013-12-01
Fractal geometry analysis provides a useful and desirable tool to characterize the configuration and structure of proteins. In this paper we examined the fractal properties of 750 folded proteins from four different structural classes, namely (1) the α-class (dominated by α-helices), (2) the β-class (dominated by β-pleated sheets), (3) the (α/β)-class (α-helices and β-sheets alternately mixed) and (4) the (α + β)-class (α-helices and β-sheets largely segregated) by using two fractal dimension methods, i.e. "the local fractal dimension" and "the backbone fractal dimension" (a new and useful quantitative parameter). The results showed that the protein molecules exhibit a fractal behavior in the range of 1 ⩽ N ⩽ 15 (N is the number of the interval between two adjacent amino acid residues), and the value of backbone fractal dimension is distinctly greater than that of local fractal dimension for the same protein. The average value of two fractal dimensions decreased in order of α > α/β > α + β > β. Moreover, the mathematical formula for the hybrid orbital model of protein based on the concept of backbone fractal dimension is in good coincidence with that of the similarity dimension. So it is a very accurate and simple method to analyze the hybrid orbital model of protein by using the backbone fractal dimension.
The Classification of HEp-2 Cell Patterns Using Fractal Descriptor.
Xu, Rudan; Sun, Yuanyuan; Yang, Zhihao; Song, Bo; Hu, Xiaopeng
2015-07-01
Indirect immunofluorescence (IIF) with HEp-2 cells is considered as a powerful, sensitive and comprehensive technique for analyzing antinuclear autoantibodies (ANAs). The automatic classification of the HEp-2 cell images from IIF has played an important role in diagnosis. Fractal dimension can be used on the analysis of image representing and also on the property quantification like texture complexity and spatial occupation. In this study, we apply the fractal theory in the application of HEp-2 cell staining pattern classification, utilizing fractal descriptor firstly in the HEp-2 cell pattern classification with the help of morphological descriptor and pixel difference descriptor. The method is applied to the data set of MIVIA and uses the support vector machine (SVM) classifier. Experimental results show that the fractal descriptor combining with morphological descriptor and pixel difference descriptor makes the precisions of six patterns more stable, all above 50%, achieving 67.17% overall accuracy at best with relatively simple feature vectors.
Shekhar, Himanshu; Doyley, Marvin M.
2012-01-01
Purpose: Subharmonic intravascular ultrasound imaging (S-IVUS) could visualize the adventitial vasa vasorum, but the high pressure threshold required to incite subharmonic behavior in an ultrasound contrast agent will compromise sensitivity—a trait that has hampered the clinical use of S-IVUS. The purpose of this study was to assess the feasibility of using coded-chirp excitations to improve the sensitivity and axial resolution of S-IVUS. Methods: The subharmonic response of Targestar-pTM, a commercial microbubble ultrasound contrast agent (UCA), to coded-chirp (5%–20% fractional bandwidth) pulses and narrowband sine-burst (4% fractional bandwidth) pulses was assessed, first using computer simulations and then experimentally. Rectangular windowed excitation pulses with pulse durations ranging from 0.25 to 3 μs were used in all studies. All experimental studies were performed with a pair of transducers (20 MHz/10 MHz), both with diameter of 6.35 mm and focal length of 50 mm. The size distribution of the UCA was measured with a CasyTM Cell counter. Results: The simulation predicted a pressure threshold that was an order of magnitude higher than that determined experimentally. However, all other predictions were consistent with the experimental observations. It was predicted that: (1) exciting the agent with chirps would produce stronger subharmonic response relative to those produced by sine-bursts; (2) increasing the fractional bandwidth of coded-chirp excitation would increase the sensitivity of subharmonic imaging; and (3) coded-chirp would increase axial resolution. The experimental results revealed that subharmonic-to-fundamental ratios obtained with chirps were 5.7 dB higher than those produced with sine-bursts of similar duration. The axial resolution achieved with 20% fractional bandwidth chirps was approximately twice that achieved with 4% fractional bandwidth sine-bursts. Conclusions: The coded-chirp method is a suitable excitation strategy for
The laser linewidth effect on the image quality of phase coded synthetic aperture ladar
NASA Astrophysics Data System (ADS)
Cai, Guangyu; Hou, Peipei; Ma, Xiaoping; Sun, Jianfeng; Zhang, Ning; Li, Guangyuan; Zhang, Guo; Liu, Liren
2015-12-01
The phase coded (PC) waveform in synthetic aperture ladar (SAL) outperforms linear frequency modulated (LFM) signal in lower side lobe, shorter pulse duration and making the rigid control of the chirp starting point in every pulse unnecessary. Inherited from radar PC waveform and strip map SAL, the backscattered signal of a point target in PC SAL was listed and the two dimensional match filtering algorithm was introduced to focus a point image. As an inherent property of laser, linewidth is always detrimental to coherent ladar imaging. With the widely adopted laser linewidth model, the effect of laser linewidth on SAL image quality was theoretically analyzed and examined via Monte Carlo simulation. The research gives us a clear view of how to select linewidth parameters in the future PC SAL systems.
Biconjugate gradient stabilized method in image deconvolution of a wavefront coding system
NASA Astrophysics Data System (ADS)
Liu, Peng; Liu, Qin-xiao; Zhao, Ting-yu; Chen, Yan-ping; Yu, Fei-hong
2013-04-01
The point spread function (PSF) is a non-rotational symmetric for the wavefront coding (WFC) system with a cubic phase mask (CPM). Antireflective boundary conditions (BCs) are used to eliminate the ringing effect on the border and vibration on the edge of the image. The Kronecker product approximation is used to reduce the computation consumption. The image-formation process of the WFC system is transformed into a matrix equation. In order to save storage space, biconjugate gradient (Bi-CG) and biconjugate gradient stabilized (Bi-CGSTAB) methods are used to solve the asymmetric matrix equation, which is a typical iteration algorithm of the Krylov subspace using the two-side Lanczos process. Simulation and experimental results illustrate the efficiency of the proposed algorithm for the image deconvolution. The result based on the Bi-CGSTAB method is smoother than the classic Wiener filter, while preserving more details than the Truncated Singular Value Decomposition (TSVD) method.
Image coding using parallel implementations of the embedded zerotree wavelet algorithm
NASA Astrophysics Data System (ADS)
Creusere, Charles D.
1996-03-01
We explore here the implementation of Shapiro's embedded zerotree wavelet (EZW) image coding algorithms on an array of parallel processors. To this end, we first consider the problem of parallelizing the basic wavelet transform, discussing past work in this area and the compatibility of that work with the zerotree coding process. From this discussion, we present a parallel partitioning of the transform which is computationally efficient and which allows the wavelet coefficients to be coded with little or no additional inter-processor communication. The key to achieving low data dependence between the processors is to ensure that each processor contains only entire zerotrees of wavelet coefficients after the decomposition is complete. We next quantify the rate-distortion tradeoffs associated with different levels of parallelization for a few variations of the basic coding algorithm. Studying these results, we conclude that the quality of the coder decreases as the number of parallel processors used to implement it increases. Noting that the performance of the parallel algorithm might be unacceptably poor for large processor arrays, we also develop an alternate algorithm which always achieves the same rate-distortion performance as the original sequential EZW algorithm at the cost of higher complexity and reduced scalability.
Seenivasagam, V; Velumani, R
2013-01-01
Healthcare institutions adapt cloud based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT)-Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to patient data encoded into a Quick Response (QR) code serves as the watermark. In the proposed scheme, the medical image is not subjected to degradations due to watermarking. Patient authentication and authorized access to patient data are realized on combining a Secret Share with the Master Share constructed from invariant features of the medical image. The Hu's invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with Checkmark software and is found to be robust to both geometric and non geometric attacks.
Application of wavelets to image coding in an rf-link communication system
NASA Astrophysics Data System (ADS)
Liou, C. S. J.; Conners, Gary H.; Muczynski, Joe
1995-04-01
The joint University of Rochester/Rochester Institute of Technology `Center for Electronic Imaging Systems' (CEIS) is designed to focus on research problems of interest to industrial sponsors, especially the Rochester Imaging Consortium. Compression of tactical images for transmission over an rf link is an example of this type of research project which is being worked on in collaboration with one of the CEIS sponsors, Harris Corporation/RF Communications. The Harris digital video imagery transmission system (DVITS) is designed to fulfill the need to transmit secure imagery between unwired locations at real-time rates. DVITS specializes in transmission systems for users who rely on hf equipment operating at the low end of the frequency spectrum. However, the inherently low bandwidth of hf combined with transmission characteristics such as fading and dropout severely restrict the effective throughput. The problem at designing a system such as DVITS is particularly challenging because of bandwidth and signal/noise limitations, and because of the dynamic nature of the operational environment. In this paper, a novel application of wavelets in tactical image coding is proposed to replace the current DCT compression algorithm in the DVITS system. THe effects of channel noise on the received image are determined and various design strategies combining image segmentation, compression, and error correction are described.
A real-time chirp-coded imaging system with tissue attenuation compensation.
Ramalli, A; Guidi, F; Boni, E; Tortoli, P
2015-07-01
In ultrasound imaging, pulse compression methods based on the transmission (TX) of long coded pulses and matched receive filtering can be used to improve the penetration depth while preserving the axial resolution (coded-imaging). The performance of most of these methods is affected by the frequency dependent attenuation of tissue, which causes mismatch of the receiver filter. This, together with the involved additional computational load, has probably so far limited the implementation of pulse compression methods in real-time imaging systems. In this paper, a real-time low-computational-cost coded-imaging system operating on the beamformed and demodulated data received by a linear array probe is presented. The system has been implemented by extending the firmware and the software of the ULA-OP research platform. In particular, pulse compression is performed by exploiting the computational resources of a single digital signal processor. Each image line is produced in less than 20 μs, so that, e.g., 192-line frames can be generated at up to 200 fps. Although the system may work with a large class of codes, this paper has been focused on the test of linear frequency modulated chirps. The new system has been used to experimentally investigate the effects of tissue attenuation so that the design of the receive compression filter can be accordingly guided. Tests made with different chirp signals confirm that, although the attainable compression gain in attenuating media is lower than the theoretical value expected for a given TX Time-Bandwidth product (BT), good SNR gains can be obtained. For example, by using a chirp signal having BT=19, a 13 dB compression gain has been measured. By adapting the frequency band of the receiver to the band of the received echo, the signal-to-noise ratio and the penetration depth have been further increased, as shown by real-time tests conducted on phantoms and in vivo. In particular, a 2.7 dB SNR increase has been measured through a
Inferential multi-spectral image compression based on distributed source coding
NASA Astrophysics Data System (ADS)
Wu, Xian-yun; Li, Yun-song; Wu, Cheng-ke; Kong, Fan-qiang
2008-08-01
Based on the analyses of the interferential multispectral imagery(IMI), a new compression algorithm based on distributed source coding is proposed. There are apparent push motions between the IMI sequences, the relative shift between two images is detected by the block match algorithm at the encoder. Our algorithm estimates the rate of each bitplane with the estimated side information frame. then our algorithm adopts a ROI coding algorithm, in which the rate-distortion lifting procedure is carried out in rate allocation stage. Using our algorithm, the FBC can be removed from the traditional scheme. The compression algorithm developed in the paper can obtain up to 3dB's gain comparing with JPEG2000 and significantly reduce the complexity and storage consumption comparing with 3D-SPIHT at the cost of slight degrade in PSNR.
Simulation of image formation in x-ray coded aperture microscopy with polycapillary optics.
Korecki, P; Roszczynialski, T P; Sowa, K M
2015-04-06
In x-ray coded aperture microscopy with polycapillary optics (XCAMPO), the microstructure of focusing polycapillary optics is used as a coded aperture and enables depth-resolved x-ray imaging at a resolution better than the focal spot dimensions. Improvements in the resolution and development of 3D encoding procedures require a simulation model that can predict the outcome of XCAMPO experiments. In this work we introduce a model of image formation in XCAMPO which enables calculation of XCAMPO datasets for arbitrary positions of the object relative to the focal plane as well as to incorporate optics imperfections. In the model, the exit surface of the optics is treated as a micro-structured x-ray source that illuminates a periodic object. This makes it possible to express the intensity of XCAMPO images as a convolution series and to perform simulations by means of fast Fourier transforms. For non-periodic objects, the model can be applied by enforcing artificial periodicity and setting the spatial period larger then the field-of-view. Simulations are verified by comparison with experimental data.
Fractal Images: Procedure and Theory.
1987-08-01
the group of one-on-one analytic map- pings is the group of Moebius transforms, that is, maps of the form -az + b g(z) cz + d ad - bc * 0cz Moebius ...powerful. Let h(z) = Az2 + 2Bz + C be a general quadratic equation in C. For f(z) as above, it is possible to solve for a Moebius transform g(z) and a...given by g(z) = Az + B I = AC + B -B 2 Since two dynamical systems in C conjugate by a Moebius transform are the same, every quadratic dynamical system
Hsü, K J; Hsü, A J
1990-01-01
Music critics have compared Bach's music to the precision of mathematics. What "mathematics" and what "precision" are the questions for a curious scientist. The purpose of this short note is to suggest that the mathematics is, at least in part, Mandelbrot's fractal geometry and the precision is the deviation from a log-log linear plot. PMID:11607061
ERIC Educational Resources Information Center
Camp, Dane R.
1991-01-01
After introducing the two-dimensional Koch curve, which is generated by simple recursions on an equilateral triangle, the process is extended to three dimensions with simple recursions on a regular tetrahedron. Included, for both fractal sequences, are iterative formulae, illustrations of the first several iterations, and a sample PASCAL program.…
ERIC Educational Resources Information Center
Marks, Tim K.
1992-01-01
Presents a three-lesson unit that uses fractal geometry to measure the coastline of Massachusetts. Two lessons provide hands-on activities utilizing compass and grid methods to perform the measurements and the third lesson analyzes and explains the results of the activities. (MDH)
A Fractal Perspective on Scale in Geography
NASA Astrophysics Data System (ADS)
Jiang, Bin; Brandt, S.
2016-06-01
Scale is a fundamental concept that has attracted persistent attention in geography literature over the past several decades. However, it creates enormous confusion and frustration, particularly in the context of geographic information science, because of scale-related issues such as image resolution, and the modifiable areal unit problem (MAUP). This paper argues that the confusion and frustration mainly arise from Euclidean geometric thinking, with which locations, directions, and sizes are considered absolute, and it is time to reverse this conventional thinking. Hence, we review fractal geometry, together with its underlying way of thinking, and compare it to Euclidean geometry. Under the paradigm of Euclidean geometry, everything is measurable, no matter how big or small. However, geographic features, due to their fractal nature, are essentially unmeasurable or their sizes depend on scale. For example, the length of a coastline, the area of a lake, and the slope of a topographic surface are all scale-dependent. Seen from the perspective of fractal geometry, many scale issues, such as the MAUP, are inevitable. They appear unsolvable, but can be dealt with. To effectively deal with scale-related issues, we introduce topological and scaling analyses based on street-related concepts such as natural streets, street blocks, and natural cities. We further contend that spatial heterogeneity, or the fractal nature of geographic features, is the first and foremost effect of two spatial properties, because it is general and universal across all scales. Keywords: Scaling, spatial heterogeneity, conundrum of length, MAUP, topological analysis
Flames in fractal grid generated turbulence
NASA Astrophysics Data System (ADS)
Goh, K. H. H.; Geipel, P.; Hampp, F.; Lindstedt, R. P.
2013-12-01
Twin premixed turbulent opposed jet flames were stabilized for lean mixtures of air with methane and propane in fractal grid generated turbulence. A density segregation method was applied alongside particle image velocimetry to obtain velocity and scalar statistics. It is shown that the current fractal grids increase the turbulence levels by around a factor of 2. Proper orthogonal decomposition (POD) was applied to show that the fractal grids produce slightly larger turbulent structures that decay at a slower rate as compared to conventional perforated plates. Conditional POD (CPOD) was also implemented using the density segregation technique and the results show that CPOD is essential to segregate the relative structures and turbulent kinetic energy distributions in each stream. The Kolmogorov length scales were also estimated providing values ∼0.1 and ∼0.5 mm in the reactants and products, respectively. Resolved profiles of flame surface density indicate that a thin flame assumption leading to bimodal statistics is not perfectly valid under the current conditions and it is expected that the data obtained will be of significant value to the development of computational methods that can provide information on the conditional structure of turbulence. It is concluded that the increase in the turbulent Reynolds number is without any negative impact on other parameters and that fractal grids provide a route towards removing the classical problem of a relatively low ratio of turbulent to bulk strain associated with the opposed jet configuration.
NASA Astrophysics Data System (ADS)
Maslovskaya, A. G.; Barabash, T. K.
2017-01-01
The article presents some results of fractal analysis of ferroelectric domain structure images visualized with scanning electron microscope (SEM) techniques. The fractal and multifractal characteristics were estimated to demonstrate self-similar organization of ferroelectric domain structure registered with static and dynamic contrast modes of SEM. Fractal methods as sensitive analytical tools were used to indicate degree of domain structure and domain boundary imperfections. The electron irradiation-induced erosion effect of ferroelectric domain boundaries in electron beam-stimulated polarization current mode of SEM is characterized by considerable raising of fractal dimension. For dynamic contrast mode of SEM there was revealed that complication of domain structure during its dynamics is specified by increase in fractal dimension of images and slight raising of boundary fractal dimension.
Biophilic fractals and the visual journey of organic screen-savers.
Taylor, R P; Sprott, J C
2008-01-01
Computers have led to the remarkable popularity of mathematically-generated fractal patterns. Fractals have also assumed a rapidly expanding role as an art form. Due to their growing impact on cultures around the world and their prevalence in nature, fractals constitute a central feature of our daily visual experiences throughout our lives. This intimate association raises a crucial question - does exposure to fractals have a positive impact on our mental and physical condition? This question raises the opportunity for readers of this journal to have some visual fun. Each year a different nonlinear inspired artist is featured on the front cover of the journal. This year, Scott Draves's fractal art works continues this tradition. In May 2007, we selected twenty of Draves's artworks and invited readers to vote for their favorites from this selection. The most popular images will feature on the front covers this year. In this article, we discuss fractal aesthetics and Draves's remarkable images.
Coded Mask Imaging of High Energy X-rays with CZT Detectors
NASA Astrophysics Data System (ADS)
Matteson, J. L.; Dowkontt, P. F.; Duttweiler, F.; Heindl, W. A.; Hink, P. L.; Huszar, G. L.; Kalemci, E.; Leblanc, P. C.; Rothschild, R. E.; Skelton, R. T.; Slavis, K. R.; Stephan, E. A.
1998-12-01
Coded mask imagers are appropriate for important objectives of high energy X-ray astronomy, e.g., gamma- ray burst localization, all-sky monitors and surveys, and deep surveys of limited regions. We report results from a coded mask imager developed to establish the proof-of-concept for this technique with CZT detectors. The detector is 2 mm thick with orthogonal crossed strip readout and an advanced electrode design to improve the energy resolution. Each detector face has 22 strip electrodes, and the strip pitch and pixel size are 500 microns. ASIC readout is used and the energy resolution varies from 3 to 6 keV FWHM over the 14 to 184 keV keV range. A coded mask with 2 x 2 cycles of a 23 x 23 MURA pattern (860 micron unit cell) was built from 600 micron thick tantalum to provide good X-ray modulation up to 200 keV. The detector, mask, and a tiny Gd-153 source of 41 keV X-rays were positioned with a spacing that caused the mask cells in the shadowgram to have a projected size of 1300 microns at the detector. Multiple detector positions were used to measure the shadowgram of a full mask cycle and this was recorded with 100 percent modulation transfer by the detector, due to its factor of 2.6 oversampling of the mask unit cell, and very high strip-to-strip selectivity and spatial accuracy. Deconvolution of the shadowgram produced a correlation image in which the source was detected as a 76-sigma peak with the correct FWHM and base diameter. Off-source image pixels had gaussian fluctuations that agree closely with the measurement statistics. Off-source image defects such as might be produced by systematic effects were too small to be seen and limited to <0.5 percent of the source peak. These results were obtained with the "raw" shadowgram and image; no "flat fielding" corrections were used.
NASA Astrophysics Data System (ADS)
Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.
2016-11-01
Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.
Voxel-coding method for quantification of vascular structure from 3D images
NASA Astrophysics Data System (ADS)
Soltanian-Zadeh, Hamid; Shahrokni, Ali; Zoroofi, Reza A.
2001-05-01
This paper presents an image processing method for information extraction from 3D images of vasculature. It automates the study of vascular structures by extracting quantitative information such as skeleton, length, diameter, and vessel-to- tissue ratio for different vessels as well as their branches. Furthermore, it generates 3D visualization of vessels based on desired anatomical characteristics such as vessel diameter or 3D connectivity. Steps of the proposed approach are as follows. (1) Preprocessing, in which intensity adjustment, optimal thresholding, and median filtering are done. (2) 3D thinning, in which medial axis and skeleton of the vessels are found. (3) Branch labeling, in which different branches are identified and each voxel is assigned to the corresponding branch. (4) Quantitation, in which length of each branch is estimated, based on the number of voxels assigned to it, and its diameter is calculated using the medial axis direction. (5) Visualization, in which vascular structure is shown in 3D, using color coding and surface rendering methods. We have tested and evaluated the proposed algorithms using simulated images of multi-branch vessels and real confocal microscopic images of the vessels in rat brains. Experimental results illustrate performance of the methods and usefulness of the results for medical image analysis applications.
Rate Distortion Analysis and Bit Allocation Scheme for Wavelet Lifting-Based Multiview Image Coding
NASA Astrophysics Data System (ADS)
Lasang, Pongsak; Kumwilaisak, Wuttipong
2009-12-01
This paper studies the distortion and the model-based bit allocation scheme of wavelet lifting-based multiview image coding. Redundancies among image views are removed by disparity-compensated wavelet lifting (DCWL). The distortion prediction of the low-pass and high-pass subbands of each image view from the DCWL process is analyzed. The derived distortion is used with different rate distortion models in the bit allocation of multiview images. Rate distortion models including power model, exponential model, and the proposed combining the power and exponential models are studied. The proposed rate distortion model exploits the accuracy of both power and exponential models in a wide range of target bit rates. Then, low-pass and high-pass subbands are compressed by SPIHT (Set Partitioning in Hierarchical Trees) with a bit allocation solution. We verify the derived distortion and the bit allocation with several sets of multiview images. The results show that the bit allocation solution based on the derived distortion and our bit allocation scheme provide closer results to those of the exhaustive search method in both allocated bits and peak-signal-to-noise ratio (PSNR). It also outperforms the uniform bit allocation and uniform bit allocation with normalized energy in the order of 1.7-2 and 0.3-1.4 dB, respectively.
Hoffman, Robert M
2016-03-01
Fluorescent proteins are very bright and available in spectrally-distinct colors, enable the imaging of color-coded cancer cells growing in vivo and therefore the distinction of cancer cells with different genetic properties. Non-invasive and intravital imaging of cancer cells with fluorescent proteins allows the visualization of distinct genetic variants of cancer cells down to the cellular level in vivo. Cancer cells with increased or decreased ability to metastasize can be distinguished in vivo. Gene exchange in vivo which enables low metastatic cancer cells to convert to high metastatic can be color-coded imaged in vivo. Cancer stem-like and non-stem cells can be distinguished in vivo by color-coded imaging. These properties also demonstrate the vast superiority of imaging cancer cells in vivo with fluorescent proteins over photon counting of luciferase-labeled cancer cells.
NASA Astrophysics Data System (ADS)
Lang, Haitao; Liu, Liren; Yang, Qingguo
2007-04-01
In this paper, we propose a novel three-dimensional imaging method by which the object is captured by a coded cameras array (CCA) and computationally reconstructed as a series of longitudinal layered surface images of the object. The distribution of cameras in array, named code pattern, is crucial for reconstructed images fidelity when the correlation decoding is used. We use DIRECT global optimization algorithm to design the code patterns that possess proper imaging property. We have conducted primary experiments to verify and test the performance of the proposed method with a simple discontinuous object and a small-scale CCA including nine cameras. After certain procedures such as capturing, photograph integrating, computational reconstructing and filtering, etc., we obtain reconstructed longitudinal layered surface images of the object with higher signal-to-noise ratio. The results of experiments show that the proposed method is feasible. It is a promising method to be used in fields such as remote sensing, machine vision, etc.
Visible-infrared achromatic imaging by wavefront coding with wide-angle automobile camera
NASA Astrophysics Data System (ADS)
Ohta, Mitsuhiko; Sakita, Koichi; Shimano, Takeshi; Sugiyama, Takashi; Shibasaki, Susumu
2016-09-01
We perform an experiment of achromatic imaging with wavefront coding (WFC) using a wide-angle automobile lens. Our original annular phase mask for WFC was inserted to the lens, for which the difference between the focal positions at 400 nm and at 950 nm is 0.10 mm. We acquired images of objects using a WFC camera with this lens under the conditions of visible and infrared light. As a result, the effect of the removal of the chromatic aberration of the WFC system was successfully determined. Moreover, we fabricated a demonstration set assuming the use of a night vision camera in an automobile and showed the effect of the WFC system.
Sparse spike coding : applications of neuroscience to the processing of natural images
NASA Astrophysics Data System (ADS)
Perrinet, Laurent U.
2008-04-01
If modern computers are sometimes superior to cognition in some specialized tasks such as playing chess or browsing a large database, they can't beat the efficiency of biological vision for such simple tasks as recognizing a relative or following an object in a complex background. We present in this paper our attempt at outlining the dynamical, parallel and event-based representation for vision in the architecture of the central nervous system. We will illustrate this by showing that in a signal matching framework, a L/LN (linear/non-linear) cascade may efficiently transform a sensory signal into a neural spiking signal and we apply this framework to a model retina. However, this code gets redundant when using an over-complete basis as is necessary for modeling the primary visual cortex: we therefore optimize the efficiency cost by increasing the sparseness of the code. This is implemented by propagating and canceling redundant information using lateral interactions. We compare the eciency of this representation in terms of compression as the reconstruction quality as a function of the coding length. This will correspond to a modification of the Matching Pursuit algorithm where the ArgMax function is optimized for competition, or Competition Optimized Matching Pursuit (COMP). We will particularly focus on bridging neuroscience and image processing and on the advantages of such an interdisciplinary approach.
LIDAR pulse coding for high resolution range imaging at improved refresh rate.
Kim, Gunzung; Park, Yongwan
2016-10-17
In this study, a light detection and ranging system (LIDAR) was designed that codes pixel location information in its laser pulses using the direct- sequence optical code division multiple access (DS-OCDMA) method in conjunction with a scanning-based microelectromechanical system (MEMS) mirror. This LIDAR can constantly measure the distance without idle listening time for the return of reflected waves because its laser pulses include pixel location information encoded by applying the DS-OCDMA. Therefore, this emits in each bearing direction without waiting for the reflected wave to return. The MEMS mirror is used to deflect and steer the coded laser pulses in the desired bearing direction. The receiver digitizes the received reflected pulses using a low-temperature-grown (LTG) indium gallium arsenide (InGaAs) based photoconductive antenna (PCA) and the time-to-digital converter (TDC) and demodulates them using the DS-OCDMA. When all of the reflected waves corresponding to the pixels forming a range image are received, the proposed LIDAR generates a point cloud based on the time-of-flight (ToF) of each reflected wave. The results of simulations performed on the proposed LIDAR are compared with simulations of existing LIDARs.
Fractal Particles: Titan's Thermal Structure and IR Opacity
NASA Technical Reports Server (NTRS)
McKay, C. P.; Rannou, P.; Guez, L.; Young, E. F.; DeVincenzi, Donald (Technical Monitor)
1998-01-01
Titan's haze particles are the principle opacity at solar wavelengths. Most past work in modeling these particles has assumed spherical particles. However, observational evidence strongly favors fractal shapes for the haze particles. We consider the implications of fractal particles for the thermal structure and near infrared opacity of Titan's atmosphere. We find that assuming fractal particles with the optical properties based on laboratory tholin material and with a production rate that allows for a match to the geometric albedo results in warmer troposphere and surface temperatures compared to spherical particles. In the near infrared (1-3 microns) the predicted opacity of the fractal particles is up to a factor of two less than for spherical particles. This has implications for the ability of Cassini to image Titan's surface at 1 micron.
Fractal analysis of circulating platelets in type 2 diabetic patients.
Bianciardi, G; Tanganelli, I
2015-01-01
This paper investigates the use of computerized fractal analysis for objective characterization by means of transmission electron microscopy of the complexity of circulating platelets collected from healthy individuals and from type 2 diabetic patients, a pathologic condition in which platelet hyperreactivity has been described. Platelet boundaries were extracted by means of automatically image analysis. Local fractal dimension by box counting (measure of geometric complexity) was automatically calculated. The results showed that the platelet boundary observed by electron microscopy is fractal and that the shape of the circulating platelets is significantly more complex in the diabetic patients in comparison to healthy subjects (p < 0.01), with 100% correct classification. In vitro activated platelets from healthy subjects show an analogous increase of geometric complexity. Computerized fractal analysis of platelet shape by transmission electron microscopy can provide accurate, quantitative, data to study platelet activation in diabetes mellitus.
Is fractal geometry useful in medicine and biomedical sciences?
Heymans, O; Fissette, J; Vico, P; Blacher, S; Masset, D; Brouers, F
2000-03-01
Fractal geometry has become very useful in the understanding of many phenomena in various fields such as astrophysics, economy or agriculture and recently in medicine. After a brief intuitive introduction to the basis of fractal geometry, the clue is made about the correlation between Df and the complexity or the irregularity of a structure. However, fractal analysis must be applied with certain caution in natural objects such as bio-medical ones. The cardio-vascular system remains one of the most important fields of application of these kinds of approach. Spectral analysis of the R-R interval, morphology of the distal coronary arteries constitute two examples. Other very interesting applications are founded in bacteriology, medical imaging or ophthalmology. In our institution, we apply fractal analysis in order to quantitate angiogenesis and other vascular processes.
Fractal characterization and wettability of ion treated silicon surfaces
NASA Astrophysics Data System (ADS)
Yadav, R. P.; Kumar, Tanuj; Baranwal, V.; Vandana, Kumar, Manvendra; Priya, P. K.; Pandey, S. N.; Mittal, A. K.
2017-02-01
Fractal characterization of surface morphology can be useful as a tool for tailoring the wetting properties of solid surfaces. In this work, rippled surfaces of Si (100) are grown using 200 keV Ar+ ion beam irradiation at different ion doses. Relationship between fractal and wetting properties of these surfaces are explored. The height-height correlation function extracted from atomic force microscopic images, demonstrates an increase in roughness exponent with an increase in ion doses. A steep variation in contact angle values is found for low fractal dimensions. Roughness exponent and fractal dimensions are found correlated with the static water contact angle measurement. It is observed that after a crossover of the roughness exponent, the surface morphology has a rippled structure. Larger values of interface width indicate the larger ripples on the surface. The contact angle of water drops on such surfaces is observed to be lowest. Autocorrelation function is used for the measurement of ripple wavelength.
Magnetic Resonance Image Phantom Code System to Calibrate in vivo Measurement Systems.
HICKMAN, DAVE
1997-07-17
Version 00 MRIPP provides relative calibration factors for the in vivo measurement of internally deposited photon emitting radionuclides within the human body. The code includes a database of human anthropometric structures (phantoms) that were constructed from whole body Magnetic Resonance Images. The database contains a large variety of human images with varying anatomical structure. Correction factors are obtained using Monte Carlo transport of photons through the voxel geometry of the phantom. Correction factors provided by MRIPP allow users of in vivo measurement systems (e.g., whole body counters) to calibrate these systems with simple sources and obtain subject specific calibrations. Note that the capability to format MRI data for use with this system is not included; therefore, one must use the phantom data included in this package. MRIPP provides a simple interface to perform Monte Carlo simulation of photon transport through the human body. MRIPP also provides anthropometric information (e.g., height, weight, etc.) for individuals used to generate the phantom database. A modified Voxel version of the Los Alamos National Laboratory MCNP4A code is used for the Monte Carlo simulation. The Voxel version Fortran patch to MCNP4 and MCNP4A (Monte Carlo N-Particle transport simulation) and the MCNP executable are included in this distribution, but the MCNP Fortran source is not included. It was distributed by RSICC as CCC-200 but is now obsoleted by the current release MCNP4B.
Cellular-resolution population imaging reveals robust sparse coding in the Drosophila Mushroom Body
Honegger, Kyle S.; Campbell, Robert A. A.; Turner, Glenn C.
2011-01-01
Sensory stimuli are represented in the brain by the activity of populations of neurons. In most biological systems, studying population coding is challenging since only a tiny proportion of cells can be recorded simultaneously. Here we used 2-photon imaging to record neural activity in the relatively simple Drosophila mushroom body (MB), an area involved in olfactory learning and memory. Using the highly sensitive calcium indicator, GCaMP3, we simultaneously monitored the activity of >100 MB neurons in vivo (about 5% of the total population). The MB is thought to encode odors in sparse patterns of activity, but the code has yet to be explored either on a population level or with a wide variety of stimuli. We therefore imaged responses to odors chosen to evaluate the robustness of sparse representations. Different odors activated distinct patterns of MB neurons, however we found no evidence for spatial organization of neurons by either response probability or odor tuning within the cell body layer. The degree of sparseness was consistent across a wide range of stimuli, from monomolecular odors to artificial blends and even complex natural smells. Sparseness was mainly invariant across concentrations, largely because of the influence of recent odor experience. Finally, in contrast to sensory processing in other systems, no response features distinguished natural stimuli from monomolecular odors. Our results indicate that the fundamental feature of odor processing in the MB is to create sparse stimulus representations in a format that facilitates arbitrary associations between odor and punishment or reward. PMID:21849538
Adaptive three-dimensional motion-compensated wavelet transform for image sequence coding
NASA Astrophysics Data System (ADS)
Leduc, Jean-Pierre
1994-09-01
This paper describes a 3D spatio-temporal coding algorithm for the bit-rate compression of digital-image sequences. The coding scheme is based on different specificities namely, a motion representation with a four-parameter affine model, a motion-adapted temporal wavelet decomposition along the motion trajectories and a signal-adapted spatial wavelet transform. The motion estimation is performed on the basis of four-parameter affine transformation models also called similitude. This transformation takes into account translations, rotations and scalings. The temporal wavelet filter bank exploits bi-orthogonal linear-phase dyadic decompositions. The 2D spatial decomposition is based on dyadic signal-adaptive filter banks with either para-unitary or bi-orthogonal bases. The adaptive filtering is carried out according to a performance criterion to be optimized under constraints in order to eventually maximize the compression ratio at the expense of graceful degradations of the subjective image quality. The major principles of the present technique is, in the analysis process, to extract and to separate the motion contained in the sequences from the spatio-temporal redundancy and, in the compression process, to take into account of the rate-distortion function on the basis of the spatio-temporal psycho-visual properties to achieve the most graceful degradations. To complete this description of the coding scheme, the compression procedure is therefore composed of scalar quantizers which exploit the spatio-temporal 3D psycho-visual properties of the Human Visual System and of entropy coders which finalize the bit rate compression.
Nonlinear Spike-And-Slab Sparse Coding for Interpretable Image Encoding
Shelton, Jacquelyn A.; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg
2015-01-01
Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros for e.g. the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process. PMID:25954947
Nonlinear spike-and-slab sparse coding for interpretable image encoding.
Shelton, Jacquelyn A; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg
2015-01-01
Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros for e.g. the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process.
Huang, F.; Peng, R. D.; Liu, Y. H.; Chen, Z. Y.; Ye, M. F.; Wang, L.
2012-09-15
Fractal dust grains of different shapes are observed in a radially confined magnetized radio frequency plasma. The fractal dimensions of the dust structures in two-dimensional (2D) horizontal dust layers are calculated, and their evolution in the dust growth process is investigated. It is found that as the dust grains grow the fractal dimension of the dust structure decreases. In addition, the fractal dimension of the center region is larger than that of the entire region in the 2D dust layer. In the initial growth stage, the small dust particulates at a high number density in a 2D layer tend to fill space as a normal surface with fractal dimension D = 2. The mechanism of the formation of fractal dust grains is discussed.
Fractals in geology and geophysics
NASA Technical Reports Server (NTRS)
Turcotte, Donald L.
1989-01-01
The definition of a fractal distribution is that the number of objects N with a characteristic size greater than r scales with the relation N of about r exp -D. The frequency-size distributions for islands, earthquakes, fragments, ore deposits, and oil fields often satisfy this relation. This application illustrates a fundamental aspect of fractal distributions, scale invariance. The requirement of an object to define a scale in photograhs of many geological features is one indication of the wide applicability of scale invariance to geological problems; scale invariance can lead to fractal clustering. Geophysical spectra can also be related to fractals; these are self-affine fractals rather than self-similar fractals. Examples include the earth's topography and geoid.
Multilayer adsorption on fractal surfaces.
Vajda, Péter; Felinger, Attila
2014-01-10
Multilayer adsorption is often observed in liquid chromatography. The most frequently employed model for multilayer adsorption is the BET isotherm equation. In this study we introduce an interpretation of multilayer adsorption measured on liquid chromatographic stationary phases based on the fractal theory. The fractal BET isotherm model was successfully used to determine the apparent fractal dimension of the adsorbent surface. The nonlinear fitting of the fractal BET equation gives us the estimation of the adsorption equilibrium constants and the monolayer saturation capacity of the adsorbent as well. In our experiments, aniline and proline were used as test molecules on reversed phase and normal phase columns, respectively. Our results suggest an apparent fractal dimension 2.88-2.99 in the case of reversed phase adsorbents, in the contrast with a bare silica column with a fractal dimension of 2.54.
Robust image transmission using a new joint source channel coding algorithm and dual adaptive OFDM
NASA Astrophysics Data System (ADS)
Farshchian, Masoud; Cho, Sungdae; Pearlman, William A.
2004-01-01
In this paper we consider the problem of robust image coding and packetization for the purpose of communications over slow fading frequency selective channels and channels with a shaped spectrum like those of digital subscribe lines (DSL). Towards this end, a novel and analytically based joint source channel coding (JSCC) algorithm to assign unequal error protection is presented. Under a block budget constraint, the image bitstream is de-multiplexed into two classes with different error responses. The algorithm assigns unequal error protection (UEP) in a way to minimize the expected mean square error (MSE) at the receiver while minimizing the probability of catastrophic failure. In order to minimize the expected mean square error at the receiver, the algorithm assigns unequal protection to the value bit class (VBC) stream. In order to minimizes the probability of catastrophic error which is a characteristic of progressive image coders, the algorithm assigns more protection to the location bit class (LBC) stream than the VBC stream. Besides having the advantage of being analytical and also numerically solvable, the algorithm is based on a new formula developed to estimate the distortion rate (D-R) curve for the VBC portion of SPIHT. The major advantage of our technique is that the worst case instantaneous minimum peak signal to noise ratio (PSNR) does not differ greatly from the averge MSE while this is not the case for the optimal single stream (UEP) system. Although both average PSNR of our method and the optimal single stream UEP are about the same, our scheme does not suffer erratic behavior because we have made the probability of catastrophic error arbitarily small. The coded image is sent via orthogonal frequency division multiplexing (OFDM) which is a known and increasing popular modulation scheme to combat ISI (Inter Symbol Interference) and impulsive noise. Using dual adaptive energy OFDM, we use the minimum energy necessary to send each bit stream at a
The Calculation of Fractal Dimension in the Presence of Non-Fractal Clutter
NASA Technical Reports Server (NTRS)
Herren, Kenneth A.; Gregory, Don A.
1999-01-01
The area of information processing has grown dramatically over the last 50 years. In the areas of image processing and information storage the technology requirements have far outpaced the ability of the community to meet demands. The need for faster recognition algorithms and more efficient storage of large quantities of data has forced the user to accept less than lossless retrieval of that data for analysis. In addition to clutter that is not the object of interest in the data set, often the throughput requirements forces the user to accept "noisy" data and to tolerate the clutter inherent in that data. It has been shown that some of this clutter, both the intentional clutter (clouds, trees, etc) as well as the noise introduced on the data by processing requirements can be modeled as fractal or fractal-like. Traditional methods using Fourier deconvolution on these sources of noise in frequency space leads to loss of signal and can, in many cases, completely eliminate the target of interest. The parameters that characterize fractal-like noise (predominately the fractal dimension) have been investigated and a technique to reduce or eliminate noise from real scenes has been developed. Examples of clutter reduced images are presented.
Comparison of different quantization strategies for subband coding of medical images
NASA Astrophysics Data System (ADS)
Castagno, Roberto; Lancini, Rosa C.; Egger, Olivier
1996-04-01
In this paper different methods for the quantization of wavelet transform coefficients are compared in view of medical imaging applications. The goal is to provide users with a comprehensive and application-oriented review of these techniques. The performance of four quantization methods (namely standard scalar quantization, embedded zerotree, variable dimension vector quantization and pyramid vector quantization) are compared with regard to their application in the field of medical imaging. In addition to the standard rate-distortion criterion, we took into account the possibility of bitrate control, the feasibility of real-time implementation, the genericity (for use in non-dedicated multimedia environments) of each approach. In addition, the diagnostical reliability of the decompressed images has been assessed during a viewing session and with the help of a specialist. Classical scalar quantization methods are briefly reviewed. As a result, it is shown that despite the relatively simple design of the optimum quantizers, their performance in terms of rate-distortion tradeoff are quite poor. For high quality subband coding, it is of major importance to exploit the existing zero-correlation across subbands as proposed with the embedded zerotree wavelet (EZW) algorithm. In this paper an improved EZW-algorithm is used which is termed embedded zerotree lossless (EZL) algorithm -- due to the importance of lossless compression in medical imaging applications -- having the additional possibility of producing an embedded lossless bitstream. VQ based methods take advantage of statistical properties of a block or a vector of data values, yielding good quality results of reconstructed images at the same bitrates. In this paper, we take in account two classes of VQ methods, random quantizers (VQ) and geometric quantizers (PVQ). Algorithms belonging to the first group (the most widely known being that developed by Linde-Buzo-Gray) suffer from the common drawback of requiring a
NASA Astrophysics Data System (ADS)
Latka, Miroslaw; Glaubic-Latka, Marta; Latka, Dariusz; West, Bruce J.
2004-04-01
We study the middle cerebral artery blood flow velocity (MCAfv) in humans using transcranial Doppler ultrasonography (TCD). Scaling properties of time series of the axial flow velocity averaged over a cardiac beat interval may be characterized by two exponents. The short time scaling exponent (STSE) determines the statistical properties of fluctuations of blood flow velocities in short-time intervals while the Hurst exponent describes the long-term fractal properties. In many migraineurs the value of the STSE is significantly reduced and may approach that of the Hurst exponent. This change in dynamical properties reflects the significant loss of short-term adaptability and the overall hyperexcitability of the underlying cerebral blood flow control system. We call this effect fractal rigidity.
Fractal polyzirconosiloxane cluster coatings
Sugama, T.
1992-08-01
Fractal polyzirconosiloxane (PZS) cluster films were prepared through the hydrolysis-polycondensation-pyrolysis synthesis of two-step HCl acid-NaOH base catalyzed sol precursors consisting of N-[3-(triethoxysilyl)propyl]-4,5-dihydroimidazole, Zr(OC{sub 3}H{sub 7}){sub 4}, methanol, and water. When amorphous PZSs were applied to aluminum as protective coatings against NaCl-induced corrosion, the effective film was that derived from the sol having a pH near the isoelectric point in the positive zeta potential region. The following four factors played an important role in assembling the protective PZS coating films: (1) a proper rate of condensation, (2) a moderate ratio of Si-O-Si to Si-O-Zr linkages formed in the PZS network, (3) hydrophobic characteristics, and (4) a specific microstructural geometry, in which large fractal clusters were linked together.
Fractal polyzirconosiloxane cluster coatings
Sugama, T.
1992-01-01
Fractal polyzirconosiloxane (PZS) cluster films were prepared through the hydrolysis-polycondensation-pyrolysis synthesis of two-step HCl acid-NaOH base catalyzed sol precursors consisting of N-(3-(triethoxysilyl)propyl)-4,5-dihydroimidazole, Zr(OC{sub 3}H{sub 7}){sub 4}, methanol, and water. When amorphous PZSs were applied to aluminum as protective coatings against NaCl-induced corrosion, the effective film was that derived from the sol having a pH near the isoelectric point in the positive zeta potential region. The following four factors played an important role in assembling the protective PZS coating films: (1) a proper rate of condensation, (2) a moderate ratio of Si-O-Si to Si-O-Zr linkages formed in the PZS network, (3) hydrophobic characteristics, and (4) a specific microstructural geometry, in which large fractal clusters were linked together.
Church, E.L.
1988-04-15
Surface finish measurements are usually fitted to models of the finish correlation function which are parametrized in terms of root-mean-square roughnesses, sigma, and correlation lengths, l. Highly finished optical surfaces, however, are frequently better described by fractal models, which involve inverse power-law spectra and are parametrized by spectral strengths, K/sub n/, and spectral indices, n. Analyzing measurements of fractal surfaces in terms of sigma and l gives results which are not intrinsic surface parameters but which depend on the bandwidth parameters of the measurement process used. This paper derives expressions for these pseudoparameters and discusses the errors involved in using them for the characterization and specification of surface finish.
Church, E.L.
1988-01-01
Surface finish measurements are usually fitted to models of the finish correlation function which are parameterized in terms of root-mean-square roughness, sigma, and correlation lengths, l. Highly-finished optical surfaces, however, are frequently better described by fractal models, which involve inverse-power-law spectra and are parameterized by spectral strengths, K/sub n/, and spectral indices, n. Analyzing measurements of fractal surfaces in terms of sigma and l gives results which are not intrinsic surface parameters but which depend on the bandwidth parameters of the measurement process used. This paper derives expressions for these pseudo parameters and discusses the errors involved in using them for the characterization and specification of surface finish. 30 refs., 5 figs., 1 tab.
NASA Astrophysics Data System (ADS)
Jun, Xie Cheng; Su, Yan; Wei, Zhang
2006-08-01
In this paper, a modified algorithm was introduced to improve Rice coding algorithm and researches of image compression with the CDF (2,2) wavelet lifting scheme was made. Our experiments show that the property of the lossless image compression is much better than Huffman, Zip, lossless JPEG, RAR, and a little better than (or equal to) the famous SPIHT. The lossless compression rate is improved about 60.4%, 45%, 26.2%, 16.7%, 0.4% on average. The speed of the encoder is faster about 11.8 times than the SPIHT's and its efficiency in time can be improved by 162%. The speed of the decoder is faster about 12.3 times than that of the SPIHT's and its efficiency in time can be rasied about 148%. This algorithm, instead of largest levels wavelet transform, has high coding efficiency when the wavelet transform levels is larger than 3. For the source model of distributions similar to the Laplacian, it can improve the efficiency of coding and realize the progressive transmit coding and decoding.
Turbulent wakes of fractal objects.
Staicu, Adrian; Mazzi, Biagio; Vassilicos, J C; van de Water, Willem
2003-06-01
Turbulence of a windtunnel flow is stirred using objects that have a fractal structure. The strong turbulent wakes resulting from three such objects which have different fractal dimensions are probed using multiprobe hot-wire anemometry in various configurations. Statistical turbulent quantities are studied within inertial and dissipative range scales in an attempt to relate changes in their self-similar behavior to the scaling of the fractal objects.
Langevin Equation on Fractal Curves
NASA Astrophysics Data System (ADS)
Satin, Seema; Gangal, A. D.
2016-07-01
We analyze random motion of a particle on a fractal curve, using Langevin approach. This involves defining a new velocity in terms of mass of the fractal curve, as defined in recent work. The geometry of the fractal curve, plays an important role in this analysis. A Langevin equation with a particular model of noise is proposed and solved using techniques of the Fα-Calculus.
Fractal multifiber microchannel plates
NASA Technical Reports Server (NTRS)
Cook, Lee M.; Feller, W. B.; Kenter, Almus T.; Chappell, Jon H.
1992-01-01
The construction and performance of microchannel plates (MCPs) made using fractal tiling mehtods are reviewed. MCPs with 40 mm active areas having near-perfect channel ordering were produced. These plates demonstrated electrical performance characteristics equivalent to conventionally constructed MCPs. These apparently are the first MCPs which have a sufficiently high degree of order to permit single channel addressability. Potential applications for these devices and the prospects for further development are discussed.
NASA Astrophysics Data System (ADS)
Martin, Demetri
2015-03-01
Demetri Maritn prepared this palindromic poem as his project for Michael Frame's fractal geometry class at Yale. Notice the first, fourth, and seventh words in the second and next-to-second lines are palindromes, the first two and last two lines are palindromes, the middle line, "Be still if I fill its ebb" minus its last letter is a palindrome, and the entire poem is a palindrome...
Fractal Properties in Economics
2000-01-01
field of science, econophysics , was established recently in 1997 [2, 3]. This is a study of economic phenomena based on the methods and approaches of...physics. Among 244 many topics in econophysics there are three topics that are closely related to the study of fractals. They are price changes in open...prices, J. of Bussiness (Chicago) 36 (1963) pp.3 9 4 -4 19 . 2. J. Kertesz and 1. Kondor (Eds.), Econophysics : an emerging science (Kluwer Academic
NASA Astrophysics Data System (ADS)
Kaissas, I.; Papadimitropoulos, C.; Potiriadis, C.; Karafasoulis, K.; Loukas, D.; Lambropoulos, C. P.
2017-01-01
Coded aperture imaging transcends planar imaging with conventional collimators in efficiency and Field of View (FOV). We present experimental results for the detection of 141 keV and 122 keV γ-photons emitted by uniformly extended 99mTc and 57Co hot-spots along with simulations of uniformly and normally extended 99mTc hot-spots. These results prove that the method can be used for intra-operative imaging of radio-traced sentinel nodes and thyroid remnants. The study is performed using a setup of two gamma cameras, each consisting of a coded-aperture (or mask) of Modified Uniformly Redundant Array (MURA) of rank 19 positioned on top of a CdTe detector. The detector pixel pitch is 350 μm and its active area is 4.4 × 4.4 cm2, while the mask element size is 1.7 mm. The detectable photon energy ranges from 15 keV up to 200 keV with an energy resolution of 3–4 keV FWHM. Triangulation is exploited to estimate the 3D spatial coordinates of the radioactive spots within the system FOV. Two extended sources, with uniform distributed activity (11 and 24 mm in diameter, respectively), positioned at 16 cm from the system and with 3 cm distance between their centers, can be resolved and localized with accuracy better than 5%. The results indicate that the estimated positions of spatially extended sources lay within their volume size and that neighboring sources, even with a low level of radioactivity, such as 30 MBq, can be clearly distinguished with an acquisition time about 3 seconds.
Darwinian Evolution and Fractals
NASA Astrophysics Data System (ADS)
Carr, Paul H.
2009-05-01
Did nature's beauty emerge by chance or was it intelligently designed? Richard Dawkins asserts that evolution is blind aimless chance. Michael Behe believes, on the contrary, that the first cell was intelligently designed. The scientific evidence is that nature's creativity arises from the interplay between chance AND design (laws). Darwin's ``Origin of the Species,'' published 150 years ago in 1859, characterized evolution as the interplay between variations (symbolized by dice) and the natural selection law (design). This is evident in recent discoveries in DNA, Madelbrot's Fractal Geometry of Nature, and the success of the genetic design algorithm. Algorithms for generating fractals have the same interplay between randomness and law as evolution. Fractal statistics, which are not completely random, characterize such phenomena such as fluctuations in the stock market, the Nile River, rainfall, and tree rings. As chaos theorist Joseph Ford put it: God plays dice, but the dice are loaded. Thus Darwin, in discovering the evolutionary interplay between variations and natural selection, was throwing God's dice!
Fractals in physiology and medicine
NASA Technical Reports Server (NTRS)
Goldberger, Ary L.; West, Bruce J.
1987-01-01
The paper demonstrates how the nonlinear concepts of fractals, as applied in physiology and medicine, can provide an insight into the organization of such complex structures as the tracheobronchial tree and heart, as well as into the dynamics of healthy physiological variability. Particular attention is given to the characteristics of computer-generated fractal lungs and heart and to fractal pathologies in these organs. It is shown that alterations in fractal scaling may underlie a number of pathophysiological disturbances, including sudden cardiac death syndromes.
Dimension of fractal basin boundaries
Park, B.S.
1988-01-01
In many dynamical systems, multiple attractors coexist for certain parameter ranges. The set of initial conditions that asymptotically approach each attractor is its basin of attraction. These basins can be intertwined on arbitrary small scales. Basin boundary can be either smooth or fractal. Dynamical systems that have fractal basin boundary show final state sensitivity of the initial conditions. A measure of this sensitivity (uncertainty exponent {alpha}) is related to the dimension of the basin boundary d = D - {alpha}, where D is the dimension of the phase space and d is the dimension of the basin boundary. At metamorphosis values of the parameter, there might happen a conversion from smooth to fractal basin boundary (smooth-fractal metamorphosis) or a conversion from fractal to another fractal basin boundary characteristically different from the previous fractal one (fractal-fractal metamorphosis). The dimension changes continuously with the parameter except at the metamorphosis values where the dimension of the basin boundary jumps discontinuously. We chose the Henon map and the forced damped pendulum to investigate this. Scaling of the basin volumes near the metamorphosis values of the parameter is also being studied for the Henon map. Observations are explained analytically by using low dimensional model map.
Investigating Fractal Geometry Using LOGO.
ERIC Educational Resources Information Center
Thomas, David A.
1989-01-01
Discusses dimensionality in Euclidean geometry. Presents methods to produce fractals using LOGO. Uses the idea of self-similarity. Included are program listings and suggested extension activities. (MVL)
Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.
2016-01-01
Abstract. A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice. PMID:26962543
Lakshmanan, Manu N; Greenberg, Joel A; Samei, Ehsan; Kapadia, Anuj J
2016-01-01
A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice.
NASA Astrophysics Data System (ADS)
McNie, Mark E.; Combes, David J.; Smith, Gilbert W.; Price, Nicola; Ridley, Kevin D.; Brunson, Kevin M.; Lewis, Keith L.; Slinger, Chris W.; Rogers, Stanley
2007-09-01
Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-Ray or gamma ray bands. More recent applications have emerged in the visible and infra red bands for low cost lens-less imaging systems. System studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene - requiring a reconfigurable mask. We report on work to develop a novel, reconfigurable mask based on micro-opto-electro-mechanical systems (MOEMS) technology employing interference effects to modulate incident light in the mid-IR band (3-5μm). This is achieved by tuning a large array of asymmetric Fabry-Perot cavities by applying an electrostatic force to adjust the gap between a moveable upper polysilicon mirror plate supported on suspensions and underlying fixed (electrode) layers on a silicon substrate. A key advantage of the modulator technology developed is that it is transmissive and high speed (e.g. 100kHz) - allowing simpler imaging system configurations. It is also realised using a modified standard polysilicon surface micromachining process (i.e. MUMPS-like) that is widely available and hence should have a low production cost in volume. We have developed designs capable of operating across the entire mid-IR band with peak transmissions approaching 100% and high contrast. By using a pixelated array of small mirrors, a large area device comprising individually addressable elements may be realised that allows reconfiguring of the whole mask at speeds in excess of video frame rates.
The Use of Fractals for the Study of the Psychology of Perception:
NASA Astrophysics Data System (ADS)
Mitina, Olga V.; Abraham, Frederick David
The present article deals with perception of time (subjective assessment of temporal intervals), complexity and aesthetic attractiveness of visual objects. The experimental research for construction of functional relations between objective parameters of fractals' complexity (fractal dimension and Lyapunov exponent) and subjective perception of their complexity was conducted. As stimulus material we used the program based on Sprott's algorithms for the generation of fractals and the calculation of their mathematical characteristics. For the research 20 fractals were selected which had different fractal dimensions that varied from 0.52 to 2.36, and the Lyapunov exponent from 0.01 to 0.22. We conducted two experiments: (1) A total of 20 fractals were shown to 93 participants. The fractals were displayed on the screen of a computer for randomly chosen time intervals ranging from 5 to 20 s. For each fractal displayed, the participant responded with a rating of the complexity and attractiveness of the fractal using ten-point scale with an estimate of the duration of the presentation of the stimulus. Each participant also answered the questions of some personality tests (Cattell and others). The main purpose of this experiment was the analysis of the correlation between personal characteristics and subjective perception of complexity, attractiveness, and duration of fractal's presentation. (2) The same 20 fractals were shown to 47 participants as they were forming on the screen of the computer for a fixed interval. Participants also estimated subjective complexity and attractiveness of fractals. The hypothesis on the applicability of the Weber-Fechner law for the perception of time, complexity and subjective attractiveness was confirmed for measures of dynamical properties of fractal images.
Fractal analysis of Xylella fastidiosa biofilm formation
NASA Astrophysics Data System (ADS)
Moreau, A. L. D.; Lorite, G. S.; Rodrigues, C. M.; Souza, A. A.; Cotta, M. A.
2009-07-01
We have investigated the growth process of Xylella fastidiosa biofilms inoculated on a glass. The size and the distance between biofilms were analyzed by optical images; a fractal analysis was carried out using scaling concepts and atomic force microscopy images. We observed that different biofilms show similar fractal characteristics, although morphological variations can be identified for different biofilm stages. Two types of structural patterns are suggested from the observed fractal dimensions Df. In the initial and final stages of biofilm formation, Df is 2.73±0.06 and 2.68±0.06, respectively, while in the maturation stage, Df=2.57±0.08. These values suggest that the biofilm growth can be understood as an Eden model in the former case, while diffusion-limited aggregation (DLA) seems to dominate the maturation stage. Changes in the correlation length parallel to the surface were also observed; these results were correlated with the biofilm matrix formation, which can hinder nutrient diffusion and thus create conditions to drive DLA growth.
Fractal properties of macrophage membrane studied by AFM.
Bitler, A; Dover, R; Shai, Y
2012-12-01
Complexity of cell membrane poses difficulties to quantify corresponding morphology changes during cell proliferation and damage. We suggest using fractal dimension of the cell membrane to quantify its complexity and track changes produced by various treatments. Glutaraldehyde fixed mouse RAW 264.7 macrophage membranes were chosen as model system and imaged in PeakForce QNM (quantitative nanomechanics) mode of AFM (atomic force microscope). The morphology of the membranes was characterized by fractal dimension. The parameter was calculated for set of AFM images by three different methods. The same calculations were done for the AFM images of macrophages treated with colchicine, an inhibitor of the microtubule polymerization, and microtubule stabilizing agent taxol. We conclude that fractal dimension can be additional and useful parameter to characterize the cell membrane complexity and track the morphology changes produced by different treatments.
NASA Astrophysics Data System (ADS)
Yakopcic, Chris; Taha, Tarek M.; Shin, Eunsung; Subramanyam, Guru; Murray, P. Terrence; Rogers, Stanley
2010-08-01
The memristor, experimentally verified for the first time in 2008, is one of four fundamental passive circuit elements (the others being resistors, capacitors, and inductors). Development and characterization of memristor devices and the design of novel computing architectures based on these devices can potentially provide significant advances in intelligence processing systems for a variety of applications including image processing, robotics, and machine learning. In particular, adaptive coded aperture (diffraction) sensing, an emerging technology enabling real-time, wide-area IR/visible sensing and imaging, could benefit from new high performance biologically inspired image processing architectures based on memristors. In this paper, we present results from the fabrication and characterization of memristor devices utilizing titanium oxide dielectric layers in a parallel plate conuration. Two versions of memristor devices have been fabricated at the University of Dayton and the Air Force Research Laboratory utilizing varying thicknesses of the TiO2 dielectric layers. Our results show that the devices do exhibit the characteristic hysteresis loop in their I-V plots.
Modes of Visual Recognition and Perceptually Relevant Sketch-based Coding for Images
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.
1991-01-01
A review of visual recognition studies is used to define two levels of information requirements. These two levels are related to two primary subdivisions of the spatial frequency domain of images and reflect two distinct different physical properties of arbitrary scenes. In particular, pathologies in recognition due to cerebral dysfunction point to a more complete split into two major types of processing: high spatial frequency edge based recognition vs. low spatial frequency lightness (and color) based recognition. The former is more central and general while the latter is more specific and is necessary for certain special tasks. The two modes of recognition can also be distinguished on the basis of physical scene properties: the highly localized edges associated with reflectance and sharp topographic transitions vs. smooth topographic undulation. The extreme case of heavily abstracted images is pursued to gain an understanding of the minimal information required to support both modes of recognition. Here the intention is to define the semantic core of transmission. This central core of processing can then be fleshed out with additional image information and coding and rendering techniques.
Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding.
Christophe, Emmanuel; Mailhes, Corinne; Duhamel, Pierre
2008-12-01
Hyperspectral images present some specific characteristics that should be used by an efficient compression system. In compression, wavelets have shown a good adaptability to a wide range of data, while being of reasonable complexity. Some wavelet-based compression algorithms have been successfully used for some hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. Then, it is shown that a specific fixed decomposition has almost the same performance, while being more useful in terms of complexity issues. It is shown that this decomposition significantly improves the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zero tree algorithms. Various tree structures, creating a relationship between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted on this near-optimal decomposition with the best tree structure found. Performances are compared with the adaptation of JPEG 2000 for hyperspectral images on six different areas presenting different statistical properties.
Fractal analysis of bone structure with applications to osteoporosis and microgravity effects
Acharya, R.S.; Swarnarkar, V.; Krishnamurthy, R.; Hausman, E.; LeBlanc, A.; Lin, C.; Shackelford, L.
1995-12-31
The authors characterize the trabecular structure with the aid of fractal dimension. The authors use Alternating Sequential filters to generate a nonlinear pyramid for fractal dimension computations. The authors do not make any assumptions of the statistical distributions of the underlying fractal bone structure. The only assumption of the scheme is the rudimentary definition of self similarity. This allows them the freedom of not being constrained by statistical estimation schemes. With mathematical simulations, the authors have shown that the ASF methods outperform other existing methods for fractal dimension estimation. They have shown that the fractal dimension remains the same when computed with both the X-Ray images and the MRI images of the patella. They have shown that the fractal dimension of osteoporotic subjects is lower than that of the normal subjects. In animal models, the authors have shown that the fractal dimension of osteoporotic rats was lower than that of the normal rats. In a 17 week bedrest study, they have shown that the subject`s prebedrest fractal dimension is higher than that of the postbedrest fractal dimension.
NASA Astrophysics Data System (ADS)
Trejos, Sorayda; Fredy Barrera, John; Torroba, Roberto
2015-08-01
We present for the first time an optical encrypting-decrypting protocol for recovering messages without speckle noise. This is a digital holographic technique using a 2f scheme to process QR codes entries. In the procedure, letters used to compose eventual messages are individually converted into a QR code, and then each QR code is divided into portions. Through a holographic technique, we store each processed portion. After filtering and repositioning, we add all processed data to create a single pack, thus simplifying the handling and recovery of multiple QR code images, representing the first multiplexing procedure applied to processed QR codes. All QR codes are recovered in a single step and in the same plane, showing neither cross-talk nor noise problems as in other methods. Experiments have been conducted using an interferometric configuration and comparisons between unprocessed and recovered QR codes have been performed, showing differences between them due to the involved processing. Recovered QR codes can be successfully scanned, thanks to their noise tolerance. Finally, the appropriate sequence in the scanning of the recovered QR codes brings a noiseless retrieved message. Additionally, to procure maximum security, the multiplexed pack could be multiplied by a digital diffuser as to encrypt it. The encrypted pack is easily decoded by multiplying the multiplexing with the complex conjugate of the diffuser. As it is a digital operation, no noise is added. Therefore, this technique is threefold robust, involving multiplexing, encryption, and the need of a sequence to retrieve the outcome.
NASA Astrophysics Data System (ADS)
Amelard, Robert; Scharfenberger, Christian; Wong, Alexander; Clausi, David A.
2015-03-01
Non-contact camera-based imaging photoplethysmography (iPPG) is useful for measuring heart rate in conditions where contact devices are problematic due to issues such as mobility, comfort, and sanitation. Existing iPPG methods analyse the light-tissue interaction of either active or passive (ambient) illumination. Many active iPPG methods assume the incident ambient light is negligible to the active illumination, resulting in high power requirements, while many passive iPPG methods assume near-constant ambient conditions. These assumptions can only be achieved in environments with controlled illumination and thus constrain the use of such devices. To increase the number of possible applications of iPPG devices, we propose a dual-mode active iPPG system that is robust to changes in ambient illumination variations. Our system uses a temporally-coded illumination sequence that is synchronized with the camera to measure both active and ambient illumination interaction for determining heart rate. By subtracting the ambient contribution, the remaining illumination data can be attributed to the controlled illuminant. Our device comprises a camera and an LED illuminant controlled by a microcontroller. The microcontroller drives the temporal code via synchronizing the frame captures and illumination time at the hardware level. By simulating changes in ambient light conditions, experimental results show our device is able to assess heart rate accurately in challenging lighting conditions. By varying the temporal code, we demonstrate the trade-off between camera frame rate and ambient light compensation for optimal blood pulse detection.
Fractal analysis of lumbar vertebral cancellous bone architecture.
Feltrin, G P; Macchi, V; Saccavini, C; Tosi, E; Dus, C; Fassina, A; Parenti, A; De Caro, R
2001-11-01
Osteoporosis is characterized by bone mineral density (BMD) decreasing and spongy bone rearrangement with consequent loss of elasticity and increased bone fragility. Quantitative computed tomography (QCT) quantifies bone mineral content but does not describe spongy architecture. Analysis of trabecular pattern may provide additional information to evaluate osteoporosis. The aim of this study was to determine whether the fractal analysis of the microradiography of lumbar vertebrae provides a reliable assessment of bone texture, which correlates with the BMD. The lumbar segment of the spine was removed from 22 cadavers with no history of back pain and examined with standard x-ray, traditional tomography, and quantitative computed tomography to measure BMD. The fractal dimension, which quantifies the image fractal complexity, was calculated on microradiographs of axial sections of the fourth lumbar vertebra to determine its characteristic spongy network. The relationship between the values of the BMD and those of the fractal dimension was evaluated by linear regression and a statistically significant correlation (R = 0.96) was found. These findings suggest that the application of fractal analysis to radiological analyses can provide valuable information on the trabecular pattern of vertebrae. Thus, fractal dimensions of trabecular bone structure should be considered as a supplement to BMD evaluation in the assessment of osteoporosis.
Novel lossless FMRI image compression based on motion compensation and customized entropy coding.
Sanchez, Victor; Nasiopoulos, Panos; Abugharbieh, Rafeef
2009-07-01
We recently proposed a method for lossless compression of 4-D medical images based on the advanced video coding standard (H.264/AVC). In this paper, we present two major contributions that enhance our previous work for compression of functional MRI (fMRI) data: 1) a new multiframe motion compensation process that employs 4-D search, variable-size block matching, and bidirectional prediction; and 2) a new context-based adaptive binary arithmetic coder designed for lossless compression of the residual and motion vector data. We validate our method on real fMRI sequences of various resolutions and compare the performance to two state-of-the-art methods: 4D-JPEG2000 and H.264/AVC. Quantitative results demonstrate that our proposed technique significantly outperforms current state of the art with an average compression ratio improvement of 13%.
Conditional Entropy-Constrained Residual VQ with Application to Image Coding
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1996-01-01
This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.
Optimal design of FIR triplet halfband filter bank and application in image coding.
Kha, H H; Tuan, H D; Nguyen, T Q
2011-02-01
This correspondence proposes an efficient semidefinite programming (SDP) method for the design of a class of linear phase finite impulse response triplet halfband filter banks whose filters have optimal frequency selectivity for a prescribed regularity order. The design problem is formulated as the minimization of the least square error subject to peak error constraints and regularity constraints. By using the linear matrix inequality characterization of the trigonometric semi-infinite constraints, it can then be exactly cast as a SDP problem with a small number of variables and, hence, can be solved efficiently. Several design examples of the triplet halfband filter bank are provided for illustration and comparison with previous works. Finally, the image coding performance of the filter bank is presented.
Optimization and implementation of the integer wavelet transform for image coding.
Grangetto, Marco; Magli, Enrico; Martina, Maurizio; Olmo, Gabriella
2002-01-01
This paper deals with the design and implementation of an image transform coding algorithm based on the integer wavelet transform (IWT). First of all, criteria are proposed for the selection of optimal factorizations of the wavelet filter polyphase matrix to be employed within the lifting scheme. The obtained results lead to the IWT implementations with very satisfactory lossless and lossy compression performance. Then, the effects of finite precision representation of the lifting coefficients on the compression performance are analyzed, showing that, in most cases, a very small number of bits can be employed for the mantissa keeping the performance degradation very limited. Stemming from these results, a VLSI architecture is proposed for the IWT implementation, capable of achieving very high frame rates with moderate gate complexity.
A multi-layer VLC imaging system based on space-time trace-orthogonal coding
NASA Astrophysics Data System (ADS)
Li, Peng-Xu; Yang, Yu-Hong; Zhu, Yi-Jun; Zhang, Yan-Yu
2017-02-01
In visible light communication (VLC) imaging systems, different properties of data are usually demanded for transmission with different priorities in terms of reliability and/or validity. For this consideration, a novel transmission scheme called space-time trace-orthogonal coding (STTOC) for VLC is proposed in this paper by taking full advantage of the characteristics of time-domain transmission and space-domain orthogonality. Then, several constellation designs for different priority strategies subject to the total power constraint are presented. One significant advantage of this novel scheme is that the inter-layer interference (ILI) can be eliminated completely and the computation complexity of maximum likelihood (ML) detection is linear. Computer simulations verify the correctness of our theoretical analysis, and demonstrate that both transmission rate and error performance of the proposed scheme greatly outperform the conventional multi-layer transmission system.
Cochenour, Brandon; Mullen, Linda; Muth, John
2011-11-20
Optical detection, ranging, and imaging of targets in turbid water is complicated by absorption and scattering. It has been shown that using a pulsed laser source with a range-gated receiver or an intensity modulated source with a coherent RF receiver can improve target contrast in turbid water. A blended approach using a modulated-pulse waveform has been previously suggested as a way to further improve target contrast. However only recently has a rugged and reliable laser source been developed that is capable of synthesizing such a waveform so that the effect of the underwater environment on the propagation of a modulated pulse can be studied. In this paper, we outline the motivation for the modulated-pulse (MP) concept, and experimentally evaluate different MP waveforms: single-tone MP and pseudorandom coded MP sequences.
Natural image sequences constrain dynamic receptive fields and imply a sparse code.
Häusler, Chris; Susemihl, Alex; Nawrot, Martin P
2013-11-06
In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input.
Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D; Volz, Kerstin
2017-03-10
We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.
Lopez Villaverde, Eduardo; Robert, Sebastien; Prada, Claire
2017-04-03
In this work, defects in a high density polyethylene pipe are imaged with the total focusing method. The viscoelastic attenuation of this material greatly reduces the signal level and leads to a poor signal-to-noise ratio due to electronic noise. To improve the image quality, the decomposition of the time reversal operator method is combined with the spatial Hadamard coded transmissions before calculating images in the time domain. Because the Hadamard coding is not compatible with conventional imaging systems, the paper proposes two modified coding methods based on sparse Hadamard matrices with +1/0 coefficients. The signal-to-noise ratios expected with the different spatial codes are demonstrated, then validated on both simulated and experimental data. Experiments are performed with a transducer array in contact with the base material of a polyethylene pipe. In order to improve the noise filtering procedure, the singular values associated with electronic noise are expressed on the basis of the random matrix theory. This model of noise singular values allows a better identification of the defect response in noisy experimental data. Lastly, the imaging method is evaluated in a more industrial inspection configuration where an immersion array probe is used to image defects in a butt fusion weld with a complex geometry.
Fractal analysis: methodologies for biomedical researchers.
Ristanović, Dusan; Milosević, Nebojsa T
2012-01-01
Fractal analysis has become a popular method in all branches of scientific investigations including biology and medicine. Although there is a growing interest in the application of fractal analysis in biological sciences, questions about the methodology of fractal analysis have partly restricted its wider and comprehensible application. It is a notable fact that fractal analysis is derived from fractal geometry, but there are some unresolved issues that need to be addressed. In this respect, we discuss several related underlying principles for fractal analysis and establish the meaningful relationship between fractal analysis and fractal geometry. Since some concepts in fractal analysis are determined descriptively and/or qualitatively, this paper provides their exact mathematical definitions or explanations. Another aim of this study is to show that nowadays fractal analysis is an independent mathematical and experimental method based on Mandelbrot's fractal geometry, Euclidean traditiontal geometry and Richardson's coastline method.
Relationship between Fractal Dimension and Agreeability of Facial Imagery
NASA Astrophysics Data System (ADS)
Oyama-Higa, Mayumi; Miao, Tiejun; Ito, Tasuo
2007-11-01
Why do people feel happy and good or equivalently empathize more, with smiling face imageries than with ones of expressionless face? To understand what the essential factors are underlying imageries in relating to the feelings, we conducted an experiment by 84 subjects asked to estimate the degree of agreeability about expressionless and smiling facial images taken from 23 young persons to whom the subjects were no any pre-acquired knowledge. Images were presented one at a time to each subject who was asked to rank agreeability on a scale from 1 to 10. Fractal dimensions of facial images were obtained in order to characterize the complexity of the imageries by using of two types of fractal analysis methods, i.e., planar and cubic analysis methods, respectively. The results show a significant difference in the fractal dimension values between expressionless faces and smiling ones. Furthermore, we found a well correlation between the degree of agreeability and fractal dimensions, implying that the fractal dimension optically obtained in relation to complexity in imagery information is useful to characterize the psychological processes of cognition and awareness.
NASA Astrophysics Data System (ADS)
Sui, Liansheng; Xu, Minjie; Tian, Ailing
2017-04-01
A novel optical image encryption scheme is proposed based on quick response code and high dimension chaotic system, where only the intensity distribution of encoded information is recorded as ciphertext. Initially, the quick response code is engendered from the plain image and placed in the input plane of the double random phase encoding architecture. Then, the code is encrypted to the ciphertext with noise-like distribution by using two cascaded gyrator transforms. In the process of encryption, the parameters such as rotation angles and random phase masks are generated as interim variables and functions based on Chen system. A new phase retrieval algorithm is designed to reconstruct the initial quick response code in the process of decryption, in which a priori information such as three position detection patterns is used as the support constraint. The original image can be obtained without any energy loss by scanning the decrypted code with mobile devices. The ciphertext image is the real-valued function which is more convenient for storing and transmitting. Meanwhile, the security of the proposed scheme is enhanced greatly due to high sensitivity of initial values of Chen system. Extensive cryptanalysis and simulation have performed to demonstrate the feasibility and effectiveness of the proposed scheme.
Application of fractal dimensions to study the structure of flocs formed in lime softening process.
Vahedi, Arman; Gorczyca, Beata
2011-01-01
The use of fractal dimensions to study the internal structure and settling of flocs formed in lime softening process was investigated. Fractal dimensions of flocs were measured directly on floc images and indirectly from their settling velocity. An optical microscope with a motorized stage was used to measure the fractal dimensions of lime softening flocs directly on their images in 2 and 3D space. The directly determined fractal dimensions of the lime softening flocs were 1.11-1.25 for floc boundary, 1.82-1.99 for cross-sectional area and 2.6-2.99 for floc volume. The fractal dimension determined indirectly from the flocs settling rates was 1.87 that was different from the 3D fractal dimension determined directly on floc images. This discrepancy is due to the following incorrect assumptions used for fractal dimensions determined from floc settling rates: linear relationship between square settling velocity and floc size (Stokes' Law), Euclidean relationship between floc size and volume, constant fractal dimensions and one primary particle size describing entire population of flocs. Floc settling model incorporating variable floc fractal dimensions as well as variable primary particle size was found to describe the settling velocity of large (>50 μm) lime softening flocs better than Stokes' Law. Settling velocities of smaller flocs (<50 μm) could still be quite well predicted by Stokes' Law. The variation of fractal dimensions with lime floc size in this study indicated that two mechanisms are involved in the formation of these flocs: cluster-cluster aggregation for small flocs (<50 μm) and diffusion-limited aggregation for large flocs (>50 μm). Therefore, the relationship between the floc fractal dimension and floc size appears to be determined by floc formation mechanisms.
Terahertz spectroscopy of plasmonic fractals.
Agrawal, A; Matsui, T; Zhu, W; Nahata, A; Vardeny, Z V
2009-03-20
We use terahertz time-domain spectroscopy to study the transmission properties of metallic films perforated with aperture arrays having deterministic or stochastic fractal morphologies ("plasmonic fractals"), and compare them with random aperture arrays. All of the measured plasmonic fractals show transmission resonances and antiresonances at frequencies that correspond to prominent features in their structure factors in k space. However, in sharp contrast to periodic aperture arrays, the resonant transmission enhancement decreases with increasing array size. This property is explained using a density-density correlation function, and is utilized for determining the underlying fractal dimensionality, D(<2). Furthermore, a sum rule for the transmission resonances and antiresonances in plasmonic fractals relative to the transmission of the corresponding random aperture arrays is obtained, and is shown to be universal.
Fractals analysis of cardiac arrhythmias.
Saeed, Mohammed
2005-09-06
Heart rhythms are generated by complex self-regulating systems governed by the laws of chaos. Consequently, heart rhythms have fractal organization, characterized by self-similar dynamics with long-range order operating over multiple time scales. This allows for the self-organization and adaptability of heart rhythms under stress. Breakdown of this fractal organization into excessive order or uncorrelated randomness leads to a less-adaptable system, characteristic of aging and disease. With the tools of nonlinear dynamics, this fractal breakdown can be quantified with potential applications to diagnostic and prognostic clinical assessment. In this paper, I review the methodologies for fractal analysis of cardiac rhythms and the current literature on their applications in the clinical context. A brief overview of the basic mathematics of fractals is also included. Furthermore, I illustrate the usefulness of these powerful tools to clinical medicine by describing a novel noninvasive technique to monitor drug therapy in atrial fibrillation.
Fast Basins and Branched Fractal Manifolds of Attractors of Iterated Function Systems
NASA Astrophysics Data System (ADS)
Barnsley, Michael F.; Vince, Andrew
2015-10-01
The fast basin of an attractor of an iterated function system (IFS) is the set of points in the domain of the IFS whose orbits under the associated semigroup intersect the attractor. Fast basins can have non-integer dimension and comprise a class of deterministic fractal sets. The relationship between the basin and the fast basin of a point-fibred attractor is analyzed. To better understand the topology and geometry of fast basins, and because of analogies with analytic continuation, branched fractal manifolds are introduced. A branched fractal manifold is a metric space constructed from the extended code space of a point-fibred attractor, by identifying some addresses. Typically, a branched fractal manifold is a union of a nondenumerable collection of nonhomeomorphic objects, isometric copies of generalized fractal blowups of the attractor.
NASA Astrophysics Data System (ADS)
Feng, Bin; Shi, Zelin; Zhang, Chengshuo; Xu, Baoshu; Zhang, Xiaodong
2016-05-01
The point spread function (PSF) inconsistency caused by temperature variation leads to artifacts in decoded images of a wavefront coding infrared imaging system. Therefore, this paper proposes an analytical model for the effect of temperature variation on the PSF consistency. In the proposed model, a formula for the thermal deformation of an optical phase mask is derived. This formula indicates that a cubic optical phase mask (CPM) is still cubic after thermal deformation. A proposed equivalent cubic phase mask (E-CPM) is a virtual and room-temperature lens which characterizes the optical effect of temperature variation on the CPM. Additionally, a calculating method for PSF consistency after temperature variation is presented. Numerical simulation illustrates the validity of the proposed model and some significant conclusions are drawn. Given the form parameter, the PSF consistency achieved by a Ge-material CPM is better than the PSF consistency by a ZnSe-material CPM. The effect of the optical phase mask on PSF inconsistency is much slighter than that of the auxiliary lens group. A large form parameter of the CPM will introduce large defocus-insensitive aberrations, which improves the PSF consistency but degrades the room-temperature MTF.
Sparse coding based dense feature representation model for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Oguslu, Ender; Zhou, Guoqing; Zheng, Zezhong; Iftekharuddin, Khan; Li, Jiang
2015-11-01
We present a sparse coding based dense feature representation model (a preliminary version of the paper was presented at the SPIE Remote Sensing Conference, Dresden, Germany, 2013) for hyperspectral image (HSI) classification. The proposed method learns a new representation for each pixel in HSI through the following four steps: sub-band construction, dictionary learning, encoding, and feature selection. The new representation usually has a very high dimensionality requiring a large amount of computational resources. We applied the l1/lq regularized multiclass logistic regression technique to reduce the size of the new representation. We integrated the method with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit, and image fusion and recursive filtering. Experimental results show that the proposed method can achieve better overall and average classification accuracies with a much more compact representation leading to more efficient sparse models for HSI classification.
Color-Coded Imaging of Breast Cancer Metastatic Niche Formation in Nude Mice.
Suetsugu, Atsushi; Momiyama, Masashi; Hiroshima, Yukihiko; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Bouvet, Michael; Hoffman, Robert M
2015-12-01
We report here a color-coded imaging model in which metastatic niches in the lung and liver of breast cancer can be identified. The transgenic green fluorescent protein (GFP)-expressing nude mouse was used as the host. The GFP nude mouse expresses GFP in all organs. However, GFP expression is dim in the liver parenchymal cells. Mouse mammary tumor cells (MMT 060562) (MMT), expressing red fluorescent protein (RFP), were injected in the tail vein of GFP nude mice to produce experimental lung metastasis and in the spleen of GFP nude mice to establish a liver metastasis model. Niche formation in the lung and liver metastasis was observed using very high resolution imaging systems. In the lung, GFP host-mouse cells accumulated around as few as a single MMT-RFP cell. In addition, GFP host cells were observed to form circle-shaped niches in the lung even without RFP cancer cells, which was possibly a niche in which future metastasis could be formed. In the liver, as with the lung, GFP host cells could form circle-shaped niches. Liver and lung metastases were removed surgically and cultured in vitro. MMT-RFP cells and GFP host cells resembling cancer-associated fibroblasts (CAFs) were observed interacting, suggesting that CAFs could serve as a metastatic niche.
NASA Astrophysics Data System (ADS)
Vielhauer, Claus; Dittmann, Jana
2007-02-01
Annotation watermarking (sometimes also called caption or illustration watermarking) denotes a specific application of watermarks, which embeds supplementary information directly in the media, so that additional information is intrinsically linked to media content and does not get separated from the media by non-malicious processing steps such as image cropping or compression. Recently, nested object annotation watermarking (NOAWM) has been introduced as a specialized annotation watermarking domain, whereby hierarchical object information is embedded in photographic images. In earlier work, the Hierarchical Graph Concept (HGC) has been suggested as a first approach to model object relations, which are defined by users during editing processes, into a hierarchical tree structure. The original HGC method uses a code-book decomposition of the annotation tree and a block-luminance algorithm for embedding. In this article, two new approaches for embedding nested object annotations are presented and experimentally compared to the original HGC approach. The first one adopts the code-book scheme of HGC using an alternative embedding based on Wet Paper Codes in blue-channel LSB domain, whereas the second suggests a new method based on the concept of intrinsic signal inheritance by sub-band energy and phase modulation of image luminance blocks. A comparative experimental evaluation based on more than 100 test images is presented in the paper, whereby aspects of transparency and robustness with respect to the most relevant image modifications to annotations, cropping and JPEG compression, are discussed comparatively for the two code-book schemes and the novel inheritance approach.
Coding visual images of objects in the inferotemporal cortex of the macaque monkey.
Tanaka, K; Saito, H; Fukada, Y; Moriya, M
1991-07-01
1. The inferotemporal cortex (IT) has been thought to play an essential and specific role in visual object discrimination and recognition, because a lesion of IT in the monkey results in a specific deficit in learning tasks that require these visual functions. To understand the cellular basis of the object discrimination and recognition processes in IT, we determined the optimal stimulus of individual IT cells in anesthetized, immobilized monkeys. 2. In the posterior one-third or one-fourth of IT, most cells could be activated maximally by bars or disks just by adjusting the size, orientation, or color of the stimulus. 3. In the remaining anterior two-thirds or three-quarters of IT, most cells required more complex features for their maximal activation. 4. The critical feature for the activation of individual anterior IT cells varied from cell to cell: a complex shape in some cells and a combination of texture or color with contour-shape in other cells. 5. Cells that showed different types of complexity for the critical feature were intermingled throughout anterior IT, whereas cells recorded in single penetrations showed critical features that were related in some respects. 6. Generally speaking, the critical features of anterior IT cells were moderately complex and can be thought of as partial features common to images of several different natural objects. The selectivity to the optimal stimulus was rather sharp, although not absolute. We thus propose that, in anterior IT, images of objects are coded by combinations of active cells, each of which represents the presence of a particular partial feature in the image.