Sample records for colour image segmentation

  1. Colour application on mammography image segmentation

    NASA Astrophysics Data System (ADS)

    Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.

    2017-09-01

    The segmentation process is one of the most important steps in image processing and computer vision since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures and it requires precise segmentation results, which are necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissues. Since mammography images are grayscale, researchers are looking at the effect of colour in the segmentation process of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, hence providing a richer description of objects in the scene. Colour images contain ten percent (10%) additional edge information as compared to their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images, as a colour space is considered a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images with the purpose of testing the effect of colours on the segmentation of abnormality regions in the mammography images. We then carried out the segmentation using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that segmentation with all the colour maps can be done successfully even for blurred and noisy images. Also, the size of the area of the abnormality region is reduced when compared to the segmentation area without a colour map. The green colour map segmentation produced the smallest percentage of average relative error (10.009%) while the yellow colour map segmentation gave the largest percentage of relative error (11.367%).
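
    The colour-map-plus-FCM idea above can be sketched in a few lines of Python. The snippet below is only an illustration, not the authors' pipeline: the simple green colour map, the number of clusters, the fuzziness exponent and the file name are all assumptions, and scikit-fuzzy's standard fuzzy c-means stands in for whatever implementation the study used.

```python
# Illustrative sketch only: map a grayscale mammogram through a simple green
# colour map and cluster the resulting RGB values with fuzzy c-means.
# Colour map, cluster count, fuzziness and file name are assumptions.
import numpy as np
import skfuzzy as fuzz
from skimage import io, img_as_float

gray = img_as_float(io.imread("mammogram.png", as_gray=True))   # hypothetical file
zeros = np.zeros_like(gray)
rgb = np.stack([zeros, gray, zeros], axis=-1)        # simple "green" colour map

# scikit-fuzzy expects data shaped (n_features, n_samples).
data = rgb.reshape(-1, 3).T
cntr, u, _, _, _, _, _ = fuzz.cluster.cmeans(data, c=3, m=2.0,
                                             error=1e-4, maxiter=200, seed=0)

labels = np.argmax(u, axis=0).reshape(gray.shape)    # hard label per pixel
abnormal = labels == np.argmax(cntr.sum(axis=1))     # brightest cluster as candidate region
area_px = int(abnormal.sum())                        # area used for the relative-error metric
```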

  2. Colour image segmentation using unsupervised clustering technique for acute leukemia images

    NASA Astrophysics Data System (ADS)

    Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.

    2015-05-01

    Colour image segmentation has become more popular in computer vision because it is an important process in most medical analysis tasks. This paper proposes a comparison between different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models that will be used in order to segment acute leukemia images. First, partial contrast stretching is applied to the leukemia images to enhance the visibility of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied to the various colour components of the RGB and HSI colour models for the purpose of segmenting the blast cells from the red blood cells and background regions in the leukemia image. Different colour components of the RGB and HSI colour models have been analyzed in order to identify the colour component that gives the best segmentation performance. The segmented images are then processed using a median filter and a region growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model has proven to be the best at segmenting the nuclei of the blast cells in acute leukemia images as compared to the other colour components of the RGB and HSI colour models.
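
    As a rough illustration of the channel-plus-clustering step, the sketch below extracts the saturation component (using HSV as a stand-in for HSI) and clusters it with ordinary k-means; the paper's moving k-means variant, the cluster count and the file name are not reproduced and should be treated as assumptions.

```python
# Sketch: cluster the saturation channel of a blood-smear image and keep the
# most saturated cluster as the nucleus mask. Plain k-means stands in for the
# paper's moving k-means; parameters and file name are illustrative.
import numpy as np
from scipy import ndimage as ndi
from skimage import io, color
from sklearn.cluster import KMeans

rgb = io.imread("leukemia_smear.png")[..., :3]        # hypothetical file
sat = color.rgb2hsv(rgb)[..., 1]                       # saturation (HSI proxy)

km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(sat.reshape(-1, 1)).reshape(sat.shape)

nuclei = labels == int(np.argmax(km.cluster_centers_.ravel()))
nuclei = ndi.median_filter(nuclei.astype(np.uint8), size=5).astype(bool)  # simple smoothing
```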

  3. Colour segmentation of multi variants tuberculosis sputum images using self organizing map

    NASA Astrophysics Data System (ADS)

    Rulaningtyas, Riries; Suksmono, Andriyan B.; Mengko, Tati L. R.; Saptawati, Putri

    2017-05-01

    Lung tuberculosis is still identified from Ziehl-Neelsen sputum smear images in low- and middle-income countries. The clinicians decide the grade of this disease by manually counting the number of tuberculosis bacilli. This is very tedious for clinicians when the number of patients is large and sputum staining is not standardized. Because there is no standardization in staining, the tuberculosis sputum images have highly variable colour characteristics, and these colour variants are difficult to identify. To help the clinicians, this research examined the Self Organizing Map method for colour image segmentation of sputum images based on colour clustering. This method showed better performance than k-means clustering, which was also tried in this research. The Self Organizing Map could segment the sputum images with good results and cluster the colours adaptively.
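
    A colour-clustering SOM of the kind described can be sketched with the third-party minisom package; the grid size, training length and file name below are illustrative assumptions, not the configuration used in the study.

```python
# Sketch: cluster pixel colours of a Ziehl-Neelsen sputum image with a small
# self-organizing map. Grid size and training length are illustrative.
import numpy as np
from minisom import MiniSom
from skimage import io, img_as_float

rgb = img_as_float(io.imread("sputum_zn.png"))[..., :3]    # hypothetical file
pixels = rgb.reshape(-1, 3)

som = MiniSom(x=3, y=3, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(pixels)
som.train_random(pixels, num_iteration=5000)

# Assign each pixel to its best-matching unit; every unit acts as a colour cluster.
# (The per-pixel loop is slow for large images; it is kept simple for clarity.)
winners = np.array([som.winner(p) for p in pixels])
labels = (winners[:, 0] * 3 + winners[:, 1]).reshape(rgb.shape[:2])
```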

  4. Study of Colour Model for Segmenting Mycobacterium Tuberculosis in Sputum Images

    NASA Astrophysics Data System (ADS)

    Kurniawardhani, A.; Kurniawan, R.; Muhimmah, I.; Kusumadewi, S.

    2018-03-01

    One method of diagnosing Tuberculosis (TB) is the sputum test, in which the presence and number of Mycobacterium tuberculosis (MTB) in sputum are identified. The presence of MTB can be seen under a light microscope. Before examination under the light microscope, the sputum samples are stained using the Ziehl-Neelsen (ZN) staining technique. Because there is no standard procedure in staining, the appearance of sputum samples may vary either in background colour or in contrast level. This increases the difficulty of the segmentation stage of automatic MTB identification. Thus, this study investigated several colour models to look for colour channels that can segment MTB well under different staining conditions. The colour channels investigated are those of the RGB, HSV, CIELAB, YCbCr, and C-Y colour models, and the clustering algorithm used is k-means. The sputum image dataset used in this study was obtained from a community health clinic in a district in Indonesia. The size of each image was set to 1600x1200 pixels, and the images vary in the number of MTB, background colour, and contrast level. The experimental results indicate that, in all image conditions, the blue, hue, Cr, and Ry colour channels can be used to segment MTB well into one cluster.

  5. SVM Pixel Classification on Colour Image Segmentation

    NASA Astrophysics Data System (ADS)

    Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.

    2018-04-01

    The aim of image segmentation is to simplify the representation of an image by clustering pixels into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image; more precisely, to label every pixel in an image so that each pixel has an independent identity. SVM pixel classification on colour image segmentation is the topic highlighted in this paper. It has useful applications in the fields of concept based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. At first we need to recognize the type of colour and the texture used as input to the SVM classifier. These inputs are extracted via a local spatial similarity measure model and a Steerable filter, also known as a Gabor filter. The classifier is then trained by using FCM (Fuzzy C-Means). Both the pixel level information of the image and the capability of the SVM classifier are combined by a sophisticated algorithm to form the final image. The method produces a well developed segmented image efficiently, with increased quality and faster processing of the segmented image compared with the other segmentation methods proposed earlier. One of the latest applications of such results is the Light L16 camera.
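
    The colour-plus-texture feature extraction and SVM classification stage might look roughly like the sketch below; the Gabor filter-bank parameters, the RBF kernel and the labelled pixel indices are assumptions, and the FCM-based training-data generation described in the paper is not reproduced.

```python
# Sketch: SVM pixel classification from colour plus Gabor texture features.
# Labels are assumed to come from a pre-clustering or scribbles; parameters
# and file names are illustrative.
import numpy as np
from skimage import io, color, img_as_float
from skimage.filters import gabor
from sklearn.svm import SVC

rgb = img_as_float(io.imread("scene.png"))[..., :3]          # hypothetical file
gray = color.rgb2gray(rgb)

# Texture: real parts of a small Gabor filter bank at four orientations.
texture = np.stack([gabor(gray, frequency=0.2, theta=t)[0]
                    for t in np.linspace(0, np.pi, 4, endpoint=False)], axis=-1)
features = np.concatenate([rgb, texture], axis=-1).reshape(-1, 7)

train_idx = np.load("train_idx.npy")       # hypothetical labelled pixel indices
train_lbl = np.load("train_lbl.npy")       # hypothetical per-pixel class labels

clf = SVC(kernel="rbf", gamma="scale").fit(features[train_idx], train_lbl)
segmentation = clf.predict(features).reshape(gray.shape)
```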

  6. Colour image compression by grey to colour conversion

    NASA Astrophysics Data System (ADS)

    Drew, Mark S.; Finlayson, Graham D.; Jindal, Abhilash

    2011-03-01

    Instead of de-correlating image luminance from chrominance, some use has been made of the correlation between the luminance component of an image and its chromatic components, or the correlation between colour components, for colour image compression. In one approach, the Green colour channel was taken as a base, and the other colour channels or their DCT subbands were approximated as polynomial functions of the base inside image windows. This paper points out that we can do better if we introduce an addressing scheme into the image description such that similar colours are grouped together spatially. With a Luminance component base, we test several colour spaces and rearrangement schemes, including segmentation, and settle on a log-geometric-mean colour space. Along with PSNR versus bits-per-pixel, we found that spatially-keyed s-CIELAB colour error better identifies problem regions. Instead of segmentation, we found that rearranging on sorted chromatic components has almost equal performance and better compression. Here, we sort on each of the chromatic components and separately encode windows of each. The result consists of the original greyscale plane plus the polynomial coefficients of windows of rearranged chromatic values, which are then quantized. The simplicity of the method produces a fast and simple scheme for colour image and video compression, with excellent results.
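
    The window-wise idea of approximating a chromatic plane as a polynomial function of the luminance base can be sketched as below; the window size and polynomial degree are arbitrary choices here, and the sorting/rearrangement and quantisation steps of the scheme are omitted.

```python
# Sketch: per-window polynomial approximation of a chromatic plane from the
# luminance base, the core of the grey-to-colour idea. Window size and degree
# are illustrative; rearrangement and quantisation are omitted.
import numpy as np

def encode_windows(luma, chroma, win=16, degree=2):
    """Least-squares polynomial coefficients predicting chroma from luma, per window."""
    coeffs = {}
    h, w = luma.shape
    for y in range(0, h, win):
        for x in range(0, w, win):
            L = luma[y:y + win, x:x + win].ravel()
            C = chroma[y:y + win, x:x + win].ravel()
            coeffs[(y, x)] = np.polyfit(L, C, degree)
    return coeffs

def decode_windows(luma, coeffs, win=16):
    """Reconstruct the chromatic plane from the greyscale base and stored coefficients."""
    rec = np.zeros_like(luma, dtype=float)
    for (y, x), c in coeffs.items():
        block = luma[y:y + win, x:x + win]
        rec[y:y + win, x:x + win] = np.polyval(c, block)
    return rec
```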

  7. Breast histopathology image segmentation using spatio-colour-texture based graph partition method.

    PubMed

    Belsare, A D; Mushrif, M M; Pangarkar, M A; Meshram, N

    2016-06-01

    This paper proposes a novel integrated spatio-colour-texture based graph partitioning method for segmentation of nuclear arrangement in tubules with a lumen or in solid islands without a lumen from digitized Hematoxylin-Eosin stained breast histology images, in order to automate the process of histology breast image analysis to assist pathologists. We propose a new similarity based superpixel generation method and integrate it with texton representation to form a spatio-colour-texture map of the breast histology image. A new weighted distance based similarity measure is then used for generation of a graph, and the final segmentation is obtained using the normalized cuts method. The extensive experiments carried out show that the proposed algorithm can segment nuclear arrangement in normal as well as malignant ducts in breast histology tissue images. For evaluation of the proposed method, a ground-truth image database of 100 malignant and nonmalignant breast histology images was created with the help of two expert pathologists, and a quantitative evaluation of the proposed breast histology image segmentation has been performed. It shows that the proposed method outperforms other methods. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
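
    As an accessible stand-in for the spatio-colour-texture graph described above, the sketch below uses off-the-shelf SLIC superpixels and a mean-colour region adjacency graph with normalized cuts from scikit-image; it omits the texton and weighted-distance components that are specific to the paper.

```python
# Sketch: superpixels + region adjacency graph + normalized cuts, a generic
# stand-in for the paper's spatio-colour-texture graph partitioning.
from skimage import io, segmentation, graph   # on older releases: skimage.future.graph

rgb = io.imread("breast_he_tile.png")[..., :3]               # hypothetical file
superpixels = segmentation.slic(rgb, n_segments=400, compactness=10, start_label=1)

rag = graph.rag_mean_color(rgb, superpixels, mode='similarity')
labels = graph.cut_normalized(superpixels, rag)               # final partition
overlay = segmentation.mark_boundaries(rgb, labels)
```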

  8. Hand motion segmentation against skin colour background in breast awareness applications.

    PubMed

    Hu, Yuqin; Naguib, Raouf N G; Todman, Alison G; Amin, Saad A; Al-Omishy, Hassanein; Oikonomou, Andreas; Tucker, Nick

    2004-01-01

    Skin colour modelling and classification play significant roles in face and hand detection, recognition and tracking. A hand is an essential tool used in breast self-examination, which needs to be detected and analysed during the process of breast palpation. However, the background of a woman's moving hand is her breast that has the same or similar colour as the hand. Additionally, colour images recorded by a web camera are strongly affected by the lighting or brightness conditions. Hence, it is a challenging task to segment and track the hand against the breast without utilising any artificial markers, such as coloured nail polish. In this paper, a two-dimensional Gaussian skin colour model is employed in a particular way to identify a breast but not a hand. First, an input image is transformed to YCbCr colour space, which is less sensitive to the lighting conditions and more tolerant of skin tone. The breast, thus detected by the Gaussian skin model, is used as the baseline or framework for the hand motion. Secondly, motion cues are used to segment the hand motion against the detected baseline. Desired segmentation results have been achieved and the robustness of this algorithm is demonstrated in this paper.
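
    A two-dimensional Gaussian skin-colour model in the chrominance plane, of the general kind used above, can be sketched as follows; the training pixels, the likelihood threshold and the file names are assumptions.

```python
# Sketch: fit a 2-D Gaussian colour model in the CbCr plane and evaluate a
# skin/breast likelihood map on a new frame. Training data and threshold are
# illustrative assumptions.
import numpy as np
from skimage import io, color

def fit_cbcr_gaussian(rgb_pixels):
    """Mean and inverse covariance of the CbCr values of the training pixels."""
    cbcr = color.rgb2ycbcr(rgb_pixels.reshape(-1, 1, 3))[:, 0, 1:]
    return cbcr.mean(axis=0), np.linalg.inv(np.cov(cbcr, rowvar=False))

def likelihood_map(frame_rgb, mean, inv_cov):
    cbcr = color.rgb2ycbcr(frame_rgb)[..., 1:]
    d = cbcr - mean
    mahal = np.einsum('...i,ij,...j->...', d, inv_cov, d)    # squared Mahalanobis distance
    return np.exp(-0.5 * mahal)                               # unnormalised Gaussian likelihood

skin_samples = np.load("skin_pixels.npy")    # hypothetical (N, 3) RGB samples (uint8 or [0, 1] floats)
mean, inv_cov = fit_cbcr_gaussian(skin_samples)
mask = likelihood_map(io.imread("frame.png")[..., :3], mean, inv_cov) > 0.5
```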

  9. Unsupervised motion-based object segmentation refined by color

    NASA Astrophysics Data System (ADS)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low complexity solution. For still images, several approaches exist based on colour, but these lack in both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation with many segments covering each single physical object. Other colour segmentation approaches exist which somehow limit the number of segments to reduce this oversegmentation problem. However, this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real world object segmentation, because real world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and the background. However, because of the lack of sufficient resolution of efficient motion estimators, like the 3DRS block matcher, the resulting segmentation is not at pixel resolution, but at block resolution. Existing pixel resolution motion estimators are more sensitive to noise, suffer more from aperture problems, correspond less well to the true motion of objects than block-based approaches, or are too computationally expensive. From its tendency to oversegmentation it is apparent that colour segmentation is particularly effective near edges of homogeneously coloured areas. On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneous areas improve the chance that a block is unique and thus decrease the chance of a wrong position producing a good match. Consequently, a number of methods exist which combine motion and colour segmentation. These methods use colour segmentation as a base for the motion segmentation and estimation, or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge few methods exist which adopt this approach. One example is [meshrefine]. This method uses an irregular mesh, which hinders its efficient implementation in consumer electronics devices. Furthermore, that method produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects.
    NEW METHOD: As mentioned above, we start with motion segmentation and afterwards refine the edges of this segmentation with a pixel resolution colour segmentation method. There are several reasons for this approach:
    + Motion segmentation does not produce the oversegmentation which colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. In this way, the colour segmentation only has to be done at the edges of segments, confining the colour segmentation to a smaller part of the image. In such a part, it is more likely that the colour of an object is homogeneous.
    + This approach restricts the computationally expensive pixel resolution colour segmentation to a subset of the image. Together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity.
    + The motion cue alone is often enough to reliably distinguish objects from one another and the background.
    To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator which analyses three frames of input was used. The 3DRS motion estimator is known for its ability to estimate motion vectors which closely resemble the true motion.
    BLOCK-BASED MOTION SEGMENTATION: As mentioned above, we start with a block-resolution segmentation based on motion vectors. The presented method is inspired by the well-known K-means segmentation method [K-means]. Several other methods (e.g. [kmeansc]) adapt K-means for connectedness by adding a weighted shape error. This adds the additional difficulty of finding the correct weights for the shape parameters. Also, these methods often bias one particular pre-defined shape. The presented method, which we call K-regions, encourages connectedness because only blocks at the edges of segments may be assigned to another segment. This constrains the segmentation method to such a degree that it allows the method to use least squares for the robust fitting of affine motion models for each segment. Contrary to [parmkm], the segmentation step still operates on vectors instead of model parameters. To make sure the segmentation is temporally consistent, the segmentation of the previous frame is used as the initialisation for every new frame. We also present a scheme which makes the algorithm independent of the initially chosen number of segments.
    COLOUR-BASED INTRA-BLOCK SEGMENTATION: The block-resolution motion-based segmentation forms the starting point for the pixel-resolution segmentation. The pixel-resolution segmentation is obtained from the block-resolution segmentation by reclassifying pixels only at the edges of clusters. We assume that an edge between two objects can be found in either one of two neighbouring blocks that belong to different clusters. This assumption allows us to do the pixel-resolution segmentation on each pair of such neighbouring blocks separately. Because of the local nature of the segmentation, it largely avoids problems with heterogeneously coloured areas. Because no new segments are introduced in this step, it also does not suffer from oversegmentation problems. The presented method has no problems with bifurcations. For the pixel-resolution segmentation itself we reclassify pixels such that we optimize an error norm which favours similarly coloured regions and straight edges.
    SEGMENTATION MEASURE: To assist in the evaluation of the proposed algorithm we developed a quality metric. Because the problem does not have an exact specification, we decided to define a ground-truth output which we find desirable for a given input. We define the measure for the segmentation quality as how different the segmentation is from the ground truth. Our measure enables us to evaluate oversegmentation and undersegmentation separately. Also, it allows us to evaluate which parts of a frame suffer from oversegmentation or undersegmentation. The proposed algorithm has been tested on several typical sequences.
    CONCLUSIONS: In this abstract we presented a new video segmentation method which performs well in the segmentation of multiple independently moving foreground objects from each other and the background. It combines the strong points of both colour and motion segmentation in the way we expected. One of the weak points is that the segmentation method suffers from undersegmentation when adjacent objects display similar motion. In sequences with detailed backgrounds the segmentation will sometimes display noisy edges. Apart from these results, we think that some of the techniques, and in particular the K-regions technique, may be useful for other two-dimensional data segmentation problems.

  10. Efficient detection of wound-bed and peripheral skin with statistical colour models.

    PubMed

    Veredas, Francisco J; Mesa, Héctor; Morente, Laura

    2015-04-01

    A pressure ulcer is a clinical pathology of localised damage to the skin and underlying tissue caused by pressure, shear or friction. Reliable diagnosis supported by precise wound evaluation is crucial for successful treatment decisions. This paper presents a computer-vision approach to wound-area detection based on statistical colour models. Starting with a training set consisting of 113 real wound images, colour histogram models are created for four different tissue types. Back-projections of colour pixels onto those histogram models are used, from a Bayesian perspective, to estimate the posterior probability that a pixel belongs to each of those tissue classes. Performance measures obtained from contingency tables based on a gold standard of segmented images supplied by experts have been used for model selection. The resulting fitted model has been validated on a set consisting of 322 wound images manually segmented and labelled by expert clinicians. The final fitted segmentation model shows robustness and gives high mean performance rates [AUC: .9426 (SD .0563); accuracy: .8777 (SD .0799); F-score: 0.7389 (SD .1550); Cohen's kappa: .6585 (SD .1787)] when segmenting significant wound areas that include healing tissues.
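
    The histogram back-projection step can be illustrated with OpenCV as below; the hue-saturation channels, bin counts and patch file are assumptions, and combining the resulting likelihoods with class priors into posteriors is only indicated in a comment.

```python
# Sketch: back-project a tissue colour-histogram model onto a wound image to get
# a per-pixel likelihood map. Channels, bin counts and files are illustrative.
import cv2
import numpy as np

train = cv2.imread("granulation_patch.png")    # hypothetical patch of one tissue class
image = cv2.imread("wound.png")                # hypothetical wound image

train_hsv = cv2.cvtColor(train, cv2.COLOR_BGR2HSV)
image_hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

hist = cv2.calcHist([train_hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

likelihood = cv2.calcBackProject([image_hsv], [0, 1], hist, [0, 180, 0, 256], 1)
# Repeating this for each tissue class and weighting by priors gives posterior maps.
```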

  11. Adaptive skin segmentation via feature-based face detection

    NASA Astrophysics Data System (ADS)

    Taylor, Michael J.; Morris, Tim

    2014-05-01

    Variations in illumination can have significant effects on the apparent colour of skin, which can be damaging to the efficacy of any colour-based segmentation approach. We attempt to overcome this issue by presenting a new adaptive approach, capable of generating skin colour models at run-time. Our approach adopts a Viola-Jones feature-based face detector, in a moderate-recall, high-precision configuration, to sample faces within an image, with an emphasis on avoiding potentially detrimental false positives. From these samples, we extract a set of pixels that are likely to be from skin regions, filter them according to their relative luma values in an attempt to eliminate typical non-skin facial features (eyes, mouths, nostrils, etc.), and hence establish a set of pixels that we can be confident represent skin. Using this representative set, we train a unimodal Gaussian function to model the skin colour in the given image in the normalised rg colour space - a combination of modelling approach and colour space that benefits us in a number of ways. A generated function can subsequently be applied to every pixel in the given image, and, hence, the probability that any given pixel represents skin can be determined. Segmentation of the skin, therefore, can be as simple as applying a binary threshold to the calculated probabilities. In this paper, we touch upon a number of existing approaches, describe the methods behind our new system, present the results of its application to arbitrary images of people with detectable faces, which we have found to be extremely encouraging, and investigate its potential to be used as part of real-time systems.
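
    A condensed sketch of the run-time pipeline described above is given below: detect a face, sample and luma-filter its pixels, fit a unimodal Gaussian in normalised rg space, and threshold the resulting probability over the whole image. The cascade file, luma limits and threshold are illustrative choices, not the paper's values.

```python
# Sketch of the adaptive pipeline: face detection -> skin sample -> rg Gaussian
# -> whole-image thresholding. Assumes at least one detectable face; thresholds
# are illustrative.
import cv2
import numpy as np

img = cv2.imread("person.png")                                  # hypothetical file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(cv2.data.haarcascades +
                                "haarcascade_frontalface_default.xml")
x, y, w, h = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=6)[0]

face = img[y:y + h, x:x + w].reshape(-1, 3).astype(float)
luma = face @ np.array([0.114, 0.587, 0.299])                   # BGR luma weights
face = face[(luma > 60) & (luma < 230)]                         # drop eyes/nostrils/highlights

rg = face[:, [2, 1]] / (face.sum(axis=1, keepdims=True) + 1e-6) # normalised r and g
mean = rg.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(rg, rowvar=False))

pix = img.reshape(-1, 3).astype(float)
rg_all = pix[:, [2, 1]] / (pix.sum(axis=1, keepdims=True) + 1e-6)
d = rg_all - mean
prob = np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, inv_cov, d))
skin_mask = (prob > 0.5).reshape(img.shape[:2])
```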

  12. A coloured oil level indicator detection method based on simple linear iterative clustering

    NASA Astrophysics Data System (ADS)

    Liu, Tianli; Li, Dongsong; Jiao, Zhiming; Liang, Tao; Zhou, Hao; Yang, Guoqing

    2017-12-01

    A detection method for coloured oil level indicators is put forward. The method is applied to an inspection robot in a substation, realizing automatic inspection and recognition of oil level indicators. Firstly, the image of the oil level indicator is collected, and this image is clustered and segmented to obtain the label matrix of the image. Secondly, the image is processed by colour space transformation, and the feature matrix of the image is obtained. Finally, the label matrix and feature matrix are used to locate and segment the indicator in the image, and the upper edge of the recognized region is obtained. If this upper edge line exceeds the preset oil level threshold, an alarm will alert the station staff. Through the above-mentioned image processing, the inspection robot can independently recognize the oil level of the oil level indicator instead of relying on manual inspection, reflecting the level of automation and intelligence expected of unattended operation.

  13. Automated measurement of pressure injury through image processing.

    PubMed

    Li, Dan; Mathews, Carol

    2017-11-01

    To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, the manual measurement of pressure injury is time-consuming, challenging and subject to intra/inter-reader variability, given the complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images was obtained from a western Pennsylvania hospital. First, we transformed the images from the RGB (i.e. red, green and blue) colour space to the YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin colour Gaussian model, guided the pressure injury segmentation process using a Support Vector Machine classifier. Third, after segmentation, the reference ruler - included in each of the images - enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries. With this, clinicians will be able to more effectively monitor the healing process of pressure injuries. © 2017 John Wiley & Sons Ltd.
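
    The ruler-based perspective correction and size computation can be sketched as follows; the ruler corner coordinates, its physical length and the output resolution are assumed values, whereas in the described system the ruler is detected from the image itself.

```python
# Sketch: rectify the segmented mask with a perspective transform defined by the
# reference ruler, then convert the pixel count to physical area. Corner
# coordinates, ruler length and scale are assumed here.
import cv2
import numpy as np

mask = cv2.imread("injury_mask.png", cv2.IMREAD_GRAYSCALE)   # hypothetical binary mask

ruler_corners = np.float32([[102, 340], [502, 332], [508, 372], [108, 380]])  # assumed
ruler_cm, px_per_cm = 10.0, 40.0                                              # assumed
dst = np.float32([[0, 0], [ruler_cm * px_per_cm, 0],
                  [ruler_cm * px_per_cm, px_per_cm], [0, px_per_cm]])

H = cv2.getPerspectiveTransform(ruler_corners, dst)
rectified = cv2.warpPerspective(mask, H, (800, 800))          # output size is arbitrary

area_cm2 = np.count_nonzero(rectified > 0) / (px_per_cm ** 2)
```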

  14. Intrinsic melanin and hemoglobin colour components for skin lesion malignancy detection.

    PubMed

    Madooei, Ali; Drew, Mark S; Sadeghi, Maryam; Atkins, M Stella

    2012-01-01

    In this paper we propose a new log-chromaticity 2-D colour space, an extension of previous approaches, which succeeds in removing confounding factors from dermoscopic images: (i) the effects of the particular camera characteristics for the camera system used in forming RGB images; (ii) the colour of the light used in the dermoscope; (iii) shading induced by imaging non-flat skin surfaces; (iv) and light intensity, removing the effect of light-intensity falloff toward the edges of the dermoscopic image. In the context of a blind source separation of the underlying colour, we arrive at intrinsic melanin and hemoglobin images, whose properties are then used in supervised learning to achieve excellent malignant vs. benign skin lesion classification. In addition, we propose using the geometric-mean of colour for skin lesion segmentation based on simple grey-level thresholding, with results outperforming the state of the art.
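
    The geometric-mean thresholding mentioned at the end of the abstract can be sketched in a few lines; the Otsu threshold and the assumption that the lesion is darker than surrounding skin are illustrative simplifications, and the intrinsic melanin/haemoglobin decomposition itself is not reproduced.

```python
# Sketch: grey-level thresholding of the per-pixel geometric mean of R, G, B;
# the log-chromaticity coordinates are shown only as a by-product.
import numpy as np
from skimage import io, img_as_float
from skimage.filters import threshold_otsu

rgb = img_as_float(io.imread("dermoscopy.png"))[..., :3] + 1e-6   # avoid log(0)
geo_mean = np.cbrt(rgb.prod(axis=-1))                 # per-pixel geometric mean of colour

log_chroma = np.log(rgb / geo_mean[..., None])        # log-chromaticity (intrinsic-image input)

lesion = geo_mean < threshold_otsu(geo_mean)          # assume lesion darker than skin
```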

  15. Colour space influence for vegetation image classification application to Caribbean forest and agriculture

    NASA Astrophysics Data System (ADS)

    Abadi, M.; Grandchamp, E.

    2008-10-01

    This paper deals with a comparison of different colour spaces in order to improve high resolution image classification. The background of this study is the measurement of the impact of agriculture on the environment in an islander context. Biodiversity is particularly sensitive and relevant in such areas and the follow-up of the forest front is a way to ensure its preservation. Very high resolution satellite images are used, such as QuickBird and IKONOS scenes. In order to segment the images into forest and agriculture areas, we characterize both ground covers with colour and texture features. A classical unsupervised classifier is then used to obtain labelled areas. As the features are computed on coloured images, we can wonder whether the colour space choice is relevant. This study has been made considering more than fourteen colour spaces (RGB, YUV, Lab, YIQ, YCrCs, XYZ, CMY, LMS, HSL, KLT, IHS, I1I2I3, HSV, HSI, etc.) and shows the visual and quantitative superiority of IHS over all the others. For conciseness reasons, the results only show the RGB, I1I2I3 and IHS colour spaces.
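
    For reference, the I1I2I3 components compared in the study are a simple linear transform of RGB; the sketch below uses the standard Ohta definitions and a hypothetical image tile.

```python
# Sketch: convert an RGB tile to Ohta's I1I2I3 components before computing
# colour/texture features. Standard definitions are used:
# I1 = (R+G+B)/3, I2 = (R-B)/2, I3 = (2G-R-B)/4.
import numpy as np
from skimage import io, img_as_float

rgb = img_as_float(io.imread("quickbird_tile.png"))[..., :3]   # hypothetical file
R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]

i1 = (R + G + B) / 3.0
i2 = (R - B) / 2.0
i3 = (2.0 * G - R - B) / 4.0
features = np.stack([i1, i2, i3], axis=-1)     # input to the unsupervised classifier
```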

  16. Validation of various adaptive threshold methods of segmentation applied to follicular lymphoma digital images stained with 3,3’-Diaminobenzidine&Haematoxylin

    PubMed Central

    2013-01-01

    A comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections is described in this paper. The sensitivity, specificity and some other parameters of the following adaptive threshold methods of segmentation are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extraction of the brown colour information from artificial images synthesized based on counterpart experimentally captured images. This paper presents the usefulness of the microscopic image synthesis method in the evaluation and comparison of image processing results. A thorough analysis of a broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB allows the selection of pairs of method and image type for which a given method is most efficient according to various criteria, e.g. accuracy and precision in area detection or accuracy in counting the number of objects. The comparison shows that the results of the White, the Bernsen and the Sauvola methods are better than the results of the rest of the methods for all types of monochromatic images. The three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944 respectively, when treated totally. However, the best results are achieved for the monochromatic image in which intensity represents the brown colour map constructed by the colour deconvolution algorithm. The specificity in the cases of the Bernsen and the White methods is 1 and the sensitivities are 0.74 for the White method and 0.91 for the Bernsen method, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, objects selected by the Sauvola method are segmented without undercutting the area of true positive objects but with extra false positive objects. The Sauvola and the Bernsen methods give complementary results, which will be exploited when the new method of virtual tissue slide segmentation is developed. Virtual Slides: The virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017. PMID:23531405

  17. Validation of various adaptive threshold methods of segmentation applied to follicular lymphoma digital images stained with 3,3'-Diaminobenzidine&Haematoxylin.

    PubMed

    Korzynska, Anna; Roszkowiak, Lukasz; Lopez, Carlos; Bosch, Ramon; Witkowski, Lukasz; Lejeune, Marylene

    2013-03-25

    A comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections is described in this paper. The sensitivity, specificity and some other parameters of the following adaptive threshold methods of segmentation are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extraction of the brown colour information from artificial images synthesized based on counterpart experimentally captured images. This paper presents the usefulness of the microscopic image synthesis method in the evaluation and comparison of image processing results. A thorough analysis of a broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB allows the selection of pairs of method and image type for which a given method is most efficient according to various criteria, e.g. accuracy and precision in area detection or accuracy in counting the number of objects. The comparison shows that the results of the White, the Bernsen and the Sauvola methods are better than the results of the rest of the methods for all types of monochromatic images. The three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944 respectively, when treated totally. However, the best results are achieved for the monochromatic image in which intensity represents the brown colour map constructed by the colour deconvolution algorithm. The specificity in the cases of the Bernsen and the White methods is 1 and the sensitivities are 0.74 for the White method and 0.91 for the Bernsen method, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, objects selected by the Sauvola method are segmented without undercutting the area of true positive objects but with extra false positive objects. The Sauvola and the Bernsen methods give complementary results, which will be exploited when the new method of virtual tissue slide segmentation is developed. The virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017.
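
    Two of the compared adaptive thresholds, Niblack and Sauvola, are available in scikit-image and can be applied to a brown (DAB) channel as sketched below; the use of scikit-image's H&E-DAB deconvolution matrix, the window size and k are assumptions rather than the paper's exact setup.

```python
# Sketch: Niblack and Sauvola adaptive thresholds applied to a DAB channel from
# colour deconvolution. Stain matrix, window size and k are illustrative.
from skimage import io
from skimage.color import rgb2hed
from skimage.filters import threshold_niblack, threshold_sauvola

rgb = io.imread("follicular_lymphoma_tile.png")[..., :3]   # hypothetical file
dab = rgb2hed(rgb)[..., 2]                                  # brown (DAB) stain channel

niblack_mask = dab > threshold_niblack(dab, window_size=25, k=0.2)
sauvola_mask = dab > threshold_sauvola(dab, window_size=25, k=0.2)
```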

  18. Pixel-based skin segmentation in psoriasis images.

    PubMed

    George, Y; Aldeen, M; Garnavi, R

    2016-08-01

    In this paper, we present a detailed comparison study of skin segmentation methods for psoriasis images. Different techniques are modified and then applied to a set of psoriasis images acquired from the Royal Melbourne Hospital, Melbourne, Australia, with the aim of finding the technique best suited for application to psoriasis images. We investigate the effect of different colour transformations on skin detection performance. In this respect, explicit skin thresholding is evaluated with three different decision boundaries (CbCr, HS and rgHSV). A histogram-based Bayesian classifier is applied to extract skin probability maps (SPMs) for different colour channels. This is then followed by using different approaches to find a binary skin map (SM) image from the SPMs. The approaches used include a binary decision tree (DT) and Otsu's thresholding. Finally, a set of morphological operations is implemented to refine the resulting SM image. The paper provides a detailed analysis and comparison of the performance of the Bayesian classifier in five different colour spaces (YCbCr, HSV, RGB, XYZ and CIELab). The results show that the histogram-based Bayesian classifier is more effective than explicit thresholding when applied to psoriasis images. It is also found that the decision boundary CbCr outperforms HS and rgHSV. Another finding is that the SPMs of the Cb, Cr, H and B-CIELab colour bands yield the best SMs for psoriasis images. In this study, we used a set of 100 psoriasis images for training and testing the presented methods. True Positive (TP) and True Negative (TN) are used as statistical evaluation measures.

  19. Investigation of a novel image segmentation method dedicated to forest fire applications

    NASA Astrophysics Data System (ADS)

    Rudz, S.; Chetehouna, K.; Hafiane, A.; Laurent, H.; Séro-Guillaume, O.

    2013-07-01

    To fight fire effectively, it is crucial to understand its behaviour in order to make the best use of firefighting means. To achieve this task, the development of a metrological tool is necessary for estimating both the geometrical and the physical parameters involved in forest fire modelling. A key requirement is to estimate fire positions accurately. In this paper an image processing tool especially dedicated to the accurate extraction of fire from an image is presented. In this work, clustering in several colour spaces is investigated and it appears that the blue chrominance Cb from the YCbCr colour space is the most appropriate. As a consequence, a new segmentation algorithm dedicated to forest fire applications has been built, using first an optimized k-means clustering in the Cb channel and then some properties of fire pixels in the RGB colour space. Next, the performance of the proposed method is evaluated using three supervised evaluation criteria and compared to other existing segmentation algorithms in the literature. Finally a conclusion is drawn, assessing the good behaviour of the developed algorithm. This paper is dedicated to the memory of Dr Olivier Séro-Guillaume (1950-2013), CNRS Research Director.

  20. A dynamic fuzzy genetic algorithm for natural image segmentation using adaptive mean shift

    NASA Astrophysics Data System (ADS)

    Arfan Jaffar, M.

    2017-01-01

    In this paper, a colour image segmentation approach based on the hybridisation of adaptive mean shift (AMS), fuzzy c-means and genetic algorithms (GAs) is presented. Image segmentation is the perceptual grouping of pixels based on some likeness measure. A GA with fuzzy behaviour is adapted to maximise the fuzzy separation and minimise the global compactness among the clusters or segments in spatial fuzzy c-means (sFCM). It adds diversity to the search process to find the global optima. A simple fusion method has been used to combine the clusters to overcome the problem of over-segmentation. The results show that our technique outperforms state-of-the-art methods.

  1. Low-cost Assessment for Early Vigor and Canopy Cover Estimation in Durum Wheat Using RGB Images.

    NASA Astrophysics Data System (ADS)

    Fernandez-Gallego, J. A.; Kefauver, S. C.; Aparicio Gutiérrez, N.; Nieto-Taladriz, M. T.; Araus, J. L.

    2017-12-01

    Early vigor and canopy cover are important agronomical components for determining grain yield in wheat. Estimates of the canopy cover area at early stages of the crop cycle may contribute to the efficiency of crop management practices and breeding programs. Canopy-image segmentation is complicated in field conditions by numerous factors, including soil, shadows and unexpected objects, such as rocks, weeds, plant remains, or even part of the photographer's boots (which often appear in the scene); the algorithms must be robust to accommodate these conditions. Field trials were carried out in two sites (Aranjuez and Valladolid, Spain) during the 2016/2017 crop season. A set of 24 varieties of durum wheat in two growing conditions (rainfed and support irrigation) per site was used to create the image database. This work uses zenithal RGB images taken from above the crop in natural light conditions. The images were taken with a Canon IXUS 320HS camera held by hand in Aranjuez, and with a Nikon D300 camera on a monopod in Valladolid. The algorithm for early vigor and canopy cover area estimation uses three main steps: (i) image decorrelation, (ii) colour space transformation and (iii) canopy cover segmentation using an automatic threshold based on the image histogram. The first step was chosen to enhance the visual interpretation and separate the pixel colours in the scene; the colour space transformation contributes to further separating the colours. Finally, an automatic threshold using a minimum method allows for correct segmentation and quantification of the canopy pixels. The percentage of area covered by the canopy was calculated using a simple algorithm for counting pixels in the final binary segmented image. The comparative results demonstrate the algorithm's effectiveness through significant correlations of the early vigor and canopy cover estimates with NDVI (normalized difference vegetation index) and grain yield.
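
    A condensed sketch of steps (ii)-(iii) is given below: a colour-space transform to a greenness-sensitive channel followed by an automatic histogram-minimum threshold and pixel counting. The decorrelation step is omitted and the choice of the CIELAB a* channel is an assumption, not necessarily the channel used in the study.

```python
# Sketch: greenness channel + automatic histogram-minimum threshold + pixel
# counting for percent canopy cover. Channel choice is an assumption; the
# decorrelation-stretch step is omitted.
from skimage import io, color, img_as_float
from skimage.filters import threshold_minimum

rgb = img_as_float(io.imread("wheat_plot.jpg"))[..., :3]   # hypothetical zenithal image
a_star = color.rgb2lab(rgb)[..., 1]        # a* < 0 indicates green vegetation

canopy = a_star < threshold_minimum(a_star)   # threshold at the minimum between histogram modes
canopy_cover_percent = 100.0 * canopy.mean()
```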

  2. Smartphone-based quantitative measurements on holographic sensors.

    PubMed

    Khalili Moghaddam, Gita; Lowe, Christopher Robin

    2017-01-01

    The research reported herein integrates a generic holographic sensor platform and a smartphone-based colour quantification algorithm in order to standardise and improve the determination of the concentration of analytes of interest. The utility of this approach has been exemplified by analysing the replay colour of the captured image of a holographic pH sensor in near real-time. Personalised image encryption followed by a wavelet-based image compression method were applied to secure the image transfer across a bandwidth-limited network to the cloud. The decrypted and decompressed image was processed through four principal steps: Recognition of the hologram in the image with a complex background using a template-based approach, conversion of device-dependent RGB values to device-independent CIEXYZ values using a polynomial model of the camera and computation of the CIEL*a*b* values, use of the colour coordinates of the captured image to segment the image, select the appropriate colour descriptors and, ultimately, locate the region of interest (ROI), i.e. the hologram in this case, and finally, application of a machine learning-based algorithm to correlate the colour coordinates of the ROI to the analyte concentration. Integrating holographic sensors and the colour image processing algorithm potentially offers a cost-effective platform for the remote monitoring of analytes in real time in readily accessible body fluids by minimally trained individuals.

  3. Smartphone-based quantitative measurements on holographic sensors

    PubMed Central

    Khalili Moghaddam, Gita

    2017-01-01

    The research reported herein integrates a generic holographic sensor platform and a smartphone-based colour quantification algorithm in order to standardise and improve the determination of the concentration of analytes of interest. The utility of this approach has been exemplified by analysing the replay colour of the captured image of a holographic pH sensor in near real-time. Personalised image encryption followed by a wavelet-based image compression method were applied to secure the image transfer across a bandwidth-limited network to the cloud. The decrypted and decompressed image was processed through four principal steps: Recognition of the hologram in the image with a complex background using a template-based approach, conversion of device-dependent RGB values to device-independent CIEXYZ values using a polynomial model of the camera and computation of the CIEL*a*b* values, use of the colour coordinates of the captured image to segment the image, select the appropriate colour descriptors and, ultimately, locate the region of interest (ROI), i.e. the hologram in this case, and finally, application of a machine learning-based algorithm to correlate the colour coordinates of the ROI to the analyte concentration. Integrating holographic sensors and the colour image processing algorithm potentially offers a cost-effective platform for the remote monitoring of analytes in real time in readily accessible body fluids by minimally trained individuals. PMID:29141008

  4. Segmentation of epidermal tissue with histopathological damage in images of haematoxylin and eosin stained human skin

    PubMed Central

    2014-01-01

    Background: Digital image analysis has the potential to address issues surrounding traditional histological techniques, including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest. A widely applied methodology is that of segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation, hence there is a need for a new robust method with the capability to handle the additional challenges materialising from histopathological damage. Methods: A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm's robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through the comparison of the proposed methodology against manual methods. Results: Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage. Conclusions: Epidermal segmentation is a crucial first step in a range of applications including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues. PMID:24521154

  5. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    NASA Astrophysics Data System (ADS)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images that mix textual, graphical, and pictorial content. In this paper, we present a comparison of two transform based block classification approaches for compound images based on metrics like speed of classification, precision and recall rate. Block based classification approaches normally divide the compound images into fixed-size, non-overlapping blocks. Then a frequency transform such as the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT) is applied over each block. The mean and standard deviation are computed for each 8 × 8 block and are used as a feature set to classify the compound images into text/graphics and picture/background blocks. The classification accuracy of the block classification based segmentation techniques is measured by evaluation metrics like precision and recall rate. Compound images with smooth backgrounds and complex background images containing text of varying size, colour and orientation are considered for testing. Experimental evidence shows that the DWT based segmentation provides an improvement in recall rate and precision rate of approximately 2.3% over DCT based segmentation, with an increase in block classification time, for both smooth and complex background images.
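
    The block pipeline can be sketched as below for the DCT case: tile the image into 8x8 blocks, transform each block, and use the coefficient mean and standard deviation as features. The threshold rule at the end is only a placeholder for the paper's classifier.

```python
# Sketch: 8x8 block tiling, 2-D DCT per block, mean/std features, and a
# placeholder text-vs-picture rule. Threshold value is illustrative.
import numpy as np
from scipy.fft import dctn
from skimage import io, color, img_as_float

gray = color.rgb2gray(img_as_float(io.imread("screen_capture.png")[..., :3]))
h, w = (d - d % 8 for d in gray.shape)                                # crop to multiples of 8
blocks = gray[:h, :w].reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)    # (by, bx, 8, 8)

coeffs = dctn(blocks, axes=(-2, -1), norm='ortho')
block_mean = coeffs.mean(axis=(-2, -1))
block_std = coeffs.std(axis=(-2, -1))

is_text_block = block_std > 0.1    # placeholder: text/graphics blocks have high AC energy
```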

  6. Medical image segmentation to estimate HER2 gene status in breast cancer

    NASA Astrophysics Data System (ADS)

    Palacios-Navarro, Guillermo; Acirón-Pomar, José Manuel; Vilchez-Sorribas, Enrique; Zambrano, Eddie Galarza

    2016-02-01

    This work deals with the estimation of HER2 gene status in breast tumour images treated with in situ hybridization (ISH) techniques. We propose a simple algorithm to obtain the amplification factor of the HER2 gene. The obtained results are very close to those obtained manually by specialists. The developed algorithm is based on colour image segmentation and has been included in a software application tool for breast tumour analysis. The developed tool focuses on the estimation of the severity of tumours, facilitating the work of pathologists and contributing to a better diagnosis.

  7. The geometrical precision of virtual bone models derived from clinical computed tomography data for forensic anthropology.

    PubMed

    Colman, Kerri L; Dobbe, Johannes G G; Stull, Kyra E; Ruijter, Jan M; Oostra, Roelof-Jan; van Rijn, Rick R; van der Merwe, Alie E; de Boer, Hans H; Streekstra, Geert J

    2017-07-01

    Almost all European countries lack contemporary skeletal collections for the development and validation of forensic anthropological methods. Furthermore, legal, ethical and practical considerations hinder the development of skeletal collections. A virtual skeletal database derived from clinical computed tomography (CT) scans provides a potential solution. However, clinical CT scans are typically generated with varying settings. This study investigates the effects of image segmentation and varying imaging conditions on the precision of virtual modelled pelves. An adult human cadaver was scanned using varying imaging conditions, such as scanner type and standard patient scanning protocol, slice thickness and exposure level. The pelvis was segmented from the various CT images resulting in virtually modelled pelves. The precision of the virtual modelling was determined per polygon mesh point. The fraction of mesh points resulting in point-to-point distance variations of 2 mm or less (95% confidence interval (CI)) was reported. Colour mapping was used to visualise modelling variability. At almost all (>97%) locations across the pelvis, the point-to-point distance variation is less than 2 mm (CI = 95%). In >91% of the locations, the point-to-point distance variation was less than 1 mm (CI = 95%). This indicates that the geometric variability of the virtual pelvis as a result of segmentation and imaging conditions rarely exceeds the generally accepted linear error of 2 mm. Colour mapping shows that areas with large variability are predominantly joint surfaces. Therefore, results indicate that segmented bone elements from patient-derived CT scans are a sufficiently precise source for creating a virtual skeletal database.

  8. Graph-based surface reconstruction from stereo pairs using image segmentation

    NASA Astrophysics Data System (ADS)

    Bleyer, Michael; Gelautz, Margrit

    2005-01-01

    This paper describes a novel stereo matching algorithm for epipolar rectified images. The method applies colour segmentation on the reference image. The use of segmentation makes the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimization of a global cost function via a robust optimization technique that employs graph cuts. The cost function is defined on the pixel level, as well as on the segment level. While the pixel level measures the data similarity based on the current disparity map and detects occlusions symmetrically in both views, the segment level propagates the segmentation information and incorporates a smoothness term. New planar models are then generated based on the disparity layers' spatial extents. Results obtained for benchmark and self-recorded image pairs indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.

  9. On-line range images registration with GPGPU

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Naruniec, J.

    2013-03-01

    This paper concerns the implementation of algorithms for two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first topic is based on 3D space decomposition, while the latter is based on image processing and local neighbourhood search. Data processing is implemented using NVIDIA compute unified device architecture (NVIDIA CUDA) parallel computation. The result of the segmentation is a coloured map where different colours correspond to different objects, such as walls, floor and stairs. The research is related to the problem of collecting 3D data with an RGB-D camera mounted on a rotated head, to be used in mobile robot applications. The performance of the data registration algorithm is aimed at on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on a parallel fast nearest neighbour search. This procedure decomposes 3D space into cubic buckets and, therefore, the time of the matching is deterministic. The first data segmentation technique uses accelerometers integrated with the RGB-D sensor to obtain rotation compensation and an image processing method for defining prerequisites of the known categories. The second technique uses the adapted nearest neighbour search procedure for obtaining normal vectors for each range point.

  10. Automatic layer segmentation of H&E microscopic images of mice skin

    NASA Astrophysics Data System (ADS)

    Hussein, Saif; Selway, Joanne; Jassim, Sabah; Al-Assam, Hisham

    2016-05-01

    Mammalian skin is a complex organ composed of a variety of cells and tissue types. The automatic detection and quantification of changes in skin structures has a wide range of applications for biological research. To accurately segment and quantify nuclei, sebaceous glands, hair follicles, and other skin structures, there is a need for a reliable segmentation of the different skin layers. This paper presents an efficient segmentation algorithm to segment the three main layers of mice skin, namely the epidermis, dermis, and subcutaneous layers. It also segments the epidermis layer into two sub-layers, the basal and cornified layers. The proposed algorithm uses an adaptive colour deconvolution technique on H&E stained images to separate different tissue structures; inter-modes and Otsu thresholding techniques were effectively combined to segment the layers. It then uses a set of morphological and logical operations on each layer to remove unwanted objects. A dataset of 7000 H&E microscopic images of mutant and wild type mice was used to evaluate the effectiveness of the algorithm. Experimental results examined by domain experts have confirmed the viability of the proposed algorithms.
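
    A simplified version of the deconvolution-plus-thresholding step might look like the sketch below; scikit-image's fixed H&E-DAB stain matrix replaces the adaptive deconvolution described above, the inter-modes stage is omitted, and structuring-element sizes are illustrative.

```python
# Sketch: fixed (non-adaptive) colour deconvolution, Otsu threshold on the
# haematoxylin channel and morphological clean-up. Sizes are illustrative.
from skimage import io, morphology
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu

rgb = io.imread("mouse_skin_he.png")[..., :3]      # hypothetical file
haem = rgb2hed(rgb)[..., 0]                         # haematoxylin channel

mask = haem > threshold_otsu(haem)
mask = morphology.remove_small_objects(mask, min_size=200)
mask = morphology.binary_closing(mask, morphology.disk(5))
```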

  11. Non-contrast-enhanced imaging of haemodialysis fistulas using quiescent-interval single-shot (QISS) MRA: a feasibility study.

    PubMed

    Okur, A; Kantarci, M; Karaca, L; Yildiz, S; Sade, R; Pirimoglu, B; Keles, M; Avci, A; Çankaya, E; Schmitt, P

    2016-03-01

    To assess the efficiency of a novel quiescent-interval single-shot (QISS) technique for non-contrast-enhanced magnetic resonance angiography (MRA) of haemodialysis fistulas. QISS MRA and colour Doppler ultrasound (CDU) images were obtained from 22 haemodialysis patients with end-stage renal disease (ESRD). A radiologist with extensive experience in vascular imaging initially assessed the fistulas using CDU. Two observers analysed each QISS MRA data set in terms of image quality, using a five-point scale ranging from 0 (non-diagnostic) to 4 (excellent), and lumen diameters of all segments were measured. One hundred vascular segments were analysed for QISS MRA. Two anastomosis segments were considered non-diagnostic. None of the arterial or venous segments were evaluated as non-diagnostic. The image quality was poorer for the anastomosis level compared to the other segments (p<0.001 for arterial segments, and p<0.05 for venous segments), while no significant difference was determined for other vascular segments. QISS MRA has the potential to provide valuable complementary information to CDU regarding the imaging of haemodialysis fistulas. In addition, QISS non-enhanced MRA represents an alternative for assessment of haemodialysis fistulas, in which the administration of iodinated or gadolinium-based contrast agents is contraindicated. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  12. Segmentation of optic disc and optic cup in retinal fundus images using shape regression.

    PubMed

    Sedai, Suman; Roy, Pallab K; Mahapatra, Dwarikanath; Garnavi, Rahil

    2016-08-01

    Glaucoma is one of the leading causes of blindness. The manual examination of the optic cup and disc is a standard procedure used for detecting glaucoma. This paper presents a fully automatic regression-based method which accurately segments the optic cup and disc in retinal colour fundus images. First, we roughly segment the optic disc using a circular Hough transform. The approximated optic disc is then used to compute the initial optic disc and cup shapes. We propose a robust and efficient cascaded shape regression method which iteratively learns the final shape of the optic cup and disc from a given initial shape. Gradient-boosted regression trees are employed to learn each regressor in the cascade. A novel data augmentation approach is proposed to improve the regressors' performance by generating synthetic training data. The proposed optic cup and disc segmentation method is applied to an image set of 50 patients and demonstrates high segmentation accuracy for the optic cup and disc, with Dice metrics of 0.95 and 0.85 respectively. A comparative study shows that our proposed method outperforms state-of-the-art optic cup and disc segmentation methods.
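
    The rough optic-disc localisation step can be sketched with scikit-image's circular Hough transform; the radius range and file name below are assumptions, not the authors' settings.

      import numpy as np
      from skimage import io, color, feature, transform

      # Edge map of the greyscale fundus image, then a circular Hough transform
      # restricted to plausible disc radii; the strongest peak gives the rough disc.
      grey = color.rgb2gray(io.imread('fundus.png')[:, :, :3])   # placeholder file name
      edges = feature.canny(grey, sigma=2)
      radii = np.arange(40, 90, 5)                               # assumed disc radii in pixels
      hspace = transform.hough_circle(edges, radii)
      _, cx, cy, r = transform.hough_circle_peaks(hspace, radii, total_num_peaks=1)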

  13. Automatic CDR Estimation for Early Glaucoma Diagnosis

    PubMed Central

    Sarmiento, A.; Sanchez-Morillo, D.; Jiménez, S.; Alemany, P.

    2017-01-01

    Glaucoma is a degenerative disease that constitutes the second leading cause of blindness in developed countries. Although it cannot be cured, its progression can be prevented through early diagnosis. In this paper, we propose a new algorithm for automatic glaucoma diagnosis based on retinal colour images. We focus on capturing the inherent colour changes of the optic disc (OD) and cup borders by computing several colour derivatives in the CIE L∗a∗b∗ colour space with the CIE94 colour distance. In addition, we incorporate spatial information by retaining these colour derivatives and the original CIE L∗a∗b∗ values of each pixel, and by adding other characteristics such as its distance to the OD centre. The proposed strategy is robust due to a simple structure that requires neither initial segmentation, nor removal of the vascular tree, nor detection of vessel bends. The method has been extensively validated with two datasets (one public and one private), each comprising 60 images of highly variable appearance. The achieved class-wise averaged accuracies of 95.02% and 81.19% demonstrate that this automated approach could support physicians in the diagnosis of glaucoma in its early stage, and therefore it could be seen as an opportunity for developing low-cost solutions for mass screening programs. PMID:29279773
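
    A small sketch of a colour derivative computed as the CIE94 distance between neighbouring pixels in CIE L*a*b* space, using scikit-image; the horizontal-neighbour difference is an illustrative choice and not the authors' exact derivative set.

      import numpy as np
      from skimage import io, color

      # CIE94 distance between each pixel and its right-hand neighbour.
      rgb = io.imread('fundus.png')[:, :, :3] / 255.0   # placeholder file name
      lab = color.rgb2lab(rgb)
      deriv_x = color.deltaE_cie94(lab[:, :-1, :], lab[:, 1:, :])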

  14. HRSCview: a web-based data exploration system for the Mars Express HRSC instrument

    NASA Astrophysics Data System (ADS)

    Michael, G.; Walter, S.; Neukum, G.

    2007-08-01

    The High Resolution Stereo Camera (HRSC) on the ESA Mars Express spacecraft has been orbiting Mars since January 2004. By spring 2007 it had returned around 2 terabytes of image data, covering around 35% of the Martian surface in stereo and colour at a resolution of 10-20 m/pixel. HRSCview provides a rapid means to explore these images up to their full resolution with the data-subsetting, sub-sampling, stretching and compositing being carried out on-the-fly by the image server. It is a joint website of the Free University of Berlin and the German Aerospace Center (DLR). The system operates by on-the-fly processing of the six HRSC level-4 image products: the map-projected ortho-rectified nadir panchromatic and four colour channels, and the stereo-derived DTM (digital terrain model). The user generates a request via the web-page for an image with several parameters: the centre of the view in surface coordinates, the image resolution in metres/pixel, the image dimensions, and one of several colour modes. If there is HRSC coverage at the given location, the necessary segments are extracted from the full orbit images, resampled to the required resolution, and composited according to the user's choice. In all modes the nadir channel, which has the highest resolution, is included in the composite so that the maximum detail is always retained. The images are stretched according to the current view: this applies to the elevation colour scale, as well as the nadir brightness and the colour channels. There are modes for raw colour, stretched colour, enhanced colour (exaggerated colour differences), and a synthetic 'Mars-like' colour stretch. A colour ratio mode is given as an alternative way to examine colour differences (R=IR/R, G=R/G and B=G/B). The final image is packaged as a JPEG file and returned to the user over the web. Each request requires approximately 1 second to process. A link is provided from each view to a data product page, where header items describing the full map-projected science data product are displayed, and a direct link to the archived data products on the ESA Planetary Science Archive (PSA) is provided. At present the majority of the elevation composites are derived from the HRSC Preliminary 200m DTMs generated at the German Aerospace Center (DLR), which will not be available as separately downloadable data products. These DTMs are being progressively superseded by systematically generated higher resolution archival DTMs, also from DLR, which will become available for download through the PSA, and be similarly accessible via HRSCview. At the time of writing this abstract (May 2007), four such high resolution DTMs are available for download via the HRSCview data product pages (for images from orbits 0572, 0905, 1004, and 2039).
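
    As a small illustration of the colour ratio mode quoted above (R=IR/R, G=R/G, B=G/B), the following numpy sketch composes and stretches such a ratio image; the channel arrays and the simple min-max stretch are assumptions, not the HRSCview server code.

      import numpy as np

      def colour_ratio_composite(ir, r, g, b, eps=1e-6):
          # HRSC-style colour ratio composite: R = IR/R, G = R/G, B = G/B.
          comp = np.dstack((ir / (r + eps), r / (g + eps), g / (b + eps)))
          # Simple min-max stretch of each ratio channel to [0, 1] for display.
          lo, hi = comp.min(axis=(0, 1)), comp.max(axis=(0, 1))
          return (comp - lo) / (hi - lo + eps)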

  15. Texture variations suppress suprathreshold brightness and colour variations.

    PubMed

    Schofield, Andrew J; Kingdom, Frederick A A

    2014-01-01

    Discriminating material changes from illumination changes is a key function of early vision. Luminance cues are ambiguous in this regard, but can be disambiguated by co-incident changes in colour and texture. Thus, colour and texture are likely to be given greater prominence than luminance for object segmentation, and better segmentation should in turn produce stronger grouping. We sought to measure the relative strengths of combined luminance, colour and texture contrast using a suprathreshold, psychophysical grouping task. Stimuli comprised diagonal grids of circular patches bordered by a thin black line and contained combinations of luminance decrements with either violet, red, or texture increments. There were two tasks. In the Separate task the different cues were presented separately in a two-interval design, and participants indicated which interval contained the stronger orientation structure. In the Combined task the cues were combined to produce competing orientation structure in a single image. Participants had to indicate which orientation, and therefore which cue, was dominant. Thus we established the relative grouping strength of each cue pair presented separately, and compared this to their relative grouping strength when combined. In this way we observed suprathreshold interactions between cues and were able to assess cue dominance at ecologically relevant signal levels. Participants required significantly more luminance and colour compared to texture contrast in the Combined compared to Separate conditions (contrast ratios differed by about 0.1 log units), showing that suprathreshold texture dominates colour and luminance when the different cues are presented in combination.

  16. The Cyborg Astrobiologist: testing a novelty detection algorithm on two mobile exploration systems at Rivas Vaciamadrid in Spain and at the Mars Desert Research Station in Utah

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Gross, C.; Wendt, L.; Bonnici, A.; Souza-Egipsy, V.; Ormö, J.; Díaz-Martínez, E.; Foing, B. H.; Bose, R.; Walter, S.; Oesker, M.; Ontrup, J.; Haschke, R.; Ritter, H.

    2010-01-01

    In previous work, a platform was developed for testing computer-vision algorithms for robotic planetary exploration. This platform consisted of a digital video camera connected to a wearable computer for real-time processing of images at geological and astrobiological field sites. The real-time processing included image segmentation and the generation of interest points based upon uncommonness in the segmentation maps. Also in previous work, this platform for testing computer-vision algorithms has been ported to a more ergonomic alternative platform, consisting of a phone camera connected via the Global System for Mobile Communications (GSM) network to a remote-server computer. The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone camera has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon colour, (ii) integrate a field-capable digital microscope on the wearable computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform, in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. We acquired sequences of images of geologic outcrops in Utah and Spain consisting of various rock types and colours to test this algorithm. The algorithm robustly recognized previously observed units by their colour, while requiring only a single image or a few images to learn colours as familiar, demonstrating its fast learning capability.

  17. Shadow Detection from Very High Resolution Satellite Image Using Grabcut Segmentation and Ratio-Band Algorithms

    NASA Astrophysics Data System (ADS)

    Kadhim, N. M. S. M.; Mourshed, M.; Bray, M. T.

    2015-03-01

    Very-High-Resolution (VHR) satellite imagery is a powerful source of data for detecting and extracting information about urban constructions. Shadows in VHR satellite imagery provide vital information on urban construction forms, illumination direction, and the spatial distribution of objects, which can further the understanding of the built environment. However, to exploit shadows, the automated detection of shadows from images must be accurate. This paper reviews current automatic approaches that have been used for shadow detection from VHR satellite images and comprises two main parts. In the first part, shadow concepts are presented in terms of shadow appearance in VHR satellite imagery, current shadow detection methods, and the usefulness of shadow detection in urban environments. In the second part, we adopted two approaches, considered current state-of-the-art shadow detection and segmentation algorithms, using WorldView-3 and QuickBird images. In the first approach, the ratios between the NIR and visible bands were computed on a pixel-by-pixel basis, which allows for disambiguation between shadows and dark objects. To obtain an accurate shadow candidate map, we further refined the shadow map after applying the ratio algorithm to the QuickBird image. The second selected approach is the GrabCut segmentation approach, whose performance in detecting the shadow regions of urban objects was examined using the true-colour image from WorldView-3. Further refinement was applied to attain a segmented shadow map. Although the detection of shadow regions is a very difficult task when it is based on a VHR satellite image covering only the visible spectral range (RGB true colour), the results demonstrate that applying the GrabCut algorithm achieves a reasonable separation of shadow regions from other objects in the WorldView-3 image. In addition, the shadow map derived from the QuickBird image indicates the strong performance of the ratio algorithm. The differences in spatial and spectral resolution between the two types of satellite imagery can play an important role in the estimation and detection of the shadows of urban objects.
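
    A minimal OpenCV sketch of the GrabCut step for a true-colour image, assuming a rectangular initialisation window and a placeholder file name; the refinement steps applied in the paper are omitted.

      import cv2
      import numpy as np

      # GrabCut initialised from a rectangle; (probable) foreground pixels form
      # the candidate shadow map, which would then be refined further.
      img = cv2.imread('worldview3_rgb.tif')                     # placeholder file name
      mask = np.zeros(img.shape[:2], np.uint8)
      bgd = np.zeros((1, 65), np.float64)
      fgd = np.zeros((1, 65), np.float64)
      rect = (10, 10, img.shape[1] - 20, img.shape[0] - 20)      # assumed initial window
      cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
      shadow = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)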

  18. Detection of Hard Exudates in Colour Fundus Images Using Fuzzy Support Vector Machine-Based Expert System.

    PubMed

    Jaya, T; Dheeba, J; Singh, N Albert

    2015-12-01

    Diabetic retinopathy is a major cause of vision loss in diabetic patients. Currently, there is a need for making decisions using intelligent computer algorithms when screening a large volume of data. This paper presents an expert decision-making system designed using a fuzzy support vector machine (FSVM) classifier to detect hard exudates in fundus images. The optic discs in the colour fundus images are segmented, to avoid false alarms, using morphological operations and the circular Hough transform. To discriminate between exudate and non-exudate pixels, colour and texture features are extracted from the images and given as input to the FSVM classifier. The classifier analysed 200 retinal images collected from diabetic retinopathy screening programmes. Tests on the retinal images show that the proposed detection system has better discriminating power than the conventional support vector machine. With the best combination of FSVM and feature sets, the area under the receiver operating characteristic curve reached 0.9606, which corresponds to a sensitivity of 94.1% with a specificity of 90.0%. The results suggest that detecting hard exudates using FSVM contributes to the computer-assisted detection of diabetic retinopathy and can serve as a decision support system for ophthalmologists.

  19. Running the figure to the ground: figure-ground segmentation during visual search.

    PubMed

    Ralph, Brandon C W; Seli, Paul; Cheng, Vivian O Y; Solman, Grayden J F; Smilek, Daniel

    2014-04-01

    We examined how figure-ground segmentation occurs across multiple regions of a visual array during a visual search task. Stimuli consisted of arrays of black-and-white figure-ground images in which roughly half of each image depicted a meaningful object, whereas the other half constituted a less meaningful shape. The colours of the meaningful regions of the targets and distractors were either the same (congruent) or different (incongruent). We found that incongruent targets took longer to locate than congruent targets (Experiments 1, 2, and 3) and that this segmentation-congruency effect decreased when the number of search items was reduced (Experiment 2). Furthermore, an analysis of eye movements revealed that participants spent more time scrutinising the target before confirming its identity on incongruent trials than on congruent trials (Experiment 3). These findings suggest that the distractor context influences target segmentation and detection during visual search. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Colour homogeneity and visual perception of age, health and attractiveness of male facial skin.

    PubMed

    Fink, B; Matts, P J; D'Emiliano, D; Bunse, L; Weege, B; Röder, S

    2012-12-01

    Visible facial skin condition in females is known to affect perception of age, health and attractiveness. Skin colour distribution in shape- and topography-standardized female faces, driven by localized melanin and haemoglobin, can account for up to twenty years of apparent age perception. Although this is corroborated by an ability to discern female age even in isolated, non-contextual skin images, a similar effect in the perception of male skin is yet to be demonstrated. To investigate the effect of skin colour homogeneity and chromophore distribution on the visual perception of age, health and attractiveness of male facial skin. Cropped images from the cheeks of facial images of 160 Caucasian British men aged 10-70 years were blind-rated for age, health and attractiveness by a total of 308 participants. In addition, the homogeneity of skin images and corresponding eumelanin/oxyhaemoglobin concentration maps were analysed objectively using Haralick's image segmentation algorithm. Isolated skin images taken from the cheeks of younger males were judged as healthier and more attractive. Perception of age, health and attractiveness was strongly related to melanin and haemoglobin distribution, whereby more even distributions led to perception of younger age and greater health and attractiveness. The evenness of melanized features was a stronger cue for age perception, whereas haemoglobin distribution was associated more strongly with health and attractiveness perception. Male skin colour homogeneity, driven by melanin and haemoglobin distribution, influences perception of age, health and attractiveness. © 2011 The Authors. Journal of the European Academy of Dermatology and Venereology © 2011 European Academy of Dermatology and Venereology.

  1. NICE: A Computational Solution to Close the Gap from Colour Perception to Colour Categorization

    PubMed Central

    Parraga, C. Alejandro; Akbarinia, Arash

    2016-01-01

    The segmentation of visible electromagnetic radiation into chromatic categories by the human visual system has been extensively studied from a perceptual point of view, resulting in several colour appearance models. However, there is currently a void when it comes to relating these results to the physiological mechanisms that are known to shape the pre-cortical and cortical visual pathway. This work intends to begin to fill this void by proposing a new physiologically plausible model of colour categorization based on Neural Isoresponsive Colour Ellipsoids (NICE) in the cone-contrast space defined by the main directions of the visual signals entering the visual cortex. The model was adjusted to fit psychophysical measures that concentrate on the categorical boundaries and are consistent with the ellipsoidal isoresponse surfaces of visual cortical neurons. By revealing the shape of such categorical colour regions, our measures allow for a more precise and parsimonious description, connecting well-known early visual processing mechanisms to the less understood phenomenon of colour categorization. To test the feasibility of our method we applied it to exemplary images and a popular ground-truth chart, obtaining labelling results that are better than those of current state-of-the-art algorithms. PMID:26954691

  2. NICE: A Computational Solution to Close the Gap from Colour Perception to Colour Categorization.

    PubMed

    Parraga, C Alejandro; Akbarinia, Arash

    2016-01-01

    The segmentation of visible electromagnetic radiation into chromatic categories by the human visual system has been extensively studied from a perceptual point of view, resulting in several colour appearance models. However, there is currently a void when it comes to relating these results to the physiological mechanisms that are known to shape the pre-cortical and cortical visual pathway. This work intends to begin to fill this void by proposing a new physiologically plausible model of colour categorization based on Neural Isoresponsive Colour Ellipsoids (NICE) in the cone-contrast space defined by the main directions of the visual signals entering the visual cortex. The model was adjusted to fit psychophysical measures that concentrate on the categorical boundaries and are consistent with the ellipsoidal isoresponse surfaces of visual cortical neurons. By revealing the shape of such categorical colour regions, our measures allow for a more precise and parsimonious description, connecting well-known early visual processing mechanisms to the less understood phenomenon of colour categorization. To test the feasibility of our method we applied it to exemplary images and a popular ground-truth chart, obtaining labelling results that are better than those of current state-of-the-art algorithms.

  3. Perceived visual speed constrained by image segmentation

    NASA Technical Reports Server (NTRS)

    Verghese, P.; Stone, L. S.

    1996-01-01

    Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.

  4. Automatic system for detecting pornographic images

    NASA Astrophysics Data System (ADS)

    Ho, Kevin I. C.; Chen, Tung-Shou; Ho, Jun-Der

    2002-09-01

    Due to the dramatic growth of network and multimedia technology, people can obtain a wide variety of information more easily over the Internet. Unfortunately, this also makes the diffusion of illegal and harmful content much easier, so protecting Internet users, especially children, from content they may encounter while surfing the Net has become an important topic for the Internet community. Among such content, pornographic images cause the most serious harm. Therefore, in this study, we propose an automatic system to detect still colour pornographic images; building on this result, we plan to develop an automatic system to search for or filter such images. Almost all pornographic images share one common characteristic: the ratio of skin-region area to non-skin-region area is high. Based on this characteristic, our system first converts the image from the RGB colour space to the HSV colour space so as to segment all possible skin-colour regions from the scene background. We also apply texture analysis to the selected skin-colour regions to separate true skin regions from non-skin regions, and then group adjacent pixels located in skin regions. If the ratio exceeds a given threshold, the image is flagged as possibly pornographic. In our experiments, fewer than 10% of non-pornographic images were classified as pornography, and over 80% of the most harmful pornographic images were classified correctly.
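
    A minimal sketch of the RGB-to-HSV conversion and skin-colour masking described above, with assumed HSV thresholds and decision ratio (the texture-analysis and grouping stages are not shown).

      import cv2
      import numpy as np

      img = cv2.imread('photo.jpg')                              # placeholder file name
      hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
      skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))       # assumed skin-colour range
      ratio = np.count_nonzero(skin) / skin.size                 # skin-to-image area ratio
      flagged = ratio > 0.4                                      # assumed decision threshold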

  5. Cherry recognition in natural environment based on the vision of picking robot

    NASA Astrophysics Data System (ADS)

    Zhang, Qirong; Chen, Shanxiong; Yu, Tingzhong; Wang, Yan

    2017-04-01

    In order to realize the automatic recognition of cherries in the natural environment, this paper designs a recognition method for a picking-robot vision system. The first step of this method is to pre-process the cherry image with median filtering. The second step is to identify the colour of the cherries through the 0.9R-G colour difference formula and then apply the Otsu algorithm for threshold segmentation. The third step is to remove noise by using an area threshold. The fourth step is to remove holes in the cherry image by morphological closing and opening operations. The fifth step is to obtain the centroid and contour of each cherry by using the minimum enclosing rectangle and the Hough transform. Through this recognition process, 96% of cherries free of occlusion and adhesion are successfully identified.
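
    A hedged OpenCV sketch of the first two steps (median filtering, the 0.9R-G colour difference and Otsu thresholding); the file name is a placeholder and the later noise-removal, morphological and contour steps are omitted.

      import cv2
      import numpy as np

      img = cv2.medianBlur(cv2.imread('cherry.jpg'), 5)          # step 1: median filtering
      b, g, r = cv2.split(img.astype(np.float32))
      diff = np.clip(0.9 * r - g, 0, 255).astype(np.uint8)       # step 2: 0.9R - G difference
      _, binary = cv2.threshold(diff, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)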

  6. Automatic detection of the hippocampal region associated with Alzheimer's disease from microscopic images of mice brain

    NASA Astrophysics Data System (ADS)

    Albaidhani, Tahseen; Hawkes, Cheryl; Jassim, Sabah; Al-Assam, Hisham

    2016-05-01

    The hippocampus is the region of the brain that is primarily associated with memory and spatial navigation. It is one of the first brain regions to be damaged when a person suffers from Alzheimer's disease. Recent research in this field has focussed on the assessment of damage to different blood vessels within the hippocampal region from high-throughput microscopic images of the brain. The ultimate aim of our research is the creation of an automatic system to count and classify different blood vessels, such as capillaries, veins, and arteries, in the hippocampal region. This work should provide biologists with efficient and accurate tools in their investigation of the causes of Alzheimer's disease. Locating the boundary of the region of interest in the hippocampus from microscopic images of mice brain is the first essential stage towards developing such a system. This task benefits from the variation in colour channels and texture between the two sides of the hippocampus and the boundary region. Accordingly, the initial step of our research, locating the hippocampus edge, uses a colour-based segmentation of the brain image followed by Hough transforms on the colour channel that isolates the hippocampus region. The output is then used to split the brain image into the two sides of the detected section of the boundary: the inside region and the outside region. Experimental results on a sufficient number of microscopic images demonstrate the effectiveness of the developed solution.

  7. Automated detection of diabetic retinopathy on digital fundus images.

    PubMed

    Sinthanayothin, C; Boyce, J F; Williamson, T H; Cook, H L; Mensah, E; Lal, S; Usher, D

    2002-02-01

    The aim was to develop an automated screening system to analyse digital colour retinal images for important features of non-proliferative diabetic retinopathy (NPDR). High performance pre-processing of the colour images was performed. Previously described automated image analysis systems were used to detect major landmarks of the retinal image (optic disc, blood vessels and fovea). Recursive region growing segmentation algorithms combined with the use of a new technique, termed a 'Moat Operator', were used to automatically detect features of NPDR. These features included haemorrhages and microaneurysms (HMA), which were treated as one group, and hard exudates as another group. Sensitivity and specificity data were calculated by comparison with an experienced fundoscopist. The algorithm for exudate recognition was applied to 30 retinal images of which 21 contained exudates and nine were without pathology. The sensitivity and specificity for exudate detection were 88.5% and 99.7%, respectively, when compared with the ophthalmologist. HMA were present in 14 retinal images. The algorithm achieved a sensitivity of 77.5% and specificity of 88.7% for detection of HMA. Fully automated computer algorithms were able to detect hard exudates and HMA. This paper presents encouraging results in automatic identification of important features of NPDR.
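
    The recursive region-growing idea mentioned above can be sketched as an iterative, queue-based variant on a greyscale image; the intensity tolerance and 4-connectivity are illustrative assumptions, and the 'Moat Operator' is not shown.

      import numpy as np
      from collections import deque

      def region_grow(img, seed, tol=10):
          # Grow a region from the (row, col) seed, adding 4-connected neighbours
          # whose intensity differs from the seed value by at most `tol`.
          h, w = img.shape
          mask = np.zeros((h, w), bool)
          seed_val = float(img[seed])
          queue = deque([seed])
          mask[seed] = True
          while queue:
              y, x = queue.popleft()
              for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                  if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                          and abs(float(img[ny, nx]) - seed_val) <= tol):
                      mask[ny, nx] = True
                      queue.append((ny, nx))
          return mask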

  8. Prenatal diagnosis of placenta accreta by colour Doppler ultrasonography: 5-year review.

    PubMed

    Pongrojpaw, Densak; Chanthasenanont, Athita; Nanthakomon, Tongta; Suwannarurk, Komsun

    2014-08-01

    To determine the accuracy of colour Doppler ultrasonography to diagnose placenta accreta. The authors reviewed cases of placenta accreta between January, 2008 and December, 2012. Ultrasonographic images consistent with signs of placenta accreta (numerous vascular lacunae, loss of subplacental sonolucent space, absent lower uterine segment between bladder and placenta, turbulent or complicated blood flow at the uteroplacental interface) were correlated with findings at the time of surgery and pathologic examination. Over 60 months, 12 cases (0.48/1,000 deliveries) with suspected placenta accreta by ultrasonography were studied. The median gestational age at first diagnosis was 24 weeks. All cases had at least one previous cesarean delivery. At surgery, all cases had an adherent placenta requiring hysterectomy (five accreta, three increta, and four percreta). Four cases (33%) had accidental tear of the urinary bladder. Nine cases (75%) required blood transfusions. Colour Doppler ultrasonography appears useful in the antenatal diagnosis of placenta accreta.

  9. First Steps to Automated Interior Reconstruction from Semantically Enriched Point Clouds and Imagery

    NASA Astrophysics Data System (ADS)

    Obrock, L. S.; Gülch, E.

    2018-05-01

    The automated generation of a BIM-Model from sensor data is a huge challenge for the modelling of existing buildings. Currently the measurements and analyses are time consuming, allow little automation and require expensive equipment, and there is no automated acquisition of the semantic information of objects in a building. We present first results of our approach, based on imagery and derived products, aiming at a more automated modelling of interiors for a BIM building model. We examine the building parts and objects visible in the collected images using deep learning methods based on convolutional neural networks. For localization and classification of building parts we apply the FCN8s model for pixel-wise semantic segmentation. We, so far, reach a pixel accuracy of 77.2 % and a mean intersection over union of 44.2 %. We then use the network for further reasoning on the images of the interior room. We combine the segmented images with the original images and use photogrammetric methods to produce a three-dimensional point cloud. We code the extracted object types as colours of the 3D points and are thus able to uniquely classify the points in three-dimensional space. We also preliminarily investigate a simple extraction method for the colour and material of building parts. It is shown that the combined images are very well suited to further extracting semantic information for the BIM-Model. With the presented methods we see a sound basis for further automation of the acquisition and modelling of semantic and geometric information of interior rooms for a BIM-Model.

  10. Image analysis for material characterisation

    NASA Astrophysics Data System (ADS)

    Livens, Stefan

    In this thesis, a number of image analysis methods are presented as solutions to two applications concerning the characterisation of materials. Firstly, we deal with the characterisation of corrosion images, which is handled using a multiscale texture analysis method based on wavelets. We propose a feature transformation that deals with the problem of rotation invariance. Classification is performed with a Learning Vector Quantisation neural network and with a combination of outputs. In an experiment, 86.2% of the images showing either pit formation or cracking are correctly classified. Secondly, we develop an automatic system for the characterisation of silver halide microcrystals. These are flat crystals with a triangular or hexagonal base and a thickness in the 100 to 200 nm range. A light microscope is used to image them. A novel segmentation method is proposed, which allows agglomerated crystals to be separated. For the measurement of shape, the ratio between the largest and the smallest radius yields the best results. The thickness measurement is based on the interference colours that appear for light reflected at the crystals. The mean colour of different thickness populations is determined, from which a calibration curve is derived. With this, the thickness of new populations can be determined accurately.

  11. Can colours be used to segment words when reading?

    PubMed

    Perea, Manuel; Tejero, Pilar; Winskel, Heather

    2015-07-01

    Rayner, Fischer, and Pollatsek (1998, Vision Research) demonstrated that reading unspaced text in Indo-European languages produces a substantial reading cost in word identification (as deduced from an increased word-frequency effect on target words embedded in the unspaced vs. spaced sentences) and in eye movement guidance (as deduced from landing sites closer to the beginning of the words in unspaced sentences). However, the addition of spaces between words comes with a cost: nearby words may fall outside high-acuity central vision, thus reducing the potential benefits of parafoveal processing. In the present experiment, we introduced a salient visual cue intended to facilitate the process of word segmentation without compromising visual acuity: each alternating word was printed in a different colour. Results revealed only a small reading cost for unspaced alternating-colour sentences relative to the spaced sentences. Thus, the present data demonstrate that colour can be used to segment words for readers of spaced orthographies. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Recognising the forest, but not the trees: an effect of colour on scene perception and recognition.

    PubMed

    Nijboer, Tanja C W; Kanai, Ryota; de Haan, Edward H F; van der Smagt, Maarten J

    2008-09-01

    Colour has been shown to facilitate the recognition of scene images, but only when these images contain natural scenes, for which colour is 'diagnostic'. Here we investigate whether colour can also facilitate memory for scene images, and whether this would hold for natural scenes in particular. In the first experiment participants first studied a set of colour and greyscale natural and man-made scene images. Next, the same images were presented, randomly mixed with a different set. Participants were asked to indicate whether they had seen the images during the study phase. Surprisingly, performance was better for greyscale than for coloured images, and this difference is due to the higher false alarm rate for both natural and man-made coloured scenes. We hypothesized that this increase in false alarm rate was due to a shift from scrutinizing details of the image to recognition of the gist of the (coloured) image. A second experiment, utilizing images without a nameable gist, confirmed this hypothesis as participants now performed equally on greyscale and coloured images. In the final experiment we specifically targeted the more detail-based perception and recognition for greyscale images versus the more gist-based perception and recognition for coloured images with a change detection paradigm. The results show that changes to images are detected faster when image-pairs were presented in greyscale than in colour. This counterintuitive result held for both natural and man-made scenes (but not for scenes without nameable gist) and thus corroborates the shift from more detailed processing of images in greyscale to more gist-based processing of coloured images.

  13. The elimination of colour blocks in remote sensing images in VR

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Li, Guohui; Su, Zhenyu

    2018-02-01

    Addressing the characteristics, in HSI colour space, of remote sensing images acquired at different times in VR, a unified colour adjustment algorithm is proposed. First, the method converts the original image from the RGB colour space to the HSI colour space. Then, based on the invariance of hue before and after colour adjustment in the HSI colour space and on the translational behaviour of image brightness after adjustment, a linear model satisfying these characteristics is established and the range of its parameters is determined. Finally, experimental verification is carried out according to the established colour adjustment model. The experimental results show that the proposed algorithm can effectively enhance image clarity, solves the colour block problem well, and is fast.
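
    For reference, a common RGB-to-HSI conversion of the kind used in the first step can be sketched as follows (a minimal numpy version; the paper's subsequent linear adjustment model is not shown).

      import numpy as np

      def rgb_to_hsi(rgb):
          # rgb: float array in [0, 1] with shape (..., 3); returns H (radians), S, I.
          r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
          i = (r + g + b) / 3.0
          s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + 1e-10)
          num = 0.5 * ((r - g) + (r - b))
          den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-10
          theta = np.arccos(np.clip(num / den, -1.0, 1.0))
          h = np.where(b <= g, theta, 2.0 * np.pi - theta)
          return np.dstack((h, s, i))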

  14. Wear Detection of Drill Bit by Image-based Technique

    NASA Astrophysics Data System (ADS)

    Sukeri, Maziyah; Zulhilmi Paiz Ismadi, Mohd; Rahim Othman, Abdul; Kamaruddin, Shahrul

    2018-03-01

    Image processing for computer vision plays an essential role in tool condition monitoring in the manufacturing industries. This study proposes a dependable direct measurement method for tool wear using image-based analysis. Segmentation and thresholding techniques were used to filter the image and convert the colour image to binary data. The edge detection method was then applied to characterize the edge of the drill bit. Using the cross-correlation method, the edges of the original and worn drill bits were correlated with each other. The cross-correlation graphs were able to reveal the worn edge despite only small differences between the graphs. Future development will focus on quantifying the worn profile as well as enhancing the sensitivity of the technique.

  15. A colour image reproduction framework for 3D colour printing

    NASA Astrophysics Data System (ADS)

    Xiao, Kaida; Sohiab, Ali; Sun, Pei-li; Yates, Julian M.; Li, Changjun; Wuerger, Sophie

    2016-10-01

    In this paper, current technologies in full-colour 3D printing are introduced and a framework for the colour image reproduction process for 3D colour printing is proposed. A special focus is put on colour management for 3D printed objects. Two approaches, colorimetric colour reproduction and spectral-based colour reproduction, are proposed in order to faithfully reproduce colours in 3D objects. Two key studies, colour reproduction for soft tissue prostheses and colour uniformity correction across different orientations, are described subsequently. The results clearly show that, by applying the proposed colour image reproduction framework, the performance of colour reproduction can be significantly enhanced. With post colour corrections, a further improvement in the colour reproduction process is achieved for 3D printed objects.

  16. How do patients with neglect see a horizontal line? Analysis of performances in coloured line bisection task.

    PubMed

    Misonou, Kaori; Ishiai, Sumio; Seki, Keiko; Koyama, Yasumasa; Nakano, Naomi

    2004-06-01

    Twelve patients with left unilateral spatial neglect were examined with a newly devised "coloured line bisection task". They were presented with a horizontal line printed in blue on one side and in red on the other side; the proportions of the blue and red segments were varied. Immediately after placement of the subjective midpoint, the line was concealed and the patients were asked to name the colours of the right and left ends. Five patients who identified the left-end colour almost correctly had no visual field defect, while the other seven whose colour naming was impaired on the left side had left visual field defect. The rightward bisection errors were similarly distributed in the fair and poor colour-naming patients except for two patients from the latter group. The lesions of the fair colour-naming patients spared the lingual and fusiform gyri, which are known to be engaged in colour processing. Patients with neglect whose visual field is preserved may neglect the leftward extension of a line but not the colour in the neglected space. The poor colour-naming patients frequently failed to name the left-end colour that appeared to the left of their subjective midpoint, which indicates that they hardly searched leftward beyond that point. In such trials, they reported that the left end had the same colour as the right end. The results suggest that in patients with neglect and left visual field defect, both the leftward extent and the colour of a line may be represented on the basis of the information from the attended right segment.

  17. Structural colour printing from a reusable generic nanosubstrate masked for the target image

    NASA Astrophysics Data System (ADS)

    Rezaei, M.; Jiang, H.; Kaminska, B.

    2016-02-01

    Structural colour printing has advantages over traditional pigment-based colour printing. However, the high fabrication cost has hindered its applications in printing large-area images because each image requires patterning structural pixels in nanoscale resolution. In this work, we present a novel strategy to print structural colour images from a pixelated substrate which is called a nanosubstrate. The nanosubstrate is fabricated only once using nanofabrication tools and can be reused for printing a large quantity of structural colour images. It contains closely packed arrays of nanostructures from which red, green, blue and infrared structural pixels can be imprinted. To print a target colour image, the nanosubstrate is first covered with a mask layer to block all the structural pixels. The mask layer is subsequently patterned according to the target colour image to make apertures of controllable sizes on top of the wanted primary colour pixels. The masked nanosubstrate is then used as a stamp to imprint the colour image onto a separate substrate surface using nanoimprint lithography. Different visual colours are achieved by properly mixing the red, green and blue primary colours into appropriate ratios controlled by the aperture sizes on the patterned mask layer. Such a strategy significantly reduces the cost and complexity of printing a structural colour image from lengthy nanoscale patterning into high throughput micro-patterning and makes it possible to apply structural colour printing in personalized security features and data storage. In this paper, nanocone array grating pixels were used as the structural pixels and the nanosubstrate contains structures to imprint the nanocone arrays. Laser lithography was implemented to pattern the mask layer with submicron resolution. The optical properties of the nanocone array gratings are studied in detail. Multiple printed structural colour images with embedded covert information are demonstrated.

  18. Automatic Diabetic Macular Edema Detection in Fundus Images Using Publicly Available Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul

    2011-01-01

    Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy. In a large-scale screening environment DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for the diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on the MESSIDOR dataset (an independently labelled dataset with 1200 images) with cross-dataset testing. Our algorithm is robust to segmentation uncertainties, does not need ground truth at the lesion level, and is very fast, generating a diagnosis in an average of 4.4 seconds per image on a 2.6 GHz platform with an unoptimised Matlab implementation.

  19. False-colour palette generation using a reference colour gamut

    NASA Astrophysics Data System (ADS)

    Green, Phil

    2015-01-01

    Monochrome images are often converted to false-colour images, in which arbitrary colours are assigned to regions of the image to aid recognition of features within the image. Criteria for selection of colour palettes vary according to the application, but may include distinctiveness, extensibility, consistency, preference, meaningfulness and universality. A method for defining a palette from colours on the surface of a reference gamut is described, which ensures that all colours in the palette have the maximum chroma available for the given hue angle in the reference gamut. The palette can be re-targeted to a reproduction medium as needed using colour management, and this method ensures consistency between cross-media colour reproductions using the palette.

  20. Simulating Colour Vision Deficiency from a Spectral Image.

    PubMed

    Shrestha, Raju

    2016-01-01

    People with colour vision deficiency (CVD) have difficulty seeing full colour contrast and can miss some of the features in a scene. As part of universal design, researchers have been working on how to modify and enhance the colours of images so that people with CVD can see the scene with good contrast. For this, it is important to know how the original colour image is seen by different individuals with CVD. This paper proposes a methodology to simulate accurate colour-deficient images from a spectral image using the cone sensitivities of different types of deficiency. As the method enables the generation of accurate colour-deficient images, the methodology is believed to help better understand the limitations of colour vision deficiency, which in turn leads to the design and development of more effective imaging technologies for better and wider accessibility in the context of universal design.

  1. Semantic Segmentation and Difference Extraction via Time Series Aerial Video Camera and its Application

    NASA Astrophysics Data System (ADS)

    Amit, S. N. K.; Saito, S.; Sasaki, S.; Kiyoki, Y.; Aoki, Y.

    2015-04-01

    Google Earth with high-resolution imagery typically takes months to process new images before online updates. This is a time-consuming and slow process, especially for post-disaster applications. The objective of this research is to develop a fast and effective method of updating maps by detecting local differences that have occurred over different time series, where only the regions with differences are updated. In our system, aerial images from the Massachusetts road and building open datasets and the Saitama district datasets are used as input images. Semantic segmentation, a pixel-wise classification of images implemented with a deep neural network, is then applied to the input images. The deep neural network technique is used because it is not only efficient in learning highly discriminative image features such as roads, buildings etc., but also partially robust to incomplete and poorly registered target maps. The aerial images, together with their semantic information, are then stored as a database in a 5D world map and serve as ground-truth images. This system is developed to visualise multimedia data in 5 dimensions: 3 spatial dimensions, 1 temporal dimension, and 1 degenerated dimension combining semantic and colour information. Next, ground-truth images chosen from the database in the 5D world map and a new aerial image with the same spatial information but acquired at a different time are compared via the difference extraction method. The map is only updated where local changes have occurred. Hence, map updating becomes cheaper, faster and more effective, especially for post-disaster applications, by leaving unchanged regions alone and updating only the changed regions.
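
    A minimal sketch of the difference-extraction idea, assuming two co-registered semantic label maps of the same area from different dates; the 'ignore' label code is hypothetical. Only map regions intersecting the returned mask would then need re-rendering.

      import numpy as np

      def changed_regions(labels_t0, labels_t1, ignore=(255,)):
          # Pixels whose semantic class differs between the two dates are marked
          # for update; pixels carrying an `ignore` label (e.g. unlabelled) are skipped.
          changed = labels_t0 != labels_t1
          for c in ignore:
              changed &= (labels_t0 != c) & (labels_t1 != c)
          return changed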

  2. The colour preference control based on two-colour combinations

    NASA Astrophysics Data System (ADS)

    Hong, Ji Young; Kwak, Youngshin; Park, Du-Sik; Kim, Chang Yeong

    2008-02-01

    This paper proposes a framework of colour preference control to satisfy the consumer's colour-related emotions. A colour harmony algorithm based on two-colour combinations is developed for displaying images containing several complementary colour pairs treated as two-colour combinations. The colours of pixels belonging to complementary colour areas in HSV colour space are shifted toward the target hue colours, with no colour change for the other pixels. With the developed technique, the dynamic emotions produced by the proposed hue conversion can be improved, and the controlled output image shows improved colour emotions in terms of viewer preference. Psychophysical experiments are conducted to investigate the optimal model parameters that produce the most pleasant image for users with respect to colour emotions.

  3. Automatic Solitary Lung Nodule Detection in Computed Tomography Images Slices

    NASA Astrophysics Data System (ADS)

    Sentana, I. W. B.; Jawas, N.; Asri, S. A.

    2018-01-01

    Lung nodules are an early indicator of some lung diseases, including lung cancer. In Computed Tomography (CT) images, a nodule appears as a shape that is brighter than the surrounding lung. This research aims to develop an application that automatically detects lung nodules in CT images. The algorithm comprises several steps: image acquisition and conversion, image binarization, lung segmentation, blob detection, and classification. Data acquisition takes the images slice by slice from the original *.dicom format, and each image slice is then converted into the *.tif image format. Binarization, using the Otsu algorithm, separates the background and foreground parts of each image slice. After removing the background, the next step is to segment the lung region only, so that nodules can be localized more easily. The Otsu algorithm is used once again to detect nodule blobs in the localized lung area. The final step applies a Support Vector Machine (SVM) to classify the nodules. The application has succeeded in detecting nearly round nodules above a certain size threshold. The results also show drawbacks in the thresholding of nodule size and shape that need to be addressed in the next part of the research. The algorithm also cannot detect nodules attached to the lung wall or lung channels, since the search depends only on colour differences.

  4. Object Based Image Analysis Combining High Spatial Resolution Imagery and Laser Point Clouds for Urban Land Cover

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    With the rapid development of sensor technology, high spatial resolution imagery and airborne Lidar point clouds can now be captured, making the classification, extraction, evaluation and analysis of a broad range of object features possible. High resolution imagery, Lidar datasets and parcel maps can be widely used as information carriers for classification, so refinement of object classification is made possible for urban land cover. The paper presents an approach to object-based image analysis (OBIA) combining high spatial resolution imagery and airborne Lidar point clouds. The advanced workflow for urban land cover is designed with four components. Firstly, the colour-infrared TrueOrtho photo and laser point clouds were pre-processed to derive the parcel map of water bodies and the nDSM respectively. Secondly, image objects are created via multi-resolution image segmentation integrating the scale parameter and the colour and shape properties with a compactness criterion, so that the image can be subdivided into separate object regions. Thirdly, image object classification is performed on the basis of the segmentation and a rule set in the form of a knowledge decision tree. The image objects are classified into six classes: water bodies, low vegetation/grass, tree, low building, high building and road. Finally, in order to assess the validity of the classification results for the six classes, accuracy assessment is performed by comparing randomly distributed reference points of the TrueOrtho imagery with the classification results, forming the confusion matrix and calculating the overall accuracy and Kappa coefficient. The study area focuses on the test site Vaihingen/Enz, and a patch of the test datasets comes from the benchmark of the ISPRS WG III/4 test project. The classification results show high overall accuracy for most types of urban land cover: the overall accuracy is 89.5% and the Kappa coefficient equals 0.865. The OBIA approach provides an effective and convenient way to combine high resolution imagery and Lidar ancillary data for the classification of urban land cover.
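
    The accuracy assessment described above can be illustrated with a short sketch that computes the overall accuracy and Kappa coefficient from a confusion matrix (standard formulas, not the authors' code).

      import numpy as np

      def accuracy_and_kappa(confusion):
          # confusion: K x K matrix of counts, rows = reference, columns = classified.
          confusion = np.asarray(confusion, dtype=float)
          n = confusion.sum()
          po = np.trace(confusion) / n                           # overall accuracy
          pe = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n ** 2   # chance agreement
          return po, (po - pe) / (1.0 - pe)                      # overall accuracy, Kappa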

  5. Use of discrete chromatic space to tune the image tone in a color image mosaic

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Zheng, Li

    2003-09-01

    Colour image processing is a very important problem. However, the main current approach is to transfer the RGB colour space into another colour space, such as HSI (hue, saturation and intensity), YIQ, LUV and so on. In fact, it may not be valid to process a colour airborne image in just one colour space, because the electromagnetic wave is physically altered in every wave band, while the colour image is perceived on the basis of psychological vision. Therefore, it is necessary to propose an approach that accords with both the physical transformation and psychological perception. An analysis of how to use relative colour spaces to process colour airborne photos is then discussed, and an application to tuning the image tone in a colour airborne image mosaic is introduced. As a practical example, a complete approach to performing the mosaic on colour airborne images by taking full advantage of relative colour spaces is discussed in the application.

  6. Optical colour image watermarking based on phase-truncated linear canonical transform and image decomposition

    NASA Astrophysics Data System (ADS)

    Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun

    2018-05-01

    This paper presents a novel optical colour image watermarking scheme based on phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In this proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, can provide a higher security to resist various common cryptographic attacks. And the ID-based multilevel embedding method, which can be digitally implemented by a computer, can make the information of the colour watermark disperse better in the colour host image. The proposed colour image watermarking scheme possesses high security and can achieve a higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.

  7. The Importance of Take-Out Food Packaging Attributes: Conjoint Analysis and Quality Function Deployment Approach

    NASA Astrophysics Data System (ADS)

    Lestari Widaningrum, Dyah

    2014-03-01

    This research aims to investigate the importance of take-out food packaging attributes, using conjoint analysis and the QFD approach, among consumers of take-out food products in Jakarta, Indonesia. The conjoint results indicate that packaging material (such as paper, plastic, and polystyrene foam) plays the most important role overall in consumer perception. The clustering results show that there is strong segmentation in which take-out food packaging attributes consumers consider most important. Some consumers are mostly oriented toward the colour of the packaging, while other segments of customers focus on packaging shape or packaging information. Segmentation variables based on packaging responses can provide very useful information to maximize the image of products through the package's impact. The results of the House of Quality development show that Conjoint Analysis - QFD is a useful combination of the two methodologies for product development, market segmentation, and the trade-off between customers' requirements in the early stages of the HOQ process.

  8. Improved colour matching technique for fused nighttime imagery with daytime colours

    NASA Astrophysics Data System (ADS)

    Hogervorst, Maarten A.; Toet, Alexander

    2016-10-01

    Previously, we presented a method for applying daytime colours to fused nighttime (e.g., intensified and LWIR) imagery (Toet and Hogervorst, Opt. Eng. 51(1), 2012). Our colour mapping not only imparts a natural daylight appearance to multiband nighttime images but also enhances the contrast and visibility of otherwise obscured details. As a result, this colourizing method leads to increased ease of interpretation, better discrimination and identification of materials, faster reaction times and ultimately improved situational awareness (Toet et al., Opt. Eng. 53(4), 2014). A crucial step in this colouring process is the choice of a suitable colour mapping scheme. When daytime colour images and multiband sensor images of the same scene are available, the colour mapping can be derived from matching image samples (i.e., by relating colour values to sensor signal intensities). When no exact matching reference images are available, the colour transformation can be derived from the first-order statistical properties of the reference image and the multiband sensor image (Toet, Info. Fus. 4(3), 2003). In the current study we investigated new colour fusion schemes that combine the advantages of both methods, using the correspondence between multiband sensor values and daytime colours (first method) in a smooth transformation (second method). We designed and evaluated three new fusion schemes that focus on: i) a closer match with the daytime luminances, ii) improved saliency of hot targets, and iii) improved discriminability of materials.
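
    A hedged sketch of the statistics-based mapping option mentioned above, matching each channel's mean and standard deviation to the daytime reference; the published method performs this matching in a perceptually decorrelated colour space, which is omitted here.

      import numpy as np

      def match_first_order_stats(multiband, reference):
          # Shift and scale every channel of the fused multiband image so its mean
          # and standard deviation match those of the daytime reference image.
          out = np.empty(multiband.shape, dtype=float)
          for c in range(multiband.shape[-1]):
              src = multiband[..., c].astype(float)
              ref = reference[..., c].astype(float)
              out[..., c] = (src - src.mean()) / (src.std() + 1e-10) * ref.std() + ref.mean()
          return np.clip(out, 0, 255)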

  9. Modelling memory colour region for preference colour reproduction

    NASA Astrophysics Data System (ADS)

    Zeng, Huanzhao; Luo, Ronnier

    2010-01-01

    Colour preference adjustment is an essential step in colour image enhancement and perceptual gamut mapping. In colour reproduction for pictorial images, properly shifting colours away from their colorimetric originals may produce a more preferred colour reproduction result. Memory colours, as a portion of the colour regions used for colour preference adjustment, are especially important for preference colour reproduction. Identifying memory colours, or modelling the memory colour region, is a basic step in studying preferred memory colour enhancement. In this study, we first created a gamut for each memory colour region, represented as a convex hull, and then used the convex hull to guide mathematical modelling to formulate the colour region for colour enhancement.
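
    The convex-hull step described above can be sketched with standard computational-geometry tools. In the snippet below, random points stand in for measured memory-colour samples in a two-dimensional chromatic plane (e.g. a*-b*); the choice of plane, the sample values and the SciPy-based inside-test are assumptions for illustration, not the authors' model.

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull, Delaunay

    # Build the hull of observed memory-colour samples and test whether new
    # colours fall inside the resulting gamut region.
    rng = np.random.default_rng(8)
    memory_samples = rng.normal(loc=[18.0, 22.0], scale=[4.0, 5.0], size=(200, 2))

    hull = ConvexHull(memory_samples)                 # gamut boundary of the region
    tri = Delaunay(memory_samples[hull.vertices])     # triangulation for point-in-hull tests

    def in_memory_region(ab: np.ndarray) -> np.ndarray:
        """True where the (N, 2) chromatic coordinates lie inside the hull."""
        return tri.find_simplex(ab) >= 0

    queries = np.array([[18.0, 22.0],     # near the centre of the region
                        [60.0, -40.0]])   # far outside it
    print(in_memory_region(queries))      # expected: [ True False ]
    print(hull.volume)                    # area of the memory-colour gamut (2-D hull)
    ```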

  10. Drawing ability in typical and atypical development; colour cues and the effect of oblique lines.

    PubMed

    Farran, E K; Dodd, G F

    2015-06-01

    Individuals with Williams syndrome (WS) have poor drawing ability. Here, we investigated whether colour could be used as a facilitation cue during a drawing task. Participants with WS and non-verbal ability matched typically developing (TD) children were shown line figures presented on a 3 by 3 dot matrix, and asked to replicate the figures by drawing on an empty dot matrix. The dots of the matrix were either all black (control condition), or nine different coloured dots (colour condition). In a third condition, which also used coloured dots, participants were additionally asked to verbalise the colours of the dots prior to replicating the line drawings (colour-verbal condition). Performance was stronger in both WS and TD groups on the two coloured conditions, compared with the control condition. However, the facilitation effect of colour was significantly weaker in the WS group than in the TD group. Replication of oblique line segments was less successful than replication of non-oblique line segments for both groups; this effect was reduced by colour facilitation in the TD group only. Verbalising the colours had no additional impact on performance in either group. We suggest that colour acted as a cue to individuate the dots, thus enabling participants to better ascertain the spatial relationships between the parts of each figure, to determine the start and end points of component lines, and to determine the correspondence between the model and their replication. The reduced facilitation in the WS group is discussed in relation to the effect of oblique versus non-oblique lines, the use of atypical drawing strategies, and reduced attention to the model when drawing the replication. © 2014 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.

  11. Field-Portable Pixel Super-Resolution Colour Microscope

    PubMed Central

    Greenbaum, Alon; Akbari, Najva; Feizi, Alborz; Luo, Wei; Ozcan, Aydogan

    2013-01-01

    Based on partially-coherent digital in-line holography, we report a field-portable microscope that can render lensfree colour images over a wide field-of-view of e.g., >20 mm2. This computational holographic microscope weighs less than 145 grams with dimensions smaller than 17×6×5 cm, making it especially suitable for field settings and point-of-care use. In this lensfree imaging design, we merged a colorization algorithm with a source shifting based multi-height pixel super-resolution technique to mitigate ‘rainbow’ like colour artefacts that are typical in holographic imaging. This image processing scheme is based on transforming the colour components of an RGB image into YUV colour space, which separates colour information from brightness component of an image. The resolution of our super-resolution colour microscope was characterized using a USAF test chart to confirm sub-micron spatial resolution, even for reconstructions that employ multi-height phase recovery to handle dense and connected objects. To further demonstrate the performance of this colour microscope Papanicolaou (Pap) smears were also successfully imaged. This field-portable and wide-field computational colour microscope could be useful for tele-medicine applications in resource poor settings. PMID:24086742

  12. Field-portable pixel super-resolution colour microscope.

    PubMed

    Greenbaum, Alon; Akbari, Najva; Feizi, Alborz; Luo, Wei; Ozcan, Aydogan

    2013-01-01

    Based on partially-coherent digital in-line holography, we report a field-portable microscope that can render lensfree colour images over a wide field-of-view of, e.g., >20 mm². This computational holographic microscope weighs less than 145 grams with dimensions smaller than 17×6×5 cm, making it especially suitable for field settings and point-of-care use. In this lensfree imaging design, we merged a colorization algorithm with a source shifting based multi-height pixel super-resolution technique to mitigate 'rainbow' like colour artefacts that are typical in holographic imaging. This image processing scheme is based on transforming the colour components of an RGB image into YUV colour space, which separates colour information from the brightness component of an image. The resolution of our super-resolution colour microscope was characterized using a USAF test chart to confirm sub-micron spatial resolution, even for reconstructions that employ multi-height phase recovery to handle dense and connected objects. To further demonstrate the performance of this colour microscope, Papanicolaou (Pap) smears were also successfully imaged. This field-portable and wide-field computational colour microscope could be useful for tele-medicine applications in resource-poor settings.
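
    The YUV step mentioned in both records above separates brightness from chroma with a fixed linear transform, so that a super-resolved reconstruction can act on the Y channel while the colour planes stay at lower resolution. A minimal sketch follows; the BT.601 coefficients and the toy luma adjustment are assumptions for illustration, not the exact transform or pipeline used by the authors.

    ```python
    import numpy as np

    # Standard BT.601 luma/chroma coefficients -- an assumption for illustration;
    # the paper only states that YUV separates brightness (Y) from colour (U, V).
    RGB_TO_YUV = np.array([[ 0.299,  0.587,  0.114],
                           [-0.147, -0.289,  0.436],
                           [ 0.615, -0.515, -0.100]])

    def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
        """Convert an (H, W, 3) RGB image in [0, 1] to YUV."""
        return rgb @ RGB_TO_YUV.T

    def yuv_to_rgb(yuv: np.ndarray) -> np.ndarray:
        """Invert the linear transform to recover RGB."""
        return yuv @ np.linalg.inv(RGB_TO_YUV).T

    # A high-resolution reconstruction can replace only the Y channel,
    # keeping the lower-resolution colour (U, V) planes untouched.
    img = np.random.default_rng(1).uniform(size=(32, 32, 3))
    yuv = rgb_to_yuv(img)
    yuv[..., 0] = np.clip(yuv[..., 0] * 1.1, 0.0, 1.0)   # stand-in for a super-resolved luma
    restored = yuv_to_rgb(yuv)
    print(restored.shape)
    ```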

  13. Skin colour assessment of replanted fingers in digital images and its reliability for the incorporation of images in nursing progress notes.

    PubMed

    Terashima, Taiko; Yoshimura, Sadako

    2018-03-01

    To determine whether nurses can accurately assess the skin colour of replanted fingers displayed as digital images on a computer screen. Colour measurement and clinical diagnostic methods for medical digital images have been studied, but reproducing skin colour on a computer screen remains difficult. The inter-rater reliability of skin colour assessment scores was evaluated. In May 2014, 21 nurses who worked on a trauma ward in Japan participated in testing. Six digital images with different skin colours were used. Colours were scored from both the digital images and direct observation of the patients. The score from a digital image was defined as the test score, and its difference from the direct assessment score as the difference score. Intraclass correlation coefficients were calculated. Nurses' opinions were classified and summarised. The intraclass correlation coefficients for the test scores were fair. Although the intraclass correlation coefficients for the difference scores were poor, they improved to good when three images that might have contributed to poor reliability were excluded. Most nurses stated that it is difficult to assess skin colour in digital images; they did not think it could be a substitute for direct visual assessment. However, most nurses were in favour of including images in nursing progress notes. Although the inter-rater reliability was fairly high, the reliability of colour reproduction in digital images as indicated by the difference scores was poor. Nevertheless, nurses expect the incorporation of digital images in nursing progress notes to be useful. This gap between the reliability of digital colour reproduction and nurses' expectations towards it must be addressed. High inter-rater reliability for digital images in nursing progress notes was not observed. Assessments of future improvements in colour reproduction technologies are required. Further digitisation and visualisation of nursing records might pose challenges. © 2017 John Wiley & Sons Ltd.

  14. Using digital colour to increase the realistic appearance of SEM micrographs of bloodstains.

    PubMed

    Hortolà, Policarp

    2010-10-01

    Although micrographs from scanning electron microscopes (SEMs) are usually displayed in greyscale in the scientific-research literature, the colour resources provided by SEM-coupled image-acquiring systems and, subsidiarily, by free image-manipulation software deserve to be explored as a tool for colouring SEM micrographs of bloodstains. After greyscale SEM micrographs of a human blood smear (dark red to the naked eye) on grey chert were acquired, red-tone versions were produced manually using both the SEM-coupled image-acquiring system and free image-manipulation software, and thermal-tone versions were generated automatically using the SEM-coupled system. Red images obtained with the SEM-coupled system showed lower visual-discrimination capability than the other coloured images, whereas the red images generated with the free software conveyed more scopic information than those generated with the SEM-coupled system. The thermal-tone images, although further from the real sample colour than the red ones, not only increased realistic appearance over the greyscale images but also yielded the best visual-discrimination capability among all the coloured SEM micrographs, and markedly enhanced the relief effect over both the greyscale and the red images. The application of digital colour by means of the facilities provided by an SEM-coupled image-acquiring system or, when required, by free image-manipulation software is a user-friendly, quick and inexpensive way of obtaining coloured SEM micrographs of bloodstains that avoids sophisticated, time-consuming colouring procedures. Although this work focused on bloodstains, other monochromatic or quasi-monochromatic samples could probably also be given a more realistic appearance by colouring them with the simple methods used in this study.

  15. An RGB colour image steganography scheme using overlapping block-based pixel-value differencing

    PubMed Central

    Pal, Arup Kumar

    2017-01-01

    This paper presents a steganographic scheme based on the RGB colour cover image. The secret message bits are embedded into each colour pixel sequentially by the pixel-value differencing (PVD) technique. PVD basically works on two consecutive non-overlapping components; as a result, the straightforward conventional PVD technique is not applicable to embed the secret message bits into a colour pixel, since a colour pixel consists of three colour components, i.e. red, green and blue. Hence, in the proposed scheme, initially the three colour components are represented into two overlapping blocks like the combination of red and green colour components, while another one is the combination of green and blue colour components, respectively. Later, the PVD technique is employed on each block independently to embed the secret data. The two overlapping blocks are readjusted to attain the modified three colour components. The notion of overlapping blocks has improved the embedding capacity of the cover image. The scheme has been tested on a set of colour images and satisfactory results have been achieved in terms of embedding capacity and upholding the acceptable visual quality of the stego-image. PMID:28484623
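
    As a point of reference for the scheme above, the snippet below sketches the classic single-pair PVD embedding step (in the style of Wu and Tsai) on which such methods build; the range table, the bit-string interface and the omission of boundary handling are illustrative assumptions, and the paper's overlapping R-G/G-B block arrangement and readjustment are not reproduced here.

    ```python
    # Minimal sketch of classic pixel-value differencing (PVD) embedding on one
    # pair of 8-bit values. A common range table is assumed; pixel-overflow
    # checks at 0/255 are omitted for brevity.
    RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

    def embed_pair(p1: int, p2: int, bits: str) -> tuple[int, int, int]:
        """Embed the leading bits of `bits` into the difference of a pixel pair.

        Returns the new pair and the number of bits consumed.
        """
        d = p2 - p1
        lo, hi = next(r for r in RANGES if r[0] <= abs(d) <= r[1])
        n = (hi - lo + 1).bit_length() - 1          # capacity of this range, in bits
        value = int(bits[:n].ljust(n, "0"), 2)      # secret chunk as an integer
        new_d = (lo + value) if d >= 0 else -(lo + value)
        # Distribute the change of difference over the two pixels.
        m = new_d - d
        p1_new = p1 - (m // 2 if d % 2 == 0 else (m + 1) // 2)
        p2_new = p2 + ((m + 1) // 2 if d % 2 == 0 else m // 2)
        return p1_new, p2_new, n

    p1, p2, used = embed_pair(120, 135, "1011")
    print(p1, p2, used, abs(p2 - p1))   # the new difference encodes the embedded bits
    ```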

  16. Computer aided solution for segmenting the neuron line in hippocampal microscope images

    NASA Astrophysics Data System (ADS)

    Albaidhani, Tahseen; Jassim, Sabah; Al-Assam, Hisham

    2017-05-01

    The hippocampus component of the brain is known to be responsible for memory and spatial navigation. Its functionality depends on the status of the different blood vessels within the hippocampus and is severely impaired in Alzheimer's disease as a result of the blockage of an increasing number of blood vessels by accumulation of amyloid-beta (Aβ) protein. Accurate counting of blood vessels within the hippocampus of the mouse brain, from microscopic images, is an active research area for the understanding of Alzheimer's disease. Here, we report our work on automatic detection of the region of interest, i.e. the region in which blood vessels are located. This area typically falls between the hippocampus edge and the line of neurons within the hippocampus. This paper proposes a new method to detect and exclude the neuron line in order to improve the accuracy of blood vessel counting, because some neurons on it might lead to false positive cases as they look like blood vessels. Our proposed solution is based on a trainable segmentation approach combined with morphological operations, taking into account variation in colour, intensity values, and image texture. Experiments on a sufficient number of microscopy images of mouse brain demonstrate the effectiveness of the developed solution in preparation for blood vessel counting.

  17. [Digital processing and evaluation of ultrasound images].

    PubMed

    Borchers, J; Klews, P M

    1993-10-01

    With the help of workstations and PCs, on-site image processing has become possible. If the images are not available in digital form, the video signal has to be A/D converted. In the case of colour images, the colour channels R (red), G (green) and B (blue) have to be digitized separately. "Truecolour" imaging calls for an 8-bit resolution per channel, leading to 24 bits per pixel. Out of a pool of 2^24 possible values, only the 128 grey values and the 64 shades each of red and blue needed for a colour-coded ultrasound image have to be isolated. Digital images can be changed and evaluated with the help of readily available image evaluation programmes. It is mandatory that during image manipulation the grey-scale and colour pixels and their LUTs (look-up tables) are processed separately. Using relatively simple LUT manipulations, astonishing image improvements are possible. The application of simple mathematical operations can lead to completely new clinical results. For example, by subtracting two temporally consecutive colour flow images and applying special LUT operations, local acceleration of blood flow can be visualized (Colour Acceleration Imaging).
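
    The two digital operations described above, a look-up-table remapping applied only to the grey-scale component and a subtraction of temporally consecutive colour-flow frames, can be sketched in a few lines; the gamma-shaped LUT, the array layout and the synthetic velocity data are assumptions for illustration only.

    ```python
    import numpy as np

    def apply_gray_lut(gray: np.ndarray, gamma: float = 0.7) -> np.ndarray:
        """Remap 8-bit grey values through a gamma-shaped 256-entry LUT."""
        lut = (np.linspace(0.0, 1.0, 256) ** gamma * 255).astype(np.uint8)
        return lut[gray]

    def acceleration_map(flow_t0: np.ndarray, flow_t1: np.ndarray, dt: float) -> np.ndarray:
        """Difference of two consecutive colour-flow velocity frames over dt."""
        return (flow_t1.astype(np.float64) - flow_t0.astype(np.float64)) / dt

    gray_frame = np.random.default_rng(5).integers(0, 256, (128, 128), dtype=np.uint8)
    enhanced = apply_gray_lut(gray_frame)                               # LUT manipulation
    flow_a = np.random.default_rng(6).normal(0.0, 10.0, (128, 128))     # velocity frame, cm/s
    flow_b = flow_a + np.random.default_rng(7).normal(0.0, 2.0, (128, 128))
    print(enhanced.dtype, acceleration_map(flow_a, flow_b, dt=0.04).std())
    ```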

  18. Robust traffic sign detection using fuzzy shape recognizer

    NASA Astrophysics Data System (ADS)

    Li, Lunbo; Li, Jun; Sun, Jianhong

    2009-10-01

    A novel fuzzy approach for the detection of traffic signs in natural environments is presented. More than 3000 road images were collected under different weather conditions by a digital camera, and used for testing this approach. Every RGB image was converted into HSV colour space, and segmented by the hue and saturation thresholds. A symmetrical detector was used to extract the local features of the regions of interest (ROI), and the shape of ROI was determined by a fuzzy shape recognizer which invoked a set of fuzzy rules. The experimental results show that the proposed algorithm is translation, rotation and scaling invariant, and gives reliable shape recognition in complex traffic scenes where clustering and partial occlusion normally occur.
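
    The colour segmentation stage described above, conversion to HSV followed by hue and saturation thresholding and simple clean-up, might look roughly like the following OpenCV sketch; the threshold values, the focus on red-rimmed signs and the file name are assumptions for illustration, not the paper's parameters.

    ```python
    import cv2
    import numpy as np

    def red_sign_mask(bgr: np.ndarray) -> np.ndarray:
        """Segment candidate red traffic-sign regions by hue/saturation thresholds."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        # Red hue wraps around 0 on OpenCV's 0-179 hue scale, so two bands are combined.
        lower = cv2.inRange(hsv, (0, 80, 40), (10, 255, 255))
        upper = cv2.inRange(hsv, (170, 80, 40), (179, 255, 255))
        mask = cv2.bitwise_or(lower, upper)
        # Remove small speckles before extracting regions of interest.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    image = cv2.imread("road_scene.jpg")           # hypothetical input image
    if image is not None:
        rois, _ = cv2.findContours(red_sign_mask(image),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        print(f"{len(rois)} candidate regions found")
    ```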

  19. Image Simulation and Assessment of the Colour and Spatial Capabilities of the Colour and Stereo Surface Imaging System (CaSSIS) on the ExoMars Trace Gas Orbiter

    NASA Astrophysics Data System (ADS)

    Tornabene, Livio L.; Seelos, Frank P.; Pommerol, Antoine; Thomas, Nicholas; Caudill, C. M.; Becerra, Patricio; Bridges, John C.; Byrne, Shane; Cardinale, Marco; Chojnacki, Matthew; Conway, Susan J.; Cremonese, Gabriele; Dundas, Colin M.; El-Maarry, M. R.; Fernando, Jennifer; Hansen, Candice J.; Hansen, Kayle; Harrison, Tanya N.; Henson, Rachel; Marinangeli, Lucia; McEwen, Alfred S.; Pajola, Maurizio; Sutton, Sarah S.; Wray, James J.

    2018-02-01

    This study aims to assess the spatial and visible/near-infrared (VNIR) colour/spectral capabilities of the 4-band Colour and Stereo Surface Imaging System (CaSSIS) aboard the ExoMars 2016 Trace Gas Orbiter (TGO). The instrument response functions of the CaSSIS imager were used to resample spectral libraries and modelled spectra, and to construct spectrally (i.e., in I/F space) and spatially consistent simulated CaSSIS image cubes of various key sites of interest for ongoing scientific investigations on Mars. Coordinated datasets from the Mars Reconnaissance Orbiter (MRO) are ideal for, and were specifically used in, simulating CaSSIS. The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) provides colour information, while the Context Imager (CTX), and in a few cases the High-Resolution Imaging Science Experiment (HiRISE), provides the complementary spatial information at the resampled CaSSIS unbinned/unsummed pixel resolution (4.6 m/pixel from a 400-km altitude). The methodology used herein employs a Gram-Schmidt spectral sharpening algorithm to combine the ~18-36 m/pixel CRISM-derived CaSSIS colours with I/F images primarily derived from oversampled CTX images. One hundred and eighty-one simulated CaSSIS 4-colour image cubes (at 18-36 m/pixel) were generated (including one of Phobos) based on CRISM data. From these, thirty-three "fully"-simulated image cubes of thirty unique locations on Mars (i.e., with 4 colour bands at 4.6 m/pixel) were made. All simulated image cubes were used to test the colour capabilities of CaSSIS by producing standard colour RGB images, colour band ratio composites (CBRCs) and spectral parameters. Simulated CaSSIS CBRCs demonstrated that CaSSIS will be able to readily isolate signatures related to ferrous (Fe2+) and ferric (Fe3+) iron-bearing deposits on the surface of Mars, ices and atmospheric phenomena. Despite the lower spatial resolution of CaSSIS when compared to HiRISE, the results of this work demonstrate that CaSSIS will not only complement HiRISE-scale studies of various geological and seasonal phenomena, it will also enhance them by providing additional colour and geologic context through its wider and longer full-colour coverage (~9.4 × 50 km), and its increased sensitivity to iron-bearing materials from its two IR bands (RED and NIR). In a few examples, subtle surface changes that were not easily detected by HiRISE were identified in the simulated CaSSIS images. This study also demonstrates the utility of the Gram-Schmidt spectral pan-sharpening technique for extending VNIR colour/spectral capabilities from a lower spatial resolution colour/spectral dataset to a higher resolution single-band or panchromatic greyscale image. These higher resolution colour products (simulated CaSSIS or otherwise) are useful as a means to extend both geologic context and mapping of datasets with coarser spatial resolutions. The results of this study indicate that the TGO mission objectives, as well as the instrument-specific mission objectives, will be achievable with CaSSIS.

  20. Optimisation of colour schemes to accurately display mass spectrometry imaging data based on human colour perception.

    PubMed

    Race, Alan M; Bunch, Josephine

    2015-03-01

    The choice of colour scheme used to present data can have a dramatic effect on the perceived structure present within the data. This is of particular significance in mass spectrometry imaging (MSI), where ion images that provide 2D distributions of a wide range of analytes are used to draw conclusions about the observed system. Commonly employed colour schemes are generally suboptimal for providing an accurate representation of the maximum amount of data. Rainbow-based colour schemes are extremely popular within the community, but they introduce well-documented artefacts which can be actively misleading in the interpretation of the data. In this article, we consider the suitability of colour schemes and composite image formation found in MSI literature in the context of human colour perception. We also discuss recommendations of rules for colour scheme selection for ion composites and multivariate analysis techniques such as principal component analysis (PCA).

  1. Land-based crop phenotyping by image analysis: consistent canopy characterization from inconsistent field illumination.

    PubMed

    Chopin, Joshua; Kumar, Pankaj; Miklavcic, Stanley J

    2018-01-01

    One of the main challenges associated with image-based field phenotyping is the variability of illumination. During a single day's imaging session, or between different sessions on different days, the sun moves in and out of cloud cover and has varying intensity. How is one to know from consecutive images alone if a plant has become darker over time, or if the weather conditions have simply changed from clear to overcast? This is a significant problem to address as colour is an important phenotypic trait that can be measured automatically from images. In this work we use an industry standard colour checker to balance the colour in images within and across every day of a field trial conducted over four months in 2016. By ensuring that the colour checker is present in every image we are afforded a 'ground truth' to correct for varying illumination conditions across images. We employ a least squares approach to fit a quadratic model for correcting RGB values of an image in such a way that the observed values of the colour checker tiles align with their true values after the transformation. The proposed method is successful in reducing the error between observed and reference colour chart values in all images. Furthermore, the standard deviation of mean canopy colour across multiple days is reduced significantly after colour correction is applied. Finally, we use a number of examples to demonstrate the usefulness of accurate colour measurements in recording phenotypic traits and analysing variation among varieties and treatments.
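
    The least-squares quadratic correction described above can be sketched as fitting, for each output channel, a polynomial in the observed R, G and B values of the colour-checker tiles. The exact set of quadratic terms is not stated in the abstract, so the design matrix below is one plausible choice rather than the authors' model; the synthetic distortion is used only to check that the fit recovers it.

    ```python
    import numpy as np

    def fit_quadratic_colour_model(observed: np.ndarray, reference: np.ndarray) -> np.ndarray:
        """Fit a quadratic colour correction by least squares.

        `observed` and `reference` are (N, 3) arrays of RGB values for the N
        colour-checker tiles. Each corrected channel is modelled as a linear
        combination of [1, R, G, B, R^2, G^2, B^2, RG, RB, GB].
        """
        R, G, B = observed[:, 0], observed[:, 1], observed[:, 2]
        design = np.column_stack([np.ones_like(R), R, G, B,
                                  R**2, G**2, B**2, R*G, R*B, G*B])
        coeffs, *_ = np.linalg.lstsq(design, reference, rcond=None)   # shape (10, 3)
        return coeffs

    def apply_colour_model(image: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
        """Apply the fitted model to an (H, W, 3) image in [0, 1]."""
        R, G, B = image[..., 0], image[..., 1], image[..., 2]
        design = np.stack([np.ones_like(R), R, G, B,
                           R**2, G**2, B**2, R*G, R*B, G*B], axis=-1)
        return np.clip(design @ coeffs, 0.0, 1.0)

    # Synthetic check: recover a known distortion from 24 'tile' samples.
    rng = np.random.default_rng(2)
    true_tiles = rng.uniform(size=(24, 3))
    distorted = 0.8 * true_tiles**2 + 0.15 * true_tiles + 0.02
    model = fit_quadratic_colour_model(distorted, true_tiles)
    print(np.abs(apply_colour_model(distorted[None], model)[0] - true_tiles).max())
    ```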

  2. Land Cover Analysis by Using Pixel-Based and Object-Based Image Classification Method in Bogor

    NASA Astrophysics Data System (ADS)

    Amalisana, Birohmatin; Rokhmatullah; Hernina, Revi

    2017-12-01

    The advantage of image classification is that it provides information about the earth's surface, such as land cover and its changes over time. Nowadays, pixel-based image classification is commonly performed with a variety of algorithms such as minimum distance, parallelepiped, maximum likelihood, and Mahalanobis distance. On the other hand, land cover classification can also be obtained with object-based image classification, which uses image segmentation based on parameters such as scale, form, colour, smoothness and compactness. This research aims to compare land cover classification results and change detection between the parallelepiped pixel-based and object-based classification methods. The study area is Bogor, observed over a 20-year period from 1996 to 2016. This region is known for urban areas that change continuously due to rapid development, so time-series land cover information for the region is of particular interest.

  3. Printing colour at the optical diffraction limit.

    PubMed

    Kumar, Karthik; Duan, Huigao; Hegde, Ravi S; Koh, Samuel C W; Wei, Jennifer N; Yang, Joel K W

    2012-09-01

    The highest possible resolution for printed colour images is determined by the diffraction limit of visible light. To achieve this limit, individual colour elements (or pixels) with a pitch of 250 nm are required, translating into printed images at a resolution of ∼100,000 dots per inch (d.p.i.). However, methods for dispensing multiple colourants or fabricating structural colour through plasmonic structures have insufficient resolution and limited scalability. Here, we present a non-colourant method that achieves bright-field colour prints with resolutions up to the optical diffraction limit. Colour information is encoded in the dimensional parameters of metal nanostructures, so that tuning their plasmon resonance determines the colours of the individual pixels. Our colour-mapping strategy produces images with both sharp colour changes and fine tonal variations, is amenable to large-volume colour printing via nanoimprint lithography, and could be useful in making microimages for security, steganography, nanoscale optical filters and high-density spectrally encoded optical data storage.

  4. Colour in digital pathology: a review.

    PubMed

    Clarke, Emily L; Treanor, Darren

    2017-01-01

    Colour is central to the practice of pathology because of the use of coloured histochemical and immunohistochemical stains to visualize tissue features. Our reliance upon histochemical stains and light microscopy has evolved alongside a wide variation in slide colour, with little investigation into the implications of colour variation. However, the introduction of the digital microscope and whole-slide imaging has highlighted the need for further understanding and control of colour. This is because the digitization process itself introduces further colour variation which may affect diagnosis, and image analysis algorithms often use colour or intensity measures to detect or measure tissue features. The US Food and Drug Administration have released recent guidance stating the need to develop a method of controlling colour reproduction throughout the digitization process in whole-slide imaging for primary diagnostic use. This comprehensive review introduces applied basic colour physics and colour interpretation by the human visual system, before discussing the importance of colour in pathology. The process of colour calibration and its application to pathology are also included, as well as a summary of the current guidelines and recommendations regarding colour in digital pathology. © 2016 John Wiley & Sons Ltd.

  5. Applying colour science in colour design

    NASA Astrophysics Data System (ADS)

    Luo, Ming Ronnier

    2006-06-01

    Although colour science has been widely used in a variety of industries over the years, it has not been fully explored in the field of product design. This paper will initially introduce the three main application fields of colour science: colour specification, colour-difference evaluation and colour appearance modelling. By integrating these advanced colour technologies together with modern colour imaging devices such as display, camera, scanner and printer, some computer systems have been recently developed to assist designers for designing colour palettes through colour selection by means of a number of widely used colour order systems, for creating harmonised colour schemes via a categorical colour system, for generating emotion colours using various colour emotional scales and for facilitating colour naming via a colour-name library. All systems are also capable of providing accurate colour representation on displays and output to different imaging devices such as printers.

  6. Image simulation and assessment of the colour and spatial capabilities of the Colour and Stereo Surface Imaging System (CaSSIS) on the ExoMars Trace Gas Orbiter

    USGS Publications Warehouse

    Tornabene, Livio L.; Seelos, Frank P.; Pommerol, Antoine; Thomas, Nicolas; Caudill, Christy M.; Becerra, Patricio; Bridges, John C.; Byrne, Shane; Cardinale, Marco; Chojnacki, Matthew; Conway, Susan J.; Cremonese, Gabriele; Dundas, Colin M.; El-Maarry, M. R.; Fernando, Jennifer; Hansen, Candice J.; Hansen, Kayle; Harrison, Tanya N.; Henson, Rachel; Marinangeli, Lucia; McEwen, Alfred S.; Pajola, Maurizio; Sutton, Sarah S.; Wray, James J.

    2018-01-01

    This study aims to assess the spatial and visible/near-infrared (VNIR) colour/spectral capabilities of the 4-band Colour and Stereo Surface Imaging System (CaSSIS) aboard the ExoMars 2016 Trace Gas Orbiter (TGO). The instrument response functions of the CaSSIS imager were used to resample spectral libraries and modelled spectra, and to construct spectrally (i.e., in I/F space) and spatially consistent simulated CaSSIS image cubes of various key sites of interest for ongoing scientific investigations on Mars. Coordinated datasets from the Mars Reconnaissance Orbiter (MRO) are ideal for, and were specifically used in, simulating CaSSIS. The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) provides colour information, while the Context Imager (CTX), and in a few cases the High-Resolution Imaging Science Experiment (HiRISE), provides the complementary spatial information at the resampled CaSSIS unbinned/unsummed pixel resolution (4.6 m/pixel from a 400-km altitude). The methodology used herein employs a Gram-Schmidt spectral sharpening algorithm to combine the ∼18–36 m/pixel CRISM-derived CaSSIS colours with I/F images primarily derived from oversampled CTX images. One hundred and eighty-one simulated CaSSIS 4-colour image cubes (at 18–36 m/pixel) were generated (including one of Phobos) based on CRISM data. From these, thirty-three "fully"-simulated image cubes of thirty unique locations on Mars (i.e., with 4 colour bands at 4.6 m/pixel) were made. All simulated image cubes were used to test the colour capabilities of CaSSIS by producing standard colour RGB images, colour band ratio composites (CBRCs) and spectral parameters. Simulated CaSSIS CBRCs demonstrated that CaSSIS will be able to readily isolate signatures related to ferrous (Fe2+) and ferric (Fe3+) iron-bearing deposits on the surface of Mars, ices and atmospheric phenomena. Despite the lower spatial resolution of CaSSIS when compared to HiRISE, the results of this work demonstrate that CaSSIS will not only complement HiRISE-scale studies of various geological and seasonal phenomena, it will also enhance them by providing additional colour and geologic context through its wider and longer full-colour coverage (∼9.4 × 50 km), and its increased sensitivity to iron-bearing materials from its two IR bands (RED and NIR). In a few examples, subtle surface changes that were not easily detected by HiRISE were identified in the simulated CaSSIS images. This study also demonstrates the utility of the Gram-Schmidt spectral pan-sharpening technique for extending VNIR colour/spectral capabilities from a lower spatial resolution colour/spectral dataset to a higher resolution single-band or panchromatic greyscale image. These higher resolution colour products (simulated CaSSIS or otherwise) are useful as a means to extend both geologic context and mapping of datasets with coarser spatial resolutions. The results of this study indicate that the TGO mission objectives, as well as the instrument-specific mission objectives, will be achievable with CaSSIS.

  7. Automated colour identification in melanocytic lesions.

    PubMed

    Sabbaghi, S; Aldeen, M; Garnavi, R; Varigos, G; Doliantis, C; Nicolopoulos, J

    2015-08-01

    Colour information plays an important role in classifying skin lesions. However, colour identification by dermatologists can be very subjective, leading to cases of misdiagnosis. Therefore, a computer-assisted system for quantitative colour identification is highly desirable for dermatologists to use. Although numerous colour detection systems have been developed, few studies have focused on imitating the human visual perception of colours in melanoma applications. In this paper we propose a new methodology based on the QuadTree decomposition technique for automatic colour identification in dermoscopy images. Our approach mimics the human perception of lesion colours. The proposed method is trained on a set of 47 images from the NIH dataset and applied to a test set of 190 skin lesions obtained from the PH2 dataset. The results of our proposed method are compared with a recently reported colour identification method using the same dataset. The effectiveness of our method in detecting colours in dermoscopy images is demonstrated by an accuracy of approximately 93% when the CIELab colour space is used.

  8. Colour analysis and verification of CCTV images under different lighting conditions

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; MacLennan-Brown, K.; Tighe, J. F.; Cohen, N.; Triantaphillidou, S.; MacDonald, L. W.

    2008-01-01

    Colour information is not faithfully maintained by a CCTV imaging chain. Since colour can play an important role in identifying objects it is beneficial to be able to account accurately for changes to colour introduced by components in the chain. With this information it will be possible for law enforcement agencies and others to work back along the imaging chain to extract accurate colour information from CCTV recordings. A typical CCTV system has an imaging chain that may consist of scene, camera, compression, recording media and display. The response of each of these stages to colour scene information was characterised by measuring its response to a known input. The main variables that affect colour within a scene are illumination and the colour, orientation and texture of objects. The effects of illumination on the appearance of colour of a variety of test targets were tested using laboratory-based lighting, street lighting, car headlights and artificial daylight. A range of typical cameras used in CCTV applications, common compression schemes and representative displays were also characterised.

  9. Improved image retrieval based on fuzzy colour feature vector

    NASA Astrophysics Data System (ADS)

    Ben-Ahmeida, Ahlam M.; Ben Sasi, Ahmed Y.

    2013-03-01

    Content-based image retrieval (CBIR) is an image indexing technique that retrieves images from a database automatically based on visual content such as colour, texture, and shape. This paper discusses a CBIR method based on colour feature extraction and similarity checking. The query image and all database images are divided into pieces, the features of each part are extracted separately, and the corresponding portions are compared in order to increase retrieval accuracy. The proposed approach is based on fuzzy sets to overcome the curse of dimensionality: the colour contribution of each pixel is associated with all the bins in the histogram using fuzzy-set membership functions. As a result, the Fuzzy Colour Histogram (FCH) outperformed the Conventional Colour Histogram (CCH) in image retrieval, returning results faster because images are represented as signatures that occupy less memory, depending on the number of divisions. The results also showed that the FCH is less sensitive and more robust to brightness changes than the CCH, with better retrieval recall values.
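
    The core idea of the fuzzy colour histogram, spreading each pixel's contribution over neighbouring bins through membership functions rather than voting for a single bin, can be sketched as follows; the triangular memberships, the bin count and the loop-based implementation are assumptions for illustration, not the paper's exact formulation.

    ```python
    import numpy as np

    def fuzzy_colour_histogram(image: np.ndarray, bins_per_channel: int = 8) -> np.ndarray:
        """Build a fuzzy colour histogram for an (H, W, 3) image in [0, 1].

        Every pixel spreads its unit mass over the two nearest bins of each
        channel with triangular membership weights, so the histogram varies
        smoothly with small colour changes.
        """
        hist = np.zeros((bins_per_channel,) * 3)
        centres = (np.arange(bins_per_channel) + 0.5) / bins_per_channel
        pixels = image.reshape(-1, 3)
        for p in pixels:
            idx_w = []
            for v in p:
                # Locate the two neighbouring bin centres and their weights.
                i = int(np.clip(np.searchsorted(centres, v) - 1, 0, bins_per_channel - 2))
                w_hi = (v - centres[i]) / (centres[i + 1] - centres[i])
                w_hi = float(np.clip(w_hi, 0.0, 1.0))
                idx_w.append([(i, 1.0 - w_hi), (i + 1, w_hi)])
            # Distribute the pixel's mass over the 2x2x2 neighbouring bins.
            for ir, wr in idx_w[0]:
                for ig, wg in idx_w[1]:
                    for ib, wb in idx_w[2]:
                        hist[ir, ig, ib] += wr * wg * wb
        return hist / pixels.shape[0]

    h = fuzzy_colour_histogram(np.random.default_rng(3).uniform(size=(16, 16, 3)))
    print(h.sum())   # ~1.0: the signature behaves as a soft probability distribution
    ```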

  10. Fundus autofluorescence and colour fundus imaging compared during telemedicine screening in patients with diabetes.

    PubMed

    Kolomeyer, Anton M; Baumrind, Benjamin R; Szirth, Bernard C; Shahid, Khadija; Khouri, Albert S

    2013-06-01

    We investigated the use of fundus autofluorescence (FAF) imaging in screening the eyes of patients with diabetes. Images were obtained from 50 patients with type 2 diabetes undergoing telemedicine screening with colour fundus imaging. The colour and FAF images were obtained with a 15.1 megapixel non-mydriatic retinal camera. Colour and FAF images were compared for pathology seen in nonproliferative and proliferative diabetic retinopathy (NPDR and PDR, respectively). A qualitative assessment was made of the ease of detecting early retinopathy changes and the extent of existing retinopathy. The mean age of the patients was 47 years, most were male (82%) and most were African American (68%). Their mean visual acuity was 20/45 and their mean intraocular pressure was 14.3 mm Hg. Thirty-eight eyes (76%) did not show any diabetic retinopathy changes on colour or FAF imaging. Seven patients (14%) met the criteria for NPDR and five (10%) for severe NPDR or PDR. The most common findings were microaneurysms, hard exudates and intra-retinal haemorrhages (IRH) (n = 6 for each). IRH, microaneurysms and chorioretinal scars were more easily visible on FAF images. Hard exudates, pre-retinal haemorrhage and fibrosis, macular oedema and Hollenhorst plaque were easier to identify on colour photographs. The value of FAF imaging as a complementary technique to colour fundus imaging in detecting diabetic retinopathy during ocular screening warrants further investigation.

  11. The influence of the microscope lamp filament colour temperature on the process of digital images of histological slides acquisition standardization.

    PubMed

    Korzynska, Anna; Roszkowiak, Lukasz; Pijanowska, Dorota; Kozlowski, Wojciech; Markiewicz, Tomasz

    2014-01-01

    The aim of this study is to compare digital images of tissue biopsies captured with an optical microscope under bright-field illumination and various light conditions. The range of colour variation in tissue samples immunohistochemically stained with 3,3'-diaminobenzidine and haematoxylin is immense and stems from various sources. One of them is an inadequate match between the camera's white balance setting and the colour temperature of the microscope's light. Although this type of error can easily be handled at the image acquisition stage, it can also be eliminated with colour adjustment algorithms. The dependence of colour variation on the microscope's light temperature and the camera settings is examined here as introductory research for automatic colour standardization. Six fields of view with empty space among the tissue samples were selected for analysis. Each field of view was acquired 225 times with various microscope light temperatures and camera white balance settings. Fourteen randomly chosen images were corrected and compared with the reference image using the following methods: mean square error, structural similarity (SSIM), and visual assessment by a viewer. For two types of background and two types of object, statistical image descriptors were calculated: the range, median, mean and standard deviation of chromaticity on the a and b channels of the CIELab colour space, the luminance L, and the local colour variability over object-specific areas. The results were averaged over the six images acquired under the same light conditions and camera settings for each sample. Analysis of the results leads to the following conclusions: (1) images collected with the white balance setting adjusted to the light colour temperature cluster in a certain area of chromatic space; (2) white balance correction of images collected with camera settings not matched to the light temperature moves the image descriptors into the proper chromatic space but simultaneously changes the luminance. Thus, image unification in the sense of colour fidelity can be handled in a separate introductory stage before automatic image analysis.

  12. Global Interior Robot Localisation by a Colour Content Image Retrieval System

    NASA Astrophysics Data System (ADS)

    Chaari, A.; Lelandais, S.; Montagne, C.; Ahmed, M. Ben

    2007-12-01

    We propose a new global localisation approach to determine a coarse position of a mobile robot in a structured indoor space using colour-based image retrieval techniques. We use an original method of colour quantisation based on the baker's transformation to extract a two-dimensional colour palette that combines spatial and vicinity-related information with the colourimetric content of the original image. We devise several retrieval approaches leading to a specific similarity measure that integrates the spatial organisation of colours in the palette. The baker's transformation provides a quantisation of the image into a space where colours that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image. The similarity measure, in turn, provides partial invariance to translation, small changes in viewpoint, and scale. In addition, we developed a hierarchical search module based on the logical classification of images by room. This hierarchical module reduces the indoor search space and improves the system's performance. Results are then compared with those obtained using colour histograms with several similarity measures. In this paper, we focus on colour-based features to describe indoor images. A finalised system must obviously integrate other types of signature, such as shape and texture.

  13. Optimality of the basic colour categories for classification

    PubMed Central

    Griffin, Lewis D

    2005-01-01

    Categorization of colour has been widely studied as a window into human language and cognition, and quite separately has been used pragmatically in image-database retrieval systems. This suggests the hypothesis that the best category system for pragmatic purposes coincides with human categories (i.e. the basic colours). We have tested this hypothesis by assessing the performance of different category systems in a machine-vision task. The task was the identification of the odd-one-out from triples of images obtained using a web-based image-search service. In each triple, two of the images had been retrieved using the same search term, the other a different term. The terms were simple concrete nouns. The results were as follows: (i) the odd-one-out task can be performed better than chance using colour alone; (ii) basic colour categorization performs better than random systems of categories; (iii) a category system that performs better than the basic colours could not be found; and (iv) it is not just the general layout of the basic colours that is important, but also the detail. We conclude that (i) the results support the plausibility of an explanation for the basic colours as a result of a pressure-to-optimality and (ii) the basic colours are good categories for machine vision image-retrieval systems. PMID:16849219

  14. Statistical colour models: an automated digital image analysis method for quantification of histological biomarkers.

    PubMed

    Shu, Jie; Dolman, G E; Duan, Jiang; Qiu, Guoping; Ilyas, Mohammad

    2016-04-27

    Colour is the most important feature used in quantitative immunohistochemistry (IHC) image analysis; IHC is used to provide information relating to aetiology and to confirm malignancy. Statistical modelling is a technique widely used for colour detection in computer vision. We have developed a statistical colour-detection model applicable to the detection of stain colour in digital IHC images. The model was first trained on a large set of colour pixels collected semi-automatically. To speed up the training and detection processes, we removed the luminance (Y) channel of the YCbCr colour space and chose 128 histogram bins, which was found to be the optimal number. A maximum likelihood classifier is used to classify pixels in digital slides into positively or negatively stained pixels automatically. The model-based tool was developed within ImageJ to quantify targets identified using IHC and histochemistry. The evaluation compared the computer model with human assessment. Several large datasets were prepared and obtained from human oesophageal cancer, colon cancer and liver cirrhosis with different colour stains. Experimental results demonstrated that the model-based tool achieves more accurate results than colour deconvolution and the CMYK model in the detection of brown colour, and is comparable to colour deconvolution in the detection of pink colour. We have also demonstrated that the proposed model shows little inter-dataset variation. A robust and effective statistical model is introduced in this paper. The model-based interactive tool in ImageJ, which can create a visual representation of the statistical model and detect a specified colour automatically, is easy to use and available freely at http://rsb.info.nih.gov/ij/plugins/ihc-toolbox/index.html . Testing of the tool by different users showed only minor inter-observer variation in results.
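
    The histogram-based maximum likelihood classification described above, with the luminance channel removed and 128 bins per chroma axis, might be sketched as follows; the BT.601 chroma conversion, the smoothing constant and the toy brown/blue training pixels are assumptions for illustration, not the published model or its training data.

    ```python
    import numpy as np

    BINS = 128  # the bin count reported as optimal in the paper

    def rgb_to_cbcr(rgb: np.ndarray) -> np.ndarray:
        """Return the chroma (Cb, Cr) components in [0, 1], discarding luminance Y."""
        R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        cb = -0.169 * R - 0.331 * G + 0.500 * B + 0.5
        cr = 0.500 * R - 0.419 * G - 0.081 * B + 0.5
        return np.stack([cb, cr], axis=-1)

    def chroma_histogram(samples_rgb: np.ndarray) -> np.ndarray:
        """Normalised 2D Cb-Cr histogram of (N, 3) training pixels in [0, 1]."""
        cbcr = rgb_to_cbcr(samples_rgb)
        hist, _, _ = np.histogram2d(cbcr[:, 0], cbcr[:, 1],
                                    bins=BINS, range=[[0, 1], [0, 1]])
        return (hist + 1e-9) / hist.sum()          # smoothed likelihoods

    def classify(pixels_rgb, pos_hist, neg_hist):
        """Maximum likelihood label (True = positively stained) per pixel."""
        cbcr = rgb_to_cbcr(pixels_rgb)
        idx = np.clip((cbcr * BINS).astype(int), 0, BINS - 1)
        return pos_hist[idx[..., 0], idx[..., 1]] > neg_hist[idx[..., 0], idx[..., 1]]

    # Toy training data: brownish (DAB-like) vs. bluish (haematoxylin-like) pixels.
    rng = np.random.default_rng(4)
    brown = np.clip(rng.normal([0.55, 0.35, 0.2], 0.05, (5000, 3)), 0, 1)
    blue = np.clip(rng.normal([0.3, 0.35, 0.6], 0.05, (5000, 3)), 0, 1)
    labels = classify(np.vstack([brown[:5], blue[:5]]),
                      chroma_histogram(brown), chroma_histogram(blue))
    print(labels)   # expected: first five True, last five False
    ```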

  15. Organic-on-silicon complementary metal-oxide-semiconductor colour image sensors.

    PubMed

    Lim, Seon-Jeong; Leem, Dong-Seok; Park, Kyung-Bae; Kim, Kyu-Sik; Sul, Sangchul; Na, Kyoungwon; Lee, Gae Hwang; Heo, Chul-Joon; Lee, Kwang-Hee; Bulliard, Xavier; Satoh, Ryu-Ichi; Yagi, Tadao; Ro, Takkyun; Im, Dongmo; Jung, Jungkyu; Lee, Myungwon; Lee, Tae-Yon; Han, Moon Gyu; Jin, Yong Wan; Lee, Sangyoon

    2015-01-12

    Complementary metal-oxide-semiconductor (CMOS) colour image sensors are representative examples of light-detection devices. To achieve extremely high resolutions, the pixel sizes of the CMOS image sensors must be reduced to less than a micron, which in turn significantly limits the number of photons that can be captured by each pixel using silicon (Si)-based technology (i.e., this reduction in pixel size results in a loss of sensitivity). Here, we demonstrate a novel and efficient method of increasing the sensitivity and resolution of the CMOS image sensors by superposing an organic photodiode (OPD) onto a CMOS circuit with Si photodiodes, which consequently doubles the light-input surface area of each pixel. To realise this concept, we developed organic semiconductor materials with absorption properties selective to green light and successfully fabricated highly efficient green-light-sensitive OPDs without colour filters. We found that such a top light-receiving OPD, which is selective to specific green wavelengths, demonstrates great potential when combined with a newly designed Si-based CMOS circuit containing only blue and red colour filters. To demonstrate the effectiveness of this state-of-the-art hybrid colour image sensor, we acquired a real full-colour image using a camera that contained the organic-on-Si hybrid CMOS colour image sensor.

  16. Organic-on-silicon complementary metal–oxide–semiconductor colour image sensors

    PubMed Central

    Lim, Seon-Jeong; Leem, Dong-Seok; Park, Kyung-Bae; Kim, Kyu-Sik; Sul, Sangchul; Na, Kyoungwon; Lee, Gae Hwang; Heo, Chul-Joon; Lee, Kwang-Hee; Bulliard, Xavier; Satoh, Ryu-Ichi; Yagi, Tadao; Ro, Takkyun; Im, Dongmo; Jung, Jungkyu; Lee, Myungwon; Lee, Tae-Yon; Han, Moon Gyu; Jin, Yong Wan; Lee, Sangyoon

    2015-01-01

    Complementary metal–oxide–semiconductor (CMOS) colour image sensors are representative examples of light-detection devices. To achieve extremely high resolutions, the pixel sizes of the CMOS image sensors must be reduced to less than a micron, which in turn significantly limits the number of photons that can be captured by each pixel using silicon (Si)-based technology (i.e., this reduction in pixel size results in a loss of sensitivity). Here, we demonstrate a novel and efficient method of increasing the sensitivity and resolution of the CMOS image sensors by superposing an organic photodiode (OPD) onto a CMOS circuit with Si photodiodes, which consequently doubles the light-input surface area of each pixel. To realise this concept, we developed organic semiconductor materials with absorption properties selective to green light and successfully fabricated highly efficient green-light-sensitive OPDs without colour filters. We found that such a top light-receiving OPD, which is selective to specific green wavelengths, demonstrates great potential when combined with a newly designed Si-based CMOS circuit containing only blue and red colour filters. To demonstrate the effectiveness of this state-of-the-art hybrid colour image sensor, we acquired a real full-colour image using a camera that contained the organic-on-Si hybrid CMOS colour image sensor. PMID:25578322

  17. Omega-3 chicken egg detection system using a mobile-based image processing segmentation method

    NASA Astrophysics Data System (ADS)

    Nurhayati, Oky Dwi; Kurniawan Teguh, M.; Cintya Amalia, P.

    2017-02-01

    An omega-3 chicken egg is a chicken egg produced through food engineering technology: it comes from hens fed a diet rich in omega-3 fatty acids and therefore has an omega-3 content about fifteen times higher than a Leghorn egg's. Visually, its shell has the same shape and colour as a Leghorn egg's. The eggs can be distinguished by breaking the shell and testing the yolk's nutrient content in a laboratory, but such methods are neither effective nor efficient. Addressing this problem, the purpose of this research is to build an application that detects omega-3 chicken eggs using mobile-based computer vision. The application was built with the OpenCV computer vision library for the Android operating system. The experiment required chicken egg images taken with an egg candling box; 60 omega-3 and Leghorn eggs were used as samples. Images of the eggs were acquired with an Android smartphone. Several image processing steps were then applied: GrabCut, conversion of the RGB image to 8-bit grayscale, median filtering, P-tile segmentation, and morphological operations. Next, feature extraction was used to obtain the mean, variance, skewness, and kurtosis of each image. Finally, the chicken egg images were classified using these digital image measurements. The results showed that omega-3 and Leghorn eggs had different feature values, and the system provides a reading accuracy of around 91%.
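
    The image-processing chain described above (grayscale conversion, median filtering, P-tile segmentation, morphology, then mean, variance, skewness and kurtosis) might be sketched with OpenCV and SciPy as below; the P-tile fraction, kernel sizes and the file name are assumptions for illustration, and the GrabCut step is omitted for brevity.

    ```python
    import cv2
    import numpy as np
    from scipy import stats

    def ptile_threshold(gray: np.ndarray, object_fraction: float = 0.6) -> np.ndarray:
        """P-tile segmentation: keep the brightest `object_fraction` of pixels."""
        thresh = np.percentile(gray, (1.0 - object_fraction) * 100)
        return (gray >= thresh).astype(np.uint8) * 255

    def egg_features(path: str) -> dict:
        """Grayscale, median filter, P-tile segmentation, morphology, then the
        first four statistical moments of the segmented egg region."""
        bgr = cv2.imread(path)
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        smoothed = cv2.medianBlur(gray, 5)
        mask = ptile_threshold(smoothed)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        values = smoothed[mask > 0].astype(np.float64)
        return {"mean": values.mean(),
                "variance": values.var(),
                "skewness": stats.skew(values),
                "kurtosis": stats.kurtosis(values)}

    # print(egg_features("candled_egg.jpg"))   # hypothetical candling-box image
    ```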

  18. Colour for Behavioural Success.

    PubMed

    Dresp-Langley, Birgitta; Reeves, Adam

    2018-01-01

    Colour information not only helps sustain the survival of animal species by guiding sexual selection and foraging behaviour but also is an important factor in the cultural and technological development of our own species. This is illustrated by examples from the visual arts and from state-of-the-art imaging technology, where the strategic use of colour has become a powerful tool for guiding the planning and execution of interventional procedures. The functional role of colour information in terms of its potential benefits to behavioural success across the species is addressed in the introduction here to clarify why colour perception may have evolved to generate behavioural success. It is argued that evolutionary and environmental pressures influence not only colour trait production in the different species but also their ability to process and exploit colour information for goal-specific purposes. We then leap straight to the human primate with insight from current research on the facilitating role of colour cues on performance training with precision technology for image-guided surgical planning and intervention. It is shown that local colour cues in two-dimensional images generated by a surgical fisheye camera help individuals become more precise rapidly across a limited number of trial sets in simulator training for specific manual gestures with a tool. This facilitating effect of a local colour cue on performance evolution in a video-controlled simulator (pick-and-place) task can be explained in terms of colour-based figure-ground segregation facilitating attention to local image parts when more than two layers of subjective surface depth are present, as in all natural and surgical images.

  19. Colour for Behavioural Success

    PubMed Central

    Reeves, Adam

    2018-01-01

    Colour information not only helps sustain the survival of animal species by guiding sexual selection and foraging behaviour but also is an important factor in the cultural and technological development of our own species. This is illustrated by examples from the visual arts and from state-of-the-art imaging technology, where the strategic use of colour has become a powerful tool for guiding the planning and execution of interventional procedures. The functional role of colour information in terms of its potential benefits to behavioural success across the species is addressed in the introduction here to clarify why colour perception may have evolved to generate behavioural success. It is argued that evolutionary and environmental pressures influence not only colour trait production in the different species but also their ability to process and exploit colour information for goal-specific purposes. We then leap straight to the human primate with insight from current research on the facilitating role of colour cues on performance training with precision technology for image-guided surgical planning and intervention. It is shown that local colour cues in two-dimensional images generated by a surgical fisheye camera help individuals become more precise rapidly across a limited number of trial sets in simulator training for specific manual gestures with a tool. This facilitating effect of a local colour cue on performance evolution in a video-controlled simulator (pick-and-place) task can be explained in terms of colour-based figure-ground segregation facilitating attention to local image parts when more than two layers of subjective surface depth are present, as in all natural and surgical images. PMID:29770183

  20. A comparison of hair colour measurement by digital image analysis with reflective spectrophotometry.

    PubMed

    Vaughn, Michelle R; van Oorschot, Roland A H; Baindur-Hudson, Swati

    2009-01-10

    While reflective spectrophotometry is an established method for measuring macroscopic hair colour, it can be cumbersome to use on a large number of individuals and not all reflective spectrophotometry instruments are easily portable. This study investigates the use of digital photographs to measure hair colour and compares its use to reflective spectrophotometry. An understanding of the accuracy of colour determination by these methods is of relevance when undertaking specific investigations, such as those on the genetics of hair colour. Measurements of hair colour may also be of assistance in cases where a photograph is the only evidence of hair colour available (e.g. surveillance). Using the CIE L(*)a(*)b(*) colour space, the hair colour of 134 individuals of European ancestry was measured by both reflective spectrophotometry and by digital image analysis (in V++). A moderate correlation was found along all three colour axes, with Pearson correlation coefficients of 0.625, 0.593 and 0.513 for L(*), a(*) and b(*) respectively (p-values=0.000), with means being significantly overestimated by digital image analysis for all three colour components (by an average of 33.42, 3.38 and 8.00 for L(*), a(*) and b(*) respectively). When using digital image data to group individuals into clusters previously determined by reflective spectrophotometric analysis using a discriminant analysis, individuals were classified into the correct clusters 85.8% of the time when there were two clusters. The percentage of cases correctly classified decreases as the number of clusters increases. It is concluded that, although more convenient, hair colour measurement from digital images has limited use in situations requiring accurate and consistent measurements.

  1. Colour measurements of pigmented rice grain using flatbed scanning and image analysis

    NASA Astrophysics Data System (ADS)

    Kaisaat, Khotchakorn; Keawdonree, Nuttapong; Chomkokard, Sakchai; Jinuntuya, Noparit; Pattanasiri, Busara

    2017-09-01

    Recently, the National Bureau of Agricultural Commodity and Food Standards (ACFS) has drafted a manual of Thai colour rice standards. However, it contains no quantitative description of rice colour or of its measurement method. These drawbacks might lead to misunderstandings among people who use the manual. In this work, we propose an inexpensive method, using flatbed scanning together with image analysis, to quantitatively measure rice colour and colour uniformity. To demonstrate its general applicability for colour differentiation of rice, we applied it to different kinds of pigmented rice, including Riceberry rice with and without uniform colour, and Chinese black rice.

  2. Demosaicing images from colour cameras for digital image correlation

    NASA Astrophysics Data System (ADS)

    Forsey, A.; Gungor, S.

    2016-11-01

    Digital image correlation is not the intended use for consumer colour cameras, but with care they can be successfully employed in such a role. The main obstacle is the sparsely sampled colour data caused by the use of a colour filter array (CFA) to separate the colour channels. It is shown that the method used to convert consumer camera raw files into a monochrome image suitable for digital image correlation (DIC) can have a significant effect on the DIC output. A number of widely available software packages and two in-house methods are evaluated in terms of their performance when used with DIC. Using an in-plane rotating disc to produce a highly constrained displacement field, it was found that the bicubic spline based in-house demosaicing method outperformed the other methods in terms of accuracy and aliasing suppression.

  3. Post-mortem quetiapine concentrations in hair segments of psychiatric patients - Correlation between hair concentration, dose and concentration in blood.

    PubMed

    Günther, Kamilla Nyborg; Johansen, Sys Stybe; Nielsen, Marie Katrine Klose; Wicktor, Petra; Banner, Jytte; Linnet, Kristian

    2018-04-01

    Drug analysis in hair is useful when seeking to establish drug intake over a period of months to years. Segmental hair analysis can also document whether psychiatric patients are receiving a stable intake of antipsychotics. This study describes segmental analysis of the antipsychotic drug quetiapine in post-mortem hair samples from long-term quetiapine users by ultra-high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) analysis. The aim was to obtain more knowledge on quetiapine concentrations in hair and to relate the concentration in hair to the administered dose and the post-mortem concentration in femoral blood. We analyzed hair samples from 22 deceased quetiapine-treated individuals, who were divided into two groups: natural hair colour and dyed/bleached hair. Two to six 1 cm long segments were analyzed per individual, depending on the length of the hair, with 6 cm corresponding to the last six months before death. The average daily quetiapine dose and average concentration in hair for the last six months prior to death were examined for potential correlation. Estimated doses ranged from 45 to 1040 mg quetiapine daily over the period, and the average concentration in hair ranged from 0.18 to 13 ng/mg. A significant positive correlation was observed between the estimated daily dosage of quetiapine and the average concentration in hair for individuals with natural hair colour (p=0.00005), but statistical significance was not reached for individuals with dyed/bleached hair (p=0.31). The individual coefficient of variation (CV) of the quetiapine concentrations between segments ranged from 3 to 34% for individuals with natural hair colour and from 22 to 62% for individuals with dyed/bleached hair. Dose-adjusted concentrations in hair were significantly lower in females with dyed/bleached hair than in individuals with natural hair colour. The quetiapine concentrations in post-mortem femoral blood and in the proximal hair segment, segment 1 (S1), representing the last month before death were also investigated for correlation. A significant positive correlation was observed between quetiapine concentrations in blood at the time of death and concentrations in S1 for individuals with natural hair colour (p=0.003) but not for individuals with dyed/bleached hair (p=0.31). The blood concentrations of quetiapine ranged from 0.006 to 1.9 mg/kg, and the quetiapine concentrations in S1 ranged from 0.22 to 24 ng/mg. The results of this study suggest positive correlations between quetiapine concentrations in hair and administered dose, and between concentrations in the proximal hair segment (S1) and in blood, when conditions such as hair treatments are taken into consideration. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Innovation in the imaging perianal fistula: a step towards personalised medicine

    PubMed Central

    Sahnan, Kapil; Adegbola, Samuel O.; Tozer, Philip J.; Patel, Uday; Ilangovan, Rajpandian; Warusavitarne, Janindra; Faiz, Omar D.; Hart, Ailsa L.; Phillips, Robin K. S.; Lung, Phillip F. C.

    2018-01-01

    Background: Perianal fistula is a topic both hard to understand and to teach. The key to understanding the treatment options and the likely success is deciphering the exact morphology of the tract(s) and the amount of sphincter involved. Our aim was to explore alternative platforms better to understand complex perianal fistulas through three-dimensional (3D) imaging and reconstruction. Methods: Digital imaging and communications in medicine images of spectral attenuated inversion recovery magnetic resonance imaging (MRI) sequences were imported onto validated open-source segmentation software. A specialist consultant gastrointestinal radiologist performed segmentation of the fistula, internal and external sphincter. Segmented files were exported as stereolithography files. Cura (Ultimaker Cura 3.0.4) was used to prepare the files for printing on an Ultimaker 3 Extended 3D printer. Animations were created in collaboration with Touch Surgery™. Results: Three examples of 3D printed models demonstrating complex perianal fistula were created. The anatomical components are displayed in different colours: red: fistula tract; green: external anal sphincter and levator plate; blue: internal anal sphincter and rectum. One of the models was created to be split in half, to display the internal opening and allow complexity in the intersphincteric space to better evaluated. An animation of MRI fistulography of a trans-sphincteric fistula tract with a cephalad extension in the intersphincteric space was also created. Conclusion: MRI is the reference standard for assessment of perianal fistula, defining anatomy and guiding surgery. However, communication of findings between radiologist and surgeon remains challenging. Feasibility of 3D reconstructions of complex perianal fistula is realized, with the potential to improve surgical planning, communication with patients, and augment training. PMID:29854001

  5. Quantifying plant colour and colour difference as perceived by humans using digital images.

    PubMed

    Kendal, Dave; Hauser, Cindy E; Garrard, Georgia E; Jellinek, Sacha; Giljohann, Katherine M; Moore, Joslin L

    2013-01-01

    Human perception of plant leaf and flower colour can influence species management. Colour and colour contrast may influence the detectability of invasive or rare species during surveys. Quantitative, repeatable measures of plant colour are required for comparison across studies and generalisation across species. We present a standard method for measuring plant leaf and flower colour traits using images taken with digital cameras. We demonstrate the method by quantifying the colour of and colour difference between the flowers of eleven grassland species near Falls Creek, Australia, as part of an invasive species detection experiment. The reliability of the method was tested by measuring the leaf colour of five residential garden shrub species in Ballarat, Australia using five different types of digital camera. Flowers and leaves had overlapping but distinct colour distributions. Calculated colour differences corresponded well with qualitative comparisons. Estimates of proportional cover of yellow flowers identified using colour measurements correlated well with estimates obtained by measuring and counting individual flowers. Digital SLR and mirrorless cameras were superior to phone cameras and point-and-shoot cameras for producing reliable measurements, particularly under variable lighting conditions. The analysis of digital images taken with digital cameras is a practicable method for quantifying plant flower and leaf colour in the field or lab. Quantitative, repeatable measurements allow for comparisons between species and generalisations across species and studies. This allows plant colour to be related to human perception and preferences and, ultimately, species management.

  6. Quantifying Plant Colour and Colour Difference as Perceived by Humans Using Digital Images

    PubMed Central

    Kendal, Dave; Hauser, Cindy E.; Garrard, Georgia E.; Jellinek, Sacha; Giljohann, Katherine M.; Moore, Joslin L.

    2013-01-01

    Human perception of plant leaf and flower colour can influence species management. Colour and colour contrast may influence the detectability of invasive or rare species during surveys. Quantitative, repeatable measures of plant colour are required for comparison across studies and generalisation across species. We present a standard method for measuring plant leaf and flower colour traits using images taken with digital cameras. We demonstrate the method by quantifying the colour of and colour difference between the flowers of eleven grassland species near Falls Creek, Australia, as part of an invasive species detection experiment. The reliability of the method was tested by measuring the leaf colour of five residential garden shrub species in Ballarat, Australia using five different types of digital camera. Flowers and leaves had overlapping but distinct colour distributions. Calculated colour differences corresponded well with qualitative comparisons. Estimates of proportional cover of yellow flowers identified using colour measurements correlated well with estimates obtained by measuring and counting individual flowers. Digital SLR and mirrorless cameras were superior to phone cameras and point-and-shoot cameras for producing reliable measurements, particularly under variable lighting conditions. The analysis of digital images taken with digital cameras is a practicable method for quantifying plant flower and leaf colour in the field or lab. Quantitative, repeatable measurements allow for comparisons between species and generalisations across species and studies. This allows plant colour to be related to human perception and preferences and, ultimately, species management. PMID:23977275

  7. The Colour and Stereo Surface Imaging System (CaSSIS) for the ExoMars Trace Gas Orbiter

    USGS Publications Warehouse

    Thomas, N.; Cremonese, G.; Ziethe, R.; Gerber, M.; Brändli, M.; Bruno, G.; Erismann, M.; Gambicorti, L.; Gerber, T.; Ghose, K.; Gruber, M.; Gubler, P.; Mischler, H.; Jost, J.; Piazza, D.; Pommerol, A.; Rieder, M.; Roloff, V.; Servonet, A.; Trottmann, W.; Uthaicharoenpong, T.; Zimmermann, C.; Vernani, D.; Johnson, M.; Pelò, E.; Weigel, T.; Viertl, J.; De Roux, N.; Lochmatter, P.; Sutter, G.; Casciello, A.; Hausner, T.; Ficai Veltroni, I.; Da Deppo, V.; Orleanski, P.; Nowosielski, W.; Zawistowski, T.; Szalai, S.; Sodor, B.; Tulyakov, S.; Troznai, G.; Banaskiewicz, M.; Bridges, J.C.; Byrne, S.; Debei, S.; El-Maarry, M. R.; Hauber, E.; Hansen, C.J.; Ivanov, A.; Keszthelyil, L.; Kirk, Randolph L.; Kuzmin, R.; Mangold, N.; Marinangeli, L.; Markiewicz, W. J.; Massironi, M.; McEwen, A.S.; Okubo, Chris H.; Tornabene, L.L.; Wajer, P.; Wray, J.J.

    2017-01-01

    The Colour and Stereo Surface Imaging System (CaSSIS) is the main imaging system onboard the European Space Agency’s ExoMars Trace Gas Orbiter (TGO) which was launched on 14 March 2016. CaSSIS is intended to acquire moderately high resolution (4.6 m/pixel) targeted images of Mars at a rate of 10–20 images per day from a roughly circular orbit 400 km above the surface. Each image can be acquired in up to four colours and stereo capability is foreseen by the use of a novel rotation mechanism. A typical product from one image acquisition will be a 9.5 km × ∼45 km swath in full colour and stereo in one over-flight of the target thereby reducing atmospheric influences inherent in stereo and colour products from previous high resolution imagers. This paper describes the instrument including several novel technical solutions required to achieve the scientific requirements.

  8. An improved algorithm for de-striping of ocean colour monitor imageries aided by measured sensor characteristics

    NASA Astrophysics Data System (ADS)

    Dutt, Ashutosh; Mishra, Ashish; Goswami, D. R.; Kumar, A. S. Kiran

    2016-05-01

    Push-broom sensors in bands meant to study the oceans generally suffer from residual non-uniformity even after radiometric correction. The in-orbit data from OCM-2 show pronounced striping in the lower bands. There have been many attempts, with different approaches, to solve the problem using the image data itself; the success of each algorithm depends on the quality of the uniform regions identified. In this paper, an image-based destriping algorithm is presented, with constraints derived from the ground calibration exercise. The basis of the methodology is the determination of pixel-to-pixel non-uniformity from uniform segments identified and collected across a large number of images covering the dynamic range of the sensor. The results show the effectiveness of the algorithm over different targets. The performance is evaluated qualitatively by visual inspection and measured quantitatively by two parameters.
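
    A minimal sketch of the underlying idea, assuming the striping can be modelled as a fixed per-detector (per-column) gain: relative gains are estimated from the column means of many identified uniform segments and then divided out. The gain-only model and the function names are illustrative assumptions, not the algorithm described in the paper.

    ```python
    import numpy as np

    def estimate_column_gains(uniform_segments):
        """Estimate relative per-column gains from a list of uniform image
        segments (2D arrays of shape rows x columns from the same sensor)."""
        ratios = []
        for seg in uniform_segments:
            col_means = seg.mean(axis=0)                 # mean signal per detector
            ratios.append(col_means / col_means.mean())  # normalise each segment
        return np.median(np.stack(ratios), axis=0)       # robust combined gain

    def destripe(image, gains):
        """Divide out the estimated per-column gains."""
        return image / gains[np.newaxis, :]

    # Usage (illustrative): segments collected over the sensor's dynamic range
    # gains = estimate_column_gains(segments)
    # corrected = destripe(raw_band, gains)
    ```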

  9. Photographic monitoring of soiling and decay of roadside walls in central Oxford, England

    NASA Astrophysics Data System (ADS)

    Thornbush, Mary J.; Viles, Heather A.

    2008-12-01

    As part of the Environmental Monitoring of Integrated Transport Strategies (EMITS) project, which examined the impact of the Oxford Transport Strategy (OTS) on the soiling and decay of buildings and structures in central Oxford, England, a simple photographic survey of a sample of roadside walls was carried out in 1997, with re-surveys in 1999 and 2003. Thirty photographs were taken each time, covering an area of stonework approximately 30 × 30 cm in dimensions at 1-1.3 m above pavement level. The resulting images have been used to investigate, both qualitatively and quantitatively, the progression of soiling and decay. Comparison of images by eye reveals a number of minor changes in soiling and decay patterns, but generally indicates stability except at one site where dramatic, superficial damage occurred over 2 years. Quantitative analysis of decay features (concavities resulting from surface blistering, flaking, and scaling), using simple techniques in Adobe Photoshop, shows variable pixel-based size proportions of concavities across 6 years of survey. Colour images (in Lab Color) generally yield a smaller proportion of pixels representing decay features than black-and-white (Grayscale) images. The study shows that colour images provide more information both for general observations of soiling and decay patterns and for segmentation of decay-produced concavities. The study indicates that simple repeat photography can reveal useful information about changing patterns of both soiling and decay, although unavoidable variation in external lighting conditions between re-surveys is a factor limiting the accuracy of change detection.

  10. Seamless Image Mosaicking via Synchronization

    NASA Astrophysics Data System (ADS)

    Santellani, E.; Maset, E.; Fusiello, A.

    2018-05-01

    This paper proposes an innovative method to create high-quality seamless planar mosaics. The developed pipeline ensures good robustness against many common mosaicking problems (e.g., misalignments, colour distortion, moving objects, parallax) and differs from other works in the literature because a global approach, known as synchronization, is used for image registration and colour correction. To better conceal the mosaic seamlines, images are cut along specific paths, computed using a Voronoi decomposition of the mosaic area and a shortest path algorithm. Results obtained on challenging real datasets show that the colour correction mitigates significantly the colour variations between the original images and the seams on the final mosaic are not evident.

  11. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    PubMed Central

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.

    2017-01-01

    The most compelling requirements for visual tracking systems are a high detection accuracy and an adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to meet both requirements by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that is able to automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) combined with the CHT algorithm to find the proper colour for the tracked target; the target was attached to the end-effector of a six degree-of-freedom (DOF) robot performing a pick-and-place task. Two cooperating eye-to-hand cameras, each with an image averaging filter, are used to obtain clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot with an overall tracking error of 0.25 mm, and the effectiveness of the CRCHT technique in saving up to 60% of the overall time required for image processing. PMID:28067860
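
    A minimal sketch of this kind of pipeline, assuming OpenCV: pixels within a Delta E (CIE76) tolerance of a reference target colour are kept, and a circular Hough transform is run on the masked image to locate the spherical target. The reference colour, tolerance and Hough parameters are illustrative assumptions, not values from the paper.

    ```python
    import cv2
    import numpy as np

    def find_coloured_ball(bgr, ref_lab, tol=25.0):
        """Locate a spherical target of a known colour in a BGR frame."""
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        # CIE76 colour difference to the reference colour (Delta E)
        delta_e = np.linalg.norm(lab - np.array(ref_lab, np.float32), axis=2)
        mask = (delta_e < tol).astype(np.uint8) * 255
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        masked = cv2.bitwise_and(gray, gray, mask=mask)
        circles = cv2.HoughCircles(cv2.medianBlur(masked, 5), cv2.HOUGH_GRADIENT,
                                   dp=1.2, minDist=50, param1=100, param2=20,
                                   minRadius=5, maxRadius=80)
        return None if circles is None else circles[0, 0]  # (x, y, radius)

    # Usage (illustrative): ref_lab is the target colour in OpenCV's 8-bit Lab scale
    # ball = find_coloured_ball(frame, ref_lab=(150, 170, 160))
    ```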

  12. Colour Vision: Understanding #TheDress.

    PubMed

    Brainard, David H; Hurlbert, Anya C

    2015-06-29

    A widely-viewed image of a dress elicits striking individual variation in colour perception. Experiments with multiple variants of the image suggest that the individual differences may arise through the action of visual mechanisms that normally stabilise object colour. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Image Size Scalable Full-parallax Coloured Three-dimensional Video by Electronic Holography

    NASA Astrophysics Data System (ADS)

    Sasaki, Hisayuki; Yamamoto, Kenji; Ichihashi, Yasuyuki; Senoh, Takanori

    2014-02-01

    In electronic holography, various methods have been considered for using multiple spatial light modulators (SLM) to increase the image size. In a previous work, we used a monochrome light source for a method that located an optical system containing lens arrays and other components in front of multiple SLMs. This paper proposes a colourization technique for that system based on time division multiplexing using laser light sources of three colours (red, green, and blue). The experimental device we constructed was able to perform video playback (20 fps) in colour of full parallax holographic three-dimensional (3D) images with an image size of 63 mm and a viewing-zone angle of 5.6 degrees without losing any part of the 3D image.

  14. A Fast SVM-Based Tongue's Colour Classification Aided by k-Means Clustering Identifiers and Colour Attributes as Computer-Assisted Tool for Tongue Diagnosis.

    PubMed

    Kamarudin, Nur Diyana; Ooi, Chia Yee; Kawanabe, Tadaaki; Odaguchi, Hiroshi; Kobayashi, Fuminori

    2017-01-01

    In tongue diagnosis, the colour of the tongue body carries valuable information regarding the state of disease and its correlation with the internal organs. Qualitatively, practitioners may have difficulty in their judgement because of unstable lighting conditions and the limited ability of the naked eye to capture the exact colour distribution on the tongue, especially for a tongue with a multicoloured substance. To overcome this ambiguity, this paper presents a two-stage tongue multicolour classification based on a support vector machine (SVM) whose support vectors are reduced by our proposed k-means clustering identifiers and a red colour range for precise tongue colour diagnosis. In the first stage, k-means clustering is used to cluster a tongue image into four clusters: image background (black), deep red region, red/light red region, and transitional region. In the second-stage classification, red/light red tongue images are further classified into red tongue or light red tongue based on the red colour range derived in our work. Overall, the true-rate classification accuracy of the proposed two-stage classification in diagnosing red, light red, and deep red tongue colours is 94%. The number of support vectors in the SVM is reduced by 41.2%, and the execution time for one image is recorded as 48 seconds.
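
    A minimal sketch of a two-stage pipeline of this kind, assuming scikit-learn and already extracted pixel colour values: k-means first partitions pixels into four coarse clusters, and an SVM then separates red from light-red samples. The feature layout, labels and parameters are illustrative assumptions rather than the authors' exact identifiers.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def first_stage_clusters(pixels_lab, n_clusters=4, seed=0):
        """Stage 1: coarse partition of tongue pixels (N x 3 Lab values) into
        background, deep red, red/light red and transitional clusters."""
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
        return km.fit_predict(pixels_lab), km.cluster_centers_

    def second_stage_classifier(train_features, train_labels):
        """Stage 2: SVM separating 'red' from 'light red' tongue samples,
        trained on per-image colour features (e.g. mean Lab of the tongue body)."""
        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        clf.fit(train_features, train_labels)
        return clf

    # Usage (illustrative):
    # labels, centres = first_stage_clusters(tongue_pixels)
    # clf = second_stage_classifier(X_train, y_train)   # y in {"red", "light red"}
    # prediction = clf.predict(mean_lab_of_new_image.reshape(1, -1))
    ```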

  15. A Fast SVM-Based Tongue's Colour Classification Aided by k-Means Clustering Identifiers and Colour Attributes as Computer-Assisted Tool for Tongue Diagnosis

    PubMed Central

    Ooi, Chia Yee; Kawanabe, Tadaaki; Odaguchi, Hiroshi; Kobayashi, Fuminori

    2017-01-01

    In tongue diagnosis, the colour of the tongue body carries valuable information regarding the state of disease and its correlation with the internal organs. Qualitatively, practitioners may have difficulty in their judgement because of unstable lighting conditions and the limited ability of the naked eye to capture the exact colour distribution on the tongue, especially for a tongue with a multicoloured substance. To overcome this ambiguity, this paper presents a two-stage tongue multicolour classification based on a support vector machine (SVM) whose support vectors are reduced by our proposed k-means clustering identifiers and a red colour range for precise tongue colour diagnosis. In the first stage, k-means clustering is used to cluster a tongue image into four clusters: image background (black), deep red region, red/light red region, and transitional region. In the second-stage classification, red/light red tongue images are further classified into red tongue or light red tongue based on the red colour range derived in our work. Overall, the true-rate classification accuracy of the proposed two-stage classification in diagnosing red, light red, and deep red tongue colours is 94%. The number of support vectors in the SVM is reduced by 41.2%, and the execution time for one image is recorded as 48 seconds. PMID:29065640

  16. Adaptive marginal median filter for colour images.

    PubMed

    Morillas, Samuel; Gregori, Valentín; Sapena, Almanzor

    2011-01-01

    This paper describes a new filter for impulse noise reduction in colour images which is aimed at improving the noise reduction capability of the classical vector median filter. The filter is inspired by the application of a vector marginal median filtering process over a selected group of pixels in each filtering window. This selection, which is based on the vector median, along with the application of the marginal median operation constitutes an adaptive process that leads to a more robust filter design. Also, the proposed method is able to process colour images without introducing colour artifacts. Experimental results show that the images filtered with the proposed method contain less noisy pixels than those obtained through the vector median filter.
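
    A minimal sketch of the filtering idea, assuming NumPy: within each window, the pixels closest to the vector median are selected, and the output is the channel-wise (marginal) median of that group. The window size, group size and brute-force looping are illustrative simplifications, not the paper's optimised design.

    ```python
    import numpy as np

    def adaptive_marginal_median(img, window=3, group=5):
        """Impulse-noise filter for a float RGB image (H x W x 3)."""
        pad = window // 2
        padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
        out = np.empty_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                win = padded[i:i + window, j:j + window].reshape(-1, 3)
                # vector median: pixel minimising total L2 distance to the others
                dists = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2)
                order = np.argsort(dists.sum(axis=1))
                selected = win[order[:group]]            # pixels nearest the vector median
                out[i, j] = np.median(selected, axis=0)  # marginal (per-channel) median
        return out
    ```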

  17. An RGB approach to prismatic colours

    NASA Astrophysics Data System (ADS)

    Theilmann, Florian; Grusche, Sascha

    2013-11-01

    Teaching prismatic colours usually boils down to establishing the take-home message that white light consists of ‘differently refrangible’ coloured rays. This approach explains the classical spectrum of seven colours but has its limitations, e.g. in discussing spectra from setups with higher resolution or in understanding the well saturated colours of simple edge spectra. Besides, the connection of physical wavelength and colour remains obscure—after all, colour and wavelength are not equivalent. In this paper, we suggest that teachers demonstrate these impressive experiments in the classroom by using a video projector and a prism to disperse black-and-white slit images. We introduce experimental and diagrammatic methods for establishing the connection between the original slit image and the wavelength composition of the resulting spectrum. From this (or any other given) wavelength composition, students can systematically derive the colours with a simple RGB approach, thus gaining a more accurate picture of the relation between wavelength and colour.
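
    For readers who want to experiment, the sketch below converts a single wavelength into an approximate RGB triple using a widely circulated piecewise-linear approximation; it illustrates the kind of wavelength-to-colour mapping discussed, not the authors' classroom procedure, and the breakpoints are approximate.

    ```python
    def wavelength_to_rgb(wl_nm):
        """Approximate RGB (0-1 floats) for a visible wavelength in nm.
        Piecewise-linear approximation; the intensity roll-off at the
        ends of the spectrum is omitted for clarity."""
        if 380 <= wl_nm < 440:
            return ((440 - wl_nm) / 60.0, 0.0, 1.0)
        if 440 <= wl_nm < 490:
            return (0.0, (wl_nm - 440) / 50.0, 1.0)
        if 490 <= wl_nm < 510:
            return (0.0, 1.0, (510 - wl_nm) / 20.0)
        if 510 <= wl_nm < 580:
            return ((wl_nm - 510) / 70.0, 1.0, 0.0)
        if 580 <= wl_nm < 645:
            return (1.0, (645 - wl_nm) / 65.0, 0.0)
        if 645 <= wl_nm <= 780:
            return (1.0, 0.0, 0.0)
        return (0.0, 0.0, 0.0)  # outside the visible range

    # A spectrum composed of several wavelengths can then be rendered by summing
    # the RGB contributions weighted by their intensities and clipping to [0, 1].
    ```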

  18. Gender differences in colour naming performance for gender specific body shape images.

    PubMed

    Elliman, N A; Green, M W; Wan, W K

    1998-03-01

    Males are increasingly subjected to pressures to conform to aesthetic body stereotypes. There is, however, comparatively little published research on the aetiology of male body shape concerns. Two experiments are presented, which investigate the relationship between gender specific body shape concerns and colour-naming performance. Each study comprised a between subject design, in which each subject was tested on a single occasion. A pictorial version of a modified Stroop task was used in both studies. Subjects colour-named gender specific obese and thin body shape images and semantically homogeneous neutral images (birds) presented in a blocked format. The first experiment investigated female subjects (N = 68) and the second investigated males (N = 56). Subjects also completed a self-report measure of eating behaviour. Currently dieting female subjects exhibited significant colour-naming differences between obese and neutral images. A similar pattern of colour-naming performance was found to be related to external eating in the male subjects.

  19. Incorporating Colour Information for Computer-Aided Diagnosis of Melanoma from Dermoscopy Images: A Retrospective Survey and Critical Analysis

    PubMed Central

    Drew, Mark S.

    2016-01-01

    Cutaneous melanoma is the most life-threatening form of skin cancer. Although advanced melanoma is often considered as incurable, if detected and excised early, the prognosis is promising. Today, clinicians use computer vision in an increasing number of applications to aid early detection of melanoma through dermatological image analysis (dermoscopy images, in particular). Colour assessment is essential for the clinical diagnosis of skin cancers. Due to this diagnostic importance, many studies have either focused on or employed colour features as a constituent part of their skin lesion analysis systems. These studies range from using low-level colour features, such as simple statistical measures of colours occurring in the lesion, to availing themselves of high-level semantic features such as the presence of blue-white veil, globules, or colour variegation in the lesion. This paper provides a retrospective survey and critical analysis of contributions in this research direction. PMID:28096807

  20. Digital colour management system for colour parameters reconstruction

    NASA Astrophysics Data System (ADS)

    Grudzinski, Karol; Lasmanowicz, Piotr; Assis, Lucas M. N.; Pawlicka, Agnieszka; Januszko, Adam

    2013-10-01

    A Digital Colour Management System (DCMS) and its application to a new adaptive camouflage system are presented in this paper. The DCMS is a digital colour rendering method that allows a real image to be transformed into a set of colour pixels displayed on a computer monitor. Consequently, it can analyse the colours of the pixels that comprise images of environments such as desert, semi-desert, jungle, farmland or rocky mountain in order to prepare the adaptive camouflage pattern best suited to the terrain. The present work describes this system, as well as the use of the subtractive colour mixing method to construct a real-time colour-changing electrochromic window/pixel (ECD) for camouflage purposes. An ECD with the glass/ITO/Prussian Blue(PB)/electrolyte/CeO2-TiO2/ITO/glass configuration was assembled and characterized. The ECD switched between green and yellow on application of +/-1.5 V, and the colours were controlled by the Digital Colour Management System and described by CIE LAB parameters.

  1. Analysis of the color rendition of flexible endoscopes

    NASA Astrophysics Data System (ADS)

    Murphy, Edward M.; Hegarty, Francis J.; McMahon, Barry P.; Boyle, Gerard

    2003-03-01

    Endoscopes are imaging devices routinely used for the diagnosis of disease within the human digestive tract. Light is transmitted into the body cavity via incoherent fibreoptic bundles and is controlled by a light feedback system. Fibreoptic endoscopes use coherent fibreoptic bundles to provide the clinician with an image. It is also possible to couple fibreoptic endoscopes to a clip-on video camera. Video endoscopes consist of a small CCD camera, which is inserted into gastrointestinal tract, and associated image processor to convert the signal to analogue RGB video signals. Images from both types of endoscope are displayed on standard video monitors. Diagnosis is dependent upon being able to determine changes in the structure and colour of tissues and biological fluids, and therefore is dependent upon the ability of the endoscope to reproduce the colour of these tissues and fluids with fidelity. This study investigates the colour reproduction of flexible optical and video endoscopes. Fibreoptic and video endoscopes alter image colour characteristics in different ways. The colour rendition of fibreoptic endoscopes was assessed by coupling them to a video camera and applying video colorimetric techniques. These techniques were then used on video endoscopes to assess how the colour rendition of video endoscopes compared with that of optical endoscopes. In both cases results were obtained at fixed illumination settings. Video endoscopes were then assessed with varying levels of illumination. Initial results show that at constant luminance endoscopy systems introduce non-linear shifts in colour. Techniques for examining how this colour shift varies with illumination intensity were developed and both methodology and results will be presented. We conclude that more rigorous quality assurance is required to reduce colour error and are developing calibration procedures applicable to medical endoscopes.

  2. Attention during adaptation weakens negative afterimages of perceptually colour-spread surfaces.

    PubMed

    Lak, Armin

    2008-06-01

    The visual system can complete coloured surfaces from stimulus fragments, inducing the subjective perception of a colour-spread figure. Negative afterimages of these induced colours were first reported by S. Shimojo, Y. Kamitani, and S. Nishida (2001). Two experiments were conducted to examine the effect of attention on the duration of these afterimages. The results showed that shifting attention to the colour-spread figure during the adaptation phase weakened the subsequent afterimage. On the basis of previous findings that the duration of these afterimages is correlated with the strength of perceptual filling-in (grouping) among local inducers during the adaptation phase, it is proposed that attention weakens perceptual filling-in during the adaptation phase and thereby prevents the stimulus from being segmented into an illusory figure. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  3. First CaSSIS Colour Images of Mars

    NASA Astrophysics Data System (ADS)

    Alfred, M.; Pommerol, A.; Thomas, N.; Cremonese, G.

    2017-12-01

    The Colour and Stereo Surface Imaging System (CaSSIS) on board ESA's Exomars Trace Gas Orbiter has acquired its first images of the surface of Mars on the 22nd and 26th of November, 2016. This commissioning campaign on the initial capture orbit was highly successful, allowing us to test the instrument, establish its performance and collect detailed images from the surface. Many of them have been publicly released within days following acquisition. These images and other commissioning data have demonstrated that the capabilities of the instrument are fully in-line with expectation. Although a colour image of Phobos produced from observations acquired on the 26th of November was rapidly released, the calibration and production of colour images from the surface of Mars proved to be more challenging. Having fixed technical issues, acquired and processed necessary in-flight calibration data, we have recently recalibrated the whole dataset, improving significantly the quality of the data and allowing us, for the first time, to produce high-quality colour images from the surface of Mars with CaSSIS data. The absolute calibration of the instrument is currently verified using stellar observations but the values of reflectivity obtained in each of the four colour channels for the surfaces of Mars and Phobos already show good consistency with other orbital data. The timing of CaSSIS acquisitions is very accurate and results in good colour matching, as already verified on-ground during the calibration campaign. The first few images acquired on the 22nd of November, shortly after TGO crossed the morning terminator, show unique views of the dusty terrains of the Tharsis region with solar incidence angle ranging between 60° and 80°. Comparison with images of the same areas acquired at later local times by other orbiters shows intriguing differences, related in particular to the brightness and colour of the floor of dust-filled craters that look bluer in the morning than in the afternoon. These observations and possible explanations of these changes in terms of diurnal volatile cycles will be presented and discussed, providing a glimpse into the future scientific activities permitted by CaSSIS once the nominal science phase begins in 2018.

  4. GEMAS: Colours of dry and moist agricultural soil samples of Europe

    NASA Astrophysics Data System (ADS)

    Klug, Martin; Fabian, Karl; Reimann, Clemens

    2016-04-01

    High resolution HDR colour images of all Ap samples from the GEMAS survey were acquired using a GeoTek Linescan camera. Three measurements of dry and wet samples with increasing exposure time and increasing illumination settings produced a set of colour images at 50μm resolution. Automated image processing was used to calibrate the six images per sample with respect to the synchronously measured X-Rite colorchecker chart. The calibrated images were then fit to Munsell soil colours that were measured in the same way. The results provide overview maps of dry and moist European soil colours. Because colour is closely linked to iron mineralogy, carbonate, silicate and organic carbon content the results can be correlated to magnetic, mineralogical, and geochemical properties. In combination with the full GEMAS chemical and physical measurements, this yields a valuable data set for calibration and interpretation of visible satellite colour data with respect to chemical composition and geological background, soil moisture, and soil degradation. This data set will help to develop new methods for world-wide characterization and monitoring of agricultural soils which is essential for quantifying geologic and human impact on the critical zone environment. It furthermore enables the scientific community and governmental authorities to monitor consequences of climatic change, to plan and administrate economic and ecological land use, and to use the data set for forensic applications.

  5. Three-channel false colour AFM images for improved interpretation of complex surfaces: a study of filamentous cyanobacteria.

    PubMed

    Kurk, Toby; Adams, David G; Connell, Simon D; Thomson, Neil H

    2010-05-01

    Imaging signals derived from the atomic force microscope (AFM) are typically presented as separate adjacent images with greyscale or pseudo-colour palettes. We propose that information-rich false-colour composites are a useful means of presenting three-channel AFM image data. This method can aid the interpretation of complex surfaces and facilitate the perception of information that is convoluted across data channels. We illustrate this approach with images of filamentous cyanobacteria imaged in air and under aqueous buffer, using both deflection-modulation (contact) mode and amplitude-modulation (tapping) mode. Topography-dependent contrast in the error and tertiary signals aids the interpretation of the topography signal by contributing additional data, resulting in a more detailed image, and by showing variations in the probe-surface interaction. Moreover, topography-independent contrast and topography-dependent contrast in the tertiary data image (phase or friction) can be distinguished more easily as a consequence of the three dimensional colour-space.
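
    A minimal sketch of building such a three-channel false-colour composite, assuming NumPy and three co-registered AFM signal arrays (e.g. topography, error and phase): each channel is contrast-stretched to [0, 1] and stacked into an RGB image. The channel assignment is an illustrative choice, not a prescription from the paper.

    ```python
    import numpy as np

    def stretch(channel, low=2, high=98):
        """Percentile contrast stretch of one AFM signal to the range [0, 1]."""
        lo, hi = np.percentile(channel, [low, high])
        return np.clip((channel - lo) / (hi - lo + 1e-12), 0.0, 1.0)

    def false_colour_composite(topo, error, phase):
        """Map topography -> red, error -> green, tertiary (phase) -> blue."""
        return np.dstack([stretch(topo), stretch(error), stretch(phase)])

    # Usage (illustrative): rgb = false_colour_composite(height, deflection, phase)
    # The result can be saved or displayed with any standard image library.
    ```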

  6. Development of a table tennis robot for ball interception using visual feedback

    NASA Astrophysics Data System (ADS)

    Parnichkun, Manukid; Thalagoda, Janitha A.

    2016-07-01

    This paper presents a concept for intercepting a moving table tennis ball using a robot. The robot has four degrees of freedom (DOF), simplified in such a way that the system is able to perform the task within the bounded limits. It employs computer vision to localize the ball. For ball identification, Colour Based Threshold Segmentation (CBTS) and Background Subtraction (BS) methodologies are used. Coordinate Transformation (CT) is employed to transform the data from the camera coordinate frame to the general coordinate frame. The sensory system consists of two HD web cameras. Because the computation time of image processing from the web cameras is long, it is not possible to intercept the table tennis ball using image processing alone; therefore a projectile motion model is employed to predict the final destination of the ball.

  7. Ultrahigh field magnetic resonance and colour Doppler real-time fusion imaging of the orbit--a hybrid tool for assessment of choroidal melanoma.

    PubMed

    Walter, Uwe; Niendorf, Thoralf; Graessl, Andreas; Rieger, Jan; Krüger, Paul-Christian; Langner, Sönke; Guthoff, Rudolf F; Stachs, Oliver

    2014-05-01

    A combination of magnetic resonance images with real-time high-resolution ultrasound known as fusion imaging may improve ophthalmologic examination. This study was undertaken to evaluate the feasibility of orbital high-field magnetic resonance and real-time colour Doppler ultrasound image fusion and navigation. This case study, performed between April and June 2013, included one healthy man (age, 47 years) and two patients (one woman, 57 years; one man, 67 years) with choroidal melanomas. All cases underwent 7.0-T magnetic resonance imaging using a custom-made ocular imaging surface coil. The Digital Imaging and Communications in Medicine volume data set was then loaded into the ultrasound system for manual registration of the live ultrasound image and fusion imaging examination. Data registration, matching and then volume navigation were feasible in all cases. Fusion imaging provided real-time imaging capabilities and high tissue contrast of choroidal tumour and optic nerve. It also allowed adding a real-time colour Doppler signal on magnetic resonance images for assessment of vasculature of tumour and retrobulbar structures. The combination of orbital high-field magnetic resonance and colour Doppler ultrasound image fusion and navigation is feasible. Multimodal fusion imaging promises to foster assessment and monitoring of choroidal melanoma and optic nerve disorders. • Orbital magnetic resonance and colour Doppler ultrasound real-time fusion imaging is feasible • Fusion imaging combines the spatial and temporal resolution advantages of each modality • Magnetic resonance and ultrasound fusion imaging improves assessment of choroidal melanoma vascularisation.

  8. Continuous tone printing in silicone from CNC milled matrices

    NASA Astrophysics Data System (ADS)

    Hoskins, S.; McCallion, P.

    2014-02-01

    Current research at the Centre for Fine Print Research (CFPR) at the University of the West of England, Bristol, is exploring the potential of creating coloured pictorial imagery from a continuous tone relief surface. To create the printing matrices the research team have been using CNC-milled images, where the height of the relief is dictated by creating a tone curve and then milling this curve into a series of relief blocks from which the image is cast in a silicone ink. A translucent image is cast from each of the colour matrices and each colour is assembled, one on top of another, resulting in a colour continuous tone print, where colour tone is created by the physical depth of colour. This process is a contemporary method of continuous tone colour printing based upon the nineteenth-century black-and-white printing process of Woodburytype, as developed by Walter Bentley Woodbury in 1865. Woodburytype is the only true continuous tone printing process invented, and although its delicate and subtle surfaces surpassed all other printing methods at the time, the process died out in the late nineteenth century as more expedient and cost-effective methods of printing prevailed. New research at CFPR builds upon previous research that combines nineteenth-century photomechanical techniques with digital technology to reappraise the potential of these processes.

  9. A Comparative Study on Diagnostic Accuracy of Colour Coded Digital Images, Direct Digital Images and Conventional Radiographs for Periapical Lesions – An In Vitro Study

    PubMed Central

    Mubeen; K.R., Vijayalakshmi; Bhuyan, Sanat Kumar; Panigrahi, Rajat G; Priyadarshini, Smita R; Misra, Satyaranjan; Singh, Chandravir

    2014-01-01

    Objectives: The identification and radiographic interpretation of periapical bone lesions is important for accurate diagnosis and treatment. The present study was undertaken to assess the feasibility and diagnostic accuracy of colour coded digital radiographs in terms of the presence and size of lesions, and to compare the diagnostic accuracy of colour coded digital images with direct digital images and conventional radiographs for assessing periapical lesions. Materials and Methods: Sixty human dry cadaver hemimandibles were obtained and periapical lesions were created in first and second premolar teeth at the junction of cancellous and cortical bone using a micromotor handpiece and carbide burs of sizes 2, 4 and 6. After each successive use of the round burs, a conventional, RVG and colour coded image was taken for each specimen. All the images were evaluated by three observers. The diagnostic accuracy for each bur and image mode was calculated statistically. Results: Our results showed good interobserver agreement (kappa > 0.61) for the different radiographic techniques and for the different bur sizes. Conventional radiography outperformed digital radiography in diagnosing periapical lesions made with the size 2 bur; both were equally diagnostic for lesions made with larger bur sizes. The colour coding method was the least accurate of all the techniques. Conclusion: Conventional radiography traditionally forms the backbone of the diagnosis, treatment planning and follow-up of periapical lesions. Direct digital imaging is an efficient technique in the diagnostic sense. Colour coding of digital radiographs was feasible but less accurate; however, this imaging technique, like any other, needs to be studied continuously with emphasis on patient safety and the diagnostic quality of images. PMID:25584318

  10. The "Human Colour" Crayon: Investigating the Attitudes and Perceptions of Learners Regarding Race and Skin Colour

    ERIC Educational Resources Information Center

    Alexander, Neeske; Costandius, Elmarie

    2017-01-01

    Some coloured and black learners in South Africa use a light orange or pink crayon to represent themselves in art. Many learners name this colour "human colour" or "skin colour". This is troublesome, because it could reflect exclusionary ways of representing race in images and language. This case study, conducted with two…

  11. Automatic colorimetric calibration of human wounds

    PubMed Central

    2010-01-01

    Background: Digital photography in medicine has recently come to be considered an acceptable tool in many clinical domains, e.g. wound care. Although ever higher resolutions are available, reproducibility is still poor and visual comparison of images remains difficult. This is even more the case for measurements performed on such images (colour, area, etc.). The problem is often neglected and images are freely compared and exchanged without further thought. Methods: The first experiment checked whether camera settings or lighting conditions could negatively affect the quality of colorimetric calibration. Digital images including a calibration chart were exposed to a variety of conditions. Precision and accuracy of colours after calibration were quantitatively assessed with a probability distribution for perceptual colour differences (dE_ab). The second experiment was designed to assess the impact of the automatic calibration procedure (i.e. chart detection) on real-world measurements. Forty different images of real wounds were acquired and a region of interest was selected in each image. Three rotated versions of each image were automatically calibrated and colour differences were calculated. Results: In the first experiment, colour differences between the measurements and true spectrophotometric measurements gave median dE_ab values of 6.40 for the proper patches of calibrated normal images and 17.75 for uncalibrated images, demonstrating an important improvement in accuracy after calibration. The reproducibility, visualized by the probability distribution of the dE_ab errors between two measurements of the patches of the images, has a median of 3.43 dE_ab for all calibrated images and 23.26 dE_ab for all uncalibrated images. Restricted to the proper patches of normal calibrated images, the median is only 2.58 dE_ab. Wilcoxon rank-sum testing between uncalibrated normal images and calibrated normal images with proper squares gave p-values equal to 0 (p < 0.05), demonstrating a highly significant improvement in reproducibility. In the second experiment, the reproducibility of the chart detection during automatic calibration is presented using a probability distribution of dE_ab errors between two measurements of the same ROI. Conclusion: The investigators propose an automatic colour calibration algorithm that ensures reproducible colour content of digital images. Evidence is provided that images taken with commercially available digital cameras can be calibrated independently of any camera settings and illumination features. PMID:20298541
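
    A minimal sketch of the perceptual colour-difference measure used here, assuming scikit-image: both images are converted to CIE Lab and the Euclidean (CIE76) dE_ab is computed per pixel, with the median over a region of interest as the summary statistic. Function names are illustrative, not those of the authors' software.

    ```python
    import numpy as np
    from skimage.color import rgb2lab

    def delta_e_map(rgb_a, rgb_b):
        """Per-pixel CIE76 colour difference between two aligned RGB images
        (float arrays in [0, 1] with identical shapes)."""
        lab_a, lab_b = rgb2lab(rgb_a), rgb2lab(rgb_b)
        return np.linalg.norm(lab_a - lab_b, axis=2)

    def median_delta_e(rgb_a, rgb_b, roi_mask):
        """Median dE_ab over a region of interest (boolean mask)."""
        return float(np.median(delta_e_map(rgb_a, rgb_b)[roi_mask]))

    # Usage (illustrative): compare a calibrated image against a reference
    # value = median_delta_e(calibrated, reference, wound_mask)
    ```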

  12. Computer vision-based analysis of foods: a non-destructive colour measurement tool to monitor quality and safety.

    PubMed

    Mogol, Burçe Ataç; Gökmen, Vural

    2014-05-01

    Computer vision-based image analysis has been widely used in the food industry to monitor food quality. It allows low-cost and non-contact measurements of colour to be performed. In this paper, two computer vision-based image analysis approaches are discussed for extracting mean colour or featured colour information from digital images of foods. These types of information may be of particular importance as colour indicates certain chemical changes or physical properties in foods. As exemplified here, the mean CIE a* value or the browning ratio determined by means of computer vision-based image analysis algorithms can be correlated with the acrylamide content of potato chips or cookies. Likewise, the porosity index, an important physical property of breadcrumb, can be calculated easily. In this respect, computer vision-based image analysis provides a useful tool for automatic inspection of food products on a manufacturing line, and it can be actively involved in the decision-making process where rapid quality/safety evaluation is needed. © 2013 Society of Chemical Industry.
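
    A minimal sketch of extracting the mean CIE a* of a food region from a digital image, assuming scikit-image; the 'brown pixel' fraction shown alongside is only an illustrative proxy for a browning-type index, not the specific browning ratio defined by the authors.

    ```python
    import numpy as np
    from skimage.color import rgb2lab

    def mean_cie_a(rgb, mask):
        """Mean CIE a* over the food region (boolean mask) of an RGB image."""
        lab = rgb2lab(rgb)
        return float(lab[..., 1][mask].mean())

    def brown_fraction(rgb, mask, a_min=5.0, b_min=15.0, l_max=70.0):
        """Illustrative proxy: fraction of region pixels falling in a 'brownish'
        Lab box (thresholds are assumptions, not the published definition)."""
        lab = rgb2lab(rgb)
        L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
        brown = (a > a_min) & (b > b_min) & (L < l_max) & mask
        return float(brown.sum() / max(mask.sum(), 1))

    # Usage (illustrative): a_star = mean_cie_a(chip_image, chip_mask)
    ```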

  13. Classification of pre-sliced pork and Turkey ham qualities based on image colour and textural features and their relationships with consumer responses.

    PubMed

    Iqbal, Abdullah; Valous, Nektarios A; Mendoza, Fernando; Sun, Da-Wen; Allen, Paul

    2010-03-01

    Images of three qualities of pre-sliced pork and Turkey hams were evaluated for colour and textural features to characterize and classify them, and to model the ham appearance grading and preference responses of a group of consumers. A total of 26 colour features and 40 textural features were extracted for analysis. Using Mahalanobis distance and feature inter-correlation analyses, two best colour [mean of S (saturation in HSV colour space), std. deviation of b*, which indicates blue to yellow in L*a*b* colour space] and three textural features [entropy of b*, contrast of H (hue of HSV colour space), entropy of R (red of RGB colour space)] for pork, and three colour (mean of R, mean of H, std. deviation of a*, which indicates green to red in L*a*b* colour space) and two textural features [contrast of B, contrast of L* (luminance or lightness in L*a*b* colour space)] for Turkey hams were selected as features with the highest discriminant power. High classification performances were reached for both types of hams (>99.5% for pork and >90.5% for Turkey) using the best selected features or combinations of them. In spite of the poor/fair agreement among ham consumers as determined by Kappa analysis (Kappa-value<0.4) for sensory grading (surface colour, colour uniformity, bitonality, texture appearance and acceptability), a dichotomous logistic regression model using the best image features was able to explain the variability of consumers' responses for all sensorial attributes with accuracies higher than 74.1% for pork hams and 83.3% for Turkey hams. Copyright 2009 Elsevier Ltd. All rights reserved.

  14. Application of Sensor Fusion to Improve Uav Image Classification

    NASA Astrophysics Data System (ADS)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

    Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieving higher quality images and accordingly higher accuracy classification results.

  15. Digital enhancement of haematoxylin- and eosin-stained histological images for red-green colour-blind observers.

    PubMed

    Landini, G; Perryer, G

    2009-06-01

    Individuals with red-green colour-blindness (CB) commonly experience great difficulty differentiating between certain histological stain pairs, notably haematoxylin-eosin (H&E). The prevalence of red-green CB is high (6-10% of males), including among medical and laboratory personnel, and raises two major concerns: first, accessibility and equity issues during the education and training of individuals with this disability, and second, the likelihood of errors in critical tasks such as interpreting histological images. Here we show two methods to enhance images of H&E-stained samples so the differently stained tissues can be well discriminated by red-green CBs while remaining usable by people with normal vision. Method 1 involves rotating and stretching the range of H&E hues in the image to span the perceptual range of the CB observers. Method 2 digitally unmixes the original dyes using colour deconvolution into two separate images and repositions the information into hues that are more distinctly perceived. The benefits of these methods were tested in 36 volunteers with normal vision and 11 with red-green CB using a variety of H&E stained tissue sections paired with their enhanced versions. CB subjects reported they could better perceive the different stains using the enhanced images for 85% of preparations (method 1: 90%, method 2: 73%), compared to the H&E-stained original images. Many subjects with normal vision also preferred the enhanced images to the original H&E. The results suggest that these colour manipulations confer considerable advantage for those with red-green colour vision deficiency while not disadvantaging people with normal colour vision.
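
    A minimal sketch of the hue-remapping idea behind method 1, assuming scikit-image: the image is converted to HSV, the hues occupied by haematoxylin and eosin are rotated and stretched onto a wider, more separable range, and the image is converted back. The specific hue windows and target range are illustrative assumptions, not the published parameters.

    ```python
    import numpy as np
    from skimage.color import rgb2hsv, hsv2rgb

    def remap_hue(rgb, src_lo=0.6, src_hi=0.95, dst_lo=0.1, dst_hi=0.6):
        """Rotate and stretch hues in [src_lo, src_hi] onto [dst_lo, dst_hi].
        Hue is expressed on a 0-1 wheel, as returned by rgb2hsv."""
        hsv = rgb2hsv(rgb)
        h = hsv[..., 0]
        inside = (h >= src_lo) & (h <= src_hi)
        scale = (dst_hi - dst_lo) / (src_hi - src_lo)
        hsv[..., 0] = np.where(inside, dst_lo + (h - src_lo) * scale, h)
        return hsv2rgb(hsv)

    # Usage (illustrative): enhanced = remap_hue(he_image)
    # For a deconvolution-style approach (method 2), skimage.color.rgb2hed
    # separates the haematoxylin and eosin contributions into distinct channels.
    ```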

  16. Application of a digital technique in evaluating the reliability of shade guides.

    PubMed

    Cal, E; Sonugelen, M; Guneri, P; Kesercioglu, A; Kose, T

    2004-05-01

    There appears to be a need for a reliable method for quantification of tooth colour and analysis of shade. Therefore, the primary objective of this study was to show the applicability of graphic software in colour analysis and secondly to investigate the reliability of commercial shade guides produced by the same manufacturer, using this digital technique. After confirming the reliability and reproducibility of the digital method by using self-assessed coloured images, three shade guides of the same manufacturer were photographed in daylight and in studio environments with a digital camera and saved in tagged image file format (TIFF) format. Colour analysis of each photograph was performed using the Adobe Photoshop 4.0 graphic program. Luminosity, and red, green, blue (L and RGB) values of each shade tab of each shade guide were measured and the data were subjected to statistical analysis using the repeated measure Anova test. The L and RGB values of the images taken in daylight differed significantly from those of the images taken in studio environment (P < 0.05). In both environments, the luminosity and red values of the shade tabs were significantly different from each other (P < 0.05). It was concluded that, when the environmental conditions were kept constant, the Adobe Photoshop 4.0 colour analysis program could be used to analyse the colour of images. On the other hand, the results revealed that the accuracy of shade tabs widely being used in colour matching should be readdressed.

  17. Identification of the optic nerve head with genetic algorithms.

    PubMed

    Carmona, Enrique J; Rincón, Mariano; García-Feijoó, Julián; Martínez-de-la-Casa, José M

    2008-07-01

    This work proposes creating an automatic system to locate and segment the optic nerve head (ONH) in eye fundus photographic images using genetic algorithms. Domain knowledge is used to create a set of heuristics that guide the various steps involved in the process. Initially, using an eye fundus colour image as input, a set of hypothesis points was obtained that exhibited geometric properties and intensity levels similar to the ONH contour pixels. Next, a genetic algorithm was used to find an ellipse containing the maximum number of hypothesis points in an offset of its perimeter, considering some constraints. The ellipse thus obtained is the approximation to the ONH. The segmentation method is tested in a sample of 110 eye fundus images, belonging to 55 patients with glaucoma (23.1%) and eye hypertension (76.9%) and random selected from an eye fundus image base belonging to the Ophthalmology Service at Miguel Servet Hospital, Saragossa (Spain). The results obtained are competitive with those in the literature. The method's generalization capability is reinforced when it is applied to a different image base from the one used in our study and a discrepancy curve is obtained very similar to the one obtained in our image base. In addition, the robustness of the method proposed can be seen in the high percentage of images obtained with a discrepancy delta<5 (96% and 99% in our and a different image base, respectively). The results also confirm the hypothesis that the ONH contour can be properly approached with a non-deformable ellipse. Another important aspect of the method is that it directly provides the parameters characterising the shape of the papilla: lengths of its major and minor axes, its centre of location and its orientation with regard to the horizontal position.

  18. Organ dose conversion coefficients based on a voxel mouse model and MCNP code for external photon irradiation.

    PubMed

    Zhang, Xiaomin; Xie, Xiangdong; Cheng, Jie; Ning, Jing; Yuan, Yong; Pan, Jie; Yang, Guoshan

    2012-01-01

    A set of conversion coefficients from kerma free-in-air to organ absorbed dose for external photon beams from 10 keV to 10 MeV is presented, based on a newly developed voxel mouse model, for the purpose of radiation effect evaluation. The voxel mouse model was developed from colour images of successive cryosections of a normal nude male mouse, in which 14 organs or tissues were segmented manually and filled with different colours, while each colour was tagged with a specific ID number for implementation of the mouse model in the Monte Carlo N-Particle code (MCNP). Monte Carlo simulation with MCNP was carried out to obtain organ dose conversion coefficients for 22 external monoenergetic photon beams between 10 keV and 10 MeV under five different irradiation geometries (left lateral, right lateral, dorsal-ventral, ventral-dorsal, and isotropic). Organ dose conversion coefficients are presented in tables and compared with published data based on a rat model to investigate the effect of body size and weight on the organ dose. The calculated and comparative results show that the organ dose conversion coefficients vary with photon energy in a similar manner for most organs, except the bone and skin, and that the organ dose is sensitive to body size and weight at photon energies below approximately 0.1 MeV.

  19. Colour Based Image Processing Method for Recognizing Ribbed Smoked Sheet Grade

    NASA Astrophysics Data System (ADS)

    Fibriani, Ike; Sumardi; Bayu Satriya, Alfredo; Budi Utomo, Satryo

    2017-03-01

    This research proposes a colour based image processing technique to recognize the Ribbed Smoked Sheet (RSS) grade so that the RSS sorting process can be faster and more accurate than the traditional one. The RSS sheet image captured by the camera is transformed into a grayscale image to simplify the recognition of rust and mould on the RSS sheet. Then the grayscale image is transformed into a binary image using a threshold value obtained from the RSS 1 reference colour. The grade recognition is determined by counting the white pixel percentage. The result shows that the system has an accuracy of 88%. Most errors occur in RSS 2 recognition, owing to the uneven illumination distribution over the RSS image.
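
    A minimal sketch of the described pipeline (grayscale conversion, thresholding against an RSS 1 reference value, white-pixel percentage) might look as follows; the file name and threshold value are hypothetical, and OpenCV is used here only as a convenient stand-in.

        import cv2
        import numpy as np

        # Hypothetical file name; the threshold would be derived from an RSS 1 reference sheet.
        sheet = cv2.imread("rss_sheet.jpg")
        gray = cv2.cvtColor(sheet, cv2.COLOR_BGR2GRAY)

        reference_threshold = 120                        # assumed value for illustration
        _, binary = cv2.threshold(gray, reference_threshold, 255, cv2.THRESH_BINARY)

        # The fraction of white pixels is the quantity compared against grade cut-offs.
        white_pct = 100.0 * np.count_nonzero(binary) / binary.size
        print(f"white pixels: {white_pct:.1f} %")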

  20. Simultaneous three wavelength imaging with a scanning laser ophthalmoscope.

    PubMed

    Reinholz, F; Ashman, R A; Eikelboom, R H

    1999-11-01

    Various imaging properties of scanning laser ophthalmoscopes (SLO), such as contrast or depth discrimination, are superior to those of the traditional photographic fundus camera. However, most SLO are monochromatic, whereas photographic systems produce colour images, which inherently contain information over a broad wavelength range. An SLO system has been modified to allow simultaneous three-channel imaging. Laser light sources in the visible and infrared spectrum were concurrently launched into the system. Using different wavelength triads, digital fundus images were acquired at high frame rates. Favourable wavelength combinations were established, and high contrast, true (red, green, blue) or false (red, green, infrared) colour images of the retina were recorded. The monochromatic frames which form the colour image exhibit improved distinctness of different retinal structures such as the nerve fibre layer, the blood vessels, and the choroid. A multi-channel SLO combines the advantageous imaging properties of a tunable, monochrome SLO with the benefits and convenience of colour ophthalmoscopy. The option to modify parameters such as wavelength, intensity, gain, beam profile and aperture size independently for every channel gives the system a high degree of versatility. Copyright 1999 Wiley-Liss, Inc.

  1. Automated quantification of renal interstitial fibrosis for computer-aided diagnosis: A comprehensive tissue structure segmentation method.

    PubMed

    Tey, Wei Keat; Kuang, Ye Chow; Ooi, Melanie Po-Leen; Khoo, Joon Joon

    2018-03-01

    Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator of the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses due to the uncertainties in human judgement. This study proposes an automated quantification system for measuring the amount of interstitial fibrosis in renal biopsy images as a consistent basis of comparison among pathologists. The system identifies the renal tissue structures through knowledge-based rules employing colour space transformations and structural feature extraction from the images; in particular, renal glomerulus identification is based on multiscale textural feature analysis and a support vector machine. The regions in the biopsy representing interstitial fibrosis are deduced through the elimination of non-interstitial-fibrosis structures from the biopsy area and quantified as a percentage of the total area of the biopsy sample. The experiments conducted evaluate the system in terms of quantification accuracy, intra- and inter-observer variability in visual quantification by pathologists, and the effect introduced by the automated quantification system on the pathologists' diagnosis. A 40-image ground truth dataset was manually prepared by consulting an experienced pathologist for the validation of the segmentation algorithms. The results from experiments involving experienced pathologists demonstrated a good correlation, with an average error of 9 percentage points, between the automated system's quantification and the pathologists' visual evaluation. Experiments investigating the variability in pathologists, involving samples from 70 kidney patients, also showed the automated quantification error rate to be on par with the average intra-observer variability in pathologists' quantification. The accuracy of the proposed quantification system has been validated with the ground truth dataset and compared against the pathologists' quantification results. It has been shown that the correlation between different pathologists' estimates of interstitial fibrosis area has significantly improved, demonstrating the effectiveness of the quantification system as a diagnostic aid. Copyright © 2017 Elsevier B.V. All rights reserved.
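
    The final elimination-and-quantification step described above can be sketched as follows, assuming the colour and texture segmentation stages have already produced binary masks; the mask names and toy data are illustrative placeholders, not the authors' implementation.

        import numpy as np

        def fibrosis_percentage(tissue_mask, non_fibrosis_masks):
            """Quantify interstitial fibrosis as a share of the biopsy area.

            tissue_mask        : boolean array marking the whole biopsy tissue
            non_fibrosis_masks : list of boolean arrays (e.g. glomeruli, tubules, vessels)
            The masks would come from the colour/texture segmentation described in
            the abstract; this sketch covers only the final quantification step.
            """
            remaining = tissue_mask.copy()
            for m in non_fibrosis_masks:
                remaining &= ~m                      # remove non-fibrosis structures
            return 100.0 * remaining.sum() / tissue_mask.sum()

        # Toy example with random masks standing in for real segmentations.
        rng = np.random.default_rng(1)
        tissue = np.ones((256, 256), dtype=bool)
        glomeruli = rng.random((256, 256)) < 0.2
        tubules = rng.random((256, 256)) < 0.3
        print(f"fibrosis area: {fibrosis_percentage(tissue, [glomeruli, tubules]):.1f} %")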

  2. Bleeding detection in wireless capsule endoscopy using adaptive colour histogram model and support vector classification

    NASA Astrophysics Data System (ADS)

    Mackiewicz, Michal W.; Fisher, Mark; Jamieson, Crawford

    2008-03-01

    Wireless Capsule Endoscopy (WCE) is a colour imaging technology that enables detailed examination of the interior of the gastrointestinal tract. A typical WCE examination takes ~ 8 hours and captures ~ 40,000 useful images. After the examination, the images are viewed as a video sequence, which generally takes a clinician over an hour to analyse. The manufacturers of WCE systems provide certain automatic image analysis functions; e.g., Given Imaging offers the Suspected Blood Indicator (SBI) in their Rapid Reader software, which is designed to report the locations in the video of areas of active bleeding. However, this tool has been reported to have insufficient specificity and sensitivity. Therefore it does not free the specialist from reviewing the entire footage and has been suggested for use only as a fast screening tool. In this paper we propose a method of bleeding detection that uses, in its first stage, Hue-Saturation-Intensity (HSI) colour histograms to track moving background and bleeding colour distributions over time. Such an approach addresses the problem caused by drastic changes in blood colour distribution that occur when blood is altered by gastrointestinal fluids, and it also allows detection of other red lesions which, although usually "less red" than fresh bleeding, can still be detected when the difference between their colour distributions and the background is large enough. In the second stage of our method, we analyse all candidate blood frames by extracting colour (HSI) and texture (LBP) features from the suspicious image regions (obtained in the first stage) and their neighbourhoods, and classifying them with a Support Vector Classifier into Bleeding, Lesion and Normal classes. We show that our algorithm compares favourably with the SBI on a test set of 84 full-length videos.
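
    The second-stage classification can be sketched roughly as follows, using HSV statistics as a stand-in for HSI, scikit-image's uniform LBP and scikit-learn's SVC; the feature layout, patch sizes and toy labels are assumptions rather than the authors' implementation.

        import colorsys
        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        def region_features(rgb_patch):
            """Mean HSV colour plus a uniform-LBP texture histogram for one region."""
            flat = rgb_patch.reshape(-1, 3) / 255.0
            hsv = np.array([colorsys.rgb_to_hsv(*px) for px in flat])
            colour = hsv.mean(axis=0)
            gray = rgb_patch.mean(axis=2).astype(np.uint8)
            lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
            hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
            return np.concatenate([colour, hist])

        # Toy training set: synthetic patches with labels
        # 0 = Normal, 1 = Lesion, 2 = Bleeding (stand-in data, not the paper's videos).
        rng = np.random.default_rng(0)
        patches = rng.integers(0, 256, size=(30, 16, 16, 3), dtype=np.uint8)
        labels = rng.integers(0, 3, size=30)
        X = np.array([region_features(p) for p in patches])
        clf = SVC(kernel="rbf").fit(X, labels)
        print(clf.predict(X[:5]))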

  3. Tissue Doppler imaging and echo-Doppler findings associated with a mitral valve stenosis with an immobile posterior valve leaflet in a bull terrier.

    PubMed

    Tidholm, A; Nicolle, A P; Carlos, C; Gouni, V; Caruso, J L; Pouchelon, J L; Chetboul, V

    2004-04-01

    A mitral valve stenosis was diagnosed in a 2-year-old female Bull Terrier by use of two-dimensional (2-D) and M-mode echocardiography, colour-flow imaging and spectral Doppler examinations. Tissue Doppler Imaging was also performed to assess the segmental radial myocardial motion. The mitral valve stenosis was characterized by a decreased mitral orifice area/left ventricle area ratio (0.14), an increased early diastolic flow velocity (E wave = 1.9 m/s), a prolonged pressure half-time (106 ms) and a decreased E-F slope (4.5 cm/s) on pulsed-wave Doppler examination. This mitral stenosis was associated with an immobile posterior leaflet, as seen on 2-D and M-mode echocardiography. Immobility of the posterior mitral leaflet is considered to be a rare finding in humans and, to our knowledge, has not been precisely documented in dogs with mitral valve stenosis.

  4. Colour helps to solve the binocular matching problem

    PubMed Central

    den Ouden, HEM; van Ee, R; de Haan, EHF

    2005-01-01

    The spatial differences between the two retinal images, called binocular disparities, can be used to recover the three-dimensional (3D) aspects of a scene. The computation of disparity depends upon the correct identification of corresponding features in the two images. Understanding what image features are used by the brain to solve this binocular matching problem is an important issue in research on stereoscopic vision. The role of colour in binocular vision is controversial and it has been argued that colour is ineffective in achieving binocular vision. In the current experiment subjects were required to indicate the amount of perceived depth. The stimulus consisted of an array of fronto-parallel bars uniformly distributed in a constant sized volume. We studied the perceived depth in those 3D stimuli by manipulating both colour (monochrome, trichrome) and luminance (congruent, incongruent). Our results demonstrate that the amount of perceived depth was influenced by colour, indicating that the visual system uses colour to achieve binocular matching. Physiological data have revealed cortical cells in macaque V2 that are tuned both to binocular disparity and to colour. We suggest that one of the functional roles of these cells may be to help solve the binocular matching problem. PMID:15975983

  5. Colour helps to solve the binocular matching problem.

    PubMed

    den Ouden, H E M; van Ee, R; de Haan, E H F

    2005-09-01

    The spatial differences between the two retinal images, called binocular disparities, can be used to recover the three-dimensional (3D) aspects of a scene. The computation of disparity depends upon the correct identification of corresponding features in the two images. Understanding what image features are used by the brain to solve this binocular matching problem is an important issue in research on stereoscopic vision. The role of colour in binocular vision is controversial and it has been argued that colour is ineffective in achieving binocular vision. In the current experiment subjects were required to indicate the amount of perceived depth. The stimulus consisted of an array of fronto-parallel bars uniformly distributed in a constant sized volume. We studied the perceived depth in those 3D stimuli by manipulating both colour (monochrome, trichrome) and luminance (congruent, incongruent). Our results demonstrate that the amount of perceived depth was influenced by colour, indicating that the visual system uses colour to achieve binocular matching. Physiological data have revealed cortical cells in macaque V2 that are tuned both to binocular disparity and to colour. We suggest that one of the functional roles of these cells may be to help solve the binocular matching problem.

  6. A review of tooth colour and whiteness.

    PubMed

    Joiner, Andrew; Hopkinson, Ian; Deng, Yan; Westland, Stephen

    2008-01-01

    To review current knowledge on the definition of tooth whiteness and its application within dentistry, together with the measured range of tooth colours. 'Medline' and 'ISI Web of Science' databases were searched electronically with the key words tooth, teeth, color, colour, white and whiteness. The application of colour science within dentistry has permitted the measurement of tooth colour in an objective way, with the most common colour space in current use being CIELAB (Commission Internationale de l'Eclairage). Indeed, many investigators from a range of different countries have reported L*, a* and b* values for teeth measured in vivo using instrumental techniques such as spectrophotometers, colorimeters and image analysis of digital images. In general, these studies show a large range in L*, a* and b* values, but consistently show that there is a significant contribution of the b* value, or yellowness, to natural tooth colour. Further developments in colour science have led to the description of tooth whiteness and changes in tooth whiteness based on whiteness indices, with the most relevant and applicable being the WIO whiteness index, a modified version of the CIE whiteness index.

  7. Quantitative phase imaging of human red blood cells using phase-shifting white light interference microscopy with colour fringe analysis

    NASA Astrophysics Data System (ADS)

    Singh Mehta, Dalip; Srivastava, Vishal

    2012-11-01

    We report quantitative phase imaging of human red blood cells (RBCs) using phase-shifting interference microscopy. Five phase-shifted white light interferograms are recorded using a colour charge-coupled device (CCD) camera. The white light interferograms are decomposed into red, green, and blue colour components. The phase-shifted interferograms of each colour are then processed by phase-shifting analysis, and phase maps for the red, green, and blue colours are reconstructed. Wavelength-dependent refractive index profiles of the RBCs are computed from a single set of white light interferograms. The present technique has great potential for non-invasive determination of refractive index variation and morphological features of cells and tissues.
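
    The abstract does not state which five-frame algorithm was used; assuming pi/2 phase steps, a generic Hariharan-style phase reconstruction applied per colour channel could be sketched as follows (toy data, and the sign convention may differ from the authors' processing).

        import numpy as np

        def hariharan_phase(frames):
            """Wrapped phase from five pi/2-step interferograms (Hariharan algorithm).

            frames: array of shape (5, H, W) for one colour channel.  A generic
            formula assumed for illustration only.
            """
            i1, i2, i3, i4, i5 = frames
            return np.arctan2(2.0 * (i2 - i4), 2.0 * i3 - i1 - i5)   # sign convention may vary

        # Decompose colour interferograms into R, G, B and reconstruct per-channel phase.
        rng = np.random.default_rng(0)
        colour_frames = rng.random((5, 64, 64, 3))            # toy stand-in data
        phase_maps = {c: hariharan_phase(colour_frames[..., k])
                      for k, c in enumerate("RGB")}
        print({c: p.shape for c, p in phase_maps.items()})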

  8. Rock images classification by using deep convolution neural network

    NASA Astrophysics Data System (ADS)

    Cheng, Guojian; Guo, Wenhui

    2017-08-01

    Granularity analysis is one of the most essential issues in rock authentication under the microscope. To improve the efficiency and accuracy of traditional manual work, a convolutional neural network based method is proposed for granularity analysis of thin section images, which extracts features from image samples and builds a classifier to recognize the granularity of input image samples. 4800 samples from the Ordos basin are used for experiments in the HSV, YCbCr and RGB colour spaces respectively. On the test dataset, the correct classification rate in the RGB colour space is 98.5%, and the rates in the HSV and YCbCr colour spaces are also reliable. The results show that the convolutional neural network can classify the rock images with high reliability.
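
    As an illustrative sketch only (the abstract does not give the network architecture), a small CNN classifier over colour thin-section patches could look like the following; the layer sizes, class count and input resolution are assumptions.

        import numpy as np
        import tensorflow as tf

        # Minimal CNN in the spirit of the abstract; inputs are thin-section patches in a
        # chosen colour space (RGB here; the paper also tests HSV and YCbCr).
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(64, 64, 3)),
            tf.keras.layers.Conv2D(16, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(4, activation="softmax"),   # e.g. 4 granularity classes
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

        # Toy data standing in for the 4800 Ordos basin samples.
        x = np.random.rand(32, 64, 64, 3).astype("float32")
        y = np.random.randint(0, 4, size=32)
        model.fit(x, y, epochs=1, verbose=0)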

  9. Real-time colour hologram generation based on ray-sampling plane with multi-GPU acceleration.

    PubMed

    Sato, Hirochika; Kakue, Takashi; Ichihashi, Yasuyuki; Endo, Yutaka; Wakunami, Koki; Oi, Ryutaro; Yamamoto, Kenji; Nakayama, Hirotaka; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2018-01-24

    Although electro-holography can reconstruct three-dimensional (3D) motion pictures, its computational cost is too heavy to allow for real-time reconstruction of 3D motion pictures. This study explores accelerating colour hologram generation using light-ray information on a ray-sampling (RS) plane with a graphics processing unit (GPU) to realise a real-time holographic display system. We refer to an image corresponding to light-ray information as an RS image. Colour holograms were generated from three RS images with resolutions of 2,048 × 2,048; 3,072 × 3,072 and 4,096 × 4,096 pixels. The computational results indicate that the generation of the colour holograms using multiple GPUs (NVIDIA Geforce GTX 1080) was approximately 300-500 times faster than those generated using a central processing unit. In addition, the results demonstrate that 3D motion pictures were successfully reconstructed from RS images of 3,072 × 3,072 pixels at approximately 15 frames per second using an electro-holographic reconstruction system in which colour holograms were generated from RS images in real time.

  10. Flame colour characterization in the visible and infrared spectrum using a digital camera and image processing

    NASA Astrophysics Data System (ADS)

    Huang, Hua-Wei; Zhang, Yang

    2008-08-01

    An attempt has been made to characterize the colour spectrum of methane flames under various burning conditions using RGB and HSV colour models instead of resolving the real physical spectrum. The results demonstrate that each type of flame has its own characteristic distribution in both the RGB and HSV spaces. It has also been observed that the averaged B and G values in the RGB model represent well the CH* and C2* emission of premixed methane flames. These features may be utilized for flame measurement and monitoring. The great advantage of using a conventional camera for monitoring flame properties based on the colour spectrum is that it is readily available, easy to interface with a computer, cost effective and has a certain spatial resolution. Furthermore, it has been demonstrated that a conventional digital camera is able to image flames not only in the visible spectrum but also in the infrared. This feature is useful in avoiding the problem of image saturation typically encountered in capturing very bright sooty flames. As a result, further digital image processing and quantitative information extraction is possible. It has been identified that an infrared image also has its own distribution in both the RGB and HSV colour spaces in comparison with a flame image in the visible spectrum.
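
    The basic measurement behind this characterisation, averaging the RGB and HSV values of a flame image, can be sketched as follows; the file name is hypothetical, and OpenCV's BGR channel order and 0-179 hue range apply.

        import cv2

        # Characterise a flame image by its averaged RGB and HSV values
        # (a sketch of the colour-model analysis described, not the authors' code).
        frame = cv2.imread("flame.jpg")                       # BGR order in OpenCV
        mean_b, mean_g, mean_r = frame.reshape(-1, 3).mean(axis=0)
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mean_h, mean_s, mean_v = hsv.reshape(-1, 3).mean(axis=0)
        print(f"mean R={mean_r:.1f} G={mean_g:.1f} B={mean_b:.1f}")
        print(f"mean H={mean_h:.1f} S={mean_s:.1f} V={mean_v:.1f}")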

  11. Dual mode operation, highly selective nanohole array-based plasmonic colour filters

    NASA Astrophysics Data System (ADS)

    Fouladi Mahani, Fatemeh; Mokhtari, Arash; Mehran, Mahdiyeh

    2017-09-01

    The use of nanostructured metal films as plasmonic colour filters (PCFs) has evolved remarkably as an alternative to the conventional technologies of chemical colour filtering. However, most of the proposed PCFs exhibit poor colour purity, focusing on generating either additive or subtractive colours. In this paper, we present dual mode operation PCFs employing an opaque aluminium film patterned with sub-wavelength holes. Subtractive colours such as cyan, magenta, and yellow are produced by the reflection mode of these filters, yielding optical efficiencies as high as 70%-80% and full widths at half maximum of the stop-bands of 40-50 nm. The colour selectivity of the transmission mode for the additive colours is also significant, owing to the enhanced performance obtained through the use of a relatively thick aluminium film in contact with a modified dielectric environment. These filters provide a simple design with one-step lithography in addition to compatibility with conventional CMOS processes. Moreover, they are polarization insensitive due to their symmetric geometry. A complete palette of pure subtractive and additive colours has been realized, with potential applications such as multispectral imaging, CMOS image sensors, displays, and colour printing.

  12. Dual mode operation, highly selective nanohole array-based plasmonic colour filters.

    PubMed

    Mahani, Fatemeh Fouladi; Mokhtari, Arash; Mehran, Mahdiyeh

    2017-09-20

    The use of nanostructured metal films as plasmonic colour filters (PCFs) has evolved remarkably as an alternative to the conventional technologies of chemical colour filtering. However, most of the proposed PCFs exhibit poor colour purity, focusing on generating either additive or subtractive colours. In this paper, we present dual mode operation PCFs employing an opaque aluminium film patterned with sub-wavelength holes. Subtractive colours such as cyan, magenta, and yellow are produced by the reflection mode of these filters, yielding optical efficiencies as high as 70%-80% and full widths at half maximum of the stop-bands of 40-50 nm. The colour selectivity of the transmission mode for the additive colours is also significant, owing to the enhanced performance obtained through the use of a relatively thick aluminium film in contact with a modified dielectric environment. These filters provide a simple design with one-step lithography in addition to compatibility with conventional CMOS processes. Moreover, they are polarization insensitive due to their symmetric geometry. A complete palette of pure subtractive and additive colours has been realized, with potential applications such as multispectral imaging, CMOS image sensors, displays, and colour printing.

  13. Influence of Domain Shift Factors on Deep Segmentation of the Drivable Path of AN Autonomous Vehicle

    NASA Astrophysics Data System (ADS)

    Bormans, R. P. A.; Lindenbergh, R. C.; Karimi Nejadasl, F.

    2018-05-01

    One of the biggest challenges for an autonomous vehicle (and hence the WEpod) is to see the world as humans would see it. This understanding is the basis for a successful and reliable future of autonomous vehicles. Real-world data and semantic segmentation are generally used to achieve a full understanding of the vehicle's surroundings. However, a pretrained segmentation network deployed to a new, previously unseen domain will not attain performance similar to that on the domain it was trained on, due to the differences between the domains. Although research has been done on mitigating this domain shift, the factors that cause these differences have not yet been fully explored. We fill this gap by investigating several such factors. A base network was created by a two-step fine-tuning procedure on a convolutional neural network (SegNet) which is pretrained on CityScapes (a dataset for semantic segmentation). The first tuning step is based on RobotCar (a road scenery dataset recorded in Oxford, UK), after which the network is fine-tuned a second time on the KITTI dataset (road scenery recorded in Germany). With this base network, experiments are used to assess the importance of factors such as horizon line, colour and training order for successful domain adaptation. In this case the domain adaptation is from the KITTI and RobotCar domains to the WEpod domain. For evaluation, ground-truth labels are created in a weakly-supervised setting. Training on greyscale images instead of RGB images had a negative influence, resulting in drops of IoU values of up to 23.9% for WEpod test images. The training order is a main contributor to domain adaptation, with an increase in IoU of 4.7%. This shows that the target domain (WEpod) is more closely related to RobotCar than to KITTI.
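
    The IoU metric used to report these results is straightforward to compute for a single class; a minimal sketch with toy masks (not the WEpod data) follows.

        import numpy as np

        def iou(pred_mask, gt_mask):
            """Intersection-over-Union for one class, the metric used to score the
            drivable-path segmentation (the metric only, not the SegNet training)."""
            inter = np.logical_and(pred_mask, gt_mask).sum()
            union = np.logical_or(pred_mask, gt_mask).sum()
            return inter / union if union else 0.0

        # Toy masks standing in for predicted and ground-truth drivable-path labels.
        pred = np.zeros((4, 4), bool); pred[1:, :] = True
        gt = np.zeros((4, 4), bool); gt[2:, :] = True
        print(f"IoU = {iou(pred, gt):.2f}")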

  14. Discriminant Analysis of Defective and Non-Defective Field Pea (Pisum sativum L.) into Broad Market Grades Based on Digital Image Features.

    PubMed

    McDonald, Linda S; Panozzo, Joseph F; Salisbury, Phillip A; Ford, Rebecca

    2016-01-01

    Field peas (Pisum sativum L.) are generally traded based on seed appearance, which subjectively defines broad market-grades. In this study, we developed an objective Linear Discriminant Analysis (LDA) model to classify market grades of field peas based on seed colour, shape and size traits extracted from digital images. Seeds were imaged in a high-throughput system consisting of a camera and laser positioned over a conveyor belt. Six colour intensity digital images were captured (under 405, 470, 530, 590, 660 and 850nm light) for each seed, and surface height was measured at each pixel by laser. Colour, shape and size traits were compiled across all seed in each sample to determine the median trait values. Defective and non-defective seed samples were used to calibrate and validate the model. Colour components were sufficient to correctly classify all non-defective seed samples into correct market grades. Defective samples required a combination of colour, shape and size traits to achieve 87% and 77% accuracy in market grade classification of calibration and validation sample-sets respectively. Following these results, we used the same colour, shape and size traits to develop an LDA model which correctly classified over 97% of all validation samples as defective or non-defective.
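
    A minimal sketch of the classification stage, a linear discriminant analysis fitted to per-sample median colour, shape and size features, could look as follows with scikit-learn; the feature dimensionality, grade count and data are synthetic placeholders, not the study's measurements.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 9))            # e.g. 6 colour medians + 3 shape/size medians
        grades = rng.integers(0, 4, size=60)    # 4 hypothetical market grades

        lda = LinearDiscriminantAnalysis().fit(X, grades)
        print("training accuracy:", lda.score(X, grades))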

  15. Discriminant Analysis of Defective and Non-Defective Field Pea (Pisum sativum L.) into Broad Market Grades Based on Digital Image Features

    PubMed Central

    McDonald, Linda S.; Panozzo, Joseph F.; Salisbury, Phillip A.; Ford, Rebecca

    2016-01-01

    Field peas (Pisum sativum L.) are generally traded based on seed appearance, which subjectively defines broad market-grades. In this study, we developed an objective Linear Discriminant Analysis (LDA) model to classify market grades of field peas based on seed colour, shape and size traits extracted from digital images. Seeds were imaged in a high-throughput system consisting of a camera and laser positioned over a conveyor belt. Six colour intensity digital images were captured (under 405, 470, 530, 590, 660 and 850nm light) for each seed, and surface height was measured at each pixel by laser. Colour, shape and size traits were compiled across all seed in each sample to determine the median trait values. Defective and non-defective seed samples were used to calibrate and validate the model. Colour components were sufficient to correctly classify all non-defective seed samples into correct market grades. Defective samples required a combination of colour, shape and size traits to achieve 87% and 77% accuracy in market grade classification of calibration and validation sample-sets respectively. Following these results, we used the same colour, shape and size traits to develop an LDA model which correctly classified over 97% of all validation samples as defective or non-defective. PMID:27176469

  16. Optimisation approaches for concurrent transmitted light imaging during confocal microscopy.

    PubMed

    Collings, David A

    2015-01-01

    The transmitted light detectors present on most modern confocal microscopes are an under-utilised tool for the live imaging of plant cells. As the light forming the image in this detector is not passed through a pinhole, out-of-focus light is not removed. It is this extended focus that allows the transmitted light image to provide cellular and organismal context for fluorescence optical sections generated confocally. More importantly, the transmitted light detector provides images that have spatial and temporal registration with the fluorescence images, unlike images taken with a separately-mounted camera. Because plant samples often make transmitted light imaging difficult, owing to the presence of pigments and air pockets in leaves, this study documents several approaches to improving transmitted light images, beginning with ensuring that the light paths through the microscope are correctly aligned (Köhler illumination). Pigmented samples can be imaged in real colour using sequential scanning with red, green and blue lasers. The resulting transmitted light images can be optimised and merged in ImageJ to generate colour images that maintain registration with concurrent fluorescence images. For faster imaging of pigmented samples, transmitted light images can be formed with non-absorbed wavelengths. Transmitted light images of Arabidopsis leaves expressing GFP can be improved by concurrent illumination with green and blue light. If the blue light used for YFP excitation is blocked from the transmitted light detector with a cheap coloured glass filter, the non-absorbed green light will form an improved transmitted light image. Changes in sample colour can be quantified by transmitted light imaging. This has been documented in red onion epidermal cells, where changes in vacuolar pH triggered by the weak base methylamine result in measurable colour changes in the vacuolar anthocyanin. Many plant cells contain visible levels of pigment. The transmitted light detector provides a useful tool for documenting and measuring changes in these pigments while maintaining registration with confocal imaging.
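
    The merge step described for ImageJ, combining sequential red, green and blue transmitted-light scans into one registered colour composite, can be approximated in a few lines; the file names and per-channel normalisation below are assumptions for illustration only.

        import numpy as np
        from PIL import Image

        # Merge three sequential transmitted-light scans (taken with red, green and blue
        # lasers) into one colour composite, analogous to the ImageJ merge step described.
        channels = [np.asarray(Image.open(f).convert("L"), dtype=np.float32)
                    for f in ("tl_red.tif", "tl_green.tif", "tl_blue.tif")]
        # Normalise each channel independently before stacking into an RGB image.
        rgb = np.dstack([c / (c.max() + 1e-9) for c in channels])
        Image.fromarray((rgb * 255).astype(np.uint8)).save("tl_colour_composite.tif")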

  17. Ripening of salami: assessment of colour and aspect evolution using image analysis and multivariate image analysis.

    PubMed

    Fongaro, Lorenzo; Alamprese, Cristina; Casiraghi, Ernestina

    2015-03-01

    During ripening of salami, colour changes occur due to oxidation phenomena involving myoglobin. Moreover, shrinkage due to dehydration results in aspect modifications, mainly ascribable to fat aggregation. The aim of this work was the application of image analysis (IA) and multivariate image analysis (MIA) techniques to the study of colour and aspect changes occurring in salami during ripening. IA results showed that red, green, blue, and intensity parameters decreased due to the development of a global darker colour, while Heterogeneity increased due to fat aggregation. By applying MIA, different salami slice areas corresponding to fat and three different degrees of oxidised meat were identified and quantified. It was thus possible to study the trend of these different areas as a function of ripening, making objective an evaluation usually performed by subjective visual inspection. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Colour detection thresholds in faces and colour patches.

    PubMed

    Tan, Kok Wei; Stephen, Ian D

    2013-01-01

    Human facial skin colour reflects individuals' underlying health (Stephen et al 2011 Evolution & Human Behavior 32 216-227); and enhanced facial skin CIELab b* (yellowness), a* (redness), and L* (lightness) are perceived as healthy (also Stephen et al 2009a International Journal of Primatology 30 845-857). Here, we examine Malaysian Chinese participants' detection thresholds for CIELab L* (lightness), a* (redness), and b* (yellowness) colour changes in Asian, African, and Caucasian faces and skin-coloured patches. Twelve face photos and three skin-coloured patches were transformed to produce four pairs of images of each individual face and colour patch with different amounts of red, yellow, or lightness, from very subtle (deltaE = 1.2) to quite large differences (deltaE = 9.6). Participants were asked to decide which of sequentially displayed, paired same-face images or colour patches were lighter, redder, or yellower. Changes in facial redness, followed by changes in yellowness, were more easily discriminated than changes in luminance. However, visual sensitivity was not greater for redness and yellowness in non-face stimuli, suggesting a special salience of red facial skin colour. Participants were also significantly better at recognizing colour differences in own-race (Asian) and Caucasian faces than in African faces, suggesting the existence of a cross-race effect in discriminating facial colours. Humans' colour vision may have been selected for skin colour signalling (Changizi et al 2006 Biology Letters 2 217-221), enabling individuals to perceive subtle changes in skin colour that reflect health and emotional status.
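
    The deltaE values quoted above are colour differences in CIELAB; assuming the 1976 Euclidean formula (the abstract does not specify which deltaE was used), they can be computed as follows with hypothetical L*a*b* coordinates.

        import numpy as np

        def delta_e_cie76(lab1, lab2):
            """Euclidean CIELAB colour difference (deltaE*ab, 1976 formula)."""
            return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

        # Hypothetical base colour with a subtle and a large redness (a*) shift.
        base = (65.0, 14.0, 17.0)
        print(delta_e_cie76(base, (65.0, 15.2, 17.0)))   # 1.2, like the smallest step
        print(delta_e_cie76(base, (65.0, 23.6, 17.0)))   # 9.6, like the largest step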

  19. Strategies for Prompt Searches for GRB Afterglows: The Discovery of GRB 001011 Optical/Near-Infrared Counterpart Using Colour-Colour Selection

    NASA Technical Reports Server (NTRS)

    Gorosabel, J.; Fynbo, J. U.; Hjorth, J.; Wolf, C.; Andersen, M. I.; Pedersen, H.; Christensen, L.; Jensen, B. L.; Moller, P.; Afonso, J.; hide

    2001-01-01

    We report the discovery of the optical and near-infrared counterpart to GRB 001011. The GRB 001011 error box determined by Beppo-SAX was simultaneously imaged in the near-infrared by the 3.58-m New Technology Telescope and in the optical by the 1.54-m Danish Telescope about 8 hr after the gamma-ray event. We implement the colour-colour discrimination technique proposed by Rhoads (2001) and extend it using near-IR data as well. We present the results provided by an automatic colour-colour discrimination pipeline developed to discern the different populations of objects present in the GRB 001011 error box. Our software revealed three candidates based on single-epoch images. Second-epoch observations carried out approximately 3.2 days after the burst revealed that the most likely candidate had faded, thus identifying it as the counterpart to the GRB. In deep R-band images obtained 7 months after the burst, a faint (R = 25.38 ± 0.25) elongated object, presumably the host galaxy of GRB 001011, was detected at the position of the afterglow. The GRB 001011 afterglow is the first discovered with the assistance of colour-colour diagram techniques. We discuss the advantages of using this method and its application to error boxes determined by future missions.

  20. The assessment of cortisol in human hair: associations with sociodemographic variables and potential confounders.

    PubMed

    Dettenborn, L; Tietze, A; Kirschbaum, C; Stalder, T

    2012-11-01

    To inform the future use of hair cortisol measurement, we have investigated influences of potential confounding variables (natural hair colour, frequency of hair washes, age, sex, oral contraceptive (OC) use and smoking status) on hair cortisol levels. The main study sample comprised 360 participants (172 women) covering a wide range of ages (1-91 years; mean = 25.95). In addition, to more closely examine influences of natural hair colour and young age on hair cortisol levels, two additional samples comprising 69 participants with natural blond or dark brown hair (hair colour sample) as well as 28 young children and 34 adults (young age sample) were recruited. Results revealed a lack of an effect for natural hair colour, OC use, and smoking status on hair cortisol levels (all p's >0.10). No influence of the frequency of hair washes was seen for proximal hair segments (p = 0.335), but an effect was seen for the third hair segment, indicating lower cortisol content (p = 0.008). We found elevated hair cortisol levels in young children and older adults (p < 0.001). Finally, men showed higher hair cortisol levels than women (p = 0.002). The present data indicate that hair cortisol measurement provides a useful tool in stress-related psychobiological research when applied with the consideration of possible confounders including age and sex.

  1. Pseudo colour visualization of fused multispectral laser scattering images for optical diagnosis of rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Zabarylo, U.; Minet, O.

    2010-01-01

    Investigations on the application of optical procedures for the diagnosis of rheumatism using scattered light images are only at the beginning, both in terms of new image-processing methods and subsequent clinical application. For semi-automatic diagnosis using laser light, the multispectral scattered light images are registered and overlaid into pseudo-coloured images, which depict diagnostically essential content by visually highlighting pathological changes.

  2. Synaesthetic colour in the brain: beyond colour areas. A functional magnetic resonance imaging study of synaesthetes and matched controls.

    PubMed

    van Leeuwen, Tessa M; Petersson, Karl Magnus; Hagoort, Peter

    2010-08-10

    In synaesthesia, sensations in a particular modality cause additional experiences in a second, unstimulated modality (e.g., letters elicit colour). Understanding how synaesthesia is mediated in the brain can help to understand normal processes of perceptual awareness and multisensory integration. In several neuroimaging studies, enhanced brain activity for grapheme-colour synaesthesia has been found in ventral-occipital areas that are also involved in real colour processing. Our question was whether the neural correlates of synaesthetically induced colour and real colour experience are truly shared. First, in a free viewing functional magnetic resonance imaging (fMRI) experiment, we located main effects of synaesthesia in left superior parietal lobule and in colour related areas. In the left superior parietal lobe, individual differences between synaesthetes (projector-associator distinction) also influenced brain activity, confirming the importance of the left superior parietal lobe for synaesthesia. Next, we applied a repetition suppression paradigm in fMRI, in which a decrease in the BOLD (blood-oxygenated-level-dependent) response is generally observed for repeated stimuli. We hypothesized that synaesthetically induced colours would lead to a reduction in BOLD response for subsequently presented real colours, if the neural correlates were overlapping. We did find BOLD suppression effects induced by synaesthesia, but not within the colour areas. Because synaesthetically induced colours were not able to suppress BOLD effects for real colour, we conclude that the neural correlates of synaesthetic colour experience and real colour experience are not fully shared. We propose that synaesthetic colour experiences are mediated by higher-order visual pathways that lie beyond the scope of classical, ventral-occipital visual areas. Feedback from these areas, in which the left parietal cortex is likely to play an important role, may induce V4 activation and the percept of synaesthetic colour.

  3. The effect of colour congruency on shape discriminations of novel objects.

    PubMed

    Nicholson, Karen G; Humphrey, G Keith

    2004-01-01

    Although visual object recognition is primarily shape driven, colour assists the recognition of some objects. It is unclear, however, just how colour information is coded with respect to shape in long-term memory and how the availability of colour in the visual image facilitates object recognition. We examined the role of colour in the recognition of novel, 3-D objects by manipulating the congruency of object colour across the study and test phases, using an old/new shape-identification task. In experiment 1, we found that participants were faster at correctly identifying old objects on the basis of shape information when these objects were presented in their original colour, rather than in a different colour. In experiments 2 and 3, we found that participants were faster at correctly identifying old objects on the basis of shape information when these objects were presented with their original part-colour conjunctions, rather than in different or in reversed part-colour conjunctions. In experiment 4, we found that participants were quite poor at the verbal recall of part-colour conjunctions for correctly identified old objects, presented as grey-scale images at test. In experiment 5, we found that participants were significantly slower at correctly identifying old objects when object colour was incongruent across study and test, than when background colour was incongruent across study and test. The results of these experiments suggest that both shape and colour information are stored as part of the long-term representation of these novel objects. Results are discussed in terms of how colour might be coded with respect to shape in stored object representations.

  4. Post-processing open-source software for the CBCT monitoring of periapical lesions healing following endodontic treatment: technical report of two cases.

    PubMed

    Villoria, Eduardo M; Lenzi, Antônio R; Soares, Rodrigo V; Souki, Bernardo Q; Sigurdsson, Asgeir; Marques, Alexandre P; Fidel, Sandra R

    2017-01-01

    To describe the use of open-source software for the post-processing of CBCT imaging for the assessment of periapical lesion development after endodontic treatment. CBCT scans were retrieved from the endodontic records of two patients. Three-dimensional virtual models, voxel counting, volumetric measurement (mm³) and mean intensity of the periapical lesion were obtained with the ITK-SNAP v. 3.0 software. Three-dimensional models of the lesions were aligned and overlapped using the MeshLab software, which performed an automatic registration of the anatomical structures based on the best fit. Qualitative and quantitative analyses of the changes in lesion size after treatment were performed with the 3DMeshMetric software. ITK-SNAP v. 3.0 showed smaller values for the voxel count and the volume of the lesion segmented in yellow, indicating a reduction in lesion volume after treatment. A higher value of the mean intensity of the segmented image in yellow was also observed, which suggested new bone formation. The colour mapping and "point value" tools allowed visualization of the reduction of the periapical lesions in several regions. Researchers and clinicians thus have the opportunity to use open-source software in the monitoring of endodontic periapical lesions.

  5. Multiple Illuminant Colour Estimation via Statistical Inference on Factor Graphs.

    PubMed

    Mutimbu, Lawrence; Robles-Kelly, Antonio

    2016-08-31

    This paper presents a method to recover a spatially varying illuminant colour estimate from scenes lit by multiple light sources. Starting with the image formation process, we formulate the illuminant recovery problem in a statistically data-driven setting. To do this, we use a factor graph defined across the scale space of the input image. In the graph, we utilise a set of illuminant prototypes computed using a data-driven approach. As a result, our method delivers a pixelwise illuminant colour estimate without the need for libraries or user input. The use of a factor graph also allows the illuminant estimates to be recovered using a maximum a posteriori (MAP) inference process. Moreover, we compute the probability marginals by performing a Delaunay triangulation on our factor graph. We illustrate the utility of our method for pixelwise illuminant colour recovery on widely available datasets and compare against a number of alternatives. We also show sample colour correction results on real-world images.

  6. [Color Doppler ultrasonography--a new imaging procedure in maxillofacial surgery].

    PubMed

    Reinert, S; Lentrodt, J

    1991-01-01

    Colour Doppler ultrasonography shows blood flow in real time and colour by combining the features of real time B mode ultrasound and Doppler. At each point in the image the returning signal is interrogated for both amplitude and frequency information. The resulting image shows all non-moving structures in shades of gray and moving structures in shades of red or blue depending on direction and velocity. The technique of colour Doppler ultrasonography and our experiences in 63 examinations are described. The clinical application of this new simple non-invasive method in maxillo-facial surgery is discussed.

  7. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy

    PubMed Central

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md. Shamin

    2015-01-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At the receiver end, the colour information is extracted from the colour frame and then added to colourise the grey-scale frames. After a certain number of grey-scale frames, another colour frame is sent followed by the same number of grey-scale frames. This process is repeated until the end of the video sequence to maintain the colour similarity. As a result, over 50% of RF transmission power can be saved using the proposed scheme, which will eventually lead to a battery life extension of the capsule by 4–7 h. The reproduced colour images have been evaluated both statistically and subjectively by professional gastroenterologists. The algorithm is finally implemented using a WCE prototype and the performance is validated using an ex-vivo trial. PMID:26609405
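
    The receiver-side colourisation can be approximated by keeping the chroma of the most recent colour frame and replacing only the luminance with each incoming grey-scale frame; this is a simplified stand-in for the dictionary-based scheme (which the abstract does not fully specify), with toy frames in place of WCE data.

        import cv2
        import numpy as np

        def colourise(gray_frame, colour_reference):
            """Recombine an incoming grey-scale frame with the chroma of the last
            colour frame, assuming consecutive frames have similar chromaticity."""
            ycrcb = cv2.cvtColor(colour_reference, cv2.COLOR_BGR2YCrCb)
            ycrcb[..., 0] = gray_frame                       # replace luminance only
            return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

        # Toy frames standing in for capsule endoscopy data.
        reference = np.random.randint(0, 256, (64, 64, 3), np.uint8)
        grey = cv2.cvtColor(np.random.randint(0, 256, (64, 64, 3), np.uint8),
                            cv2.COLOR_BGR2GRAY)
        print(colourise(grey, reference).shape)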

  8. New opportunities in planetary geomorphology: an assessment of the capabilities of the Colour and Stereo Surface Imaging System (CaSSIS) on The Exomars Trace Gas Orbiter through Image Simulation.

    NASA Astrophysics Data System (ADS)

    Tornabene, Livio Leonardo; Seelos, Frank; Pommerol, Antoine; Thomas, Nick; Caudill, Christy; Conway, Susan J.

    2017-04-01

    The Colour and Stereo Surface Imaging System (CaSSIS) is a full-colour visible to near-infrared (VNIR) bi-directional pushframe stereo camera onboard the ExoMars 2016 Trace Gas Orbiter (TGO). For more details on ExoMars TGO and its payload, please see [4], and for the CaSSIS instrument see [1]. For details on the first CaSSIS stereo images acquired during Mars Capture Orbit (MCO) and preliminary 3D reconstructions from them, see [5]. CaSSIS will provide full-colour, stereo and repeat imaging spanning different times of day and covering all seasons. Such images will be used to address the following objectives: 1) characterizing possible [surface/subsurface] sources for methane and other trace gases; 2) investigating dynamic surface processes that may contribute to atmospheric gases; and 3) certifying and characterizing candidate landing site safety and hazards (e.g., rocks, slopes, etc.). Here we present a summary, and some highlights, based on the creation and analysis of simulated CaSSIS image cubes [see 2, 3]. We generated simulated images that are spatially (4.6 m/px) and spectrally (4-bands) consistent with CaSSIS from existing Mars Reconnaissance Orbiter (MRO) datasets. Simulated CaSSIS colours were generated from hyperspectral VNIR (S-detector) data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) after the methods of [6], which were then combined with spatially oversampled and resampled 32-bit calibrated I/F images from the Context Camera (CTX) and High Resolution Imaging Science Experiment (HiRISE) [2, 3]. For more details on the simulation process and the various products produced, please see [2, 3]. Our simulations show that such colour coverage will be particularly valuable for facilitating and enhancing seasonal process and change detection studies. For example, a simulation image of Gasa crater demonstrates exactly how additional colour context would facilitate gully change detections that can be subtle and difficult to detect in single-band images, or when missed by the HiRISE colour swath. Another result from our colour analysis is the excellent separation of ferrous- and ferric-bearing surface materials provided by band ratio colour composite images utilizing the two NIR bands of CaSSIS (3RED, 4NIR). These images will be particularly useful for associating CaSSIS colour units with spectral units defined by orbiting spectrometers (e.g., CRISM), and thereby extend spectral mapping to CaSSIS spatial scales. This will be particularly beneficial for landing sites where it is difficult to achieve continuous colour coverage with HiRISE. Our analysis shows that dune movement can be detected at the scale of CaSSIS, given a long enough baseline. Other results include resolving: 1) larger individual or sets of Recurring Slope Lineae (RSL), 2) small impacts (including ice excavators), and 3) surface changes associated with landers/rovers (NOTE: landers/rovers and their tracks are not resolvable). References: [1] Thomas N. et al. (2016), submitted to SSR. [2] Tornabene L. et al. (2017), submitted to SSR. [3] Tornabene L. et al. (2016) LPSC 47, Abstract #2695. [4] Vago J. et al. (2015) SSR, 49 518-528. [5] Cremonese G. et al. (2017) LPSC 48. [6] Seelos F. et al. (2011) AGU Fall, vol. 23, Abstract #1714. [7] Delamere A. et al. (2010), Icarus, 205, 38-52. Acknowledgements: The authors wish to thank the spacecraft and instrument engineering teams for the successful completion of the instrument. 
CaSSIS is a project of the University of Bern and funded through the Swiss Space Office via ESA's PRODEX programme. The instrument hardware development was also supported by the Italian Space Agency (ASI) (ASI-INAF agreement no.I/018/12/0), INAF/Astronomical Observatory of Padova, and the Space Research Center (CBK) in Warsaw. Support from SGF (Budapest), the University of Arizona Lunar and Planetary Laboratory, and NASA are also gratefully acknowledged. The lead author also acknowledges personal Canadian-based support from the Canadian Space Agency (CSA), and the NSERC DG programme.
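
    A band-ratio colour composite of the kind mentioned above for separating ferrous- and ferric-bearing materials can be sketched as follows; the band order, the specific ratios and the percentile stretch are assumptions for illustration only, not the CaSSIS team's actual products.

        import numpy as np

        # Toy stand-in for a 4-band CaSSIS-like image cube (assumed order BLU, PAN, RED, NIR).
        rng = np.random.default_rng(0)
        cube = rng.random((256, 256, 4)).astype(np.float32)
        blu, pan, red, nir = (cube[..., i] for i in range(4))

        def stretch(band):
            """Linear 2-98 percentile stretch to 0..1 for display."""
            lo, hi = np.percentile(band, (2, 98))
            return np.clip((band - lo) / (hi - lo + 1e-9), 0, 1)

        # An assumed band-ratio composite highlighting different spectral slopes.
        composite = np.dstack([stretch(nir / (red + 1e-9)),
                               stretch(red / (blu + 1e-9)),
                               stretch(blu / (nir + 1e-9))])
        print(composite.shape, composite.dtype)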

  9. Developments in the recovery of colour in fine art prints using spatial image processing

    NASA Astrophysics Data System (ADS)

    Rizzi, A.; Parraman, C.

    2010-06-01

    Printmakers have at their disposal a wide range of colour printing processes. The majority of artists will utilise high quality materials with the expectation that the best materials and pigments will ensure image permanence. However, as many artists have experienced, this is not always the case. Inks, papers and materials can deteriorate over time. Artists and conservators who need to restore colour or tone to a print could benefit from the assistance of spatial colour enhancement tools. This paper studies two collections from the same edition of fine art prints that were made in 1991. The first edition has been kept in an archive and not exposed to light. The second edition has been framed and exposed to light for about 18 years. Previous experiments using colour enhancement methods [9,10] have involved a series of photographs taken under poor or extreme lighting conditions, fine art works, and scanned works. A range of colour enhancement methods exist: Retinex, RSR, ACE, Histogram Equalisation and Auto Levels, which are described in this paper. In this paper we concentrate on the ACE algorithm, use a range of parameters to process the printed images, and describe the results.

  10. Colour Polymorphism Protects Prey Individuals and Populations Against Predation.

    PubMed

    Karpestam, Einat; Merilaita, Sami; Forsman, Anders

    2016-02-23

    Colour pattern polymorphism in animals can influence and be influenced by interactions between predators and prey. However, few studies have examined whether polymorphism is adaptive, and there is no evidence that the co-occurrence of two or more natural prey colour variants can increase survival of populations. Here we show that visual predators that exploit polymorphic prey suffer from reduced performance, and further provide rare evidence in support of the hypothesis that prey colour polymorphism may afford protection against predators for both individuals and populations. This protective effect provides a probable explanation for the longstanding, evolutionary puzzle of the existence of colour polymorphisms. We also propose that this protective effect can provide an adaptive explanation for search image formation in predators rather than search image formation explaining polymorphism.

  11. Colour Polymorphism Protects Prey Individuals and Populations Against Predation

    PubMed Central

    Karpestam, Einat; Merilaita, Sami; Forsman, Anders

    2016-01-01

    Colour pattern polymorphism in animals can influence and be influenced by interactions between predators and prey. However, few studies have examined whether polymorphism is adaptive, and there is no evidence that the co-occurrence of two or more natural prey colour variants can increase survival of populations. Here we show that visual predators that exploit polymorphic prey suffer from reduced performance, and further provide rare evidence in support of the hypothesis that prey colour polymorphism may afford protection against predators for both individuals and populations. This protective effect provides a probable explanation for the longstanding, evolutionary puzzle of the existence of colour polymorphisms. We also propose that this protective effect can provide an adaptive explanation for search image formation in predators rather than search image formation explaining polymorphism. PMID:26902799

  12. Visual processing in Alzheimer's disease: surface detail and colour fail to aid object identification.

    PubMed

    Adlington, Rebecca L; Laws, Keith R; Gale, Tim M

    2009-10-01

    It has been suggested that object recognition in patients with Alzheimer's disease (AD) may be strongly influenced both by image format (e.g. colour vs. line-drawn) and by low-level visual impairments. To examine these notions, we tested basic visual functioning and picture naming in 41 AD patients and 40 healthy elderly controls. Picture naming was examined using 105 images representing a wide range of living and nonliving subcategories (from the Hatfield image test [HIT]: [Adlington, R. A., Laws, K. R., & Gale, T. M. (in press). The Hatfield image test (HIT): A new picture test and norms for experimental and clinical use. Journal of Clinical and Experimental Neuropsychology]), with each item presented in colour, greyscale, or line-drawn formats. Whilst naming for elderly controls improved linearly with the addition of surface detail and colour, AD patients showed no benefit from the addition of either surface information or colour. Additionally, controls showed a significant category by format interaction; however, the same profile did not emerge for AD patients. Finally, AD patients showed widespread and significant impairment on tasks of visual functioning, and low-level visual impairment was predictive of patient naming.

  13. Analysis of Visual Interpretation of Satellite Data

    NASA Astrophysics Data System (ADS)

    Svatonova, H.

    2016-06-01

    Millions of people of all ages and expertise are using satellite and aerial data as an important input for their work in many different fields. Satellite data are also gradually finding a new place in education, especially in the fields of geography and environmental issues. The article presents the results of extensive research in the area of visual interpretation of image data carried out in the years 2013-2015 in the Czech Republic. The research was aimed at comparing the success rate of the interpretation of satellite data in relation to a) the substrates (the selected colourfulness, the type of depicted landscape or special elements in the landscape) and b) selected characteristics of users (expertise, gender, age). The results of the research showed that (1) false colour images have a slightly higher percentage of successful interpretation than natural colour images, (2) colourfulness of an element expected or rehearsed by the user (regardless of the real natural colour) increases the success rate of identifying the element, (3) experts are faster in interpreting visual data than non-experts, with the same degree of accuracy in solving the task, and (4) men and women are equally successful in the interpretation of visual image data.

  14. The utility of ultrasound superb microvascular imaging for evaluation of breast tumour vascularity: comparison with colour and power Doppler imaging regarding diagnostic performance.

    PubMed

    Park, A Y; Seo, B K; Woo, O H; Jung, K S; Cho, K R; Park, E K; Cha, S H; Cha, J

    2018-03-01

    To investigate the utility of superb microvascular imaging (SMI) for evaluating the vascularity of breast masses in comparison with colour or power Doppler ultrasound (US) and the effect on diagnostic performance. A total of 191 biopsy-proven masses (99 benign and 92 malignant) in 166 women with greyscale, colour Doppler, power Doppler, and SMI images were enrolled in this retrospective study. Three radiologists analysed the vascular images using a three-factor scoring system to evaluate the number, morphology, and distribution of tumour vessels. They assessed the Breast Imaging-Reporting and Data System categories for greyscale US alone and combinations of greyscale US and each type of vascular US. The Kruskal-Wallis test was performed and the area under the receiver-operating characteristic curve (AUC) measured. On SMI, vascular scores were compared between benign and malignant masses and the optimal cut-off value for the overall score was determined. SMI showed higher vascular scores than colour or power Doppler US and malignant masses had higher scores than benign masses (p<0.001). The diagnostic performance of the combination of greyscale US and SMI was higher than those of greyscale US alone and greyscale and colour or power Doppler US (AUC, 0.815 versus 0.774, 0.789, 0.791; p<0.001). The optimal cut-off value of the overall vascular score was 5 with a sensitivity of 82.3% and a specificity of 65.3% (AUC, 0.808). SMI is superior to colour or power Doppler US for characterising the vascularity in breast masses and improving diagnostic performance. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  15. Correction of motion artefacts and pseudo colour visualization of multispectral light scattering images for optical diagnosis of rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Minet, Olaf; Scheibe, Patrick; Beuthan, Jürgen; Zabarylo, Urszula

    2009-10-01

    State-of-the-art image processing methods offer new possibilities for diagnosing diseases using scattered light. The optical diagnosis of rheumatism is taken as an example to show that the diagnostic sensitivity can be improved using overlapped pseudo-coloured images of different wavelengths, provided that multispectral images are recorded to compensate for any motion related artefacts which occur during examination.

  16. Correction of motion artefacts and pseudo colour visualization of multispectral light scattering images for optical diagnosis of rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Minet, Olaf; Scheibe, Patrick; Beuthan, Jürgen; Zabarylo, Urszula

    2010-02-01

    State-of-the-art image processing methods offer new possibilities for diagnosing diseases using scattered light. The optical diagnosis of rheumatism is taken as an example to show that the diagnostic sensitivity can be improved using overlapped pseudo-coloured images of different wavelengths, provided that multispectral images are recorded to compensate for any motion related artefacts which occur during examination.

  17. A Method of Character Detection and Segmentation for Highway Guide Signs

    NASA Astrophysics Data System (ADS)

    Xu, Jiawei; Zhang, Chongyang

    2018-01-01

    In this paper, a method of character detection and segmentation for highway signs in China is proposed. It consists of four steps. Firstly, the highway sign area is detected by colour and geometric features, and the possible character regions are obtained by a multi-level projection strategy. Secondly, pseudo-target character regions are removed using a local binary patterns (LBP) feature. Thirdly, a convolutional neural network (CNN) is used to classify target regions. Finally, adaptive projection strategies are used to segment character strings. Experimental results indicate that the proposed method achieves new state-of-the-art results.
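
    A minimal sketch of the projection-profile idea behind the final segmentation step is given below, written in Python with NumPy; the minimum-width parameter, the toy input and the function name are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def segment_characters(binary_sign, min_width=3):
          """Split a binarised sign region (text = 1, background = 0) into
          character spans using a vertical projection profile."""
          is_text = binary_sign.sum(axis=0) > 0      # columns containing any text pixel
          spans, start = [], None
          for x, flag in enumerate(is_text):
              if flag and start is None:
                  start = x                          # a character column run begins
              elif not flag and start is not None:
                  if x - start >= min_width:         # ignore very thin noise runs
                      spans.append((start, x))
                  start = None
          if start is not None:
              spans.append((start, len(is_text)))
          return spans

      if __name__ == "__main__":
          img = np.zeros((10, 30), dtype=np.uint8)
          img[2:8, 3:7] = 1                          # toy character 1
          img[2:8, 12:18] = 1                        # toy character 2
          print(segment_characters(img))             # [(3, 7), (12, 18)]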

  18. Improving the colour match of free tissue transfers to the face with non-cultured autologous cellular spray--a case report on a chin reconstruction.

    PubMed

    Hivelin, M; MacIver, Colin; Heusse, J L; Atlan, M; Lantieri, L

    2012-08-01

    Animal bites can result in extensive avulsion injuries of the face justifying microsurgical replantation attempts. Reconstruction using local tissue harvesting increases the local morbidity while distant tissues can result in colour and skin texture mismatching. Skin grafting of the skin paddle by a split-thickness skin graft is a conventional approach to help overcome this problem. An 18-year-old patient was treated for a chin avulsion after a dog bite injury. The avulsed segment included the whole chin aesthetic unit and one-fifth of the lower lip. The segment was replanted on the inferior labial artery. The replantation failed and a reconstruction with a parascapular free flap was performed. Despite a debulking at 1 month, the aesthetic result had a poor colour match. The technique used to improve this was to de-epithelialise the skin and apply non-cultured autologous epidermal cells (NCAECs) 100 days after the reconstruction. The reconstruction was uneventful. At 3 months follow-up, the patient was able to purse her lips and had regained sensation. After 5 months, the free flap paddle was consistent in colour, pigmentation and texture with the surrounding skin. At 10 months, the patient's only complaint was residual firmness in her scar and flap. The long-term follow-up, over 23 months, confirmed the stability of the results. The use of an NCAEC spray to treat the dyschromia on a parascapular flap used for facial reconstruction is less invasive than split-thickness overgrafting and could extend the use of distant flaps that have been avoided due to poor colour match. Copyright © 2012 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  19. Web-Enabled Distributed Health-Care Framework for Automated Malaria Parasite Classification: an E-Health Approach.

    PubMed

    Maity, Maitreya; Dhane, Dhiraj; Mungle, Tushar; Maiti, A K; Chakraborty, Chandan

    2017-10-26

    A web-enabled e-healthcare system, or computer-assisted disease diagnosis, has the potential to improve the quality and service of the conventional healthcare delivery approach. The article describes the design and development of a web-based distributed healthcare management system for medical information and for quantitative evaluation of microscopic images using a machine learning approach for malaria. In the proposed study, all the health-care centres are connected in a distributed computer network. Each peripheral centre manages its own health-care service independently and communicates with the central server for remote assistance. The proposed methodology for automated evaluation of parasites includes pre-processing of blood smear microscopic images followed by erythrocyte segmentation. To differentiate between parasites, a total of 138 quantitative features characterising colour, morphology, and texture are extracted from the segmented erythrocytes. An integrated pattern classification framework is designed in which four feature selection methods, viz. Correlation-based Feature Selection (CFS), Chi-square, Information Gain, and RELIEF, are employed individually with three different classifiers, i.e. Naive Bayes, C4.5, and Instance-Based Learning (IB1). The optimal feature subset with the best classifier is selected to achieve maximum diagnostic precision. The proposed method achieved 99.2% sensitivity and 99.6% specificity by combining CFS and C4.5, in comparison with the other methods. Moreover, the web-based tool is entirely designed using open standards such as Java for the web application, ImageJ for image processing, and WEKA for data mining, considering its feasibility in rural places with minimal health care facilities.
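
    As a rough illustration of the feature-selection-plus-classifier stage described above, the sketch below uses scikit-learn stand-ins (chi-square selection and a CART decision tree in place of CFS and C4.5, which scikit-learn does not provide) on synthetic data; the feature count of 138 is the only detail taken from the abstract.

      import numpy as np
      from sklearn.feature_selection import SelectKBest, chi2
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)
      X = rng.random((200, 138))            # toy stand-in for 138 colour/morphology/texture features
      y = rng.integers(0, 2, size=200)      # toy parasite class labels

      clf = make_pipeline(SelectKBest(chi2, k=30),           # keep the 30 highest-scoring features
                          DecisionTreeClassifier(random_state=0))
      print(cross_val_score(clf, X, y, cv=5).mean())          # cross-validated accuracy (toy data)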

  20. Multivariate methods to visualise colour-space and colour discrimination data.

    PubMed

    Hastings, Gareth D; Rubin, Alan

    2015-01-01

    Despite most modern colour spaces treating colour as three-dimensional (3-D), colour data is usually not visualised in 3-D (and two-dimensional (2-D) projection-plane segments and multiple 2-D perspective views are used instead). The objectives of this article are firstly, to introduce a truly 3-D percept of colour space using stereo-pairs, secondly to view colour discrimination data using that platform, and thirdly to apply formal statistics and multivariate methods to analyse the data in 3-D. This is the first demonstration of the software that generated stereo-pairs of RGB colour space, as well as of a new computerised procedure that investigated colour discrimination by measuring colour just noticeable differences (JND). An initial pilot study and thorough investigation of instrument repeatability were performed. Thereafter, to demonstrate the capabilities of the software, five colour-normal and one colour-deficient subject were examined using the JND procedure and multivariate methods of data analysis. Scatter plots of responses were meaningfully examined in 3-D and were useful in evaluating multivariate normality as well as identifying outliers. The extent and direction of the difference between each JND response and the stimulus colour point was calculated and appreciated in 3-D. Ellipsoidal surfaces of constant probability density (distribution ellipsoids) were fitted to response data; the volumes of these ellipsoids appeared useful in differentiating the colour-deficient subject from the colour-normals. Hypothesis tests of variances and covariances showed many statistically significant differences between the results of the colour-deficient subject and those of the colour-normals, while far fewer differences were found when comparing within colour-normals. The 3-D visualisation of colour data using stereo-pairs, as well as the statistics and multivariate methods of analysis employed, were found to be unique and useful tools in the representation and study of colour. Many additional studies using these methods along with the JND and other procedures have been identified and will be reported in future publications. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  1. Pitfalls in colour photography of choroidal tumours

    PubMed Central

    Schalenbourg, A; Zografos, L

    2013-01-01

    Colour imaging of fundus tumours has been transformed by the development of digital and confocal scanning laser photography. These advances provide numerous benefits, such as panoramic images, increased contrast, non-contact wide-angle imaging, non-mydriatic photography, and simultaneous angiography. False tumour colour representation can, however, cause serious diagnostic errors. Large choroidal tumours can be totally invisible on angiography. Pseudogrowth can occur because of artefacts caused by different methods of fundus illumination, movement of reference blood vessels, and flattening of Bruch's membrane and sclera when tumour regression occurs. Awareness of these pitfalls should prevent the clinician from misdiagnosing tumours and wrongfully concluding that a tumour has grown. PMID:23238442

  2. Pitfalls in colour photography of choroidal tumours.

    PubMed

    Schalenbourg, A; Zografos, L

    2013-02-01

    Colour imaging of fundus tumours has been transformed by the development of digital and confocal scanning laser photography. These advances provide numerous benefits, such as panoramic images, increased contrast, non-contact wide-angle imaging, non-mydriatic photography, and simultaneous angiography. False tumour colour representation can, however, cause serious diagnostic errors. Large choroidal tumours can be totally invisible on angiography. Pseudogrowth can occur because of artefacts caused by different methods of fundus illumination, movement of reference blood vessels, and flattening of Bruch's membrane and sclera when tumour regression occurs. Awareness of these pitfalls should prevent the clinician from misdiagnosing tumours and wrongfully concluding that a tumour has grown.

  3. Colour-specific differences in attentional deployment for equiluminant pop-out colours: evidence from lateralised potentials.

    PubMed

    Pomerleau, Vincent Jetté; Fortier-Gauthier, Ulysse; Corriveau, Isabelle; Dell'Acqua, Roberto; Jolicœur, Pierre

    2014-03-01

    We investigated how target colour affected behavioural and electrophysiological results in a visual search task. Perceptual and attentional mechanisms were tracked using the N2pc component of the event-related potential and other lateralised components. Four colours (red, green, blue, or yellow) were calibrated for each participant for luminance through heterochromatic flicker photometry and equated to the luminance of grey distracters. Each visual display contained 10 circles, 1 colored and 9 grey, each of which contained an oriented line segment. The task required deploying attention to the colored circle, which was either in the left or right visual hemifield. Three lateralised ERP components relative to the side of the lateral coloured circle were examined: a posterior contralateral positivity (Ppc) prior to N2pc, the N2pc, reflecting the deployment of visual spatial attention, and a temporal and contralateral positivity (Ptc) following N2pc. Red or blue stimuli, as compared to green or yellow, had an earlier N2pc. Both the Ppc and Ptc had higher amplitudes to red stimuli, suggesting particular selectivity for red. The results suggest that attention may be deployed to red and blue more quickly than to other colours and suggests special caution when designing ERP experiments involving stimuli in different colours, even when all colours are equiluminant. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device.

    PubMed

    Hamilton-Fletcher, Giles; Wright, Thomas D; Ward, Jamie

    Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have avoided colour and when they do encode colour, have assigned sounds to colour in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users who had their device either coded in line with or opposite to sound–colour correspondences. Improved colour memory and reduced colour-errors were made by users who had the correspondence-based mappings. Interestingly, the colour–sound mappings that provided the highest improvements during the associative memory task also saw the greatest gains for recognising realistic objects that also featured these colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance for both colour and correspondences for sensory substitution use.

  5. Colour Terms Affect Detection of Colour and Colour-Associated Objects Suppressed from Visual Awareness

    PubMed Central

    Forder, Lewis; Taylor, Olivia; Mankin, Helen; Scott, Ryan B.; Franklin, Anna

    2016-01-01

    The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this paradigm and further investigated whether colour terms likewise influence the detection of colours or colour-associated object images suppressed from visual awareness by continuous flash suppression. This method presents rapidly changing visual noise to one eye while the target stimulus is presented to the other. It has been shown to delay conscious perception of a target for up to several minutes. In Experiment 1 we presented greyscale photos of objects. They were either preceded by a congruent object label, an incongruent label, or white noise. Detection sensitivity (d’) and hit rates were significantly poorer for suppressed objects preceded by an incongruent label compared to a congruent label or noise. In Experiment 2, targets were coloured discs preceded by a colour term. Detection sensitivity was significantly worse for suppressed colour patches preceded by an incongruent colour term as compared to a congruent term or white noise. In Experiment 3 targets were suppressed greyscale object images preceded by an auditory presentation of a colour term. On congruent trials the colour term matched the object’s stereotypical colour and on incongruent trials the colour term mismatched. Detection sensitivity was significantly poorer on incongruent trials than congruent trials. Overall, these findings suggest that colour terms affect awareness of coloured stimuli and colour-associated objects, and provide new evidence for language-perception interaction in the brain. PMID:27023274

  6. Colour Terms Affect Detection of Colour and Colour-Associated Objects Suppressed from Visual Awareness.

    PubMed

    Forder, Lewis; Taylor, Olivia; Mankin, Helen; Scott, Ryan B; Franklin, Anna

    2016-01-01

    The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this paradigm and further investigated whether colour terms likewise influence the detection of colours or colour-associated object images suppressed from visual awareness by continuous flash suppression. This method presents rapidly changing visual noise to one eye while the target stimulus is presented to the other. It has been shown to delay conscious perception of a target for up to several minutes. In Experiment 1 we presented greyscale photos of objects. They were either preceded by a congruent object label, an incongruent label, or white noise. Detection sensitivity (d') and hit rates were significantly poorer for suppressed objects preceded by an incongruent label compared to a congruent label or noise. In Experiment 2, targets were coloured discs preceded by a colour term. Detection sensitivity was significantly worse for suppressed colour patches preceded by an incongruent colour term as compared to a congruent term or white noise. In Experiment 3 targets were suppressed greyscale object images preceded by an auditory presentation of a colour term. On congruent trials the colour term matched the object's stereotypical colour and on incongruent trials the colour term mismatched. Detection sensitivity was significantly poorer on incongruent trials than congruent trials. Overall, these findings suggest that colour terms affect awareness of coloured stimuli and colour-associated objects, and provide new evidence for language-perception interaction in the brain.
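
    The detection-sensitivity measure d' reported in these experiments is the difference between the z-transformed hit and false-alarm rates; the toy rates below are invented purely to show the arithmetic.

      from scipy.stats import norm

      def d_prime(hit_rate, false_alarm_rate):
          """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
          return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

      print(d_prime(0.80, 0.20))    # e.g. congruent-prime condition (toy values): ~1.68
      print(d_prime(0.65, 0.20))    # e.g. incongruent-prime condition (toy values): ~1.23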

  7. First Results from the AKARI FU-HYU Mission Program

    NASA Astrophysics Data System (ADS)

    Pearson, C.; Serjeant, S.; Takagi, T.; Jeong, W.-S.; Negrello, M.; Matsuhara, H.; Wada, T.; Oyabu, S.; Lee, H. M.; Im, M.

    2009-12-01

    The AKARI FU-HYU mission program has carried out mid-infrared imaging of several well-studied Spitzer fields. This imaging fills in the wavelength coverage lacking from the Spitzer surveys and gives an extremely high scientific return for minimal input from AKARI. We select fields already rich in multi-wavelength data from radio to X-ray wavelengths and present the results from our initial analysis in the GOODS-N field. We utilize the comprehensive multiwavelength coverage in the GOODS-N field to produce a multiwavelength catalogue from infrared to ultraviolet wavelengths, including photometric redshifts. Using the FU-HYU catalogue we present colour-colour diagrams that map the passage of PAH features through our observation bands. These colour-colour diagrams are used as tools to extract anomalous colour populations, in particular a population of Silicate Break galaxies from the GOODS-N field.
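
    A toy illustration of how a colour-colour diagram of the kind mentioned above is assembled from band fluxes (a "colour" being the magnitude difference between two bands); the band names and flux values below are invented.

      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(2)
      flux_a, flux_b, flux_c = rng.lognormal(size=(3, 500))   # toy fluxes in three bands

      colour_ab = -2.5 * np.log10(flux_a / flux_b)   # band-a minus band-b colour (mag)
      colour_bc = -2.5 * np.log10(flux_b / flux_c)   # band-b minus band-c colour (mag)

      plt.scatter(colour_ab, colour_bc, s=4)         # sources with unusual colours stand out
      plt.xlabel("a - b (mag)"); plt.ylabel("b - c (mag)")
      plt.show()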

  8. The colour wheels of art, perception, science and physiology

    NASA Astrophysics Data System (ADS)

    Harkness, Nick

    2006-06-01

    Colour is not the domain of any one discipline, be it art, philosophy, psychology or science. Each discipline has its own colour wheel, and this presentation examines the origins and philosophies behind the colour circles of Art, Perception, Science and Physiology (after-image) with reference to Aristotle, Robert Boyle, Leonardo da Vinci, Goethe, Ewald Hering and Albert Munsell. The paper analyses and discusses the differences between the four colour wheels using the Natural Colour System® notation as the reference for hue (the position of colours within each of the colour wheels). Examination of the colour wheels shows the dominance of blue in the wheels of art, science and physiology, particularly at the expense of green. This paper does not consider the three-dimensionality of colour space; its goal is to review the hue of a colour with regard to its position on the respective colour wheels.

  9. The Art of Astrophotography

    NASA Astrophysics Data System (ADS)

    Morison, Ian

    2017-02-01

    1. Imaging star trails; 2. Imaging a constellation with a DSLR and tripod; 3. Imaging the Milky Way with a DSLR and tracking mount; 4. Imaging the Moon with a compact camera or smartphone; 5. Imaging the Moon with a DSLR; 6. Imaging the Pleiades Cluster with a DSLR and small refractor; 7. Imaging the Orion Nebula, M42, with a modified Canon DSLR; 8. Telescopes and their accessories for use in astroimaging; 9. Towards stellar excellence; 10. Cooling a DSLR camera to reduce sensor noise; 11. Imaging the North American and Pelican Nebulae; 12. Combating light pollution - the bane of astrophotographers; 13. Imaging planets with an astronomical video camera or Canon DSLR; 14. Video imaging the Moon with a webcam or DSLR; 15. Imaging the Sun in white light; 16. Imaging the Sun in the light of its H-alpha emission; 17. Imaging meteors; 18. Imaging comets; 19. Using a cooled 'one shot colour' camera; 20. Using a cooled monochrome CCD camera; 21. LRGB colour imaging; 22. Narrow band colour imaging; Appendix A. Telescopes for imaging; Appendix B. Telescope mounts; Appendix C. The effects of the atmosphere; Appendix D. Auto guiding; Appendix E. Image calibration; Appendix F. Practical aspects of astroimaging.

  10. A handheld optical device for skin profile measurement

    NASA Astrophysics Data System (ADS)

    Sun, Jiuai; Liu, Xiaojin

    2018-04-01

    This paper describes a portable optical scanning device designed for skin surface measurement of both colour and 3D geometry through a relatively easy and cost-effective multiple-light-source photometric stereo method. The accuracy of the recovered colour was verified through its application to skin lesion segmentation in our earlier work. This paper focuses on the reconstructed topographic data, which are subject to further evaluation and refinement. The evaluation takes skin in vitro as an application scenario and compares the experimental result with that obtained using a commercial product. The experiments show that this handheld device measures the skin profile significantly closer to the ground truth and has the additional function of skin colour recovery.
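
    The classic Lambertian photometric-stereo recovery that this kind of multiple-light-source device relies on can be sketched as below; the input shapes and the least-squares formulation are standard textbook assumptions rather than details from the paper.

      import numpy as np

      def photometric_stereo(images, light_dirs):
          """images: (k, h, w) intensity images taken under k known light sources.
          light_dirs: (k, 3) unit light direction vectors.
          Returns per-pixel albedo (h, w) and unit surface normals (h, w, 3)."""
          k, h, w = images.shape
          I = images.reshape(k, -1)                              # stack pixels column-wise
          G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)     # G = albedo * normal, shape (3, h*w)
          albedo = np.linalg.norm(G, axis=0)
          normals = G / np.maximum(albedo, 1e-12)                # normalise, avoiding division by zero
          return albedo.reshape(h, w), normals.T.reshape(h, w, 3)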

  11. Clinical and histopathological features of adenomas of the ciliary pigment epithelium.

    PubMed

    Chang, Ying; Wei, Wen Bin; Shi, Ji Tong; Xian, Jun Fang; Yang, Wen Li; Xu, Xiao Lin; Bai, Hai Xia; Li, Bin; Jonas, Jost B

    2016-11-01

    Adenomas of the ciliary pigment epithelium (CPE) are rare benign tumours which have mainly to be differentiated from malignant ciliary body melanomas. Here we report on a consecutive series of patients with CPE adenomas and describe their characteristics. The retrospective hospital-based case series study included all patients who were consecutively operated for CPE adenomas. Of the 110 patients treated for ciliary body tumours, five patients (4.5%) had a CPE adenoma. Mean age was 59.0 ± 9.9 years (range: 46-72 years). Mean tumour apical thickness was 6.6 ± 1.7 mm. Tumour colour was mostly homogenously brown to black, and the tumour surface was smooth. The tumour masses pushed the iris tissue forward without infiltrating iris or anterior chamber angle. Sonography revealed an irregular echogram with sharp lesion borders and signs of blood flow in Color Doppler flow imaging. Ultrasonographic biomicroscopy demonstrated medium-low internal reflectivity and acoustic attenuation. In magnetic resonance imaging (MRI), the tumours as compared to brain were hyperintense on T1-weighted images and hypointense on T2-weighted images. Tumour tissue consisted of cords and nests of pigment epithelium cells separated by septa of vascularized fibrous connective tissue, leading to a pseudo-glandular appearance. The melanin granules in the cytoplasm were large and mostly spherical in shape. In four patients, the tumours were hyperpigmented. Tumour cells were large with round or oval nuclei and clearly detectable nucleoli. These clinical characteristics of CPE adenomas, such as homogenous dark brown colour, smooth surface, iris dislocation and anterior chamber angle narrowing but no iris infiltration, segmental cataract, pigment dispersion, and, as compared to brain tissue, hypointensity and, as compared to extraocular muscles or lacrimal gland, hyperintensity on T2-weighted MRI images, may be helpful for the differentiation from ciliary body malignant melanomas. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  12. Colour flow and motion imaging.

    PubMed

    Evans, D H

    2010-01-01

    Colour flow imaging (CFI) is an ultrasound imaging technique whereby colour-coded maps of tissue velocity are superimposed on grey-scale pulse-echo images of tissue anatomy. The most widespread use of the method is to image the movement of blood through arteries and veins, but it may also be used to image the motion of solid tissue. The production of velocity information is technically more demanding than the production of the anatomical information, partly because the target of interest is often blood, which backscatters significantly less power than solid tissues, and partly because several transmit-receive cycles are necessary for each velocity estimate. This review first describes the various components of basic CFI systems necessary to generate the velocity information and to combine it with anatomical information. It then describes a number of variations on the basic autocorrelation technique, including cross-correlation-based techniques, power Doppler, Doppler tissue imaging, and three-dimensional (3D) Doppler imaging. Finally, a number of limitations of current techniques and some potential solutions are reviewed.
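
    For the basic autocorrelation technique mentioned above, the core velocity estimate (often attributed to Kasai and colleagues) can be written in a few lines; the pulse-repetition frequency, centre frequency and sound speed below are generic placeholder values, not parameters from the review.

      import numpy as np

      def autocorrelation_velocity(iq, prf_hz=4000.0, f0_hz=5e6, c=1540.0):
          """iq: complex demodulated ensemble, shape (n_pulses, n_depths).
          Returns the mean axial velocity estimate (m/s) at each depth sample."""
          r1 = np.sum(iq[1:] * np.conj(iq[:-1]), axis=0)    # lag-one autocorrelation over the ensemble
          f_doppler = np.angle(r1) * prf_hz / (2 * np.pi)   # mean Doppler shift (Hz)
          return c * f_doppler / (2 * f0_hz)                # axial velocity (m/s)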

  13. Colour Perception in Ancient World

    NASA Astrophysics Data System (ADS)

    Nesterov, D. I.; Fedorova, M. Yu

    2017-11-01

    How did human thought shape the surrounding colour information into persistent semantic images of a mythological, pseudoscientific and religious nature? Concepts associated with colour perception are suggested. The existence of the colour environment does not depend on human consciousness, but the formation of colour culture is directly related to the level of development of human consciousness and its ability to influence worldview and culture. A person's colour perception passes through stages similar to the development of colour vision in a child. Like any development, colour consciousness has undergone stages of growth and decline, evolution and stagnation. The way of life and the difficult conditions of existence made their own adjustments to how humans perceived the surrounding world. Wars have been both a powerful engine of progress in all spheres of life and a great destructive force demolishing heritage already created and preserved. The surrounding world has always interested humans, evoking images and fantasies in the consciousness of ancient people. Unusual and inexplicable natural phenomena spawned numerous legends and myths, which were reflected in ancient art and architecture and, accordingly, in particular manifestations of colour in human society. The colour perception of ancient people and their pragmatic, utilitarian attitude to colour are considered, as well as the influence of dependence on external conditions of existence and its reflection in the colour culture of antiquity. The natural science of the Hellenic period and its research into the nature of colours are reviewed, and several of its authors' concepts of the ancient world are considered.

  14. Differentiating Biological Colours with Few and Many Sensors: Spectral Reconstruction with RGB and Hyperspectral Cameras

    PubMed Central

    Garcia, Jair E.; Girard, Madeline B.; Kasumovic, Michael; Petersen, Phred; Wilksch, Philip A.; Dyer, Adrian G.

    2015-01-01

    Background The ability to discriminate between two similar or progressively dissimilar colours is important for many animals as it allows for accurately interpreting visual signals produced by key target stimuli or distractor information. Spectrophotometry objectively measures the spectral characteristics of these signals, but is often limited to point samples that could underestimate spectral variability within a single sample. Algorithms for RGB images and digital imaging devices with many more than three channels, hyperspectral cameras, have been recently developed to produce image spectrophotometers to recover reflectance spectra at individual pixel locations. We compare a linearised RGB and a hyperspectral camera in terms of their individual capacities to discriminate between colour targets of varying perceptual similarity for a human observer. Main Findings (1) The colour discrimination power of the RGB device is dependent on colour similarity between the samples whilst the hyperspectral device enables the reconstruction of a unique spectrum for each sampled pixel location independently from their chromatic appearance. (2) Uncertainty associated with spectral reconstruction from RGB responses results from the joint effect of metamerism and spectral variability within a single sample. Conclusion (1) RGB devices give a valuable insight into the limitations of colour discrimination with a low number of photoreceptors, as the principles involved in the interpretation of photoreceptor signals in trichromatic animals also apply to RGB camera responses. (2) The hyperspectral camera architecture provides means to explore other important aspects of colour vision like the perception of certain types of camouflage and colour constancy where multiple, narrow-band sensors increase resolution. PMID:25965264

  15. The development of vector based 2.5D print methods for a painting machine

    NASA Astrophysics Data System (ADS)

    Parraman, Carinna

    2013-02-01

    Through recent trends in the application of digitally printed decorative finishes to products, CAD, 3D additive layer manufacturing and research in material perception [1, 2], there is a growing interest in the accurate rendering of materials and in tangible displays. Although current advances in colour management and inkjet printing mean that users can take for granted high-quality colour and resolution in their printed images, digital methods for transferring a photographic coloured image from screen to paper are constrained by pixel count, file size, colorimetric conversion between colour spaces and the gamut limits of input and output devices. This paper considers new approaches to applying alternative colour palettes using a vector-based approach and the application of paint mixtures, towards what could be described as a 2.5D printing method. The objective is not to apply an image to a textured surface, but to make texture and colour integral to the mark that, like a brush stroke, delineates the contours of the image. The paper describes the difference between the ways inks and paints are mixed and applied. When transcribing the fluid appearance of a brush stroke, there is a difference between a halftone printed mark and a painted mark. Surface quality is significant to the subjective qualities observed when studying the appearance of ink or paint on paper. The paper provides examples of a range of vector marks that are then transcribed into brush strokes by the painting machine.

  16. Pre-Processes for Urban Areas Detection in SAR Images

    NASA Astrophysics Data System (ADS)

    Altay Açar, S.; Bayır, Ş.

    2017-11-01

    In this study, pre-processes for urban area detection in synthetic aperture radar (SAR) images are examined. These pre-processes are image smoothing, thresholding and white-coloured region determination. Image smoothing is carried out to remove noise, then thresholding is applied to obtain a binary image. Finally, candidate urban areas are detected by determining the white-coloured regions. All pre-processes are applied using the developed software. Two different SAR images acquired by TerraSAR-X are used in the experimental study. The obtained results are shown visually.
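
    The three pre-processing steps named above can be sketched with SciPy as follows; the Gaussian smoothing, the relative threshold and the minimum region size are illustrative choices, not the parameters used in the paper.

      import numpy as np
      from scipy import ndimage

      def candidate_urban_regions(sar_image, smooth_sigma=2.0, rel_threshold=0.6, min_pixels=50):
          """Return a boolean mask of bright ("white") regions in a SAR amplitude image."""
          smoothed = ndimage.gaussian_filter(sar_image.astype(float), smooth_sigma)  # speckle smoothing
          binary = smoothed > rel_threshold * smoothed.max()                         # thresholding
          labels, n = ndimage.label(binary)                                          # connected regions
          sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))             # region areas
          keep_labels = np.flatnonzero(sizes >= min_pixels) + 1
          return np.isin(labels, keep_labels)                                        # candidate urban areas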

  17. Ocular fundus auto-fluorescence observations at different wavelengths in patients with age-related macular degeneration and diabetic retinopathy.

    PubMed

    Hammer, Martin; Königsdörffer, Ekkehart; Liebermann, Christiane; Framme, Carsten; Schuch, Günter; Schweitzer, Dietrich; Strobel, Jürgen

    2008-01-01

    Post-translational protein modification by lipid peroxidation products or glycation is a feature of aging as well as of pathologic processes in postmitotic cells at the ocular fundus exposed to an oxidative environment. The accumulation of modified proteins such as those found in lipofuscin and advanced glycation end products (AGEs) contributes greatly to the fundus auto-fluorescence. The distinct fluorescence spectra of lipofuscin and AGE enable their differentiation in multispectral fundus fluorescence imaging. A dual-centre consecutive case series of 78 pseudophakic patients is reported. Digital colour fundus photographs as well as auto-fluorescence images were taken from 33 patients with age-related macular degeneration (AMD), 13 patients with diabetic retinopathy (RD), and from 32 cases without pathologic findings (controls). Fluorescence was excited at 475-515 nm or 476-604 nm and recorded in the emission bands 530-675 nm or 675-715 nm, respectively. Fluorescence images excited at 475-515 nm were taken with a colour CCD camera (colour-fluorescence imaging) enabling the separate recording of green and red fluorescence. The ratio of green versus red fluorescence was calculated within a representative region of each image. The 530-675 nm auto-fluorescence in AMD patients was dominated by the red emission (green vs. red ratio, g/r = 0.861). In comparison, the fluorescence of the diabetics was green-shifted (g/r = 0.946; controls: g/r = 0.869). Atrophic areas (geographic atrophy, laser scars) showed massive hypo-fluorescence in both emission bands. Hyper-fluorescent drusen and exudates, unobtrusive in the colour fundus images as well as in the fluorescence images with emission >667 nm, showed an impressive green-shift in the colour-fluorescence image. Lipofuscin is the dominant fluorophore at long wavelengths (>675 nm or red channel of the colour fluorescence image). In the green spectral region, we found an additional emission of collagen and elastin (optic disc, sclera) as well as deposits in drusen and exudates. The green shift of the auto-fluorescence in RD may be a hint of increased AGE concentrations.
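
    The green-versus-red ratio used above is a simple per-region statistic; a sketch of how it might be computed from a colour fluorescence image is given below, with the image and region mask as placeholder inputs rather than details of the study.

      import numpy as np

      def green_red_ratio(rgb_fluorescence, roi_mask):
          """rgb_fluorescence: float image of shape (h, w, 3); roi_mask: boolean (h, w).
          Returns the mean green emission divided by the mean red emission in the ROI."""
          green = rgb_fluorescence[..., 1][roi_mask].mean()
          red = rgb_fluorescence[..., 0][roi_mask].mean()
          return green / red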

  18. The development of a colour liquid crystal display spatial light modulator and applications in polychromatic optical data processing

    NASA Astrophysics Data System (ADS)

    Aiken, John Charles

    The development of a colour spatial light modulator (SLM) and its application to optical information processing is described. Whilst monochrome technology has been established for many years, this is not the case for colour, where commercial systems are unavailable. A main aspect of this study is therefore how the use of colour can add an additional dimension to optical information processing. A well-established route to monochrome system development has been the use of (black and white) liquid crystal televisions (LCTVs) as SLMs, providing useful performance at low cost. This study is based on the unique use of a colour display removed from an LCTV and operated as a colour SLM. A significant development has been the replacement of the original TV electronics operating the display with enhanced drive electronics specially developed for this application. Through a computer interface, colour images from a drawing package or video camera can now be readily displayed on the LCD as input to an optical system. A detailed evaluation of the colour LCD's optical properties indicates that the new drive electronics have considerably improved the operation of the display for use as a colour SLM. Applications are described employing the use of colour in Fourier plane filtering, image correlation and speckle metrology. The SLM (and optical system) developed demonstrates how the addition of colour has greatly enhanced its capability to implement principles of optical data processing conventionally performed monochromatically. The hybrid combination employed, combining colour optical data processing with electronic techniques, has resulted in a capable development system. Further development of the system using current colour LCDs, and the move towards a portable system, is considered in the study conclusion.

  19. Evaluating the masticatory function after mandibulectomy with colour-changing chewing gum.

    PubMed

    Shibuya, Y; Ishida, S; Hasegawa, T; Kobayashi, M; Nibu, K; Komori, T

    2013-07-01

    The aim of this study was to clarify the usefulness of colour-changing gum in evaluating masticatory performance after mandibulectomy. Thirty-nine patients who underwent mandibulectomy between 1982 and 2010 at Kobe University Hospital were recruited in this study. There were 21 male and 18 female subjects with a mean age of 64·7 years (range: 12-89 years) at the time of surgery. The participants included six patients who underwent marginal mandibulectomy, 21 patients who underwent segmental mandibulectomy and 12 patients who underwent hemimandibulectomy. The masticatory function was evaluated using colour-changing chewing gum, gummy jelly and a modified Sato's questionnaire. In all cases, the data were obtained more than 3 months after completing the patient's final prosthesis. The colour-changing gum scores correlated with both the gummy jelly scores (r = 0·634, P < 0·001) and the total scores of the modified Sato's questionnaire (r = 0·537, P < 0·001). In conclusion, colour-changing gum is a useful item for evaluating masticatory performance after mandibulectomy. © 2013 John Wiley & Sons Ltd.

  20. Narrow-band filters for ocean colour imager

    NASA Astrophysics Data System (ADS)

    Krol, Hélène; Chazallet, Frédéric; Archer, Julien; Kirchgessner, Laurent; Torricini, Didier; Grèzes-Besset, Catherine

    2017-11-01

    During the last few years, the evolution of deposition technologies for optical thin-film coatings and of the associated in-situ monitoring methods has enabled us to successfully answer the increasing demands of space systems for Earth observation. The geostationary satellite COMS-1 (Communication, Ocean, Meteorological Satellite-1) of Astrium has the role of ensuring meteorological observation as well as monitoring of the oceans. It is equipped with a colour imager to observe the marine ecosystem through 8 bands in the visible spectrum with a ground resolution of 500 m. For that purpose, this very-high-technology instrument is fitted with a filter wheel, placed in front of the ocean colour imager, holding 8 narrow-band filters manufactured and qualified by Cilas.

  1. Mental Imagery and Synaesthesia: Is Synaesthesia from Internally-Generated Stimuli Possible?

    ERIC Educational Resources Information Center

    Spiller, Mary Jane; Jansari, Ashok S.

    2008-01-01

    Previous studies provide empirical support for the reported colour experience in grapheme-colour synaesthesia by measuring the synaesthetic experience from an externally presented grapheme. The current study explored the synaesthetic experience resulting from a visual mental image of a grapheme. Grapheme-colour synaesthetes (N=6) and matched…

  2. Bad Colourmaps Can Hide Big Structures

    NASA Astrophysics Data System (ADS)

    Kovesi, Peter

    2014-05-01

    Colourmaps are often selected with little awareness of the perceptual distortions they might introduce. A colourmap can be thought of as a line or curve drawn through a three dimensional colour space. Individual data values are mapped to positions along this line which, in turn, allows them to be mapped to a colour. For a colourmap to be effective it is important that the perceptual contrast that occurs as one moves along the line in the colour space is close to uniform. Many colourmaps are designed as piecewise linear paths through RGB space. This is a poor colour space to use because it is not perceptually uniform. Accordingly many colourmaps supplied by vendors have uneven perceptual contrast over their range. They may include points of locally high colour contrast leading you to think there might be some anomaly in your data when there is none. Conversely, colourmaps may also have flat spots of low perceptual contrast that prevent you from seeing features in your data. In some cases it is possible for structures having a magnitude of 10% of the full data range to be completely hidden by a flat spot in the colourmap. The deficiencies of many colourmaps can be revealed using a simple test image consisting of a high frequency sine wave superimposed on a ramp function. The amplitude of the sine wave is modulated from a maximum value at the top of the image to zero at the bottom. Ideally the sine wave should be uniformly visible across the image at all points on the ramp. For many colourmaps this will not be the case. At the very bottom of the image, where the sine wave amplitude has been modulated to 0, we just have a linear ramp which simply reproduces the colourmap. Given that the underlying data is a featureless ramp the colourmap should not induce the perception of any features across the bottom of the test image. Good colourmaps are difficult to design. A greyscale colourmap is generally a safe choice but is not always what is desired. For non-greyscale colourmaps the perceptual colour contrast between adjacent entries of the map should be constant across the whole colourmap. In addition, and more importantly, the colour lightness change between successive entries in the colourmap should also be constant. These conditions, if observed, constrain the design of colourmaps considerably, and they exclude the construction of rainbow style colourmaps. It is shown that good colourmaps can be formed from smooth curves constructed in a perceptually uniform colour space such as CIELAB. Colour lightness values should be monotonically increasing at a constant rate while at the same time the colourmap curve should stay close to the boundary of the colour space gamut to ensure that the colours are vivid.
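
    The test image described above is easy to reproduce; the sketch below builds it with NumPy and displays it under a chosen colourmap. The image size, cycle count and 5%-of-range sine amplitude are arbitrary choices for illustration, not values from the abstract.

      import numpy as np
      import matplotlib.pyplot as plt

      h, w = 256, 512
      x = np.linspace(0.0, 1.0, w)
      fade = np.linspace(1.0, 0.0, h)[:, None]        # sine amplitude: full at top, zero at bottom
      ramp = np.tile(x, (h, 1))                       # featureless left-to-right ramp
      sine = 0.05 * np.sin(2 * np.pi * 60 * x)        # high-frequency modulation (5% of range)
      test_image = np.clip(ramp + fade * sine, 0.0, 1.0)

      plt.imshow(test_image, cmap="jet")              # swap the cmap argument to compare colourmaps
      plt.axis("off")
      plt.show()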

  3. Human-Computer Interaction Based on Hand Gestures Using RGB-D Sensors

    PubMed Central

    Palacios, José Manuel; Sagüés, Carlos; Montijano, Eduardo; Llorente, Sergio

    2013-01-01

    In this paper we present a new method for hand gesture recognition based on an RGB-D sensor. The proposed approach takes advantage of depth information to cope with the most common problems of traditional video-based hand segmentation methods: cluttered backgrounds and occlusions. The algorithm also uses colour and semantic information to accurately identify any number of hands present in the image. Ten different static hand gestures are recognised, including all different combinations of spread fingers. Additionally, movements of an open hand are followed and 6 dynamic gestures are identified. The main advantage of our approach is the freedom of the user's hands to be at any position of the image without the need of wearing any specific clothing or additional devices. Besides, the whole method can be executed without any initial training or calibration. Experiments carried out with different users and in different environments prove the accuracy and robustness of the method which, additionally, can be run in real-time. PMID:24018953
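
    A minimal sketch of the depth-gated, skin-colour-gated segmentation idea that underlies this kind of RGB-D hand detection; the depth limit and HSV skin range below are rough illustrative values, not the paper's method.

      import numpy as np
      import cv2

      def hand_candidate_mask(bgr_image, depth_mm, max_depth_mm=900,
                              hsv_lower=(0, 30, 60), hsv_upper=(25, 180, 255)):
          """Return a boolean mask of pixels that are both near the sensor and skin-coloured."""
          near = (depth_mm > 0) & (depth_mm < max_depth_mm)                 # depth gate
          hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
          skin = cv2.inRange(hsv, np.array(hsv_lower, np.uint8),
                             np.array(hsv_upper, np.uint8)) > 0             # colour gate
          return near & skin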

  4. Investigation of two methods to quantify noise in digital images based on the perception of the human eye

    NASA Astrophysics Data System (ADS)

    Kleinmann, Johanna; Wueller, Dietmar

    2007-01-01

    Since the signal-to-noise measuring method standardized in the normative part of ISO 15739:2002(E) [1] does not quantify noise in a way that matches the perception of the human eye, two alternative methods have been investigated which may be appropriate for quantifying noise perception in a physiological manner: the model of visual noise measurement proposed by Hung et al. [2] (as described in the informative annex of ISO 15739:2002 [1]), which tries to simulate the process of human vision by using the opponent space and contrast sensitivity functions and uses the CIE L*u*v* 1976 colour space for the determination of a so-called visual noise value; and the S-CIELab model and CIEDE2000 colour difference proposed by Fairchild et al. [3], which simulates human vision in approximately the same way as Hung et al. [2] but then applies an image comparison based on CIEDE2000. With a psychophysical experiment based on just noticeable differences (JND), threshold images could be defined, with which the two approaches mentioned above were tested. The assumption is that if a method is valid, the different threshold images should receive the same 'noise value'. The visual noise measurement model results in similar visual noise values for all the threshold images; the method is therefore reliable for quantifying at least the JND for noise in uniform areas of digital images. While the visual noise measurement model can only evaluate uniform colour patches in images, the S-CIELab model can be used on images with spatial content as well. The S-CIELab model also results in similar colour difference values for the set of threshold images, but with some limitations: for images which contain spatial structures besides the noise, the colour difference varies depending on the contrast of the spatial content.
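
    A loose, hedged sketch of an S-CIELab-style image comparison using scikit-image: a Gaussian blur stands in for the opponent-space contrast-sensitivity filtering of the real model, and the CIEDE2000 difference is then averaged over the image. The function name and the sigma value are illustrative assumptions.

      import numpy as np
      from skimage.color import rgb2lab, deltaE_ciede2000
      from skimage.filters import gaussian

      def mean_colour_difference(rgb_reference, rgb_test, sigma=1.5):
          """Both inputs are float RGB images in [0, 1] with shape (h, w, 3)."""
          blur = lambda im: gaussian(im, sigma=sigma, channel_axis=-1)   # crude stand-in for CSF filtering
          lab_ref = rgb2lab(blur(rgb_reference))
          lab_test = rgb2lab(blur(rgb_test))
          return float(np.mean(deltaE_ciede2000(lab_ref, lab_test)))     # average perceptual difference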

  5. Could digital imaging be an alternative for digital colorimeters?

    PubMed

    Caglar, Alper; Yamanel, Kivanc; Gulsahi, Kamran; Bagis, Bora; Ozcan, Mutlu

    2010-12-01

    This study evaluated the colour parameters of composite and ceramic shade guides determined using a colorimeter and a digital imaging method with illuminants at different colour temperatures. Two different resin composite shade guides, namely Charisma (Heraeus Kulzer) and Premise (Kerr Corporation), and two different ceramic shade guides, Vita Lumin Vacuum (VITA Zahnfabrik) and Noritake (Noritake Co.), were evaluated at three different illuminant colour temperatures (2,700 K, combined 2,700-6,500 K, and 6,500 K). Ten shade tabs (A1, A2, A3, A3.5, A4, B1, B2, B3, C2 and C3) were selected from each shade guide. CIE Lab values were obtained using digital imaging and a colorimeter (ShadeEye NCC Dental Chroma Meter, Shofu Inc.). The data were analysed using two-way ANOVA and Pearson's correlation. While the mean L* values of both composite and ceramic shade guides were not affected by the colour temperature, L* values obtained with the colorimeter were significantly lower than those of the digital imaging method (p < 0.01). At the combined 2,700-6,500 K colour temperature, the mean a* values obtained from the colorimeter and from digital imaging did not show significant differences (p > 0.05). For both composite and ceramic shade guides, L* and b* values obtained from the colorimeter and the digital imaging method showed a high level of correlation. High correlations were also obtained for a* values in all shade guides except the Charisma composite shade guide. The digital imaging method could be an alternative to colorimeters provided that a proper object-camera distance, appropriate digital camera settings and suitable illumination conditions can be ensured. However, variations in shade guides, especially for composites, may affect the correlation.

  6. Spatial harmonics and pattern specification in early Drosophila development. Part II. The four colour wheels model.

    PubMed

    Kauffman, S A; Goodwin, B C

    1990-06-07

    We review the evidence presented in Part I showing that transcripts and protein products of maternal, gap, pair-rule, and segment polarity genes exhibit increasingly complex, multipeaked longitudinal waveforms in the early Drosophila embryo. The central problem we address in Part II is the use the embryo makes of these wave forms to specify longitudinal pattern. Based on the fact that mutants of many of these genes generate deletions and mirror symmetrical duplications of pattern elements on length scales ranging from about half the egg to within segments, we propose that position is specified by measuring a "phase angle" by use of the ratios of two or more variables. Pictorially, such a phase angle can be thought of as a colour on a colour wheel. Any such model contains a phaseless singularity where all or many phases, or colours, come together. We suppose as well that positional values sufficiently close to the singularity are meaningless, hence a "dead zone". Duplications and deletions are accounted for by deformation of the cycle of morphogen values occurring along the antero-posterior axis. If the cycle of values surrounds the singularity and lies outside the dead zone, pattern is normal. If the curve transects the dead zone, pattern elements are deleted. If the curve lies entirely on one side of the singularity, pattern elements are deleted and others are duplicated with mirror symmetry. The existence of different wavelength transcript patterns in maternal, gap, pair-rule, and segment polarity genes and the roles of those same genes in generating deletions and mirror symmetrical duplications on a variety of length scales lead us to propose that position is measured simultaneously on at least four colour wheels, which cycle different numbers of times along the anterior-posterior axis. These yield progressively finer grained positional information. Normal pattern specification requires a unique angle, outside of the dead zone, from each of the four wheels. Deformations of the cycle of gene product concentrations yield the deletions and mirror symmetric duplications observed in the mutants discussed. The alternative familiar hypothesis that longitudinal position is specified in an "on" "off" combinatorial code does not readily account for the duplication deletion phenomena.
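
    A toy numerical illustration of the phase-angle reading described above, with two invented, quarter-period-shifted morphogen waveforms: position maps to an angle on the "colour wheel", and points too close to the phaseless singularity fall in the dead zone. All values are placeholders, not the paper's model.

      import numpy as np

      x = np.linspace(0.0, 1.0, 200)        # normalised antero-posterior position
      m1 = np.cos(2 * np.pi * x)            # first morphogen waveform (toy)
      m2 = np.sin(2 * np.pi * x)            # second waveform, shifted by a quarter period (toy)

      phase = np.arctan2(m2, m1)            # "colour wheel" angle read from the two values
      radius = np.hypot(m1, m2)             # distance from the phaseless singularity
      dead_zone = radius < 0.2              # positional values too close to the singularity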

  7. Reconstruction of the absorption spectrum of an object spot from the colour values of the corresponding pixel(s) in its digital image: the challenge of algal colours.

    PubMed

    Coltelli, Primo; Barsanti, Laura; Evangelista, Valter; Frassanito, Anna Maria; Gualtieri, Paolo

    2016-12-01

    A novel procedure for deriving the absorption spectrum of an object spot from the colour values of the corresponding pixel(s) in its image is presented. Any digital image acquired by a microscope can be used; typical applications are the analysis of cellular/subcellular metabolic processes under physiological conditions and in response to environmental stressors (e.g. heavy metals), and the measurement of chromophore composition, distribution and concentration in cells. In this paper, we challenged the procedure with images of algae, acquired by means of a CCD camera mounted onto a microscope. The many colours algae display result from combinations of chromophores whose spectroscopic information is limited to organic solvent extracts, which suffer from displacements, amplifications and contraction/dilatation with respect to spectra recorded inside the cell. Hence, preliminary processing is necessary, which consists of in vivo measurement of the absorption spectra of photosynthetic compartments of algal cells and determination of the spectra of the single chromophores inside the cell. The final step of the procedure consists in the reconstruction of the absorption spectrum of the cell spot from the colour values of the corresponding pixel(s) in its digital image by minimization of a system of transcendental equations based on the absorption spectra of the chromophores under physiological conditions. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
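
    A hedged, simplified sketch of the final reconstruction step: chromophore concentrations are found by least-squares minimization of the mismatch between a pixel's measured channel values and the values predicted from Beer-Lambert-style mixing of in-vivo chromophore spectra. The Gaussian spectra, random channel sensitivities and solver choice below are placeholders, not the authors' system of equations.

      import numpy as np
      from scipy.optimize import least_squares

      wavelengths = np.linspace(400, 700, 61)
      # Toy in-vivo absorption spectra of three chromophores (Gaussian placeholders).
      chromophores = np.stack([np.exp(-((wavelengths - centre) / 40.0) ** 2)
                               for centre in (435, 545, 675)])
      sensitivities = np.random.default_rng(1).random((3, 61))      # toy R, G, B channel sensitivities

      def predicted_rgb(concentrations):
          absorbance = concentrations @ chromophores                 # Beer-Lambert style mixing
          transmitted = 10.0 ** (-absorbance)
          return sensitivities @ transmitted                         # camera channel responses

      target_rgb = predicted_rgb(np.array([0.4, 0.8, 0.2]))          # simulate one pixel's colour
      fit = least_squares(lambda c: predicted_rgb(c) - target_rgb,
                          x0=np.full(3, 0.5), bounds=(0.0, np.inf))
      print(fit.x)                                                   # recovered chromophore concentrations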

  8. Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Liu, Bao-Lei; Yang, Zhao-Hua; Liu, Xia; Wu, Ling-An

    2017-02-01

    We propose and demonstrate a computational imaging technique that uses structured illumination based on a two-dimensional discrete cosine transform to perform imaging with a single-pixel detector. A scene is illuminated by a projector with two sets of orthogonal patterns, then by applying an inverse cosine transform to the spectra obtained from the single-pixel detector a full-colour image is retrieved. This technique can retrieve an image from sub-Nyquist measurements, and the background noise is easily cancelled to give excellent image quality. Moreover, the experimental set-up is very simple.
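
    A toy numerical check of the underlying single-pixel DCT scheme: one bucket value is recorded per projected 2D-DCT basis pattern, and an inverse DCT then recovers the scene exactly. The non-negative splitting of patterns used by a real projector (the two orthogonal pattern sets mentioned in the abstract) and the colour channels are omitted here.

      import numpy as np
      from scipy.fft import idctn

      rng = np.random.default_rng(0)
      scene = rng.random((16, 16))                           # unknown object (toy, single channel)
      n = scene.shape[0]

      measurements = np.zeros((n, n))
      for u in range(n):
          for v in range(n):
              basis = np.zeros((n, n))
              basis[u, v] = 1.0
              pattern = idctn(basis, norm="ortho")           # projected 2D-DCT basis pattern
              measurements[u, v] = np.sum(pattern * scene)   # single-pixel ("bucket") reading

      recovered = idctn(measurements, norm="ortho")          # inverse DCT reconstruction
      print(np.allclose(recovered, scene))                   # True: exact recovery with a full basis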

  9. Colour blindness does not preclude fame as an artist: celebrated Australian artist Clifton Pugh was a protanope.

    PubMed

    Cole, Barry L; Harris, Ross W

    2009-09-01

    The aim was to make a posthumous diagnosis of the abnormal colour vision of the acclaimed artist Clifton Pugh and to analyse his use of colours to discern the strategies he used to overcome his limited colour perception. A pedigree of Pugh's family was constructed by searching public records. Pugh had no daughters but he had two older brothers, one of whom was still living. We tested the colour vision of this brother and one of his daughters and one of his grandsons. Three children of the other brother were questioned about the colour vision of their father and one daughter was tested for heterozygosity with the Medmont C100. Four observers with normal colour vision categorised the colours used by Pugh in a sample of 59 of his paintings. Protanopic transformations of some of these paintings were made using the Vischeck algorithms to gain an appreciation of how Pugh saw his own paintings. The validity of the transformations was tested by asking a protanope to report if the transformations looked the same as the normal colour images of 10 of Pugh's paintings. Pugh's brother was a severe protan. His daughter showed Schmidt's sign and was a carrier of the protan gene and her son was a protanope. The oldest brother was reported as having normal colour vision. Therefore, it is almost certain that Clifton Pugh was a protanope. Pugh used all colours in his paintings but preferred to structure them on brown, black and blue or, for high key paintings, on cream or flesh colours. He used greens and purples sparingly. The protanopic Vischeck transformations did not always look the same as the normal colour image for the protanope observer. A severe colour vision deficiency does not preclude success as a painter. It is a handicap but there are strategies artists can use to overcome it.

  10. Phase shifting white light interferometry using colour CCD for optical metrology and bio-imaging applications

    NASA Astrophysics Data System (ADS)

    Upputuri, Paul Kumar; Pramanik, Manojit

    2018-02-01

    Phase shifting white light interferometry (PSWLI) has been widely used for optical metrology applications because of its precision, reliability, and versatility. White light interferometry using a monochrome CCD makes the measurement process slow for metrology applications. WLI integrated with a Red-Green-Blue (RGB) CCD camera is finding imaging applications in the fields of optical metrology and bio-imaging. Wavelength-dependent refractive index profiles of biological samples were computed from colour white light interferograms. In recent years, whole-field refractive index profiles of red blood cells (RBCs), onion skin, fish cornea, etc. were measured from RGB interferograms. In this paper, we discuss the bio-imaging applications of colour CCD based white light interferometry. The approach makes the measurement faster, easier, cost-effective, and even dynamic by using single-fringe analysis methods, for industrial applications.

  11. Follow-up of solar lentigo depigmentation with a retinaldehyde-based cream by clinical evaluation and calibrated colour imaging.

    PubMed

    Questel, E; Durbise, E; Bardy, A-L; Schmitt, A-M; Josse, G

    2015-05-01

    To assess an objective method for evaluating the effects of a retinaldehyde-based cream (RA-cream) on solar lentigines, 29 women applied RA-cream on lentigines of one hand and a control cream on the other, assigned at random, once daily for 3 months. A specific method enabling a reliable visualisation of the lesions was proposed, using high-magnification colour-calibrated camera imaging. Assessment was performed using clinical evaluation by Physician Global Assessment score and image analysis. Luminance determination on the numeric images was performed either on the basis of consensus borders drawn by five independent experts or by probability map analysis via an algorithm automatically detecting the pigmented area. Both image analysis methods showed a similar lightening of ΔL* = 2 after a 3-month treatment with RA-cream, in agreement with single-blind clinical evaluation. High-magnification colour-calibrated camera imaging combined with probability map analysis is a fast and precise method to follow lentigo depigmentation. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. It is all in the face: carotenoid skin coloration loses attractiveness outside the face.

    PubMed

    Lefevre, C E; Ewbank, M P; Calder, A J; von dem Hagen, E; Perrett, D I

    2013-01-01

    Recently, the importance of skin colour for facial attractiveness has been recognized. In particular, dietary carotenoid-induced skin colour has been proposed as a signal of health and therefore attractiveness. While perceptual results are highly consistent, it is currently not clear whether carotenoid skin colour is preferred because it poses a cue to current health condition in humans or whether it is simply seen as a more aesthetically pleasing colour, independently of skin-specific signalling properties. Here, we tested this question by comparing attractiveness ratings of faces to corresponding ratings of meaningless scrambled face images matching the colours and contrasts found in the face. We produced sets of face and non-face stimuli with either healthy (high-carotenoid coloration) or unhealthy (low-carotenoid coloration) colour and asked participants for attractiveness ratings. Results showed that, while for faces increased carotenoid coloration significantly improved attractiveness, there was no equivalent effect on perception of scrambled images. These findings are consistent with a specific signalling system of current condition through skin coloration in humans and indicate that preferences are not caused by sensory biases in observers.

  13. Screening for diabetic retinopathy using digital colour photography and oral fluorescein angiography.

    PubMed

    Newsom, R; Moate, B; Casswell, T

    2000-08-01

    To evaluate digital colour photography and oral fluorescein angiography (OFA) for diabetic retinopathy screening. Thirty-seven patients were selected from either a diabetic retinopathy screening or a medical retina clinic. Three 45 degrees colour digital images and a single macula 45 degrees OFA image were taken from each eye. Standard seven-field stereo photography with ETDRS grading was used as a gold standard for data comparison. The images were assessed by two graders and the results of each method compared using the McNemar test. Five eyes had no diabetic retinopathy, 50 had background diabetic retinopathy, 3 had pre-proliferative diabetic retinopathy, 11 had proliferative disease and 3 had quiescent posttreatment disease. Clinically significant macular oedema was present in 25 eyes and absent in 48. For grading diabetic retinopathy digital colour photography produced a sensitivity of 0.87 (specificity 0.83); OFA produced a sensitivity of 0.87 (specificity 0.80) (p = 0.1). For the detection of diabetic maculopathy, the sensitivity of digital colour photography was 0.48 (specificity of 0.95) and for OFA was 0.87 (specificity 0.87) (p < 0.01). This pilot study has shown that both digital colour photography and OFA compare well with conventional methods for diabetic retinopathy screening. The results encourage the further evaluation of OFA in the screening for diabetic maculopathy.

  14. A Bayesian Model of the Memory Colour Effect.

    PubMed

    Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R

    2018-01-01

    According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects.
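
    For readers unfamiliar with this kind of model, the sketch below shows the standard Gaussian cue-combination calculation it rests on: the perceived hue of a colourimetrically grey object is the precision-weighted average of the sensory estimate and a prior centred on the typical colour. The numerical values are purely illustrative and are not taken from the paper.

        # Gaussian prior-likelihood combination (illustrative values only).
        def posterior_mean(sensory, sigma_sensory, prior, sigma_prior):
            """Precision-weighted average of a sensory estimate and a colour prior."""
            w = (1 / sigma_sensory**2) / (1 / sigma_sensory**2 + 1 / sigma_prior**2)
            return w * sensory + (1 - w) * prior

        # Hypothetical numbers: typical yellowness of the object is +10 chroma units,
        # the stimulus is physically grey (0), and sensory noise is moderate.
        perceived = posterior_mean(sensory=0.0, sigma_sensory=3.0, prior=10.0, sigma_prior=8.0)
        print(perceived)   # > 0: the grey object appears pulled slightly towards its typical colour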

  15. A Bayesian Model of the Memory Colour Effect

    PubMed Central

    Olkkonen, Maria; Gegenfurtner, Karl R.

    2018-01-01

    According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects. PMID:29760874

  16. Effects of hair sprays on colour perception: a hyperspectral imaging approach to shine and chroma on heads.

    PubMed

    Puccetti, G; Thompson, W

    2017-04-01

    Hair sprays apply fixative ingredients to provide hold to a hair style as well as weather resistance and optical properties such as shine. Generally, sprays distribute fine particles containing polymeric ingredients to form a thin film on the surface of hair. Different hair types require different strengths of the formed deposit on the hair surface. The present study shows how sprays also alter the visibility of the hair colour by altering the surface topology of the hair fibres. Hyperspectral imaging is used to map spectral characteristics of hair on mannequins and panelists over the curvature of heads. Spectral and spatial characteristics are measured before and after hair spray applications. The hair surface is imaged by SEM to visualize the degree of cuticle coverage. Finally, the perception of hair colour was evaluated on red-coloured mannequins by consumer questionnaire. Hair sprays deposit different amounts of fixative, which lead to a progressive levelling of the natural cuticle tilt angle with respect to the fibre axis. As a result, shine progressively shifts towards the region of hair colour visibility and decreases the perceived colour of hair seen by consumers. Lighter sprays show thinner film formation on the hair surface and less of a shine shift than strong-hold hair sprays. Hair sprays are generally employed for hair style hold and weather resistance and are considered to have no effect on hair colour. Our approach shows that spray-deposited films can affect colour perception by altering the microstructure of the hair surface. Thin films deposited on the hair fibre surface can partially fill gaps between cuticles, which reduces the natural cuticle angle. This partial erasure results in an angular shift of the shine regions towards the angle of internal reflection, thus decreasing the perceived hair colour regions as experienced by a group of consumers. © 2016 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  17. Principal components colour display of ERTS imagery

    NASA Technical Reports Server (NTRS)

    Taylor, M. M.

    1974-01-01

    In the technique presented, colours are not derived from single bands, but rather from independent linear combinations of the bands. Using a simple model of the processing done by the visual system, three informationally independent linear combinations of the four ERTS bands are mapped onto the three visual colour dimensions of brightness, redness-greenness and blueness-yellowness. The technique permits user-specific transformations which enhance particular features, but this is not usually needed, since a single transformation provides a picture which conveys much of the information implicit in the ERTS data. Examples of experimental vector images with matched individual band images are shown.
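
    A generic version of this idea, decorrelating four bands and mapping the three leading components to display channels, can be sketched as below; it uses plain principal components rather than the paper's visual-system-based transformation, and the input array is a stand-in.

        # Generic PCA false-colour display of four-band imagery (illustrative only).
        import numpy as np

        bands = np.random.rand(4, 512, 512)            # stand-in for four ERTS bands
        X = bands.reshape(4, -1)                       # pixels as columns
        Xc = X - X.mean(axis=1, keepdims=True)
        eigvals, eigvecs = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
        pcs = eigvecs[:, ::-1][:, :3].T @ Xc           # three leading components

        # Stretch each component to 0..255 and stack as a false-colour image
        rgb = np.stack([np.interp(pc, (pc.min(), pc.max()), (0, 255)) for pc in pcs])
        rgb = rgb.reshape(3, 512, 512).transpose(1, 2, 0).astype(np.uint8)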

  18. Spaceborne Ocean Intelligence Network: SOIN - Fiscal Year 08/09 Year-End Summary

    DTIC Science & Technology

    2009-09-01

    of commercial satellite image acquisition planning]) and BT 4 (ocean features from SAR imagery). BT 2 is complete at... colour assignment (r, g, or b). If a band is assigned to two colours, then two of r, g, or b are displayed next to the DN. If a band is assigned to... a DN is displayed for each r, g and b colour assignment, even if they are the same band. • Sigma-Nought: only works for RADARSAT-1 and RADARSAT-2

  19. [The development of the skin-optical perception of color and images in blind schoolchildren on an "internal visual screen"].

    PubMed

    Mizrakhi, V M; Protsiuk, R G

    2000-03-01

    In profound impairment of vision, the perception of colour and of seen objects is absent, and the person is unable to orient himself in space. The sensory sensations of colour that were uncovered allowed their use in training the blind to recognize the colour of paper, fabric, etc. Further study of those who have become blind will, we believe, help in finding eligible people and relevant approaches to educating the blind, developing the trainee's ability to recognize images on the "inner visual screen".

  20. Filling schemes of silver dots inkjet-printed on pixelated nanostructured surfaces

    NASA Astrophysics Data System (ADS)

    Alan, Sheida; Jiang, Hao; Shahbazbegian, Haleh; Patel, Jasbir N.; Kaminska, Bozena

    2017-03-01

    Recently, our group demonstrated an inkjet-based technique to enable high-throughput, versatile and full-colour printing of structural colours on generic pixelated nanostructures, termed molded ink on nanostructured surfaces. The printed colours are controlled by the area of printed silver on the pixelated red, green and blue polymer nanostructure arrays. This paper investigates the behaviour of jetted silver ink droplets on nanostructured surfaces and the microscale dot patterns implemented during the printing process, for achieving accurate and consistent colours in the printed images. The surface wettability and the schemes of filling silver dots inside the subpixels are crucial to the quality of printed images. Several related concepts and definitions are introduced, such as filling ratio, full dots per subpixel (DPSP), number of printable colours, colour leaking and dot merging. In our experiments, we first chemically modified the surface to control the wettability and dot size. For each type of modified surface, various filling schemes were tested and the printed results were evaluated with comprehensive consideration of the number of printable colours and the negative effects of colour leaking and dot merging. Rational selection of the best filling scheme resulted in a 2-line filling scheme using 20 μm dot spacing and line spacing, capable of printing 9261 different colours at a display resolution of 121 pixels per inch on a low-wettability surface. This study is of vital importance for scaling up the printing technique in industrial applications and provides meaningful insights for inkjet-printing on nanostructures.
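
    A small consistency check on the reported figure (our reading, not stated explicitly in the abstract): 9261 = 21^3, which is what one obtains if each of the red, green and blue subpixels can hold between 0 and 20 full silver dots, i.e. 21 fill levels per primary.

        # Back-of-envelope check of the number of printable colours (assumption:
        # 21 discrete fill levels per subpixel, one level per possible dot count).
        levels_per_subpixel = 21
        print(levels_per_subpixel ** 3)   # 9261, matching the reported figure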

  1. Influence of Texture and Colour in Breast TMA Classification

    PubMed Central

    Fernández-Carrobles, M. Milagro; Bueno, Gloria; Déniz, Oscar; Salido, Jesús; García-Rojo, Marcial; González-López, Lucía

    2015-01-01

    Breast cancer diagnosis is still done by observation of biopsies under the microscope. The development of automated methods for breast TMA classification would reduce diagnostic time. This paper is a step towards the solution of this problem and presents a complete study of breast TMA classification based on colour models and texture descriptors. The TMA images were divided into four classes: i) benign stromal tissue with cellularity, ii) adipose tissue, iii) benign and benign anomalous structures, and iv) ductal and lobular carcinomas. A relevant set of features was obtained on eight different colour models from first- and second-order Haralick statistical descriptors obtained from the intensity image, and from Fourier, Wavelet, multiresolution Gabor, M-LBP and texton descriptors. Furthermore, four types of classification experiments were performed using six different classifiers: (1) classification per colour model individually, (2) classification by combination of colour models, (3) classification by combination of colour models and descriptors, and (4) classification by combination of colour models and descriptors with a previous feature set reduction. The best result shows an average of 99.05% accuracy and 98.34% positive predictive value. These results were obtained by means of a bagging tree classifier with a combination of six colour models and the use of 1719 non-correlated (correlation threshold of 97%) textural features based on statistical, M-LBP, Gabor and spatial texton descriptors. PMID:26513238
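
    The correlation-based feature pruning mentioned above (keeping only features whose pairwise correlation stays below the 97% threshold) can be sketched as follows; the feature matrix is hypothetical and the greedy strategy is one common choice, not necessarily the authors'.

        # Greedy correlation-threshold feature pruning (illustrative).
        import numpy as np

        def drop_correlated(X, threshold=0.97):
            """Keep a feature only if its |Pearson r| with every kept feature is below threshold."""
            corr = np.abs(np.corrcoef(X, rowvar=False))
            keep = []
            for j in range(X.shape[1]):
                if all(corr[j, k] < threshold for k in keep):
                    keep.append(j)
            return X[:, keep], keep

        X = np.random.rand(200, 500)          # e.g. Haralick/Gabor/M-LBP/texton features
        X_reduced, kept = drop_correlated(X)
        print(len(kept), 'features retained')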

  2. Coupled dictionary learning for joint MR image restoration and segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Xuesong; Fan, Yong

    2018-03-01

    To achieve better segmentation of MR images, image restoration is typically used as a preprocessing step, especially for low-quality MR images. Recent studies have demonstrated that dictionary learning methods could achieve promising performance for both image restoration and image segmentation. These methods typically learn paired dictionaries of image patches from different sources and use a common sparse representation to characterize paired image patches, such as low-quality image patches and their corresponding high quality counterparts for the image restoration, and image patches and their corresponding segmentation labels for the image segmentation. Since learning these dictionaries jointly in a unified framework may improve the image restoration and segmentation simultaneously, we propose a coupled dictionary learning method to concurrently learn dictionaries for joint image restoration and image segmentation based on sparse representations in a multi-atlas image segmentation framework. Particularly, three dictionaries, including a dictionary of low quality image patches, a dictionary of high quality image patches, and a dictionary of segmentation label patches, are learned in a unified framework so that the learned dictionaries of image restoration and segmentation can benefit each other. Our method has been evaluated for segmenting the hippocampus in MR T1 images collected with scanners of different magnetic field strengths. The experimental results have demonstrated that our method achieved better image restoration and segmentation performance than state of the art dictionary learning and sparse representation based image restoration and image segmentation methods.

  3. Remote sensing image segmentation based on Hadoop cloud platform

    NASA Astrophysics Data System (ADS)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

    To address the slow speed and poor real-time performance of remote sensing image segmentation, this paper studies remote sensing image segmentation on the Hadoop platform. On the basis of an analysis of the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, this paper proposes an image segmentation method that combines OpenCV with the Hadoop cloud platform. Firstly, the MapReduce image processing model for the Hadoop cloud platform is designed, the image input and output are customized, and the splitting method for the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, a segmentation experiment is carried out on a remote sensing image, and the same experiment is repeated with a MATLAB implementation of the Mean Shift algorithm for comparison. The experimental results show that, while maintaining good segmentation quality, the segmentation speed on the Hadoop cloud platform is greatly improved compared with the single-machine MATLAB implementation, and the effectiveness of image segmentation is also improved.
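
    The Mean Shift step itself, which each map task would apply to its image tile, can be sketched with OpenCV as below; the Hadoop/MapReduce input-output customisation is deliberately omitted, and the file name and radii are placeholders.

        # Mean Shift filtering of one image tile (the per-task work in the pipeline).
        import cv2

        def segment_tile(tile_bgr, spatial_radius=21, colour_radius=30):
            """Edge-preserving Mean Shift filtering of a BGR tile."""
            return cv2.pyrMeanShiftFiltering(tile_bgr, spatial_radius, colour_radius)

        tile = cv2.imread('remote_sensing_tile.png')      # hypothetical input tile
        if tile is not None:
            cv2.imwrite('tile_segmented.png', segment_tile(tile))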

  4. Fast polarimetric dehazing method for visibility enhancement in HSI colour space

    NASA Astrophysics Data System (ADS)

    Zhang, Wenfei; Liang, Jian; Ren, Liyong; Ju, Haijuan; Bai, Zhaofeng; Wu, Zhaoxin

    2017-09-01

    Image haze removal has attracted much attention in the optics and computer vision fields in recent years due to its wide applications. In particular, fast and real-time dehazing methods are of significance. In this paper, we propose a fast dehazing method in the hue, saturation and intensity colour space based on the polarimetric imaging technique. We implement the polarimetric dehazing method in the intensity channel, and the colour distortion of the image is corrected using the white patch retinex method. This method not only preserves the capacity to restore detailed information, but also improves the efficiency of the polarimetric dehazing method. Comparison studies with state-of-the-art methods demonstrate that the proposed method obtains results of equal or better quality while running much faster. The proposed method is promising for real-time image and video haze removal applications.
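
    The colour-correction half of the pipeline can be illustrated with a simple white patch retinex, which rescales each channel so that its bright "white patch" maps to full intensity; the percentile-based white estimate is a common robust variant, and the polarimetric dehazing of the intensity channel is not reproduced here.

        # White patch retinex colour correction (robust percentile variant, illustrative).
        import numpy as np

        def white_patch_retinex(rgb, percentile=99):
            """Scale each channel so its bright 'white patch' maps to full intensity."""
            rgb = rgb.astype(np.float64)
            white = np.percentile(rgb, percentile, axis=(0, 1))   # per-channel white level
            return np.clip(rgb / white, 0.0, 1.0)

        dehazed = np.random.rand(480, 640, 3)       # stand-in for the dehazed-intensity image
        corrected = white_patch_retinex(dehazed)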

  5. Reconstructive colour X-ray diffraction imaging--a novel TEDDI imaging method.

    PubMed

    Lazzari, Olivier; Jacques, Simon; Sochi, Taha; Barnes, Paul

    2009-09-01

    Tomographic Energy-Dispersive Diffraction Imaging (TEDDI) enables a unique non-destructive mapping of the interior of bulk objects, exploiting the full range of X-ray signals (diffraction, fluorescence, scattering, background) recorded. By analogy to optical imaging, a wide variety of features (structure, composition, orientation, strain) dispersed in X-ray wavelengths can be extracted and colour-coded to aid interpretation. The ultimate aim of this approach is to realise real-time high-definition colour X-ray diffraction imaging, on the timescales of seconds, so that one will be able to 'look inside' optically opaque apparatus and unravel the space/time-evolution of the materials chemistry taking place. This will impact strongly on many fields of science but there are currently two barriers to this goal: speed of data acquisition (a 2D scan currently takes minutes to hours) and loss of image definition through spatial distortion of the X-ray sampling volume. Here we present a data-collection scenario and reconstruction routine which overcomes the latter barrier and which has been successfully applied to a phantom test object and to real materials systems such as a carbonating cement block. These procedures are immediately transferable to the promising technology of multi-energy-dispersive-detector-arrays which are planned to deliver the other breakthrough, that of one-two orders of magnitude improvement in data acquisition rates, that will be needed to realise real-time high-definition colour X-ray diffraction imaging.

  6. Analysis and classification of commercial ham slice images using directional fractal dimension features.

    PubMed

    Mendoza, Fernando; Valous, Nektarios A; Allen, Paul; Kenny, Tony A; Ward, Paddy; Sun, Da-Wen

    2009-02-01

    This paper presents a novel and non-destructive approach to the appearance characterization and classification of commercial pork, turkey and chicken ham slices. Ham slice images were modelled using directional fractal (DF(0°;45°;90°;135°)) dimensions and a minimum distance classifier was adopted to perform the classification task. Also, the role of different colour spaces and the resolution level of the images on DF analysis were investigated. This approach was applied to 480 wafer-thin ham slices from four types of hams (120 slices per type): i.e., pork (cooked and smoked), turkey (smoked) and chicken (roasted). DF features were extracted from digitized intensity images in greyscale, and R, G, B, L*, a*, b*, H, S, and V colour components for three image resolution levels (100%, 50%, and 25%). Results show that in spite of the complexity and high variability in colour and texture appearance, the modelling of ham slice images with DF dimensions allows the capture of differentiating textural features between the four commercial ham types. Independent DF features entail better discrimination than using the average of the four directions. However, DF dimensions reveal a high sensitivity to colour channel, orientation and image resolution in the fractal analysis. The classification accuracy using six DF dimension features (a*(90°), a*(135°), H(0°), H(45°), S(0°), H(90°)) was 93.9% for training data and 82.2% for testing data.
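
    The minimum distance classifier used for the final decision is simple enough to sketch directly; the six-dimensional feature vectors below are placeholders for the selected DF features, and the class labels stand for the four ham types.

        # Minimum distance (nearest class mean) classifier on DF feature vectors.
        import numpy as np

        def fit_class_means(features, labels):
            classes = np.unique(labels)
            return classes, np.array([features[labels == c].mean(axis=0) for c in classes])

        def predict_min_distance(features, classes, means):
            d = np.linalg.norm(features[:, None, :] - means[None, :, :], axis=2)
            return classes[d.argmin(axis=1)]

        X_train, y_train = np.random.rand(360, 6), np.repeat(np.arange(4), 90)   # placeholders
        X_test = np.random.rand(120, 6)
        classes, means = fit_class_means(X_train, y_train)
        predictions = predict_min_distance(X_test, classes, means)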

  7. Oil droplets of bird eyes: microlenses acting as spectral filters

    PubMed Central

    Stavenga, Doekele G.; Wilts, Bodo D.

    2014-01-01

    An important component of the cone photoreceptors of bird eyes is the oil droplets located in front of the visual-pigment-containing outer segments. The droplets vary in colour and are transparent, clear, pale or rather intensely yellow or red owing to various concentrations of carotenoid pigments. Quantitative modelling of the filter characteristics using known carotenoid pigment spectra indicates that the pigments’ absorption spectra are modified by the high concentrations that are present in the yellow and red droplets. The high carotenoid concentrations not only cause strong spectral filtering but also a distinctly increased refractive index at longer wavelengths. The oil droplets therefore act as powerful spherical microlenses, effectively channelling the spectrally filtered light into the photoreceptor's outer segment, possibly thereby compensating for the light loss caused by the spectral filtering. The spectral filtering causes narrow-band photoreceptor spectral sensitivities, which are well suited for spectral discrimination, especially in birds that have feathers coloured by carotenoid pigments. PMID:24395968

  8. The selective preservation of colour naming in semantic dementia.

    PubMed

    Robinson, G; Cipolotti, L

    2001-01-01

    This paper documents a series of seven patients with semantic dementia who showed a selective preservation in colour naming. This was in the context of a pervasive impairment in naming nouns across a wide range of other semantic categories. To our knowledge, this is the first series of patients with semantic dementia documenting a selective preservation of colour naming. These findings are discussed in the light of current theoretical accounts of category-specific effects and the possible contribution of imageability to this selective preservation of colours.

  9. European dental students' opinions about visual and digital tooth colour determination systems.

    PubMed

    Dozic, Alma; Kharbanda, Aron K; Kamell, Hassib; Brand, Henk S

    2011-12-01

    The aim of the study was to investigate students' opinion about visual and digital tooth colour determination education at different European dental schools. A cross-sectional web-based survey was created, containing nine dichotomous, multiple choice and 5-point Likert scale questions. The questionnaire was distributed amongst students of 40 European dental schools. Seven hundred and ninety-nine completed questionnaires from students of 15 dental schools were analysed statistically. Vitapan Classical and Vitapan 3D-Master are the most frequently used visual determination systems at European dental schools. Most students responded with "neutral" regarding whether they find it easy to identify the colour of teeth with a visual determination system (range 2.8-3.6). A minority of the dental students had received education in digital imaging systems (2-47%). The Easyshade was the most frequently mentioned digital system. The majority of the students who did not receive education on digital systems would like to see this topic added to the curriculum (77-100%). The dental students who had worked with both methods found it significantly easier to determine tooth colour with a digital system than with a visual system (mean score 3.5 ± 0.8 vs. 3.0 ± 0.8). Tooth colour determination programmes show a considerable variation across European dental schools. Based upon the outcomes of this study, students prefer digital imaging systems over visual systems, and like to have (more) education about digital tooth colour imaging. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Bleed-through correction for rendering and correlation analysis in multi-colour localization microscopy

    PubMed Central

    Kim, Dahan; Curthoys, Nikki M.; Parent, Matthew T.; Hess, Samuel T.

    2015-01-01

    Multi-colour localization microscopy has enabled sub-diffraction studies of colocalization between multiple biological species and quantification of their correlation at length scales previously inaccessible with conventional fluorescence microscopy. However, bleed-through, or misidentification of probe species, creates false colocalization and artificially increases certain types of correlation between two imaged species, affecting the reliability of information provided by colocalization and quantified correlation. Despite the potential risk of these artefacts of bleed-through, neither the effect of bleed-through on correlation nor methods of its correction in correlation analyses has been systematically studied at typical rates of bleed-through reported to affect multi-colour imaging. Here, we present a reliable method of bleed-through correction applicable to image rendering and correlation analysis of multi-colour localization microscopy. Application of our bleed-through correction shows our method accurately corrects the artificial increase in both types of correlations studied (Pearson coefficient and pair correlation), at all rates of bleed-through tested, in all types of correlations examined. In particular, anti-correlation could not be quantified without our bleed-through correction, even at rates of bleed-through as low as 2%. Demonstrated with dichroic-based multi-colour FPALM here, our presented method of bleed-through correction can be applied to all types of localization microscopy (PALM, STORM, dSTORM, GSDIM, etc.), including both simultaneous and sequential multi-colour modalities, provided the rate of bleed-through can be reliably determined. PMID:26185614

  11. Bleed-through correction for rendering and correlation analysis in multi-colour localization microscopy.

    PubMed

    Kim, Dahan; Curthoys, Nikki M; Parent, Matthew T; Hess, Samuel T

    2013-09-01

    Multi-colour localization microscopy has enabled sub-diffraction studies of colocalization between multiple biological species and quantification of their correlation at length scales previously inaccessible with conventional fluorescence microscopy. However, bleed-through, or misidentification of probe species, creates false colocalization and artificially increases certain types of correlation between two imaged species, affecting the reliability of information provided by colocalization and quantified correlation. Despite the potential risk of these artefacts of bleed-through, neither the effect of bleed-through on correlation nor methods of its correction in correlation analyses has been systematically studied at typical rates of bleed-through reported to affect multi-colour imaging. Here, we present a reliable method of bleed-through correction applicable to image rendering and correlation analysis of multi-colour localization microscopy. Application of our bleed-through correction shows our method accurately corrects the artificial increase in both types of correlations studied (Pearson coefficient and pair correlation), at all rates of bleed-through tested, in all types of correlations examined. In particular, anti-correlation could not be quantified without our bleed-through correction, even at rates of bleed-through as low as 2%. Demonstrated with dichroic-based multi-colour FPALM here, our presented method of bleed-through correction can be applied to all types of localization microscopy (PALM, STORM, dSTORM, GSDIM, etc.), including both simultaneous and sequential multi-colour modalities, provided the rate of bleed-through can be reliably determined.
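
    The core idea, removing the expected contribution of the bleed-through channel before computing correlation, can be illustrated at the pixel level as below; the paper works with localization data and a more careful statistical correction, so this is only a schematic analogue with an assumed bleed-through rate.

        # Schematic pixel-level bleed-through subtraction (illustrative only).
        import numpy as np

        def correct_bleedthrough(img_a, img_b, alpha):
            """Remove the fraction alpha of channel A that is misassigned to channel B."""
            return img_a, np.clip(img_b - alpha * img_a, 0.0, None)

        rng = np.random.default_rng(0)
        A = rng.poisson(5.0, (256, 256)).astype(float)
        B_true = rng.poisson(5.0, (256, 256)).astype(float)   # independent of A
        B_meas = B_true + 0.02 * A                             # 2% bleed-through from A into B

        _, B_corr = correct_bleedthrough(A, B_meas, alpha=0.02)
        print(np.corrcoef(A.ravel(), B_meas.ravel())[0, 1],    # small spurious positive correlation
              np.corrcoef(A.ravel(), B_corr.ravel())[0, 1])    # restored to approximately zero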

  12. What Colour Is a Shadow?

    ERIC Educational Resources Information Center

    Hughes, S. W.

    2009-01-01

    What colour is a shadow? Black, grey, or some other colour? This article describes how to use a digital camera to test the hypothesis that a shadow under a clear blue sky has a blue tint. A white sheet of A4 paper was photographed in full sunlight and in shadow under a clear blue sky. The images were analysed using a shareware program called…

  13. Improved discrimination among similar agricultural plots using red-and-green-based pseudo-colour imaging

    NASA Astrophysics Data System (ADS)

    Doi, Ryoichi

    2016-04-01

    The effects of a pseudo-colour imaging method were investigated by discriminating among similar agricultural plots in remote sensing images acquired using the Airborne Visible/Infrared Imaging Spectrometer (Indiana, USA) and the Landsat 7 satellite (Fergana, Uzbekistan), and that provided by GoogleEarth (Toyama, Japan). From each dataset, red (R)-green (G)-R-G-blue yellow (RGrgbyB), and RGrgby-1B pseudo-colour images were prepared. From each, cyan, magenta, yellow, key black, L*, a*, and b* derivative grayscale images were generated. In the Airborne Visible/Infrared Imaging Spectrometer image, pixels were selected for corn no tillage (29 pixels), corn minimum tillage (27), and soybean (34) plots. Likewise, in the Landsat 7 image, pixels representing corn (73 pixels), cotton (110), and wheat (112) plots were selected, and in the GoogleEarth image, those representing soybean (118 pixels) and rice (151) were selected. When the 14 derivative grayscale images were used together with an RGB yellow grayscale image, the overall classification accuracy improved from 74 to 94% (Airborne Visible/Infrared Imaging Spectrometer), 64 to 83% (Landsat), or 77 to 90% (GoogleEarth). As an indicator of discriminatory power, the kappa significance improved 1018-fold (Airborne Visible/Infrared Imaging Spectrometer) or greater. The derivative grayscale images were found to increase the dimensionality and quantity of data. Herein, the details of the increases in dimensionality and quantity are further analysed and discussed.
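
    The derivative grayscale channels named above (C, M, Y, K, L*, a*, b*) can be generated from an RGB image as sketched below; the pseudo-colour band re-ordering that precedes this step in the paper is not reproduced, and the input image is a stand-in.

        # Derivative grayscale channels from an RGB image in [0, 1] (illustrative).
        import numpy as np
        from skimage import color

        def derivative_grayscales(rgb):
            k = 1.0 - rgb.max(axis=2)                    # key (black) channel
            denom = np.where(k < 1.0, 1.0 - k, 1.0)      # avoid division by zero on black pixels
            c = (1.0 - rgb[..., 0] - k) / denom
            m = (1.0 - rgb[..., 1] - k) / denom
            y = (1.0 - rgb[..., 2] - k) / denom
            lab = color.rgb2lab(rgb)
            return {'C': c, 'M': m, 'Y': y, 'K': k,
                    'L*': lab[..., 0], 'a*': lab[..., 1], 'b*': lab[..., 2]}

        channels = derivative_grayscales(np.random.rand(64, 64, 3))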

  14. The role of tone and segmental information in visual-word recognition in Thai.

    PubMed

    Winskel, Heather; Ratitamkul, Theeraporn; Charoensit, Akira

    2017-07-01

    Tone languages represent a large proportion of the spoken languages of the world and yet lexical tone is understudied. Thai offers a unique opportunity to investigate the role of lexical tone processing during visual-word recognition, as tone is explicitly expressed in its script. We used colour words and their orthographic neighbours as stimuli to investigate facilitation (Experiment 1) and interference (Experiment 2) Stroop effects. Five experimental conditions were created: (a) the colour word (e.g., ขาว /k h ã:w/ [white]), (b) tone different word (e.g., ข่าว /k h à:w/[news]), (c) initial consonant phonologically same word (e.g., คาว /k h a:w/ [fishy]), where the initial consonant of the word was phonologically the same but orthographically different, (d) initial consonant different, tone same word (e.g., หาว /hã:w/ yawn), where the initial consonant was orthographically different but the tone of the word was the same, and (e) initial consonant different, tone different word (e.g., กาว /ka:w/ glue), where the initial consonant was orthographically different, and the tone was different. In order to examine whether tone information per se had a facilitative effect, we also included a colour congruent word condition where the segmental (S) information was different but the tone (T) matched the colour word (S-T+) in Experiment 2. Facilitation/interference effects were found for all five conditions when compared with a neutral control word. Results of the critical comparisons revealed that tone information comes into play at a later stage in lexical processing, and orthographic information contributes more than phonological information.

  15. Digital imaging and image analysis applied to numerical applications in forensic hair examination.

    PubMed

    Brooks, Elizabeth; Comber, Bruce; McNaught, Ian; Robertson, James

    2011-03-01

    A method that provides objective data to complement the hair analysts' microscopic observations, which is non-destructive, would be of obvious benefit in the forensic examination of hairs. This paper reports on the use of objective colour measurement and image analysis techniques of auto-montaged images. Brown Caucasian telogen scalp hairs were chosen as a stern test of the utility of these approaches. The results show the value of using auto-montaged images and the potential for the use of objective numerical measures of colour and pigmentation to complement microscopic observations. 2010. Published by Elsevier Ireland Ltd. All rights reserved.

  16. Dramatic colour changes in a bird of paradise caused by uniquely structured breast feather barbules.

    PubMed

    Stavenga, Doekele G; Leertouwer, Hein L; Marshall, N Justin; Osorio, Daniel

    2011-07-22

    The breast-plate plumage of male Lawes' parotia (Parotia lawesii) produces dramatic colour changes when this bird of paradise displays on its forest-floor lek. We show that this effect is achieved not solely by the iridescence--that is an angular-dependent spectral shift of the reflected light--which is inherent in structural coloration, but is based on a unique anatomical modification of the breast-feather barbule. The barbules have a segmental structure, and in common with many other iridescent feathers, they contain stacked melanin rodlets surrounded by a keratin film. The unique property of the parotia barbules is their boomerang-like cross section. This allows each barbule to work as three coloured mirrors: a yellow-orange reflector in the plane of the feather, and two symmetrically positioned bluish reflectors at respective angles of about 30°. Movement during the parotia's courtship displays thereby achieves much larger and more abrupt colour changes than is possible with ordinary iridescent plumage. To our knowledge, this is the first example of multiple thin film or multi-layer reflectors incorporated in a single structure (engineered or biological). It nicely illustrates how subtle modification of the basic feather structure can achieve novel visual effects. The fact that the parotia's breast feathers seem to be specifically adapted to give much stronger colour changes than normal structural coloration implies that colour change is important in their courtship display.

  17. The objects of visuospatial short-term memory: Perceptual organization and change detection.

    PubMed

    Nikolova, Atanaska; Macken, Bill

    2016-01-01

    We used a colour change-detection paradigm where participants were required to remember colours of six equally spaced circles. Items were superimposed on a background so as to perceptually group them within (a) an intact ring-shaped object, (b) a physically segmented but perceptually completed ring-shaped object, or (c) a corresponding background segmented into three arc-shaped objects. A nonpredictive cue at the location of one of the circles was followed by the memory items, which in turn were followed by a test display containing a probe indicating the circle to be judged same/different. Reaction times for correct responses revealed a same-object advantage; correct responses were faster to probes on the same object as the cue than to equidistant probes on a segmented object. This same-object advantage was identical for physically and perceptually completed objects, but was only evident in reaction times, and not in accuracy measures. Not only, therefore, is it important to consider object-level perceptual organization of stimulus elements when assessing the influence of a range of factors (e.g., number and complexity of elements) in visuospatial short-term memory, but a more detailed picture of the structure of information in memory may be revealed by measuring speed as well as accuracy.

  18. The objects of visuospatial short-term memory: Perceptual organization and change detection

    PubMed Central

    Nikolova, Atanaska; Macken, Bill

    2016-01-01

    We used a colour change-detection paradigm where participants were required to remember colours of six equally spaced circles. Items were superimposed on a background so as to perceptually group them within (a) an intact ring-shaped object, (b) a physically segmented but perceptually completed ring-shaped object, or (c) a corresponding background segmented into three arc-shaped objects. A nonpredictive cue at the location of one of the circles was followed by the memory items, which in turn were followed by a test display containing a probe indicating the circle to be judged same/different. Reaction times for correct responses revealed a same-object advantage; correct responses were faster to probes on the same object as the cue than to equidistant probes on a segmented object. This same-object advantage was identical for physically and perceptually completed objects, but was only evident in reaction times, and not in accuracy measures. Not only, therefore, is it important to consider object-level perceptual organization of stimulus elements when assessing the influence of a range of factors (e.g., number and complexity of elements) in visuospatial short-term memory, but a more detailed picture of the structure of information in memory may be revealed by measuring speed as well as accuracy. PMID:26286369

  19. A feed-forward Hopfield neural network algorithm (FHNNA) with a colour satellite image for water quality mapping

    NASA Astrophysics Data System (ADS)

    Asal Kzar, Ahmed; Mat Jafri, M. Z.; Hwee San, Lim; Al-Zuky, Ali A.; Mutter, Kussay N.; Hassan Al-Saleh, Anwar

    2016-06-01

    Many techniques have been proposed for the water quality problem, but remote sensing techniques have proven successful, especially when artificial neural networks are used as mathematical models alongside them. The Hopfield neural network is a common, fast, simple and efficient type of artificial neural network, but it runs into difficulty when dealing with images that have more than two colours, such as remote sensing images. This work attempts to solve this problem by modifying the network to handle colour remote sensing images for water quality mapping. A Feed-forward Hopfield Neural Network Algorithm (FHNNA) was developed and used with a colour satellite image from the Thailand Earth Observation System (THEOS) for TSS mapping in the Penang Strait, Malaysia, through the classification of TSS concentrations. The new algorithm is based essentially on three modifications: using the HNN as a feed-forward network, considering the weights of bitplanes, and a non-self architecture (zero diagonal of the weight matrix); in addition, it depends on validation data. The resulting map was colour-coded for visual interpretation. The efficiency of the new algorithm is shown by the high correlation coefficient (R=0.979) and the low root mean square error (RMSE=4.301) obtained with the validation data, which were divided into two groups: one used for the algorithm and the other for validating the results. The comparison was with the minimum distance classifier. Therefore, TSS mapping of polluted water in the Penang Strait, Malaysia, can be performed using FHNNA with the remote sensing technique (THEOS). This is a new and useful application of the HNN and a new model, combined with remote sensing techniques, for water quality mapping, which is an important environmental problem.

  20. Brain MR image segmentation using NAMS in pseudo-color.

    PubMed

    Li, Hua; Chen, Chuanbo; Fang, Shaohong; Zhao, Shengrong

    2017-12-01

    Image segmentation plays a crucial role in various biomedical applications. In general, the segmentation of brain Magnetic Resonance (MR) images is mainly used to represent the image with several homogeneous regions instead of pixels for surgical analysis and planning. This paper proposes a new approach for segmenting MR brain images by using pseudo-color based segmentation with the Non-symmetry and Anti-packing Model with Squares (NAMS). First of all, the NAMS model is presented. The model can represent the image with sub-patterns that preserve the image content while largely reducing data redundancy. Second, the key idea is to convert the original gray-scale brain MR image into a pseudo-colored image and then segment the pseudo-colored image with the NAMS model. The pseudo-colored image can enhance the color contrast between different tissues in brain MR images, which improves both the precision of segmentation and the direct visual distinction between tissues. Experimental results indicate that, compared with other brain MR image segmentation methods, the proposed NAMS-based pseudo-color segmentation method performs better in terms of both segmentation precision and storage savings.
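
    The grayscale-to-pseudo-colour step on its own can be sketched with a standard colormap, as below; this only illustrates the contrast-enhancement idea and does not implement the NAMS sub-pattern representation, and the file name is a placeholder.

        # Grayscale MR slice to pseudo-colour via a standard colormap (illustrative).
        import cv2

        gray = cv2.imread('brain_mr_slice.png', cv2.IMREAD_GRAYSCALE)   # hypothetical slice
        if gray is not None:
            pseudo = cv2.applyColorMap(gray, cv2.COLORMAP_JET)          # boosts apparent tissue contrast
            cv2.imwrite('brain_mr_pseudocolour.png', pseudo)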

  1. Adaptive introgression across species boundaries in Heliconius butterflies.

    PubMed

    Pardo-Diaz, Carolina; Salazar, Camilo; Baxter, Simon W; Merot, Claire; Figueiredo-Ready, Wilsea; Joron, Mathieu; McMillan, W Owen; Jiggins, Chris D

    2012-01-01

    It is widely documented that hybridisation occurs between many closely related species, but the importance of introgression in adaptive evolution remains unclear, especially in animals. Here, we have examined the role of introgressive hybridisation in transferring adaptations between mimetic Heliconius butterflies, taking advantage of the recent identification of a gene regulating red wing patterns in this genus. By sequencing regions both linked and unlinked to the red colour locus, we found a region that displays an almost perfect genotype by phenotype association across four species, H. melpomene, H. cydno, H. timareta, and H. heurippa. This particular segment is located 70 kb downstream of the red colour specification gene optix, and coalescent analysis indicates repeated introgression of adaptive alleles from H. melpomene into the H. cydno species clade. Our analytical methods complement recent genome scale data for the same region and suggest adaptive introgression has a crucial role in generating adaptive wing colour diversity in this group of butterflies.

  2. Colour thresholding and objective quantification in bioimaging

    NASA Technical Reports Server (NTRS)

    Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.

    1992-01-01

    Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black and white densitometry (256 levels of intensity), the separation of subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested, but untrained readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry and for objective quantification of subtle colour differences between experimental and control samples.
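
    A minimal example of colour thresholding of the kind described, selecting pixels within a hue/saturation band and reporting the stained area fraction, is sketched below; the hue range and file name are purely illustrative, not calibrated values.

        # Colour thresholding of a stained section in HSV space (illustrative range).
        import cv2
        import numpy as np

        bgr = cv2.imread('stained_section.png')              # hypothetical micrograph
        if bgr is not None:
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            lower = np.array([140, 60, 40], dtype=np.uint8)
            upper = np.array([170, 255, 255], dtype=np.uint8)
            mask = cv2.inRange(hsv, lower, upper)             # pixels within the stain's hue band
            print('stained area fraction:', mask.mean() / 255.0)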

  3. Hue-specific colour memory impairment in an individual with intact colour perception and colour naming.

    PubMed

    Jakobson, L S; Pearson, P M; Robertson, B

    2008-01-15

    Cases of hue-selective dyschromatopsias, together with the results of recent optical imaging studies [Xiao, Y., Casti, A. R. R., Xiao, J., & Kaplan, E. (2006). A spatially organized representation of colour in macaque primary visual cortex. Perception, 35, ECVP Abstract Supplement; Xiao, Y., Wang, Y., & Felleman, D. J. (2003). A spatially organized representation of colour in macaque cortical area V2. Nature, 421, 535-539], have provided support for the idea that different colours are processed in spatially distinct regions of extrastriate cortex. In the present report, we provide evidence suggesting that a similar, but distinct, map may exist for representations of colour in memory. This evidence comes from observations of a young woman (QP) who demonstrates an isolated deficit in colour memory secondary to a concussive episode. Despite having normal colour perception and colour naming skills, and above-average memory skills in other domains, QP's ability to recall visually encoded colour information over short retention intervals is dramatically impaired. Her long-term memory for colour and her colour imagery skills are also abnormal. Surprisingly, however, these impairments are not seen with all hues; specifically, her ability to remember or imagine blue shades is spared. This interesting case contributes to the literature suggesting that colour perception, naming, and memory can be clinically dissociated, and provides insights into the organization of colour information in memory.

  4. Point Cloud Classification of Tesserae from Terrestrial Laser Data Combined with Dense Image Matching for Archaeological Information Extraction

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Billen, R.

    2017-08-01

    Reasoning from information extraction given by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features, including the sensors' biased data, each tessera in the high-density point cloud from the 3D-captured complex mosaics of Germigny-des-prés (France) is segmented via a colour-based multi-scale abstraction that extracts connectivity. A 2D surface and outline polygon of each tessera is generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.

  5. Two-colour live-cell nanoscale imaging of intracellular targets

    NASA Astrophysics Data System (ADS)

    Bottanelli, Francesca; Kromann, Emil B.; Allgeyer, Edward S.; Erdmann, Roman S.; Wood Baguley, Stephanie; Sirinakis, George; Schepartz, Alanna; Baddeley, David; Toomre, Derek K.; Rothman, James E.; Bewersdorf, Joerg

    2016-03-01

    Stimulated emission depletion (STED) nanoscopy allows observations of subcellular dynamics at the nanoscale. Applications have, however, been severely limited by the lack of a versatile STED-compatible two-colour labelling strategy for intracellular targets in living cells. Here we demonstrate a universal labelling method based on the organic, membrane-permeable dyes SiR and ATTO590 as Halo and SNAP substrates. SiR and ATTO590 constitute the first suitable dye pair for two-colour STED imaging in living cells below 50 nm resolution. We show applications with mitochondria, endoplasmic reticulum, plasma membrane and Golgi-localized proteins, and demonstrate continuous acquisition for up to 3 min at 2-s time resolution.

  6. An ecological alternative to Snodgrass & Vanderwart: 360 high quality colour images with norms for seven psycholinguistic variables.

    PubMed

    Moreno-Martínez, Francisco Javier; Montoro, Pedro R

    2012-01-01

    This work presents a new set of 360 high quality colour images belonging to 23 semantic subcategories. Two hundred and thirty-six Spanish speakers named the items and also provided data for seven relevant psycholinguistic variables: age of acquisition, familiarity, manipulability, name agreement, typicality and visual complexity. Furthermore, we also present lexical frequency data derived from Internet search hits. Apart from the high number of variables evaluated, each known to affect the processing of stimuli, this new set presents important advantages over other similar image corpora: (a) this corpus presents a broad number of subcategories and images; for example, this will permit researchers to select stimuli of appropriate difficulty as required (e.g., to deal with problems derived from ceiling effects); (b) the use of coloured stimuli provides a more realistic, ecologically valid representation of real-life objects. In sum, this set of stimuli provides a useful tool for research on visual object- and word-processing, both in neurological patients and in healthy controls.

  7. IRIS COLOUR CLASSIFICATION SCALES – THEN AND NOW

    PubMed Central

    Grigore, Mariana; Avram, Alina

    2015-01-01

    Eye colour is one of the most obvious phenotypic traits of an individual. Since the first documented classification scale developed in 1843, there have been numerous attempts to classify the iris colour. In the past centuries, iris colour classification scales have had various colour categories and mostly relied on comparison of an individual's eye with painted glass eyes. Once photography techniques were refined, standard iris photographs replaced painted eyes, but this did not solve the problem of painted/printed colour variability over time. Early clinical scales were easy to use, but lacked objectivity and were not standardised or statistically tested for reproducibility. The era of automated iris colour classification systems came with technological development. Spectrophotometry, digital analysis of high-resolution iris images, hyperspectral analysis of the real human iris and dedicated iris colour analysis software have all accomplished objective, accurate iris colour classification, but they are quite expensive and their use is limited to research environments. Iris colour classification systems have evolved continuously due to their use in a wide range of studies, especially in the fields of anthropology, epidemiology and genetics. Despite the wide range of existing scales, up until the present there has been no generally accepted iris colour classification scale. PMID:27373112

  8. IRIS COLOUR CLASSIFICATION SCALES--THEN AND NOW.

    PubMed

    Grigore, Mariana; Avram, Alina

    2015-01-01

    Eye colour is one of the most obvious phenotypic traits of an individual. Since the first documented classification scale developed in 1843, there have been numerous attempts to classify the iris colour. In the past centuries, iris colour classification scales have had various colour categories and mostly relied on comparison of an individual's eye with painted glass eyes. Once photography techniques were refined, standard iris photographs replaced painted eyes, but this did not solve the problem of painted/printed colour variability over time. Early clinical scales were easy to use, but lacked objectivity and were not standardised or statistically tested for reproducibility. The era of automated iris colour classification systems came with technological development. Spectrophotometry, digital analysis of high-resolution iris images, hyperspectral analysis of the real human iris and dedicated iris colour analysis software have all accomplished objective, accurate iris colour classification, but they are quite expensive and their use is limited to research environments. Iris colour classification systems have evolved continuously due to their use in a wide range of studies, especially in the fields of anthropology, epidemiology and genetics. Despite the wide range of existing scales, up until the present there has been no generally accepted iris colour classification scale.

  9. Dual-modality brain PET-CT image segmentation based on adaptive use of functional and anatomical information.

    PubMed

    Xia, Yong; Eberl, Stefan; Wen, Lingfeng; Fulham, Michael; Feng, David Dagan

    2012-01-01

    Dual medical imaging modalities, such as PET-CT, are now a routine component of clinical practice. Medical image segmentation methods, however, have generally only been applied to single modality images. In this paper, we propose the dual-modality image segmentation model to segment brain PET-CT images into gray matter, white matter and cerebrospinal fluid. This model converts PET-CT image segmentation into an optimization process controlled simultaneously by PET and CT voxel values and spatial constraints. It is innovative in the creation and application of the modality discriminatory power (MDP) coefficient as a weighting scheme to adaptively combine the functional (PET) and anatomical (CT) information on a voxel-by-voxel basis. Our approach relies upon allowing the modality with higher discriminatory power to play a more important role in the segmentation process. We compared the proposed approach to three other image segmentation strategies, including PET-only based segmentation, combination of the results of independent PET image segmentation and CT image segmentation, and simultaneous segmentation of joint PET and CT images without an adaptive weighting scheme. Our results in 21 clinical studies showed that our approach provides the most accurate and reliable segmentation for brain PET-CT images. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Tests of commercial colour CMOS cameras for astronomical applications

    NASA Astrophysics Data System (ADS)

    Pokhvala, S. M.; Reshetnyk, V. M.; Zhilyaev, B. E.

    2013-12-01

    We present some results of testing commercial colour CMOS cameras for astronomical applications. Colour CMOS sensors make it possible to perform photometry in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system realized in colour CMOS sensors is close to the astronomical Johnson BVR system. The basic camera characteristics: read noise (e^{-}/pix), thermal noise (e^{-}/pix/sec) and electronic gain (e^{-}/ADU) for the commercial digital camera Canon 5D Mark III are presented. We give the same characteristics for the scientific high-performance cooled CCD camera system ALTA E47. Comparison of the test results for the Canon 5D Mark III and the ALTA E47 CCD shows that present-day commercial colour CMOS cameras can seriously compete with scientific CCD cameras in deep astronomical imaging.

  11. Laser-induced plasmonic colours on metals

    NASA Astrophysics Data System (ADS)

    Guay, Jean-Michel; Calà Lesina, Antonino; Côté, Guillaume; Charron, Martin; Poitras, Daniel; Ramunno, Lora; Berini, Pierre; Weck, Arnaud

    2017-07-01

    Plasmonic resonances in metallic nanoparticles have been used since antiquity to colour glasses. The use of metal nanostructures for surface colourization has attracted considerable interest following recent developments in plasmonics. However, current top-down colourization methods are not ideally suited to large-scale industrial applications. Here we use a bottom-up approach where picosecond laser pulses can produce a full palette of non-iridescent colours on silver, gold, copper and aluminium. We demonstrate the process on silver coins weighing up to 5 kg and bearing large topographic variations (~1.5 cm). We find that colours are related to a single parameter, the total accumulated fluence, making the process suitable for high-throughput industrial applications. Statistical image analyses of laser-irradiated surfaces reveal various nanoparticle size distributions. Large-scale finite-difference time-domain computations based on these nanoparticle distributions reproduce trends seen in reflectance measurements, and demonstrate the key role of plasmonic resonances in colour formation.

  12. Laser-induced plasmonic colours on metals

    PubMed Central

    Guay, Jean-Michel; Calà Lesina, Antonino; Côté, Guillaume; Charron, Martin; Poitras, Daniel; Ramunno, Lora; Berini, Pierre; Weck, Arnaud

    2017-01-01

    Plasmonic resonances in metallic nanoparticles have been used since antiquity to colour glasses. The use of metal nanostructures for surface colourization has attracted considerable interest following recent developments in plasmonics. However, current top-down colourization methods are not ideally suited to large-scale industrial applications. Here we use a bottom-up approach where picosecond laser pulses can produce a full palette of non-iridescent colours on silver, gold, copper and aluminium. We demonstrate the process on silver coins weighing up to 5 kg and bearing large topographic variations (∼1.5 cm). We find that colours are related to a single parameter, the total accumulated fluence, making the process suitable for high-throughput industrial applications. Statistical image analyses of laser-irradiated surfaces reveal various nanoparticle size distributions. Large-scale finite-difference time-domain computations based on these nanoparticle distributions reproduce trends seen in reflectance measurements, and demonstrate the key role of plasmonic resonances in colour formation. PMID:28719576

  13. Keeping It in Three Dimensions: Measuring the Development of Mental Rotation in Children with the Rotated Colour Cube Test (RCCT)

    ERIC Educational Resources Information Center

    Lutke, Nikolay; Lange-Kuttner, Christiane

    2015-01-01

    This study introduces the new Rotated Colour Cube Test (RCCT) as a measure of object identification and mental rotation using single 3D colour cube images in a matching-to-sample procedure. One hundred 7- to 11-year-old children were tested with aligned or rotated cube models, distracters and targets. While different orientations of distracters…

  14. Automatic segmentation of fluorescence lifetime microscopy images of cells using multiresolution community detection--a first study.

    PubMed

    Hu, D; Sarder, P; Ronhovde, P; Orthaus, S; Achilefu, S; Nussinov, Z

    2014-01-01

    Inspired by a multiresolution community detection based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Furthermore, using the proposed method, the mean-square error in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The multiresolution community detection method appeared to perform better than a popular spectral clustering-based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in mean-square error with increasing resolution. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.

  15. Automatic Segmentation of Fluorescence Lifetime Microscopy Images of Cells Using Multi-Resolution Community Detection -A First Study

    PubMed Central

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar

    2014-01-01

    Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Further, using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410

  16. BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation

    NASA Astrophysics Data System (ADS)

    Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana

    2006-01-01

    Extracting features is the first and one of the most crucial steps in the image retrieval process. While the colour features and the texture features of digital images can be extracted rather easily, the shape features and the layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactical basis: what an unsupervised segmentation algorithm can segment is only regions, not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, namely BlobContours, gradually updates it by recalculating every blob, based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing transparency of the algorithm by applying user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements which are discussed in detail.

  17. 3D digital image correlation using a single 3CCD colour camera and dichroic filter

    NASA Astrophysics Data System (ADS)

    Zhong, F. Q.; Shao, X. X.; Quan, C.

    2018-04-01

    In recent years, three-dimensional digital image correlation methods using a single colour camera have been reported. In this study, we propose a simplified system by employing a dichroic filter (DF) to replace the beam splitter and colour filters. The DF can be used to combine two views from different perspectives reflected by two planar mirrors and eliminate their interference. A 3CCD colour camera is then used to capture two different views simultaneously via its blue and red channels. Moreover, the measurement accuracy of the proposed method is higher since the effect of refraction is reduced. Experiments are carried out to verify the effectiveness of the proposed method. It is shown that the interference between the blue and red views is insignificant. In addition, the measurement accuracy of the proposed method is validated on the rigid body displacement. The experimental results demonstrate that the measurement accuracy of the proposed method is higher compared with the reported methods using a single colour camera. Finally, the proposed method is employed to measure the in- and out-of-plane displacements of a loaded plastic board. The re-projection errors of the proposed method are smaller than those of the reported methods using a single colour camera.

  18. [Optic mixing of colours in Seurat's painting].

    PubMed

    Cernea, Paul

    2002-01-01

    Georges Seurat is the initiator and master of divisionism. He founded the neo-impressionist movement, which tries to reproduce nature exclusively through coloured vibration. Seurat applied colours in small touches uniformly distributed on the canvas; viewed from a certain distance, the colours merge through optical mixing. When the spectator approaches the picture, the spatial frequency decreases, the optical merging does not occur and the onlooker sees a multitude of coloured spots. When the spectator moves away from the picture, the optical mixing appears and the image becomes perfectly clear. This movement opened the way for the modern painting of Cézanne, Renoir and Van Gogh.

  19. Robust colour calibration of an imaging system using a colour space transform and advanced regression modelling.

    PubMed

    Jackman, Patrick; Sun, Da-Wen; Elmasry, Gamal

    2012-08-01

    A new algorithm for the conversion of device-dependent RGB colour data into device-independent L*a*b* colour data without introducing noticeable error has been developed. By combining a linear colour space transform and advanced multiple regression methodologies it was possible to predict L*a*b* colour data with less than 2.2 colour units of error (CIE 1976). By transforming the red, green and blue colour components into new variables that better reflect the structure of the L*a*b* colour space, a low colour calibration error was immediately achieved (ΔE(CAL) = 14.1). Application of a range of regression models to the data further reduced the colour calibration error substantially (multilinear regression ΔE(CAL) = 5.4; response surface ΔE(CAL) = 2.9; PLSR ΔE(CAL) = 2.6; LASSO regression ΔE(CAL) = 2.1). Only the PLSR models deteriorated substantially under cross validation. The algorithm is adaptable and can be easily recalibrated to any working computer vision system. The algorithm was tested on a typical working laboratory computer vision system and delivered only a very marginal loss of colour information, ΔE(CAL) = 2.35. Colour features derived on this system were able to safely discriminate between three classes of ham with 100% correct classification, whereas colour features measured on a conventional colourimeter were not. Copyright © 2012 Elsevier Ltd. All rights reserved.
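
    A minimal sketch of the general idea, in Python with placeholder data and an illustrative feature transform (the paper's exact transform and regression models are not reproduced here):

        # Hedged sketch: calibrate device RGB to L*a*b* via a colour space
        # transform plus multiple linear regression. rgb_train/lab_train are
        # placeholders for chart patches measured with a reference colourimeter.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        def rgb_features(rgb):
            """Expand raw RGB into variables closer to the L*a*b* structure."""
            r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
            intensity = (r + g + b) / 3.0        # lightness-like term
            rg = r - g                           # red-green opponent term
            yb = (r + g) / 2.0 - b               # yellow-blue opponent term
            return np.column_stack([intensity, rg, yb, r, g, b])

        rgb_train = np.random.rand(24, 3)            # placeholder device RGB values
        lab_train = np.random.rand(24, 3) * 100.0    # placeholder reference L*a*b* values

        model = LinearRegression().fit(rgb_features(rgb_train), lab_train)
        lab_pred = model.predict(rgb_features(rgb_train))

        # Euclidean distance in L*a*b* corresponds to the CIE 1976 Delta E error
        delta_e = np.linalg.norm(lab_pred - lab_train, axis=1).mean()
        print(f"Mean calibration error (Delta E): {delta_e:.2f}")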

  20. Creating 3D models of historical buildings using geospatial data

    NASA Astrophysics Data System (ADS)

    Alionescu, Adrian; Bǎlǎ, Alina Corina; Brebu, Floarea Maria; Moscovici, Anca-Maria

    2017-07-01

    Recently, a lot of interest has been shown in understanding a real-world object by acquiring 3D images of it using laser scanning technology and panoramic images. A realistic impression of geometric 3D data can be generated by draping real colour textures simultaneously captured by a colour camera. In this context, a new concept of geospatial data acquisition, based on panoramic images, has rapidly revolutionized the method of determining the spatial position of objects. This article describes an approach that combines terrestrial laser scanning and panoramic images captured with Trimble V10 Imaging Rover technology to enlarge the detail and realism of the geospatial data set, in order to obtain 3D urban plans and virtual reality applications.

  1. Terahertz analysis of an East Asian historical mural painting

    NASA Astrophysics Data System (ADS)

    Fukunaga, K.; Hosako, I.; Kohdzuma, Y.; Koezuka, T.; Kim, M.-J.; Ikari, T.; Du, X.

    2010-05-01

    Terahertz (THz) spectroscopy and imaging techniques are expected to have great potential for the non-invasive analysis of artworks. We have applied THz imaging to analyse the historic mural painting of a Lamaism temple by using a transportable time-domain THz imaging system; such an attempt is the first in the world. The reflection image revealed that there are two orange colours in the painting, although they appear the same to the naked eye. THz imaging can also estimate the depth of cracks. The colours were examined by X-ray fluorescence and Raman spectroscopy, and the results were found to be in good agreement. This work proved that THz imaging can contribute to the non-invasive analysis of cultural heritage.

  2. Technical report on semiautomatic segmentation using the Adobe Photoshop.

    PubMed

    Park, Jin Seo; Chung, Min Suk; Hwang, Sung Bae; Lee, Yong Sook; Har, Dong-Hwan

    2005-12-01

    The purpose of this research is to enable users to semiautomatically segment the anatomical structures in magnetic resonance images (MRIs), computerized tomographs (CTs), and other medical images on a personal computer. The segmented images are used for making 3D images, which are helpful to medical education and research. To achieve this purpose, the following trials were performed. The entire body of a volunteer was scanned to make 557 MRIs. On Adobe Photoshop, contours of 19 anatomical structures in the MRIs were semiautomatically drawn using MAGNETIC LASSO TOOL and manually corrected using either LASSO TOOL or DIRECT SELECTION TOOL to make 557 segmented images. In a similar manner, 13 anatomical structures in 8,590 anatomical images were segmented. Proper segmentation was verified by making 3D images from the segmented images. Semiautomatic segmentation using Adobe Photoshop is expected to be widely used for segmentation of anatomical structures in various medical images.

  3. Automated detection of red lesions from digital colour fundus photographs.

    PubMed

    Jaafar, Hussain F; Nandi, Asoke K; Al-Nuaimy, Waleed

    2011-01-01

    Earliest signs of diabetic retinopathy, the major cause of vision loss, are damage to the blood vessels and the formation of lesions in the retina. Early detection of diabetic retinopathy is essential for the prevention of blindness. In this paper we present a computer-aided system to automatically identify red lesions from retinal fundus photographs. After pre-processing, a morphological technique was used to segment red lesion candidates from the background and other retinal structures. Then a rule-based classifier was used to discriminate actual red lesions from artifacts. A novel method for blood vessel detection is also proposed to refine the detection of red lesions. For a standardised test set of 219 images, the proposed method can detect red lesions with a sensitivity of 89.7% and a specificity of 98.6% (at lesion level). The performance of the proposed method shows considerable promise for detection of red lesions as well as other types of lesions.

  4. An unexpected cause of small bowel obstruction in an adult patient: midgut volvulus

    PubMed Central

    Söker, Gökhan; Yılmaz, Cengiz; Karateke, Faruk; Gülek, Bozkurt

    2014-01-01

    The most important complication of intestinal malrotation is midgut volvulus because it may lead to intestinal ischaemia and necrosis. A 29-year-old male patient was admitted to the emergency department with abdominal pain. Ultrasonography (US), colour Doppler ultrasonography (CDUS), CT and barium studies were carried out. On US and CDUS, twisting of intestinal segments around the superior mesenteric artery (SMA) and superior mesenteric vein (SMV) and alteration of the SMA–SMV relationship were detected. CT demonstrated that the small intestine was making a rotation around the SMA and SMV, which amounted to more than 360°. The upper gastrointestinal barium series revealed a corkscrew appearance of the duodenum and proximal jejunum, which is a pathognomonic finding of midgut volvulus. Prior knowledge of characteristic imaging findings of midgut volvulus is essential in order to reach proper diagnosis and establish proper treatment before the development of intestinal ischaemia and necrosis. PMID:24811563

  5. Colour agnosia impairs the recognition of natural but not of non-natural scenes.

    PubMed

    Nijboer, Tanja C W; Van Der Smagt, Maarten J; Van Zandvoort, Martine J E; De Haan, Edward H F

    2007-03-01

    Scene recognition can be enhanced by appropriate colour information, yet the level of visual processing at which colour exerts its effects is still unclear. It has been suggested that colour supports low-level sensory processing, while others have claimed that colour information aids semantic categorization and recognition of objects and scenes. We investigated the effect of colour on scene recognition in a case of colour agnosia, M.A.H. In a scene identification task, participants had to name images of natural or non-natural scenes in six different formats. Irrespective of scene format, M.A.H. was much slower on the natural than on the non-natural scenes. As expected, neither M.A.H. nor control participants showed any difference in performance for the non-natural scenes. However, for the natural scenes, appropriate colour facilitated scene recognition in control participants (i.e., shorter reaction times), whereas M.A.H.'s performance did not differ across formats. Our data thus support the hypothesis that the effect of colour occurs at the level of learned associations.

  6. Review methods for image segmentation from computed tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik

    Image segmentation is a challenging process in terms of achieving accuracy, automation and robustness, especially in medical images. There exist many segmentation methods that can be applied to medical images, but not all methods are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to assess tumour growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths and the problems incurred in them are defined and explained. It is necessary to know the suitable segmentation method in order to get accurate segmentation. This paper can serve as a guide for researchers in choosing a suitable segmentation method, especially for segmenting images from CT scans.

  7. Evaluation of browning ratio in an image analysis of apple slices at different stages of instant controlled pressure drop-assisted hot-air drying (AD-DIC).

    PubMed

    Gao, Kun; Zhou, Linyan; Bi, Jinfeng; Yi, Jianyong; Wu, Xinye; Zhou, Mo; Wang, Xueyuan; Liu, Xuan

    2017-06-01

    Computer vision-based image analysis systems are widely used in food processing to evaluate quality changes. They are able to objectively measure the surface colour of various products, providing obvious advantages in objectivity and quantitative capability. In this study, a computer vision-based image analysis system was used to investigate the colour changes of apple slices dried by instant controlled pressure drop-assisted hot-air drying (AD-DIC). The CIE L* value and polyphenol oxidase activity in apple slices decreased during the entire drying process, whereas other colour indexes, including CIE a*, b*, ΔE and C* values, increased. The browning ratio calculated by image analysis increased during the drying process, and a sharp increment was observed for the DIC process. The changes in 5-hydroxymethylfurfural (5-HMF) and fluorescent compounds (FIC) showed the same trend as the browning ratio due to the Maillard reaction. Moreover, the concentrations of 5-HMF and FIC both had a good quadratic correlation (R² > 0.998) with the browning ratio. The browning ratio was a reliable indicator of 5-HMF and FIC changes in apple slices during drying. The image analysis system could be used to monitor colour changes, 5-HMF and FIC in dehydrated apple slices during the AD-DIC process. © 2016 Society of Chemical Industry.

  8. [Color processing of ultrasonographic images in extracorporeal lithotripsy].

    PubMed

    Lardennois, B; Ziade, A; Walter, K

    1991-02-01

    A number of technical difficulties are encountered in the ultrasonographic detection of renal stones which unfortunately limit its performance. The margin of error of firing in extracorporeal shock-wave lithotripsy (ESWL) must be reduced to a minimum. The role of ultrasonographic monitoring during lithotripsy is also essential: continuous control of the focussing of the shock-wave beam and assessment of the quality of fragmentation. The authors propose to improve ultrasonographic imaging in ESWL by means of intraoperative colour processing of the stone. Each shot must be directed to its target with an economy of vision, avoiding excessive fatigue. The principle of the technique consists of digitalization of the ultrasound video images using a Macintosh Mac 2 computer. The Graphis Paint II program is interfaced directly with the Quick Capture card and recovers the images on its work surface in real time. The program is then able to attribute to each of the 256 shades of grey any one of the 16.6 million colours of the Macintosh universe, with specific intensity and saturation. During fragmentation, using the principle of a palette, the stone changes colour from green to red, indicating complete fragmentation. A Color Space card converts the digital image obtained into an analogue video source which is visualized on the monitor. It can be superimposed and/or juxtaposed with the source image by means of a multi-standard mixing table. Colour processing of ultrasonographic images in extracorporeal shock-wave lithotripsy allows better visualization of the stones and better follow-up of fragmentation, and allows the shock-wave treatment to be stopped earlier. It increases the stone-free performance at 6 months. This configuration will eventually be integrated into the ultrasound apparatus itself.
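
    As a rough illustration of the palette idea (the original work used Graphis Paint II on a Macintosh; the green-to-red map below is a hypothetical stand-in), a digitised greyscale frame can be pushed through a lookup table:

        # Hedged sketch: map 256 grey levels of a digitised ultrasound frame onto
        # a green-to-red palette so fragmentation progress is easier to follow.
        import numpy as np

        def build_palette():
            """256-entry RGB lookup table running from green (level 0) to red (level 255)."""
            levels = np.arange(256, dtype=np.float32) / 255.0
            red = (255 * levels).astype(np.uint8)
            green = (255 * (1.0 - levels)).astype(np.uint8)
            blue = np.zeros(256, dtype=np.uint8)
            return np.stack([red, green, blue], axis=1)      # shape (256, 3)

        frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)  # placeholder frame
        colour_frame = build_palette()[frame]                # (480, 640, 3) pseudo-coloured image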

  9. Segmentation Fusion Techniques with Application to Plenoptic Images: A Survey.

    NASA Astrophysics Data System (ADS)

    Evin, D.; Hadad, A.; Solano, A.; Drozdowicz, B.

    2016-04-01

    The segmentation of anatomical and pathological structures plays a key role in the characterization of clinically relevant evidence from digital images. Recently, plenoptic imaging has emerged as a new promise to enrich the diagnostic potential of conventional photography. Since a plenoptic image comprises a set of slightly different versions of the target scene, we propose to make use of those images to improve the segmentation quality relative to the scenario of single-image segmentation. The problem of finding a segmentation solution from multiple images of a single scene is called segmentation fusion. This paper reviews the issue of segmentation fusion in order to find solutions that can be applied to plenoptic images, particularly images from the ophthalmological domain.

  10. Relevance of 19th century continuous tone photomechanical printing techniques to digitally generated imagery

    NASA Astrophysics Data System (ADS)

    Hoskins, Stephen; Thirkell, Paul

    2003-01-01

    Collotype and Woodburytype are late 19th/early 20th century continuous tone methods of reproducing photography in print that do not have an underlying dot structure. The aesthetic and tactile qualities produced by these methods at their best have never been surpassed. Woodburytype is the only photomechanical print process using a printing matrix and ink that is capable of rendering true continuous tone; it also has the characteristic of rendering a photographic image by mapping a three-dimensional surface topography. Collotype's absence of an underlying dot structure enables an image to be printed in as many colours as desired without creating any form of interference structure. Research at the Centre for Fine Print Research, UWE Bristol, aims to recreate these processes for artists and photographers and assess their potential to create a digitally generated image printed in full colour and continuous tone that will not fade or deteriorate. Through this research the Centre seeks to provide a context in which the development of current four-colour CMYK printing may be viewed as an expedient rather than a logical route for the development of colour printing within the framework of digitally generated hard copy paper output.

  11. Using different classification models in wheat grading utilizing visual features

    NASA Astrophysics Data System (ADS)

    Basati, Zahra; Rasekh, Mansour; Abbaspour-Gilandeh, Yousef

    2018-04-01

    Wheat is one of the most important strategic crops in Iran and in the world. The major component that distinguishes wheat from other grains is the gluten. In Iran, the sunn pest is one of the most important factors influencing the characteristics of wheat gluten and removing it from a balanced state. The existence of bug-damaged grains in wheat reduces the quality and price of the product. In addition, damaged grains reduce the enrichment of wheat and the quality of bread products. In this study, after preprocessing and segmentation of images, 25 features including 9 colour features, 10 morphological features, and 6 textural statistical features were extracted so as to classify healthy and bug-damaged wheat grains of the Azar cultivar at four levels of moisture content (9, 11.5, 14 and 16.5% w.b.) and two lighting colours (yellow light, and a combination of yellow and white lights). Using feature selection methods in the WEKA software and the CfsSubsetEval evaluator, 11 features were chosen as inputs of artificial neural network, decision tree and discriminant analysis classifiers. The results showed that the decision tree with the J.48 algorithm had the highest classification accuracy, at 90.20%. This was followed by the artificial neural network classifier with the topology of 11-19-2 and the discriminant analysis classifier, at 87.46 and 81.81%, respectively.
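
    A brief sketch of the classification stage under assumed feature data (scikit-learn's CART decision tree stands in for WEKA's J.48; the features and labels below are placeholders):

        # Hedged sketch: classify healthy vs bug-damaged grains from the 11
        # selected colour/morphology/texture features with a decision tree.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        X = np.random.rand(400, 11)             # placeholder feature matrix
        y = np.random.randint(0, 2, size=400)   # 0 = healthy, 1 = bug-damaged

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
        print(f"Test accuracy: {clf.score(X_te, y_te):.2%}")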

  12. Dramatic colour changes in a bird of paradise caused by uniquely structured breast feather barbules

    PubMed Central

    Stavenga, Doekele G.; Leertouwer, Hein L.; Marshall, N. Justin; Osorio, Daniel

    2011-01-01

    The breast-plate plumage of male Lawes' parotia (Parotia lawesii) produces dramatic colour changes when this bird of paradise displays on its forest-floor lek. We show that this effect is achieved not solely by the iridescence—that is an angular-dependent spectral shift of the reflected light—which is inherent in structural coloration, but is based on a unique anatomical modification of the breast-feather barbule. The barbules have a segmental structure, and in common with many other iridescent feathers, they contain stacked melanin rodlets surrounded by a keratin film. The unique property of the parotia barbules is their boomerang-like cross section. This allows each barbule to work as three coloured mirrors: a yellow-orange reflector in the plane of the feather, and two symmetrically positioned bluish reflectors at respective angles of about 30°. Movement during the parotia's courtship displays thereby achieves much larger and more abrupt colour changes than is possible with ordinary iridescent plumage. To our knowledge, this is the first example of multiple thin film or multi-layer reflectors incorporated in a single structure (engineered or biological). It nicely illustrates how subtle modification of the basic feather structure can achieve novel visual effects. The fact that the parotia's breast feathers seem to be specifically adapted to give much stronger colour changes than normal structural coloration implies that colour change is important in their courtship display. PMID:21159676

  13. Northern Gulf of Mexico estuarine coloured dissolved organic matter derived from MODIS data

    EPA Science Inventory

    Coloured dissolved organic matter (CDOM) is relevant for water quality management and may become an important measure to complement future water quality assessment programmes. An approach to derive CDOM using the Moderate Resolution Imaging Spectroradiometer (MODIS) was developed...

  14. Skin image retrieval using Gabor wavelet texture feature.

    PubMed

    Ou, X; Pan, W; Zhang, X; Xiao, P

    2016-12-01

    Skin imaging plays a key role in many clinical studies. We have used many skin imaging techniques, including the recently developed capacitive contact skin imaging based on fingerprint sensors. The aim of this study was to develop an effective skin image retrieval technique using the Gabor wavelet transform, which can be used on different types of skin images, but with a special focus on skin capacitive contact images. Content-based image retrieval (CBIR) is a useful technology to retrieve stored images from a database by supplying query images. In a typical CBIR, images are retrieved based on colour, shape, texture, etc. In this study, texture features are used for retrieving skin images, and the Gabor wavelet transform is used for texture feature description and extraction. The results show that the Gabor wavelet texture features can work efficiently on different types of skin images. Although the Gabor wavelet transform is slower compared with other image retrieval techniques, such as principal component analysis (PCA) and the grey-level co-occurrence matrix (GLCM), it is the best for retrieving skin capacitive contact images and facial images with different orientations. The Gabor wavelet transform can also work well on facial images with different expressions and skin cancer/disease images. We have developed an effective skin image retrieval method based on the Gabor wavelet transform, which is useful for retrieving different types of images, namely digital colour face images, digital colour skin cancer and skin disease images, and particularly greyscale skin capacitive contact images. The Gabor wavelet transform can also be potentially useful for face recognition (with different orientations and expressions) and skin cancer/disease diagnosis. © 2016 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
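
    A minimal sketch of Gabor-based texture retrieval (the filter bank parameters are illustrative, not those of the study): filter each image with a small Gabor bank, keep the mean and standard deviation of each response magnitude as the feature vector, and rank database images by distance to the query.

        # Hedged sketch: Gabor texture features for content-based skin image retrieval.
        import numpy as np
        from skimage.filters import gabor

        def gabor_features(image, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
            """Mean and std of Gabor magnitude responses over a small filter bank."""
            feats = []
            for f in frequencies:
                for k in range(n_orientations):
                    real, imag = gabor(image, frequency=f, theta=k * np.pi / n_orientations)
                    magnitude = np.hypot(real, imag)
                    feats.extend([magnitude.mean(), magnitude.std()])
            return np.array(feats)

        query = np.random.rand(128, 128)                          # placeholder greyscale image
        database = [np.random.rand(128, 128) for _ in range(10)]  # placeholder image database

        q = gabor_features(query)
        distances = [np.linalg.norm(q - gabor_features(img)) for img in database]
        ranking = np.argsort(distances)                           # most similar images first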

  15. Two-colour X-gamma ray inverse Compton back-scattering source

    NASA Astrophysics Data System (ADS)

    Drebot, I.; Petrillo, V.; Serafini, L.

    2017-10-01

    We present a simple and new scheme for producing two-colour Thomson/Compton radiation with the possibility of controlling separately the polarization of the two different colours, based on the interaction of one single electron beam with two light pulses that can come from the same laser setup or from two different lasers and that collide with the electrons at different angles. One of the most interesting cases for medical applications is to provide two X-ray pulses across the iodine K-edge at 33.2 keV. Iodine is used as a contrast medium in various imaging techniques, and the availability of two spectral lines across the K-edge allows one to produce subtraction images with a great increase in accuracy.

  16. An interactive medical image segmentation framework using iterative refinement.

    PubMed

    Kalshetti, Pratik; Bundele, Manas; Rahangdale, Parag; Jangra, Dinesh; Chattopadhyay, Chiranjoy; Harit, Gaurav; Elhence, Abhay

    2017-04-01

    Segmentation is often performed on medical images for identifying diseases in clinical evaluation. Hence it has become one of the major research areas. Conventional image segmentation techniques are unable to provide satisfactory segmentation results for medical images as they contain irregularities. They need to be pre-processed before segmentation. In order to obtain the most suitable method for medical image segmentation, we propose MIST (Medical Image Segmentation Tool), a two stage algorithm. The first stage automatically generates a binary marker image of the region of interest using mathematical morphology. This marker serves as the mask image for the second stage which uses GrabCut to yield an efficient segmented result. The obtained result can be further refined by user interaction, which can be done using the proposed Graphical User Interface (GUI). Experimental results show that the proposed method is accurate and provides satisfactory segmentation results with minimum user interaction on medical as well as natural images. Copyright © 2017 Elsevier Ltd. All rights reserved.
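
    A rough OpenCV-based sketch of the two-stage idea (thresholding plus morphology stands in for the paper's marker generation; the input image is a synthetic placeholder):

        # Hedged sketch: build a rough binary marker with morphology, then refine
        # the region of interest with mask-initialised GrabCut.
        import cv2
        import numpy as np

        rng = np.random.default_rng(0)
        image = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)  # placeholder for a medical image
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        # Stage 1: rough marker via Otsu threshold and morphological opening
        _, marker = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
        marker = cv2.morphologyEx(marker, cv2.MORPH_OPEN, kernel)

        # Stage 2: GrabCut initialised from the marker (probable foreground/background)
        mask = np.where(marker > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
        bgd_model = np.zeros((1, 65), np.float64)
        fgd_model = np.zeros((1, 65), np.float64)
        cv2.grabCut(image, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)

        segmented = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)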

  17. Detection experiments with humans implicate visual predation as a driver of colour polymorphism dynamics in pygmy grasshoppers

    PubMed Central

    2013-01-01

    Background Animal colour patterns offer good model systems for studies of biodiversity and evolution of local adaptations. An increasingly popular approach to study the role of selection for camouflage for evolutionary trajectories of animal colour patterns is to present images of prey on paper or computer screens to human ‘predators’. Yet, few attempts have been made to confirm that rates of detection by humans can predict patterns of selection and evolutionary modifications of prey colour patterns in nature. In this study, we first analyzed encounters between human ‘predators’ and images of natural black, grey and striped colour morphs of the polymorphic Tetrix subulata pygmy grasshoppers presented on background images of unburnt, intermediate or completely burnt natural habitats. Next, we compared detection rates with estimates of capture probabilities and survival of free-ranging grasshoppers, and with estimates of relative morph frequencies in natural populations. Results The proportion of grasshoppers that were detected and time to detection depended on both the colour pattern of the prey and on the type of visual background. Grasshoppers were detected more often and faster on unburnt backgrounds than on 50% and 100% burnt backgrounds. Striped prey were detected less often than grey or black prey on unburnt backgrounds; grey prey were detected more often than black or striped prey on 50% burnt backgrounds; and black prey were detected less often than grey prey on 100% burnt backgrounds. Rates of detection mirrored previously reported rates of capture by humans of free-ranging grasshoppers, as well as morph specific survival in the wild. Rates of detection were also correlated with frequencies of striped, black and grey morphs in samples of T. subulata from natural populations that occupied the three habitat types used for the detection experiment. Conclusions Our findings demonstrate that crypsis is background-dependent, and implicate visual predation as an important driver of evolutionary modifications of colour polymorphism in pygmy grasshoppers. Our study provides the clearest evidence to date that using humans as ‘predators’ in detection experiments may provide reliable information on the protective values of prey colour patterns and of natural selection and microevolution of camouflage in the wild. PMID:23639215

  18. Segmentation of stereo terrain images

    NASA Astrophysics Data System (ADS)

    George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.

    2000-06-01

    We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, SAB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.
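
    The paper's S_AB metric is not reproduced here, but a simple hedged sketch of how agreement between two binary obstacle maps (e.g. manual versus automatic) can be scored is:

        # Hedged sketch: overlap (Jaccard) agreement between two binary obstacle maps.
        import numpy as np

        manual = np.random.rand(100, 100) > 0.5   # placeholder manual obstacle map
        auto = np.random.rand(100, 100) > 0.5     # placeholder automatic obstacle map

        intersection = np.logical_and(manual, auto).sum()
        union = np.logical_or(manual, auto).sum()
        jaccard = intersection / union if union else 1.0
        print(f"Segmentation agreement (Jaccard): {jaccard:.2f}")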

  19. Up Scalable Full Colour Plasmonic Pixels with Controllable Hue, Brightness and Saturation.

    PubMed

    Mudachathi, Renilkumar; Tanaka, Takuo

    2017-04-26

    It has long been the interest of scientists to develop an ink-free colour printing technique using nanostructured materials, inspired by the brilliant colours found in many creatures like butterflies and peacocks. Recently, isolated metal nanostructures exhibiting preferential light absorption and scattering have been explored as a promising candidate for this emerging field. Applying such structures in practical use, however, demands the production of individual colours with distinct reflective peaks, tunable across the visible wavelength region, combined with controllable colour attributes and economically feasible fabrication. Herein, we present a simple yet efficient colour printing approach employing sub-micrometer scale plasmonic pixels of a single constituent metal structure which supports near-unity broadband light absorption at two distinct wavelengths, facilitating the creation of saturated colours. The dependence of these resonances on two different parameters of the same pixel enables controllable colour attributes such as hue, brightness and saturation across the visible spectrum. The linear dependence of colour attributes on the pixel parameters eases automation, which, combined with the use of inexpensive and stable aluminum as the functional material, will make this colour design strategy relevant for use in various commercial applications such as printing micro images for security purposes, consumer product colouration and functionalized decoration, to name a few.

  20. Identification of important image features for pork and turkey ham classification using colour and wavelet texture features and genetic selection.

    PubMed

    Jackman, Patrick; Sun, Da-Wen; Allen, Paul; Valous, Nektarios A; Mendoza, Fernando; Ward, Paddy

    2010-04-01

    A method to discriminate between various grades of pork and turkey ham was developed using colour and wavelet texture features. Image analysis methods originally developed for predicting the palatability of beef were applied to rapidly identify the ham grade. With high quality digital images of 50-94 slices per ham it was possible to identify the greyscale that best expressed the differences between the various ham grades. The best 10 discriminating image features were then found with a genetic algorithm. Using the best 10 image features, simple linear discriminant analysis models produced 100% correct classifications for both pork and turkey on both calibration and validation sets. 2009 Elsevier Ltd. All rights reserved.
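
    A minimal sketch of the classification stage with placeholder data (univariate selection stands in for the genetic search used in the paper):

        # Hedged sketch: keep the 10 most discriminating colour/wavelet features
        # and classify ham grade with linear discriminant analysis.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.pipeline import make_pipeline

        X = np.random.rand(200, 60)              # placeholder feature matrix (one row per slice)
        y = np.random.randint(0, 3, size=200)    # placeholder ham grade labels

        model = make_pipeline(SelectKBest(f_classif, k=10), LinearDiscriminantAnalysis())
        model.fit(X, y)
        print(f"Training accuracy: {model.score(X, y):.2%}")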

  1. Identification of uncommon objects in containers

    DOEpatents

    Bremer, Peer-Timo; Kim, Hyojin; Thiagarajan, Jayaraman J.

    2017-09-12

    A system for identifying in an image an object that is commonly found in a collection of images and for identifying a portion of an image that represents an object based on a consensus analysis of segmentations of the image. The system collects images of containers that contain objects for generating a collection of common objects within the containers. To process the images, the system generates a segmentation of each image. The image analysis system may also generate multiple segmentations for each image by introducing variations in the selection of voxels to be merged into a segment. The system then generates clusters of the segments based on similarity among the segments. Each cluster represents a common object found in the containers. Once the clustering is complete, the system may be used to identify common objects in images of new containers based on similarity between segments of images and the clusters.

  2. Monitoring Change Through Hierarchical Segmentation of Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Lawrence, William T.

    2005-01-01

    NASA's Goddard Space Flight Center has developed a fast and effective method for generating image segmentation hierarchies. These segmentation hierarchies organize image data in a manner that makes their information content more accessible for analysis. Image segmentation enables analysis through the examination of image regions rather than individual image pixels. In addition, the segmentation hierarchy provides additional analysis clues through the tracing of the behavior of image region characteristics at several levels of segmentation detail. The potential for extracting the information content from imagery data based on segmentation hierarchies has not been fully explored for the benefit of the Earth and space science communities. This paper explores the potential of exploiting these segmentation hierarchies for the analysis of multi-date data sets, and for the particular application of change monitoring.

  3. Using deep learning in image hyper spectral segmentation, classification, and detection

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Su, Zhenyu

    2018-02-01

    Recent years have shown that deep learning neural networks are a valuable tool in the field of computer vision. Deep learning methods can be used in remote sensing applications such as land cover classification, detection of vehicles in satellite images, and hyperspectral image classification. This paper addresses the use of deep learning artificial neural networks in satellite image segmentation. Image segmentation plays an important role in image processing. The hue of a remote sensing image often shows large differences, which results in poor display of the images in a VR environment. Image segmentation is a pre-processing technique applied to the original images that splits an image into many parts with different hues in order to unify the colour. Several computational models based on supervised, unsupervised, parametric, and probabilistic region-based image segmentation techniques have been proposed. Recently, a machine learning technique known as deep learning with convolutional neural networks has been widely used for the development of efficient and automatic image segmentation models. In this paper, we focus on the study of deep convolutional neural networks and their variants for automatic image segmentation rather than traditional image segmentation strategies.

  4. A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system

    NASA Astrophysics Data System (ADS)

    Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan

    2018-01-01

    This paper proposes an image segmentation algorithm based on fully convolutional networks (FCN) for a binocular imaging system under various circumstances. Image segmentation is addressed here as semantic segmentation: the FCN classifies individual pixels, so as to achieve segmentation at the semantic level. Different from classical convolutional neural networks (CNN), the FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network and scale-invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system and several groups of test data are collected to verify this method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4-1.6 m, with a distance error of less than 10 mm.
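
    A minimal PyTorch sketch of the fully convolutional idea (an illustrative toy network, not the architecture used in the paper): only convolution, pooling and transposed-convolution layers are used, so inputs of varying size yield per-pixel class scores.

        # Hedged sketch: a tiny fully convolutional network for per-pixel classification.
        import torch
        import torch.nn as nn

        class TinyFCN(nn.Module):
            def __init__(self, n_classes=2):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),                         # 1/2 resolution
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),                         # 1/4 resolution
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                    nn.ConvTranspose2d(16, n_classes, 2, stride=2),
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))         # per-pixel class scores

        net = TinyFCN()
        image = torch.rand(1, 3, 240, 320)                   # any H x W divisible by 4
        labels = net(image).argmax(dim=1)                    # (1, 240, 320) segmentation map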

  5. Topical local anaesthetics (EMLA) inhibit burn-induced plasma extravasation as measured by digital image colour analysis.

    PubMed

    Jönsson, A; Mattsson, U; Tarnow, P; Nellgård, P; Cassuto, J

    1998-06-01

    Amide local anaesthetics have previously been shown to reduce oedema and improve dermal perfusion following experimental burns. Previous studies have used invasive techniques for burn oedema quantification which do not allow continuous monitoring in the same animal. The present study used digital image colour analysis to investigate the effect of topical local anaesthetics on burn-induced extravasation of Evans blue albumin. A standardised full-thickness burn injury (1 x 1 cm) was induced in the abdominal skin of anaesthetised rats. The burn area was subsequently covered with 0.5 g of lidocaine-prilocaine cream 5% (25 mg of each in 1 g; EMLA, ASTRA, Sweden) or placebo cream during the first hour post-burn. One hour after the burn trauma, animals received Evans blue dye intravenously. Skin colour appearances were recorded by macrophotography before the burn and 5, 60, 65, 90, 120, 150, and 180 min post-burn. Colour slides were digitised and colour changes were analysed using the normalised red-green-blue (n-rgb) colour system. Results showed a significant inhibition of Evans blue extravasation between 60 and 180 min post-burn in EMLA-treated animals versus controls. Topical local anaesthetics are potent inhibitors of burn-induced plasma albumin extravasation, probably through a direct action on vascular permeability and by inhibition of various steps of the pathophysiological response after burn injury.
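
    For reference, normalised red-green-blue (n-rgb) values used in this kind of colour analysis can be computed per pixel as in the brief sketch below (placeholder data, illustrative only):

        # Hedged sketch: normalised rgb chromaticity of a digitised colour slide;
        # overall intensity is removed and only the colour balance remains.
        import numpy as np

        image = np.random.randint(0, 256, size=(480, 640, 3)).astype(np.float64)  # placeholder RGB frame
        total = image.sum(axis=2, keepdims=True)
        total[total == 0] = 1.0                      # avoid division by zero on black pixels
        n_rgb = image / total                        # per pixel: r + g + b = 1
        mean_blue_fraction = n_rgb[..., 2].mean()    # e.g. track Evans blue extravasation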

  6. Site-directed mutagenesis of firefly luciferase: implication of conserved residue(s) in bioluminescence emission spectra among firefly luciferases.

    PubMed

    Tafreshi, Narges Kh; Sadeghizadeh, Majid; Emamzadeh, Rahman; Ranjbar, Bijan; Naderi-Manesh, Hossein; Hosseinkhani, Saman

    2008-05-15

    The bioluminescence colours of firefly luciferases are determined by assay conditions and luciferase structure. Owing to red light having lower energy than green light and being less absorbed by biological tissues, red-emitting luciferases have been considered as useful reporters in imaging technology. A set of red-emitting mutants of Lampyris turkestanicus (Iranian firefly) luciferase has been made by site-directed mutagenesis. Among different beetle luciferases, those from Phrixothrix (railroad worm) emit either green or red bioluminescence colours naturally. By substitution of three specific amino acids using site-specific mutagenesis in a green-emitting luciferase (from L. turkestanicus), the colour of emitted light was changed to red concomitant with decreasing decay rate. Different specific mutations (H245N, S284T and H431Y) led to changes in the bioluminescence colour. Meanwhile, the luciferase reaction took place with relative retention of its basic kinetic properties such as K(m) and relative activity. Structural comparison of the native and mutant luciferases using intrinsic fluorescence, far-UV CD spectra and homology modelling revealed a significant conformational change in mutant forms. A change in the colour of emitted light indicates the critical role of these conserved residues in bioluminescence colour determination among firefly luciferases. Relatively high specific activity and emission of red light might make these mutants suitable as reporters for the study of gene expression and bioluminescence imaging.

  7. Cigarette brand variant portfolio strategy and the use of colour in a darkening market.

    PubMed

    Greenland, Steven J

    2015-03-01

    To evaluate cigarette branding strategies used to segment a market with some of the toughest tobacco controls. To document brand variant and packaging portfolios and assess the role played by colour before plain packaging, as well as consider the threat that recently implemented legislation poses for tobacco manufacturers. Brand variant and packaging details were extracted from manufacturer ingredient reports, as well as a retail audit of Australian supermarkets. Details were also collected for other product categories to provide perspective on cigarette portfolios. Secondary and primary data sources were analysed to evaluate variant and packaging portfolio strategy. In Australia, 12 leading cigarette brands supported 120 brand variants. Of these 61 had names with a specific colour and a further 26 had names with colour connotation. There were 338 corresponding packaging configurations, with most variants available in the primary cigarette distribution channel in four pack size options. Tobacco companies microsegment Australian consumers with highly differentiated product offerings and a family branding strategy that helps ameliorate the effects of marketing restrictions. To date, tobacco controls have had little negative impact upon variant and packaging portfolios, which have continued to expand. Colour has become a key visual signifier differentiating one variant from the next, and colour names are used to extend brand lines. However, the role of colour, as a heuristic to simplify consumer decision-making processes, becomes largely redundant with plain packaging. Plain packaging's impact upon manufacturers' branding strategies is therefore likely to be significant. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  8. Contour-Driven Atlas-Based Segmentation

    PubMed Central

    Wachinger, Christian; Fritscher, Karl; Sharp, Greg; Golland, Polina

    2016-01-01

    We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images. PMID:26068202

  9. Metric Learning to Enhance Hyperspectral Image Segmentation

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Castano, Rebecca; Bue, Brian; Gilmore, Martha S.

    2013-01-01

    Unsupervised hyperspectral image segmentation can reveal spatial trends that show the physical structure of the scene to an analyst. They highlight borders and reveal areas of homogeneity and change. Segmentations are independently helpful for object recognition, and assist with automated production of symbolic maps. Additionally, a good segmentation can dramatically reduce the number of effective spectra in an image, enabling analyses that would otherwise be computationally prohibitive. Specifically, using an over-segmentation of the image instead of individual pixels can reduce noise and potentially improve the results of statistical post-analysis. In this innovation, a metric learning approach is presented to improve the performance of unsupervised hyperspectral image segmentation. The prototype demonstrations attempt a superpixel segmentation in which the image is conservatively over-segmented; that is, the single surface features may be split into multiple segments, but each individual segment, or superpixel, is ensured to have homogenous mineralogy.
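
    A minimal sketch of conservative over-segmentation into superpixels (SLIC is used purely as an illustration; it is not the metric-learning method of the paper):

        # Hedged sketch: over-segment an image into superpixels and keep one
        # averaged spectrum/colour per superpixel instead of one per pixel.
        import numpy as np
        from skimage.segmentation import slic

        image = np.random.rand(200, 200, 3)      # placeholder (RGB stand-in for a hyperspectral cube)
        labels = slic(image, n_segments=400, compactness=10.0)

        segment_ids = np.unique(labels)
        means = np.array([image[labels == i].mean(axis=0) for i in segment_ids])
        print(f"{image.shape[0] * image.shape[1]} pixels reduced to {len(segment_ids)} superpixels")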

  10. Image Segmentation Using Minimum Spanning Tree

    NASA Astrophysics Data System (ADS)

    Dewi, M. P.; Armiati, A.; Alvini, S.

    2018-04-01

    This research aims to segment digital images. The purpose of segmentation is to separate the object from the background so that the main object can be processed further for other purposes. Along with the development of technology in digital image processing applications, the segmentation process becomes increasingly necessary. The segmented image, which is the result of the segmentation process, should be accurate because the next process needs to interpret the information in the image. This article discusses the application of the minimum spanning tree of a graph to the segmentation of digital images. This method is able to separate an object from the background, and the image is converted into a binary image. In this case, the object in focus is set to white, while the background is black, or vice versa.
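
    A minimal sketch of the basic mechanism (illustrative, not the authors' exact algorithm): build a 4-connected grid graph over pixels with intensity differences as edge weights, take its minimum spanning tree, cut the heaviest edges, and label the remaining connected components.

        # Hedged sketch: minimum-spanning-tree segmentation of a greyscale image.
        import numpy as np
        from scipy.sparse import coo_matrix
        from scipy.sparse.csgraph import connected_components, minimum_spanning_tree

        image = np.random.rand(64, 64)               # placeholder greyscale image
        h, w = image.shape
        idx = np.arange(h * w).reshape(h, w)

        # 4-connected grid edges weighted by absolute intensity difference
        rows = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
        cols = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
        weights = np.abs(image.ravel()[rows] - image.ravel()[cols]) + 1e-9  # keep zero-difference edges

        graph = coo_matrix((weights, (rows, cols)), shape=(h * w, h * w))
        mst = minimum_spanning_tree(graph).tocoo()

        # Cut the heaviest MST edges; the remaining components are the segments
        keep = mst.data < np.percentile(mst.data, 98)
        pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])), shape=(h * w, h * w))
        n_segments, labels = connected_components(pruned, directed=False)
        segmentation = labels.reshape(h, w)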

  11. An Ecological Alternative to Snodgrass & Vanderwart: 360 High Quality Colour Images with Norms for Seven Psycholinguistic Variables

    PubMed Central

    Moreno-Martínez, Francisco Javier; Montoro, Pedro R.

    2012-01-01

    This work presents a new set of 360 high quality colour images belonging to 23 semantic subcategories. Two hundred and thirty-six Spanish speakers named the items and also provided data on seven relevant psycholinguistic variables: age of acquisition, familiarity, manipulability, name agreement, typicality and visual complexity. Furthermore, we also present lexical frequency data derived from Internet search hits. Apart from the high number of variables evaluated, which are known to affect the processing of stimuli, this new set presents important advantages over other similar image corpora: (a) this corpus contains a broad number of subcategories and images, which will permit researchers to select stimuli of appropriate difficulty as required (e.g., to deal with problems derived from ceiling effects); (b) the use of coloured stimuli provides a more realistic, ecologically valid representation of real-life objects. In sum, this set of stimuli provides a useful tool for research on visual object- and word-processing, both in neurological patients and in healthy controls. PMID:22662166

  12. Computer-aided diagnosis based on enhancement of degraded fundus photographs.

    PubMed

    Jin, Kai; Zhou, Mei; Wang, Shaoze; Lou, Lixia; Xu, Yufeng; Ye, Juan; Qian, Dahong

    2018-05-01

    Retinal imaging is an important and effective tool for detecting retinal diseases. However, degraded images caused by the aberrations of the eye can disguise lesions, so that a diseased eye can be mistakenly diagnosed as normal. In this work, we propose a new image enhancement method to improve the quality of degraded images. A new method is used to enhance degraded-quality fundus images. In this method, the image is converted from the input RGB colour space to LAB colour space and then each normalized component is enhanced using contrast-limited adaptive histogram equalization. Human visual system (HVS)-based fundus image quality assessment, combined with diagnosis by experts, is used to evaluate the enhancement. The study included 191 degraded-quality fundus photographs of 143 subjects with optic media opacity. Objective quality assessment of image enhancement (range: 0-1) indicated that our method improved colour retinal image quality from an average of 0.0773 (variance 0.0801) to an average of 0.3973 (variance 0.0756). Following enhancement, the areas under the curve (AUC) were 0.996 for the glaucoma classifier, 0.989 for the diabetic retinopathy (DR) classifier, 0.975 for the age-related macular degeneration (AMD) classifier and 0.979 for the classifier for other retinal diseases. The relatively simple method for enhancing degraded-quality fundus images achieves superior image enhancement, as demonstrated in a qualitative HVS-based image quality assessment. This retinal image enhancement may, therefore, be employed to assist ophthalmologists in more efficient screening of retinal diseases and the development of computer-aided diagnosis. © 2017 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
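
    A minimal sketch of the described pipeline (RGB to LAB conversion followed by contrast-limited adaptive histogram equalization) using OpenCV. The clip limit and tile size are illustrative assumptions, not the values used in the paper.

    ```python
    import cv2

    def enhance_fundus(bgr_image):
        """Enhance a degraded fundus photograph (OpenCV loads images as BGR)."""
        lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        # Apply CLAHE to each LAB channel, mirroring the per-component enhancement
        # described in the abstract
        channels = [clahe.apply(c) for c in cv2.split(lab)]
        enhanced_lab = cv2.merge(channels)
        return cv2.cvtColor(enhanced_lab, cv2.COLOR_LAB2BGR)

    # Usage: enhanced = enhance_fundus(cv2.imread("degraded_fundus.jpg"))
    ```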

  13. A New Improved Threshold Segmentation Method for Scanning Images of Reservoir Rocks Considering Pore Fractal Characteristics

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Li, Xizhe; Yang, Zhengming; Lin, Lijun; Xiong, Shengchun; Wang, Zhiyuan; Wang, Xiangyang; Xiao, Qianhua

    Based on the basic principle of the porosity method in image segmentation, and considering the relationship between the porosity of the rocks and the fractal characteristics of the pore structures, a new improved image segmentation method is proposed that uses the calculated porosity of the core images as a constraint to obtain the best threshold. A comparative analysis shows that the porosity method is theoretically the best way to segment such images, but its actual segmentation results deviate from the real situation. Because cores are heterogeneous and contain isolated pores, the porosity method, which takes the experimentally measured porosity of the whole core as the criterion, cannot achieve the desired segmentation. In contrast, the new improved method overcomes these shortcomings and produces a more reasonable binary segmentation of the core grayscale images, because it segments each image according to the porosity calculated for that image. Moreover, basing the segmentation on the calculated porosity rather than the measured porosity also greatly saves manpower and material resources, especially for tight rocks.
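
    A minimal sketch of a porosity-constrained threshold, assuming pores appear darker than grains: the grey level is chosen so that the fraction of pixels below it equals the porosity calculated for that image. Function and parameter names are illustrative, not the paper's.

    ```python
    import numpy as np

    def porosity_threshold(gray, porosity):
        """gray: 2-D array of grey levels; porosity: pore fraction in [0, 1]."""
        threshold = np.quantile(gray, porosity)  # grey level below which `porosity` of pixels fall
        pores = gray <= threshold                # binary pore map
        return threshold, pores

    # Example: segment a synthetic image with a 30 % target porosity
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(128, 128))
    t, pore_mask = porosity_threshold(img, 0.30)
    ```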

  14. Super-multiplex vibrational imaging

    NASA Astrophysics Data System (ADS)

    Wei, Lu; Chen, Zhixing; Shi, Lixue; Long, Rong; Anzalone, Andrew V.; Zhang, Luyuan; Hu, Fanghao; Yuste, Rafael; Cornish, Virginia W.; Min, Wei

    2017-04-01

    The ability to visualize directly a large number of distinct molecular species inside cells is increasingly essential for understanding complex systems and processes. Even though existing methods have successfully been used to explore structure-function relationships in nervous systems, to profile RNA in situ, to reveal the heterogeneity of tumour microenvironments and to study dynamic macromolecular assembly, it remains challenging to image many species with high selectivity and sensitivity under biological conditions. For instance, fluorescence microscopy faces a ‘colour barrier’, owing to the intrinsically broad (about 1,500 inverse centimetres) and featureless nature of fluorescence spectra that limits the number of resolvable colours to two to five (or seven to nine if using complicated instrumentation and analysis). Spontaneous Raman microscopy probes vibrational transitions with much narrower resonances (peak width of about 10 inverse centimetres) and so does not suffer from this problem, but weak signals make many bio-imaging applications impossible. Although surface-enhanced Raman scattering offers high sensitivity and multiplicity, it cannot be readily used to image specific molecular targets quantitatively inside live cells. Here we use stimulated Raman scattering under electronic pre-resonance conditions to image target molecules inside living cells with very high vibrational selectivity and sensitivity (down to 250 nanomolar with a time constant of 1 millisecond). We create a palette of triple-bond-conjugated near-infrared dyes that each displays a single peak in the cell-silent Raman spectral window; when combined with available fluorescent probes, this palette provides 24 resolvable colours, with the potential for further expansion. Proof-of-principle experiments on neuronal co-cultures and brain tissues reveal cell-type-dependent heterogeneities in DNA and protein metabolism under physiological and pathological conditions, underscoring the potential of this 24-colour (super-multiplex) optical imaging approach for elucidating intricate interactions in complex biological systems.

  15. Perception of Safety and Liking Associated to the Colour Intervention of Bike Lanes: Contribution from the Behavioural Sciences to Urban Design and Wellbeing.

    PubMed

    Vera-Villarroel, Pablo; Contreras, Daniela; Lillo, Sebastián; Beyle, Christian; Segovia, Ariel; Rojo, Natalia; Moreno, Sandra; Oyarzo, Francisco

    2016-01-01

    The perception of colour and its subjective effects are key issues in designing safe and enjoyable bike lanes. This paper addresses the relationship between the colours of bike lane interventions-in particular pavement painting and intersection design-and the subjective evaluation of liking, visual saliency, and perceived safety related to such an intervention. Utilising images of three real bike lane intersections modified by software to change their colour (five colours in total), this study recruited 538 participants to assess their perception of all fifteen colour-design combinations. A multivariate analysis of covariance (MANCOVA) with the Bonferroni post hoc test was performed to assess the effect of the main conditions (colour and design) on the dependent variables (liking towards the intervention, level of visual saliency of the intersection, and perceived safety of the bike lane). The results showed that the colour red was the most positively associated with the outcome variables, followed by yellow and blue. Additionally, it was observed that the effect of colour widely outweighs the effect of design, suggesting that the right choice and use of colour would increase the effectiveness of bike-lane pavement interventions. Limitations and future directions are discussed.

  16. From spectral information to animal colour vision: experiments and concepts.

    PubMed

    Kelber, Almut; Osorio, Daniel

    2010-06-07

    Many animals use the spectral distribution of light to guide behaviour, but whether they have colour vision has been debated for over a century. Our strong subjective experience of colour and the fact that human vision is the paradigm for colour science inevitably raises the question of how we compare with other species. This article outlines four grades of 'colour vision' that can be related to the behavioural uses of spectral information, and perhaps to the underlying mechanisms. In the first, even without an (image-forming) eye, simple organisms can compare photoreceptor signals to locate a desired light environment. At the next grade, chromatic mechanisms along with spatial vision guide innate preferences for objects such as food or mates; this is sometimes described as wavelength-specific behaviour. Here, we compare the capabilities of di- and trichromatic vision, and ask why some animals have more than three spectral types of receptors. Behaviours guided by innate preferences are then distinguished from a grade that allows learning, in part because the ability to learn an arbitrary colour is evidence for a neural representation of colour. The fourth grade concerns colour appearance rather than colour difference: for instance, the distinction between hue and saturation, and colour categorization. These higher-level phenomena are essential to human colour perception but poorly known in animals, and we suggest how they can be studied. Finally, we observe that awareness of colour and colour qualia cannot be easily tested in animals.

  17. Techniques on semiautomatic segmentation using the Adobe Photoshop

    NASA Astrophysics Data System (ADS)

    Park, Jin Seo; Chung, Min Suk; Hwang, Sung Bae

    2005-04-01

    The purpose of this research is to enable anybody to semiautomatically segment the anatomical structures in MRIs, CTs, and other medical images on a personal computer. The segmented images are used for making three-dimensional images, which are helpful in medical education and research. To achieve this purpose, the following trials were performed. The entire body of a volunteer was MR scanned to make 557 MRIs, which were transferred to a personal computer. On Adobe Photoshop, contours of 19 anatomical structures in the MRIs were semiautomatically drawn using the MAGNETIC LASSO TOOL and successively corrected manually using either the LASSO TOOL or the DIRECT SELECTION TOOL to make 557 segmented images. Likewise, 11 anatomical structures in the 8,500 anatomical images were segmented. Also, 12 brain and 10 heart anatomical structures in anatomical images were segmented. Proper segmentation was verified by making and examining the coronal, sagittal, and three-dimensional images from the segmented images. During semiautomatic segmentation on Adobe Photoshop, a suitable algorithm could be used, the extent of automation could be regulated, a convenient user interface was available, and software bugs rarely occurred. The techniques of semiautomatic segmentation using Adobe Photoshop are expected to be widely used for segmentation of anatomical structures in various medical images.

  18. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
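
    To make the "globally best merges first" idea concrete, here is a deliberately unoptimized sketch: every pixel starts as its own region, and at each step the two most similar adjacent regions (smallest difference in mean intensity) are merged, until no adjacent pair is closer than a tolerance. This only illustrates the order-independence idea; it is not the MPP implementation described above, and the tolerance is an assumed parameter.

    ```python
    import numpy as np

    def best_merge_segmentation(img, tol=0.1):
        h, w = img.shape
        labels = np.arange(h * w).reshape(h, w)
        sums = {i: v for i, v in zip(labels.ravel(), img.ravel().astype(float))}
        counts = {i: 1 for i in labels.ravel()}
        while True:
            # Collect every pair of adjacent regions and its mean-intensity difference
            pairs = {}
            for dr, dc in [(0, 1), (1, 0)]:
                a = labels[:h - dr, :w - dc].ravel()
                b = labels[dr:, dc:].ravel()
                for la, lb in zip(a, b):
                    if la != lb:
                        key = (min(la, lb), max(la, lb))
                        pairs[key] = abs(sums[la] / counts[la] - sums[lb] / counts[lb])
            if not pairs:
                break
            (la, lb), diff = min(pairs.items(), key=lambda kv: kv[1])
            if diff > tol:
                break
            # Globally best merge: absorb region lb into region la
            sums[la] += sums.pop(lb)
            counts[la] += counts.pop(lb)
            labels[labels == lb] = la
        return labels

    # Example on a tiny two-intensity image: the left and right halves each merge
    # into a single region, and the merge order does not depend on scan order.
    img = np.zeros((8, 8))
    img[:, 4:] = 1.0
    seg = best_merge_segmentation(img, tol=0.1)
    ```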

  19. Color segmentation in the HSI color space using the K-means algorithm

    NASA Astrophysics Data System (ADS)

    Weeks, Arthur R.; Hague, G. Eric

    1997-04-01

    Segmentation of images is an important aspect of image recognition. While grayscale image segmentation has become quite a mature field, much less work has been done with regard to color image segmentation. Until recently, this was predominantly due to the lack of available computing power and color display hardware required to manipulate true color images (24-bit). Today, it is not uncommon to find a standard desktop computer system with a true-color 24-bit display, at least 8 million bytes of memory, and 2 gigabytes of hard disk storage. Segmentation of color images is not as simple as segmenting each of the three RGB color components separately. The difficulty of using the RGB color space is that it does not closely model the psychological understanding of color. A better color model, which more closely follows human visual perception, is the hue, saturation, intensity (HSI) model. This color model separates the color components in terms of chromatic and achromatic information. Strickland et al. were able to show the importance of color in the extraction of edge features from an image. Their method enhances the edges that are detectable in the luminance image with information from the saturation image. Segmentation of both the saturation and intensity components is easily accomplished with any grayscale segmentation algorithm, since these spaces are linear. The modulo 2π nature of the hue component makes its segmentation difficult; for example, hues of 0 and 2π yield the same color tint. Instead of applying separate image segmentation to each of the hue, saturation, and intensity components, a better method is to segment the chromatic component separately from the intensity component because of the importance that the chromatic information plays in the segmentation of color images. This paper presents a method of using the grayscale K-means algorithm to segment 24-bit color images. Additionally, this paper shows the importance the hue component plays in the segmentation of color images.
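
    One common workaround for the hue wrap-around mentioned above is to cluster the chromatic information as (S·cos H, S·sin H), so that hues of 0 and 2π map to the same point. The sketch below illustrates that idea with scikit-learn's KMeans and HSV as a stand-in for the HSI model; it is not the paper's exact algorithm, and the number of clusters is an assumed parameter.

    ```python
    import numpy as np
    from skimage.color import rgb2hsv
    from sklearn.cluster import KMeans

    def chromatic_kmeans(rgb, n_clusters=3):
        hsv = rgb2hsv(rgb)                       # H in [0, 1), S and V in [0, 1]
        h = hsv[..., 0] * 2.0 * np.pi
        s = hsv[..., 1]
        # Saturation-weighted chromatic coordinates remove the 0 / 2*pi discontinuity
        features = np.stack([s * np.cos(h), s * np.sin(h)], axis=-1).reshape(-1, 2)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
        return labels.reshape(rgb.shape[:2])
    ```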

  20. Development of a semi-automated combined PET and CT lung lesion segmentation framework

    NASA Astrophysics Data System (ADS)

    Rossi, Farli; Mokri, Siti Salasiah; Rahni, Ashrani Aizzuddin Abd.

    2017-03-01

    Segmentation is one of the most important steps in automated medical diagnosis applications and affects the accuracy of the overall system. In this paper, we propose a semi-automated segmentation method for extracting lung lesions from thoracic PET/CT images by combining low level processing and active contour techniques. The lesions are first segmented in the PET images, which are converted to standardised uptake values (SUVs). The segmented PET images then serve as an initial contour for subsequent active contour segmentation of the corresponding CT images. To evaluate its accuracy, the Jaccard Index (JI) was used as a measure of the agreement between the segmented lesion and alternative segmentations from the QIN lung CT segmentation challenge, which is possible by registering the whole body PET/CT images to the corresponding thoracic CT images. The results show that our proposed technique has acceptable accuracy in lung lesion segmentation, with JI values of around 0.8, especially when considering the variability of the alternative segmentations.
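
    A minimal sketch of the Jaccard Index (JI) used above to compare a segmented lesion mask against a reference mask; the function name is illustrative.

    ```python
    import numpy as np

    def jaccard_index(mask_a, mask_b):
        """JI = |A intersection B| / |A union B| for two binary masks."""
        a = np.asarray(mask_a).astype(bool)
        b = np.asarray(mask_b).astype(bool)
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 1.0
    ```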

  1. Intelligent multi-spectral IR image segmentation

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert

    2017-05-01

    This article presents a neural network based multi-spectral image segmentation method. A neural network is trained on selected features of both the objects and the background in longwave (LW) infrared (IR) images. Multiple iterations of training are performed until the accuracy of the segmentation reaches a satisfactory level. The segmentation boundary of the LW image is then used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects local discontinuities and refines the accuracy of the local boundaries. This article compares the neural network based segmentation method to the Wavelet-threshold and Grab-Cut methods. Test results show increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.

  2. A Method for the Registration of Hemispherical Photographs and TLS Intensity Images

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Schilling, A.; Maas, H.-G.

    2012-07-01

    Terrestrial laser scanners generate dense and accurate 3D point clouds with minimal effort, which represent the geometry of real objects, while image data contains texture information of object surfaces. Based on the complementary characteristics of both data sets, a combination is very appealing for many applications, including forest-related tasks. In the scope of our research project, independent data sets of a plain birch stand have been taken by a full-spherical laser scanner and a hemispherical digital camera. Previously, both kinds of data sets have been considered separately: Individual trees were successfully extracted from large 3D point clouds, and so-called forest inventory parameters could be determined. Additionally, a simplified tree topology representation was retrieved. From hemispherical images, leaf area index (LAI) values, as a very relevant parameter for describing a stand, have been computed. The objective of our approach is to merge a 3D point cloud with image data in a way that RGB values are assigned to each 3D point. So far, segmentation and classification of TLS point clouds in forestry applications was mainly based on geometrical aspects of the data set. However, a 3D point cloud with colour information provides valuable cues exceeding simple statistical evaluation of geometrical object features and thus may facilitate the analysis of the scan data significantly.

  3. Exudate-based diabetic macular edema detection in fundus images using publicly available datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul

    2011-01-01

    Diabetic macular edema (DME) is a common vision threatening complication of diabetic retinopathy. In a large scale screening environment DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME through the presence of exudation. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing (i.e., the classifier was trained on an independent dataset and tested on MESSIDOR). Our algorithm obtained an AUC between 0.88 and 0.94 depending on the dataset/features used. Additionally, it does not need ground truth at the lesion level to reject false positives and is computationally efficient, as it generates a diagnosis in an average of 4.4 s (9.3 s including optic nerve localization) per image on a 2.6 GHz platform with an unoptimized Matlab implementation.

  4. Image Information Mining Utilizing Hierarchical Segmentation

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai

    2002-01-01

    The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms for reducing visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance the VisiMine system through incorporating hierarchical segmentations from HSEG into the VisiMine system.

  5. Aberration correction in wide-field fluorescence microscopy by segmented-pupil image interferometry.

    PubMed

    Scrimgeour, Jan; Curtis, Jennifer E

    2012-06-18

    We present a new technique for the correction of optical aberrations in wide-field fluorescence microscopy. Segmented-Pupil Image Interferometry (SPII) uses a liquid crystal spatial light modulator placed in the microscope's pupil plane to split the wavefront originating from a fluorescent object into an array of individual beams. Distortion of the wavefront arising from either system or sample aberrations results in displacement of the images formed from the individual pupil segments. Analysis of image registration allows for the local tilt in the wavefront at each segment to be corrected with respect to a central reference. A second correction step optimizes the image intensity by adjusting the relative phase of each pupil segment through image interferometry. This ensures that constructive interference between all segments is achieved at the image plane. Improvements in image quality are observed when Segmented-Pupil Image Interferometry is applied to correct aberrations arising from the microscope's optical path.

  6. Graphical user interface to optimize image contrast parameters used in object segmentation - biomed 2009.

    PubMed

    Anderson, Jeffrey R; Barrett, Steven F

    2009-01-01

    Image segmentation is the process of isolating distinct objects within an image. Computer algorithms have been developed to aid in the process of object segmentation, but a completely autonomous segmentation algorithm has yet to be developed [1]. This is because computers do not have the capability to understand images and recognize complex objects within the image. However, computer segmentation methods [2] requiring user input have been developed to quickly segment objects in serially sectioned images, such as magnetic resonance images (MRI) and confocal laser scanning microscope (CLSM) images. In these cases, the segmentation process becomes a powerful tool in visualizing the 3D nature of an object. The user input is an important part of improving the performance of many segmentation methods. A double threshold segmentation method has been investigated [3] to separate objects in grayscale images where the gray level of the object lies among the gray levels of the background. In order to best determine the threshold values for this segmentation method, the image must be manipulated for optimal contrast. The same is true of other segmentation and edge detection methods as well. Typically, the better the image contrast, the better the segmentation results. This paper describes a graphical user interface (GUI) that allows the user to easily change image contrast parameters to optimize the performance of subsequent object segmentation. This approach makes use of the fact that the human brain is extremely effective at object recognition and understanding. The GUI provides the user with the ability to define the gray scale range of the object of interest. The lower and upper bounds of this range are used in a histogram stretching process to improve image contrast. Also, the user can interactively modify the gamma correction factor that provides a non-linear distribution of gray scale values, while observing the corresponding changes to the image. This interactive approach gives the user the power to make optimal choices of the contrast enhancement parameters.
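
    A minimal sketch of the two contrast operations the GUI exposes: a linear histogram stretch between user-chosen grey bounds followed by gamma correction. Parameter and function names are illustrative, not those of the described tool.

    ```python
    import numpy as np

    def stretch_and_gamma(gray, low, high, gamma=1.0):
        """gray: uint8 image; low/high: user-selected grey-level range of the object."""
        g = gray.astype(float)
        stretched = np.clip((g - low) / float(high - low), 0.0, 1.0)  # map [low, high] -> [0, 1]
        corrected = stretched ** gamma                                # non-linear redistribution
        return (corrected * 255).astype(np.uint8)

    # Example: emphasise an object whose grey levels lie between 80 and 150
    # out = stretch_and_gamma(img, low=80, high=150, gamma=0.8)
    ```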

  7. Cueing Complex Animations: Does Direction of Attention Foster Learning Processes?

    ERIC Educational Resources Information Center

    Lowe, Richard; Boucheix, Jean-Michel

    2011-01-01

    The time course of learners' processing of a complex animation was studied using a dynamic diagram of a piano mechanism. Over successive repetitions of the material, two forms of cueing (standard colour cueing and anti-cueing) were administered either before or during the animated segment of the presentation. An uncued group and two other control…

  8. Multi-particle three-dimensional coordinate estimation in real-time optical manipulation

    NASA Astrophysics Data System (ADS)

    Dam, J. S.; Perch-Nielsen, I.; Palima, D.; Gluckstad, J.

    2009-11-01

    We have previously shown how stereoscopic images can be obtained in our three-dimensional optical micromanipulation system [J. S. Dam et al, Opt. Express 16, 7244 (2008)]. Here, we present an extension and application of this principle to automatically gather the three-dimensional coordinates of all trapped particles with a large tracking range and high reliability, without requiring user calibration. By deconvolving the red, green, and blue colour planes to correct for bleeding between them, we show that the system can be extended to also utilize green illumination, in addition to the blue and red. Applying the green colour as on-axis illumination yields redundant information for enhanced error correction, which is used to verify the gathered data, resulting in reliable coordinates as well as visually attractive images.
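
    One simple way to picture the crosstalk ("bleeding") correction is as a per-pixel linear unmixing with a 3x3 mixing matrix. The paper describes a deconvolution of the colour planes, so the sketch below is only an illustration of the unmixing idea; the matrix values are invented and would in practice be measured for the camera.

    ```python
    import numpy as np

    # Hypothetical crosstalk matrix: each measured channel is a mix of the true channels
    MIX = np.array([[1.00, 0.08, 0.02],
                    [0.06, 1.00, 0.10],
                    [0.01, 0.12, 1.00]])

    def unmix_colour_planes(rgb_image):
        """Undo linear crosstalk between the R, G and B planes of an image."""
        pixels = rgb_image.reshape(-1, 3).astype(float)
        corrected = pixels @ np.linalg.inv(MIX).T   # invert the assumed mixing
        return np.clip(corrected, 0, None).reshape(rgb_image.shape)
    ```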

  9. Image segmentation using fuzzy LVQ clustering networks

    NASA Technical Reports Server (NTRS)

    Tsao, Eric Chen-Kuo; Bezdek, James C.; Pal, Nikhil R.

    1992-01-01

    In this note we formulate image segmentation as a clustering problem. Feature vectors extracted from a raw image are clustered into subregions, thereby segmenting the image. A fuzzy generalization of a Kohonen learning vector quantization (LVQ) which integrates the Fuzzy c-Means (FCM) model with the learning rate and updating strategies of the LVQ is used for this task. This network, which segments images in an unsupervised manner, is thus related to the FCM optimization problem. Numerical examples on photographic and magnetic resonance images are given to illustrate this approach to image segmentation.
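
    A compact sketch of the Fuzzy c-Means (FCM) iteration that underlies the fuzzy LVQ network described above: alternate membership and centroid updates on feature vectors extracted from the image (here simply raw grey levels). Parameter values are illustrative defaults.

    ```python
    import numpy as np

    def fuzzy_c_means(x, c=3, m=2.0, n_iter=100, seed=0):
        """x: (n_samples, n_features) array; returns (memberships, centroids)."""
        rng = np.random.default_rng(seed)
        u = rng.random((x.shape[0], c))
        u /= u.sum(axis=1, keepdims=True)
        for _ in range(n_iter):
            w = u ** m
            centroids = (w.T @ x) / w.sum(axis=0)[:, None]          # weighted means
            d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
            u = 1.0 / (d ** (2.0 / (m - 1.0)))                      # u_ik proportional to d_ik^(-2/(m-1))
            u /= u.sum(axis=1, keepdims=True)
        return u, centroids

    # Segment a grey image by hard-assigning each pixel to its highest membership:
    # labels = fuzzy_c_means(img.reshape(-1, 1), c=3)[0].argmax(axis=1).reshape(img.shape)
    ```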

  10. Adaptive Introgression across Species Boundaries in Heliconius Butterflies

    PubMed Central

    Pardo-Diaz, Carolina; Salazar, Camilo; Baxter, Simon W.; Merot, Claire; Figueiredo-Ready, Wilsea; Joron, Mathieu; McMillan, W. Owen; Jiggins, Chris D.

    2012-01-01

    It is widely documented that hybridisation occurs between many closely related species, but the importance of introgression in adaptive evolution remains unclear, especially in animals. Here, we have examined the role of introgressive hybridisation in transferring adaptations between mimetic Heliconius butterflies, taking advantage of the recent identification of a gene regulating red wing patterns in this genus. By sequencing regions both linked and unlinked to the red colour locus, we found a region that displays an almost perfect genotype by phenotype association across four species, H. melpomene, H. cydno, H. timareta, and H. heurippa. This particular segment is located 70 kb downstream of the red colour specification gene optix, and coalescent analysis indicates repeated introgression of adaptive alleles from H. melpomene into the H. cydno species clade. Our analytical methods complement recent genome scale data for the same region and suggest adaptive introgression has a crucial role in generating adaptive wing colour diversity in this group of butterflies. PMID:22737081

  11. Perceiving colour at a glimpse: the relevance of where one fixates.

    PubMed

    Brenner, Eli; Granzier, Jeroen J M; Smeets, Jeroen B J

    2007-09-01

    We used classification images to examine whether certain parts of a surface are particularly important when judging its colour, such as its centre, its edges, or where one is looking. The scene consisted of a regular pattern of square tiles with random colours from along a short line in colour space. Targets defined by a square array of brighter tiles were presented for 200ms. The colours of the tiles within the target were biased by an amount that led to about 70% of the responses being correct. Subjects fixated a point that fell within the target's lower left quadrant and reported each target's colour. They tended to report the colour of the tiles near the fixation point. The influence of the tiles' colour reversed at the target's border and was weaker outside the target. The colour at the border itself was not particularly important. When coloured tiles were also presented before (and after) target presentation they had an opposite (but weaker) effect, indicating that the change in colour is important. Comparing the influence of tiles outside the target with that of tiles at the position at which the target would soon appear suggests that when judging surface colours during the short "glimpses" between saccades, temporal comparisons can be at least as important as spatial ones. We conclude that eye movements are important for colour vision, both because they determine which part of the surface of interest will be given most weight and because the perceived colour of such a surface also depends on what one looked at last.

  12. An image segmentation method for apple sorting and grading using support vector machine and Otsu's method

    USDA-ARS?s Scientific Manuscript database

    Segmentation is the first step in image analysis to subdivide an image into meaningful regions. The segmentation result directly affects the subsequent image analysis. The objective of the research was to develop an automatic adjustable algorithm for segmentation of color images, using linear suppor...

  13. Multiple Hypotheses Image Segmentation and Classification With Application to Dietary Assessment

    PubMed Central

    Zhu, Fengqing; Bosch, Marc; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2016-01-01

    We propose a method for dietary assessment to automatically identify and locate food in a variety of images captured during controlled and natural eating events. Two concepts are combined to achieve this: a set of segmented objects can be partitioned into perceptually similar object classes based on global and local features; and perceptually similar object classes can be used to assess the accuracy of image segmentation. These ideas are implemented by generating multiple segmentations of an image to select stable segmentations based on the classifier’s confidence score assigned to each segmented image region. Automatic segmented regions are classified using a multichannel feature classification system. For each segmented region, multiple feature spaces are formed. Feature vectors in each of the feature spaces are individually classified. The final decision is obtained by combining class decisions from individual feature spaces using decision rules. We show improved accuracy of segmenting food images with classifier feedback. PMID:25561457

  14. Multiple hypotheses image segmentation and classification with application to dietary assessment.

    PubMed

    Zhu, Fengqing; Bosch, Marc; Khanna, Nitin; Boushey, Carol J; Delp, Edward J

    2015-01-01

    We propose a method for dietary assessment to automatically identify and locate food in a variety of images captured during controlled and natural eating events. Two concepts are combined to achieve this: a set of segmented objects can be partitioned into perceptually similar object classes based on global and local features; and perceptually similar object classes can be used to assess the accuracy of image segmentation. These ideas are implemented by generating multiple segmentations of an image to select stable segmentations based on the classifier's confidence score assigned to each segmented image region. Automatic segmented regions are classified using a multichannel feature classification system. For each segmented region, multiple feature spaces are formed. Feature vectors in each of the feature spaces are individually classified. The final decision is obtained by combining class decisions from individual feature spaces using decision rules. We show improved accuracy of segmenting food images with classifier feedback.

  15. Scalable Joint Segmentation and Registration Framework for Infant Brain Images.

    PubMed

    Dong, Pei; Wang, Li; Lin, Weili; Shen, Dinggang; Wu, Guorong

    2017-03-15

    The first year of life is the most dynamic and perhaps the most critical phase of postnatal brain development. The ability to accurately measure structural changes is critical in early brain development studies, which rely heavily on the performance of image segmentation and registration techniques. However, either infant image segmentation or registration, if deployed independently, encounters many more challenges than segmentation/registration of adult brains due to dynamic appearance changes with rapid brain development. In fact, image segmentation and registration of infant images can assist each other to overcome the above challenges by using the growth trajectories (i.e., temporal correspondences) learned from a large set of training subjects with complete longitudinal data. Specifically, a one-year-old image with ground-truth tissue segmentation can first be set as the reference domain. Then, to register the infant image of a new subject at an earlier age, we can estimate its tissue probability maps, i.e., with a sparse patch-based multi-atlas label fusion technique, where only the training images at the respective age are considered as atlases since they have similar image appearance. Next, these probability maps can be fused as a good initialization to guide the level set segmentation. Thus, image registration between the new infant image and the reference image is freed from the difficulty of appearance changes, by establishing correspondences upon the reasonably segmented images. Importantly, the segmentation of the new infant image can be further enhanced by propagating the much more reliable label fusion heuristics at the reference domain to the corresponding locations of the new infant image via the learned growth trajectories, so that image segmentation and registration assist each other. It is worth noting that our joint segmentation and registration framework is also flexible enough to handle the registration of any two infant images, even with a significant age gap in the first year of life, by linking their joint segmentation and registration through the reference domain. Thus, our proposed joint segmentation and registration method is scalable to various registration tasks in early brain development studies. Promising segmentation and registration results have been achieved for infant brain MR images aged from 2 weeks to 1 year, indicating the applicability of our method in early brain development studies.
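
    The paper uses a sparse patch-based multi-atlas label fusion; the sketch below shows only the simplest possible fusion (plain averaging of warped atlas label maps into tissue probability maps) to make the data flow concrete. Function and argument names are illustrative, not from the paper.

    ```python
    import numpy as np

    def fuse_atlas_labels(warped_atlas_labels, n_tissues):
        """warped_atlas_labels: list of integer label volumes aligned to the subject."""
        prob = np.zeros(warped_atlas_labels[0].shape + (n_tissues,))
        for labels in warped_atlas_labels:
            for t in range(n_tissues):
                prob[..., t] += (labels == t)
        prob /= len(warped_atlas_labels)
        # Probability maps plus a hard initial segmentation for the level set step
        return prob, prob.argmax(axis=-1)
    ```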

  16. Electronic eye for the prediction of parameters related to grape ripening.

    PubMed

    Orlandi, G; Calvini, R; Pigani, L; Foca, G; Vasile Simone, G; Antonelli, A; Ulrici, A

    2018-08-15

    An electronic eye (EE) for fast and easy evaluation of grape phenolic ripening has been developed. For this purpose, berries of different grape varieties were collected at different harvest times from veraison to maturity, then an amount of the derived must was deposited on a white sheet of absorbent paper to obtain a sort of paper chromatography. Thus, RGB images of the must spots were collected using a flatbed scanner and converted into one-dimensional signals, named colourgrams, which codify the colour properties of the images. The dataset of colourgrams was used to build calibration models to relate the colour of the images with the phenolic composition of the samples - determined by reference analytical methods - and therefore to follow the ripening trend. Satisfactory calibration models were obtained for the prediction of the most important parameters related to phenolic ripening of grapes, such as colour index, tonality, total anthocyanins content, malvidin-3-O-glucoside and petunidin-3-O-glucoside. Copyright © 2018 Elsevier B.V. All rights reserved.
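
    Colourgrams in this line of work are one-dimensional signals built from frequency distributions of colour-related quantities computed on the image. The sketch below computes a stripped-down variant from the R, G and B histograms only, assuming an 8-bit RGB image; the published colourgrams include further colour descriptors.

    ```python
    import numpy as np

    def simple_colourgram(rgb, bins=256):
        """Concatenate per-channel histograms and their cumulative sums into one signal."""
        signal = []
        for c in range(3):
            hist, _ = np.histogram(rgb[..., c], bins=bins, range=(0, 255), density=True)
            signal.extend(hist)
            signal.extend(np.cumsum(hist))
        return np.asarray(signal)
    ```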

  17. A Dynamic Graph Cuts Method with Integrated Multiple Feature Maps for Segmenting Kidneys in 2D Ultrasound Images.

    PubMed

    Zheng, Qiang; Warner, Steven; Tasian, Gregory; Fan, Yong

    2018-02-12

    Automatic segmentation of kidneys in ultrasound (US) images remains a challenging task because of high speckle noise, low contrast, and large appearance variations of kidneys in US images. Because texture features may improve the US image segmentation performance, we propose a novel graph cuts method to segment kidney in US images by integrating image intensity information and texture feature maps. We develop a new graph cuts-based method to segment kidney US images by integrating original image intensity information and texture feature maps extracted using Gabor filters. To handle large appearance variation within kidney images and improve computational efficiency, we build a graph of image pixels close to kidney boundary instead of building a graph of the whole image. To make the kidney segmentation robust to weak boundaries, we adopt localized regional information to measure similarity between image pixels for computing edge weights to build the graph of image pixels. The localized graph is dynamically updated and the graph cuts-based segmentation iteratively progresses until convergence. Our method has been evaluated based on kidney US images of 85 subjects. The imaging data of 20 randomly selected subjects were used as training data to tune parameters of the image segmentation method, and the remaining data were used as testing data for validation. Experiment results demonstrated that the proposed method obtained promising segmentation results for bilateral kidneys (average Dice index = 0.9446, average mean distance = 2.2551, average specificity = 0.9971, average accuracy = 0.9919), better than other methods under comparison (P < .05, paired Wilcoxon rank sum tests). The proposed method achieved promising performance for segmenting kidneys in two-dimensional US images, better than segmentation methods built on any single channel of image information. This method will facilitate extraction of kidney characteristics that may predict important clinical outcomes such as progression of chronic kidney disease. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
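
    A small sketch of the texture-feature-map idea: compute Gabor filter responses at a few orientations and frequencies and stack them with the intensity image as per-pixel features for a subsequent graph-based segmentation. The filter settings here are illustrative, not the values tuned in the paper.

    ```python
    import numpy as np
    from skimage.filters import gabor

    def gabor_feature_maps(gray, frequencies=(0.1, 0.2), n_orientations=4):
        maps = [gray.astype(float)]                      # intensity channel
        for f in frequencies:
            for k in range(n_orientations):
                theta = k * np.pi / n_orientations
                real, _ = gabor(gray, frequency=f, theta=theta)
                maps.append(np.abs(real))                # absolute response of the real part
        return np.stack(maps, axis=-1)                   # H x W x n_features
    ```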

  18. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation technology cannot meet the processing and storage requirements of massive remote sensing images. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process and builds a cheap and efficient computer cluster that uses parallel processing to implement the mean shift algorithm for remote sensing image segmentation based on the MapReduce model. This not only ensures the quality of remote sensing image segmentation but also improves segmentation speed and better meets real-time requirements. The MapReduce-based parallel mean shift algorithm for remote sensing image segmentation is therefore of practical significance and value.
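
    As a point of reference, here is a serial mean-shift colour segmentation using scikit-learn; in a MapReduce setting, a computation of this kind would run inside each map task on its partition of the data. This stands in for the per-partition step only and is not the authors' Hadoop implementation; the bandwidth value is an assumption, and the sketch is practical only for small images.

    ```python
    import numpy as np
    from sklearn.cluster import MeanShift

    def mean_shift_segmentation(rgb, bandwidth=20.0):
        """Cluster pixels by colour with mean shift and return a label image."""
        features = rgb.reshape(-1, 3).astype(float)
        labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(features)
        return labels.reshape(rgb.shape[:2])
    ```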

  19. A combined learning algorithm for prostate segmentation on 3D CT images.

    PubMed

    Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei

    2017-11-01

    Segmentation of the prostate on CT images has many applications in the diagnosis and treatment of prostate cancer. Because of the low soft-tissue contrast on CT images, prostate segmentation is a challenging task. A learning-based segmentation method is proposed for the prostate on three-dimensional (3D) CT images. We combine population-based and patient-based learning methods for segmenting the prostate on CT images. Population data can provide useful information to guide the segmentation processing. Because of inter-patient variations, patient-specific information is particularly useful to improve the segmentation accuracy for an individual patient. In this study, we combine a population learning method and a patient-specific learning method to improve the robustness of prostate segmentation on CT images. We train a population model based on the data from a group of prostate patients. We also train a patient-specific model based on the data of the individual patient and incorporate the information as marked by the user interaction into the segmentation processing. We calculate the similarity between the two models to obtain applicable population and patient-specific knowledge to compute the likelihood of a pixel belonging to the prostate tissue. A new adaptive threshold method is developed to convert the likelihood image into a binary image of the prostate, and thus complete the segmentation of the gland on CT images. The proposed learning-based segmentation algorithm was validated using 3D CT volumes of 92 patients. All of the CT image volumes were manually segmented independently three times by two, clinically experienced radiologists and the manual segmentation results served as the gold standard for evaluation. The experimental results show that the segmentation method achieved a Dice similarity coefficient of 87.18 ± 2.99%, compared to the manual segmentation. By combining the population learning and patient-specific learning methods, the proposed method is effective for segmenting the prostate on 3D CT images. The prostate CT segmentation method can be used in various applications including volume measurement and treatment planning of the prostate. © 2017 American Association of Physicists in Medicine.

  20. Multivariate statistical model for 3D image segmentation with application to medical images.

    PubMed

    John, Nigel M; Kabuka, Mansur R; Ibrahim, Mohamed O

    2003-12-01

    In this article we describe a statistical model that was developed to segment brain magnetic resonance images. The statistical segmentation algorithm was applied after a pre-processing stage involving the use of a 3D anisotropic filter along with histogram equalization techniques. The segmentation algorithm makes use of prior knowledge and a probability-based multivariate model designed to semi-automate the process of segmentation. The algorithm was applied to images obtained from the Center for Morphometric Analysis at Massachusetts General Hospital as part of the Internet Brain Segmentation Repository (IBSR). The developed algorithm showed improved accuracy over the k-means, adaptive Maximum Apriori Probability (MAP), biased MAP, and other algorithms. Experimental results showing the segmentation and the results of comparisons with other algorithms are provided. Results are based on an overlap criterion against expertly segmented images from the IBSR. The algorithm produced average results of approximately 80% overlap with the expertly segmented images (compared with 85% for manual segmentation and 55% for other algorithms).

  1. A region-based segmentation of tumour from brain CT images using nonlinear support vector machine classifier.

    PubMed

    Nanthagopal, A Padma; Rajamony, R Sukanesh

    2012-07-01

    The proposed system provides new textural information for segmenting tumours efficiently and accurately, with less computational time, from benign and malignant tumour images, especially for smaller tumour regions in computed tomography (CT) images. Region-based segmentation of tumours from brain CT image data is an important but time-consuming task performed manually by medical experts. The objective of this work is to segment brain tumours from CT images using combined grey and texture features with new edge features and a nonlinear support vector machine (SVM) classifier. The selected optimal features are used to model and train the nonlinear SVM classifier to segment the tumour from computed tomography images, and the segmentation accuracy is evaluated for each slice of the tumour image. The method is applied to real data of 80 benign and malignant tumour images. The results are compared with the radiologist-labelled ground truth. Quantitative analysis between the ground truth and the segmented tumour is presented in terms of segmentation accuracy and the overlap similarity measure, the Dice metric. From the analysis and performance measures such as segmentation accuracy and Dice metric, it is inferred that better segmentation accuracy and a higher Dice metric are achieved with the normalized cut segmentation method than with the fuzzy c-means clustering method.
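
    A minimal sketch of the Dice similarity coefficient used above to compare a segmented tumour mask with the radiologist-labelled ground truth; the function name is illustrative.

    ```python
    import numpy as np

    def dice_coefficient(seg, truth):
        """Dice = 2|A intersection B| / (|A| + |B|) for two binary masks."""
        a = np.asarray(seg).astype(bool)
        b = np.asarray(truth).astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
    ```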

  2. A Review on Segmentation of Positron Emission Tomography Images

    PubMed Central

    Foster, Brent; Bagci, Ulas; Mansoor, Awais; Xu, Ziyue; Mollura, Daniel J.

    2014-01-01

    Positron Emission Tomography (PET), a non-invasive functional imaging method at the molecular level, images the distribution of biologically targeted radiotracers with high sensitivity. PET imaging provides detailed quantitative information about many diseases and is often used to evaluate inflammation, infection, and cancer by detecting emitted photons from a radiotracer localized to abnormal cells. In order to differentiate abnormal tissue from surrounding areas in PET images, image segmentation methods play a vital role; therefore, accurate image segmentation is often necessary for proper disease detection, diagnosis, treatment planning, and follow-ups. In this review paper, we present state-of-the-art PET image segmentation methods, as well as the recent advances in image segmentation techniques. In order to make this manuscript self-contained, we also briefly explain the fundamentals of PET imaging, the challenges of diagnostic PET image analysis, and the effects of these challenges on the segmentation results. PMID:24845019

  3. One year in orbit of the first Geostationary Ocean Colour Imager (GOCI)

    NASA Astrophysics Data System (ADS)

    Faure, François; Coste, Pierre; Benchetrit, Thierry; Kang, Gm Sil; Kim, Han-dol

    2017-11-01

    The Geostationary Ocean Colour Imager (GOCI) is the first ocean colour imager to operate from geostationary orbit. It was developed by Astrium SAS under a KARI contract in about three years, between mid 2005 and October 2008, and integrated on board the COMS satellite at the end of 2008 alongside the COMS Meteo Imager (MI). The COMS satellite was launched in June 2010 and the in-orbit commissioning tests were completed at the beginning of 2011. The mission is designed to significantly improve ocean observation, complementing low-orbit services by providing high-frequency coverage. GOCI provides multi-spectral data to detect, monitor, quantify, and predict short-term changes of the coastal ocean environment for marine science research and applications. The target area for GOCI observation covers a large 2500 x 2500 km2 sea area around the Korean Peninsula, with an average ground sampling distance (GSD) of 500 m, corresponding to a nadir GSD of 360 m. The presentation briefly recalls the mission objectives and major instrument requirements, and then presents the results of in-orbit testing and validation. All functions, and in particular the CMOS detector matrix, operate nominally. Performance evaluated in orbit (SNR, MTF, etc.) exceeds the requirements. Finally, in-orbit calibrations using the sun diffuser show very satisfactory consistency with the ground characterisation. GOCI is now delivering operational products and demonstrating the value of geostationary observation for ocean colour applications.

  4. A validation framework for brain tumor segmentation.

    PubMed

    Archip, Neculai; Jolesz, Ferenc A; Warfield, Simon K

    2007-10-01

    We introduce a validation framework for the segmentation of brain tumors from magnetic resonance (MR) images. A novel unsupervised semiautomatic brain tumor segmentation algorithm is also presented. The proposed framework consists of 1) T1-weighted MR images of patients with brain tumors, 2) segmentation of brain tumors performed by four independent experts, 3) segmentation of brain tumors generated by a semiautomatic algorithm, and 4) a software tool that estimates the performance of segmentation algorithms. We demonstrate the validation of the novel segmentation algorithm within the proposed framework. We show its performance and compare it with existent segmentation. The image datasets and software are available at http://www.brain-tumor-repository.org/. We present an Internet resource that provides access to MR brain tumor image data and segmentation that can be openly used by the research community. Its purpose is to encourage the development and evaluation of segmentation methods by providing raw test and image data, human expert segmentation results, and methods for comparing segmentation results.

  5. A Review of Algorithms for Segmentation of Optical Coherence Tomography from Retina

    PubMed Central

    Kafieh, Raheleh; Rabbani, Hossein; Kermani, Saeed

    2013-01-01

    Optical coherence tomography (OCT) is a recently established imaging technique used to describe different information about the internal structures of an object and to image various aspects of biological tissues. OCT image segmentation has mostly been applied to retinal OCT to localize the intra-retinal boundaries. Here, we review some of the important image segmentation methods for processing retinal OCT images. We classify the OCT segmentation approaches into five distinct groups according to the image domain subjected to the segmentation algorithm. Current research in OCT segmentation mostly focuses on improving accuracy and precision and on reducing the required processing time. There is no doubt that current 3-D imaging modalities are now moving research projects toward volume segmentation along with 3-D rendering and visualization. It is also important to develop robust methods capable of dealing with pathologic cases in OCT imaging. PMID:24083137

  6. Segmentation of deformable organs from medical images using particle swarm optimization and nonlinear shape priors

    NASA Astrophysics Data System (ADS)

    Afifi, Ahmed; Nakaguchi, Toshiya; Tsumura, Norimichi

    2010-03-01

    In many medical applications, the automatic segmentation of deformable organs from medical images is indispensable and its accuracy is of special interest. However, the automatic segmentation of these organs is a challenging task owing to their complex shapes. Moreover, medical images usually contain noise, clutter, or occlusion, and considering the image information alone often leads to poor segmentation. In this paper, we propose a fully automated technique for the segmentation of deformable organs from medical images. In this technique, the segmentation is performed by fitting a nonlinear shape model to pre-segmented images. Kernel principal component analysis (KPCA) is utilized to capture the complex organ deformations and to construct the nonlinear shape model. The pre-segmentation is carried out by labeling each pixel according to its high level texture features extracted using the overcomplete wavelet packet decomposition. Furthermore, to guarantee an accurate fit between the nonlinear model and the pre-segmented images, the particle swarm optimization (PSO) algorithm is employed to adapt the model parameters to the new images. In this paper, we demonstrate the competence of the proposed technique by applying it to liver segmentation from computed tomography (CT) scans of different patients.

  7. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    PubMed Central

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
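
    A single-machine sketch of the decomposition idea: split the image into sub-images (tiles) with a halo of overlapping pixels, process each tile independently, and stitch the results back together. The paper distributes the tiles across computers over the network; here Python's multiprocessing and a trivial placeholder per-tile "segmentation" stand in for that, and the tile and halo sizes are arbitrary assumptions.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def segment_tile(tile):
        # Placeholder per-tile segmentation: a simple mean threshold
        return (tile > tile.mean()).astype(np.uint8)

    def segment_in_tiles(image, tile_size=256, halo=16):
        h, w = image.shape
        out = np.zeros((h, w), dtype=np.uint8)
        jobs, boxes = [], []
        for r in range(0, h, tile_size):
            for c in range(0, w, tile_size):
                r0, c0 = max(r - halo, 0), max(c - halo, 0)
                r1, c1 = min(r + tile_size + halo, h), min(c + tile_size + halo, w)
                jobs.append(image[r0:r1, c0:c1])       # tile plus halo
                boxes.append((r, c, r0, c0))
        with Pool() as pool:                           # one process per tile batch
            results = pool.map(segment_tile, jobs)
        for (r, c, r0, c0), seg in zip(boxes, results):
            inner = seg[r - r0:r - r0 + tile_size, c - c0:c - c0 + tile_size]
            out[r:r + inner.shape[0], c:c + inner.shape[1]] = inner
        return out
    ```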

  8. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    PubMed

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  9. Segmentation and tracking of anticyclonic eddies during a submarine volcanic eruption using ocean colour imagery.

    PubMed

    Marcello, Javier; Eugenio, Francisco; Estrada-Allis, Sheila; Sangrà, Pablo

    2015-04-14

    The eruptive phase of a submarine volcano located 2 km off the southern coast of El Hierro Island started in October 2011. This extraordinary event provoked a dramatic perturbation of the water column. In order to understand and quantify the environmental impacts caused, regular multidisciplinary monitoring was carried out using remote sensing sensors. In this context, we systematically processed every MODIS and MERIS scene and selected high-resolution WorldView-2 imagery to provide information on the concentration of a number of biological, physical and chemical parameters. On the other hand, the eruption provided an exceptional source of tracer that allowed the study of a variety of oceanographic structures. Specifically, the Canary Islands belong to a very active zone of long-lived eddies. Such structures are usually monitored using sea level anomaly fields; however, these products have coarse spatial resolution and are not suitable for submesoscale studies. Thanks to the volcanic tracer, detailed studies were undertaken with ocean colour imagery, making it possible, using the diffuse attenuation coefficient, to monitor the processes of filamentation and axisymmetrization predicted by theoretical studies and numerical modelling. In our work, a novel 2-step segmentation methodology has been developed. The approach incorporates different segmentation algorithms and region growing techniques. In particular, the first step obtains an initial eddy segmentation using thresholding or clustering methods and, next, the fine detail is achieved by iteratively identifying the points to grow and subsequently applying watershed or thresholding strategies. The methodology has demonstrated excellent performance and robustness and has proven to properly capture the eddy and its filaments.
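
    A sketch of a 2-step segmentation in the spirit described above: a coarse threshold-based initial mask followed by a watershed-based refinement of its outline, using scikit-image. The threshold choice (Otsu), erosion depth and the use of a Sobel gradient are illustrative assumptions, not the paper's exact pipeline.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import threshold_otsu, sobel
    from skimage.segmentation import watershed

    def two_step_segmentation(kd_image):
        """kd_image: 2-D field, e.g. the diffuse attenuation coefficient."""
        # Step 1: initial segmentation by global thresholding
        initial = kd_image > threshold_otsu(kd_image)
        # Step 2: refine the boundary with a watershed on the image gradient,
        # seeded by confident foreground/background markers
        markers = np.zeros(kd_image.shape, dtype=int)
        markers[ndi.binary_erosion(initial, iterations=3)] = 2   # sure eddy
        markers[ndi.binary_erosion(~initial, iterations=3)] = 1  # sure background
        refined = watershed(sobel(kd_image), markers)
        return refined == 2

    # mask = two_step_segmentation(kd_field)
    ```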

  10. Performance evaluation of 2D and 3D deep learning approaches for automatic segmentation of multiple organs on CT images

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Yamada, Kazuma; Kojima, Takuya; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi

    2018-02-01

    The purpose of this study is to evaluate and compare the performance of modern deep learning techniques for automatically recognizing and segmenting multiple organ regions on 3D CT images. CT image segmentation is one of the important tasks in medical image analysis and is still very challenging. Deep learning approaches have demonstrated the capability of scene recognition and semantic segmentation on natural images and have been used to address segmentation problems of medical images. Although several works showed promising results of CT image segmentation by using deep learning approaches, there has been no comprehensive evaluation of deep learning performance for segmenting multiple organs on different portions of CT scans. In this paper, we evaluated and compared the segmentation performance of two different deep learning approaches that used 2D and 3D deep convolutional neural networks (CNNs), with and without a pre-processing step. A conventional approach that represents the state-of-the-art performance of CT image segmentation without deep learning was also used for comparison. A dataset that includes 240 CT images scanned on different portions of human bodies was used for performance evaluation. Up to 17 types of organ regions in each CT scan were segmented automatically and compared to the human annotations by using the ratio of intersection over union (IU) as the criterion. The experimental results showed mean IU values of 79% and 67%, averaged over the 17 organ types, for the 3D and 2D deep CNNs, respectively. All the results of the deep learning approaches showed better accuracy and robustness than the conventional segmentation method based on probabilistic atlas and graph-cut methods. The effectiveness and usefulness of deep learning approaches were demonstrated for solving the multiple-organ segmentation problem on 3D CT images.
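
    For reference, the intersection-over-union criterion can be computed per organ as below; the label convention (integer organ IDs in a label map) is an assumption for the example.

        import numpy as np

        def intersection_over_union(pred, truth):
            """IU between a predicted and a reference binary mask for one organ label."""
            pred, truth = pred.astype(bool), truth.astype(bool)
            inter = np.logical_and(pred, truth).sum()
            union = np.logical_or(pred, truth).sum()
            return inter / union if union else 1.0

        def mean_iu(pred_labels, truth_labels, organ_ids):
            """Average IU over a set of organ labels in a multi-label segmentation."""
            return np.mean([intersection_over_union(pred_labels == k, truth_labels == k)
                            for k in organ_ids])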

  11. A fourth order PDE based fuzzy c-means approach for segmentation of microscopic biopsy images in presence of Poisson noise for cancer detection.

    PubMed

    Kumar, Rajesh; Srivastava, Subodh; Srivastava, Rajeev

    2017-07-01

    For cancer detection from microscopic biopsy images, the segmentation of cells and nuclei plays an important role, and the accuracy of the segmentation approach dominates the final results. Microscopic biopsy images also contain intrinsic Poisson noise, and if it is present the segmentation results may not be accurate. The objective is to propose an efficient fuzzy c-means based segmentation approach which can also handle the noise present in the image during the segmentation process itself, i.e. noise removal and segmentation are combined in one step. To address the above issues, this paper proposes a fourth order partial differential equation (FPDE) based nonlinear filter adapted to Poisson noise, combined with a fuzzy c-means segmentation method. This approach effectively handles the segmentation problem of blocky artifacts while achieving a good tradeoff between Poisson noise removal and edge preservation of the microscopic biopsy images during the segmentation process for cancer detection from cells. The proposed approach is tested on a breast cancer microscopic biopsy data set with region of interest (ROI) segmented ground truth images. The data set contains 31 benign and 27 malignant images of size 896 × 768, and the ROI-based ground truth is available for all 58 images. Finally, the results obtained from the proposed approach are compared with those of popular segmentation algorithms: fuzzy c-means, color k-means, texture based segmentation, and total variation fuzzy c-means. The experimental results show that the proposed approach provides better results in terms of various performance measures such as Jaccard coefficient, Dice index, Tanimoto coefficient, area under curve, accuracy, true positive rate, true negative rate, false positive rate, false negative rate, Rand index, global consistency error, and variation of information, as compared to the other segmentation approaches used for cancer detection. Copyright © 2017 Elsevier B.V. All rights reserved.
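
    A bare-bones fuzzy c-means iteration on pixel intensities is sketched below for orientation; it omits the paper's FPDE Poisson-noise filtering entirely, and the cluster count, fuzzifier m and iteration budget are arbitrary choices.

        import numpy as np

        def fuzzy_c_means(values, n_clusters=3, m=2.0, n_iter=100, seed=0):
            """Plain FCM on a 1D array of pixel intensities (no PDE denoising step)."""
            rng = np.random.default_rng(seed)
            x = values.reshape(-1, 1).astype(float)
            u = rng.random((x.shape[0], n_clusters))
            u /= u.sum(axis=1, keepdims=True)                    # fuzzy memberships
            for _ in range(n_iter):
                um = u ** m
                centers = (um.T @ x) / um.sum(axis=0)[:, None]   # membership-weighted centres
                d = np.abs(x - centers.T) + 1e-12                # distance of each pixel to each centre
                u = 1.0 / (d ** (2 / (m - 1)))                   # standard FCM membership update
                u /= u.sum(axis=1, keepdims=True)
            return centers.ravel(), u

        # Segmentation: assign each pixel to the cluster with the highest membership, e.g.
        # labels = fuzzy_c_means(img.ravel())[1].argmax(axis=1).reshape(img.shape)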

  12. Cellular image segmentation using n-agent cooperative game theory

    NASA Astrophysics Data System (ADS)

    Dimock, Ian B.; Wan, Justin W. L.

    2016-03-01

    Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field segmentation are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game-theoretic models which allow each pixel to act as an independent agent with the goal of selecting its best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets, which differ in cell density, cell shape, contrast, and noise levels.

  13. Grey-scale and colour Doppler ultrasound versus magnetic resonance imaging for the prenatal diagnosis of placenta accreta.

    PubMed

    Rezk, Mohamed Abd-Allah; Shawky, Mohamed

    2016-01-01

    To assess the effectiveness of grey-scale and colour Doppler ultrasound (US) versus magnetic resonance imaging (MRI) for the prenatal diagnosis of placenta accreta. A prospective observational study including a total of 74 patients with placenta previa and a previous uterine scar (n = 74). Grey-scale and colour Doppler US was performed, followed by MRI, by different observers to diagnose adherent placenta. The test validity of US and MRI was calculated. Maternal morbidity and mortality were also assessed. A total of 53 patients were confirmed to have placenta accreta at operation. The overall sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of US were 94.34%, 91.67%, 96.15% and 88%, compared to 96.08%, 87.50%, 94.23% and 91.3% for MRI, respectively. The most relevant US sign was turbulent blood flow on colour Doppler, while a dark intra-placental band was the most sensitive MRI sign. Maternal morbidity included venous thromboembolism (1.3%), bladder injury (29.7%), ureteric injury (18.9%), postoperative fever (10.8%), admission to ICU (50%) and re-operation (31.1%). Placenta accreta can be successfully diagnosed by grey-scale and colour Doppler US. MRI would be suggested mainly for posteriorly or laterally situated placenta previa in order to exclude placental invasion.
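
    The reported test-validity figures are the standard ratios from a 2x2 diagnostic table; a small helper is sketched below, with made-up counts in the usage comment since the abstract reports only the derived percentages.

        def test_validity(tp, fp, tn, fn):
            """Sensitivity, specificity, PPV and NPV from true/false positive and negative counts."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        # Example with hypothetical counts (not taken from the study):
        # test_validity(tp=50, fp=2, tn=19, fn=3)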

  14. Experimental evidence that primate trichromacy is well suited for detecting primate social colour signals.

    PubMed

    Hiramatsu, Chihiro; Melin, Amanda D; Allen, William L; Dubuc, Constance; Higham, James P

    2017-06-14

    Primate trichromatic colour vision has been hypothesized to be well tuned for detecting variation in facial coloration, which could be due to selection on either signal wavelengths or the sensitivities of the photoreceptors themselves. We provide one of the first empirical tests of this idea by asking whether, when compared with other visual systems, the information obtained through primate trichromatic vision confers an improved ability to detect the changes in facial colour that female macaque monkeys exhibit when they are proceptive. We presented pairs of digital images of faces of the same monkey to human observers and asked them to select the proceptive face. We tested images that simulated what would be seen by common catarrhine trichromatic vision, two additional trichromatic conditions and three dichromatic conditions. Performance under conditions of common catarrhine trichromacy, and trichromacy with narrowly separated LM cone pigments (common in female platyrrhines), was better than for evenly spaced trichromacy or for any of the dichromatic conditions. These results suggest that primate trichromatic colour vision confers excellent ability to detect meaningful variation in primate face colour. This is consistent with the hypothesis that social information detection has acted on either primate signal spectral reflectance or photoreceptor spectral tuning, or both. © 2017 The Authors.

  15. Experimental evidence that primate trichromacy is well suited for detecting primate social colour signals

    PubMed Central

    Higham, James P.

    2017-01-01

    Primate trichromatic colour vision has been hypothesized to be well tuned for detecting variation in facial coloration, which could be due to selection on either signal wavelengths or the sensitivities of the photoreceptors themselves. We provide one of the first empirical tests of this idea by asking whether, when compared with other visual systems, the information obtained through primate trichromatic vision confers an improved ability to detect the changes in facial colour that female macaque monkeys exhibit when they are proceptive. We presented pairs of digital images of faces of the same monkey to human observers and asked them to select the proceptive face. We tested images that simulated what would be seen by common catarrhine trichromatic vision, two additional trichromatic conditions and three dichromatic conditions. Performance under conditions of common catarrhine trichromacy, and trichromacy with narrowly separated LM cone pigments (common in female platyrrhines), was better than for evenly spaced trichromacy or for any of the dichromatic conditions. These results suggest that primate trichromatic colour vision confers excellent ability to detect meaningful variation in primate face colour. This is consistent with the hypothesis that social information detection has acted on either primate signal spectral reflectance or photoreceptor spectral tuning, or both. PMID:28615496

  16. Patient-specific semi-supervised learning for postoperative brain tumor segmentation.

    PubMed

    Meier, Raphael; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2014-01-01

    In contrast to preoperative brain tumor segmentation, the problem of postoperative brain tumor segmentation has rarely been approached so far. We present a fully-automatic segmentation method using multimodal magnetic resonance image data and patient-specific semi-supervised learning. The idea behind our semi-supervised approach is to effectively fuse information from both pre- and postoperative image data of the same patient to improve segmentation of the postoperative image. We pose image segmentation as a classification problem and solve it by adopting a semi-supervised decision forest. The method is evaluated on a cohort of 10 high-grade glioma patients, with segmentation performance and computation time comparable to or better than a state-of-the-art brain tumor segmentation method. Moreover, our results confirm that the inclusion of preoperative MR images leads to better performance in postoperative brain tumor segmentation.

  17. Novel active contour model based on multi-variate local Gaussian distribution for local segmentation of MR brain images

    NASA Astrophysics Data System (ADS)

    Zheng, Qiang; Li, Honglun; Fan, Baode; Wu, Shuanhu; Xu, Jindong

    2017-12-01

    The active contour model (ACM) has been one of the most widely utilized methods in magnetic resonance (MR) brain image segmentation because of its ability to capture topology changes. However, most existing ACMs consider only single-slice information in MR brain image data, i.e., the information used in ACM-based segmentation is extracted from only one slice of the MR brain image; this cannot take full advantage of the information in adjacent slices and cannot satisfy the local segmentation of MR brain images. In this paper, a novel ACM is proposed to solve the problem discussed above, which is based on a multi-variate local Gaussian distribution and combines information from adjacent slices in the MR brain image data to perform the segmentation. The segmentation is finally achieved through maximum likelihood estimation. Experiments demonstrate the advantages of the proposed ACM over the single-slice ACM in local segmentation of MR brain image series.

  18. Efficient threshold for volumetric segmentation

    NASA Astrophysics Data System (ADS)

    Burdescu, Dumitru D.; Brezovan, Marius; Stanescu, Liana; Stoica Spahiu, Cosmin; Ebanca, Daniel

    2015-07-01

    Image segmentation plays a crucial role in effective understanding of digital images. However, research on the existence of a general purpose segmentation algorithm that suits a variety of applications is still very active. Among the many approaches to image segmentation, the graph based approach is gaining popularity primarily due to its ability to reflect global image properties. Volumetric image segmentation can simply result in an image partition composed of relevant regions, but the most fundamental challenge in a segmentation algorithm is to precisely define the volumetric extent of some object, which may be represented by the union of multiple regions. The aim of this paper is to present a new method to detect visual objects from color volumetric images using an efficient threshold. We present a unified framework for volumetric image segmentation and contour extraction that uses a virtual tree-hexagonal structure defined on the set of the image voxels. The advantage of using a virtual tree-hexagonal network superposed over the initial image voxels is that it reduces the execution time and the memory space used, without losing the initial resolution of the image.

  19. Performance evaluation of image segmentation algorithms on microscopic image data.

    PubMed

    Beneš, Miroslav; Zitová, Barbara

    2015-01-01

    In our paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. In spite of the existence of many algorithms for image data partitioning, there is still no universal 'best' method. Moreover, images of microscopic samples can be of various character and quality, which can negatively influence the performance of image segmentation algorithms. Thus, the issue of selecting a suitable method for a given set of image data is of great interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on the testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. In the end, the benefit of the segmentation combination approach is studied and the applicability of the achieved results to another representative of the microscopic data category - biological samples - is shown. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.

  20. A Segmentation Method for Lung Parenchyma Image Sequences Based on Superpixels and a Self-Generating Neural Forest

    PubMed Central

    Liao, Xiaolei; Zhao, Juanjuan; Jiao, Cheng; Lei, Lei; Qiang, Yan; Cui, Qiang

    2016-01-01

    Background: Lung parenchyma segmentation is often performed as an important pre-processing step in the computer-aided diagnosis of lung nodules based on CT image sequences. However, existing lung parenchyma image segmentation methods cannot fully segment all lung parenchyma images and have a slow processing speed, particularly for images in the top and bottom of the lung and the images that contain lung nodules. Method: Our proposed method first uses the position of the lung parenchyma image features to obtain lung parenchyma ROI image sequences. A gradient and sequential linear iterative clustering algorithm (GSLIC) for sequence image segmentation is then proposed to segment the ROI image sequences and obtain superpixel samples. The SGNF, which is optimized by a genetic algorithm (GA), is then utilized for superpixel clustering. Finally, the grey and geometric features of the superpixel samples are used to identify and segment all of the lung parenchyma image sequences. Results: Our proposed method achieves higher segmentation precision and greater accuracy in less time. It has an average processing time of 42.21 seconds for each dataset and an average volume pixel overlap ratio of 92.22 ± 4.02% for four types of lung parenchyma image sequences. PMID:27532214
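
    A simplified stand-in for the superpixel-then-cluster stage, assuming scikit-image's SLIC and a plain k-means in place of the paper's GSLIC and GA-optimized self-generating neural forest:

        import numpy as np
        from skimage.segmentation import slic
        from sklearn.cluster import KMeans

        def superpixel_then_cluster(image_2d, n_segments=400, n_classes=3):
            """Generate superpixels on a greyscale slice, then cluster their mean intensities."""
            sp = slic(image_2d, n_segments=n_segments, compactness=0.1, channel_axis=None)
            ids = np.unique(sp)
            feats = np.array([[image_2d[sp == i].mean()] for i in ids])   # grey feature per superpixel
            classes = KMeans(n_clusters=n_classes, n_init=10).fit_predict(feats)
            out = np.zeros_like(sp)
            for i, c in zip(ids, classes):
                out[sp == i] = c                                          # paint the class back onto pixels
            return out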

  1. A kind of color image segmentation algorithm based on super-pixel and PCNN

    NASA Astrophysics Data System (ADS)

    Xu, GuangZhu; Wang, YaWen; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Image segmentation is a very important step in low-level visual computing. Although image segmentation has been studied for many years, many problems remain. The PCNN (Pulse Coupled Neural Network) has a biological background; when applied to image segmentation it can be viewed as a region-based method, but due to the dynamic properties of the PCNN many unconnected neurons pulse at the same time, so it is necessary to identify different regions for further processing. The existing PCNN image segmentation algorithm based on region growing is used for grayscale image segmentation and cannot be directly used for color image segmentation. In addition, super-pixels better preserve the edges of images and, at the same time, reduce the influence of individual differences between pixels on image segmentation. Therefore, this paper improves the original region-growing PCNN algorithm on the basis of super-pixels. First, the color super-pixel image is transformed into a grayscale super-pixel image, which is used to seek seeds among the neurons that have not yet fired. Then, whether to stop growing is determined by comparing the average of each color channel of all the pixels in the corresponding regions of the color super-pixel image. Experiment results show that the proposed algorithm for color image segmentation is fast and effective and achieves reasonable accuracy.

  2. Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation

    NASA Astrophysics Data System (ADS)

    Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin

    2018-04-01

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.

  3. Building Roof Segmentation from Aerial Images Using a Line-and Region-Based Watershed Segmentation Technique

    PubMed Central

    Merabet, Youssef El; Meurie, Cyril; Ruichek, Yassine; Sbihi, Abderrahmane; Touahni, Raja

    2015-01-01

    In this paper, we present a novel strategy for roof segmentation from aerial images (orthophotoplans) based on the cooperation of edge- and region-based segmentation methods. The proposed strategy is composed of three major steps. The first one, called the pre-processing step, consists of simplifying the acquired image with an appropriate couple of invariant and gradient, optimized for the application, in order to limit illumination changes (shadows, brightness, etc.) affecting the images. The second step is composed of two main parallel treatments: on the one hand, the simplified image is segmented by watershed regions. Even if the first segmentation of this step provides good results in general, the image is often over-segmented. To alleviate this problem, an efficient region merging strategy adapted to the orthophotoplan particularities, with a 2D modeling of roof ridges technique, is applied. On the other hand, the simplified image is segmented by watershed lines. The third step consists of integrating both watershed segmentation strategies into a single cooperative segmentation scheme in order to achieve satisfactory segmentation results. Tests have been performed on orthophotoplans containing 100 roofs with varying complexity, and the results are evaluated with the VINET criterion using ground-truth image segmentation. A comparison with five popular segmentation techniques of the literature demonstrates the effectiveness and the reliability of the proposed approach. Indeed, we obtain a good segmentation rate of 96% with the proposed method compared to 87.5% with statistical region merging (SRM), 84% with mean shift, 82% with color structure code (CSC), 80% with efficient graph-based segmentation algorithm (EGBIS) and 71% with JSEG. PMID:25648706

  4. Study on the application of MRF and the D-S theory to image segmentation of the human brain and quantitative analysis of the brain tissue

    NASA Astrophysics Data System (ADS)

    Guan, Yihong; Luo, Yatao; Yang, Tao; Qiu, Lei; Li, Junchang

    2012-01-01

    The spatial information features of the Markov random field were used in image segmentation; they can effectively remove noise and produce more accurate segmentation results. Based on the fuzziness and clustering of pixel grayscale information, we find the clustering centres of the different tissues and the background of the medical image using the fuzzy c-means clustering method. Then we find each threshold point for multi-threshold segmentation using a two-dimensional histogram method and segment the image accordingly. Multivariate information is fused based on the Dempster-Shafer evidence theory to obtain image fusion and segmentation. This paper adopts the above three theories to propose a new human brain image segmentation method. Experimental results show that the segmentation result is more in line with human vision and is of vital significance for the accurate analysis and application of brain tissue.

  5. Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.

    PubMed

    Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C

    2009-09-01

    A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences; however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of the high-resolution image, such as the T1-weighted image, can be performed prior to the segmentation, the results are usually limited by partial volume effects due to interpolation of low-resolution images. To improve the quality of tumor segmentation in clinical applications where low-resolution sequences are commonly used together with high-resolution images, we propose an algorithm based on the Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumor using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that more accurate tumor segmentation results can be obtained compared with conventional multi-channel segmentation algorithms.

  6. Minor Planet Science with the VISTA Hemisphere Survey

    NASA Astrophysics Data System (ADS)

    Popescu, M.; Licandro, J.; Morate, D.; de León, J.; Nedelcu, D. A.

    2017-03-01

    We have carried out a serendipitous search for Solar System objects imaged by the VISTA Hemisphere Survey (VHS) and have identified 230 375 valid detections for 39 947 objects. This information is available in three catalogues, entitled MOVIS. The distributions of the data in colour-colour plots show clusters identified with the different taxonomic asteroid types. Diagrams that use (Y-J) colour separate the spectral classes more effectively than any other method based on colours. In particular, the end-class members A-, D-, R-, and V-types occupy well-defined regions and can be easily identified. About 10 000 asteroids were classified taxonomically using a probabilistic approach. The distribution of basaltic asteroids across the Main Belt was characterised using the MOVIS colours: 477 V-type candidates were found, of which 244 are outside the Vesta dynamical family.

  7. Perception of Safety and Liking Associated to the Colour Intervention of Bike Lanes: Contribution from the Behavioural Sciences to Urban Design and Wellbeing

    PubMed Central

    Vera-Villarroel, Pablo; Contreras, Daniela; Lillo, Sebastián; Segovia, Ariel; Rojo, Natalia; Moreno, Sandra; Oyarzo, Francisco

    2016-01-01

    The perception of colour and its subjective effects are key issues in designing safe and enjoyable bike lanes. This paper addresses the relationship between the colours of bike lane interventions—in particular pavement painting and intersection design—and the subjective evaluation of liking, visual saliency, and perceived safety related to such an intervention. Utilising images of three real bike lane intersections modified by software to change their colour (five in total), this study recruited 538 participants to assess their perception of all fifteen colour-design combinations. A multivariate analysis of covariance (MANCOVA) with the Bonferroni post hoc test was performed to assess the effect of the main conditions (colour and design) on the dependent variables (liking towards the intervention, level of visual saliency of the intersection, and perceived safety of the bike lane). The results showed that the colour red was most positively associated with the outcome variables, followed by yellow and blue. Additionally, it was observed that the effect of colour widely outweighs the effect of design, suggesting that the right choice and use of colour would increase the effectiveness of bike-lane pavement interventions. Limitations and future directions are discussed. PMID:27548562

  8. Towards Automatic Image Segmentation Using Optimised Region Growing Technique

    NASA Astrophysics Data System (ADS)

    Alazab, Mamoun; Islam, Mofakharul; Venkatraman, Sitalakshmi

    Image analysis is being adopted extensively in many applications such as digital forensics, medical treatment, industrial inspection, etc., primarily for diagnostic purposes. Hence, there is a growing interest among researchers in developing new segmentation techniques to aid the diagnosis process. Manual segmentation of images is labour intensive, extremely time consuming and prone to human errors, and hence an automated real-time technique is warranted in such applications. There is no universally applicable automated segmentation technique that will work for all images, as image segmentation is quite complex and unique to the domain application. Hence, to fill the gap, this paper presents an efficient segmentation algorithm that can segment a digital image of interest into a more meaningful arrangement of regions and objects. Our algorithm combines a region growing approach with optimised elimination of false boundaries to arrive at more meaningful segments automatically. We demonstrate this using X-ray teeth images that were taken for real-life dental diagnosis.
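
    A minimal seeded region-growing step in the spirit described here, assuming scikit-image's tolerance-based flood fill and a morphological opening as a crude substitute for the paper's optimised false-boundary elimination; the seed coordinates and tolerance are hypothetical.

        from skimage.segmentation import flood
        from skimage.morphology import binary_opening

        def grow_region(image, seed, tolerance=0.08):
            """Grow a region from one seed pixel by intensity tolerance, then lightly
            clean it up so that thin spurious boundaries are removed."""
            mask = flood(image, seed_point=seed, tolerance=tolerance)
            return binary_opening(mask)

        # Example: grow from a manually chosen seed inside a tooth region (coordinates hypothetical).
        # region = grow_region(xray.astype(float) / 255.0, seed=(210, 340))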

  9. A NDVI assisted remote sensing image adaptive scale segmentation method

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation of images can effectively form boundaries of different objects at different scales. However, for remote sensing images with wide coverage and complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing images. A great many experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For different regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results show that the adaptive segmentation method based on NDVI can effectively create object boundaries for different ground objects in remote sensing images.
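
    The NDVI itself and a simple NDVI-similarity mask are easy to sketch; the similarity threshold below is an arbitrary stand-in for the paper's iterative scale-selection criterion.

        import numpy as np

        def ndvi(nir, red, eps=1e-6):
            """Normalised difference vegetation index, NDVI = (NIR - Red) / (NIR + Red)."""
            nir = nir.astype(float)
            red = red.astype(float)
            return (nir - red) / (nir + red + eps)

        def similar_to_seed(ndvi_map, seed, threshold=0.05):
            """Binary mask of pixels whose NDVI is within `threshold` of the seed pixel's NDVI,
            a crude stand-in for an NDVI-similarity criterion."""
            return np.abs(ndvi_map - ndvi_map[seed]) < threshold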

  10. Testosterone-Induced Expression of Male Colour Morphs in Females of the Polymorphic Tawny Dragon Lizard, Ctenophorus decresii.

    PubMed

    Rankin, Katrina; Stuart-Fox, Devi

    2015-01-01

    Many colour polymorphisms are present only in one sex, usually males, but proximate mechanisms controlling the expression of sex-limited colour polymorphisms have received little attention. Here, we test the hypothesis that artificial elevation of testosterone in females of the colour polymorphic tawny dragon lizard, Ctenophorus decresii, can induce them to express the same colour morphs, in similar frequencies, as those found in males. Male C. decresii express four discrete throat colour morphs (orange, yellow, grey and an orange central patch surrounded by yellow). We used silastic implants to experimentally elevate testosterone levels in mature females to induce colour expression. Testosterone elevation resulted in a substantial increase in the proportion and intensity of orange but not yellow colouration, which was present in a subset of females prior to treatment. Consequently, females exhibited the same set of colour morphs as males, and we confirmed that these morphs are objectively classifiable, by using digital image analyses and spectral reflectance measurements, and occur in similar frequencies as in males. These results indicate that the influence of testosterone differs for different colours, suggesting that their expression may be governed by different proximate hormonal mechanisms. Thus, caution must be exercised when using artificial testosterone manipulation to induce female expression of sex-limited colour polymorphisms. Nevertheless, the ability to express sex-limited colours (in this case orange) to reveal the same, objectively classifiable morphs in similar frequencies to males suggests autosomal rather than sex-linked inheritance, and can facilitate further research on the genetic basis of colour polymorphism, including estimating heritability and selection on colour morphs from pedigree data.

  11. Testosterone-Induced Expression of Male Colour Morphs in Females of the Polymorphic Tawny Dragon Lizard, Ctenophorus decresii

    PubMed Central

    Rankin, Katrina; Stuart-Fox, Devi

    2015-01-01

    Many colour polymorphisms are present only in one sex, usually males, but proximate mechanisms controlling the expression of sex-limited colour polymorphisms have received little attention. Here, we test the hypothesis that artificial elevation of testosterone in females of the colour polymorphic tawny dragon lizard, Ctenophorus decresii, can induce them to express the same colour morphs, in similar frequencies, as those found in males. Male C. decresii express four discrete throat colour morphs (orange, yellow, grey and an orange central patch surrounded by yellow). We used silastic implants to experimentally elevate testosterone levels in mature females to induce colour expression. Testosterone elevation resulted in a substantial increase in the proportion and intensity of orange but not yellow colouration, which was present in a subset of females prior to treatment. Consequently, females exhibited the same set of colour morphs as males, and we confirmed that these morphs are objectively classifiable, by using digital image analyses and spectral reflectance measurements, and occur in similar frequencies as in males. These results indicate that the influence of testosterone differs for different colours, suggesting that their expression may be governed by different proximate hormonal mechanisms. Thus, caution must be exercised when using artificial testosterone manipulation to induce female expression of sex-limited colour polymorphisms. Nevertheless, the ability to express sex-limited colours (in this case orange) to reveal the same, objectively classifiable morphs in similar frequencies to males suggests autosomal rather than sex-linked inheritance, and can facilitate further research on the genetic basis of colour polymorphism, including estimating heritability and selection on colour morphs from pedigree data. PMID:26485705

  12. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms are playing an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence, including the fish swarm algorithm, artificial bee colony, bacteria foraging algorithm and particle swarm optimization. Then some image benchmarks are tested in order to show the differences in segmentation accuracy, time consumption, convergence and robustness to Salt & Pepper noise and Gaussian noise of these four algorithms. Through these comparisons, this paper gives a qualitative analysis of the performance variance of the four algorithms. The conclusions in this paper provide significant guidance for practical image segmentation.

  13. Digital Image Sensor-Based Assessment of the Status of Oat (Avena sativa L.) Crops after Frost Damage

    PubMed Central

    Macedo-Cruz, Antonia; Pajares, Gonzalo; Santos, Matilde; Villegas-Romero, Isidro

    2011-01-01

    The aim of this paper is to classify the land covered with oat crops, and the quantification of frost damage on oats, while plants are still in the flowering stage. The images are taken by a digital colour camera CCD-based sensor. Unsupervised classification methods are applied because the plants present different spectral signatures, depending on two main factors: illumination and the affected state. The colour space used in this application is CIELab, based on the decomposition of the colour in three channels, because it is the closest to human colour perception. The histogram of each channel is successively split into regions by thresholding. The best threshold to be applied is automatically obtained as a combination of three thresholding strategies: (a) Otsu’s method, (b) Isodata algorithm, and (c) Fuzzy thresholding. The fusion of these automatic thresholding techniques and the design of the classification strategy are some of the main findings of the paper, which allows an estimation of the damages and a prediction of the oat production. PMID:22163940
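
    A reduced sketch of the threshold-fusion idea, assuming scikit-image's Otsu and Isodata implementations and simply averaging them; the fuzzy thresholding component and the CIELab channel splitting and classification logic of the paper are omitted.

        import numpy as np
        from skimage.filters import threshold_otsu, threshold_isodata

        def fused_threshold(channel):
            """Average of two automatic thresholds (Otsu and Isodata) for one colour channel,
            returned as a binary mask of the channel."""
            t = np.mean([threshold_otsu(channel), threshold_isodata(channel)])
            return channel > t

        # Applied channel-wise after converting the RGB image to CIELab, e.g. with
        # skimage.color.rgb2lab, and splitting the L, a and b channels.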

  14. Digital image sensor-based assessment of the status of oat (Avena sativa L.) crops after frost damage.

    PubMed

    Macedo-Cruz, Antonia; Pajares, Gonzalo; Santos, Matilde; Villegas-Romero, Isidro

    2011-01-01

    The aim of this paper is to classify the land covered with oat crops, and the quantification of frost damage on oats, while plants are still in the flowering stage. The images are taken by a digital colour camera CCD-based sensor. Unsupervised classification methods are applied because the plants present different spectral signatures, depending on two main factors: illumination and the affected state. The colour space used in this application is CIELab, based on the decomposition of the colour in three channels, because it is the closest to human colour perception. The histogram of each channel is successively split into regions by thresholding. The best threshold to be applied is automatically obtained as a combination of three thresholding strategies: (a) Otsu's method, (b) Isodata algorithm, and (c) Fuzzy thresholding. The fusion of these automatic thresholding techniques and the design of the classification strategy are some of the main findings of the paper, which allows an estimation of the damages and a prediction of the oat production.

  15. MRI Segmentation of the Human Brain: Challenges, Methods, and Applications

    PubMed Central

    Despotović, Ivana

    2015-01-01

    Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation. PMID:25945121

  16. The effect of warp tension on the colour of jacquard fabric made with different weaves structures

    NASA Astrophysics Data System (ADS)

    Karnoub, A.; Kadi, N.; Holmudd, O.; Peterson, J.; Skrifvars, M.

    2017-10-01

    The aim of this paper is to demonstrate the effect of warp tension on fabric colour for several types of weave structure, and to find a relationship between them. An image analysis technique was used to determine the proportion of yarn colour appearance; the advantages of this technique are its rapidity and reliability. The woven fabric samples consist of a continuous-filament polyester warp yarn with a density of 33 ends/cm and a polypropylene weft yarn with a density of 24 picks/cm, and the warp tension ranged between 12 and 22 cN/tex. The experimental results demonstrated the effect of warp tension on the colour of the fabric; this effect is related to several factors, where a larger proportion of warp appearance leads to a larger effect on fabric colour. The colour difference ΔEcmc is largest in the 16 to 20 cN/tex range of warp tension. Using statistical methods, a mathematical model to calculate the colour difference ΔEcmc caused by the change in warp tension was proposed.
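
    For orientation, a generic ΔEcmc computation between two registered fabric images can be done with scikit-image as below; this is not the paper's measurement protocol, and the inputs are assumed to be RGB arrays of equal shape scaled to [0, 1].

        import numpy as np
        from skimage import color

        def mean_delta_e_cmc(rgb_a, rgb_b):
            """Mean CMC colour difference between two images of the same size,
            computed in CIELab space."""
            lab_a = color.rgb2lab(rgb_a)
            lab_b = color.rgb2lab(rgb_b)
            return float(np.mean(color.deltaE_cmc(lab_a, lab_b)))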

  17. Corpus callosum segmentation using deep neural networks with prior information from multi-atlas images

    NASA Astrophysics Data System (ADS)

    Park, Gilsoon; Hong, Jinwoo; Lee, Jong-Min

    2018-03-01

    In the human brain, the Corpus Callosum (CC) is the largest white matter structure, connecting the right and left hemispheres. Structural features such as the shape and size of the CC in the midsagittal plane are of great significance for analyzing various neurological diseases, for example Alzheimer's disease, autism and epilepsy. For quantitative and qualitative studies of the CC in brain MR images, robust segmentation of the CC is important. In this paper, we present a novel method for CC segmentation. Our approach is based on deep neural networks and prior information generated from multi-atlas images. Deep neural networks have recently shown good performance in various image processing fields, and convolutional neural networks (CNNs) have shown outstanding performance for classification and segmentation in medical imaging. We used convolutional neural networks for CC segmentation. Multi-atlas based segmentation models have been widely used in medical image segmentation because an atlas, consisting of MR images and corresponding manual segmentations of the target structure, carries powerful information about the structure we want to segment. We combined prior information, such as the location and intensity distribution of the target structure (i.e. the CC), derived from multi-atlas images into the CNN training process to further improve training. The CNN with prior information showed better segmentation performance than the CNN without it.

  18. Globally optimal tumor segmentation in PET-CT images: a graph-based co-segmentation method.

    PubMed

    Han, Dongfeng; Bayouth, John; Song, Qi; Taurani, Aakant; Sonka, Milan; Buatti, John; Wu, Xiaodong

    2011-01-01

    Tumor segmentation in PET and CT images is notoriously challenging due to the low spatial resolution in PET and low contrast in CT images. In this paper, we have proposed a general framework to use both PET and CT images simultaneously for tumor segmentation. Our method utilizes the strength of each imaging modality: the superior contrast of PET and the superior spatial resolution of CT. We formulate this problem as a Markov Random Field (MRF) based segmentation of the image pair with a regularized term that penalizes the segmentation difference between PET and CT. Our method simulates the clinical practice of delineating tumor simultaneously using both PET and CT, and is able to concurrently segment tumor from both modalities, achieving globally optimal solutions in low-order polynomial time by a single maximum flow computation. The method was evaluated on clinically relevant tumor segmentation problems. The results showed that our method can effectively make use of both PET and CT image information, yielding a segmentation accuracy of 0.85 in Dice similarity coefficient and an average median Hausdorff distance (HD) of 6.4 mm, which is a 10% (resp., 16%) improvement compared to the graph cuts method solely using the PET (resp., CT) images.

  19. Method to acquire regions of fruit, branch and leaf from image of red apple in orchard

    NASA Astrophysics Data System (ADS)

    Lv, Jidong; Xu, Liming

    2017-07-01

    This work proposes a method to acquire the regions of fruit, branch and leaf from images of red apples in an orchard. To acquire the fruit image, the R-G image was extracted from the RGB image for erosion, hole filling, subregion removal, dilation and opening operations, in that order; finally, the fruit image was acquired by threshold segmentation. To acquire the leaf image, the fruit image was subtracted from the RGB image before extracting the 2G-R-B image; the leaf image was then acquired by subregion removal and threshold segmentation. To acquire the branch image, dynamic threshold segmentation was conducted on the R-G image; the segmented image was then added to the fruit image to obtain an added-fruit image, which, together with the leaf image, was subtracted from the RGB image. Finally, the branch image was acquired by an opening operation, subregion removal and threshold segmentation after extracting the R-G image from the subtracted image. Compared with previous methods, more complete images of the fruit, leaf and branch can be acquired from red apple images with this method.
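
    A compressed sketch of the colour-index step only, assuming Otsu thresholds in place of the fixed and dynamic thresholds described above; the morphological clean-up chain is omitted.

        import numpy as np
        from skimage.filters import threshold_otsu

        def colour_index_masks(rgb):
            """Rough fruit and leaf masks from the colour indices used here
            (R-G for red fruit, 2G-R-B for green leaves)."""
            r, g, b = [rgb[..., i].astype(float) for i in range(3)]
            r_minus_g = r - g
            exg = 2 * g - r - b
            fruit = r_minus_g > threshold_otsu(r_minus_g)
            leaf = exg > threshold_otsu(exg)
            leaf &= ~fruit          # remove fruit pixels before keeping leaf candidates
            return fruit, leaf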

  20. How Bees Discriminate a Pattern of Two Colours from Its Mirror Image

    PubMed Central

    Horridge, Adrian

    2015-01-01

    A century ago, in his study of colour vision in the honeybee (Apis mellifera), Karl von Frisch showed that bees distinguish between a disc that is half yellow, half blue, and a mirror image of the same. Although his inference of colour vision in this example has been accepted, some discrepancies have prompted a new investigation of the detection of polarity in coloured patterns. In new experiments, bees restricted to their blue and green receptors by exclusion of ultraviolet could learn patterns of this type if they displayed a difference in green contrast between the two colours. Patterns with no green contrast required an additional vertical black line as a landmark. Tests of the trained bees revealed that they had learned two inputs; a measure and the retinotopic position of blue with large field tonic detectors, and the measure and position of a vertical edge or line with small-field phasic green detectors. The angle between these two was measured. This simple combination was detected wherever it occurred in many patterns, fitting the definition of an algorithm, which is defined as a method of processing data. As long as they excited blue receptors, colours could be any colour to human eyes, even white. The blue area cue could be separated from the green receptor modulation by as much as 50°. When some blue content was not available, the bees learned two measures of the modulation of the green receptors at widely separated vertical edges, and the angle between them. There was no evidence that the bees reconstructed the lay-out of the pattern or detected a tonic input to the green receptors. PMID:25617892

  1. Equal Insistence of Proportion of Colour on a 2D Surface

    NASA Astrophysics Data System (ADS)

    Staig-Graham, B. N.

    2006-06-01

    Katz conducted experiments on Insistence and Equal Insistence, using an episcotister and chromatic and achromatic papers which he viewed under different intensities of a light source and chromatic illumination. His principle of Equal Insistence, combined with Goethe's reputed proportions of surface colours according to their luminosity and Strzeminski's concept of Unism in painting, inspires the author's current painting practice. However, a whole new route of research has been opened by the introduction of Time as a phenomenon of Equal Insistence and Image Perception Fading, under controlled conditions of observer movement at different distances, viewing angles, and illumination. Visual knowledge of Equal Insistence indicates, so far, several apparent changes to the properties of surface colours, and its actual effect upon the shape and size of paintings and symbolism. Typical of the investigation are the achromatic images of an elephant and a mouse.

  2. Evaluation of segmentation algorithms for optical coherence tomography images of ovarian tissue

    NASA Astrophysics Data System (ADS)

    Sawyer, Travis W.; Rice, Photini F. S.; Sawyer, David M.; Koevary, Jennifer W.; Barton, Jennifer K.

    2018-02-01

    Ovarian cancer has the lowest survival rate among all gynecologic cancers due to predominantly late diagnosis. Early detection of ovarian cancer can increase 5-year survival rates from 40% up to 92%, yet no reliable early detection techniques exist. Optical coherence tomography (OCT) is an emerging technique that provides depth-resolved, high-resolution images of biological tissue in real time and demonstrates great potential for imaging of ovarian tissue. Mouse models are crucial to quantitatively assess the diagnostic potential of OCT for ovarian cancer imaging; however, due to small organ size, the ovaries must first be separated from the image background using the process of segmentation. Manual segmentation is time-intensive, as OCT yields three-dimensional data. Furthermore, speckle noise complicates OCT images, frustrating many processing techniques. While much work has investigated noise-reduction and automated segmentation for retinal OCT imaging, little has considered the application to the ovaries, which exhibit higher variance and inhomogeneity than the retina. To address these challenges, we evaluated a set of algorithms to segment OCT images of mouse ovaries. We examined five preprocessing techniques and six segmentation algorithms. While all pre-processing methods improve segmentation, Gaussian filtering is most effective, showing an improvement of 32% +/- 1.2%. Of the segmentation algorithms, active contours performs best, segmenting with an accuracy of 0.948 +/- 0.012 compared with manual segmentation (1.0 being identical). Nonetheless, further optimization could lead to maximizing the performance for segmenting OCT images of the ovaries.
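
    A minimal version of the best-performing combination reported here (Gaussian pre-filtering followed by an active-contour segmentation), assuming scikit-image's morphological Chan-Vese as the active-contour implementation; sigma and the iteration count are illustrative, not the tuned values from the study.

        from skimage.filters import gaussian
        from skimage.segmentation import morphological_chan_vese

        def segment_ovary_slice(oct_slice, sigma=2, iterations=100):
            """Speckle suppression with a Gaussian filter, then a morphological
            Chan-Vese active contour returning a binary mask of the slice."""
            smoothed = gaussian(oct_slice, sigma=sigma)
            return morphological_chan_vese(smoothed, iterations)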

  3. An Interactive Image Segmentation Method in Hand Gesture Recognition

    PubMed Central

    Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. The Gaussian Mixture Model was employed for image modelling, and iteration of the Expectation Maximization algorithm learns the parameters of the Gaussian Mixture Model. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating the region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and the sparse representation algorithm is used, showing that the segmentation of hand gesture images helps to improve the recognition accuracy. PMID:28134818
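
    A small sketch of the GMM colour-modelling stage, assuming scikit-learn's GaussianMixture and user scribble masks as input; the resulting probability map would typically feed the unary terms of the Gibbs energy that the min-cut step minimises (that graph-cut step is not shown).

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def gmm_foreground_probability(rgb, fg_scribble, bg_scribble, n_components=5):
            """Fit one colour GMM to user-marked foreground pixels and one to background
            pixels, then return a per-pixel foreground probability map."""
            pixels = rgb.reshape(-1, 3).astype(float)
            fg = GaussianMixture(n_components).fit(rgb[fg_scribble].astype(float))
            bg = GaussianMixture(n_components).fit(rgb[bg_scribble].astype(float))
            log_fg = fg.score_samples(pixels)
            log_bg = bg.score_samples(pixels)
            prob = 1.0 / (1.0 + np.exp(log_bg - log_fg))   # softmax over the two models
            return prob.reshape(rgb.shape[:2])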

  4. Application of an enhanced fuzzy algorithm for MR brain tumor image segmentation

    NASA Astrophysics Data System (ADS)

    Hemanth, D. Jude; Vijila, C. Kezi Selva; Anitha, J.

    2010-02-01

    Image segmentation is one of the significant digital image processing techniques commonly used in the medical field. One specific application is tumor detection in abnormal Magnetic Resonance (MR) brain images. Fuzzy approaches are widely preferred for tumor segmentation and generally yield superior results in terms of accuracy. However, most fuzzy algorithms suffer from the drawback of a slow convergence rate, which makes the system practically infeasible. In this work, the application of a modified Fuzzy C-means (FCM) algorithm to tackle the convergence problem is explored in the context of brain image segmentation. This modified FCM algorithm employs the concept of quantization to improve the convergence rate besides yielding excellent segmentation efficiency. The algorithm is experimented on real-time abnormal MR brain images collected from radiologists. A comprehensive feature vector is extracted from these images and used for the segmentation technique. An extensive feature selection process is performed, which reduces the convergence time and improves the segmentation efficiency. After segmentation, the tumor portion is extracted from the segmented image. Comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior results for the modified FCM algorithm in terms of the performance measures. Thus, this work highlights the application of the modified algorithm for brain tumor detection in abnormal MR brain images.

  5. Data path design and image quality aspects of the next generation multifunctional printer

    NASA Astrophysics Data System (ADS)

    Brassé, M. H. H.; de Smet, S. P. R. C.

    2008-01-01

    Multifunctional devices (MFDs) are increasingly used as a document hub. The MFD is used as a copier, scanner, printer, and it facilitates digital document distribution and sharing. This imposes new requirements on the design of the data path and its image processing. Various design aspects need to be taken into account, including system performance, features, image quality, and cost price. A good balance is required in order to develop a competitive MFD. A modular datapath architecture is presented that supports all the envisaged use cases. Besides copying, colour scanning is becoming an important use case of a modern MFD. The copy-path use case is described and it is shown how colour scanning can also be supported with a minimal adaptation to the architecture. The key idea is to convert the scanner data to an opponent colour space representation at the beginning of the image processing pipeline. The sub-sampling of chromatic information allows for the saving of scarce hardware resources without significant perceptual loss of quality. In particular, we have shown that functional FPGA modules from the copy application can also be used for the scan-to-file application. This makes the presented approach very cost-effective while complying with market conform image quality standards.

  6. A spectral k-means approach to bright-field cell image segmentation.

    PubMed

    Bradbury, Laura; Wan, Justin W L

    2010-01-01

    Automatic segmentation of bright-field cell images is important to cell biologists, but difficult to accomplish due to the complex nature of the cells in bright-field images (poor contrast, broken halo, missing boundaries). Standard approaches such as level set segmentation and active contours work well for fluorescent images, where cells appear round, but become less effective when optical artifacts such as halos exist in bright-field images. In this paper, we present a robust segmentation method which combines spectral and k-means clustering techniques to locate cells in bright-field images. This approach models an image as a matrix graph and segments different regions of the image by computing the appropriate eigenvectors of the matrix graph and using the k-means algorithm. We illustrate the effectiveness of the method by segmentation results of C2C12 (muscle) cells in bright-field images.
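
    A compact spectral-plus-k-means sketch on a small greyscale image, using scikit-learn's pixel-graph utilities; the affinity scaling beta and the two-region assumption are illustrative, and the approach becomes expensive for large images.

        import numpy as np
        from sklearn.feature_extraction.image import img_to_graph
        from sklearn.cluster import spectral_clustering

        def spectral_kmeans_segment(image_2d, n_regions=2, beta=5.0):
            """Spectral segmentation of a small greyscale image: pixel affinities from
            intensity differences, eigenvector embedding, then k-means label assignment
            (assign_labels defaults to 'kmeans', mirroring the spectral + k-means idea)."""
            graph = img_to_graph(image_2d)                               # sparse pixel adjacency graph
            graph.data = np.exp(-beta * graph.data / graph.data.std())   # intensity-based affinities
            labels = spectral_clustering(graph, n_clusters=n_regions, eigen_solver="arpack")
            return labels.reshape(image_2d.shape)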

  7. Distance-based over-segmentation for single-frame RGB-D images

    NASA Astrophysics Data System (ADS)

    Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao

    2017-11-01

    Over-segmentation, known as super-pixels, is a widely used preprocessing step in segmentation algorithms. An over-segmentation algorithm segments an image into regions of perceptually similar pixels, but performs badly when based only on the color image in indoor environments. Fortunately, RGB-D images can improve performance on images of indoor scenes. In order to segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which realizes full coverage of the image by super-pixels. DBOS fills the holes in depth images to fully utilize the depth information, and applies SLIC-like frameworks for fast execution. Additionally, depth features such as plane projection distance are extracted to compute the distance measure, which is the core of SLIC-like frameworks. Experiments on RGB-D images of the NYU Depth V2 dataset demonstrate that DBOS outperforms state-of-the-art methods in quality while maintaining speeds comparable to them.

  8. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation.

    PubMed

    Zhao, Xiaomei; Wu, Yihong; Song, Guidong; Li, Zhenye; Zhang, Yazhuo; Fan, Yong

    2018-01-01

    Accurate and reliable brain tumor segmentation is a critical component in cancer diagnosis, treatment planning, and treatment outcome evaluation. Built upon successful deep learning techniques, a novel brain tumor segmentation method is developed by integrating fully convolutional neural networks (FCNNs) and Conditional Random Fields (CRFs) in a unified framework to obtain segmentation results with appearance and spatial consistency. We train a deep learning based segmentation model using 2D image patches and image slices in the following steps: 1) training FCNNs using image patches; 2) training CRFs as Recurrent Neural Networks (CRF-RNN) using image slices with the parameters of the FCNNs fixed; and 3) fine-tuning the FCNNs and the CRF-RNN using image slices. In particular, we train 3 segmentation models using 2D image patches and slices obtained in axial, coronal and sagittal views respectively, and combine them to segment brain tumors using a voting-based fusion strategy. Our method can segment brain images slice-by-slice, much faster than methods based on image patches. We have evaluated our method on imaging data provided by the Multimodal Brain Tumor Image Segmentation Challenge (BRATS) 2013, BRATS 2015 and BRATS 2016. The experimental results demonstrate that our method can build a segmentation model with Flair, T1c, and T2 scans and achieve performance competitive with models built with Flair, T1, T1c, and T2 scans. Copyright © 2017 Elsevier B.V. All rights reserved.
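
    The voting-based fusion of the three view-specific models can be sketched as a per-voxel majority vote over the axial, coronal and sagittal label maps; this is an illustrative reading of the fusion step under that assumption, not the authors' released code.

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse several integer label volumes of identical shape by per-voxel majority vote.
    Ties are broken in favour of the lowest label index."""
    stacked = np.stack(label_maps, axis=0)            # (n_models, D, H, W)
    n_labels = int(stacked.max()) + 1
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)], axis=0)
    return votes.argmax(axis=0)

# Example: fuse three toy segmentations (0 = background, 1 = tumour)
axial    = np.random.randint(0, 2, size=(4, 8, 8))
coronal  = np.random.randint(0, 2, size=(4, 8, 8))
sagittal = np.random.randint(0, 2, size=(4, 8, 8))
fused = majority_vote_fusion([axial, coronal, sagittal])
```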

  9. Keeping It in Three Dimensions: Measuring the Development of Mental Rotation in Children with the Rotated Colour Cube Test (RCCT).

    PubMed

    Lütke, Nikolay; Lange-Küttner, Christiane

    2015-08-03

    This study introduces the new Rotated Colour Cube Test (RCCT) as a measure of object identification and mental rotation using single 3D colour cube images in a matching-to-sample procedure. One hundred 7- to 11-year-old children were tested with aligned or rotated cube models, distracters and targets. While different orientations of distracters made the RCCT more difficult, different colours of distracters had the opposite effect and made the RCCT easier because colour facilitated clearer discrimination between target and distracters. Ten-year-olds performed significantly better than 7- to 8-year-olds. The RCCT significantly correlated with children's performance on the Raven's Coloured Progressive Matrices Test (RCPM) presumably due to the shared multiple-choice format, but the RCCT was easier, as it did not require sequencing. Children from families with a high socio-economic status performed best on both tests, with boys outperforming girls on the more difficult RCCT test sections.

  10. Type of packaging affects the colour stability of vitamin E enriched beef.

    PubMed

    Nassu, Renata T; Uttaro, Bethany; Aalhus, Jennifer L; Zawadski, Sophie; Juárez, Manuel; Dugan, Michael E R

    2012-12-01

    Colour stability is a very important parameter for meat retail display, as appearance of the product is the deciding factor for consumers at the time of purchase. This study investigated the possibility of extending appearance shelf-life through the combined use of packaging method (overwrapping - OVER, modified atmosphere - MAP, vacuum skin packaging - VSP and a combination of modified atmosphere and vacuum skin packaging - MAPVSP) and antioxidants (vitamin E enriched beef). Retail attributes (appearance, lean colour, % surface discolouration), as well as colour space analysis of images for red, green and blue parameters, were measured over 18 days. MAPVSP provided the most desirable retail appearance during the first 4 days of retail display, while VSP-HB had the best colour stability. Overall, packaging type was more influential than α-tocopherol levels on meat colour stability, although α-tocopherol levels (>4 μg g(-1) meat) had a protective effect when using high oxygen packaging methods. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  11. Integrated circuit layer image segmentation

    NASA Astrophysics Data System (ADS)

    Masalskis, Giedrius; Petrauskas, Romas

    2010-09-01

    In this paper we present IC layer image segmentation techniques which are specifically created for precise metal layer feature extraction. During our research we used many samples of real-life de-processed IC metal layer images obtained using an optical light microscope. We created a sequence of image processing filters that provides segmentation results of sufficient precision for our application. The filter sequences were fine-tuned to provide the best possible results depending on the properties of the IC manufacturing process and imaging technology. The proposed IC image segmentation filter sequences were experimentally tested and compared with conventional direct segmentation algorithms.

  12. Frequential versus spatial colour textons for breast TMA classification.

    PubMed

    Fernández-Carrobles, M Milagro; Bueno, Gloria; Déniz, Oscar; Salido, Jesús; García-Rojo, Marcial; Gonzández-López, Lucía

    2015-06-01

    Advances in digital pathology are generating huge volumes of whole slide images (WSI) and tissue microarray (TMA) images which are providing new insights into the causes of cancer. The challenge is to extract and process effectively all the information in order to characterize all the heterogeneous tissue-derived data. This study aims to identify an optimal set of features that best separates different classes in breast TMA. These classes are: stroma, adipose tissue, benign and benign anomalous structures, and ductal and lobular carcinomas. To this end, we propose an exhaustive assessment of the utility of textons and colour for automatic classification of breast TMA. Frequential and spatial texton maps from eight different colour models were extracted and compared. Then, in a novel way, the TMA is characterized by the 1st and 2nd order Haralick statistical descriptors obtained from the texton maps, with a total of 241 × 8 features for each original RGB image. Subsequently, a feature selection process is performed to remove redundant information and therefore to reduce the dimensionality of the feature vector. Three methods were evaluated: linear discriminant analysis, correlation and sequential forward search. Finally, an extended bank of classifiers composed of six techniques was compared, but only three of them could significantly improve accuracy rates: Fisher, Bagging Trees and AdaBoost. Our results reveal that the combination of different colour models applied to spatial texton maps provides the most efficient representation of the breast TMA. Specifically, we found that the best colour model combination is Hb, Luv and SCT for all classifiers, and the classifier that performs best for all colour model combinations is AdaBoost. On a database comprising 628 TMA images, classification yields an accuracy of 98.1% and a precision of 96.2% with a total of 316 features on spatial texton maps. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Multi-scale image segmentation method with visual saliency constraints and its application

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Yu, Jie; Sun, Kaimin

    2018-03-01

    Object-based image analysis has many advantages over pixel-based methods, so it is a current research hotspot. Obtaining image objects through multi-scale image segmentation is essential for carrying out object-based image analysis. The currently popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to implement and yields accurate object boundaries. However, the macro statistical characteristics of the image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important, i.e., some specific targets or target groups with particular features deserve more attention than others. To avoid the problem of over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, due to the constraint of the visual saliency model, the balance between local and macroscopic characteristics can be well controlled during the segmentation process for different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and allows priority control to be given to the saliency objects of interest. The method has been used in image quality evaluation, scattered residential area extraction, sparse forest extraction and other applications to verify its validity. All applications showed good results.

  14. Medical image segmentation using 3D MRI data

    NASA Astrophysics Data System (ADS)

    Voronin, V.; Marchuk, V.; Semenishchev, E.; Cen, Yigang; Agaian, S.

    2017-05-01

    Precise segmentation of three-dimensional (3D) magnetic resonance imaging (MRI) images can be a very useful computer aided diagnosis (CAD) tool in clinical routines. Accurate automatic extraction of a 3D component from images obtained by magnetic resonance imaging (MRI) is a challenging segmentation problem due to the small size of the objects of interest (e.g., blood vessels, bones) in each 2D MRA slice and the complex surrounding anatomical structures. Our objective is to develop a specific segmentation scheme for accurately extracting parts of bones from MRI images. In this paper, we use a segmentation algorithm based on a modified active contour method to extract the parts of bones from magnetic resonance imaging (MRI) data sets. As a result, the proposed method demonstrates good accuracy in comparison with existing segmentation approaches on real MRI data.

  15. Open-source software platform for medical image segmentation applications

    NASA Astrophysics Data System (ADS)

    Namías, R.; D'Amato, J. P.; del Fresno, M.

    2017-11-01

    Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling simultaneously different segmentation strategies and interacting with a graphic user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework to different real-case medical image scenarios on publicly available datasets, including bladder and prostate segmentation from 2D MRI, and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast-prototyping open-source segmentation tool.

  16. A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images

    PubMed Central

    Luo, Yaozhong; Liu, Longzhong; Li, Xuelong

    2017-01-01

    Ultrasound imaging has become one of the most popular medical imaging modalities with numerous diagnostic applications. However, ultrasound (US) image segmentation, which is the essential process for further analysis, is a challenging task due to the poor image quality. In this paper, we propose a new segmentation scheme to combine both region- and edge-based information into the robust graph-based (RGB) segmentation method. The only interaction required is to select two diagonal points to determine a region of interest (ROI) on the original image. The ROI image is smoothed by a bilateral filter and then contrast-enhanced by histogram equalization. Then, the enhanced image is filtered by pyramid mean shift to improve homogeneity. With the optimization of particle swarm optimization (PSO) algorithm, the RGB segmentation method is performed to segment the filtered image. The segmentation results of our method have been compared with the corresponding results obtained by three existing approaches, and four metrics have been used to measure the segmentation performance. The experimental results show that the method achieves the best overall performance and gets the lowest ARE (10.77%), the second highest TPVF (85.34%), and the second lowest FPVF (4.48%). PMID:28536703
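
    The preprocessing chain described (bilateral filtering, histogram equalisation, pyramid mean shift) maps directly onto standard OpenCV calls, as sketched below; the parameter values are illustrative assumptions, and the ROI selection and the PSO-tuned graph-based segmentation step are omitted.

```python
import cv2
import numpy as np

def preprocess_ultrasound(roi_gray):
    """Smooth, contrast-enhance and homogenise an 8-bit grayscale ultrasound ROI."""
    smoothed = cv2.bilateralFilter(roi_gray, 9, 75, 75)      # edge-preserving smoothing
    enhanced = cv2.equalizeHist(smoothed)                     # histogram equalisation
    # pyrMeanShiftFiltering expects a 3-channel 8-bit image
    bgr = cv2.cvtColor(enhanced, cv2.COLOR_GRAY2BGR)
    homogenised = cv2.pyrMeanShiftFiltering(bgr, 15, 30)      # spatial / range radii
    return cv2.cvtColor(homogenised, cv2.COLOR_BGR2GRAY)

# Example with a synthetic speckled image standing in for an ultrasound ROI
roi = (np.random.rand(128, 128) * 255).astype(np.uint8)
filtered = preprocess_ultrasound(roi)
```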

  17. Spectral heterogeneity on Phobos and Deimos: HiRISE observations and comparisons to Mars Pathfinder results

    USGS Publications Warehouse

    Thomas, N.; Stelter, R.; Ivanov, A.; Bridges, N.T.; Herkenhoff, K. E.; McEwen, A.S.

    2011-01-01

    The High-Resolution Imaging Science Experiment (HiRISE) onboard Mars Reconnaissance Orbiter (MRO) has been used to observe Phobos and Deimos at spatial scales of around 6 and 20 m/px, respectively. HiRISE (McEwen et al.; JGR, 112, CiteID E05S02, DOI: 10.1029/2005JE002605, 2007) has provided, for the first time, high-resolution colour images of the surfaces of the Martian moons. When processed, by the production of colour ratio images for example, the data show considerable small-scale heterogeneity, which might be attributable to fresh impacts exposing different materials otherwise largely hidden by a homogeneous regolith. The bluer material that is draped over the south-eastern rim of the largest crater on Phobos, Stickney, has been perforated by an impact to reveal redder material and must therefore be relatively thin. A fresh impact with dark crater rays has been identified. Previously identified mass-wasting features in Stickney and Limtoc craters stand out strongly in colour. The interior deposits in Stickney appear more inhomogeneous than previously suspected. Several other local colour variations are also evident. Deimos is more uniform in colour but does show some small-scale inhomogeneity. The bright streamers (Thomas et al.; Icarus, 123, 536-556, 1996) are relatively blue. One crater to the south-west of Voltaire and its surroundings appear quite strongly reddened with respect to the rest of the surface. The reddening of the surroundings may be the result of ejecta from this impact. The spectral gradients at optical wavelengths observed for both Phobos and Deimos are quantitatively in good agreement with those found by unresolved photometric observations made by the Imager for Mars Pathfinder (IMP; Thomas et al.; JGR, 104, 9055-9068, 1999). The spectral gradients of the blue and red units on Phobos bracket the results from IMP. Copyright © 2010 Elsevier Ltd. All rights reserved.

  18. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  19. Segmentation of medical images using explicit anatomical knowledge

    NASA Astrophysics Data System (ADS)

    Wilson, Laurie S.; Brown, Stephen; Brown, Matthew S.; Young, Jeanne; Li, Rongxin; Luo, Suhuai; Brandt, Lee

    1999-07-01

    Knowledge-based image segmentation is defined in terms of the separation of image analysis procedures and representation of knowledge. Such an architecture is particularly suitable for medical image segmentation because of the large amount of structured domain knowledge. A general methodology for the application of knowledge-based methods to medical image segmentation is described. This includes frames for knowledge representation, fuzzy logic for anatomical variations, and a strategy for determining the order of segmentation from the model specification. This method has been applied to three separate problems: 3D thoracic CT, chest X-rays and CT angiography. The application of the same methodology to such a range of applications suggests a major role in medical imaging for segmentation methods incorporating representation of anatomical knowledge.

  20. The value of digital imaging in diabetic retinopathy.

    PubMed

    Sharp, P F; Olson, J; Strachan, F; Hipwell, J; Ludbrook, A; O'Donnell, M; Wallace, S; Goatman, K; Grant, A; Waugh, N; McHardy, K; Forrester, J V

    2003-01-01

    To assess the performance of digital imaging, compared with other modalities, in screening for and monitoring the development of diabetic retinopathy. All imaging was acquired at a hospital assessment clinic. Subsequently, study optometrists examined the patients in their own premises. A subset of patients also had fluorescein angiography performed every 6 months. The settings were a research clinic at the hospital eye clinic and the optometrists' own premises. The study comprised 103 patients who had type 1 diabetes mellitus, 481 who had type 2 diabetes mellitus and two who had secondary diabetes mellitus; 157 (26.8%) had some form of retinopathy ('any') and 58 (9.9%) had referable retinopathy. A repeat assessment of all patients was carried out 1 year after their initial assessment. Patients who had more severe forms of retinopathy were monitored more frequently for evidence of progression. The main outcomes were detection of retinopathy, progression of retinopathy and determination of when treatment is required. Manual grading of 35-mm colour slides produced the highest sensitivity and specificity figures, with optometrist examination recording the most false negatives. Manual and automated analysis of digital images had intermediate sensitivity. Both manual grading of 35-mm colour slides and of digital images gave sensitivities of over 90% with few false positives. Digital imaging produced 50% fewer ungradable images than colour slides. This part of the study was limited, as patients with the more severe levels of retinopathy opted for treatment. There was an increase in the number of microaneurysms in those patients who progressed from mild to moderate retinopathy. There was no difference between the turnover rate of either new or regressed microaneurysms for patients with mild or with sight-threatening retinopathy. It was not possible in this study to ascertain whether digital imaging systems can determine when treatment is warranted. In the context of a national screening programme for referable retinopathy, digital imaging is an effective method. In addition, technical failure rates are lower with digital imaging than with conventional photography. Digital imaging is also a more sensitive technique than slit-lamp examination by optometrists. Automated grading can improve efficiency by correctly identifying just under half the population as having no retinopathy. Recommendations for future research include: investigating whether the nasal field is required for grading; conducting a large screening programme to ascertain whether automated grading can safely perform as a first-level grader; determining whether colour improves the performance of grading digital images; and investigating methods to ensure effective uptake in a diabetic retinopathy screening programme.

  1. Geometric and Colour Data Fusion for Outdoor 3D Models

    PubMed Central

    Merchán, Pilar; Adán, Antonio; Salamanca, Santiago; Domínguez, Vicente; Chacón, Ricardo

    2012-01-01

    This paper deals with the generation of accurate, dense and coloured 3D models of outdoor scenarios from scanners. This is a challenging research field in which several problems still remain unsolved. In particular, the process of 3D model creation in outdoor scenes may be inefficient if the scene is digitalized under unsuitable technical (specific scanner on-board camera) and environmental (rain, dampness, changing illumination) conditions. We address our research towards the integration of images and range data to produce photorealistic models. Our proposal is based on decoupling the colour integration and geometry reconstruction stages, making them independent and controlled processes. This issue is approached from two different viewpoints. On the one hand, given a complete model (geometry plus texture), we propose a method to modify the original texture provided by the scanner on-board camera with the colour information extracted from external images taken at given moments and under specific environmental conditions. On the other hand, we propose an algorithm to directly assign external images onto the complete geometric model, thus avoiding tedious on-line calibration processes. We present the work conducted on two large Roman archaeological sites dating from the first century A.D., namely, the Theatre of Segobriga and the Fori Porticus of Emerita Augusta, both in Spain. The results obtained demonstrate that our approach could be useful in the digitalization and 3D modelling fields. PMID:22969327

  2. WE-EF-210-08: BEST IN PHYSICS (IMAGING): 3D Prostate Segmentation in Ultrasound Images Using Patch-Based Anatomical Feature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X; Rossi, P; Jani, A

    Purpose: Transrectal ultrasound (TRUS) is the standard imaging modality for image-guided prostate-cancer interventions (e.g., biopsy and brachytherapy) due to its versatility and real-time capability. Accurate segmentation of the prostate plays a key role in biopsy needle placement, treatment planning, and motion monitoring. As ultrasound images have a relatively low signal-to-noise ratio (SNR), automatic segmentation of the prostate is difficult. However, manual segmentation during biopsy or radiation therapy can be time consuming. We are developing an automated method to address this technical challenge. Methods: The proposed segmentation method consists of two major stages: the training stage and the segmentation stage. During the training stage, patch-based anatomical features are extracted from the registered training images with patient-specific information, because these training images have been mapped to the new patient's images, and the more informative anatomical features are selected to train the kernel support vector machine (KSVM). During the segmentation stage, the selected anatomical features are extracted from the newly acquired image as the input of the well-trained KSVM, and the output of this trained KSVM is the segmented prostate of this patient. Results: This segmentation technique was validated with a clinical study of 10 patients. The accuracy of our approach was assessed using the manual segmentation. The mean volume Dice Overlap Coefficient was 89.7±2.3%, and the average surface distance was 1.52 ± 0.57 mm between our and manual segmentation, which indicates that the automatic segmentation method works well and could be used for 3D ultrasound-guided prostate intervention. Conclusion: We have developed a new prostate segmentation approach based on the optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy with manual segmentation (gold standard). This segmentation technique could be a useful tool for image-guided interventions in prostate-cancer diagnosis and treatment. This research is supported in part by DOD PCRP Award W81XWH-13-1-0269, and National Cancer Institute (NCI) Grant CA114313.
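
    A minimal sketch of the train/segment split described, assuming precomputed patch-based anatomical feature vectors; the feature extraction, registration and feature selection steps are not shown, and the hyperparameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

# Training stage: patch features (rows) with prostate / non-prostate labels,
# assumed to come from registered, patient-mapped training images.
X_train = np.random.rand(500, 32)          # 500 patches, 32 anatomical features each
y_train = np.random.randint(0, 2, 500)     # 1 = prostate, 0 = background

ksvm = SVC(kernel='rbf', C=1.0, gamma='scale')   # kernel support vector machine
ksvm.fit(X_train, y_train)

# Segmentation stage: classify patches extracted from the newly acquired image.
X_new = np.random.rand(200, 32)
predicted_labels = ksvm.predict(X_new)     # per-patch prostate / background decision
```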

  3. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.

  4. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    PubMed

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-08

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using a MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and the target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (< 1 ms) with satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest using an image quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on-board MR-IGRT system.
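
    The two evaluation measures used here (Dice coefficient and centroid-to-centroid target registration error) are standard and can be computed as in the sketch below; the array shapes and pixel spacing are assumptions for illustration.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def centroid_tre(mask_manual, mask_auto, pixel_spacing=1.0):
    """Target registration error: distance between ROI centroids, in physical units."""
    c_manual = np.argwhere(mask_manual).mean(axis=0)
    c_auto = np.argwhere(mask_auto).mean(axis=0)
    return np.linalg.norm((c_manual - c_auto) * pixel_spacing)

# Example with two toy ROIs on a 64x64 frame (1.5 mm pixels assumed)
manual = np.zeros((64, 64), bool); manual[20:40, 20:40] = True
auto   = np.zeros((64, 64), bool); auto[22:42, 21:41] = True
print(dice_coefficient(manual, auto), centroid_tre(manual, auto, pixel_spacing=1.5))
```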

  5. User-guided segmentation for volumetric retinal optical coherence tomography images

    PubMed Central

    Yin, Xin; Chao, Jennifer R.; Wang, Ruikang K.

    2014-01-01

    Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need of manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method. PMID:25147962

  6. User-guided segmentation for volumetric retinal optical coherence tomography images.

    PubMed

    Yin, Xin; Chao, Jennifer R; Wang, Ruikang K

    2014-08-01

    Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need of manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method.

  7. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    NASA Astrophysics Data System (ADS)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.

  8. Segmentation of the Clustered Cells with Optimized Boundary Detection in Negative Phase Contrast Images

    PubMed Central

    Wang, Yuliang; Zhang, Zaicheng; Wang, Huimin; Bi, Shusheng

    2015-01-01

    Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting more and more attention. In this study, an automated cell image segmentation algorithm is developed to obtain improved cell boundary detection and segmentation of clustered cells for all cells in the field of view in negative phase contrast images. A new method which combines a thresholding method and an edge-based active contour method is proposed to optimize cell boundary detection. In order to segment clustered cells, the geographic peaks of cell light intensity are utilized to detect the number and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of parameters in cell boundary detection and the selection of the threshold value on the final segmentation results are investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments, and the performance of the proposed method is evaluated. Results show that the proposed method can achieve optimized cell boundary detection and highly accurate segmentation for clustered cells. PMID:26066315
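
    One common way to realise the "intensity peaks as seeds for clustered cells" idea is a marker-controlled watershed seeded at local intensity maxima, as sketched below with scikit-image; this follows the general principle described rather than the exact algorithm of the paper, and the peak-distance parameter is an assumption.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_clustered_cells(intensity, cell_mask, min_peak_distance=10):
    """Split touching cells inside `cell_mask` by seeding a watershed
    at local maxima of the cell light intensity."""
    peaks = peak_local_max(intensity, min_distance=min_peak_distance,
                           labels=cell_mask.astype(int))
    markers = np.zeros(intensity.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Flood downhill from the peaks on the inverted intensity, limited to the mask
    return watershed(-intensity, markers, mask=cell_mask)

# Toy example: two overlapping Gaussian "cells"
yy, xx = np.mgrid[0:100, 0:100]
img = (np.exp(-((yy - 40) ** 2 + (xx - 40) ** 2) / 200.0) +
       np.exp(-((yy - 60) ** 2 + (xx - 60) ** 2) / 200.0))
mask = img > 0.2
labels = split_clustered_cells(img, mask)
```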

  9. Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion

    NASA Astrophysics Data System (ADS)

    Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Tade, Funmilayo; Schuster, David M.; Nieh, Peter; Master, Viraj; Fei, Baowei

    2017-02-01

    Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy. However, prostate CT image segmentation is challenging because of the low contrast of soft tissue on CT images. In this paper, we propose an automatic segmentation method by combining a deep learning method and multi-atlas refinement. First, instead of segmenting the whole image, we extract the region of interest (ROI) to remove irrelevant regions. Then, we use convolutional neural networks (CNN) to learn the deep features for distinguishing the prostate pixels from the non-prostate pixels in order to obtain the preliminary segmentation results. CNN can automatically learn deep features adapted to the data, which are different from handcrafted features. Finally, we select some similar atlases to refine the initial segmentation results. The proposed method has been evaluated on a dataset of 92 prostate CT images. Experimental results show that our method achieved a Dice similarity coefficient of 86.80% as compared to the manual segmentation. The deep learning based method can provide a useful tool for automatic segmentation of the prostate on CT images and thus can have a variety of clinical applications.

  10. Clustering approach for unsupervised segmentation of malarial Plasmodium vivax parasite

    NASA Astrophysics Data System (ADS)

    Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Mohamed, Zeehaida

    2017-10-01

    Malaria is a global health problem, particularly in Africa and south Asia where it causes countless deaths and morbidity cases. Efficient control and prompt treatment of this disease require early detection and accurate diagnosis due to the large number of cases reported yearly. To achieve this aim, this paper proposes an image segmentation approach via unsupervised pixel segmentation of the malaria parasite to automate the diagnosis of malaria. In this study, a modified clustering algorithm, namely enhanced k-means (EKM) clustering, is proposed for malaria image segmentation. In the proposed EKM clustering, the concept of variance and a new version of the transferring process for clustered members are used to assist the assignment of data to the proper centre during the clustering process, so that well-segmented malaria images can be generated. The effectiveness of the proposed EKM clustering has been analyzed qualitatively and quantitatively by comparing this algorithm with two popular image segmentation techniques, namely Otsu's thresholding and k-means clustering. The experimental results show that the proposed EKM clustering has successfully segmented 100 malaria images of the P. vivax species with segmentation accuracy, sensitivity and specificity of 99.20%, 87.53% and 99.58%, respectively. Hence, the proposed EKM clustering can be considered as an image segmentation tool for segmenting malaria images.
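
    The clustering step can be illustrated with a plain k-means on pixel colours (the enhanced EKM variant with its variance-based transfer rule is not reproduced here); the colour space, the cluster count and the mapping of clusters to parasite/cell/background are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_colour_segmentation(image_rgb, n_clusters=3, random_state=0):
    """Cluster pixel colours of an HxWx3 image and return a label map.
    For malaria smears, clusters typically correspond to parasite, red blood
    cells and background (the assignment must be checked per image)."""
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    labels = km.fit_predict(pixels)
    return labels.reshape(h, w), km.cluster_centers_

# Example on a synthetic image standing in for a stained blood smear
img = (np.random.rand(60, 60, 3) * 255).astype(np.uint8)
label_map, centres = kmeans_colour_segmentation(img, n_clusters=3)
```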

  11. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. On the basis of WorldView-2 high-resolution data and the optimal segmentation parameters method of object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of the bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of the control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  12. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    PubMed

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. On the basis of WorldView-2 high-resolution data and the optimal segmentation parameters method of object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of the bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of the control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.

  13. Impact of CT perfusion imaging on the assessment of peripheral chronic pulmonary thromboembolism: clinical experience in 62 patients.

    PubMed

    Le Faivre, Julien; Duhamel, Alain; Khung, Suonita; Faivre, Jean-Baptiste; Lamblin, Nicolas; Remy, Jacques; Remy-Jardin, Martine

    2016-11-01

    To evaluate the impact of CT perfusion imaging on the detection of peripheral chronic pulmonary embolism (CPE). Sixty-two patients underwent a dual-energy chest CT angiographic examination with (a) reconstruction of diagnostic and perfusion images, (b) enabling depiction of vascular features of peripheral CPE on diagnostic images and of perfusion defects (20 segments/patient; total: 1240 segments examined). The interpretation of diagnostic images was of two types: (a) standard (i.e., based on cross-sectional images alone) or (b) detailed (i.e., based on cross-sectional images and MIPs). The segment-based analysis showed (a) 1179 segments analyzable on both imaging modalities and 61 segments rated as nonanalyzable on perfusion images; (b) the percentage of diseased segments was increased by 7.2 % when perfusion imaging was compared to the detailed reading of diagnostic images, and by 26.6 % when compared to the standard reading of images. At a patient level, the extent of peripheral CPE was higher on perfusion imaging, with a greater impact when compared to the standard reading of diagnostic images (number of patients with a greater number of diseased segments: n = 45; 72.6 % of the study population). Perfusion imaging allows recognition of a greater extent of peripheral CPE compared to diagnostic imaging. • Dual-energy computed tomography generates standard diagnostic imaging and lung perfusion analysis. • Depiction of CPE on central arteries relies on standard diagnostic imaging. • Detection of peripheral CPE is improved by perfusion imaging.

  14. Denoising and segmentation of retinal layers in optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Dash, Puspita; Sigappi, A. N.

    2018-04-01

    Optical Coherence Tomography (OCT) is an imaging technique used to localize the intra-retinal boundaries for the diagnosis of macular diseases. Due to speckle noise and low image contrast, accurate segmentation of individual retinal layers is difficult. Therefore, a method for retinal layer segmentation from OCT images is presented. This paper proposes a pre-processing filtering approach for denoising and a graph-based technique for segmenting the retinal layers in OCT images. These techniques are used for segmentation of retinal layers for normal subjects as well as patients with Diabetic Macular Edema. An algorithm based on gradient information and shortest path search is applied to optimize the edge selection. In this paper the four main layers of the retina are segmented, namely the internal limiting membrane (ILM), retinal pigment epithelium (RPE), inner nuclear layer (INL) and outer nuclear layer (ONL). The proposed method is applied to a database of OCT images of ten normal subjects and twenty DME-affected patients, and the results are found to be promising.
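
    The "gradient information and shortest path search" idea can be sketched as a simple column-wise dynamic programme that traces one retinal boundary as a minimum-cost path through a gradient-based cost image; this is a simplified illustration under that assumption, not the exact graph construction used in the paper.

```python
import numpy as np

def trace_layer_boundary(bscan):
    """Trace one boundary across a 2D OCT B-scan (rows = depth, cols = A-scans)
    as a connected minimum-cost path from the leftmost to the rightmost column."""
    grad = np.abs(np.diff(bscan.astype(float), axis=0))   # vertical gradient magnitude
    cost = 1.0 / (grad + 1e-6)                            # strong edges -> low cost
    rows, cols = cost.shape
    acc = cost.copy()
    for c in range(1, cols):                              # forward DP pass
        prev = acc[:, c - 1]
        up = np.empty_like(prev);   up[0] = prev[0];    up[1:] = prev[:-1]
        down = np.empty_like(prev); down[-1] = prev[-1]; down[:-1] = prev[1:]
        acc[:, c] += np.minimum(np.minimum(up, prev), down)
    # Backtrack from the cheapest end point, moving at most one row per column
    boundary = np.empty(cols, dtype=int)
    boundary[-1] = int(acc[:, -1].argmin())
    for c in range(cols - 2, -1, -1):
        r = boundary[c + 1]
        lo, hi = max(r - 1, 0), min(r + 2, rows)
        boundary[c] = lo + int(acc[lo:hi, c].argmin())
    return boundary   # row index of the traced boundary in each column

# Example on a synthetic B-scan containing one bright band
bscan = np.zeros((80, 120)); bscan[40:45, :] = 1.0
edge_rows = trace_layer_boundary(bscan)
```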

  15. Automatic Cell Segmentation in Fluorescence Images of Confluent Cell Monolayers Using Multi-object Geometric Deformable Model.

    PubMed

    Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L

    2013-03-13

    With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.

  16. A novel multiphoton microscopy images segmentation method based on superpixel and watershed.

    PubMed

    Wu, Weilin; Lin, Jinyong; Wang, Shu; Li, Yan; Liu, Mingyu; Liu, Gaoqiang; Cai, Jianyong; Chen, Guannan; Chen, Rong

    2017-04-01

    Multiphoton microscopy (MPM) imaging based on two-photon excited fluorescence (TPEF) and second harmonic generation (SHG) shows excellent performance for biological imaging. The automatic segmentation of cellular architectural properties for biomedical diagnosis based on MPM images is still a challenging issue. A novel multiphoton microscopy image segmentation method based on superpixels and watershed (MSW) is presented here to provide good segmentation results for MPM images. The proposed method uses SLIC superpixels instead of pixels to analyze MPM images for the first time. The superpixel segmentation, based on a new distance metric combining spatial, CIE Lab color space and phase congruency features, divides the images into patches which keep the details of the cell boundaries. Then the superpixels are used to reconstruct new images by defining the average value of each superpixel as the image pixel intensity level. Finally, a marker-controlled watershed is utilized to segment the cell boundaries from the reconstructed images. Experimental results show that cellular boundaries can be extracted from MPM images by MSW with higher accuracy and robustness. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
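
    A condensed sketch of this kind of superpixel-then-watershed pipeline using scikit-image: SLIC superpixels, reconstruction of the image from superpixel means, then a watershed on the gradient of the reconstruction. The phase-congruency distance metric and the explicit marker selection of the paper are simplified away (here the watershed seeds default to local minima of the gradient), and all parameter values are assumptions.

```python
import numpy as np
from skimage.segmentation import slic, watershed
from skimage.filters import sobel

def superpixel_watershed_segmentation(image, n_superpixels=400):
    """Superpixel-then-watershed segmentation of a 2D grayscale image."""
    sp = slic(image, n_segments=n_superpixels, compactness=0.1, channel_axis=None)
    # Reconstruct the image by replacing each superpixel with its mean intensity
    recon = np.zeros_like(image, dtype=float)
    for label in np.unique(sp):
        recon[sp == label] = image[sp == label].mean()
    # Watershed on the gradient of the reconstructed image
    # (without explicit markers, seeds default to the gradient's local minima)
    gradient = sobel(recon)
    return watershed(gradient)

# Example on a synthetic grayscale image standing in for an MPM frame
img = np.random.rand(128, 128)
seg = superpixel_watershed_segmentation(img)
```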

  17. Magnetic resonance brain tissue segmentation based on sparse representations

    NASA Astrophysics Data System (ADS)

    Rueda, Andrea

    2015-12-01

    Segmentation or delineation of specific organs and structures in medical images is an important task in clinical diagnosis and treatment, since it allows pathologies to be characterized through imaging measures (biomarkers). In brain imaging, segmentation of main tissues or specific structures is challenging, due to the anatomic variability and complexity, and the presence of image artifacts (noise, intensity inhomogeneities, partial volume effect). In this paper, an automatic segmentation strategy is proposed, based on sparse representations and coupled dictionaries. Image intensity patterns are singly related to tissue labels at the level of small patches, gathering this information in coupled intensity/segmentation dictionaries. These dictionaries are used within a sparse representation framework to find the projection of a new intensity image onto the intensity dictionary, and the same projection can be used with the segmentation dictionary to estimate the corresponding segmentation. Preliminary results obtained with two publicly available datasets suggest that the proposal is capable of estimating adequate segmentations for gray matter (GM) and white matter (WM) tissues, with an average overlap of 0.79 for GM and 0.71 for WM (with respect to the original segmentations).

  18. Tissue Probability Map Constrained 4-D Clustering Algorithm for Increased Accuracy and Robustness in Serial MR Brain Image Segmentation

    PubMed Central

    Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen

    2010-01-01

    The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results might be biased by the tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., a series of 3-D MR brain images of the same subject at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both accuracy and robustness for serial image computing, and at the same time produced longitudinally consistent segmentation and stable measures. In the algorithm, the tissue probability maps consist of both population-based and subject-specific segmentation priors. Experimental study using both simulated longitudinal MR brain data and Alzheimer's Disease Neuroimaging Initiative (ADNI) data confirmed that using both priors yields more accurate and robust segmentation results. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging with subtle morphological changes for neurological disorders. PMID:26566399

  19. Gebiss: an ImageJ plugin for the specification of ground truth and the performance evaluation of 3D segmentation algorithms

    PubMed Central

    2011-01-01

    Background Image segmentation is a crucial step in quantitative microscopy that helps to define regions of tissues, cells or subcellular compartments. Depending on the degree of user interactions, segmentation methods can be divided into manual, automated or semi-automated approaches. 3D image stacks usually require automated methods due to their large number of optical sections. However, certain applications benefit from manual or semi-automated approaches. Scenarios include the quantification of 3D images with poor signal-to-noise ratios or the generation of so-called ground truth segmentations that are used to evaluate the accuracy of automated segmentation methods. Results We have developed Gebiss; an ImageJ plugin for the interactive segmentation, visualisation and quantification of 3D microscopic image stacks. We integrated a variety of existing plugins for threshold-based segmentation and volume visualisation. Conclusions We demonstrate the application of Gebiss to the segmentation of nuclei in live Drosophila embryos and the quantification of neurodegeneration in Drosophila larval brains. Gebiss was developed as a cross-platform ImageJ plugin and is freely available on the web at http://imaging.bii.a-star.edu.sg/projects/gebiss/. PMID:21668958

  20. A Pulse Coupled Neural Network Segmentation Algorithm for Reflectance Confocal Images of Epithelial Tissue

    PubMed Central

    Malik, Bilal H.; Jabbour, Joey M.; Maitland, Kristen C.

    2015-01-01

    Automatic segmentation of nuclei in reflectance confocal microscopy images is critical for visualization and rapid quantification of nuclear-to-cytoplasmic ratio, a useful indicator of epithelial precancer. Reflectance confocal microscopy can provide three-dimensional imaging of epithelial tissue in vivo with sub-cellular resolution. Changes in nuclear density or nuclear-to-cytoplasmic ratio as a function of depth obtained from confocal images can be used to determine the presence or stage of epithelial cancers. However, low nuclear to background contrast, low resolution at greater imaging depths, and significant variation in reflectance signal of nuclei complicate segmentation required for quantification of nuclear-to-cytoplasmic ratio. Here, we present an automated segmentation method to segment nuclei in reflectance confocal images using a pulse coupled neural network algorithm, specifically a spiking cortical model, and an artificial neural network classifier. The segmentation algorithm was applied to an image model of nuclei with varying nuclear to background contrast. Greater than 90% of simulated nuclei were detected for contrast of 2.0 or greater. Confocal images of porcine and human oral mucosa were used to evaluate application to epithelial tissue. Segmentation accuracy was assessed using manual segmentation of nuclei as the gold standard. PMID:25816131

  1. Pulse Coupled Neural Networks for the Segmentation of Magnetic Resonance Brain Images.

    DTIC Science & Technology

    1996-12-01

    This thesis (Shane Lee Abrahamson, First Lieutenant, USAF; AFIT/GCS/ENG/96D-01) develops an automated method for segmenting Magnetic Resonance (MR) brain images based on Pulse Coupled Neural Networks (PCNN).

  2. Automated choroid segmentation of three-dimensional SD-OCT images by incorporating EDI-OCT images.

    PubMed

    Chen, Qiang; Niu, Sijie; Fang, Wangyi; Shuai, Yuanlu; Fan, Wen; Yuan, Songtao; Liu, Qinghuai

    2018-05-01

    The measurement of choroidal volume is more closely related to eye diseases than choroidal thickness, because the choroidal volume can reflect the diseases more comprehensively. The purpose is to automatically segment the choroid in three-dimensional (3D) spectral domain optical coherence tomography (SD-OCT) images. We present a novel choroid segmentation strategy for SD-OCT images that incorporates enhanced depth imaging OCT (EDI-OCT) images. The lower boundary of the choroid, namely the choroid-sclera junction (CSJ), is almost invisible in SD-OCT images, while it is visible in EDI-OCT images. During SD-OCT imaging, EDI-OCT images can be generated for the same eye. Thus, we present an EDI-OCT-driven choroid segmentation method for SD-OCT images, where the choroid segmentation results of the EDI-OCT images are used to estimate the average choroidal thickness and to improve the construction of the CSJ feature space of the SD-OCT images. We also present a whole registration method between EDI-OCT and SD-OCT images based on retinal thickness and Bruch's membrane (BM) position. The CSJ surface is obtained with a 3D graph search in the CSJ feature space. Experimental results with 768 images (6 cubes, 128 B-scan images for each cube) from 2 healthy persons, 2 age-related macular degeneration (AMD) and 2 diabetic retinopathy (DR) patients, and 210 B-scan images from another 8 healthy persons and 21 patients demonstrate that our method can achieve high segmentation accuracy. The mean choroid volume difference and overlap ratio for the 6 cubes between our proposed method and outlines drawn by experts were -1.96µm3 and 88.56%, respectively. Our method is effective for the 3D choroid segmentation of SD-OCT images because its segmentation accuracy and stability are comparable with manual segmentation. Copyright © 2017. Published by Elsevier B.V.

  3. The influence of tooth colour on the perceptions of personal characteristics among female dental patients: comparisons of unmodified, decayed and 'whitened' teeth.

    PubMed

    Kershaw, S; Newton, J T; Williams, D M

    2008-03-08

    Physical appearance plays a key role in human social interaction and the smile and teeth are important features in determining the attractiveness of a face. Furthermore, the mouth is thought to be important in social interactions. The purpose of this study was to determine the relationship between tooth colour and social perceptions. Cross-sectional survey. One hundred and eighty female participants viewed one of six images, either a male or a female digitally altered to display one of three possible dental statuses (unmodified, decayed, or whitened). The images were rated on four personality traits: social competence (SC), intellectual ability (IA), psychological adjustment (PA), and relationship satisfaction (RS). Decayed dental appearance led to more negative judgements over the four personality categories. Whitened teeth led to more positive appraisals. The gender of the image and the demographic background of the participant did not have a significant effect on appraisals. Tooth colour exerts an influence on social perceptions. The results may be explained by negative beliefs about dental decay, such as its link with poor oral hygiene.

  4. What you say matters: exploring visual-verbal interactions in visual working memory.

    PubMed

    Mate, Judit; Allen, Richard J; Baqués, Josep

    2012-01-01

    The aim of this study was to explore whether the content of a simple concurrent verbal load task determines the extent of its interference on memory for coloured shapes. The task consisted of remembering four visual items while repeating aloud a pair of words that varied in terms of imageability and relatedness to the task set. At test, a cue appeared that was either the colour or the shape of one of the previously seen objects, with participants required to select the object's other feature from a visual array. During encoding and retention, there were four verbal load conditions: (a) a related, shape-colour pair (from outside the experimental set, i.e., "pink square"); (b) a pair of unrelated but visually imageable, concrete, words (i.e., "big elephant"); (c) a pair of unrelated and abstract words (i.e., "critical event"); and (d) no verbal load. Results showed differential effects of these verbal load conditions. In particular, imageable words (concrete and related conditions) interfered to a greater degree than abstract words. Possible implications for how visual working memory interacts with verbal memory and long-term memory are discussed.

  5. Smartphone-based analysis of biochemical tests for health monitoring support at home.

    PubMed

    Velikova, Marina; Smeets, Ruben L; van Scheltinga, Josien Terwisscha; Lucas, Peter J F; Spaanderman, Marc

    2014-09-01

    In the context of home-based healthcare monitoring systems, it is desirable that the results obtained from biochemical tests - tests of various body fluids such as blood and urine - are objective and automatically generated to reduce the number of man-made errors. The authors present the StripTest reader - an innovative smartphone-based interpreter of biochemical tests based on paper-based strip colour using image processing techniques. The working principles of the reader include image acquisition of the colour strip pads using the camera phone, analysing the images within the phone and comparing them with reference colours provided by the manufacturer to obtain the test result. The detection of kidney damage was used as a scenario to illustrate the application of, and test, the StripTest reader. An extensive evaluation using laboratory and human urine samples demonstrates the reader's accuracy and precision of detection, indicating the successful development of a cheap, mobile and smart reader for home-monitoring of kidney functioning, which can facilitate the early detection of health problems and a timely treatment intervention.

  6. Joint multi-object registration and segmentation of left and right cardiac ventricles in 4D cine MRI

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Jan; Kepp, Timo; Schmidt-Richberg, Alexander; Handels, Heinz

    2014-03-01

    The diagnosis of cardiac function based on cine MRI requires the segmentation of cardiac structures in the images, but the problem of automatic cardiac segmentation is still open, due to the imaging characteristics of cardiac MR images and the anatomical variability of the heart. In this paper, we present a variational framework for joint segmentation and registration of multiple structures of the heart. To enable the simultaneous segmentation and registration of multiple objects, a shape prior term is introduced into a region competition approach for multi-object level set segmentation. The proposed algorithm is applied for simultaneous segmentation of the myocardium as well as the left and right ventricular blood pool in short axis cine MRI images. Two experiments are performed: first, intra-patient 4D segmentation with a given initial segmentation for one time-point in a 4D sequence, and second, a multi-atlas segmentation strategy is applied to unseen patient data. Evaluation of segmentation accuracy is done by overlap coefficients and surface distances. An evaluation based on clinical 4D cine MRI images of 25 patients shows the benefit of the combined approach compared to sole registration and sole segmentation.

  7. GPU accelerated fuzzy connected image segmentation by using CUDA.

    PubMed

    Zhuge, Ying; Cao, Yong; Miller, Robert W

    2009-01-01

    Image segmentation techniques using fuzzy connectedness principles have shown their effectiveness in segmenting a variety of objects in several large applications in recent years. However, one problem of these algorithms has been their excessive computational requirements when processing large image datasets. Commodity graphics hardware now provides high parallel computing power. In this paper, we present a parallel fuzzy connected image segmentation algorithm on Nvidia's Compute Unified Device Architecture (CUDA) platform for segmenting large medical image data sets. Our experiments based on three data sets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 7.2x, 7.3x, and 14.4x, respectively, for the three data sets over the sequential implementation of the fuzzy connected image segmentation algorithm on the CPU.
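
    For readers unfamiliar with fuzzy connectedness, the sequential Python sketch below shows the core idea that such GPU implementations parallelise: an affinity between neighbouring pixels (assumed Gaussian in the intensity difference here), a path strength equal to the weakest affinity along the path, and best-path propagation from a seed. This is a CPU reference only, not the CUDA kernel described in the paper.

        import heapq
        import numpy as np

        def fuzzy_connectedness(img, seed, sigma=0.1):
            # img: 2-D float array; seed: (row, col) tuple inside the object of interest
            conn = np.zeros(img.shape, dtype=float)
            conn[seed] = 1.0
            heap = [(-1.0, seed)]                      # max-heap via negated strengths
            while heap:
                strength, (r, c) = heapq.heappop(heap)
                strength = -strength
                if strength < conn[r, c]:
                    continue                           # stale entry
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]:
                        # affinity decays with intensity difference (assumed form)
                        affinity = np.exp(-((img[r, c] - img[nr, nc]) ** 2) / (2 * sigma ** 2))
                        cand = min(strength, affinity)  # path strength = weakest link
                        if cand > conn[nr, nc]:
                            conn[nr, nc] = cand
                            heapq.heappush(heap, (-cand, (nr, nc)))
            return conn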

  8. Proper use of colour schemes for image data visualization

    NASA Astrophysics Data System (ADS)

    Vozenilek, Vit; Vondrakova, Alena

    2018-04-01

    With the development of information and communication technologies, new technologies are leading to an exponential increase in the volume and types of data available. In today's information society, data is one of the most important inputs for policy making, crisis management, research and education, and many other fields. An essential task for experts is to share high-quality data that provide the right information at the right time. The design of a data presentation can strongly influence user perception and the cognitive aspects of data interpretation. Significant amounts of data can be visualised in some way, and one image can thus replace a considerable number of numeric tables and texts. The paper focuses on the accurate visualisation of data from the point of view of the colour schemes used. A poor choice of colours can easily confuse the user and lead to data misinterpretation; conversely, correctly created visualisations can make information transfer much simpler and more efficient.

  9. Compound image segmentation of published biomedical figures.

    PubMed

    Li, Pengyuan; Jiang, Xiangying; Kambhamettu, Chandra; Shatkay, Hagit

    2018-04-01

    Images convey essential information in biomedical publications. As such, there is a growing interest within the bio-curation and the bio-databases communities, to store images within publications as evidence for biomedical processes and for experimental results. However, many of the images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into constituent panels is an essential first step toward utilizing images. In this article, we develop a new compound image segmentation system, FigSplit, which is based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. The system is publicly available for use at: https://www.eecis.udel.edu/~compbio/FigSplit. The code is available upon request. shatkay@udel.edu. Supplementary data are available online at Bioinformatics.

  10. Automatic co-segmentation of lung tumor based on random forest in PET-CT images

    NASA Astrophysics Data System (ADS)

    Jiang, Xueqing; Xiang, Dehui; Zhang, Bin; Zhu, Weifang; Shi, Fei; Chen, Xinjian

    2016-03-01

    In this paper, a fully automatic method is proposed to segment the lung tumor in clinical 3D PET-CT images. The proposed method effectively combines PET and CT information to make full use of the high contrast of PET images and the superior spatial resolution of CT images. Our approach consists of three main parts: (1) initial segmentation, in which spines are removed in the CT images and initial connected regions are obtained by thresholding-based segmentation in the PET images; (2) coarse segmentation, in which a monotonic downhill function is applied to rule out structures that have standardized uptake values (SUV) similar to the lung tumor but do not satisfy a monotonic property in the PET images; and (3) fine segmentation, in which a random forest is applied to accurately segment the lung tumor by extracting effective features from the PET and CT images simultaneously. We validated our algorithm on a dataset of 24 3D PET-CT images from different patients with non-small cell lung cancer (NSCLC). The average TPVF, FPVF and accuracy rate (ACC) were 83.65%, 0.05% and 99.93%, respectively. The correlation analysis shows that our segmented lung tumor volumes have a strong correlation (average 0.985) with ground truth 1 and ground truth 2 labeled by a clinical expert.
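
    The fine-segmentation stage can be pictured as per-voxel classification with features drawn jointly from PET and CT. The sketch below, using scikit-learn's RandomForestClassifier, is purely illustrative: the feature set (raw intensities plus a small local mean) is an assumption and not the descriptors used by the authors.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.ensemble import RandomForestClassifier

        def voxel_features(pet, ct):
            # Joint PET/CT features per voxel: raw SUV and HU plus a 3-voxel local mean
            # (an assumed, illustrative feature set).
            return np.stack([pet, ct,
                             uniform_filter(pet.astype(float), size=3),
                             uniform_filter(ct.astype(float), size=3)], axis=-1)

        def train_cosegmentation_forest(pet, ct, labels):
            feats = voxel_features(pet, ct)
            X = feats.reshape(-1, feats.shape[-1])
            y = labels.reshape(-1)                   # 1 = tumor voxel, 0 = background
            clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
            clf.fit(X, y)
            return clf

        def predict_tumor_mask(clf, pet, ct):
            feats = voxel_features(pet, ct)
            pred = clf.predict(feats.reshape(-1, feats.shape[-1]))
            return pred.reshape(pet.shape).astype(bool)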

  11. Boundary segmentation for fluorescence microscopy using steerable filters

    NASA Astrophysics Data System (ADS)

    Ho, David Joon; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2017-02-01

    Fluorescence microscopy is used to image multiple subcellular structures in living cells that are not readily observed using conventional optical microscopy. Moreover, two-photon microscopy is widely used to image structures deeper in tissue. Recent advances in fluorescence microscopy have enabled the generation of large data sets of images at different depths, times, and spectral channels. Thus, automatic object segmentation is necessary, since manual segmentation would be inefficient and biased. However, automatic segmentation is still a challenging problem, as regions of interest may lack well-defined boundaries and may have non-uniform pixel intensities. This paper describes a method for segmenting tubular structures in fluorescence microscopy images of rat kidney and liver samples using adaptive histogram equalization, foreground/background segmentation, steerable filters to capture directional tendencies, and connected-component analysis. The results from several data sets demonstrate that our method can segment tubular boundaries successfully. Moreover, our method performs better than other popular image segmentation methods when evaluated against ground truth data obtained via manual segmentation.
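
    A first-order steerable filter can be realised as a linear combination of two derivative-of-Gaussian basis responses, which is what makes capturing directional tendencies cheap. The sketch below shows this property; sigma and the number of orientations are assumed parameters, and the rest of the pipeline (histogram equalisation, foreground/background separation, connected components) is not reproduced.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def steerable_response(img, theta, sigma=2.0):
            # Derivative-of-Gaussian response at orientation theta, built from the
            # two axis-aligned basis responses.
            gx = gaussian_filter(img.astype(float), sigma, order=(0, 1))  # d/dx basis
            gy = gaussian_filter(img.astype(float), sigma, order=(1, 0))  # d/dy basis
            return np.cos(theta) * gx + np.sin(theta) * gy

        def max_orientation_response(img, sigma=2.0, n_angles=8):
            # Maximum magnitude over a set of orientations: a simple directional
            # boundary-strength map for tubular structures.
            angles = np.linspace(0, np.pi, n_angles, endpoint=False)
            return np.max([np.abs(steerable_response(img, a, sigma)) for a in angles], axis=0)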

  12. Protoporphyrin-IX fluorescence guided surgical resection in high-grade gliomas: The potential impact of human colour perception.

    PubMed

    Petterssen, Max; Eljamel, Sarah; Eljamel, Sam

    2014-09-01

    Protoporphyrin-IX (Pp-IX) fluorescence has been used frequently in recent years to guide microsurgical resection of high-grade gliomas (HGG), particularly following the publication of a randomized controlled trial demonstrating its advantages. However, Pp-IX fluorescence depends upon the surgeon's perception of red fluorescent colour. This study was designed to evaluate human fluorescence colour perception and to establish a fluorescence scale. 20 of 108 pre-recorded images from intraoperative fluorescence of HGG were used to construct an 8-panel visual analogue fluorescence scale. The scale was validated by testing 56 participants with normal colour vision and three red-green colour-blind participants. For intra-rater agreement, ten participants were tested twice, and for inter-observer reliability the whole cohort was tested. The intra- and inter-observer reliability of the scale in participants with normal colour vision was excellent. The scale was less reliable in its violet-blue panels. Colour-blind participants were not able to distinguish between red fluorescence and blue-violet colours. The 8-panel fluorescence scale is valid for differentiating red, pink and blue colours in a fluorescent surgical field among participants with normal colour perception and is potentially useful for standardizing fluorescence-guided surgery. However, colour-blind surgeons should not use fluorescence-guided surgery. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Spatio-temporal colour correction of strongly degraded movies

    NASA Astrophysics Data System (ADS)

    Islam, A. B. M. Tariqul; Farup, Ivar

    2011-01-01

    The archives of motion pictures represent an important part of our cultural heritage. Unfortunately, these cinematographic collections are vulnerable to distortions such as colour fading, which is beyond the capability of photochemical restoration processes. Spatial colour algorithms such as Retinex and ACE provide helpful tools for restoring strongly degraded colour films, but there are some challenges associated with these algorithms. We present an automatic colour correction technique for digital colour restoration of strongly degraded movie material. The method is based upon the existing STRESS algorithm. In order to cope with the problem of highly correlated colour channels, we implemented a preprocessing step in which saturation enhancement is performed in a PCA space. Spatial colour algorithms tend to emphasize all details in the images, including dust and scratches. Surprisingly, we found that the presence of these defects does not affect the behaviour of the colour correction algorithm. Although the STRESS algorithm is already in itself more efficient than traditional spatial colour algorithms, it is still computationally expensive. To speed it up further, we went beyond the spatial domain of the frames and extended the algorithm to the temporal domain. This way, we were able to achieve an 80 percent reduction in computational time compared to processing every single frame individually. We performed two user experiments and found that the visual quality of the resulting frames was significantly better than with existing methods. Thus, our method outperforms the existing ones in terms of both visual quality and computational efficiency.

  14. Elimination of RF inhomogeneity effects in segmentation.

    PubMed

    Agus, Onur; Ozkan, Mehmed; Aydin, Kubilay

    2007-01-01

    Various methods have been proposed for the segmentation and analysis of MR images. However, the efficiency of these techniques is affected by artifacts that occur in the imaging system. One of the most frequently encountered problems is intensity variation across an image, and different methods are used to overcome it. In this paper we propose a method for the elimination of intensity artifacts in the segmentation of MR images. Inter-imager variations are also minimized to produce the same tissue segmentation for the same patient. A well-known multivariate classification algorithm, maximum likelihood, is employed to illustrate the enhancement in segmentation.

  15. FogBank: a single cell segmentation across multiple cell lines and image modalities.

    PubMed

    Chalfoun, Joe; Majurski, Michael; Dima, Alden; Stuelten, Christina; Peskin, Adele; Brady, Mary

    2014-12-30

    Many cell lines currently used in medical research, such as cancer cells or stem cells, grow in confluent sheets or colonies. The biology of individual cells provides valuable information, so the separation of touching cells in these microscopy images is critical for counting, identification and measurement of individual cells. Over-segmentation of single cells continues to be a major problem for methods based on the morphological watershed due to the high level of noise in microscopy cell images. There is a need for a new segmentation method that is robust over a wide variety of biological images and can accurately separate individual cells even in challenging datasets such as confluent sheets or colonies. We present a new automated segmentation method called FogBank that accurately separates cells that are confluent and touching each other. This technique is successfully applied to phase contrast, bright field, fluorescence microscopy and binary images. The method is based on morphological watershed principles with two new features to improve accuracy and minimize over-segmentation. First, FogBank uses histogram binning to quantize pixel intensities, which minimizes the image noise that causes over-segmentation. Second, FogBank uses a geodesic distance mask derived from the raw images to detect the shapes of individual cells, in contrast to the more linear cell edges that other watershed-like algorithms produce. We evaluated the segmentation accuracy against manually segmented datasets using two metrics. FogBank achieved segmentation accuracy on the order of 0.75 (1 being a perfect match). We compared our method with other available segmentation techniques in terms of performance over the reference data sets, and FogBank outperformed all related algorithms. The accuracy has also been visually verified on data sets with 14 cell lines across 3 imaging modalities, leading to 876 segmentation evaluation images. FogBank produces single-cell segmentation from confluent cell sheets with high accuracy. It can be applied to microscopy images of multiple cell lines and a variety of imaging modalities. The code for the segmentation method is available as open source and includes a graphical user interface for user-friendly execution.
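
    The first of FogBank's two ideas, histogram binning of pixel intensities, can be sketched in a few lines: quantising intensities into coarse levels means the subsequent watershed-style flooding descends in discrete "fog levels" rather than reacting to every noisy intensity step. The percentile-based binning and the bin count below are assumptions for illustration, not the released implementation.

        import numpy as np

        def quantize_intensities(img, n_bins=16):
            # Map each pixel to one of n_bins coarse intensity levels (0 .. n_bins-1)
            # using percentile bin edges, so noise within a bin is ignored.
            edges = np.percentile(img, np.linspace(0, 100, n_bins + 1))
            return np.digitize(img, edges[1:-1])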

  16. Image segmentation evaluation for very-large datasets

    NASA Astrophysics Data System (ADS)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis, there is a need for very large image datasets with documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual marking do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of the quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  17. Laser induced single spot oxidation of titanium

    NASA Astrophysics Data System (ADS)

    Jwad, Tahseen; Deng, Sunan; Butt, Haider; Dimov, S.

    2016-11-01

    Titanium oxides have a wide range of applications in industry, and they can be formed on pure titanium using different methods. Laser-induced oxidation is one of the most reliable methods due to its controllability and selectivity. Colour marking is one of the main applications of the oxidation process. However, colourizing processes based on laser scanning strategies are limited by the relatively large processed area in comparison to the beam size. Single-spot oxidation of titanium substrates is proposed in this research in order to increase the resolution of the processed area and also to address the requirements of potential new applications. The method is applied to produce oxide films with different thicknesses, and hence colours, on titanium substrates. A high-resolution colour image is imprinted on a sheet of pure titanium by converting its pixels' colours into laser parameter settings. Optical and morphological periodic surface structures are also produced by an array of oxide spots and then analysed. Two colours have been coded into one field, and the dependence of the reflected colours on the incident and azimuthal angles of the light is discussed. The findings are of interest to a range of application areas, as they can be used to imprint optical devices such as diffusers and Fresnel lenses on metallic surfaces as well as for colour marking.

  18. Validation tools for image segmentation

    NASA Astrophysics Data System (ADS)

    Padfield, Dirk; Ross, James

    2009-02-01

    A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics that compare the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiments framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied is able to outperform the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.
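
    The similarity metrics used in such validation are simple to compute; for instance, the Dice overlap between an automatic and a manual segmentation can be written as below. This is a generic formulation for illustration, not the toolkit's own code.

        import numpy as np

        def dice_coefficient(auto_mask, manual_mask):
            # Dice overlap between an automatic and a manual (gold standard) mask:
            # 1.0 is a perfect match, 0.0 is no overlap.
            a = np.asarray(auto_mask, dtype=bool)
            m = np.asarray(manual_mask, dtype=bool)
            total = a.sum() + m.sum()
            return 2.0 * np.logical_and(a, m).sum() / total if total else 1.0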

  19. 3D Texture Features Mining for MRI Brain Tumor Identification

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Saba, Tanzila; Nayer, Fatima; Syed, Afraz Zahra

    2014-03-01

    Medical image segmentation is a process to extract regions of interest and to divide an image into its individual, meaningful, homogeneous components. These components have a strong relationship with the objects of interest in the image. For computer-aided diagnosis and therapy, medical image segmentation is a mandatory initial step. It is a sophisticated and challenging task because of the complex nature of medical images, and successful medical image analysis depends heavily on segmentation accuracy. Texture is one of the major features used to identify regions of interest in an image or to classify an object, but 2D texture features yield poor classification results. Hence, this paper presents 3D feature extraction using texture analysis, with an SVM as the segmentation technique in the testing methodology.

  20. Development of a novel 2D color map for interactive segmentation of histological images.

    PubMed

    Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H; Wang, May D

    2012-05-01

    We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad hoc. Many algorithms, like K-means, use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our 2D color map interactive segmentation technique, based on human color perception and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. Our method's results show good accuracy with low response and computational times, making it a feasible method for user-interactive applications involving segmentation of histological images.

  1. Live minimal path for interactive segmentation of medical images

    NASA Astrophysics Data System (ADS)

    Chartrand, Gabriel; Tang, An; Chav, Ramnada; Cresson, Thierry; Chantrel, Steeve; De Guise, Jacques A.

    2015-03-01

    Medical image segmentation is nowadays required for medical device development and in a growing number of clinical and research applications. Since dedicated automatic segmentation methods are not always available, generic and efficient interactive tools can alleviate the burden of manual segmentation. In this paper we propose an interactive segmentation tool based on image warping and minimal path segmentation that is efficient for a wide variety of segmentation tasks. While the user roughly delineates the desired organ's boundary, a narrow band along the cursor's path is straightened, providing an ideal subspace for feature-aligned filtering and the minimal path algorithm. Once the segmentation is performed on the narrow band, the path is warped back onto the original image, precisely delineating the desired structure. This tool was found to have a highly intuitive dynamic behavior. It is especially robust against misleading edges and requires only coarse interaction from the user to achieve good precision. The proposed segmentation method was tested on 10 difficult liver segmentations in CT and MRI images, and the resulting 2D overlap Dice coefficient was 99% on average.
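
    The minimal-path step itself can be illustrated with scikit-image: a cost image that is cheap along strong edges lets a shortest path between two user-placed points hug the organ boundary. The sketch below works on the raw image grid and omits the narrow-band straightening that gives the paper's tool its dynamic behaviour; the gradient-based cost is an assumed choice.

        import numpy as np
        from skimage.filters import sobel
        from skimage.graph import route_through_array

        def boundary_path(image, start, end):
            # Cost image: low on strong edges, high elsewhere, so the minimal path
            # prefers to travel along the boundary between the two points.
            edge_strength = sobel(image.astype(float))
            cost = 1.0 / (edge_strength + 1e-3)
            path, total_cost = route_through_array(cost, start, end, fully_connected=True)
            return np.array(path), total_cost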

  2. SU-E-J-142: Performance Study of Automatic Image-Segmentation Algorithms in Motion Tracking Via MR-IGRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Y; Olsen, J.; Parikh, P.

    2014-06-01

    Purpose: Evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay) and compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of bladder, kidney, duodenum, and liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames was selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, expert manual contouring of the organs or tumor by a physician was used as ground truth. Metric values of sensitivity, specificity, Jaccard similarity, and Dice coefficient were computed for comparison. Results: In the segmentation of single image frames, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment a liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT. Future studies include a selection of conformal segmentation methods based on image/organ-specific information, different filtering methods and their influences on the segmentation results. Parag Parikh receives research grant from ViewRay. Sasa Mutic has consulting and research agreements with ViewRay. Yanle Hu receives travel reimbursement from ViewRay. Iwan Kawrakow and James Dempsey are ViewRay employees.

  3. Automatic tissue image segmentation based on image processing and deep learning

    NASA Astrophysics Data System (ADS)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in multimodality imaging, especially in the fusion of structural images from CT and MRI with functional images collected by optical or other novel imaging technologies. Image segmentation also provides a detailed structural description for the quantitative visualization of treatment light distribution in the human body when combined with 3D light transport simulation. Here we used image enhancement, operators, and morphometry methods to extract accurate contours of different tissues such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM) from 5 fMRI head image datasets. We then utilized a convolutional neural network to perform automatic segmentation of the images in a deep learning manner, and introduced parallel computing. Such approaches greatly reduced the processing time compared to manual and semi-automatic segmentation, which is of great importance for improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of such combined image processing and deep learning for automatic tissue image segmentation in personalized medicine, especially in monitoring and treatment.

  4. Comparison and assessment of semi-automatic image segmentation in computed tomography scans for image-guided kidney surgery.

    PubMed

    Glisson, Courtenay L; Altamar, Hernan O; Herrell, S Duke; Clark, Peter; Galloway, Robert L

    2011-11-01

    Image segmentation is integral to implementing intraoperative guidance for kidney tumor resection. Results seen in computed tomography (CT) data are affected by target organ physiology as well as by the segmentation algorithm used. This work studies variables involved in using level set methods found in the Insight Toolkit to segment kidneys from CT scans and applies the results to an image guidance setting. A composite algorithm drawing on the strengths of multiple level set approaches was built using the Insight Toolkit. This algorithm requires image contrast state and seed points to be identified as input, and functions independently thereafter, selecting and altering method and variable choice as needed. Semi-automatic results were compared to expert hand segmentation results directly and by the use of the resultant surfaces for registration of intraoperative data. Direct comparison using the Dice metric showed average agreement of 0.93 between semi-automatic and hand segmentation results. Use of the segmented surfaces in closest point registration of intraoperative laser range scan data yielded average closest point distances of approximately 1 mm. Application of both inverse registration transforms from the previous step to all hand segmented image space points revealed that the distance variability introduced by registering to the semi-automatically segmented surface versus the hand segmented surface was typically less than 3 mm both near the tumor target and at distal points, including subsurface points. Use of the algorithm shortened user interaction time and provided results which were comparable to the gold standard of hand segmentation. Further, the use of the algorithm's resultant surfaces in image registration provided comparable transformations to surfaces produced by hand segmentation. These data support the applicability and utility of such an algorithm as part of an image guidance workflow.

  5. Dual multispectral and 3D structured light laparoscope

    NASA Astrophysics Data System (ADS)

    Clancy, Neil T.; Lin, Jianyu; Arya, Shobhit; Hanna, George B.; Elson, Daniel S.

    2015-03-01

    Intraoperative feedback on tissue function, such as blood volume and oxygenation, would be useful to the surgeon in cases where current clinical practice relies on subjective measures, such as identification of ischaemic bowel or tissue viability during anastomosis formation. Tissue surface profiling may also be used to detect and identify certain pathologies, as well as to diagnose aspects of tissue health such as gut motility. In this paper a dual-modality laparoscopic system is presented that combines multispectral reflectance and 3D surface imaging. White light illumination from a xenon source is detected by a laparoscope-mounted fast filter wheel camera to assemble a multispectral image (MSI) cube. Surface shape is then calculated using a spectrally encoded structured light (SL) pattern detected by the same camera and triangulated using an active stereo technique. Images of porcine small bowel were acquired during open surgery. Tissue reflectance spectra were acquired and blood volume was calculated at each spatial pixel across the bowel wall and mesentery. SL features were segmented and identified using a 'normalised cut' algorithm and the colour vector of each spot. Using the 3D geometry defined by the camera coordinate system, the multispectral data could be overlaid onto the surface mesh. Dual MSI and SL imaging has the potential to provide augmented views to the surgeon, supplying diagnostic information related to blood supply health and organ function. Future work on this system will include filter optimisation to reduce noise in tissue optical property measurements and to minimise spot identification errors in the SL pattern.

  6. Red is not a proxy signal for female genitalia in humans.

    PubMed

    Johns, Sarah E; Hargrave, Lucy A; Newton-Fisher, Nicholas E

    2012-01-01

    Red is a colour that induces physiological and psychological effects in humans, affecting competitive and sporting success, signalling and enhancing male social dominance. The colour is also associated with increased sexual attractiveness, such that women associated with red objects or contexts are regarded as more desirable. It has been proposed that human males have a biological predisposition towards the colour red such that it is 'sexually salient'. This hypothesis argues that women use the colour red to announce impending ovulation and sexual proceptivity, with this functioning as a proxy signal for genital colour, and that men show increased attraction in consequence. In the first test of this hypothesis, we show that contrary to the hypothesis, heterosexual men did not prefer redder female genitalia and, by extension, that red is not a proxy signal for genital colour. We found a relative preference for pinker genital images with redder genitalia rated significantly less sexually attractive. This effect was independent of raters' prior sexual experience and variation in female genital morphology. Our results refute the hypothesis that men's attraction to red is linked to an implied relationship to genital colour and women's signalling of fertility and sexual proceptivity.

  7. Task-oriented lossy compression of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  8. [Evaluation of Image Quality of Readout Segmented EPI with Readout Partial Fourier Technique].

    PubMed

    Yoshimura, Yuuki; Suzuki, Daisuke; Miyahara, Kanae

    Readout segmented EPI (readout segmentation of long variable echo-trains: RESOLVE) segments k-space in the readout direction. By using the partial Fourier method in the readout direction, the imaging time can be shortened. However, there is concern about the effect of insufficient data sampling on image quality. We changed the setting of the readout partial Fourier method in each segment and examined the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and distortion ratio for changes in image quality due to differences in data sampling. As the number of sampled segments decreased, SNR and CNR decreased, while the distortion ratio did not change. The image quality with the minimum number of sampled segments differs greatly from that with full data sampling, and caution is required when using it.

  9. Enhancement of the resolution of full-field optical coherence tomography by using a colour image sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalyanov, A L; Lychagov, V V; Smirnov, I V

    2013-08-31

    The influence of the white balance of a colour image detector on the resolution of a full-field optical coherence tomograph (FFOCT) is studied. The change in the interference pulse width as a function of the white balance tuning is estimated for a thermal radiation source (incandescent lamp) and for a white light-emitting diode. It is shown that by tuning the white balance of the detector within a certain range, the FFOCT resolution can be increased by 20% compared to the resolution attained with a monochrome detector.

  10. An Intelligent Decision Support System for Leukaemia Diagnosis using Microscopic Blood Images.

    PubMed

    Chin Neoh, Siew; Srisukkham, Worawut; Zhang, Li; Todryk, Stephen; Greystoke, Brigit; Peng Lim, Chee; Alamgir Hossain, Mohammed; Aslam, Nauman

    2015-10-09

    This research proposes an intelligent decision support system for acute lymphoblastic leukaemia diagnosis from microscopic blood images. A novel clustering algorithm with stimulating discriminant measures (SDM) of both within- and between-cluster scatter variances is proposed to produce robust segmentation of the nucleus and cytoplasm of lymphocytes/lymphoblasts. Specifically, the proposed between-cluster evaluation is formulated based on the trade-off of several between-cluster measures of well-known feature extraction methods. The SDM measures are used in conjunction with a Genetic Algorithm for clustering nucleus, cytoplasm, and background regions. Subsequently, a total of eighty features consisting of shape, texture, and colour information of the nucleus and cytoplasm sub-images are extracted. A number of classifiers (multi-layer perceptron, Support Vector Machine (SVM) and Dempster-Shafer ensemble) are employed for lymphocyte/lymphoblast classification. Evaluated with the ALL-IDB2 database, the proposed SDM-based clustering overcomes the shortcomings of Fuzzy C-means, which focuses purely on within-cluster scatter variance. It also outperforms Linear Discriminant Analysis and Fuzzy Compactness and Separation for nucleus-cytoplasm separation. The overall system achieves superior recognition rates of 96.72% and 96.67% accuracy using bootstrapping and 10-fold cross validation with Dempster-Shafer and SVM, respectively. The results also compare favourably with those reported in the literature, indicating the usefulness of the proposed SDM-based clustering method.

  11. An Intelligent Decision Support System for Leukaemia Diagnosis using Microscopic Blood Images

    PubMed Central

    Chin Neoh, Siew; Srisukkham, Worawut; Zhang, Li; Todryk, Stephen; Greystoke, Brigit; Peng Lim, Chee; Alamgir Hossain, Mohammed; Aslam, Nauman

    2015-01-01

    This research proposes an intelligent decision support system for acute lymphoblastic leukaemia diagnosis from microscopic blood images. A novel clustering algorithm with stimulating discriminant measures (SDM) of both within- and between-cluster scatter variances is proposed to produce robust segmentation of the nucleus and cytoplasm of lymphocytes/lymphoblasts. Specifically, the proposed between-cluster evaluation is formulated based on the trade-off of several between-cluster measures of well-known feature extraction methods. The SDM measures are used in conjunction with a Genetic Algorithm for clustering nucleus, cytoplasm, and background regions. Subsequently, a total of eighty features consisting of shape, texture, and colour information of the nucleus and cytoplasm sub-images are extracted. A number of classifiers (multi-layer perceptron, Support Vector Machine (SVM) and Dempster-Shafer ensemble) are employed for lymphocyte/lymphoblast classification. Evaluated with the ALL-IDB2 database, the proposed SDM-based clustering overcomes the shortcomings of Fuzzy C-means, which focuses purely on within-cluster scatter variance. It also outperforms Linear Discriminant Analysis and Fuzzy Compactness and Separation for nucleus-cytoplasm separation. The overall system achieves superior recognition rates of 96.72% and 96.67% accuracy using bootstrapping and 10-fold cross validation with Dempster-Shafer and SVM, respectively. The results also compare favourably with those reported in the literature, indicating the usefulness of the proposed SDM-based clustering method. PMID:26450665

  12. Validation results of satellite mock-up capturing experiment using nets

    NASA Astrophysics Data System (ADS)

    Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil

    2017-05-01

    The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. This validation was performed through a set of experiments under microgravity conditions in which a net was launched to capture and wrap a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment was performed over thirty parabolas offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launch angles using a dedicated pneumatic mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow post-processing of the images using colour segmentation, stereo matching and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired were post-processed to accurately determine the initial conditions and to generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator was configured according to the parabolic flight scenario and executed in order to generate the validation data. Both datasets were compared according to different metrics in order to perform the validation of the PATENDER simulator.

  13. Detection of bone disease by hybrid SST-watershed x-ray image segmentation

    NASA Astrophysics Data System (ADS)

    Sanei, Saeid; Azron, Mohammad; Heng, Ong Sim

    2001-07-01

    Detection of diagnostic features from X-ray images is attractive due to the low cost of these images. Accurate detection of bone metastasis regions greatly assists physicians in monitoring treatment and in removing cancerous tissue by surgery. A hybrid SST-watershed algorithm is shown here to efficiently detect the boundary of the diseased regions. The Shortest Spanning Tree (SST), based on graph theory, is one of the most powerful tools in grey-level image segmentation. The method converts the image into arbitrarily shaped closed segments of distinct grey levels. To do this, the image is initially mapped to a tree, and the RSST algorithm then segments the image into a certain number of arbitrarily shaped regions. However, in fine segmentation, over-segmentation causes loss of objects of interest, while in coarse segmentation the SST-based method suffers from merging regions that belong to different objects. By applying the watershed algorithm, large segments are divided into smaller regions based on the number of catchment basins in each segment. The process exploits a bi-level watershed concept to separate each multi-lobe region into a number of areas, each corresponding to an object (in our case, a cancerous region of the bone), disregarding their homogeneity in grey level.
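
    The spanning-tree part of the hybrid can be sketched with SciPy: pixels become graph nodes, 4-neighbour edges are weighted by grey-level difference, a minimum spanning tree is built, and cutting its heaviest edges yields the segments. The fixed cut threshold below is an assumed simplification of the SST/RSST region-count control, and the watershed refinement stage is not shown.

        import numpy as np
        from scipy.sparse import coo_matrix
        from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

        def sst_segment(img, cut_threshold=10.0):
            # img: 2-D grey-level array; returns an integer label image.
            h, w = img.shape
            idx = np.arange(h * w).reshape(h, w)
            rows, cols, weights = [], [], []
            # horizontal and vertical neighbour edges weighted by grey-level difference
            rows += [idx[:, :-1].ravel(), idx[:-1, :].ravel()]
            cols += [idx[:, 1:].ravel(), idx[1:, :].ravel()]
            weights += [np.abs(np.diff(img.astype(float), axis=1)).ravel(),
                        np.abs(np.diff(img.astype(float), axis=0)).ravel()]
            rows, cols, weights = map(np.concatenate, (rows, cols, weights))
            graph = coo_matrix((weights + 1e-6, (rows, cols)), shape=(h * w, h * w))
            mst = minimum_spanning_tree(graph).tocoo()
            keep = mst.data <= cut_threshold          # cut edges heavier than the threshold
            pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                                shape=(h * w, h * w))
            n_labels, labels = connected_components(pruned, directed=False)
            return labels.reshape(h, w)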

  14. Finite grade pheromone ant colony optimization for image segmentation

    NASA Astrophysics Data System (ADS)

    Yuanjing, F.; Li, Y.; Liangjun, K.

    2008-06-01

    By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on the active contour model (ACM), an algorithm called finite grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into a finite number of grades; pheromone updating is achieved by changing the grade, and the updated quantity of pheromone is independent of the objective function. The algorithm, which provides a new approach to obtaining precise contours, is proved to converge linearly to the global optimum by means of finite Markov chains. Segmentation experiments with ultrasound heart images show the effectiveness of the algorithm. Comparing the results for segmentation of left ventricle images shows that the ACO approach to image segmentation is more effective than the GA approach, and that the new pheromone updating strategy gives good time performance in the optimization process.

  15. Weighted image de-fogging using luminance dark prior

    NASA Astrophysics Data System (ADS)

    Kansal, Isha; Kasana, Singara Singh

    2017-10-01

    In this work, the weighted image de-fogging process based upon the dark channel prior (DCP) is modified by using a luminance dark prior. The dark channel prior estimates the transmission using all three colour channels, whereas the luminance dark prior does the same using only the Y component of the YUV colour space. For each pixel in a patch, the luminance dark prior therefore processes only the Y values rather than the values of all three channels used in the DCP technique, which speeds up the de-fogging process. To estimate the transmission map, a weighted approach based upon a difference prior is used, which mitigates halo artefacts at the time of transmission estimation. The major drawback of the weighted technique is that it does not keep the transmission constant within a local patch even when there are no significant depth discontinuities, so the de-fogged image looks over-smoothed and has low contrast. Apart from this, in some images the weighted transmission still carries faint halo artefacts. Therefore, a Gaussian filter is used to blur the estimated weighted transmission map, which enhances the contrast of the de-fogged images. In addition, a novel approach is proposed to remove pixels belonging to bright light sources during the atmospheric light estimation process, based upon the histogram of the YUV colour space. To show its effectiveness, the proposed technique is compared with existing techniques; this comparison shows that the proposed technique performs better than the existing ones.
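
    A minimal sketch of transmission estimation with a luminance dark prior is given below: only the Y (luma) channel is minimum-filtered over a patch instead of all three colour channels. The patch size, omega and the percentile-based atmospheric-light estimate are assumed values following the common dark-channel formulation, not the weighted scheme of this paper.

        import numpy as np
        from scipy.ndimage import minimum_filter

        def luminance_dark_prior_transmission(rgb, patch=15, omega=0.95):
            # rgb: H x W x 3 uint8 image; returns a per-pixel transmission estimate.
            rgb = rgb.astype(float) / 255.0
            Y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
            dark = minimum_filter(Y, size=patch)        # luminance dark prior
            A = np.percentile(Y, 99.9)                  # crude atmospheric light estimate (assumed)
            transmission = 1.0 - omega * dark / max(A, 1e-6)
            return np.clip(transmission, 0.1, 1.0)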

  16. A digital image-based method for determining of total acidity in red wines using acid-base titration without indicator.

    PubMed

    Tôrres, Adamastor Rodrigues; Lyra, Wellington da Silva; de Andrade, Stéfani Iury Evangelista; Andrade, Renato Allan Navarro; da Silva, Edvan Cirino; Araújo, Mário César Ugulino; Gaião, Edvaldo da Nóbrega

    2011-05-15

    This work proposes a digital image-based method for determination of total acidity in red wines by means of acid-base titration, without using an external indicator or any pre-treatment of the sample. Digital images capture the colour of the emergent radiation, which is complementary to the radiation absorbed by the anthocyanins present in wines. Anthocyanins change colour depending on the pH of the medium, and from the variation of colour in the images obtained during titration the end point can be localized with accuracy and precision. RGB-based values were employed to build titration curves, and end points were localized using second-derivative curves. The official method recommends potentiometric titration with a NaOH standard solution and sample dilution until the pH reaches 8.2-8.4. In order to illustrate the feasibility of the proposed method, titrations of ten red wines were carried out. Results were compared with the reference method, and no statistically significant difference was observed between the results when applying the paired t-test at the 95% confidence level. The proposed method yielded more precise results than the official method, owing to the trivariate nature of the RGB measurements associated with digital images. Copyright © 2011 Elsevier B.V. All rights reserved.
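
    The end-point localisation can be pictured as follows: a scalar signal is formed from the RGB means recorded at each titrant volume and its second derivative is inspected for the sharpest change. Reducing RGB to its vector norm is an illustrative choice made here; the paper works with RGB-based titration curves directly.

        import numpy as np

        def titration_endpoint(volumes, rgb_means):
            # volumes: titrant volumes; rgb_means: matching list of (R, G, B) mean values.
            volumes = np.asarray(volumes, dtype=float)
            signal = np.linalg.norm(np.asarray(rgb_means, dtype=float), axis=1)
            first = np.gradient(signal, volumes)
            second = np.gradient(first, volumes)
            # End point: volume where the second derivative changes most sharply.
            return volumes[np.argmax(np.abs(second))]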

  17. Joint Segmentation of Anatomical and Functional Images: Applications in Quantification of Lesions from PET, PET-CT, MRI-PET, and MRI-PET-CT Images

    PubMed Central

    Bagci, Ulas; Udupa, Jayaram K.; Mendhiratta, Neil; Foster, Brent; Xu, Ziyue; Yao, Jianhua; Chen, Xinjian; Mollura, Daniel J.

    2013-01-01

    We present a novel method for the joint segmentation of anatomical and functional images. Our proposed methodology unifies the domains of anatomical and functional images, represents them in a product lattice, and performs simultaneous delineation of regions based on random walk image segmentation. Furthermore, we also propose a simple yet effective object/background seed localization method to make the proposed segmentation process fully automatic. Our study uses PET, PET-CT, MRI-PET, and fused MRI-PET-CT scans (77 studies in all) from 56 patients who had various lesions in different body regions. We validated the effectiveness of the proposed method on different PET phantoms as well as on clinical images with respect to the ground truth segmentation provided by clinicians. Experimental results indicate that the presented method is superior to the threshold and Bayesian methods commonly used in PET image segmentation, is more accurate and robust compared to the other PET-CT segmentation methods recently published in the literature, and is general in the sense that it simultaneously segments multiple scans in real time with the high accuracy needed in routine clinical use. PMID:23837967

  18. Multi-atlas based segmentation using probabilistic label fusion with adaptive weighting of image similarity measures.

    PubMed

    Sjöberg, C; Ahnesjö, A

    2013-06-01

    Label fusion multi-atlas approaches to image segmentation can give better segmentation results than single-atlas methods. We present a multi-atlas label fusion strategy based on probabilistic weighting of distance maps. Relationships between image similarities and segmentation similarities are estimated in a learning phase and used to derive fusion weights that are proportional to the probability of each atlas improving the segmentation result. The method was tested using a leave-one-out strategy on a database of 21 pre-segmented prostate patients for different image registrations combined with different image similarity scores. The probabilistic weighting yields results that are equal to or better than both fusion with equal weights and the STAPLE algorithm. The results demonstrate that label fusion by weighted distance maps is feasible, and that probabilistic weighted fusion improves segmentation quality more strongly the more the individual atlas segmentation quality depends on the corresponding registered image similarity. The regions used for evaluation of the image similarity measures were found to be more important than the choice of similarity measure. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
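
    The distance-map fusion idea can be sketched as follows: each registered atlas mask is converted to a signed distance map, the maps are combined with per-atlas weights, and the fused object is the region where the weighted sum is positive. The weights passed in below are placeholders for the probabilistic, similarity-derived weights learned in the paper.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def fuse_atlas_labels(atlas_masks, weights):
            # atlas_masks: list of registered binary masks; weights: one weight per atlas.
            fused = np.zeros(atlas_masks[0].shape, dtype=float)
            for mask, w in zip(atlas_masks, weights):
                mask = mask.astype(bool)
                # signed distance: positive inside the object, negative outside
                signed = distance_transform_edt(mask) - distance_transform_edt(~mask)
                fused += w * signed
            return fused > 0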

  19. Early evolution of multifocal optics for well-focused colour vision in vertebrates.

    PubMed

    Gustafsson, O S E; Collin, S P; Kröger, R H H

    2008-05-01

    Jawless fishes (Agnatha; lampreys and hagfishes) most closely resemble the earliest stage in vertebrate evolution and lamprey-like animals already existed in the Lower Cambrian [about 540 million years ago (MYA)]. Agnathans are thought to have separated from the main vertebrate lineage at least 500 MYA. Hagfishes have primitive eyes, but the eyes of adult lampreys are well-developed. The southern hemisphere lamprey, Geotria australis, possesses five types of opsin genes, three of which are clearly orthologous to the opsin genes of jawed vertebrates. This suggests that the last common ancestor of all vertebrate lineages possessed a complex colour vision system. In the eyes of many bony fishes and tetrapods, well-focused colour images are created by multifocal crystalline lenses that compensate for longitudinal chromatic aberration. To trace the evolutionary origins of multifocal lenses, we studied the optical properties of the lenses in four species of lamprey (Geotria australis, Mordacia praecox, Lampetra fluviatilis and Petromyzon marinus), with representatives from all three of the extant lamprey families. Multifocal lenses are present in all lampreys studied. This suggests that the ability to create well-focused colour images with multifocal optical systems also evolved very early.

  20. Monitoring the North Atlantic using ocean colour data

    NASA Astrophysics Data System (ADS)

    Fuentes-Yaco, C.; Caverhill, C.; Maass, H.; Porter, C.; White, GN, III

    2016-04-01

    The Remote Sensing Unit (RSU) at the Bedford Institute of Oceanography (BIO) has been monitoring the North Atlantic using ocean colour products for decades. Optical sensors used include CZCS, POLDER, SeaWiFS, MODIS/Aqua and MERIS. The monitoring area is defined by the Atlantic Zone Monitoring Program (AZMP), but certain products extend into Arctic waters and all Canadian waters, including the Pacific coast. RSU provides Level 3 images for various products in several formats and a range of temporal and spatial resolutions. Basic statistics for pre-defined areas of interest are compiled for each product. Climatologies and anomaly maps are also routinely produced, and custom products are delivered by request. RSU is involved in the generation of Level 4 products, such as characterizing the phenology of spring and fall phytoplankton blooms, computing primary production, using ocean colour to aid in EBSA (Ecologically and Biologically Significant Area) definition and developing habitat suitability maps. Upcoming operational products include maps of diatom distribution, biogeochemical province boundaries, and products from sensors such as VIIRS (Visible Infrared Imaging Radiometer Suite), OLCI (Ocean Land Colour Instrument), and PACE (Pre-Aerosol, Clouds and ocean Ecosystem) hyperspectral microsatellite mission.

  1. Infrared image segmentation method based on spatial coherence histogram and maximum entropy

    NASA Astrophysics Data System (ADS)

    Liu, Songtao; Shen, Tongsheng; Dai, Yao

    2014-11-01

    In order to segment the target well and suppress background noise effectively, an infrared image segmentation method based on a spatial coherence histogram and maximum entropy is proposed. First, the spatial coherence histogram is constructed by weighting the contribution of pixels with the same gray level according to their position, which is obtained by computing their local density. Then, after enhancing the image with the spatial coherence histogram, the 1D maximum entropy method is used to segment the image. The novel method not only obtains better segmentation results but also has a faster computation time than traditional 2D histogram-based segmentation methods.
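
    For reference, the 1D maximum entropy step can be sketched as below with a plain Kapur-style implementation on an ordinary gray-level histogram; the spatial coherence weighting described in the abstract is not included.

        import numpy as np

        def max_entropy_threshold(image):
            """Gray level maximizing the sum of foreground and background
            entropies (Kapur's 1D maximum entropy criterion, 8-bit input)."""
            hist, _ = np.histogram(image, bins=256, range=(0, 256))
            p = hist.astype(float) / hist.sum()
            best_t, best_h = 0, -np.inf
            for t in range(1, 256):
                p0, p1 = p[:t].sum(), p[t:].sum()
                if p0 == 0 or p1 == 0:
                    continue
                q0 = p[:t][p[:t] > 0] / p0
                q1 = p[t:][p[t:] > 0] / p1
                h = -np.sum(q0 * np.log(q0)) - np.sum(q1 * np.log(q1))
                if h > best_h:
                    best_h, best_t = h, t
            return best_t

        # Usage on an 8-bit infrared frame `ir` held in a numpy array:
        # target_mask = ir >= max_entropy_threshold(ir)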

  2. Graph run-length matrices for histopathological image segmentation.

    PubMed

    Tosun, Akif Burak; Gunduz-Demir, Cigdem

    2011-03-01

    The histopathological examination of tissue specimens is essential for cancer diagnosis and grading. However, this examination is subject to a considerable amount of observer variability as it mainly relies on visual interpretation of pathologists. To alleviate this problem, it is very important to develop computational quantitative tools, for which image segmentation constitutes the core step. In this paper, we introduce an effective and robust algorithm for the segmentation of histopathological tissue images. This algorithm incorporates the background knowledge of the tissue organization into segmentation. For this purpose, it quantifies spatial relations of cytological tissue components by constructing a graph and uses this graph to define new texture features for image segmentation. This new texture definition makes use of the idea of gray-level run-length matrices. However, it considers the runs of cytological components on a graph to form a matrix, instead of considering the runs of pixel intensities. Working with colon tissue images, our experiments demonstrate that the texture features extracted from "graph run-length matrices" lead to high segmentation accuracies, also providing a reasonable number of segmented regions. Compared with four other segmentation algorithms, the results show that the proposed algorithm is more effective in histopathological image segmentation.
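
    The texture definition generalizes the classical gray-level run-length matrix from pixel runs to runs of cytological components on a graph. A minimal sketch of the classical, pixel-based matrix that the paper builds on is shown below (horizontal runs on a quantized 2D patch); the graph-based variant of the paper is not reproduced.

        import numpy as np

        def run_length_matrix(image, n_levels):
            """Horizontal gray-level run-length matrix.
            Entry [g, r-1] counts maximal runs of length r at quantized level g."""
            max_run = image.shape[1]
            rlm = np.zeros((n_levels, max_run), dtype=int)
            for row in image:
                level, length = row[0], 1
                for value in row[1:]:
                    if value == level:
                        length += 1
                    else:
                        rlm[level, length - 1] += 1
                        level, length = value, 1
                rlm[level, length - 1] += 1
            return rlm

        # Example: a 4-level quantized patch.
        patch = np.array([[0, 0, 1, 1, 1],
                          [2, 2, 2, 3, 3],
                          [0, 1, 1, 0, 0]])
        print(run_length_matrix(patch, n_levels=4))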

  3. Segmentation of radiologic images with self-organizing maps: the segmentation problem transformed into a classification task

    NASA Astrophysics Data System (ADS)

    Pelikan, Erich; Vogelsang, Frank; Tolxdorff, Thomas

    1996-04-01

    The texture-based segmentation of x-ray images of focal bone lesions using topological maps is introduced. Texture characteristics are described by image-point correlation of feature images to feature vectors. For the segmentation, the topological map is labeled using an improved labeling strategy. Results of the technique are demonstrated on original and synthetic x-ray images and quantified with the aid of quality measures. In addition, a classifier-specific contribution analysis is applied for assessing the feature space.

  4. Experimenting with cameraless photography using turmeric and borax: an introduction to photophysics

    NASA Astrophysics Data System (ADS)

    Appleyard, S. J.

    2012-07-01

    An alcoholic extract of the spice turmeric can be used to create a light-sensitive dye that can be used to stain paper. On exposure to sunlight, the dyed paper can be used to capture photographic images of flat objects or reproduce existing images through the preferential degradation of the dye in light-exposed areas over a time period of a few hours. The images can be developed and preserved by spraying the exposed paper with a dilute solution of borax, which forms coloured organo-boron complexes that limit further degradation of the dye and enhance the colour of the image. Similar photochemical reactions that lead to the degradation of the turmeric dye can also be used for reducing the organic pollution load in wastewater produced by many industrial processes and in dye-sensitized solar cells for producing electricity.

  5. Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation

    PubMed Central

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-01-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multimodality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829
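
    A compact PyTorch sketch in the spirit of the described approach: a patch classifier with three input channels (T1, T2, FA) and four output classes (background, WM, GM, CSF). The layer sizes and the 27x27 patch dimension are illustrative assumptions, not the architecture reported in the paper.

        import torch
        import torch.nn as nn

        class MultiModalityPatchCNN(nn.Module):
            """Toy 2D patch classifier: T1/T2/FA patches -> tissue class of the centre voxel."""
            def __init__(self, n_classes=4):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(64 * 5 * 5, 128), nn.ReLU(),
                    nn.Linear(128, n_classes),
                )

            def forward(self, x):              # x: (batch, 3, 27, 27) patches
                return self.classifier(self.features(x))

        # One training step on random stand-in data.
        model = MultiModalityPatchCNN()
        patches = torch.randn(8, 3, 27, 27)    # placeholder T1/T2/FA patches
        labels = torch.randint(0, 4, (8,))     # placeholder tissue labels
        loss = nn.CrossEntropyLoss()(model(patches), labels)
        loss.backward()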

  6. An algorithm for calculi segmentation on ureteroscopic images.

    PubMed

    Rosa, Benoît; Mozer, Pierre; Szewczyk, Jérôme

    2011-03-01

    The purpose of the study is to develop an algorithm for the segmentation of renal calculi on ureteroscopic images. In fact, renal calculi are a common source of urological obstruction, and laser lithotripsy during ureteroscopy is a possible therapy. A laser-based system to sweep the calculus surface and vaporize it was developed to automate a very tedious manual task. The distal tip of the ureteroscope is directed using image guidance, and this operation is not possible without an efficient segmentation of renal calculi on the ureteroscopic images. We proposed and developed a region growing algorithm to segment renal calculi on ureteroscopic images. Using real video images to compute ground truth and compare our segmentation with a reference segmentation, we computed statistics on different image metrics, such as Precision, Recall, and Yasnoff Measure, for comparison with ground truth. The algorithm and its parameters were established for the most likely clinical scenarios. The segmentation results are encouraging: the developed algorithm was able to correctly detect more than 90% of the surface of the calculi, according to an expert observer. Implementation of an algorithm for the segmentation of calculi on ureteroscopic images is feasible. The next step is the integration of our algorithm in the command scheme of a motorized system to build a complete operating prototype.
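
    A minimal intensity-based region-growing sketch (4-connected growth from a seed under an intensity tolerance) illustrating the general idea; the features and parameter values tuned for ureteroscopic video in the paper are not reproduced, and the seed and tolerance below are placeholders.

        import numpy as np
        from collections import deque

        def region_grow(image, seed, tol=20):
            """Grow a 4-connected region from `seed` while the intensity stays
            within `tol` of the seed intensity."""
            h, w = image.shape
            seed_value = float(image[seed])
            mask = np.zeros((h, w), dtype=bool)
            queue = deque([seed])
            mask[seed] = True
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                            and abs(float(image[ny, nx]) - seed_value) <= tol:
                        mask[ny, nx] = True
                        queue.append((ny, nx))
            return mask

        # Usage on a grayscale frame `frame` with a seed clicked inside the calculus:
        # calculus_mask = region_grow(frame, seed=(120, 200), tol=25)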

  7. A Robust and Fast Method for Sidescan Sonar Image Segmentation Using Nonlocal Despeckling and Active Contour Model.

    PubMed

    Huo, Guanying; Yang, Simon X; Li, Qingwu; Zhou, Yan

    2017-04-01

    Sidescan sonar image segmentation is a very important issue in underwater object detection and recognition. In this paper, a robust and fast method for sidescan sonar image segmentation is proposed, which deals with both speckle noise and intensity inhomogeneity that may cause considerable difficulties in image segmentation. The proposed method integrates the nonlocal means-based speckle filtering (NLMSF), coarse segmentation using k-means clustering, and fine segmentation using an improved region-scalable fitting (RSF) model. The NLMSF is used before the segmentation to effectively remove speckle noise while preserving meaningful details such as edges and fine features, which can make the segmentation easier and more accurate. After despeckling, a coarse segmentation is obtained by using k-means clustering, which can reduce the number of iterations. In the fine segmentation, to better deal with possible intensity inhomogeneity, an edge-driven constraint is combined with the RSF model, which can not only accelerate the convergence speed but also avoid trapping into local minima. The proposed method has been successfully applied to both noisy and inhomogeneous sonar images. Experimental and comparative results on real and synthetic sonar images demonstrate that the proposed method is robust against noise and intensity inhomogeneity, and is also fast and accurate.
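
    A sketch of the first two stages (nonlocal-means despeckling followed by k-means coarse segmentation) using scikit-image and scikit-learn; the edge-driven region-scalable fitting refinement is not covered, and the parameter values are illustrative.

        import numpy as np
        from skimage.restoration import denoise_nl_means, estimate_sigma
        from sklearn.cluster import KMeans

        def coarse_sonar_segmentation(image, n_classes=3):
            """Despeckle with nonlocal means, then cluster intensities with k-means."""
            sigma = np.mean(estimate_sigma(image))
            smoothed = denoise_nl_means(image, h=1.15 * sigma, fast_mode=True,
                                        patch_size=5, patch_distance=6)
            labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0) \
                .fit_predict(smoothed.reshape(-1, 1))
            return smoothed, labels.reshape(image.shape)

        # `sonar` is a float image in [0, 1]; the classes typically map to
        # shadow / seabed reverberation / object highlight.
        # smoothed, coarse = coarse_sonar_segmentation(sonar)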

  8. The evolution of colour polymorphism in British winter-active Lepidoptera in response to search image use by avian predators.

    PubMed

    Weir, Jamie C

    2018-05-10

    Phenotypic polymorphism in cryptic species is widespread. This may evolve in response to search image use by predators exerting negative frequency-dependent selection on intraspecific colour morphs, 'apostatic selection'. Evidence exists to indicate search image formation by predators and apostatic selection operating on wild prey populations, though not to demonstrate search image use directly resulting in apostatic selection. The present study attempted to address this deficiency, using British Lepidoptera active in winter as a model system. It has been proposed that the typically polymorphic wing colouration of these species represents an anti-search image adaptation against birds. To test (a) for search image-driven apostatic selection, dimorphic populations of artificial moth-like models were established in woodland at varying relative morph frequencies and exposed to predation by natural populations of birds. In addition, to test (b) whether abundance and degree of polymorphism are correlated across British winter-active moths, as predicted where search image use drives apostatic selection, a series of phylogenetic comparative analyses were conducted. There was a positive relationship between artificial morph frequency and probability of predation, consistent with birds utilizing search images and exerting apostatic selection. Abundance and degree of polymorphism were found to be positively correlated across British Lepidoptera active in winter, though not across all taxonomic groups analysed. This evidence is consistent with polymorphism in this group having evolved in response to search image-driven apostatic selection and supports the viability of this mechanism as a means by which phenotypic and genetic variation may be maintained in natural populations. © 2018 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2018 European Society For Evolutionary Biology.

  9. A segmentation algorithm based on image projection for complex text layout

    NASA Astrophysics Data System (ADS)

    Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang

    2017-10-01

    The segmentation algorithm is an important part of layout analysis. Considering the efficiency advantage of the top-down approach and the particularity of the object, a projection-based layout segmentation algorithm is proposed. Firstly, the algorithm partitions the text image into several columns; then, by scanning and projecting each column, the text image is divided into several sub-regions through multiple projections. The experimental results show that this method inherits the rapid calculation speed of the projection approach, can avoid the effect of arc-shaped image content on page segmentation, and can accurately segment text images with complex layouts.
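
    A minimal sketch of the projection-profile idea the abstract describes: sum binarized ink along one axis and cut wherever the profile stays empty for long enough, first to find columns and then, within each column, to find text blocks. The gap thresholds below are placeholder values.

        import numpy as np

        def cuts_from_profile(profile, min_gap):
            """Split wherever the projection profile stays empty for at least
            `min_gap` consecutive bins; return (start, end) spans of content."""
            spans, start, gap = [], None, 0
            for i, v in enumerate(profile):
                if v > 0:
                    if start is None:
                        start = i
                    gap = 0
                else:
                    gap += 1
                    if start is not None and gap >= min_gap:
                        spans.append((start, i - gap + 1))
                        start = None
            if start is not None:
                spans.append((start, len(profile)))
            return spans

        def segment_layout(binary):            # binary: True where ink is present
            columns = cuts_from_profile(binary.sum(axis=0), min_gap=30)
            regions = []
            for c0, c1 in columns:             # second pass: horizontal projection
                for r0, r1 in cuts_from_profile(binary[:, c0:c1].sum(axis=1), min_gap=15):
                    regions.append((r0, r1, c0, c1))
            return regions

        # regions = segment_layout(page_binary) yields (row0, row1, col0, col1) boxes.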

  10. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    PubMed

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-01

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with satisfactory accuracy (Dice=0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on-board MR-IGRT system. PACS number(s): 87.57.nm, 87.57.N-, 87.61.Tg. © 2016 The Authors.
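
    The two evaluation measures used in the study are easy to state in code. Below is a small numpy sketch of the Dice coefficient and the centroid-based target registration error on binary masks, with an assumed isotropic pixel spacing; it is an illustration, not the study's implementation.

        import numpy as np

        def dice(a, b):
            """Dice coefficient between two binary masks."""
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def centroid_tre(manual, automatic, pixel_spacing_mm=1.0):
            """Target registration error: distance between ROI centroids (mm)."""
            c_man = np.array(np.nonzero(manual)).mean(axis=1)
            c_aut = np.array(np.nonzero(automatic)).mean(axis=1)
            return float(np.linalg.norm(c_man - c_aut) * pixel_spacing_mm)

        # Example with two overlapping squares.
        m = np.zeros((64, 64), bool); m[20:40, 20:40] = True
        a = np.zeros((64, 64), bool); a[22:42, 21:41] = True
        print(round(dice(m, a), 3), round(centroid_tre(m, a, 3.5), 2))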

  11. Multiresolution multiscale active mask segmentation of fluorescence microscope images

    NASA Astrophysics Data System (ADS)

    Srinivasa, Gowri; Fickus, Matthew; Kovačević, Jelena

    2009-08-01

    We propose an active mask segmentation framework that combines the advantages of statistical modeling, smoothing, speed and flexibility offered by the traditional methods of region-growing, multiscale, multiresolution and active contours respectively. At the crux of this framework is a paradigm shift from evolving contours in the continuous domain to evolving multiple masks in the discrete domain. Thus, the active mask framework is particularly suited to segment digital images. We demonstrate the use of the framework in practice through the segmentation of punctate patterns in fluorescence microscope images. Experiments reveal that statistical modeling helps the multiple masks converge from a random initial configuration to a meaningful one. This obviates the need for an involved initialization procedure germane to most of the traditional methods used to segment fluorescence microscope images. While we provide the mathematical details of the functions used to segment fluorescence microscope images, this is only an instantiation of the active mask framework. We suggest some other instantiations of the framework to segment different types of images.

  12. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial since it requires most of the processing time necessary to segment an image. The main contribution of this work is focused on how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used in order to reduce the inherent redundancy present in huge pixel databases (i.e. images with expert pixel segmentation). Hybrid color space design is also used in order to improve the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between recognition rate and processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probability estimation is easy to compute with Platt's method. A new segmentation scheme using probabilistic pixel classification has then been developed. This scheme has several free parameters whose automatic selection must be dealt with, but existing criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution of this paper is the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of the new cell segmentation quality criterion produces efficient cell segmentation.
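
    A compact scikit-learn sketch of probabilistic pixel classification in the spirit of the scheme: an SVM with Platt-scaled probabilities trained on colour features of expert-labelled pixels, then applied to every pixel of a new image. The hybrid colour space design and the vector-quantization reduction from the paper are not reproduced, and the training data below are placeholders.

        import numpy as np
        from sklearn.svm import SVC

        # Training data: per-pixel colour features (e.g. RGB triplets) and expert
        # labels (0 = background, 1 = cytoplasm, 2 = nucleus) -- placeholders.
        X_train = np.random.rand(500, 3)
        y_train = np.random.randint(0, 3, 500)

        # RBF SVM with Platt scaling for posterior class probabilities.
        clf = SVC(kernel='rbf', C=10.0, gamma='scale', probability=True)
        clf.fit(X_train, y_train)

        def segment(image_rgb):
            """Classify every pixel; return a label map and per-class probabilities."""
            h, w, _ = image_rgb.shape
            pixels = image_rgb.reshape(-1, 3).astype(float)
            proba = clf.predict_proba(pixels)      # Platt posterior estimates
            return proba.argmax(axis=1).reshape(h, w), proba.reshape(h, w, -1)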

  13. Comparison of image segmentation of lungs using methods: connected threshold, neighborhood connected, and threshold level set segmentation

    NASA Astrophysics Data System (ADS)

    Amanda, A. R.; Widita, R.

    2016-03-01

    The aim of this research is to compare several lung image segmentation methods based on performance evaluation parameters (Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR)). In this study, the methods compared were connected threshold, neighborhood connected, and threshold level set segmentation on images of the lungs. These three methods require one important parameter, i.e. the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). Five lung images were analyzed in this research, and the results were compared using the performance evaluation parameters determined with MATLAB. A segmentation method is said to have good quality if it has the smallest MSE value and the highest PSNR. The results show that for four sample images the connected threshold method meets these criteria, while for one sample the threshold level set segmentation does. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
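
    The two performance parameters are standard and can be sketched in a few lines of numpy, assuming 8-bit images (peak value 255), in place of the MATLAB computation used in the study.

        import numpy as np

        def mse(reference, segmented):
            """Mean Square Error between two images of the same shape."""
            diff = reference.astype(float) - segmented.astype(float)
            return float(np.mean(diff ** 2))

        def psnr(reference, segmented, max_value=255.0):
            """Peak Signal to Noise Ratio in dB (higher is better)."""
            m = mse(reference, segmented)
            return float('inf') if m == 0 else 10.0 * np.log10(max_value ** 2 / m)

        # The method with the smallest MSE and the highest PSNR against the
        # reference image is taken as the best, as in the study.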

  14. Colorization and Automated Segmentation of Human T2 MR Brain Images for Characterization of Soft Tissues

    PubMed Central

    Attique, Muhammad; Gilanie, Ghulam; Hafeez-Ullah; Mehmood, Malik S.; Naweed, Muhammad S.; Ikram, Masroor; Kamran, Javed A.; Vitkin, Alex

    2012-01-01

    Characterization of tissues like brain by using magnetic resonance (MR) images and colorization of the gray scale image has been reported in the literature, along with the advantages and drawbacks. Here, we present two independent methods; (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of bio tissue, (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates the voxel classification by matching the luminance of voxels of the source MR image and provided color image by measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new auto centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) using prior anatomical knowledge). Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can be potentially applied to gray-scale images from other imaging modalities, in bringing out additional diagnostic tissue information contained in the colorized image processing approach as described. PMID:22479421

  15. An improved wavelet neural network medical image segmentation algorithm with combined maximum entropy

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoqian; Tao, Jinxu; Ye, Zhongfu; Qiu, Bensheng; Xu, Jinzhang

    2018-05-01

    In order to solve the problem of medical image segmentation, a wavelet neural network medical image segmentation algorithm based on a combined maximum entropy criterion is proposed. Firstly, we use a bee colony algorithm to optimize the parameters of the wavelet neural network (network structure, initial weights, threshold values, and so on), so that training converges quickly to high precision and avoids falling into local extrema; then the optimal number of iterations is obtained by calculating the maximum entropy of the segmented image, so as to achieve automatic and accurate segmentation. Medical image segmentation experiments show that the proposed algorithm can reduce sample training time effectively and improve convergence precision, and its segmentation is more accurate and effective than the traditional BP neural network (back-propagation neural network: a multilayer feed-forward neural network trained according to the error back-propagation algorithm).

  16. Expedient range enhanced 3-D robot colour vision

    NASA Astrophysics Data System (ADS)

    Jarvis, R. A.

    1983-01-01

    Computer vision has been chosen, in many cases, as offering the richest form of sensory information which can be utilized for guiding robotic manipulation. The present investigation is concerned with the problem of three-dimensional (3D) visual interpretation of colored objects in support of robotic manipulation of those objects with a minimum of semantic guidance. The scene 'interpretations' are aimed at providing basic parameters to guide robotic manipulation rather than to provide humans with a detailed description of what the scene 'means'. Attention is given to overall system configuration, hue transforms, a connectivity analysis, plan/elevation segmentations, range scanners, elevation/range segmentation, higher level structure, eye in hand research, and aspects of array and video stream processing.

  17. Segmentation of white rat sperm image

    NASA Astrophysics Data System (ADS)

    Bai, Weiguo; Liu, Jianguo; Chen, Guoyuan

    2011-11-01

    The segmentation of sperm images exerts a profound influence on the analysis of sperm morphology, which plays a significant role in the research of animal infertility and reproduction. To overcome the microscope image's properties of low contrast and heavy noise, and to obtain better segmentation results, this paper presents a multi-scale gradient operator combined with a multi-structuring element for the micro-spermatozoa image of the white rat, as the multi-scale gradient operator can smooth the noise of an image while the multi-structuring element retains more shape details of the sperms. Then, we use the Otsu method to segment the modified gradient image, whose processed gray scale is strong in the sperm regions and weak in the background, converting it into a binary sperm image. As the obtained binary image contains impurities that are not similar to sperms in shape, we use a form factor to filter out those objects whose form factor value is larger than the selected critical value and retain those whose value is not, yielding the final binary image of the segmented sperms. The experiment shows this method's great advantage in the segmentation of the micro-spermatozoa image.
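
    A small scikit-image sketch of the last two steps (Otsu binarization of the gradient-enhanced image, then removal of non-sperm objects by a form factor); the critical value below is a placeholder, and the multi-scale morphological gradient itself is not reproduced.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.measure import label, regionprops

        def filter_by_form_factor(enhanced, critical=0.5):
            """Binarize with Otsu, then drop objects whose form factor
            4*pi*area / perimeter**2 exceeds the critical value (too round and
            compact to be a sperm); elongated objects are kept."""
            binary = enhanced > threshold_otsu(enhanced)
            labelled = label(binary)
            keep = np.zeros_like(binary)
            for region in regionprops(labelled):
                if region.perimeter == 0:
                    continue
                form_factor = 4.0 * np.pi * region.area / region.perimeter ** 2
                if form_factor <= critical:
                    keep[labelled == region.label] = True
            return keep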

  18. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Li; Gao, Yaozong; Shi, Feng

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT segmentation based on 15 patients.

  19. Adaptive technique for matching the spectral response in skin lesions' images

    NASA Astrophysics Data System (ADS)

    Pavlova, P.; Borisova, E.; Pavlova, E.; Avramov, L.

    2015-03-01

    The suggested technique is a subsequent stage of data extraction from diffuse reflectance spectra and images of diseased tissue, with the final aim of skin cancer diagnostics. Our previous work allows us to extract patterns for some types of skin cancer as a ratio between spectra obtained from healthy and diseased tissue in the 380-780 nm region. The authenticity of the patterns depends on the tested point within the lesion area, and the resulting diagnosis can therefore only be established with some probability. In this work, two adaptations are implemented to localize the pixels of the lesion image where the reflectance spectrum corresponds to a pattern. The first adapts the standard to the individual patient, and the second translates the spectrum white-point basis to the relative white point of the image. Since the reflectance spectra and the image pixels refer to different white points, a correction of the compared colours is needed; the latter is done using a standard method for chromatic adaptation. The technique follows these steps: calculation of the colorimetric XYZ parameters for the initial white point, fixed by the reflectance spectrum from healthy tissue; calculation of the XYZ parameters for the distant white point on the basis of an image of non-diseased tissue; transformation of the XYZ parameters of the test spectrum by the obtained matrix; and finding the RGB values of the XYZ parameters of the test spectrum according to sRGB. Finally, the pixels of the lesion image corresponding to the colour from the test spectrum and a particular diagnostic pattern are marked with a specific colour.

  20. Automatic segmentation of right ventricular ultrasound images using sparse matrix transform and a level set

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Cong, Zhibin; Fei, Baowei

    2013-11-01

    An automatic segmentation framework is proposed to segment the right ventricle (RV) in echocardiographic images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform, a training model, and a localized region-based level set. First, the sparse matrix transform extracts main motion regions of the myocardium as eigen-images by analyzing the statistical information of the images. Second, an RV training model is registered to the eigen-images in order to locate the position of the RV. Third, the training model is adjusted and then serves as an optimized initialization for the segmentation of each image. Finally, based on the initializations, a localized, region-based level set algorithm is applied to segment both epicardial and endocardial boundaries in each echocardiograph. Three evaluation methods were used to validate the performance of the segmentation framework. The Dice coefficient measures the overall agreement between the manual and automatic segmentation. The absolute distance and the Hausdorff distance between the boundaries from manual and automatic segmentation were used to measure the accuracy of the segmentation. Ultrasound images of human subjects were used for validation. For the epicardial and endocardial boundaries, the Dice coefficients were 90.8 ± 1.7% and 87.3 ± 1.9%, the absolute distances were 2.0 ± 0.42 mm and 1.79 ± 0.45 mm, and the Hausdorff distances were 6.86 ± 1.71 mm and 7.02 ± 1.17 mm, respectively. The automatic segmentation method based on a sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.

  1. Automated tissue segmentation of MR brain images in the presence of white matter lesions.

    PubMed

    Valverde, Sergi; Oliver, Arnau; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Lladó, Xavier

    2017-01-01

    Over the last few years, the increasing interest in brain tissue volume measurements on clinical settings has led to the development of a wide number of automated tissue segmentation methods. However, white matter lesions are known to reduce the performance of automated tissue segmentation methods, which requires manual annotation of the lesions and refilling them before segmentation, which is tedious and time-consuming. Here, we propose a new, fully automated T1-w/FLAIR tissue segmentation approach designed to deal with images in the presence of WM lesions. This approach integrates a robust partial volume tissue segmentation with WM outlier rejection and filling, combining intensity and probabilistic and morphological prior maps. We evaluate the performance of this method on the MRBrainS13 tissue segmentation challenge database, which contains images with vascular WM lesions, and also on a set of Multiple Sclerosis (MS) patient images. On both databases, we validate the performance of our method with other state-of-the-art techniques. On the MRBrainS13 data, the presented approach was at the time of submission the best ranked unsupervised intensity model method of the challenge (7th position) and clearly outperformed the other unsupervised pipelines such as FAST and SPM12. On MS data, the differences in tissue segmentation between the images segmented with our method and the same images where manual expert annotations were used to refill lesions on T1-w images before segmentation were lower or similar to the best state-of-the-art pipeline incorporating automated lesion segmentation and filling. Our results show that the proposed pipeline achieved very competitive results on both vascular and MS lesions. A public version of this approach is available to download for the neuro-imaging community. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. A hybrid approach of using symmetry technique for brain tumor segmentation.

    PubMed

    Saddique, Mubbashar; Kazmi, Jawad Haider; Qureshi, Kalim

    2014-01-01

    Tumors and related abnormalities are a major cause of disability and death worldwide. Magnetic resonance imaging (MRI) is a superior modality due to its noninvasiveness and high quality images of both the soft tissues and bones. In this paper we present two hybrid segmentation techniques and their results are compared with well-recognized techniques in this area. The first technique is based on symmetry and we call it a hybrid algorithm using symmetry and active contour (HASA). In HASA, we take the reflection image, calculate the difference image, and then apply the active contour on the difference image to segment the tumor. To avoid unimportant segmented regions, we improve the results by proposing an enhancement in the form of the second technique, EHASA. In EHASA, we also take the reflection of the original image, calculate the difference image, and then change this image into a binary image. This binary image is mapped onto the original image followed by the application of active contouring to segment the tumor region.

  3. Iterative deep convolutional encoder-decoder network for medical image segmentation.

    PubMed

    Jung Uk Kim; Hak Gu Kim; Yong Man Ro

    2017-07-01

    In this paper, we propose a novel medical image segmentation using iterative deep learning framework. We have combined an iterative learning approach and an encoder-decoder network to improve segmentation results, which enables to precisely localize the regions of interest (ROIs) including complex shapes or detailed textures of medical images in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: convolutional encoder path and convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework is able to yield excellent medical image segmentation performances for various medical images. The effectiveness of the proposed method has been proved by comparing with other state-of-the-art medical image segmentation methods.

  4. Comparison of an adaptive local thresholding method on CBCT and µCT endodontic images

    NASA Astrophysics Data System (ADS)

    Michetti, Jérôme; Basarab, Adrian; Diemer, Franck; Kouame, Denis

    2018-01-01

    Root canal segmentation on cone beam computed tomography (CBCT) images is difficult because of the noise level, resolution limitations, beam hardening and dental morphological variations. An image processing framework, based on an adaptive local threshold method, was evaluated on CBCT images acquired on extracted teeth. A comparison with high quality segmented endodontic images on micro computed tomography (µCT) images acquired from the same teeth was carried out using a dedicated registration process. Each segmented tooth was evaluated according to its volume and to root canal cross-sections through their area and Feret's diameter. The proposed method is shown to overcome the limitations of CBCT and to provide an automated and adaptive complete endodontic segmentation. Despite a slight underestimation (-4.08%), the local threshold segmentation method based on edge-detection was shown to be fast and accurate. Strong correlations between CBCT and µCT segmentations were found both for the root canal area and diameter (respectively 0.98 and 0.88). Our findings suggest that combining CBCT imaging with this image processing framework may benefit experimental endodontology and teaching, and could represent a first development step towards the clinical use of endodontic CBCT segmentation during pulp cavity treatment.
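
    For orientation, an adaptive local threshold of the kind evaluated here can be applied slice by slice with scikit-image as sketched below; the block size, the offset and the assumption that the canal lumen is darker than the surrounding dentine are illustrative, and the edge-detection refinement described in the paper is not included.

        import numpy as np
        from skimage.filters import threshold_local

        def segment_canal_slice(slice_2d, block_size=51, offset=0.0):
            """Adaptive local threshold: each pixel is compared with a Gaussian-
            weighted mean of its block_size x block_size neighbourhood."""
            local_thresh = threshold_local(slice_2d, block_size=block_size,
                                           method='gaussian', offset=offset)
            # Assumes the canal lumen is darker (radiolucent) than dentine.
            return slice_2d < local_thresh

        # For a CBCT volume `vol` stored as (z, y, x), process every axial slice:
        # canal_masks = np.stack([segment_canal_slice(s) for s in vol])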

  5. Visualization of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Hogervorst, Maarten A.; Bijl, Piet; Toet, Alexander

    2007-04-01

    We developed four new techniques to visualize hyperspectral image data for man-in-the-loop target detection. The methods respectively (1) display the subsequent bands as a movie ("movie"), (2) map the data onto three channels and display these as a colour image ("colour"), (3) display the correlation between the pixel signatures and a known target signature ("match") and (4) display the output of a standard anomaly detector ("anomaly"). The movie technique requires no assumptions about the target signature and involves no information loss. The colour technique produces a single image that can be displayed in real-time; a disadvantage of this technique is loss of information. A display of the match between a target signature and the pixel signatures can be interpreted easily and quickly, but this technique relies on precise knowledge of the target signature. The anomaly detector signifies pixels with signatures that deviate from the (local) background. We performed a target detection experiment with human observers to determine their relative performance with the four techniques. The results show that the "match" presentation yields the best performance, followed by "movie" and "anomaly", while performance with the "colour" presentation was the poorest. Each scheme has its advantages and disadvantages and is more or less suited for real-time and post-hoc processing. The rationale is that the final interpretation is best done by a human observer. In contrast to automatic target recognition systems, the interpretation of hyperspectral imagery by the human visual system is robust to noise and image transformations and requires a minimal number of assumptions (about the signature of the target and background, target shape, etc.). When more knowledge about target and background is available, this may be used to help the observer interpret the data (aided target detection).
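
    The "match" display can be sketched as a per-pixel correlation between each spectral signature and a known target signature, producing a single grey-level image for the observer; the cube layout and the target spectrum below are placeholder assumptions.

        import numpy as np

        def match_image(cube, target):
            """Per-pixel Pearson correlation between the spectrum at each pixel of a
            hyperspectral cube (rows, cols, bands) and a target signature (bands,)."""
            rows, cols, bands = cube.shape
            spectra = cube.reshape(-1, bands).astype(float)
            spectra -= spectra.mean(axis=1, keepdims=True)
            t = target.astype(float) - target.mean()
            denom = np.linalg.norm(spectra, axis=1) * np.linalg.norm(t) + 1e-12
            corr = spectra @ t / denom
            return corr.reshape(rows, cols)    # displayed directly as a grey image

        # Example: cube = np.random.rand(64, 64, 100); target = cube[10, 10].copy()
        # display = match_image(cube, target)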

  6. Improving Brain Magnetic Resonance Image (MRI) Segmentation via a Novel Algorithm based on Genetic and Regional Growth

    PubMed Central

    A., Javadpour; A., Mohammadi

    2016-01-01

    Background Given the importance of correct diagnosis in medical applications, various methods have been exploited for processing medical images so far. Segmentation is used to analyze anatomical structures in medical imaging. Objective This study describes a new method for brain Magnetic Resonance Image (MRI) segmentation via a novel algorithm based on genetic algorithms and regional growth. Methods Among medical imaging methods, brain MRI segmentation is important due to the high soft-tissue contrast of this non-invasive modality and its high spatial resolution. Size variations of brain tissues often accompany various diseases such as Alzheimer's disease. As our knowledge about the relation between various brain diseases and deviations of brain anatomy increases, MRI segmentation is exploited as the first step in early diagnosis. In this paper, a regional growth method with automated selection of initial points by a genetic algorithm is used to introduce a new method for MRI segmentation. Primary pixels and the similarity criterion are selected automatically by genetic algorithms to maximize the accuracy and validity of the image segmentation. Results By using genetic algorithms and defining a fitness function for image segmentation, the initial points for the algorithm were found. The proposed algorithm was applied to the images and the results were compared with regional growth in which the initial points were selected manually. The results showed that the proposed algorithm could reduce segmentation error effectively. Conclusion The study concluded that the proposed algorithm could reduce segmentation error effectively and help us to diagnose brain diseases. PMID:27672629

  7. Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information

    NASA Astrophysics Data System (ADS)

    Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.

    2015-10-01

    The extraction of true terrain points from unstructured laser point cloud data is an important process in order to produce an accurate digital terrain model (DTM). However, most spatial filtering methods utilize only the geometrical data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available with some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available in some Terrestrial Laser Scanners (TLS) for terrain extraction. In this study, data acquisition was conducted at a mini replica landscape at the Universiti Teknologi Malaysia (UTM) Skudai campus using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes was extracted for spectral analysis. Coloured points falling within the corresponding preset spectral threshold are identified as belonging to that specific feature class in the dataset. This process of terrain extraction is implemented in purpose-written Matlab code. Results demonstrate that a passive image of higher spectral resolution is required in order to improve the output, because the low quality of the colour images captured by the sensor contributes to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.

  8. A Composite Model of Wound Segmentation Based on Traditional Methods and Deep Neural Networks

    PubMed Central

    Wang, Changjian; Liu, Xiaohui; Jin, Shiyao

    2018-01-01

    Wound segmentation plays an important supporting role in wound observation and wound healing. Current methods of image segmentation include those based on traditional image processing and those based on deep neural networks. The traditional methods use hand-crafted image features to complete the task without large amounts of labeled data, while the methods based on deep neural networks can extract image features effectively without manual design but require a lot of training data. Combining the advantages of both, this paper presents a composite model of wound segmentation. The model uses the skin-with-wound detection algorithm designed in this paper to highlight image features; then the preprocessed images are segmented by deep neural networks, and semantic corrections are applied to the segmentation results at the end. The model shows good performance in our experiment. PMID:29955227

  9. A general system for automatic biomedical image segmentation using intensity neighborhoods.

    PubMed

    Chen, Cheng; Ozolek, John A; Wang, Wei; Rohde, Gustavo K

    2011-01-01

    Image segmentation is important with applications to several problems in biology and medicine. While extensively researched, generally, current segmentation methods perform adequately in the applications for which they were designed, but often require extensive modifications or calibrations before being used in a different application. We describe an approach that, with few modifications, can be used in a variety of image segmentation problems. The approach is based on a supervised learning strategy that utilizes intensity neighborhoods to assign each pixel in a test image its correct class based on training data. We describe methods for modeling rotations and variations in scales as well as a subset selection for training the classifiers. We show that the performance of our approach in tissue segmentation tasks in magnetic resonance and histopathology microscopy images, as well as nuclei segmentation from fluorescence microscopy images, is similar to or better than several algorithms specifically designed for each of these applications.

  10. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    PubMed

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker of the diagnosis of muscular dystrophy. In nuclei segmentation one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic image of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from background by using local Otsu's threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to identify isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
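
    The clustered-nuclei step can be illustrated with the standard distance-transform watershed, sketched below with scipy and scikit-image; the Bayesian-network identification of isolated versus clustered nuclei and the two-step refinement of the paper are not reproduced, and the minimum peak distance is a placeholder.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import threshold_otsu
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def split_clustered_nuclei(image):
            """Binarize, then split touching nuclei with a distance-transform watershed."""
            binary = image > threshold_otsu(image)
            distance = ndi.distance_transform_edt(binary)
            # One marker per local maximum of the distance map (assumed nucleus centre).
            coords = peak_local_max(distance, min_distance=7, labels=binary)
            markers = np.zeros(distance.shape, dtype=int)
            markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
            return watershed(-distance, markers, mask=binary)

        # `labels = split_clustered_nuclei(gray_image)` gives one integer label per nucleus.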

  11. Correction tool for Active Shape Model based lumbar muscle segmentation.

    PubMed

    Valenzuela, Waldo; Ferguson, Stephen J; Ignasiak, Dominika; Diserens, Gaelle; Vermathen, Peter; Boesch, Chris; Reyes, Mauricio

    2015-08-01

    In the clinical environment, the accuracy and speed of the image segmentation process play a key role in the analysis of pathological regions. Despite advances in anatomic image segmentation, time-effective correction tools are commonly needed to improve segmentation results. Therefore, these tools must provide faster corrections with a low number of interactions, and a user-independent solution. In this work we present a new interactive method for correcting image segmentations. Given an initial segmentation and the original image, our tool provides a 2D/3D environment that enables 3D shape correction through simple 2D interactions. Our scheme is based on direct manipulation of free-form deformation adapted to a 2D environment. This approach enables an intuitive and natural correction of 3D segmentation results. The developed method has been implemented into a software tool and has been evaluated for the task of lumbar muscle segmentation from Magnetic Resonance Images. Experimental results show that full segmentation correction could be performed within an average correction time of 6±4 minutes and an average of 68±37 interactions, while maintaining the quality of the final segmentation result within an average Dice coefficient of 0.92±0.03.

  12. A semiautomatic segmentation method for prostate in CT images using local texture classification and statistical shape modeling.

    PubMed

    Shahedi, Maysam; Halicek, Martin; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei

    2018-06-01

    Prostate segmentation in computed tomography (CT) images is useful for treatment planning and procedure guidance such as external beam radiotherapy and brachytherapy. However, because of the low, soft tissue contrast of CT images, manual segmentation of the prostate is a time-consuming task with high interobserver variation. In this study, we proposed a semiautomated, three-dimensional (3D) segmentation for prostate CT images using shape and texture analysis and we evaluated the method against manual reference segmentations. The prostate gland usually has a globular shape with a smoothly curved surface, and its shape could be accurately modeled or reconstructed having a limited number of well-distributed surface points. In a training dataset, using the prostate gland centroid point as the origin of a coordinate system, we defined an intersubject correspondence between the prostate surface points based on the spherical coordinates. We applied this correspondence to generate a point distribution model for prostate shape using principal component analysis and to study the local texture difference between prostate and nonprostate tissue close to the different prostate surface subregions. We used the learned shape and texture characteristics of the prostate in CT images and then combined them with user inputs to segment a new image. We trained our segmentation algorithm using 23 CT images and tested the algorithm on two sets of 10 non-brachytherapy and 37 post-low-dose-rate brachytherapy CT images. We used a set of error metrics to evaluate the segmentation results using two experts' manual reference segmentations. For both non-brachytherapy and post-brachytherapy image sets, the average measured Dice similarity coefficient (DSC) was 88% and the average mean absolute distance (MAD) was 1.9 mm. The average measured differences between the two experts on both datasets were 92% (DSC) and 1.1 mm (MAD). The proposed, semiautomatic segmentation algorithm showed a fast, robust, and accurate performance for 3D prostate segmentation of CT images, specifically when no previous, intrapatient information, that is, previously segmented images, was available. The accuracy of the algorithm is comparable to the best performance results reported in the literature and approaches the interexpert variability observed in manual segmentation. © 2018 American Association of Physicists in Medicine.

  13. A laboratory procedure for measuring and georeferencing soil colour

    NASA Astrophysics Data System (ADS)

    Marques-Mateu, A.; Balaguer-Puig, M.; Moreno-Ramon, H.; Ibanez-Asensio, S.

    2015-04-01

    Remote sensing and geospatial applications very often require ground truth data to assess outcomes from spatial analyses or environmental models. Those data sets, however, may be difficult to collect in proper format or may even be unavailable. In the particular case of soil colour the collection of reliable ground data can be cumbersome due to measuring methods, colour communication issues, and other practical factors which lead to a lack of standard procedure for soil colour measurement and georeferencing. In this paper we present a laboratory procedure that provides colour coordinates of georeferenced soil samples which become useful in later processing stages of soil mapping and classification from digital images. The procedure requires a laboratory setup consisting of a light booth and a trichromatic colorimeter, together with a computer program that performs colour measurement, storage, and colour space transformation tasks. Measurement tasks are automated by means of specific data logging routines which allow storing recorded colour data in a spatial format. A key feature of the system is the ability of transforming between physically-based colour spaces and the Munsell system which is still the standard in soil science. The working scheme pursues the automation of routine tasks whenever possible and the avoidance of input mistakes by means of a convenient layout of the user interface. The program can readily manage colour and coordinate data sets which eventually allow creating spatial data sets. All the tasks regarding data joining between colorimeter measurements and samples locations are executed by the software in the background, allowing users to concentrate on samples processing. As a result, we obtained a robust and fully functional computer-based procedure which has proven a very useful tool for sample classification or cataloging purposes as well as for integrating soil colour data with other remote sensed and spatial data sets.

  14. Contextually guided very-high-resolution imagery classification with semantic segments

    NASA Astrophysics Data System (ADS)

    Zhao, Wenzhi; Du, Shihong; Wang, Qiao; Emery, William J.

    2017-10-01

    Contextual information, revealing relationships and dependencies between image objects, is one of the most important sources of information for the successful interpretation of very-high-resolution (VHR) remote sensing imagery. Over the last decade, the geographic object-based image analysis (GEOBIA) technique has been widely used to first divide images into homogeneous parts, and then to assign semantic labels according to the properties of image segments. However, due to the complexity and heterogeneity of VHR images, segments without semantic labels (i.e., semantic-free segments) generated with low-level features often fail to represent geographic entities (building roofs, for example, are usually partitioned into chimney/antenna/shadow parts). As a result, it is hard to capture contextual information across geographic entities when using semantic-free segments. In contrast to low-level features, "deep" features can be used to build robust segments with accurate labels (i.e., semantic segments) in order to represent geographic entities at higher levels. Based on these semantic segments, semantic graphs can be constructed to capture contextual information in VHR images. In this paper, semantic segments were first explored with convolutional neural networks (CNN) and a conditional random field (CRF) model was then applied to model the contextual information between semantic segments. Experimental results on two challenging VHR datasets (i.e., the Vaihingen and Beijing scenes) indicate that the proposed method is an improvement over existing image classification techniques in classification performance (overall accuracy ranges from 82% to 96%).

  15. Permian-Triassic boundary microbialites at Zuodeng Section, Guangxi Province, South China: Geobiology and palaeoceanographic implications

    NASA Astrophysics Data System (ADS)

    Fang, Yuheng; Chen, Zhong-Qiang; Kershaw, Stephen; Yang, Hao; Luo, Mao

    2017-05-01

    A previously unknown microbialite bed in the Permian-Triassic (P-Tr) boundary beds of Zuodeng section, Tiandong County, Guangxi, South China comprises a thin (5 cm maximum thickness) stromatolite in the lower part and the remaining 6 m is thrombolite. The Zuodeng microbialite has a pronounced irregular contact between the latest Permian bioclastic limestone and microbialite, as in other sites in the region. The stromatolite comprises low-relief columnar and broad domal geometries, containing faint laminations. The thrombolite displays an irregular mixture of sparitic dark coloured altered microbial fabric and light coloured interstitial sediment in polished blocks. Abundant microproblematic calcimicrobe structures identified here as Gakhumella are preserved in dark coloured laminated areas of the stromatolite and sparitic areas in thrombolites (i.e. the calcimicrobial part, not the interstitial sediment) and are orientated perpendicular to stromatolitic laminae. Each Gakhumella individual has densely arranged segments, which form a column- to fan-shaped structure. Single segments are arch-shaped and form a thin chamber between segments. Gakhumella individuals in the stromatolite and thrombolite are slightly different from each other, but are readily distinguished from the Gakhumella- and Renalcis-like fossils reported from other P-Tr boundary microbialites in having a smaller size, unbranching columns and densely arranged, arch-shaped segments. Renalcids usually possess a larger body size and branching, lobate outlines. Filament sheath aggregates are also observed in the stromatolite and they are all orientated in one direction. Both Gakhumella and filament sheath aggregates may be photosynthetic algae, which may have played an important role in constructing the Zuodeng microbialites. Other calcimicrobes in the Zuodeng microbialite are spheroids, of which a total of five morphological types are recognized from both stromatolite and thrombolite: (1) sparry calcite spheroid without outer sheaths, (2) a large sparry calcite nucleus coated with a thin sparry calcite sheath, (3) a large nucleus of micrite framboid aggregates rimmed by a thin sparry calcite sheath (bacterial clump-like spheroids), (4) a large nucleus of micrite framboid aggregates coated with a thin micritic sheath, and (5) a small sparry nuclei rimmed by coarse-grained, radiated euhedral rays. The irregular contact beneath the Zuodeng microbialites is interpreted as a subaerial exposure surface due to regional regression in South China. The demise of the Zuodeng microbialites may have been due to rapid rise in sea-level because they grew in relatively shallow marine conditions and are overlain by muddy limestones containing pelagic conodonts. Also siliciclastic content increases above the microbialite, suggesting a possible climate-related increase in weathering as the transgression progressed.

  16. Document Image Processing: Going beyond the Black-and-White Barrier. Progress, Issues and Options with Greyscale and Colour Image Processing.

    ERIC Educational Resources Information Center

    Hendley, Tom

    1995-01-01

    Discussion of digital document image processing focuses on issues and options associated with greyscale and color image processing. Topics include speed; size of original document; scanning resolution; markets for different categories of scanners, including photographic libraries, publishing, and office applications; hybrid systems; data…

  17. Large size three-dimensional video by electronic holography using multiple spatial light modulators

    PubMed Central

    Sasaki, Hisayuki; Yamamoto, Kenji; Wakunami, Koki; Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori

    2014-01-01

    In this paper, we propose a new method of using multiple spatial light modulators (SLMs) to increase the size of three-dimensional (3D) images that are displayed using electronic holography. The scalability of images produced by the previous method had an upper limit that was derived from the path length of the image-readout part. We were able to produce larger colour electronic holographic images with a newly devised space-saving image-readout optical system for multiple reflection-type SLMs. This optical system is designed so that the path length of the image-readout part is half that of the previous method. It consists of polarization beam splitters (PBSs), half-wave plates (HWPs), and polarizers. We used 16 (4 × 4) 4K×2K-pixel SLMs for displaying holograms. The experimental device we constructed was able to perform 20 fps video reproduction in colour of full-parallax holographic 3D images with a diagonal image size of 85 mm and a horizontal viewing-zone angle of 5.6 degrees. PMID:25146685

  18. Large size three-dimensional video by electronic holography using multiple spatial light modulators.

    PubMed

    Sasaki, Hisayuki; Yamamoto, Kenji; Wakunami, Koki; Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori

    2014-08-22

    In this paper, we propose a new method of using multiple spatial light modulators (SLMs) to increase the size of three-dimensional (3D) images that are displayed using electronic holography. The scalability of images produced by the previous method had an upper limit that was derived from the path length of the image-readout part. We were able to produce larger colour electronic holographic images with a newly devised space-saving image-readout optical system for multiple reflection-type SLMs. This optical system is designed so that the path length of the image-readout part is half that of the previous method. It consists of polarization beam splitters (PBSs), half-wave plates (HWPs), and polarizers. We used 16 (4 × 4) 4K×2K-pixel SLMs for displaying holograms. The experimental device we constructed was able to perform 20 fps video reproduction in colour of full-parallax holographic 3D images with a diagonal image size of 85 mm and a horizontal viewing-zone angle of 5.6 degrees.

  19. A Learning-Based Wrapper Method to Correct Systematic Errors in Automatic Image Segmentation: Consistently Improved Performance in Hippocampus, Cortex and Brain Segmentation

    PubMed Central

    Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.

    2011-01-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively. PMID:21237273
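
    A minimal sketch of the wrapper idea, assuming a toy 2D image, a deliberately imperfect "host" segmentation, and a logistic-regression corrector trained on hand-picked intensity/label/context features; the actual feature set and learning machinery of the paper are not reproduced here.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.linear_model import LogisticRegression

        def context_features(image, host_seg):
            """Per-voxel features: intensity, host label, and local means of both."""
            return np.stack([image,
                             host_seg.astype(float),
                             uniform_filter(image, size=3),
                             uniform_filter(host_seg.astype(float), size=3)],
                            axis=-1).reshape(-1, 4)

        rng = np.random.default_rng(0)
        # synthetic training image: bright square object on a dark background
        truth = np.zeros((64, 64), int); truth[16:48, 16:48] = 1
        image = truth + 0.3 * rng.standard_normal(truth.shape)
        host = (image > 0.5).astype(int)          # imperfect "host" segmentation

        X = context_features(image, host)
        y = (host != truth).ravel().astype(int)   # 1 where the host method erred
        clf = LogisticRegression(max_iter=1000).fit(X, y)

        # apply the learned corrector: flip voxels predicted to be systematic errors
        flip = clf.predict(X).reshape(truth.shape).astype(bool)
        corrected = np.where(flip, 1 - host, host)
        print("host errors:", int((host != truth).sum()),
              "corrected errors:", int((corrected != truth).sum()))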

  20. Molar axis estimation from computed tomography images.

    PubMed

    Dongxia Zhang; Yangzhou Gan; Zeyang Xia; Xinwen Zhou; Shoubin Liu; Jing Xiong; Guanglin Li

    2016-08-01

    Estimation of the tooth axis is needed for some clinical dental treatments. Existing methods first segment the tooth volume from Computed Tomography (CT) images and then estimate the axis from that volume. However, they may fail when estimating the molar axis because tooth segmentation from CT images is challenging; current segmentation methods can produce poor results, especially for tilted molars, which in turn causes the axis estimation to fail. To resolve this problem, this paper proposes a new method for molar axis estimation from CT images. The key innovation is that, instead of estimating the 3D axis of each molar from the segmented volume, the method estimates the 3D axis from two projection images. The method includes three steps. (1) The 3D image of each molar is projected onto two 2D image planes. (2) The molar contour is segmented and its 2D axis is extracted in each projection image; Principal Component Analysis (PCA) and a modified symmetry axis detection algorithm are employed to extract the 2D axis from the segmented contour. (3) A 3D molar axis is obtained by combining the two 2D axes. Experimental results verified that the proposed method is effective for estimating the molar axis from CT images.
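
    The projection-then-combine step can be sketched as follows (a minimal illustration, assuming the voxel coordinates of a single molar are already available and using only PCA, without the modified symmetry-axis refinement described above):

        import numpy as np

        def principal_axis_2d(points):
            """First principal component (unit vector) of a set of 2D points."""
            centred = points - points.mean(axis=0)
            _, _, vt = np.linalg.svd(centred, full_matrices=False)
            return vt[0]

        def axis_from_projections(coords):
            """Combine 2D axes from the x-z and y-z projections into a 3D axis."""
            xz = principal_axis_2d(coords[:, [0, 2]])   # (dx, dz)
            yz = principal_axis_2d(coords[:, [1, 2]])   # (dy, dz)
            # orient both so the z-component is positive, then merge them by
            # rescaling each projection to a shared unit z-component
            if xz[1] < 0: xz = -xz
            if yz[1] < 0: yz = -yz
            axis = np.array([xz[0] / xz[1], yz[0] / yz[1], 1.0])
            return axis / np.linalg.norm(axis)

        # toy "molar": points scattered around a line tilted in x and y
        rng = np.random.default_rng(1)
        t = rng.uniform(0, 20, 500)
        coords = np.stack([0.3 * t, 0.1 * t, t], axis=1) + 0.5 * rng.standard_normal((500, 3))
        print(axis_from_projections(coords))   # close to normalised (0.3, 0.1, 1.0)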

  1. Rough-Fuzzy Clustering and Unsupervised Feature Selection for Wavelet Based MR Image Segmentation

    PubMed Central

    Maji, Pradipta; Roy, Shaswati

    2015-01-01

    Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, judiciously integrating the merits of rough-fuzzy computing and multiresolution image analysis techniques. The proposed method assumes that the major brain tissues in the MR images, namely gray matter, white matter, and cerebrospinal fluid, have different textural properties. The dyadic wavelet analysis is used to extract the scale-space feature vector for each pixel, while the rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method is introduced, based on the maximum relevance-maximum significance criterion, to select relevant and significant textural features for the segmentation problem, while a mathematical morphology-based skull-stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices. PMID:25848961
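
    A minimal sketch of the clustering step, using standard fuzzy c-means on simple per-pixel features (intensity plus a local-mean texture proxy) for a synthetic slice; the rough-fuzzy extension, dyadic wavelet features, feature selection and skull stripping of the paper are omitted.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def fuzzy_c_means(X, c=3, m=2.0, n_iter=50, seed=0):
            """Standard fuzzy c-means: returns (cluster centres, membership matrix)."""
            rng = np.random.default_rng(seed)
            u = rng.random((len(X), c))
            u /= u.sum(axis=1, keepdims=True)
            for _ in range(n_iter):
                w = u ** m
                centres = (w.T @ X) / w.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
                u = 1.0 / (d ** (2.0 / (m - 1.0)))      # standard FCM membership update
                u /= u.sum(axis=1, keepdims=True)
            return centres, u

        # synthetic "MR slice": three intensity classes plus noise
        rng = np.random.default_rng(2)
        img = np.zeros((64, 64)); img[:, 20:40] = 0.5; img[:, 40:] = 1.0
        img += 0.05 * rng.standard_normal(img.shape)

        # simple per-pixel feature vector: intensity and a local-mean texture proxy
        feats = np.stack([img.ravel(), uniform_filter(img, size=5).ravel()], axis=1)
        centres, u = fuzzy_c_means(feats, c=3)
        labels = u.argmax(axis=1).reshape(img.shape)
        print(np.round(centres, 2))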

  2. Colour vision impairment is associated with disease severity in multiple sclerosis.

    PubMed

    Martínez-Lapiscina, Elena H; Ortiz-Pérez, Santiago; Fraga-Pumar, Elena; Martínez-Heras, Eloy; Gabilondo, Iñigo; Llufriu, Sara; Bullich, Santiago; Figueras, Marc; Saiz, Albert; Sánchez-Dalmau, Bernardo; Villoslada, Pablo

    2014-08-01

    Colour vision assessment correlates with damage of the visual pathway and might be informative of overall brain damage in multiple sclerosis (MS). The objective of this paper is to investigate the association between impaired colour vision and disease severity. We performed neurological and ophthalmic examinations, as well as magnetic resonance imaging (MRI) and optical coherence tomography (OCT) analyses, on 108 MS patients, both at baseline and after a follow-up of one year. Colour vision was evaluated by Hardy, Rand and Rittler plates. Dyschromatopsia was defined if colour vision was impaired in either eye, except for participants with optic neuritis (ON), for whom only the unaffected eye was considered. We used general linear models adjusted for sex, age, disease duration and MS treatment for comparing presence of dyschromatopsia and disease severity. Impaired colour vision in non-ON eyes was detected in 21 out of 108 patients at baseline. At baseline, patients with dyschromatopsia had lower Multiple Sclerosis Functional Composite (MSFC) scores and Brief Repeatable Battery-Neuropsychology executive function scores than those participants with normal colour vision. In addition, these patients had thinner retinal nerve fiber layer (RNFL), and smaller macular volume, normalized brain volume and normalized gray matter volume (NGMV) at baseline. Moreover, participants with incident dyschromatopsia after one-year follow-up had a greater disability measured by the Expanded Disability Status Scale and MSFC-20 and a greater decrease in NGMV than participants with normal colour vision. Colour vision impairment is associated with greater MS severity. © The Author(s) 2013.

  3. Solvent-free optical recording of structural colours on pre-imprinted photocrosslinkable nanostructures

    NASA Astrophysics Data System (ADS)

    Jiang, Hao; Rezaei, Mohamad; Abdolahi, Mahssa; Kaminska, Bozena

    2017-09-01

    Optical digital information storage media, despite their ever-increasing storage capacity and data transfer rate, are vulnerable to the risk of becoming inaccessible. For this reason, long-term eye-readable full-colour optical archival storage is in high demand for preserving valuable information from cultural, intellectual, and scholarly resources. However, the concurrent requirements of recording colours inexpensively and precisely, and of preserving colours for the very long term (for at least 100 years), have not yet been met by existing storage techniques. Structural colours hold promise for overcoming such challenges. However, there is still a lack of an inexpensive, rapid, reliable, and solvent-free optical patterning technique for recording structural colours. In this paper, we introduce an enabling technique based on optical and thermal patterning of nanoimprinted SU-8 nanocone arrays. Using photocrosslinking and thermoplastic flow of SU-8, diffractive structural colours of nanocone arrays are recorded using ultra-violet (UV) exposure followed by the thermal development and reshaping of nanocones. Different thermal treatment procedures in reshaping nanocones are investigated and compared, and two-step progressive baking is found to allow the controllable reshaping of nanocones. The height of the nanocones and brightness of diffractive colours are modulated by varying the UV exposure dose to enable grey-scale patterning. An example of a full-colour image recorded through half-tone patterning is also demonstrated. The presented technique requires only low-power continuous-wave UV light and holds strong promise for adoption in professional and consumer archival storage applications.

  4. The Impact of Manual Segmentation of CT Images on Monte Carlo Based Skeletal Dosimetry

    NASA Astrophysics Data System (ADS)

    Frederick, Steve; Jokisch, Derek; Bolch, Wesley; Shah, Amish; Brindle, Jim; Patton, Phillip; Wyler, J. S.

    2004-11-01

    Radiation doses to the skeleton from internal emitters are of importance in both protection of radiation workers and patients undergoing radionuclide therapies. Improved dose estimates involve obtaining two sets of medical images. The first image provides the macroscopic boundaries (spongiosa volume and cortical shell) of the individual skeletal sites. A second, higher resolution image of the spongiosa microstructure is also obtained. These image sets then provide the geometry for a Monte Carlo radiation transport code. Manual segmentation of the first image is required in order to provide the macrostructural data. For this study, multiple segmentations of the same CT image were performed by multiple individuals. The segmentations were then used in the transport code and the results compared in order to determine the impact of differing segmentations on the skeletal doses. This work has provided guidance on the extent of training required of the manual segmenters. (This work was supported by a grant from the National Institute of Health.)

  5. The effect of skin surface topography and skin colouration cues on perception of male facial age, health and attractiveness.

    PubMed

    Fink, B; Matts, P J; Brauckmann, C; Gundlach, S

    2018-04-01

    Previous studies investigating the effects of skin surface topography and colouration cues on the perception of female faces reported a differential weighting for the perception of skin topography and colour evenness, where topography was a stronger visual cue for the perception of age, whereas skin colour evenness was a stronger visual cue for the perception of health. We extend these findings in a study of the effect of skin surface topography and colour evenness cues on the perceptions of facial age, health and attractiveness in males. Facial images of six men (aged 40 to 70 years), selected for co-expression of lines/wrinkles and discolouration, were manipulated digitally to create eight stimuli, namely, separate removal of these two features (a) on the forehead, (b) in the periorbital area, (c) on the cheeks and (d) across the entire face. Omnibus (within-face) pairwise combinations, including the original (unmodified) face, were presented to a total of 240 male and female judges, who selected the face they considered younger, healthier and more attractive. Significant effects were detected for facial image choice, in response to skin feature manipulation. The combined removal of skin surface topography resulted in younger age perception compared with that seen with the removal of skin colouration cues, whereas the opposite pattern was found for health preference. No difference was detected for the perception of attractiveness. These perceptual effects were seen particularly on the forehead and cheeks. Removing skin topography cues (but not discolouration) in the periorbital area resulted in higher preferences for all three attributes. Skin surface topography and colouration cues affect the perception of age, health and attractiveness in men's faces. The combined removal of these features on the forehead, cheeks and in the periorbital area results in the most positive assessments. © 2018 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  6. Three-dimensional segmentation of luminal and adventitial borders in serial intravascular ultrasound images

    NASA Technical Reports Server (NTRS)

    Shekhar, R.; Cothren, R. M.; Vince, D. G.; Chandra, S.; Thomas, J. D.; Cornhill, J. F.

    1999-01-01

    Intravascular ultrasound (IVUS) provides exact anatomy of arteries, allowing accurate quantitative analysis. Automated segmentation of IVUS images is a prerequisite for routine quantitative analyses. We present a new three-dimensional (3D) segmentation technique, called active surface segmentation, which detects luminal and adventitial borders in IVUS pullback examinations of coronary arteries. The technique was validated against expert tracings by computing correlation coefficients (range 0.83-0.97) and William's index values (range 0.37-0.66). The technique was statistically accurate, robust to image artifacts, and capable of segmenting a large number of images rapidly. Active surface segmentation enabled geometrically accurate 3D reconstruction and visualization of coronary arteries and volumetric measurements.

  7. SU-E-J-132: Automated Segmentation with Post-Registration Atlas Selection Based On Mutual Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, X; Gao, H; Sharp, G

    2015-06-15

    Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas taken from the Public Domain Database for Computational Anatomy (PDDCA), which provides manually drawn contours. Using the Dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which other images are registered to each chosen image and DC is computed between registered contour and ground truth. Meanwhile, six strategies, including MI, are selected to measure the image similarity, with MI found to be the best. Then given a target image to be segmented and an atlas, automatic segmentation consists of: (a) the affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of three deformed atlas images with highest MI values to form the segmented contour. Results: MI was found to be the best among six studied strategies in the sense that it had the highest positive correlation between similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of three deformed atlas images with highest MI values provided the highest DC among four proposed strategies. Conclusion: MI has the highest correlation with DC, and therefore is an appropriate choice for post-registration atlas selection in atlas-based segmentation. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
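
    A minimal sketch of post-registration atlas selection, assuming the atlases have already been deformed to the target: histogram-based MI ranks the atlases and the three highest-MI label maps are fused with MI-weighted voting (binary labels assumed; this is an illustrative fusion rule, not necessarily the exact one used in the abstract).

        import numpy as np

        def mutual_information(a, b, bins=32):
            """Histogram estimate of mutual information between two images."""
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

        def fuse_top_atlases(target, atlas_images, atlas_labels, top=3):
            """MI-weighted majority vote over the `top` most similar deformed atlases."""
            mi = np.array([mutual_information(target, a) for a in atlas_images])
            best = np.argsort(mi)[::-1][:top]
            votes = np.zeros_like(target, dtype=float)
            for i in best:
                votes += mi[i] * atlas_labels[i]        # binary label maps assumed
            return (votes >= 0.5 * mi[best].sum()).astype(int), mi

        # toy example: target and four noisy "deformed atlases" of the same square
        rng = np.random.default_rng(3)
        truth = np.zeros((32, 32)); truth[8:24, 8:24] = 1
        target = truth + 0.1 * rng.standard_normal(truth.shape)
        atlas_images = [truth + s * rng.standard_normal(truth.shape) for s in (0.1, 0.3, 0.6, 0.9)]
        atlas_labels = [truth.astype(int)] * 4
        fused, mi = fuse_top_atlases(target, atlas_images, atlas_labels)
        print(np.round(mi, 3), int(fused.sum()))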

  8. A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions

    PubMed Central

    Huang, Shiqi; Huang, Wenzhun; Zhang, Ting

    2016-01-01

    The most distinctive characteristic of synthetic aperture radar (SAR) is that it can acquire data under all weather conditions and at all times. However, its coherent imaging mechanism introduces a great deal of speckle noise into SAR images, which makes the segmentation of target and shadow regions in SAR images very difficult. This paper proposes a new SAR image segmentation method based on wavelet decomposition and a constant false alarm rate (WD-CFAR). The WD-CFAR algorithm not only is insensitive to the speckle noise in SAR images but also can segment target and shadow regions simultaneously, and it is also able to effectively segment SAR images with a low signal-to-clutter ratio (SCR). Experiments were performed to assess the performance of the new algorithm on various SAR images. The experimental results show that the proposed method is effective and feasible and possesses good characteristics for general application. PMID:27924935
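
    The CFAR part of such a detector can be sketched as a cell-averaging CFAR with a ring of training cells around each pixel (a minimal sketch assuming exponentially distributed clutter; the wavelet-decomposition stage of WD-CFAR is not implemented here).

        import numpy as np
        from scipy.ndimage import uniform_filter

        def ca_cfar(img, guard=2, train=6, pfa=1e-3):
            """Cell-averaging CFAR: flag pixels well above the local clutter level.

            The clutter is estimated from a ring of training cells around each
            pixel (large box sum minus inner guard-box sum).  For exponentially
            distributed clutter, the threshold multiplier follows from the
            desired probability of false alarm.
            """
            big = 2 * (guard + train) + 1
            small = 2 * guard + 1
            sum_big = uniform_filter(img, size=big) * big ** 2
            sum_small = uniform_filter(img, size=small) * small ** 2
            train_sum = sum_big - sum_small
            n_train = big ** 2 - small ** 2
            alpha = pfa ** (-1.0 / n_train) - 1.0      # classic CA-CFAR multiplier
            return img > alpha * train_sum

        # toy SAR-like scene: speckle-like exponential clutter plus a bright target
        rng = np.random.default_rng(4)
        scene = rng.exponential(1.0, (128, 128))
        scene[62:65, 62:65] += 30.0
        detections = ca_cfar(scene)
        print(int(detections.sum()), int(detections[62:65, 62:65].sum()))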

  9. Local/non-local regularized image segmentation using graph-cuts: application to dynamic and multispectral MRI.

    PubMed

    Hanson, Erik A; Lundervold, Arvid

    2013-11-01

    Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information, causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties and region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with minimal user interaction.

  10. A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions.

    PubMed

    Huang, Shiqi; Huang, Wenzhun; Zhang, Ting

    2016-12-07

    The most distinctive characteristic of synthetic aperture radar (SAR) is that it can acquire data under all weather conditions and at all times. However, its coherent imaging mechanism introduces a great deal of speckle noise into SAR images, which makes the segmentation of target and shadow regions in SAR images very difficult. This paper proposes a new SAR image segmentation method based on wavelet decomposition and a constant false alarm rate (WD-CFAR). The WD-CFAR algorithm not only is insensitive to the speckle noise in SAR images but also can segment target and shadow regions simultaneously, and it is also able to effectively segment SAR images with a low signal-to-clutter ratio (SCR). Experiments were performed to assess the performance of the new algorithm on various SAR images. The experimental results show that the proposed method is effective and feasible and possesses good characteristics for general application.

  11. A comparative study of automatic image segmentation algorithms for target tracking in MR‐IGRT

    PubMed Central

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J.; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa

    2016-01-01

    On‐board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real‐time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image‐guided radiotherapy (MR‐IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k‐means (FKM), k‐harmonic means (KHM), and reaction‐diffusion level set evolution (RD‐LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR‐TPDS). The performance of the five algorithms was evaluated quantitatively by comparison with the manual segmentation using the Dice coefficient and target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR‐TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD‐LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR‐TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high‐contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with satisfactory accuracy (Dice=0.95). When the image contrast was low, the VR‐TPDS method had the best automatic contour. Results suggest using an image quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on‐board MR‐IGRT system. PACS number(s): 87.57.nm, 87.57.N‐, 87.61.Tg
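
    The two evaluation measures can be computed as follows (a minimal sketch for binary 2D masks with isotropic pixel spacing; hypothetical toy masks stand in for the manual and automatic contours):

        import numpy as np
        from scipy.ndimage import center_of_mass

        def dice(a, b):
            """Dice overlap between two binary masks."""
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def target_registration_error(a, b, spacing=1.0):
            """Distance between mask centroids, in the same units as `spacing`."""
            ca = np.array(center_of_mass(a.astype(float)))
            cb = np.array(center_of_mass(b.astype(float)))
            return float(np.linalg.norm((ca - cb) * spacing))

        # hypothetical manual and automatic ROI masks
        manual = np.zeros((100, 100), bool); manual[30:70, 30:70] = True
        auto = np.zeros((100, 100), bool);   auto[33:73, 32:72] = True
        print(round(dice(manual, auto), 3),
              round(target_registration_error(manual, auto), 2))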

  12. Brain Tumor Image Segmentation in MRI Image

    NASA Astrophysics Data System (ADS)

    Peni Agustin Tjahyaningtijas, Hapsari

    2018-04-01

    Brain tumor segmentation plays an important role in medical image processing. Treatment of patients with brain tumors is highly dependent on early detection of these tumors, and early detection improves the patient's survival chances. Diagnosis of brain tumors by experts usually relies on manual segmentation, which is difficult and time consuming, hence the need for automatic segmentation. Nowadays automatic segmentation is very popular and can offer a better-performing solution to the problem of brain tumor segmentation. The purpose of this paper is to provide a review of MRI-based brain tumor segmentation methods. There are a number of existing review papers focusing on traditional methods for MRI-based brain tumor image segmentation. In this paper, we focus on the recent trend of automatic segmentation in this field. First, an introduction to brain tumors and methods for brain tumor segmentation is given. Then, the state-of-the-art algorithms, with a focus on the recent trend of fully automatic segmentation, are discussed. Finally, an assessment of the current state is presented and future developments needed to bring standardized MRI-based brain tumor segmentation methods into daily clinical routine are addressed.

  13. Vessel segmentation in 4D arterial spin labeling magnetic resonance angiography images of the brain

    NASA Astrophysics Data System (ADS)

    Phellan, Renzo; Lindner, Thomas; Falcão, Alexandre X.; Forkert, Nils D.

    2017-03-01

    4D arterial spin labeling magnetic resonance angiography (4D ASL MRA) is a non-invasive and safe modality for cerebrovascular imaging procedures. It uses the patient's magnetically labeled blood as an intrinsic contrast agent, so that no external contrast medium is required. It provides valuable 3D structure and blood flow information, but an adequate cerebrovascular segmentation is important since it can help clinicians analyze and diagnose vascular diseases faster and with higher confidence than simple visual rating of raw ASL MRA images. This work presents a new method for automatic cerebrovascular segmentation in 4D ASL MRA images of the brain. In this process, images are denoised, corresponding label/control image pairs of the 4D ASL MRA sequences are subtracted, and temporal intensity averaging is used to generate a static representation of the vascular system. After that, sets of vessel and background seeds are extracted and provided as input for the image foresting transform algorithm to segment the vascular system. Four 4D ASL MRA datasets of the brain arteries of healthy subjects and corresponding time-of-flight (TOF) MRA images were available for this preliminary study. For evaluation of the segmentation results of the proposed method, the cerebrovascular system was automatically segmented in the high-resolution TOF MRA images using a validated algorithm and the segmentation results were registered to the 4D ASL datasets. Corresponding segmentation pairs were compared using the Dice similarity coefficient (DSC). On average, a DSC of 0.9025 was achieved, indicating that vessels can be extracted successfully from 4D ASL MRA datasets by the proposed segmentation method.
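
    A minimal sketch of the preprocessing chain described above, pairwise label/control subtraction, temporal averaging and percentile-based seed extraction, applied to hypothetical toy data; the image foresting transform segmentation itself is not implemented here, and the percentile thresholds are illustrative assumptions.

        import numpy as np

        def asl_preprocess(series, vessel_pct=99.0, background_pct=50.0):
            """series: (n_pairs, 2, z, y, x) array holding (label, control) volumes.

            Returns a static angiogram plus vessel and background seed masks.
            """
            diff = series[:, 1] - series[:, 0]       # control - label, per pair
            static = diff.mean(axis=0)               # temporal intensity averaging
            vessel_seeds = static > np.percentile(static, vessel_pct)
            background_seeds = static < np.percentile(static, background_pct)
            return static, vessel_seeds, background_seeds

        # toy 4D ASL data: a bright "vessel" sheet appears only in the control volumes
        rng = np.random.default_rng(5)
        series = rng.normal(100.0, 2.0, (8, 2, 16, 32, 32))
        series[:, 1, :, 15:17, :] += 10.0            # perfusion signal in controls
        static, v_seeds, b_seeds = asl_preprocess(series)
        print(static.shape, int(v_seeds.sum()), int(b_seeds.sum()))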

  14. Automated segmentation of 3D anatomical structures on CT images by using a deep convolutional network based on end-to-end learning approach

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi

    2017-02-01

    We have proposed an end-to-end learning approach that trained a deep convolutional neural network (CNN) for automatic CT image segmentation, which accomplishes a voxel-wise multi-class classification, directly mapping each voxel of a 3D CT image to an anatomical label automatically. The novelties of our proposed method were (1) transforming the segmentation of anatomical structures on 3D CT images into a majority vote over the results of 2D semantic image segmentation on a number of 2D slices taken from different image orientations, and (2) using "convolution" and "deconvolution" networks to achieve the conventional "coarse recognition" and "fine extraction" functions, which were integrated into a compact all-in-one deep CNN for CT image segmentation. The advantage compared to previous works was its capability to accomplish real-time image segmentation on 2D slices of arbitrary CT scan range (e.g. body, chest, abdomen) and to produce correspondingly sized output. In this paper, we propose an improvement of our approach by adding an organ localization module to limit the CT image range for training and testing deep CNNs. A database consisting of 240 3D CT scans and a human annotated ground truth was used for training (228 cases) and testing (the remaining 12 cases). We applied the improved method to segment the pancreas and left kidney regions. The preliminary results showed that the accuracies of the segmentation results were improved significantly (the Jaccard index increased by 34% for the pancreas and 8% for the kidney relative to our previous results). The effectiveness and usefulness of the proposed improvement for CT image segmentation were confirmed.
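
    The orientation-wise majority voting can be sketched as follows (a minimal illustration assuming the three per-orientation 2D predictions have already been re-stacked onto a common 3D grid; ties simply fall to the lowest class index, and the CNNs that would produce the predictions are not shown):

        import numpy as np

        def majority_vote(pred_axial, pred_coronal, pred_sagittal, n_classes):
            """Per-voxel majority vote over three orientation-wise label volumes.

            All inputs are (z, y, x) integer label volumes, i.e. slice-wise 2D
            CNN outputs already re-stacked into the common 3D grid.
            """
            votes = np.zeros(pred_axial.shape + (n_classes,), dtype=np.int32)
            for pred in (pred_axial, pred_coronal, pred_sagittal):
                votes += np.eye(n_classes, dtype=np.int32)[pred]   # one-hot accumulate
            return votes.argmax(axis=-1)

        # toy example: three noisy copies of a ground-truth label volume
        rng = np.random.default_rng(6)
        shape, n_classes = (8, 8, 8), 3
        truth = rng.integers(0, n_classes, shape)

        def noisy(p):
            # flip ~10% of voxels to a random class, simulating per-orientation errors
            flip = rng.random(shape) < 0.1
            return np.where(flip, rng.integers(0, n_classes, shape), p)

        fused = majority_vote(noisy(truth), noisy(truth), noisy(truth), n_classes)
        print(round(float((fused == truth).mean()), 3))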

  15. Hybrid active contour model for inhomogeneous image segmentation with background estimation

    NASA Astrophysics Data System (ADS)

    Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun

    2018-03-01

    This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.
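
    A minimal sketch of the background-estimation idea, using a large Gaussian filter as the linear low-pass estimate and forming the difference image on a synthetic shaded scene; the dynamic re-estimation during contour evolution and the active contour itself are omitted, and the filter width is an illustrative assumption.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def background_corrected(image, sigma=15.0):
            """Return (background, difference) for an intensity-inhomogeneous image."""
            background = gaussian_filter(image, sigma=sigma)  # linear low-pass estimate
            return background, image - background

        # synthetic inhomogeneous image: a bright disc on a slowly varying bias field
        yy, xx = np.mgrid[0:128, 0:128]
        bias = 0.5 + 0.5 * (xx / 127.0)                       # left-to-right shading
        obj = ((yy - 64) ** 2 + (xx - 64) ** 2) < 20 ** 2
        image = bias + 0.4 * obj
        bg, diff = background_corrected(image)
        # in the difference image the object stands out against a near-flat background
        print(round(float(diff[obj].mean()), 3), round(float(diff[~obj].mean()), 3))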

  16. Survey statistics of automated segmentations applied to optical imaging of mammalian cells.

    PubMed

    Bajcsy, Peter; Cardone, Antonio; Chalfoun, Joe; Halter, Michael; Juba, Derek; Kociolek, Marcin; Majurski, Michael; Peskin, Adele; Simon, Carl; Simon, Mylene; Vandecreme, Antoine; Brady, Mary

    2015-10-15

    The goal of this survey paper is to provide an overview of cellular measurements using optical microscopy imaging followed by automated image segmentation. The cellular measurements of primary interest are taken from mammalian cells and their components. They are denoted as two- or three-dimensional (2D or 3D) image objects of biological interest. In our applications, such cellular measurements are important for understanding cell phenomena, such as cell counts, cell-scaffold interactions, cell colony growth rates, or cell pluripotency stability, as well as for establishing quality metrics for stem cell therapies. In this context, this survey paper is focused on automated segmentation as a software-based measurement leading to quantitative cellular measurements. We define the scope of this survey and a classification schema first. Next, all found and manually filtered publications are classified according to the main categories: (1) objects of interest (or objects to be segmented), (2) imaging modalities, (3) digital data axes, (4) segmentation algorithms, (5) segmentation evaluations, (6) computational hardware platforms used for segmentation acceleration, and (7) object (cellular) measurements. Finally, all classified papers are converted programmatically into a set of hyperlinked web pages with occurrence and co-occurrence statistics of assigned categories. The survey paper presents to a reader: (a) the state-of-the-art overview of published papers about automated segmentation applied to optical microscopy imaging of mammalian cells, (b) a classification of segmentation aspects in the context of cell optical imaging, (c) histogram and co-occurrence summary statistics about cellular measurements, segmentations, segmented objects, segmentation evaluations, and the use of computational platforms for accelerating segmentation execution, and (d) open research problems to pursue. The novel contributions of this survey paper are: (1) a new type of classification of cellular measurements and automated segmentation, (2) statistics about the published literature, and (3) a web hyperlinked interface to classification statistics of the surveyed papers at https://isg.nist.gov/deepzoomweb/resources/survey/index.html.

  17. Achromatic shearing phase sensor for generating images indicative of measure(s) of alignment between segments of a segmented telescope's mirrors

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip (Inventor); Walker, Chanda Bartlett (Inventor)

    2006-01-01

    An achromatic shearing phase sensor generates an image indicative of at least one measure of alignment between two segments of a segmented telescope's mirrors. An optical grating receives at least a portion of irradiance originating at the segmented telescope in the form of a collimated beam and diffracts the collimated beam into a plurality of diffraction orders. Focusing optics separate and focus the diffraction orders. Filtering optics then filter the diffraction orders to generate a resultant set of modified diffraction orders. Imaging optics combine portions of the resultant set of diffraction orders to generate an interference pattern that is ultimately imaged by an imager.

  18. Medical image segmentation using genetic algorithms.

    PubMed

    Maulik, Ujjwal

    2009-03-01

    Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy with a multitude of local optima. Not only does the genetic algorithmic framework prove to be effective in coming out of local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.
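
    As a toy illustration of the search flavour of GA-based segmentation, the sketch below evolves a single grey-level threshold to maximize Otsu's between-class variance; real applications evolve richer representations (cluster centres, contour parameters, rule sets), so this is only a minimal example under that simplifying assumption.

        import numpy as np

        def between_class_variance(image, t):
            """Otsu's criterion for a single threshold t (higher is better)."""
            fg, bg = image[image >= t], image[image < t]
            if fg.size == 0 or bg.size == 0:
                return 0.0
            w1, w2 = fg.size / image.size, bg.size / image.size
            return w1 * w2 * (fg.mean() - bg.mean()) ** 2

        def ga_threshold(image, pop_size=20, n_gen=40, mut_sigma=5.0, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = float(image.min()), float(image.max())
            pop = rng.uniform(lo, hi, pop_size)          # each individual is a threshold
            for _ in range(n_gen):
                fitness = np.array([between_class_variance(image, t) for t in pop])
                # binary tournament selection of parents
                idx = rng.integers(0, pop_size, (pop_size, 2))
                parents = pop[np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]],
                                       idx[:, 0], idx[:, 1])]
                # arithmetic crossover with a shuffled copy, then Gaussian mutation
                partners = rng.permutation(parents)
                w = rng.random(pop_size)
                pop = np.clip(w * parents + (1 - w) * partners
                              + rng.normal(0, mut_sigma, pop_size), lo, hi)
            fitness = np.array([between_class_variance(image, t) for t in pop])
            return pop[fitness.argmax()]

        # bimodal "image" intensities: the evolved threshold should fall between the modes
        rng = np.random.default_rng(7)
        image = np.concatenate([rng.normal(60, 10, 2000), rng.normal(160, 15, 2000)])
        print(round(float(ga_threshold(image)), 1))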

  19. Applications of magnetic resonance image segmentation in neurology

    NASA Astrophysics Data System (ADS)

    Heinonen, Tomi; Lahtinen, Antti J.; Dastidar, Prasun; Ryymin, Pertti; Laarne, Paeivi; Malmivuo, Jaakko; Laasonen, Erkki; Frey, Harry; Eskola, Hannu

    1999-05-01

    After the introduction of digital imaging devices in medicine, computerized tissue recognition and classification have become important in research and clinical applications. Segmented data can be applied in numerous research fields, including volumetric analysis of particular tissues and structures, construction of anatomical models, 3D visualization, and multimodal visualization, hence making segmentation essential in modern image analysis. In this research project, several PC-based software tools were developed in order to segment medical images, to visualize raw and segmented images in 3D, and to produce EEG brain maps in which MR images and EEG signals were integrated. The software package was tested and validated in numerous clinical research projects in a hospital environment.

  20. Semiautomatic tumor segmentation with multimodal images in a conditional random field framework.

    PubMed

    Hu, Yu-Chi; Grossberg, Michael; Mageras, Gikas

    2016-04-01

    Volumetric medical images of a single subject can be acquired using different imaging modalities, such as computed tomography, magnetic resonance imaging (MRI), and positron emission tomography. In this work, we present a semiautomatic segmentation algorithm that can leverage the synergies between different image modalities while integrating interactive human guidance. The algorithm provides a statistical segmentation framework partly automating the segmentation task while still maintaining critical human oversight. The statistical models presented are trained interactively using simple brush strokes to indicate tumor and nontumor tissues and using intermediate results within a patient's image study. To accomplish the segmentation, we construct the energy function in the conditional random field (CRF) framework. For each slice, the energy function is set using the estimated probabilities from both user brush stroke data and prior approved segmented slices within a patient study. The progressive segmentation is obtained using a graph-cut-based minimization. Although no similar semiautomated algorithm is currently available, we evaluated our method with an MRI data set from Medical Image Computing and Computer Assisted Intervention Society multimodal brain segmentation challenge (BRATS 2012 and 2013) against a similar fully automatic method based on CRF and a semiautomatic method based on grow-cut, and our method shows superior performance.
