Sample records for image processing component

  1. 76 FR 55944 - In the Matter of Certain Electronic Devices With Image Processing Systems, Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-09

    ... With Image Processing Systems, Components Thereof, and Associated Software; Notice of Commission... importation of certain electronic devices with image processing systems, components thereof, and associated... direct infringement is asserted and the accused article does not meet every limitation of the asserted...

  2. Error-proofing test system of industrial components based on image processing

    NASA Astrophysics Data System (ADS)

    Huang, Ying; Huang, Tao

    2018-05-01

    With the rising precision standards of modern industry, conventional manual testing fails to satisfy enterprises' test requirements, so digital image processing techniques are used to gather and analyze information from the surfaces of industrial components in order to test them. To test the installation of automotive engine parts, this paper employs a camera to capture images of the components. After the images are preprocessed, including denoising, an image processing algorithm based on flood fill is used to test the installation of the components. The results show that the system has very high test accuracy.
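
    As a rough illustration of the flood-fill check described above, a minimal Python sketch using OpenCV; the seed point, intensity tolerance, and expected pixel-area range are hypothetical stand-ins for values that would come from the real fixture geometry:

        import cv2
        import numpy as np

        def component_installed(gray, seed=(50, 50), area_range=(500, 5000)):
            """Flood-fill from a seed inside the expected part location and
            check that the filled region's area falls in the expected range."""
            img = cv2.GaussianBlur(gray, (5, 5), 0)  # denoising preprocessing
            mask = np.zeros((img.shape[0] + 2, img.shape[1] + 2), np.uint8)
            num_filled, _, _, _ = cv2.floodFill(
                img, mask, seed, 255, loDiff=10, upDiff=10)
            return area_range[0] <= num_filled <= area_range[1]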

  3. Analysis of 3D Subharmonic Ultrasound Signals from Patients with Known Breast Masses for Lesion Differentiation

    DTIC Science & Technology

    2012-10-01

    ... separate image processing course were attended and this programming language will be used for the research component of this project. Subharmonic ... lesions. ... Training Component: The training component of this research has been split into breast imaging and image processing arms

  4. Image processing mini manual

    NASA Technical Reports Server (NTRS)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  5. 75 FR 38118 - In the Matter of Certain Electronic Devices With Image Processing Systems, Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-01

    ... With Image Processing Systems, Components Thereof, and Associated Software; Notice of Investigation..., and associated software by reason of infringement of certain claims of U.S. Patent Nos. 7,043,087... processing systems, components thereof, and associated software that infringe one or more of claims 1, 6, and...

  6. Digital imaging technology assessment: Digital document storage project

    NASA Technical Reports Server (NTRS)

    1989-01-01

    An ongoing technical assessment and requirements definition project is examining the potential role of digital imaging technology at NASA's STI facility. The focus is on the basic components of imaging technology in today's marketplace as well as the components anticipated in the near future. Presented is a requirement specification for a prototype project, an initial examination of current image processing at the STI facility, and an initial summary of image processing projects at other sites. Operational imaging systems incorporate scanners, optical storage, high resolution monitors, processing nodes, magnetic storage, jukeboxes, specialized boards, optical character recognition gear, pixel addressable printers, communications, and complex software processes.

  7. Efficiency analysis for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2016-10-01

    Modern remote sensing systems basically acquire multichannel images (dual- or multi-polarization, multi- and hyperspectral) in which noise, usually with differing characteristics, is present in all components. If the noise is intense, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise or vectorial (3D) filtering. The second approach has shown higher efficiency when there is essential correlation between the multichannel image components, as often happens for multichannel remote sensing data of different origins. Within the class of 3D filtering techniques there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images united into a 3D data array influences the efficiency of filtering, and whether the observed tendencies can be exploited in processing images with a rather large number of channels.
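
    The contrast between component-wise and vectorial processing can be sketched in a few lines of Python; the 2.7-sigma hard threshold is a common choice for DCT-based denoising, and the block is assumed to be an 8x8xC patch of the multichannel image:

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_denoise(block, sigma, mode='3d'):
            """Hard-threshold DCT coefficients either on the full 3D array
            or channel-by-channel (component-wise)."""
            thr = 2.7 * sigma
            if mode == '3d':
                coef = dctn(block, norm='ortho')
                coef[np.abs(coef) < thr] = 0.0
                return idctn(coef, norm='ortho')
            out = np.empty_like(block)
            for c in range(block.shape[2]):  # component-wise 2D filtering
                coef = dctn(block[:, :, c], norm='ortho')
                coef[np.abs(coef) < thr] = 0.0
                out[:, :, c] = idctn(coef, norm='ortho')
            return out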

  8. 3D printing in X-ray and Gamma-Ray Imaging: A novel method for fabricating high-density imaging apertures

    PubMed Central

    Miller, Brian W.; Moore, Jared W.; Barrett, Harrison H.; Fryé, Teresa; Adler, Steven; Sery, Joe; Furenlid, Lars R.

    2011-01-01

    Advances in 3D rapid-prototyping printers, 3D modeling software, and casting techniques allow for cost-effective fabrication of custom components in gamma-ray and X-ray imaging systems. Applications extend to new fabrication methods for custom collimators, pinholes, calibration and resolution phantoms, mounting and shielding components, and imaging apertures. Details of the fabrication process for these components, specifically the 3D printing process, cold casting with a tungsten epoxy, and lost-wax casting in platinum are presented. PMID:22199414

  9. Colour image segmentation using unsupervised clustering technique for acute leukemia images

    NASA Astrophysics Data System (ADS)

    Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.

    2015-05-01

    Colour image segmentation has become more popular in computer vision because it is an important process in most medical analysis tasks. This paper presents a comparison of the different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models for segmenting acute leukemia images. First, partial contrast stretching is applied to the leukemia images to improve the visibility of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied to the various colour components of the RGB and HSI colour models to segment the blast cells from the red blood cells and background regions in the leukemia image. The different colour components of the RGB and HSI colour models are analyzed to identify the colour component that gives the best segmentation performance. The segmented images are then processed using a median filter and region growing to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model is the best at segmenting the nuclei of the blast cells in acute leukemia images, compared with the other colour components of the RGB and HSI colour models.
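
    A minimal sketch of the saturation-based clustering idea, with ordinary k-means standing in for the paper's moving k-means variant; the cluster count is an assumed parameter:

        import numpy as np
        from sklearn.cluster import KMeans

        def segment_by_saturation(rgb, n_clusters=3):
            """Compute the HSI saturation component and cluster its pixels."""
            rgb = rgb.astype(np.float64) / 255.0
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + 1e-12)
            labels = KMeans(n_clusters=n_clusters, n_init=10,
                            random_state=0).fit_predict(s.reshape(-1, 1))
            return labels.reshape(s.shape)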

  10. Novel Applications of Rapid Prototyping in Gamma-ray and X-ray Imaging

    PubMed Central

    Miller, Brian W.; Moore, Jared W.; Gehm, Michael E.; Furenlid, Lars R.; Barrett, Harrison H.

    2010-01-01

    Advances in 3D rapid-prototyping printers, 3D modeling software, and casting techniques allow for the fabrication of cost-effective, custom components in gamma-ray and x-ray imaging systems. Applications extend to new fabrication methods for custom collimators, pinholes, calibration and resolution phantoms, mounting and shielding components, and imaging apertures. Details of the fabrication process for these components are presented, specifically the 3D printing process, cold casting with a tungsten epoxy, and lost-wax casting in platinum. PMID:22984341

  11. 21 CFR 892.2050 - Picture archiving and communications system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...

  12. 21 CFR 892.2050 - Picture archiving and communications system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...

  13. 21 CFR 892.2050 - Picture archiving and communications system.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...

  14. 21 CFR 892.2050 - Picture archiving and communications system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...

  15. 75 FR 68200 - Medical Devices; Radiology Devices; Reclassification of Full-Field Digital Mammography System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-05

    ... exposure control, image processing and reconstruction programs, patient and equipment supports, component..., acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and... may include was revised by adding automatic exposure control, image processing and reconstruction...

  16. Metal Artifact Suppression in Dental Cone Beam Computed Tomography Images Using Image Processing Techniques.

    PubMed

    Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh

    2018-01-01

    Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unusable. Streaking artifacts and cavities around teeth are the main causes of degradation. In this article, we propose a new artifact reduction algorithm with three parallel components. The first component extracts teeth by modeling the image histogram with a Gaussian mixture model. The streaking artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Finally, the results of these three components are combined in a fusion step to create a visually good image that is more compatible with the human visual system. Results show that the proposed algorithm reduces the artifacts of dental CBCT images and produces clean images.
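
    The first of the three components, histogram-based tooth extraction, might look like the following sketch; the number of mixture components and the assumption that the brightest mode corresponds to teeth are illustrative only:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def extract_teeth(cbct_slice, n_components=3):
            """Fit a Gaussian mixture to pixel intensities and keep the
            pixels assigned to the brightest component (assumed teeth)."""
            x = cbct_slice.reshape(-1, 1).astype(np.float64)
            gmm = GaussianMixture(n_components=n_components,
                                  random_state=0).fit(x)
            labels = gmm.predict(x).reshape(cbct_slice.shape)
            return labels == int(np.argmax(gmm.means_.ravel()))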

  17. Metal Artifact Suppression in Dental Cone Beam Computed Tomography Images Using Image Processing Techniques

    PubMed Central

    Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh

    2018-01-01

    Background: Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unusable. Streaking artifacts and cavities around teeth are the main causes of degradation. Methods: In this article, we propose a new artifact reduction algorithm with three parallel components. The first component extracts teeth by modeling the image histogram with a Gaussian mixture model. The streaking artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Results: The results of these three components are combined in a fusion step to create a visually good image that is more compatible with the human visual system. Conclusions: Results show that the proposed algorithm reduces the artifacts of dental CBCT images and produces clean images. PMID:29535920

  18. Processing of Fear and Anger Facial Expressions: The Role of Spatial Frequency

    PubMed Central

    Comfort, William E.; Wang, Meng; Benton, Christopher P.; Zana, Yossi

    2013-01-01

    Spatial frequency (SF) components encode a portion of the affective value expressed in face images. The aim of this study was to estimate the relative weight of specific frequency spectrum bandwidth on the discrimination of anger and fear facial expressions. The general paradigm was a classification of the expression of faces morphed at varying proportions between anger and fear images in which SF adaptation and SF subtraction are expected to shift classification of facial emotion. A series of three experiments was conducted. In Experiment 1 subjects classified morphed face images that were unfiltered or filtered to remove either low (<8 cycles/face), middle (12–28 cycles/face), or high (>32 cycles/face) SF components. In Experiment 2 subjects were adapted to unfiltered or filtered prototypical (non-morphed) fear face images and subsequently classified morphed face images. In Experiment 3 subjects were adapted to unfiltered or filtered prototypical fear face images with the phase component randomized before classifying morphed face images. Removing mid frequency components from the target images shifted classification toward fear. The same shift was observed under adaptation condition to unfiltered and low- and middle-range filtered fear images. However, when the phase spectrum of the same adaptation stimuli was randomized, no adaptation effect was observed. These results suggest that medium SF components support the perception of fear more than anger at both low and high level of processing. They also suggest that the effect at high-level processing stage is related more to high-level featural and/or configural information than to the low-level frequency spectrum. PMID:23637687
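
    The band-pass filtering used to build such stimuli can be sketched as an FFT ring mask; the code assumes a square image in which the face spans the full width, so one cycle per image equals one cycle per face:

        import numpy as np

        def bandpass_face(img, low_cpf, high_cpf):
            """Keep spatial frequencies between low_cpf and high_cpf
            cycles/face, e.g. bandpass_face(face, 12, 28) for the mid band."""
            f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
            h, w = img.shape
            yy, xx = np.mgrid[:h, :w]
            r = np.hypot(yy - h / 2.0, xx - w / 2.0)  # radius in cycles/face
            mask = (r >= low_cpf) & (r <= high_cpf)
            return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))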

  19. IDAPS (Image Data Automated Processing System) System Description

    DTIC Science & Technology

    1988-06-24

    This document describes the physical configuration and components used in the image processing system referred to as IDAPS (Image Data Automated Processing System). This system was developed by the Environmental Research Institute of Michigan (ERIM) for Eglin Air Force Base. The system is designed

  20. Selective principal component regression analysis of fluorescence hyperspectral image to assess aflatoxin contamination in corn

    USDA-ARS?s Scientific Manuscript database

    Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...

  21. Recognition-by-Components: A Theory of Human Image Understanding.

    ERIC Educational Resources Information Center

    Biederman, Irving

    1987-01-01

    The theory proposed (recognition-by-components) hypothesizes the perceptual recognition of objects to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components. Experiments on the perception of briefly presented pictures support the theory. (Author/LMO)

  22. Image reconstruction: an overview for clinicians.

    PubMed

    Hansen, Michael S; Kellman, Peter

    2015-03-01

    Image reconstruction plays a critical role in the clinical use of magnetic resonance imaging (MRI). The MRI raw data is not acquired in image space and the role of the image reconstruction process is to transform the acquired raw data into images that can be interpreted clinically. This process involves multiple signal processing steps that each have an impact on the image quality. This review explains the basic terminology used for describing and quantifying image quality in terms of signal-to-noise ratio and point spread function. In this context, several commonly used image reconstruction components are discussed. The image reconstruction components covered include noise prewhitening for phased array data acquisition, interpolation needed to reconstruct square pixels, raw data filtering for reducing Gibbs ringing artifacts, Fourier transforms connecting the raw data with image space, and phased array coil combination. The treatment of phased array coils includes a general explanation of parallel imaging as a coil combination technique. The review is aimed at readers with no signal processing experience and should enable them to understand what role basic image reconstruction steps play in the formation of clinical images and how the resulting image quality is described. © 2014 Wiley Periodicals, Inc.
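
    A toy version of the last two steps, the Fourier transform to image space and a root-sum-of-squares coil combination, assuming fully sampled Cartesian raw data:

        import numpy as np

        def reconstruct(kspace):
            """kspace: (coils, ny, nx) complex raw data. Inverse FFT per
            coil, then root-sum-of-squares coil combination."""
            imgs = np.fft.fftshift(
                np.fft.ifft2(np.fft.ifftshift(kspace, axes=(-2, -1)),
                             axes=(-2, -1)), axes=(-2, -1))
            return np.sqrt((np.abs(imgs) ** 2).sum(axis=0))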

  23. Combined Acquisition/Processing For Data Reduction

    NASA Astrophysics Data System (ADS)

    Kruger, Robert A.

    1982-01-01

    Digital image processing systems necessarily consist of three components: acquisition, storage/retrieval, and processing. The acquisition component requires the greatest data handling rates. By coupling the acquisition with some online hardwired processing, data rates and capacities for short-term storage can be reduced. Furthermore, long-term storage requirements can be reduced further by appropriate processing and editing of image data contained in short-term memory. The net result could be reduced performance requirements for mass storage, processing, and communication systems. Reduced amounts of data should also speed later data analysis and diagnostic decision making.

  24. My Body Looks Like That Girl’s: Body Mass Index Modulates Brain Activity during Body Image Self-Reflection among Young Women

    PubMed Central

    Wen, Xin; She, Ying; Vinke, Petra Corianne; Chen, Hong

    2016-01-01

    Body image distress or body dissatisfaction is one of the most common consequences of obesity and overweight. We investigated the neural bases of body image processing in overweight and average weight young women to understand whether brain regions that were previously found to be involved in processing self-reflective, perceptive and affective components of body image would show different activation between two groups. Thirteen overweight (O-W group, age = 20.31±1.70 years) and thirteen average weight (A-W group, age = 20.15±1.62 years) young women underwent functional magnetic resonance imaging while performing a body image self-reflection task. Among both groups, whole-brain analysis revealed activations of a brain network related to perceptive and affective components of body image processing. ROI analysis showed a main effect of group in ACC as well as a group by condition interaction within bilateral EBA, bilateral FBA, right IPL, bilateral DLPFC, left amygdala and left MPFC. For the A-W group, simple effect analysis revealed stronger activations in Thin-Control compared to Fat-Control condition within regions related to perceptive (including bilateral EBA, bilateral FBA, right IPL) and affective components of body image processing (including bilateral DLPFC, left amygdala), as well as self-reference (left MPFC). The O-W group only showed stronger activations in Fat-Control than in Thin-Control condition within regions related to the perceptive component of body image processing (including left EBA and left FBA). Path analysis showed that in the Fat-Thin contrast, body dissatisfaction completely mediated the group difference in brain response in left amygdala across the whole sample. Our data are the first to demonstrate differences in brain response to body pictures between average weight and overweight young females involved in a body image self-reflection task. These results provide insights for understanding the vulnerability to body image distress among overweight or obese young females. PMID:27764116

  25. My Body Looks Like That Girl's: Body Mass Index Modulates Brain Activity during Body Image Self-Reflection among Young Women.

    PubMed

    Gao, Xiao; Deng, Xiao; Wen, Xin; She, Ying; Vinke, Petra Corianne; Chen, Hong

    2016-01-01

    Body image distress or body dissatisfaction is one of the most common consequences of obesity and overweight. We investigated the neural bases of body image processing in overweight and average weight young women to understand whether brain regions that were previously found to be involved in processing self-reflective, perceptive and affective components of body image would show different activation between two groups. Thirteen overweight (O-W group, age = 20.31±1.70 years) and thirteen average weight (A-W group, age = 20.15±1.62 years) young women underwent functional magnetic resonance imaging while performing a body image self-reflection task. Among both groups, whole-brain analysis revealed activations of a brain network related to perceptive and affective components of body image processing. ROI analysis showed a main effect of group in ACC as well as a group by condition interaction within bilateral EBA, bilateral FBA, right IPL, bilateral DLPFC, left amygdala and left MPFC. For the A-W group, simple effect analysis revealed stronger activations in Thin-Control compared to Fat-Control condition within regions related to perceptive (including bilateral EBA, bilateral FBA, right IPL) and affective components of body image processing (including bilateral DLPFC, left amygdala), as well as self-reference (left MPFC). The O-W group only showed stronger activations in Fat-Control than in Thin-Control condition within regions related to the perceptive component of body image processing (including left EBA and left FBA). Path analysis showed that in the Fat-Thin contrast, body dissatisfaction completely mediated the group difference in brain response in left amygdala across the whole sample. Our data are the first to demonstrate differences in brain response to body pictures between average weight and overweight young females involved in a body image self-reflection task. These results provide insights for understanding the vulnerability to body image distress among overweight or obese young females.

  26. Image Processing for Teaching: Transforming a Scientific Research Tool into an Educational Technology.

    ERIC Educational Resources Information Center

    Greenberg, Richard

    1998-01-01

    Describes the Image Processing for Teaching (IPT) project, which provides digital image processing to excite students about science and mathematics as they use research-quality software on microcomputers. Provides information on IPT, a dissemination project whose components have included widespread teacher education and curriculum-based materials…

  27. Component-based target recognition inspired by human vision

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Agyepong, Kwabena

    2009-05-01

    In contrast with machine vision, humans can recognize an object against a complex background with great flexibility. For example, given the task of finding and circling all cars in a picture (with no further information), you may build a virtual image in your mind from the task (or target) description before looking at the picture. Specifically, the virtual car image may be composed of key components such as the driver cabin and wheels. In this paper, we propose a component-based target recognition method that simulates the human recognition process. The component templates (equivalent to the virtual image in mind) of the target (car) are manually decomposed from the target feature image. Meanwhile, the edges of the test image are extracted using a difference-of-Gaussians (DoG) model that simulates the spatiotemporal response of the visual process. A phase correlation matching algorithm is then applied to match the templates with the test edge image. If all key component templates match the object under examination, the object is recognized as the target. Besides recognition accuracy, we also investigate whether the method works with partial targets (half cars). In our experiments, several natural pictures taken on streets were used to test the proposed method. The preliminary results show that the component-based recognition method is very promising.
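
    The edge-extraction and matching stages might be sketched as follows; the DoG scales are assumed values, and the template is zero-padded so that OpenCV's phase correlation can compare equal-sized arrays:

        import cv2
        import numpy as np

        def match_component(image, template):
            """DoG edge map of the scene, then phase correlation against a
            component template; a strong response peak suggests a match."""
            img = image.astype(np.float32)
            dog = (cv2.GaussianBlur(img, (0, 0), 1.0)
                   - cv2.GaussianBlur(img, (0, 0), 2.0))
            tpl = np.zeros_like(dog)
            th, tw = template.shape
            tpl[:th, :tw] = template.astype(np.float32)
            shift, response = cv2.phaseCorrelate(dog, tpl)
            return shift, response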

  28. Desktop publishing and medical imaging: paper as hardcopy medium for digital images.

    PubMed

    Denslow, S

    1994-08-01

    Desktop-publishing software and hardware have progressed to the point that many widely used word-processing programs can print high-quality digital images with many shades of gray from black to white. Accordingly, it should be relatively easy to print digital medical images on paper for reports, instructional materials, and research notes. Components were assembled for extracting image data from medical imaging devices and converting the data to a form usable by word-processing software. A system incorporating these components was implemented in a medical setting and has been operating for 18 months. The use of this system by medical staff has been monitored.

  29. Geometric shapes inversion method of space targets by ISAR image segmentation

    NASA Astrophysics Data System (ADS)

    Huo, Chao-ying; Xing, Xiao-yu; Yin, Hong-cheng; Li, Chen-guang; Zeng, Xiang-yun; Xu, Gao-gui

    2017-11-01

    The geometric shape of a target is an effective characteristic in the process of space target recognition. This paper proposes a method for shape inversion of space targets based on component segmentation of ISAR images. The Radon transform, Hough transform, K-means clustering, and triangulation are introduced into the ISAR image processing. First, the Radon transform and edge detection are used to extract the space target's main-body axis and solar-panel axis from the ISAR image. Then the target's main body, solar panel, and rectangular and circular antennas are segmented from the ISAR image based on image detection theory. Finally, the size of each structural component is computed. The effectiveness of this method is verified using simulated data for typical targets.
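
    The axis-extraction step can be approximated with the Radon transform alone, as in this sketch on a binarized ISAR image (scikit-image is assumed to be available):

        import numpy as np
        from skimage.transform import radon

        def main_axis_angle(binary_isar):
            """Return the projection angle (degrees) whose Radon projection
            has the largest peak, an estimate of the main-body axis."""
            angles = np.arange(0.0, 180.0, 1.0)
            sinogram = radon(binary_isar.astype(float), theta=angles,
                             circle=False)
            return angles[np.argmax(sinogram.max(axis=0))]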

  30. High dynamic range algorithm based on HSI color space

    NASA Astrophysics Data System (ADS)

    Zhang, Jiancheng; Liu, Xiaohua; Dong, Liquan; Zhao, Yuejin; Liu, Ming

    2014-10-01

    This paper presents a high dynamic range algorithm based on the HSI color space. The first problem is to keep the hue and saturation of the original image and conform to human visual perception: the input image data are converted to the HSI color space, which includes an intensity dimension. The second problem is to raise the speed of the algorithm: an integral image is used to compute the average intensity of every pixel at a certain scale, taken as the local intensity component of the image, from which the detail intensity component is derived. The third problem is to adjust the overall image intensity: an S-shaped curve is obtained from the original image information, and the local intensity component is adjusted according to it. The fourth problem is to enhance detail information: the detail intensity component is adjusted according to a curve designed in advance. The weighted sum of the adjusted local intensity component and the adjusted detail intensity component is the final intensity. Converting the synthesized intensity and the other two dimensions to the output color space yields the final processed image.
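
    A condensed sketch of the intensity pipeline: a box filter (computable with an integral image) supplies the local component, the residual is the detail component, and a sigmoid stands in for the image-derived S-curve; the scale and gain values are assumptions:

        import cv2
        import numpy as np

        def tone_map_intensity(i, scale=15, alpha=8.0, detail_gain=1.5):
            """Split the HSI intensity into local mean and detail, apply an
            S-curve to the local part, amplify the detail, and recombine."""
            i = i.astype(np.float32) / 255.0
            local = cv2.boxFilter(i, -1, (scale, scale))  # local intensity
            detail = i - local                            # detail intensity
            s_curve = 1.0 / (1.0 + np.exp(-alpha * (local - local.mean())))
            return np.clip(s_curve + detail_gain * detail, 0.0, 1.0)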

  31. 3D Texture Features Mining for MRI Brain Tumor Identification

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Saba, Tanzila; Nayer, Fatima; Syed, Afraz Zahra

    2014-03-01

    Medical image segmentation is a process to extract regions of interest and to divide an image into its individual meaningful, homogeneous components. These components have a strong relationship with the objects of interest in an image. For computer-aided diagnosis and therapy, medical image segmentation is an initial mandatory step. It is a sophisticated and challenging task because of the complex nature of medical images, and successful medical image analysis depends heavily on segmentation accuracy. Texture is one of the major features used to identify regions of interest in an image or to classify an object, but 2D texture features yield poor classification results. Hence, this paper presents 3D feature extraction using texture analysis, with an SVM as the segmentation technique in the testing methodology.

  32. Unsupervised Neural Network Quantifies the Cost of Visual Information Processing.

    PubMed

    Orbán, Levente L; Chartier, Sylvain

    2015-01-01

    Untrained, "flower-naïve" bumblebees display behavioural preferences when presented with visual properties such as colour, symmetry, spatial frequency and others. Two unsupervised neural networks were implemented to understand the extent to which these models capture elements of bumblebees' unlearned visual preferences towards flower-like visual properties. The computational models, which are variants of Independent Component Analysis and Feature-Extracting Bidirectional Associative Memory, use images of test-patterns that are identical to ones used in behavioural studies. Each model works by decomposing images of floral patterns into meaningful underlying factors. We reconstruct the original floral image using the components and compare the quality of the reconstructed image to the original image. Independent Component Analysis matches behavioural results substantially better across several visual properties. These results are interpreted to support a hypothesis that the temporal and energetic costs of information processing by pollinators served as a selective pressure on floral displays: flowers adapted to pollinators' cognitive constraints.
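
    The ICA arm of the comparison reduces to a few lines with scikit-learn; the component count is an assumed parameter, and the reconstruction error serves as a simple proxy for the information-processing cost discussed above:

        import numpy as np
        from sklearn.decomposition import FastICA

        def ica_reconstruction_error(patterns, n_components=16):
            """patterns: (n_images, n_pixels) flattened floral patterns.
            Decompose, reconstruct, and return per-image RMS error."""
            ica = FastICA(n_components=n_components, random_state=0,
                          max_iter=1000)
            sources = ica.fit_transform(patterns)
            recon = ica.inverse_transform(sources)
            return np.sqrt(np.mean((patterns - recon) ** 2, axis=1))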

  33. Nonlinear spike-and-slab sparse coding for interpretable image encoding.

    PubMed

    Shelton, Jacquelyn A; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg

    2015-01-01

    Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros for e.g. the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process.
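
    The generative side of the model is easy to sketch: spike-and-slab coefficients select and scale dictionary elements, and each pixel takes the contribution that is largest in magnitude (the max rule); the sparsity and slab parameters are illustrative:

        import numpy as np

        def sample_image(dictionary, pi=0.1, slab_std=1.0, seed=None):
            """dictionary: (H, D) array of H components over D pixels.
            Draw spike-and-slab coefficients and combine with the max rule."""
            rng = np.random.default_rng(seed)
            H, D = dictionary.shape
            s = rng.binomial(1, pi, H) * rng.normal(0.0, slab_std, H)
            contrib = s[:, None] * dictionary            # (H, D)
            winner = np.argmax(np.abs(contrib), axis=0)  # occluding component
            return contrib[winner, np.arange(D)]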

  34. Nonlinear Spike-And-Slab Sparse Coding for Interpretable Image Encoding

    PubMed Central

    Shelton, Jacquelyn A.; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg

    2015-01-01

    Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros for e.g. the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process. PMID:25954947

  35. Segmentation-based retrospective shading correction in fluorescence microscopy E. coli images for quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.

    2009-10-01

    Due to inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading, or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on segmentation results. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes, the background and the cells, where the intensity variation within each class is close to zero if there is no shading. We therefore make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm that minimizes the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only by visual inspection but also in numerical evaluation. The proposed method should be useful for further quantitative analysis, especially for comparison of protein expression values.
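
    One iteration of the coupled loop might look like the sketch below; Gaussian smoothing stands in for the paper's fast minimizer of intra-class variance, and the global-mean threshold is a crude stand-in for the segmentation step:

        import numpy as np
        from scipy.ndimage import gaussian_filter, median_filter

        def correct_shading(img, sigma=50, n_iter=3):
            """Iteratively segment, estimate the multiplicative shading
            field from class-normalized intensities, and divide it out."""
            img = median_filter(img.astype(np.float64), size=3)  # additive noise
            corrected = img.copy()
            for _ in range(n_iter):
                cells = corrected > corrected.mean()   # two-class segmentation
                flat = np.where(cells,
                                corrected / corrected[cells].mean(),
                                corrected / corrected[~cells].mean())
                bias = gaussian_filter(flat, sigma)    # smooth shading field
                corrected = img / np.maximum(bias, 1e-6)
            return corrected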

  36. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing.

    PubMed

    Leong, Siow Hoo; Ong, Seng Huat

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering, with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF suggests domain adaptation, which means changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.

  37. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing

    PubMed Central

    Leong, Siow Hoo

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering, with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF suggests domain adaptation, which means changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index. PMID:28686634

  38. Image processing for flight crew enhanced situation awareness

    NASA Technical Reports Server (NTRS)

    Roberts, Barry

    1993-01-01

    This presentation describes the image processing work that is being performed for the Enhanced Situational Awareness System (ESAS) application. Specifically, the presented work supports the Enhanced Vision System (EVS) component of ESAS.

  39. Computational analysis of Pelton bucket tip erosion using digital image processing

    NASA Astrophysics Data System (ADS)

    Shrestha, Bim Prasad; Gautam, Bijaya; Bajracharya, Tri Ratna

    2008-03-01

    Erosion of hydro turbine components by sand-laden river water is one of the biggest problems in the Himalayas. Even with sediment trapping systems, complete removal of fine sediment from water is impossible and uneconomical; hence most turbine components in Himalayan rivers are exposed to sand-laden water and subject to erosion. Pelton buckets, which are widely used in different hydropower plants, erode in the continuous presence of sand particles in the water. The resulting erosion causes an increase in splitter thickness, which is supposed to be theoretically zero. This increase in splitter thickness gives rise to back-hitting of water, followed by a decrease in turbine efficiency. This paper describes the process of measuring sharp edges such as the bucket tip using digital image processing. An image of each bucket is captured, and the bucket is then run for 72 hours with the sand concentration in the water hitting the bucket closely controlled and monitored. Afterwards, an image of the test bucket is taken under the same conditions, and the process is repeated 10 times. The digital image processing applied here encompasses image enhancement in both the spatial and frequency domains, as well as processes that extract attributes from images, up to and including measurement of the splitter tip. The image processing was done on the MATLAB 6.5 platform. The results show that erosion of sharp edges can be accurately detected and the erosion profile generated using image processing techniques.
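
    Once the images are enhanced, the tip measurement itself can be as simple as locating edge pixels along a scan line; the Canny thresholds and row index here are hypothetical and depend on the camera setup:

        import cv2
        import numpy as np

        def splitter_thickness(gray, row):
            """Measure splitter width (pixels) along one image row as the
            span between the outermost Canny edge pixels."""
            edges = cv2.Canny(gray, 50, 150)
            cols = np.flatnonzero(edges[row])
            return int(cols[-1] - cols[0]) if cols.size >= 2 else 0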

  40. Multi-focus image fusion algorithm using NSCT and MPCNN

    NASA Astrophysics Data System (ADS)

    Liu, Kang; Wang, Lianli

    2018-04-01

    Based on the nonsubsampled contourlet transform (NSCT) and a modified pulse coupled neural network (MPCNN), this paper proposes an effective method of image fusion. First, the source images are decomposed into low-frequency and high-frequency components using NSCT, and the low-frequency components are processed by regional statistical fusion rules. For the high-frequency components, the spatial frequency (SF) is calculated and input into the MPCNN model to obtain the relevant coefficients according to the fire-mapping image of the MPCNN. Finally, the fused image is reconstructed by the inverse transform of the low-frequency and high-frequency components. Compared with the wavelet transform (WT) and the traditional NSCT algorithm, experimental results indicate that the proposed method achieves an improvement in both human visual perception and objective evaluation, showing that the method is effective, practical, and performs well.
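
    Since NSCT and MPCNN implementations are not standard library fare, the sketch below substitutes a one-level wavelet transform and a choose-max high-frequency rule, to show only the overall low/high fusion structure:

        import numpy as np
        import pywt

        def fuse(img_a, img_b):
            """Average the low-frequency bands; pick the larger-magnitude
            coefficient in each high-frequency band."""
            la, (ha, va, da) = pywt.dwt2(img_a.astype(float), 'db2')
            lb, (hb, vb, db_) = pywt.dwt2(img_b.astype(float), 'db2')
            pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
            return pywt.idwt2(((la + lb) / 2.0,
                               (pick(ha, hb), pick(va, vb), pick(da, db_))),
                              'db2')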

  41. Detecting defective electrical components in heterogeneous infra-red images by spatial control charts

    NASA Astrophysics Data System (ADS)

    Jamshidieini, Bahman; Fazaee, Reza

    2016-05-01

    Distribution network components connect machines and other loads to electrical sources. If the resistance or current of any component exceeds its specified range, its temperature may exceed the operational limit, which can cause major problems. Therefore, these defects should be found and eliminated according to their severity. Although infra-red cameras have been used for inspecting electrical components, maintenance prioritization of distribution cubicles is mostly based on personal perception, and a lack of training data prevents engineers from developing image processing methods. New research on spatial control charts encouraged us to use statistical approaches instead of pattern recognition for the image processing. In the present study, a new scanning pattern that can tolerate heavy autocorrelation among adjacent pixels within an infra-red image was developed, and for the first time a combination of kernel smoothing, spatial control charts, and local robust regression was used to find defects within heterogeneous infra-red images of old distribution cubicles. This method does not need training data, an advantage that is crucially important when training data are not available.
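
    A heavily simplified spatial control chart is sketched below: a median filter supplies the local baseline (in place of the paper's kernel smoothing and local robust regression), and pixels beyond a robust upper control limit are flagged; the window size and multiplier are assumptions:

        import numpy as np
        from scipy.ndimage import median_filter

        def flag_hotspots(ir_img, size=15, k=3.0):
            """Flag pixels whose residual from a local baseline exceeds
            k robust standard deviations (MAD-based)."""
            img = ir_img.astype(np.float64)
            resid = img - median_filter(img, size=size)
            mad = np.median(np.abs(resid - np.median(resid)))
            return resid > k * 1.4826 * mad  # upper control limit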

  42. Analysis on unevenness of skin color using the melanin and hemoglobin components separated by independent component analysis of skin color image

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Fujiwara, Izumi; Inoue, Yayoi; Tsumura, Norimichi; Nakaguchi, Toshiya; Iwata, Kayoko

    2011-03-01

    Uneven distribution of skin color is one of the biggest concerns about facial skin appearance. Recently, several techniques to analyze skin color have been introduced that separate skin color information into chromophore components, such as melanin and hemoglobin. However, there are few reports on quantitative analysis of skin color unevenness that consider the type of chromophore, clusters of different sizes, and the concentration of each chromophore. We propose a new image analysis and simulation method based on chromophore analysis and spatial frequency analysis. The method is composed of three main techniques: independent component analysis (ICA) to extract hemoglobin and melanin chromophores from a single skin color image, an image pyramid technique that decomposes each chromophore into multi-resolution images that can be used to identify different cluster sizes or spatial frequencies, and analysis of the histogram obtained from each multi-resolution image to extract unevenness parameters. As an application of the method, we also introduce an image processing technique to change the unevenness of the melanin component. The method showed a high capability to analyze the unevenness of each skin chromophore: 1) vague unevenness on the skin could be discriminated from noticeable pigmentation such as freckles or acne; 2) by analyzing the unevenness parameters obtained from each multi-resolution image for Japanese women, age-related changes were observed in the parameters of the middle spatial frequencies; and 3) an image processing system modulating the parameters was proposed to change the unevenness of skin images along the axis of the observed age-related change in real time.
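
    The chromophore-extraction step follows the standard ICA-on-log-RGB recipe, sketched here; which of the two outputs is melanin and which is hemoglobin must be identified afterwards, e.g. from the mixing vectors:

        import numpy as np
        from sklearn.decomposition import FastICA

        def separate_chromophores(rgb):
            """Two-source ICA on -log(RGB) pixel vectors."""
            x = -np.log(np.clip(
                rgb.reshape(-1, 3).astype(np.float64) / 255.0, 1e-4, 1.0))
            sources = FastICA(n_components=2,
                              random_state=0).fit_transform(x)
            h, w = rgb.shape[:2]
            return sources[:, 0].reshape(h, w), sources[:, 1].reshape(h, w)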

  43. Image and Processing Models for Satellite Detection in Images Acquired by Space-based Surveillance-of-Space Sensors

    DTIC Science & Technology

    2010-09-01

    external sources 'L1' like zodiacal light (or diffuse nebulae) or stray light 'L2', and these components change with the telescope pointing. Bk(T, t): ... Astronomical scene background (zodiacal light, diffuse nebulae, etc.). L2(PA(tk), t): Image background component caused by stray light. MS

  44. An Image Retrieval and Processing Expert System for the World Wide Web

    NASA Technical Reports Server (NTRS)

    Rodriguez, Ricardo; Rondon, Angelica; Bruno, Maria I.; Vasquez, Ramon

    1998-01-01

    This paper presents a system being developed in the Laboratory of Applied Remote Sensing and Image Processing at the University of P.R. at Mayaguez. It describes the components that constitute its architecture. The main elements are a Data Warehouse, an Image Processing Engine, and an Expert System. Together, they provide a complete solution for researchers from different fields who use images in their investigations. Also, since it is available on the World Wide Web, it provides remote access to and processing of images.

  45. Single Channel Quantum Color Image Encryption Algorithm Based on HSI Model and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Gong, Li-Hua; He, Xiang-Tao; Tan, Ru-Chao; Zhou, Zhi-Hong

    2018-01-01

    In order to obtain high-quality color images, it is important to keep the hue component unchanged while emphasizing the intensity or saturation component. The Hue-Saturation-Intensity (HSI) model is a color model commonly used in image processing. A new single-channel quantum color image encryption algorithm based on the HSI model and the quantum Fourier transform (QFT) is investigated, where the color components of the original color image are converted to HSI and the logistic map is employed to diffuse the relationships among pixels in the color components. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. The cipher-text is a combination of a gray image and a phase matrix. Simulations and theoretical analyses demonstrate that the proposed single-channel quantum color image encryption scheme based on the HSI model and quantum Fourier transform is secure and effective.
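
    The classical portion of the scheme, logistic-map diffusion of a channel, can be sketched as a self-inverse XOR with a chaotic keystream; the initial value and control parameter are illustrative, and the quantum Fourier transform stage is outside the scope of this sketch:

        import numpy as np

        def logistic_keystream(n, x0=0.37, mu=3.99):
            """Byte keystream from the logistic map x -> mu*x*(1-x)."""
            ks = np.empty(n, dtype=np.uint8)
            for i in range(n):
                x0 = mu * x0 * (1.0 - x0)
                ks[i] = int(x0 * 256) & 0xFF
            return ks

        def diffuse(channel, x0=0.37):
            """XOR an 8-bit image channel with the keystream; applying the
            same function again undoes the diffusion."""
            flat = channel.astype(np.uint8).reshape(-1)
            return (flat ^ logistic_keystream(flat.size, x0)
                    ).reshape(channel.shape)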

  46. Bit-level plane image encryption based on coupled map lattice with time-varying delay

    NASA Astrophysics Data System (ADS)

    Lv, Xiupin; Liao, Xiaofeng; Yang, Bo

    2018-04-01

    Most existing image encryption algorithms have two basic properties, confusion and diffusion, in a pixel-level plane based on various chaotic systems. In fact, permutation in a pixel-level plane cannot change the statistical characteristics of an image, and many existing color image encryption schemes use the same method to encrypt the R, G and B components, which means that the three color components of a color image are processed three times independently. Additionally, the dynamical performance of a single chaotic system degrades greatly with finite precision in computer simulations. In this paper, a novel coupled map lattice with time-varying delay is therefore applied to bit-level-plane encryption of color images to solve the above issues. A spatiotemporal chaotic system, which has both a much longer period under digitization and excellent cryptographic performance, is recommended. The time-varying delay embedded in the coupled map lattice enhances the dynamical behavior of the system. The bit-level-plane image encryption algorithm greatly reduces the statistical characteristics of an image through the scrambling processing. The R, G and B components cross and mix with one another, which reduces the correlation among the three components. Finally, simulations are carried out, and all the experimental results illustrate that the proposed image encryption algorithm is highly secure while also demonstrating superior performance.
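
    Bit-level-plane scrambling itself is compact in NumPy, as the sketch below shows with a plain logistic map generating the permutation; the paper's coupled map lattice with time-varying delay would replace that generator:

        import numpy as np

        def logistic_sequence(n, x0=0.61, mu=3.99):
            xs = np.empty(n)
            for i in range(n):
                x0 = mu * x0 * (1.0 - x0)
                xs[i] = x0
            return xs

        def scramble_bit_planes(img, key=0.61):
            """Decompose a uint8 image into bit planes, permute all bits
            chaotically, and reassemble."""
            bits = np.unpackbits(img[..., None], axis=-1)  # (H, W, 8)
            flat = bits.reshape(-1)
            perm = np.argsort(logistic_sequence(flat.size, x0=key))
            return np.packbits(flat[perm].reshape(bits.shape),
                               axis=-1)[..., 0]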

  47. Architectures for single-chip image computing

    NASA Astrophysics Data System (ADS)

    Gove, Robert J.

    1992-04-01

    This paper will focus on the architectures of VLSI programmable processing components for image computing applications. TI, the maker of industry-leading RISC, DSP, and graphics components, has developed an architecture for a new-generation of image processors capable of implementing a plurality of image, graphics, video, and audio computing functions. We will show that the use of a single-chip heterogeneous MIMD parallel architecture best suits this class of processors--those which will dominate the desktop multimedia, document imaging, computer graphics, and visualization systems of this decade.

  48. The application of digital techniques to the analysis of metallurgical experiments

    NASA Technical Reports Server (NTRS)

    Rathz, T. J.

    1977-01-01

    The application of a specific digital computer system (known as the Image Data Processing System) to the analysis of three NASA-sponsored metallurgical experiments is discussed in some detail. The basic hardware and software components of the Image Data Processing System are presented. Many figures are presented in the discussion of each experimental analysis in an attempt to show the accuracy and speed that the Image Data Processing System affords in analyzing photographic images dealing with metallurgy, and in particular with material processing.

  49. The PICWidget

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark

    2007-01-01

    The Plug-in Image Component Widget (PICWidget) is a software component for building digital imaging applications. The component is part of a methodology described in GIS Methodology for Planning Planetary-Rover Operations (NPO-41812), which appears elsewhere in this issue of NASA Tech Briefs. Planetary rover missions return a large number and wide variety of image data products that vary in complexity in many ways. Supported by a powerful, flexible image-data-processing pipeline, the PICWidget can process and render many types of imagery, including (but not limited to) thumbnail, subframed, downsampled, stereoscopic, and mosaic images; images coregistered with orbital data; and synthetic red/green/blue images. The PICWidget is capable of efficiently rendering images from data representing many more pixels than are available at the computer workstation where the images are to be displayed. The PICWidget is implemented as an Eclipse plug-in using the Standard Widget Toolkit, which provides a straightforward interface for re-use of the PICWidget in any number of application programs built on the Eclipse application framework. Because the PICWidget is tile-based and performs aggressive tile caching, it has the flexibility to perform faster or slower, depending on whether more or less memory is available.

  50. Robotic Vision-Based Localization in an Urban Environment

    NASA Technical Reports Server (NTRS)

    Mchenry, Michael; Cheng, Yang; Matthies

    2007-01-01

    A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. The system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions.

    Visual Odometry: This component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. It incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images (see figure).

    Urban Feature Detection and Ranging: Using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: 1. An edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image. 2. Straight-line segments of edges are extracted from the linked lists generated in step 1; any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects. 3. A gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.
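
    The feature-tracking core of the visual-odometry component maps naturally onto OpenCV primitives, as in this sketch; corner counts and quality thresholds are assumed values, and the six-degree-of-freedom pose estimation that consumes the tracked pairs is omitted:

        import cv2
        import numpy as np

        def track_features(prev_gray, next_gray, max_corners=200):
            """Select corners in the previous frame and track them into the
            next frame with pyramidal Lucas-Kanade optical flow."""
            p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                         qualityLevel=0.01, minDistance=8)
            p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                     p0, None)
            good = status.ravel() == 1
            return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)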

  51. Study of ionospheric anomalies due to impact of typhoon using Principal Component Analysis and image processing

    NASA Astrophysics Data System (ADS)

    LIN, JYH-WOEI

    2012-08-01

    Principal Component Analysis (PCA) and image processing are used to determine Total Electron Content (TEC) anomalies in the F-layer of the ionosphere relating to Typhoon Nakri for 29 May, 2008 (UTC). PCA and image processing are applied to the global ionospheric map (GIM) with transforms conducted for the time period 12:00-14:00 UT on 29 May, 2008 when the wind was most intense. Results show that at a height of approximately 150-200 km the TEC anomaly is highly localized; however, it becomes more intense and widespread with height. Potential causes of these results are discussed with emphasis given to acoustic gravity waves caused by wind force.
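
    The PCA step amounts to an eigen-decomposition of the stacked TEC maps, sketched here with scikit-learn; the array layout and component count are assumptions:

        import numpy as np
        from sklearn.decomposition import PCA

        def tec_principal_components(gim_frames, n_components=3):
            """gim_frames: (n_times, n_lat, n_lon) TEC maps. Returns the
            temporal scores and the leading spatial modes, in which
            localized anomalies stand out."""
            t, nlat, nlon = gim_frames.shape
            pca = PCA(n_components=n_components)
            scores = pca.fit_transform(gim_frames.reshape(t, -1))
            modes = pca.components_.reshape(n_components, nlat, nlon)
            return scores, modes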

  52. Blind source separation of ex-vivo aorta tissue multispectral images

    PubMed Central

    Galeano, July; Perez, Sandra; Montoya, Yonatan; Botina, Deivid; Garzón, Johnson

    2015-01-01

    Blind Source Separation (BSS) methods aim to decompose a given signal into its main components or source signals. These techniques have been widely used in the literature for the analysis of biomedical images, in order to extract the main components of an organ or tissue under study; the analysis of skin images to extract melanin and hemoglobin is one example. This paper presents a proof of concept of the use of source separation on ex-vivo aorta tissue multispectral images. The images are acquired with an interference filter-based imaging system and processed by means of two algorithms: Independent Component Analysis and Non-negative Matrix Factorization. In both cases, it is possible to obtain maps that quantify the concentrations of the main chromophores present in aortic tissue. The algorithms also recover the spectral absorbances of the main tissue components. These spectral signatures were compared against theoretical ones using correlation coefficients, which reach values close to 0.9, a good indicator of the method's performance. The correlation coefficients also allow each concentration map to be identified with the evaluated chromophore. The results suggest that multi/hyper-spectral systems together with image processing techniques are a potential tool for the analysis of cardiovascular tissue. PMID:26137366
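
    The NMF variant of the unmixing can be sketched in a few lines; the data cube layout and source count are assumptions, and the input must be non-negative (e.g. absorbance):

        import numpy as np
        from sklearn.decomposition import NMF

        def unmix(cube, n_sources=2):
            """cube: (h, w, bands) non-negative multispectral data.
            Factorize pixels ~ concentrations @ spectra."""
            h, w, bands = cube.shape
            model = NMF(n_components=n_sources, init='nndsvd', max_iter=500)
            conc = model.fit_transform(cube.reshape(-1, bands))
            spectra = model.components_        # (sources, bands)
            return conc.reshape(h, w, n_sources), spectra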

  13. Fusion of visible and near-infrared images based on luminance estimation by weighted luminance algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Zhun; Cheng, Feiyan; Shi, Junsheng; Huang, Xiaoqiao

    2018-01-01

    In a low-light scene, color images must be captured at a high-gain or long-exposure setting to avoid a visible flash. However, such settings lead to color images with serious noise or motion blur. Several methods have been proposed to improve a noisy color image using an invisible near-infrared (NIR) flash image. In one novel method, the luminance and chroma components of the improved color image are estimated from different image sources [1]: the luminance component is estimated mainly from the NIR image via spectral estimation, and the chroma component is estimated from the noisy color image by denoising. However, estimating the luminance component is challenging: the novel method requires generating learning data pairs, and its processing and algorithm are complex, making practical application difficult. To reduce the complexity of the luminance estimation, an improved luminance estimation algorithm is presented in this paper, which weights the NIR image and the denoised color image with weighting coefficients based on the mean value and standard deviation of both images. Experimental results show that the proposed method achieves the same fusion effect as the novel method in terms of color fidelity and texture quality, while the algorithm is simpler and more practical.
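
    A minimal sketch of such a weighted luminance estimate, assuming a standard-deviation-proportional weighting (the abstract states only that the coefficients derive from the mean value and standard deviation of the two images, so the exact formula below is an illustrative assumption):

    ```python
    import numpy as np

    def fuse_luminance(nir, denoised_luma):
        """Weighted luminance estimate from an NIR image and a denoised color
        image's luma channel (both float arrays in [0, 1], same shape).

        The weighting here is proportional to each image's standard deviation,
        an assumption standing in for the paper's unspecified formula.
        """
        w_nir = nir.std() / (nir.std() + denoised_luma.std() + 1e-8)
        w_vis = 1.0 - w_nir
        # Match the NIR image's mean level to the visible luma before mixing.
        nir_adj = nir - nir.mean() + denoised_luma.mean()
        return np.clip(w_nir * nir_adj + w_vis * denoised_luma, 0.0, 1.0)
    ```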

  14. Apparatus and method for temperature mapping a turbine component in a high temperature combustion environment

    DOEpatents

    Baleine, Erwan; Sheldon, Danny M

    2014-06-10

    Method and system for calibrating a thermal radiance map of a turbine component in a combustion environment. At least one spot (18) of material is disposed on a surface of the component. An infrared (IR) imager (14) is arranged so that the spot is within the field of view of the imager to acquire imaging data of the spot. A processor (30) is configured to process the imaging data to generate a sequence of images as the temperature of the combustion environment is increased. A monitor (42, 44) may be coupled to the processor to monitor the sequence of images to determine an occurrence of a physical change of the spot as the temperature is increased. A calibration module (46) may be configured to assign a first temperature value to the surface of the turbine component when the occurrence of the physical change of the spot is determined.

  15. Design and analysis of x-ray vision systems for high-speed detection of foreign body contamination in food

    NASA Astrophysics Data System (ADS)

    Graves, Mark; Smith, Alexander; Batchelor, Bruce G.; Palmer, Stephen C.

    1994-10-01

    In the food industry there is an ever-increasing need to control and monitor food quality. In recent years, fully automated x-ray inspection systems have been used to inspect food on-line for foreign-body contamination. These systems involve a complex integration of x-ray imaging components with state-of-the-art high-speed image processing. The quality of the x-ray image obtained by such systems is very poor compared with images obtained from other inspection processes, which makes reliable detection of very small, low-contrast defects extremely difficult. It is therefore extremely important to optimize the x-ray imaging components to give the best image possible. In this paper we present a method of analyzing the x-ray imaging system in order to assess the contrast obtained when viewing small defects.

  16. Automatic recognition of light source from color negative films using sorting classification techniques

    NASA Astrophysics Data System (ADS)

    Sanger, Demas S.; Haneishi, Hideaki; Miyake, Yoichi

    1995-08-01

    This paper proposes a simple and automatic method for recognizing the light sources used with various color negative film brands by means of digital image processing. First, we stretched the image obtained from a negative based on the standardized scaling factors, then extracted the dominant color component among the red, green, and blue components of the stretched image. The dominant color component became the discriminator for the recognition. The experimental results verified that any one of the three techniques could recognize the light source from negatives of any single film brand and of all brands combined, with greater than 93.2% and 96.6% correct recognition, respectively. This method is significant for the automation of color quality control in color reproduction from color negative film in mass processing and printing machines.

  17. Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Rukundo, Olivier

    2018-04-01

    This paper proposes a novel enhancement method for capsule endoscopy images based exclusively on the bilinear interpolation algorithm. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate the average pixel value, identical for each pixel in that particular group. After these calculations, groups of identical pixels are overlapped successively in the horizontal and vertical directions to achieve a preliminary-enhanced image. The final-enhanced image is achieved by halving the sum of the original and preliminary-enhanced image pixels. Quantitative and qualitative experiments were conducted, focusing on pairwise comparisons between original and enhanced images. The final-enhanced images generally have the best diagnostic quality and give more detail about the visibility of vessels and structures in capsule endoscopy images.
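
    A minimal NumPy sketch of the described pipeline, under the interpretation that the half-unit-weighted bilinear average of a 2x2 group is its plain mean and that overlapping the groups amounts to a 2x2 sliding-window average; this is one reading of the abstract, not the authors' reference implementation.

    ```python
    import numpy as np

    def half_unit_bilinear_enhance(img):
        """Half-unit weighted bilinear enhancement sketch.

        img: float RGB image, shape (H, W, 3), values in [0, 1]. With half-unit
        weights, the bilinear weighting of a 2x2 group reduces to its plain
        average; overlapping the groups in the horizontal and vertical
        directions is implemented here as a 2x2 sliding-window mean.
        """
        padded = np.pad(img, ((0, 1), (0, 1), (0, 0)), mode="edge")
        prelim = (padded[:-1, :-1] + padded[:-1, 1:] +
                  padded[1:, :-1] + padded[1:, 1:]) / 4.0   # 2x2 group averages
        return (img + prelim) / 2.0                          # halve the sum
    ```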

  18. The N170 component is sensitive to face-like stimuli: a study of Chinese Peking opera makeup.

    PubMed

    Liu, Tiantian; Mu, Shoukuan; He, Huamin; Zhang, Lingcong; Fan, Cong; Ren, Jie; Zhang, Mingming; He, Weiqi; Luo, Wenbo

    2016-12-01

    The N170 component is considered a neural marker of face-sensitive processing. In the present study, the face-sensitive N170 component of event-related potentials (ERPs) was investigated with a modified oddball paradigm using a natural face (the standard stimulus), human- and animal-like makeup stimuli, scrambled control images that mixed human- and animal-like makeup pieces, and a grey control image. Nineteen participants were instructed to respond within 1000 ms by pressing the 'F' or 'J' key in response to the standard or deviant stimuli, respectively. We simultaneously recorded ERPs, response accuracy, and reaction times. The behavioral results showed that the main effect of stimulus type was significant for reaction time, whereas there were no significant differences in response accuracy among stimulus types. In the ERPs, the N170 amplitudes elicited by human-like makeup stimuli, animal-like makeup stimuli, scrambled control images, and the grey control image progressively decreased. A right-hemisphere advantage was observed in the N170 amplitudes for human-like makeup stimuli, animal-like makeup stimuli, and scrambled control images, but not for the grey control image. These results indicate that the N170 component is sensitive to face-like stimuli and reflects configural processing in face recognition.

  19. Independent component model for cognitive functions of multiple subjects using [15O]H2O PET images.

    PubMed

    Park, Hae-Jeong; Kim, Jae-Jin; Youn, Tak; Lee, Dong Soo; Lee, Myung Chul; Kwon, Jun Soo

    2003-04-01

    An independent component model of multiple subjects' positron emission tomography (PET) images is proposed to explore the overall functional components involved in a task and to explain subject-specific variations of metabolic activities under altered experimental conditions, utilizing the independent component analysis (ICA) concept. As PET images represent time-compressed activities of several cognitive components, we derived a mathematical model to decompose functional components from cross-sectional images based on two fundamental hypotheses: (1) all subjects share basic functional components that are common across subjects and spatially independent of each other in relation to the given experimental task, and (2) all subjects share common functional components across tasks, which are also spatially independent. The variations of hemodynamic activities according to subjects or tasks can be explained by variations in the usage weights of the functional components. We investigated the plausibility of the model using serial cognitive experiments of simple object perception, object recognition, two-back working memory, and divided attention of a syntactic process. We found that the independent component model satisfactorily explained the functional components involved in the task, and we discuss here the application of ICA to multiple subjects' PET images to explore the functional association of brain activations. Copyright 2003 Wiley-Liss, Inc.

  20. Near-infrared spectroscopic tissue imaging for medical applications

    DOEpatents

Demos, Stavros [Livermore, CA; Staggs, Michael C [Tracy, CA

    2006-03-21

    Near infrared imaging using elastic light scattering and tissue autofluorescence are explored for medical applications. The approach involves imaging using cross-polarized elastic light scattering and tissue autofluorescence in the Near Infra-Red (NIR) coupled with image processing and inter-image operations to differentiate human tissue components.

  1. Near-infrared spectroscopic tissue imaging for medical applications

    DOEpatents

    Demos, Stavros [Livermore, CA; Staggs, Michael C [Tracy, CA

    2006-12-12

    Near infrared imaging using elastic light scattering and tissue autofluorescence are explored for medical applications. The approach involves imaging using cross-polarized elastic light scattering and tissue autofluorescence in the Near Infra-Red (NIR) coupled with image processing and inter-image operations to differentiate human tissue components.

  2. Quantitative characterization of the carbon/carbon composites components based on video of polarized light microscope.

    PubMed

    Li, Yixian; Qi, Lehua; Song, Yongshan; Chao, Xujiang

    2017-06-01

    The components of carbon/carbon (C/C) composites have a significant influence on their thermal and mechanical properties, so a quantitative characterization of the components is necessary to study the microstructure of C/C composites and, further, to improve their macroscopic properties. Considering that the extinction crosses of the pyrocarbon matrix have significant motion features, polarized light microscope (PLM) video is used to characterize C/C composites quantitatively because it contains sufficient dynamic and structural information. The optical flow method is introduced to compute the optical flow field between adjacent frames and to segment the components of C/C composites from the PLM image by image processing. The matrix with different textures is then re-segmented by the length difference of the motion vectors, and the fraction of each component and the extinction angle of the pyrocarbon matrix are calculated directly. Finally, the C/C composites are successfully characterized in terms of carbon fiber, pyrocarbon, and pores by a series of image processing operators based on PLM video, and the errors of the component fractions are less than 15%. © 2017 Wiley Periodicals, Inc.
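
    A rough sketch of the optical-flow segmentation idea using OpenCV's Farneback dense flow; the paper's specific optical flow formulation and thresholds are not given in the abstract, so the parameter values and file names here are illustrative.

    ```python
    import cv2
    import numpy as np

    # Two adjacent grayscale frames from the PLM video (illustrative paths).
    prev = cv2.imread("plm_frame_000.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("plm_frame_001.png", cv2.IMREAD_GRAYSCALE)

    # Dense optical flow between the frames (Farneback here; the paper's
    # exact optical flow method may differ).
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    magnitude = np.linalg.norm(flow, axis=2)   # motion-vector length per pixel

    # Pixels whose extinction crosses move belong to the pyrocarbon matrix;
    # nearly static regions are fibers or pores. The threshold is illustrative.
    moving = magnitude > 0.5
    matrix_fraction = moving.mean()
    print(f"pyrocarbon matrix area fraction ~ {matrix_fraction:.2%}")
    ```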

  3. Switching non-local vector median filter

    NASA Astrophysics Data System (ADS)

    Matsuoka, Jyohei; Koga, Takanori; Suetake, Noriaki; Uchino, Eiji

    2016-04-01

    This paper describes a novel image filtering method that removes random-valued impulse noise superimposed on a natural color image. In impulse noise removal, it is essential to employ a switching-type filtering method, as used in the well-known switching median filter, to preserve the detail of an original image with good quality. In color image filtering, it is generally preferable to deal with the red (R), green (G), and blue (B) components of each pixel of a color image as elements of a vectorized signal, as in the well-known vector median filter, rather than as component-wise signals, to prevent a color shift after filtering. By taking these fundamentals into consideration, we propose a switching-type vector median filter with non-local processing that mainly consists of a noise detector and a noise removal filter. Concretely, we propose a noise detector that proactively detects noise-corrupted pixels by focusing attention on the isolation tendencies of pixels of interest, not in the input image but in difference images between RGB components. Furthermore, as the noise removal filter, we propose an extended version of the non-local median filter we proposed previously for grayscale image processing, named the non-local vector median filter, which is designed for color image processing. The proposed method realizes a superior balance between the preservation of detail and impulse noise removal through proactive noise detection and non-local switching vector median filtering, respectively. The effectiveness and validity of the proposed method are verified in a series of experiments using natural color images.
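
    For orientation, the classical vector median filter that the proposed method builds on can be sketched as follows. The switching logic and non-local noise detector described above are omitted, so this brute-force version filters every pixel.

    ```python
    import numpy as np

    def vector_median_filter(img, radius=1):
        """Basic vector median filter for a float RGB image of shape (H, W, 3).

        For each pixel, the output is the neighborhood color vector whose total
        L2 distance to all other vectors in the window is minimal, so no new
        colors are introduced. Brute force: O(k^4) per pixel for a k x k window.
        """
        H, W, _ = img.shape
        out = img.copy()
        p = np.pad(img, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
        k = 2 * radius + 1
        for y in range(H):
            for x in range(W):
                win = p[y:y + k, x:x + k].reshape(-1, 3)       # window vectors
                d = np.linalg.norm(win[:, None] - win[None, :], axis=2)
                out[y, x] = win[d.sum(axis=1).argmin()]        # vector median
        return out
    ```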

4. An Electro-Optical Image Algebra Processing System for Automatic Target Recognition

    NASA Astrophysics Data System (ADS)

    Coffield, Patrick Cyrus

    The proposed electro-optical image algebra processing system is designed specifically for image processing and other related computations. The design is a hybridization of an optical correlator and a massively parallel, single-instruction multiple-data processor. The architecture of the design consists of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image processing operations are directed by a control buffer and pipelined to each of the three processing components. The image processing operations are defined in terms of basic operations of an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how it implements the natural decomposition of algebraic functions into spatially distributed, point-wise operations. The effect of this particular decomposition allows convolution-type operations to be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The implementation of the proposed design may be accomplished in many ways. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all-digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control a large variety of the arithmetic and logic operations of the image algebra's generalized matrix product. The generalized matrix product is the most powerful fundamental operation in the algebra, thus allowing a wide range of applications. No other known device or design has made this claim of processing speed and general implementation of a heterogeneous image algebra.

  5. Componential distribution analysis of food using near infrared ray image

    NASA Astrophysics Data System (ADS)

    Yamauchi, Hiroki; Kato, Kunihito; Yamamoto, Kazuhiko; Ogawa, Noriko; Ohba, Kimie

    2008-11-01

    The components of food related to its "deliciousness" are usually evaluated by componential analysis, which determines the content and type of components in the food. However, componential analysis cannot resolve the spatial distribution of the components, and the measurement is time consuming. We propose a method to measure the two-dimensional distribution of a component in food using a near-infrared (IR) image. The advantage of our method is that it can visualize components invisible to the eye. Many components in food have characteristics such as absorption and reflection of light in the IR range. The component content is measured using subtraction between two wavelengths of near-IR light. In this paper, we describe a method to measure food components using near-IR image processing, and we show an application visualizing the saccharose in a pumpkin.
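
    The two-wavelength subtraction at the core of such a method can be sketched as follows; the band choices, normalization, and function names are illustrative, and the abstract does not give the wavelengths used for saccharose.

    ```python
    import numpy as np

    def component_distribution(img_absorb, img_ref):
        """Two-wavelength near-IR subtraction sketch.

        img_absorb: frame at a wavelength the target component absorbs.
        img_ref:    frame at a nearby reference wavelength with little
                    component absorption (same scene and exposure).
        Both are numeric arrays of identical shape; appropriate band choices
        depend on the component being visualized.
        """
        diff = img_ref.astype(float) - img_absorb.astype(float)
        diff -= diff.min()                     # shift to non-negative values
        return diff / (diff.max() + 1e-8)      # normalized distribution map
    ```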

  6. Information extraction from multivariate images

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Kegley, K. A.; Schiess, J. R.

    1986-01-01

    An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). A multiimage associates with each pixel location a multivariate pixel value that has been scaled and quantized into a gray-level vector; the covariance between component images expresses the extent to which two images are correlated. The PCT of a multiimage decorrelates the multiimage to reduce its dimensionality and, if some off-diagonal covariance elements are not small, to reveal its intercomponent dependencies; for the purposes of display, the principal component images must be postprocessed into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.
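
    A compact sketch of the PCT itself, computed from the eigendecomposition of the interband covariance matrix; this is a generic NumPy illustration, and the variable names are assumptions.

    ```python
    import numpy as np

    def principal_component_transform(multiimage):
        """Principal component transform (PCT) of a multiimage.

        multiimage: (H, W, C) array with C registered component images.
        Returns the decorrelated principal-component images ordered by
        decreasing variance, plus the eigenvalues, which are useful for
        judging the intrinsic component dimensionality mentioned above.
        """
        H, W, C = multiimage.shape
        X = multiimage.reshape(-1, C).astype(float)
        X -= X.mean(axis=0)                         # center each component
        cov = np.cov(X, rowvar=False)               # C x C covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)      # symmetric, so eigh suffices
        order = np.argsort(eigvals)[::-1]           # largest variance first
        pcs = X @ eigvecs[:, order]
        return pcs.reshape(H, W, C), eigvals[order]
    ```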

  7. Skin age testing criteria: characterization of human skin structures by 500 MHz MRI multiple contrast and image processing.

    PubMed

    Sharma, Rakesh

    2010-07-21

    Ex vivo magnetic resonance microimaging (MRM) image characteristics are reported for human skin samples in different age groups. Human excised skin samples were imaged using a custom coil placed inside a 500 MHz NMR imager for high-resolution microimaging. Skin MRI images were processed for characterization of different skin structures. Contiguous cross-sectional T1-weighted 3D spin echo MRI, T2-weighted 3D spin echo MRI and proton density images were compared with skin histopathology and NMR peaks. In all skin specimens, epidermis and dermis thickening and hair follicle size were measured using MRM. Optimized TE and TR parameters and multicontrast enhancement gave better MRI visibility of different skin components. Within high MR signal regions near the custom coil, MRI images with short echo time were comparable with digitized histological sections for skin structures of the epidermis, dermis and hair follicles in 6 (67%) of the nine specimens. Skin % tissue composition, measurement of the epidermis, dermis, sebaceous gland and hair follicle size, and skin NMR peaks were signatures of skin type. The image processing determined the dimensionality of skin tissue components and skin typing. The ex vivo MRI images and histopathology of the skin may be used to measure skin structure, and skin NMR peaks combined with image processing may be a tool for determining skin type and composition.

  8. Skin age testing criteria: characterization of human skin structures by 500 MHz MRI multiple contrast and image processing

    NASA Astrophysics Data System (ADS)

    Sharma, Rakesh

    2010-07-01

    Ex vivo magnetic resonance microimaging (MRM) image characteristics are reported for human skin samples in different age groups. Human excised skin samples were imaged using a custom coil placed inside a 500 MHz NMR imager for high-resolution microimaging. Skin MRI images were processed for characterization of different skin structures. Contiguous cross-sectional T1-weighted 3D spin echo MRI, T2-weighted 3D spin echo MRI and proton density images were compared with skin histopathology and NMR peaks. In all skin specimens, epidermis and dermis thickening and hair follicle size were measured using MRM. Optimized TE and TR parameters and multicontrast enhancement gave better MRI visibility of different skin components. Within high MR signal regions near the custom coil, MRI images with short echo time were comparable with digitized histological sections for skin structures of the epidermis, dermis and hair follicles in 6 (67%) of the nine specimens. Skin % tissue composition, measurement of the epidermis, dermis, sebaceous gland and hair follicle size, and skin NMR peaks were signatures of skin type. The image processing determined the dimensionality of skin tissue components and skin typing. The ex vivo MRI images and histopathology of the skin may be used to measure skin structure, and skin NMR peaks combined with image processing may be a tool for determining skin type and composition.

  9. Radial line method for rear-view mirror distortion detection

    NASA Astrophysics Data System (ADS)

Rahmah, Fitri; Kusumawardhani, Apriani; Setijono, Heru; Hatta, Agus M.; Irwansyah

    2015-01-01

    An image of an object can be distorted by a defect in a mirror. A rear-view mirror is an important component for vehicle safety, and one of its standard parameters is a distortion factor. This paper presents a radial line method for distortion detection in rear-view mirrors. The rear-view mirror was tested for distortion using a system consisting of a webcam sensor and an image-processing unit. In the image-processing unit, the captured image from the webcam was pre-processed using smoothing and sharpening techniques, and the radial line method was then used to determine the distortion factor. It was demonstrated that the radial line method could successfully determine the distortion factor. This detection system is useful for implementation in, for example, Indonesia's automotive component industry, where manual inspection is still used.

  10. In-Process Thermal Imaging of the Electron Beam Freeform Fabrication Process

    NASA Technical Reports Server (NTRS)

    Taminger, Karen M.; Domack, Christopher S.; Zalameda, Joseph N.; Taminger, Brian L.; Hafley, Robert A.; Burke, Eric R.

    2016-01-01

    Researchers at NASA Langley Research Center have been developing the Electron Beam Freeform Fabrication (EBF3) metal additive manufacturing process for the past 15 years. In this process, an electron beam is used as a heat source to create a small molten pool on a substrate into which wire is fed. The electron beam and wire feed assembly are translated with respect to the substrate to follow a predetermined tool path. This process is repeated in a layer-wise fashion to fabricate metal structural components. In-process imaging has been integrated into the EBF3 system using a near-infrared (NIR) camera. The images are processed to provide thermal and spatial measurements that have been incorporated into a closed-loop control system to maintain consistent thermal conditions throughout the build. Other information in the thermal images is being used to assess quality in real time by detecting flaws in prior layers of the deposit. NIR camera incorporation into the system has improved the consistency of the deposited material and provides the potential for real-time flaw detection which, ultimately, could lead to the manufacture of better, more reliable components using this additive manufacturing process.

  11. High-performance image processing architecture

    NASA Astrophysics Data System (ADS)

    Coffield, Patrick C.

    1992-04-01

    The proposed architecture is a logical design specifically for image processing and other related computations. The design is a hybrid electro-optical concept consisting of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image processing operations are directed by a control buffer and pipelined to each of the three processing components. The image processing operations are defined by an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how elegantly it handles the natural decomposition of algebraic functions into spatially distributed, point-wise operations. The effect of this particular decomposition allows convolution type operations to be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The logical architecture may take any number of physical forms. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control all the arithmetic and logic operations of the image algebra's generalized matrix product. This is the most powerful fundamental formulation in the algebra, thus allowing a wide range of applications.

  12. Image processing for optical mapping.

    PubMed

    Ravindran, Prabu; Gupta, Aditya

    2015-01-01

    Optical Mapping is an established single-molecule, whole-genome analysis system, which has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of the Optical Mapping system is the image processing module, which extracts single-molecule restriction maps from image datasets of immobilized, restriction-digested and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques to process these massive datasets and extract accurate restriction maps in the presence of noise, ambiguity and confounding artifacts. We also highlight a few applications of the Optical Mapping system.

  13. Image thumbnails that represent blur and noise.

    PubMed

    Samadani, Ramin; Mauer, Timothy A; Berfanger, David M; Clark, James H

    2010-02-01

    The information about the blur and noise of an original image is lost when a standard image thumbnail is generated by filtering and subsampling. Image browsing becomes difficult since the standard thumbnails do not distinguish between high-quality and low-quality originals. In this paper, an efficient algorithm with a blur-generating component and a noise-generating component preserves the local blur and the noise of the originals. The local blur is rapidly estimated using a scale-space expansion of the standard thumbnail and subsequently used to apply a space-varying blur to the thumbnail. The noise is estimated and rendered by using multirate signal transformations that allow most of the processing to occur at the lower spatial sampling rate of the thumbnail. The new thumbnails provide a quick, natural way for users to identify images of good quality. A subjective evaluation shows the new thumbnails are more representative of their originals for blurry images. The noise generating component improves the results for noisy images, but degrades the results for textured images. The blur generating component of the new thumbnails may always be used to advantage. The decision to use the noise generating component of the new thumbnails should be based on testing with the particular image mix expected for the application.
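
    A greatly simplified sketch of the blur-preserving idea: estimate local sharpness, then blend a sharp thumbnail with a blurred one so that blurry regions stay visibly blurry. The paper's scale-space blur estimator and its noise-generating component are not reproduced here, and all function names and parameters are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def blur_preserving_thumbnail(gray, scale=8):
        """Simplified blur-preserving thumbnail for a grayscale image.

        A crude local-sharpness proxy (gradient magnitude of the original,
        resampled to thumbnail size) blends per pixel between a sharp and a
        blurred thumbnail; the blending and sigma values are assumptions.
        """
        thumb = zoom(gray.astype(float), 1.0 / scale, order=1)   # standard thumbnail
        gy, gx = np.gradient(gray.astype(float))
        sharp_map = zoom(np.hypot(gx, gy), 1.0 / scale, order=1) # sharpness proxy
        w = np.clip(sharp_map / (sharp_map.mean() + 1e-8), 0.0, 1.0)
        blurred = gaussian_filter(thumb, sigma=1.5)
        return w * thumb + (1.0 - w) * blurred    # blurry regions stay blurry
    ```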

  14. Demodulation techniques for the amplitude modulated laser imager

    NASA Astrophysics Data System (ADS)

    Mullen, Linda; Laux, Alan; Cochenour, Brandon; Zege, Eleonora P.; Katsev, Iosif L.; Prikhach, Alexander S.

    2007-10-01

    A new technique has been found that uses in-phase and quadrature phase (I/Q) demodulation to optimize the images produced with an amplitude-modulated laser imaging system. An I/Q demodulator was used to collect the I/Q components of the received modulation envelope. It was discovered that by adjusting the local oscillator phase and the modulation frequency, the backscatter and target signals can be analyzed separately via the I/Q components. This new approach enhances image contrast beyond what was achieved with a previous design that processed only the composite magnitude information.

  15. Image fusion method based on regional feature and improved bidimensional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Hu, Gang; Hu, Kai

    2018-01-01

    The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.

  16. Separated Component-Based Restoration of Speckled SAR Images

    DTIC Science & Technology

    2014-01-01

    One of the simplest approaches for speckle noise reduction is known as multi-look processing. It involves non-coherently summing the independent...image is assumed to be piecewise smooth [21], [22], [23]. It has been shown that TV regularization often yields images with the stair-casing effect...as a function f, is to be decomposed into a sum of two components f = u + v, where u represents the cartoon or geometric (i.e. piecewise smooth

  17. Image retrieval and processing system version 2.0 development work

    NASA Technical Reports Server (NTRS)

    Slavney, Susan H.; Guinness, Edward A.

    1991-01-01

    The Image Retrieval and Processing System (IRPS) is a software package developed at Washington University and used by the NASA Regional Planetary Image Facilities (RPIF's). The IRPS combines data base management and image processing components to allow the user to examine catalogs of image data, locate the data of interest, and perform radiometric and geometric calibration of the data in preparation for analysis. Version 1.0 of IRPS was completed in Aug. 1989 and was installed at several RPIF's. Other RPIF's use remote logins via NASA Science Internet to access IRPS at Washington University. Work was begun on designing and populating a catalog of Magellan image products that will be part of IRPS Version 2.0, planned for release by the end of calendar year 1991. With this catalog, a user will be able to search by orbit and by location for Magellan Basic Image Data Records (BIDR's), Mosaicked Image Data Records (MIDR's), and Altimetry-Radiometry Composite Data Records (ARCDR's). The catalog will include the Magellan CD-ROM volume, directory, and file name for each data product. The image processing component of IRPS is based on the Planetary Image Cartography Software (PICS) developed by the U.S. Geological Survey, Flagstaff, Arizona. To augment PICS capabilities, a set of image processing programs were developed that are compatible with PICS-format images. This software includes general-purpose functions that PICS does not have, analysis and utility programs for specific data sets, and programs from other sources that were modified to work with PICS images. Some of the software will be integrated into the Version 2.0 release of IRPS. A table is presented that lists the programs with a brief functional description of each.

  18. WE-G-209-03: PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kemp, B.

    2016-06-15

    Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.

  19. WE-G-209-02: CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kofler, J.

    2016-06-15

    Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.

  20. WE-G-209-04: MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pooley, R.

    2016-06-15

    Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.

  1. 21 CFR 892.1715 - Full-field digital mammography system.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...

  2. 21 CFR 892.1715 - Full-field digital mammography system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...

  3. 21 CFR 892.1715 - Full-field digital mammography system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...

  4. 21 CFR 892.1715 - Full-field digital mammography system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...

  5. The design of red-blue 3D video fusion system based on DM642

    NASA Astrophysics Data System (ADS)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    To address the uncertainty in traditional 3D video capture, including the camera focal lengths and the distance and angle parameters between the two cameras, a red-blue 3D video fusion system with parallel optical axes is designed on a DM642 hardware processing platform. In view of the brightness reduction of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, along with a luminance component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve real-time performance. The DM642-based video processing circuit enhances the brightness of the images, converts the video signals from YCbCr to RGB, extracts the R component from one camera and the G and B components from the other synchronously, and finally outputs the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. By adding red and blue components, the system reduces the loss of the chrominance components and keeps the picture color saturation above 95% of the original. The optimized enhancement algorithm reduces the amount of data processed during fusion, shortening the fusion time and improving the viewing effect. Experimental results show that the system can capture images at near distance and output red-blue 3D video, providing a good viewing experience for audiences wearing red-blue glasses.
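
    The per-frame channel extraction reduces to a simple anaglyph composition, sketched below for single RGB frames; the DM642 system performs this on YCbCr video in hardware, and the function name and array shapes here are assumptions.

    ```python
    import numpy as np

    def make_red_blue_anaglyph(left_rgb, right_rgb):
        """Red-blue (anaglyph) fusion of a parallel-axis stereo pair.

        Takes the R component from the left camera and the G and B components
        from the right camera, as in the pipeline described above. Both inputs
        are (H, W, 3) arrays of identical shape.
        """
        fused = np.empty_like(left_rgb)
        fused[..., 0] = left_rgb[..., 0]    # R from the left view
        fused[..., 1] = right_rgb[..., 1]   # G from the right view
        fused[..., 2] = right_rgb[..., 2]   # B from the right view
        return fused
    ```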

  6. Medical image informatics infrastructure design and applications.

    PubMed

    Huang, H K; Wong, S T; Pietka, E

    1997-01-01

    Picture archiving and communication systems (PACS) is a system integration of multimodality images and health information systems designed for improving the operation of a radiology department. As it evolves, PACS becomes a hospital image document management system with a voluminous image and related data file repository. A medical image informatics infrastructure can be designed to take advantage of existing data, providing PACS with add-on value for health care service, research, and education. A medical image informatics infrastructure (MIII) consists of the following components: medical images and associated data (including PACS database), image processing, data/knowledge base management, visualization, graphic user interface, communication networking, and application oriented software. This paper describes these components and their logical connection, and illustrates some applications based on the concept of the MIII.

  7. Apodization of spurs in radar receivers using multi-channel processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerry, Armin W.; Bickel, Douglas L.

    The various technologies presented herein relate to identification and mitigation of spurious energies or signals (aka "spurs") in radar imaging. Spurious energy in received radar data can be a consequence of non-ideal component and circuit behavior. Such behavior can result from I/Q imbalance, nonlinear component behavior, additive interference (e.g. cross-talk, etc.), etc. The manifestation of the spurious energy in a radar image (e.g., a range-Doppler map) can be influenced by appropriate pulse-to-pulse phase modulation. Comparing multiple images which have been processed using the same data but with different signal paths and modulations enables identification of undesired spurs, with subsequent cropping or apodization of the undesired spurs from a radar image. Spurs can be identified by comparison with a threshold energy. Removal of an undesired spur enables enhanced identification of true targets in a radar image.

  8. New false color mapping for image fusion

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Walraven, Jan

    1996-03-01

    A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
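
    A minimal sketch of the four-step mapping, assuming the pixel-wise minimum as the common-component operator; that is a common choice for this scheme, though the abstract does not specify the operator, so treat it and the channel ordering as assumptions.

    ```python
    import numpy as np

    def false_color_fuse(visual, thermal):
        """Pixel-wise false color fusion of two gray-level images in [0, 1]."""
        common = np.minimum(visual, thermal)         # step 1: common component
        uniq_vis = visual - common                   # step 2: unique components
        uniq_thr = thermal - common
        red = np.clip(visual - uniq_thr, 0.0, 1.0)   # step 3: cross-subtraction
        green = np.clip(thermal - uniq_vis, 0.0, 1.0)
        blue = np.zeros_like(red)
        return np.stack([red, green, blue], axis=-1) # step 4: R/G display
    ```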

  9. Autofluorescence detection and imaging of bladder cancer realized through a cystoscope

    DOEpatents

    Demos, Stavros G [Livermore, CA; deVere White, Ralph W [Sacramento, CA

    2007-08-14

    Near infrared imaging using elastic light scattering and tissue autofluorescence and utilizing interior examination techniques and equipment are explored for medical applications. The approach involves imaging using cross-polarized elastic light scattering and/or tissue autofluorescence in the Near Infra-Red (NIR) coupled with image processing and inter-image operations to differentiate human tissue components.

  10. Computer vision applications for coronagraphic optical alignment and image processing.

    PubMed

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

  11. WE-G-209-00: Identifying Image Artifacts, Their Causes, and How to Fix Them

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This coursemore » will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.« less

  12. Image fusion

    NASA Technical Reports Server (NTRS)

    Pavel, M.

    1993-01-01

    The topics covered include the following: a system overview of the basic components of a system designed to improve the ability of a pilot to fly through low-visibility conditions such as fog; the role of visual sciences; fusion issues; sensor characterization; sources of information; image processing; and image fusion.

  13. Single image super-resolution via an iterative reproducing kernel Hilbert space method.

    PubMed

    Deng, Liang-Jian; Guo, Weihong; Huang, Ting-Zhu

    2016-11-01

    Image super-resolution, a process to enhance image resolution, has important applications in satellite imaging, high definition television, medical imaging, etc. Many existing approaches use multiple low-resolution images to recover one high-resolution image. In this paper, we present an iterative scheme to solve single image super-resolution problems. It recovers a high quality high-resolution image from solely one low-resolution image without using a training data set. We solve the problem from image intensity function estimation perspective and assume the image contains smooth and edge components. We model the smooth components of an image using a thin-plate reproducing kernel Hilbert space (RKHS) and the edges using approximated Heaviside functions. The proposed method is applied to image patches, aiming to reduce computation and storage. Visual and quantitative comparisons with some competitive approaches show the effectiveness of the proposed method.

  14. A Method for Identifying Contours in Processing Digital Images from Computer Tomograph

    NASA Astrophysics Data System (ADS)

    Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela

    2011-09-01

    The first step in the digital processing of two-dimensional computed tomography images is to identify the contours of the component elements. This paper presents the collective work of specialists in medicine, applied mathematics, and computer science on elaborating new algorithms and methods in medical 2D and 3D imagery.

  15. Detection of fuze defects by image-processing methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, M.J.

    1988-03-01

    This paper describes experimental studies of the detection of mechanical defects by the application of computer-processing methods to real-time radiographic images of fuze assemblies. The experimental results confirm that a new algorithm developed at Materials Research Laboratory has potential for the automatic inspection of these assemblies and of others that contain discrete components. The algorithm was applied to images that contain a range of grey levels and has been found to be tolerant to image variations encountered under simulated production conditions.

  16. Remote Sensing Image Fusion Method Based on Nonsubsampled Shearlet Transform and Sparse Representation

    NASA Astrophysics Data System (ADS)

    Moonon, Altan-Ulzii; Hu, Jianwen; Li, Shutao

    2015-12-01

    Remote sensing image fusion is an important preprocessing technique in remote sensing image processing. In this paper, a remote sensing image fusion method based on the nonsubsampled shearlet transform (NSST) with sparse representation (SR) is proposed. First, the low resolution multispectral (MS) image is upsampled and its color space is transformed from Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS). Then, the high resolution panchromatic (PAN) image and the intensity component of the MS image are decomposed by NSST into high and low frequency coefficients. The low frequency coefficients of the PAN image and the intensity component are fused by SR with the learned dictionary. The high frequency coefficients of the intensity component and the PAN image are fused by a local energy based fusion rule. Finally, the fused result is obtained by performing the inverse NSST and inverse IHS transforms. The experimental results on IKONOS and QuickBird satellite images demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than other remote sensing image fusion methods, in both visual effect and objective evaluation.
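
    The outer IHS steps can be illustrated with a plain IHS-substitution baseline; this omits the paper's NSST decomposition and sparse-representation fusion of the low-frequency coefficients, and the simple intensity model and parameter handling below are assumptions.

    ```python
    import numpy as np

    def ihs_pansharpen(ms_up, pan):
        """Baseline IHS-substitution fusion (not the paper's full NSST+SR method).

        ms_up: upsampled multispectral image, shape (H, W, 3), float in [0, 1].
        pan:   panchromatic image, shape (H, W), float in [0, 1].
        Uses the linear IHS model in which intensity is the RGB mean; the paper
        instead fuses the intensity component with PAN via NSST and SR before
        applying the inverse transform.
        """
        intensity = ms_up.mean(axis=2)                # forward transform: I component
        pan_adj = (pan - pan.mean()) / (pan.std() + 1e-8)
        pan_adj = pan_adj * intensity.std() + intensity.mean()  # histogram match
        delta = pan_adj - intensity                   # intensity substitution
        return np.clip(ms_up + delta[..., None], 0.0, 1.0)      # inverse transform
    ```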

  17. Computing the scatter component of mammographic images.

    PubMed

    Highnam, R P; Brady, J M; Shepstone, B J

    1994-01-01

    The authors build upon a technical report (Tech. Report OUEL 2009/93, Engng. Sci., Oxford Uni., Oxford, UK, 1993) in which they proposed a model of the mammographic imaging process for which scattered radiation is a key degrading factor. Here, the authors propose a way of estimating the scatter component of the signal at any pixel within a mammographic image, and they use this estimate for model-based image enhancement. The first step is to extend the authors' previous model to divide breast tissue into "interesting" (fibrous/glandular/cancerous) tissue and fat. The scatter model is then based on the idea that the amount of scattered radiation reaching a point is related to the energy imparted to the surrounding neighbourhood. This complex relationship is approximated using published empirical data, and it varies with the size of the breast being imaged. The approximation is further complicated by needing to take account of extra-focal radiation and breast edge effects. The approximation takes the form of a weighting mask which is convolved with the total signal (primary and scatter) to give a value which is input to a "scatter function", approximated using three reference cases, and which returns a scatter estimate. Given a scatter estimate, the more important primary component can be calculated and used to create an image recognizable by a radiologist. The images resulting from this process are clearly enhanced, and model verification tests based on an estimate of the thickness of interesting tissue present proved to be very successful. A good scatter model opens the way for further processing to remove the effects of other degrading factors, such as beam hardening.
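
    A schematic sketch of that scatter estimate: convolve the total signal with a neighbourhood weighting mask and pass the result through a scatter function. The Gaussian mask and the affine scatter function below are placeholders for the published empirical forms, and all parameters would in practice depend on breast size and imaging conditions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def estimate_scatter(total_signal, sigma_mm, pixel_mm, a, b):
        """Neighbourhood-based scatter estimate sketch.

        total_signal: primary + scatter image (float array). The weighting mask
        is taken here as a Gaussian of width sigma_mm, and the scatter function
        as the affine placeholder a*x + b standing in for the empirical
        three-reference-case approximation described above.
        """
        neighbourhood_energy = gaussian_filter(total_signal,
                                               sigma=sigma_mm / pixel_mm)
        scatter = a * neighbourhood_energy + b       # placeholder scatter function
        primary = total_signal - scatter             # the component of interest
        return scatter, primary
    ```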

  18. Dilated contour extraction and component labeling algorithm for object vector representation

    NASA Astrophysics Data System (ADS)

    Skourikhine, Alexei N.

    2005-08-01

    Object boundary extraction from binary images is important for many applications, e.g., image vectorization, automatic interpretation of images containing segmentation results, printed and handwritten documents and drawings, maps, and AutoCAD drawings. Efficient and reliable contour extraction is also important for pattern recognition due to its impact on shape-based object characterization and recognition. The presented contour tracing and component labeling algorithm produces dilated (sub-pixel) contours associated with corresponding regions. The algorithm has the following features: (1) it always produces non-intersecting, non-degenerate contours, including the case of one-pixel wide objects; (2) it associates the outer and inner (i.e., around hole) contours with the corresponding regions during the process of contour tracing in a single pass over the image; (3) it maintains desired connectivity of object regions as specified by 8-neighbor or 4-neighbor connectivity of adjacent pixels; (4) it avoids degenerate regions in both background and foreground; (5) it allows an easy augmentation that will provide information about the containment relations among regions; (6) it has a time complexity that is dominantly linear in the number of contour points. This early component labeling (contour-region association) enables subsequent efficient object-based processing of the image information.

  19. Empirical projection-based basis-component decomposition method

    NASA Astrophysics Data System (ADS)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which in addition to the conventional approach of Alvarez and Macovski a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood-function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image-domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.

  20. Door detection in images based on learning by components

    NASA Astrophysics Data System (ADS)

    Cicirelli, Grazia; D'Orazio, Tiziana; Ancona, Nicola

    2001-10-01

    In this paper we present a vision-based technique for detecting targets in the environment that an autonomous mobile robot has to reach during its navigational task. The targets the robot has to reach are the doors of our office building. Color and shape information are used as identifying features for detecting the principal components of a door. Since a door can appear at different sizes in images depending on the attitude of the robot with respect to it, the door is detected by detecting its most significant components in the image. Positive and negative examples, in the form of image patterns, are manually selected from real images to train two neural classifiers to recognize the individual components. Each classifier is realized by a feed-forward neural network with one hidden layer and sigmoid activation functions. Moreover, a bootstrap technique is used during training to select negative examples relevant to the problem at hand. Finally, the detection system is applied to several real test images to evaluate its performance.
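
    A minimal sketch of such a component classifier with bootstrap selection of hard negatives, assuming scikit-learn's MLPClassifier as a stand-in for the hand-built feed-forward network and random arrays in place of real image patches:

      # Sketch of a one-hidden-layer sigmoid classifier with bootstrap mining
      # of hard negative examples. The patch data is random placeholder; real
      # training would use image patches of door components.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      pos = rng.normal(1.0, 0.5, size=(200, 64))        # positive patches
      neg_pool = rng.normal(0.0, 1.0, size=(5000, 64))  # pool of negatives

      X = np.vstack([pos, neg_pool[:200]])
      y = np.array([1] * 200 + [0] * 200)
      clf = MLPClassifier(hidden_layer_sizes=(16,), activation="logistic",
                          max_iter=500, random_state=0)

      for _ in range(3):                                # bootstrap rounds
          clf.fit(X, y)
          scores = clf.predict_proba(neg_pool)[:, 1]    # find false positives
          hard = neg_pool[scores > 0.5][:100]           # hardest negatives
          if len(hard) == 0:
              break
          X = np.vstack([X, hard])                      # add them and retrain
          y = np.concatenate([y, np.zeros(len(hard), dtype=int)])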

  1. Noise reduction techniques for Bayer-matrix images

    NASA Astrophysics Data System (ADS)

    Kalevo, Ossi; Rantanen, Henry

    2002-04-01

    In this paper, some arrangements for applying Noise Reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data. This requires that the raw Bayer-matrix image data available from the image sensor first be interpolated by a Color Filter Array Interpolation (CFAI) method. Another choice is to process the raw Bayer-matrix image data directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multi-stage median, multi-stage median hybrid and median-rational filters. The comparison is based on the quality of the output image, the processing power requirements and the amount of memory needed. A solution that improves the preservation of details when NR filtering is applied before the CFAI is also proposed.
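
    The pre-CFAI ordering can be sketched by filtering each Bayer color plane on its own sampling grid before demosaicing. The RGGB layout and the 3x3 median kernel below are assumptions for illustration, not the paper's filters.

      # Sketch of pre-CFAI noise reduction: filter each Bayer color plane on
      # its own sampling grid before demosaicing, so edges are not mixed
      # across color channels. Assumes an RGGB mosaic.
      import numpy as np
      from scipy.ndimage import median_filter

      raw = np.random.randint(0, 1024, (480, 640)).astype(np.float32)  # toy mosaic

      filtered = raw.copy()
      for dy in (0, 1):
          for dx in (0, 1):
              plane = raw[dy::2, dx::2]                 # one Bayer sample grid
              filtered[dy::2, dx::2] = median_filter(plane, size=3)
      # 'filtered' would then be passed to the CFAI (demosaicing) step.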

  2. Advanced imaging as a novel approach to the characterization of membranes for microfiltration applications

    NASA Astrophysics Data System (ADS)

    Marroquin, Milagro

    The primary objectives of my dissertation were to design, develop and implement novel confocal microscopy imaging protocols for the characterization of membranes and highlight opportunities to obtain reliable and cutting-edge information on microfiltration membrane morphology and fouling processes. After a comprehensive introduction and review of confocal microscopy in membrane applications (Chapter 1), the first part of this dissertation (Chapter 2) details my work on membrane morphology characterization by confocal laser scanning microscopy (CLSM) and the implementation of my newly developed CLSM cross-sectional imaging protocol. Depth-of-penetration limits were identified to be approximately 24 microns and 7-8 microns for mixed cellulose ester and polyethersulfone membranes, respectively, making it impossible to image about 70% of the membrane bulk. The development and implementation of my cross-sectional CLSM method enabled the imaging of the entire membrane cross-section. Porosities of symmetric and asymmetric membranes with nominal pore sizes in the range 0.65-8.0 microns were quantified at different depths and yielded porosity values in the 50-60% range. It is my hope and expectation that the characterization strategy developed in this part of the work will enable future studies of different membrane materials and applications by confocal microscopy. After demonstrating how cross-sectional CLSM could be used to fully characterize membrane morphologies and porosities, I applied it to the characterization of fouling occurring in polyethersulfone microfiltration membranes during the processing of solutions containing proteins and polysaccharides (Chapter 3). Through CLSM imaging, it was determined where proteins and polysaccharides deposit throughout polymeric microfiltration membranes when a fluid containing these materials is filtered. CLSM enabled evaluation of the location and extent of fouling by individual components (protein: casein and polysaccharide: dextran) within wet, asymmetric polyethersulfone microfiltration membranes. Information from filtration flux profiles and cross-sectional CLSM images of the membranes that processed single-component solutions and mixtures agreed with each other. Concentration profiles versus depth for each individual component present in the feed solution were developed from the analysis of the CLSM images at different levels of fouling for single-component solutions and mixtures. CLSM provided visual information that helped elucidate the role of each component on membrane fouling and provided a better understanding of how component interactions impact the fouling profiles. Finally, Chapter 4 extends the application of my cross-sectional CLSM imaging protocol to study the fouling of asymmetric polyethersulfone membranes during the microfiltration of protein, polyphenol, and polysaccharide mixtures to better understand the solute-solute and solute-membrane interactions leading to fouling in beverage clarification processes. Again, cross-sectional CLSM imaging provided information on the location and extent of fouling throughout the entire thickness of the PES membrane. Quantitative analysis of the cross-sectional CLSM images provided a measurement of the masses of foulants deposited throughout the membrane. Moreover, flux decline data collected for different mixtures of casein, tannic acid and beta-cyclodextrin were analyzed with standard fouling models to determine the fouling mechanisms at play when processing different combinations of foulants. Results from model analysis of flux data were compared with the quantitative visual analysis of the corresponding CLSM images. This approach, which couples visual and performance measurements, is expected to provide a better understanding of the causes of fouling that, in turn, is expected to aid in the design of new membranes with tailored structure or surface chemistry that prevents the deposition of the foulants in "prone to foul" regions. (Abstract shortened by UMI.)

  3. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array and a RISC core. The pixel-parallel PE array is responsible for transferring, storing and processing raw image data in a SIMD fashion with its own programming language. The RPs are a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a large amount of computation in a few instruction cycles and therefore satisfy the requirements of low- and middle-level high-speed image processing. The RISC core controls the operation of the whole system and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.

  4. Unmanned Aerial Vehicle Systems for Remote Estimation of Flooded Areas Based on Complex Image Processing.

    PubMed

    Popescu, Dan; Ichim, Loretta; Stoican, Florin

    2017-02-23

    Floods are the natural disasters that cause the most economic damage at the global level. Therefore, flood monitoring and damage estimation are very important for the population, authorities and insurance companies. The paper proposes an original solution to this problem based on a hybrid network and complex image processing. As a first novelty, a multilevel system with two components, terrestrial and aerial, was proposed and designed by the authors to support image acquisition from a delimited region. The terrestrial component contains a Ground Control Station, acting as a remote coordinator, which communicates via the internet with several Ground Data Terminals forming a fixed-node network for data acquisition and communication. The aerial component contains mobile nodes (fixed-wing UAVs). In order to evaluate flood damage, two tasks must be accomplished by the network: area coverage and image processing. The second novelty of the paper consists of texture analysis with a deep neural network, taking into account new criteria for feature selection and patch classification. Color and spatial information extracted from the chromatic co-occurrence matrix and the mass fractal dimension were used as well. Finally, the experimental results from a real mission demonstrate the validity of the proposed methodologies and the performance of the algorithms.
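
    The co-occurrence texture idea can be illustrated with scikit-image; the paper's chromatic co-occurrence matrix and mass fractal dimension are more elaborate than this gray-level sketch, and the patch data here is synthetic.

      # Sketch of co-occurrence texture features for patch classification.
      # scikit-image's graycomatrix/graycoprops compute a gray-level
      # co-occurrence matrix and summary statistics (older scikit-image
      # versions spell these greycomatrix/greycoprops).
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # toy patch
      glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                          levels=256, symmetric=True, normed=True)
      features = [graycoprops(glcm, prop).mean()
                  for prop in ("contrast", "homogeneity", "energy", "correlation")]
      print(features)  # one small feature vector per patch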

  5. Unmanned Aerial Vehicle Systems for Remote Estimation of Flooded Areas Based on Complex Image Processing

    PubMed Central

    Popescu, Dan; Ichim, Loretta; Stoican, Florin

    2017-01-01

    Floods are the natural disasters that cause the most economic damage at the global level. Therefore, flood monitoring and damage estimation are very important for the population, authorities and insurance companies. The paper proposes an original solution to this problem based on a hybrid network and complex image processing. As a first novelty, a multilevel system with two components, terrestrial and aerial, was proposed and designed by the authors to support image acquisition from a delimited region. The terrestrial component contains a Ground Control Station, acting as a remote coordinator, which communicates via the internet with several Ground Data Terminals forming a fixed-node network for data acquisition and communication. The aerial component contains mobile nodes (fixed-wing UAVs). In order to evaluate flood damage, two tasks must be accomplished by the network: area coverage and image processing. The second novelty of the paper consists of texture analysis with a deep neural network, taking into account new criteria for feature selection and patch classification. Color and spatial information extracted from the chromatic co-occurrence matrix and the mass fractal dimension were used as well. Finally, the experimental results from a real mission demonstrate the validity of the proposed methodologies and the performance of the algorithms. PMID:28241479

  6. Assessment of cluster yield components by image analysis.

    PubMed

    Diago, Maria P; Tardaguila, Javier; Aleixos, Nuria; Millan, Borja; Prats-Montalban, Jose M; Cubero, Sergio; Blasco, Jose

    2015-04-01

    Berry weight, berry number and cluster weight are key parameters for yield estimation in the wine and table-grape industries. Current yield prediction methods are destructive, labour-demanding and time-consuming. In this work, a new methodology based on image analysis was developed to determine cluster yield components in a fast and inexpensive way. Clusters of seven different red varieties of grapevine (Vitis vinifera L.) were photographed under laboratory conditions and their cluster yield components manually determined after image acquisition. Two algorithms based on the Canny and the logarithmic image processing approaches were tested to find the contours of the berries in the images prior to berry detection performed by means of the Hough transform. Results were obtained in two ways: by analysing either a single image of the cluster or four images per cluster from different orientations. The best results (R(2) between 69% and 95% in berry detection and between 65% and 97% in cluster weight estimation) were achieved using four images and the Canny algorithm. The capability of the image-analysis model to predict berry weight was 84%. The new and low-cost methodology presented here enabled the assessment of cluster yield components, saving time and providing inexpensive information in comparison with current manual methods. © 2014 Society of Chemical Industry.
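
    A minimal sketch of the edge-plus-Hough pipeline using OpenCV; the synthetic "cluster", thresholds and radius range are placeholders that would need tuning to real cluster images.

      # Sketch of berry detection by a circular Hough transform. HOUGH_GRADIENT
      # runs a Canny edge detector internally (param1 is its upper threshold);
      # param2 is the accumulator threshold for circle centres.
      import cv2
      import numpy as np

      # synthetic "cluster": a few filled circles standing in for berries
      image = np.zeros((200, 200), np.uint8)
      for (x, y) in ((60, 60), (100, 80), (140, 120)):
          cv2.circle(image, (x, y), 18, 200, -1)

      gray = cv2.medianBlur(image, 5)
      circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                                 param1=100, param2=20,
                                 minRadius=10, maxRadius=30)
      if circles is not None:
          berries = np.round(circles[0]).astype(int)    # one (x, y, r) per berry
          print("detected berries:", len(berries))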

  7. An integrated approach to the use of Landsat TM data for gold exploration in west central Nevada

    NASA Technical Reports Server (NTRS)

    Mouat, D. A.; Myers, J. S.; Miller, N. L.

    1987-01-01

    This paper represents an integration of several Landsat TM image processing techniques with other data to discriminate the lithologies and associated areas of hydrothermal alteration in the vicinity of the Paradise Peak gold mine in west central Nevada. A microprocessor-based image processing system and an IDIMS system were used to analyze data from a 512 X 512 window of a Landsat-5 TM scene collected on June 30, 1984. Image processing techniques included simple band composites, band ratio composites, principal components composites, and baseline-based composites. These techniques were chosen based on their ability to discriminate the spectral characteristics of the products of hydrothermal alteration as well as of the associated regional lithologies. The simple band composite, ratio composite, two principal components composites, and the baseline-based composites separately can define the principal areas of alteration. Combined, they provide a very powerful exploration tool.
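
    A band-ratio composite of the kind mentioned above can be sketched in a few lines. The specific ratios below (TM5/TM7 for clays, TM3/TM1 for iron oxides) are ratios commonly cited in the alteration-mapping literature, not necessarily the ones used in this study, and the band arrays are placeholders for calibrated TM data.

      # Sketch of a band-ratio composite used to highlight hydrothermal
      # alteration: each RGB plane of the composite is a contrast-stretched
      # ratio of two TM bands.
      import numpy as np

      bands = {b: np.random.rand(512, 512).astype(np.float32) + 0.01
               for b in (1, 3, 4, 5, 7)}                 # toy TM bands

      def stretch(x):
          # linear 0-255 contrast stretch for display
          lo, hi = np.percentile(x, (2, 98))
          return np.clip(255 * (x - lo) / (hi - lo), 0, 255).astype(np.uint8)

      composite = np.dstack([stretch(bands[5] / bands[7]),   # clay ratio
                             stretch(bands[3] / bands[1]),   # iron-oxide ratio
                             stretch(bands[4] / bands[3])])  # vegetation ratio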

  8. Connected component analysis of review-SEM images for sub-10nm node process verification

    NASA Astrophysics Data System (ADS)

    Halder, Sandip; Leray, Philippe; Sah, Kaushik; Cross, Andrew; Parisi, Paolo

    2017-03-01

    Analysis of hotspots is becoming more and more critical as we scale from node to node. To define true process windows at sub-14 nm technology nodes, defect inspections are often included to weed out design weak spots (often referred to as hotspots). Defect inspection at sub-28 nm nodes is a two-pass process. Defect locations identified by optical inspection tools need to be reviewed by review-SEMs to understand exactly which feature is failing in the region flagged by the optical tool. The images grabbed by the review-SEM tool are used for classification but rarely for quantification. The goal of this paper is to see whether the thousands of existing review-SEM images can be used for quantification and further analysis. More specifically, we address the SEM quantification problem with connected component analysis.
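
    A minimal sketch of such quantification with OpenCV's connected-component statistics; the input image and area threshold are placeholders, not the authors' settings.

      # Sketch of connected component quantification of a binarized review-SEM
      # image: each labeled component yields an area, bounding box and centroid
      # that can be compared against design dimensions.
      import cv2
      import numpy as np

      sem = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # toy image
      _, binary = cv2.threshold(sem, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)

      num, labels, stats, centroids = cv2.connectedComponentsWithStats(
          binary, connectivity=8)
      for i in range(1, num):                            # label 0 is background
          area = stats[i, cv2.CC_STAT_AREA]
          if area > 10:                                  # ignore speckle
              print("feature %d: area=%d px, centroid=%s"
                    % (i, area, centroids[i]))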

  9. WE-G-209-01: Digital Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schueler, B.

    Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.

  10. Principal component and spatial correlation analysis of spectroscopic-imaging data in scanning probe microscopy.

    PubMed

    Jesse, Stephen; Kalinin, Sergei V

    2009-02-25

    An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
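
    The unfold-PCA-refold pattern for spectroscopic-imaging data can be sketched with scikit-learn; the data cube below is synthetic and the component count is an arbitrary choice.

      # Sketch of PCA on a spectroscopic-imaging data cube: spectra are
      # unfolded to a (pixels x energy) matrix, PCA ranks response components
      # by variance, and the scores fold back into component maps.
      import numpy as np
      from sklearn.decomposition import PCA

      ny, nx, ns = 64, 64, 200
      cube = np.random.rand(ny, nx, ns)                  # toy spectral cube

      X = cube.reshape(-1, ns)                           # one spectrum per row
      pca = PCA(n_components=5)
      scores = pca.fit_transform(X)                      # per-pixel loadings
      maps = scores.reshape(ny, nx, 5)                   # component maps
      print("explained variance:", pca.explained_variance_ratio_)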

  11. Anorexia nervosa and body dysmorphic disorder are associated with abnormalities in processing visual information.

    PubMed

    Li, W; Lai, T M; Bohon, C; Loo, S K; McCurdy, D; Strober, M; Bookheimer, S; Feusner, J

    2015-07-01

    Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities--event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI)--to test for abnormal activity associated with early visual signaling. We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems. AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces. Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.

  12. Image-based 3D reconstruction and virtual environmental walk-through

    NASA Astrophysics Data System (ADS)

    Sun, Jifeng; Fang, Lixiong; Luo, Ying

    2001-09-01

    We present a 3D reconstruction method that combines geometry-based modeling, image-based modeling and rendering techniques. The first component is an interactive geometry modeling method that recovers the basic geometry of the photographed scene. The second component is a model-based stereo algorithm. We discuss the image processing problems and algorithms of walking through a virtual space, and then design and implement a high-performance multi-threaded wandering algorithm. The applications range from architectural planning and archaeological reconstruction to virtual environments and cinematic special effects.

  13. Form or function: Does focusing on body functionality protect women from body dissatisfaction when viewing media images?

    PubMed

    Mulgrew, Kate E; Tiggemann, Marika

    2018-01-01

    We examined whether shifting young women's (N = 322) attention toward functionality components of media-portrayed idealized images would protect against body dissatisfaction. Image type was manipulated via images of models in either an objectified body-as-object form or an active body-as-process form; viewing focus was manipulated via questions about the appearance or functionality of the models. Social comparison was examined as a moderator. Negative outcomes were most pronounced within the process-related conditions (body-as-process images or functionality viewing focus) and for women who reported greater functionality comparison. Results suggest that functionality-based depictions, reflections, and comparisons may actually produce worse outcomes than those based on appearance.

  14. Method and apparatus for optical encoding with compressible imaging

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    2006-01-01

    The present invention presents an optical encoder with increased conversion rates. Improvement in the conversion rate is a result of combining changes in the pattern recognition encoder's scale pattern with an image sensor readout technique which takes full advantage of those changes, and lends itself to operation by modern, high-speed, ultra-compact microprocessors and digital signal processors (DSP) or field programmable gate array (FPGA) logic elements which can process encoder scale images at the highest speeds. Through these improvements, all three components of conversion time (reciprocal conversion rate)--namely exposure time, image readout time, and image processing time--are minimized.

  15. Shoulder Arthroplasty Imaging: What’s New

    PubMed Central

    Gregory, T.M

    2017-01-01

    Background: Shoulder arthroplasty, in its different forms (hemiarthroplasty, total shoulder arthroplasty and reverse total shoulder arthroplasty), has transformed the clinical outcomes of shoulder disorders. Improvement of general clinical outcome is the result of better matching of the treatment to the diagnosis, enhanced surgical techniques, specific implanted materials, and more accurate follow-up. Imaging is an important tool in each step of these processes. Method: This article reviews recent imaging processes for shoulder arthroplasty. Results: Shoulder imaging is important for shoulder arthroplasty pre-operative planning but also for post-operative monitoring of the prosthesis, and this article focuses on the validity of plain radiographs for detecting radiolucent lines and on a new Computed Tomography scan method established to eliminate the prosthesis metallic artefacts that obscure visualisation of component fixation. Conclusion: The number of shoulder arthroplasties implanted has grown rapidly over the past decade, leading to an increase in the number of complications. In parallel, new imaging systems have been established to monitor these complications, especially component loosening. PMID:29152007

  16. Onboard data-processing architecture of the soft X-ray imager (SXI) on NeXT satellite

    NASA Astrophysics Data System (ADS)

    Ozaki, Masanobu; Dotani, Tadayasu; Tsunemi, Hiroshi; Hayashida, Kiyoshi; Tsuru, Takeshi G.

    2004-09-01

    NeXT is the X-ray satellite proposed for the next Japanese space science mission. While the satellite total mass and the launch vehicle are similar to those of the prior satellite Astro-E2, the sensitivity is much improved; this requires all the components to be lighter and faster than in the previous architecture. This paper describes the data processing architecture of the X-ray CCD camera system SXI (Soft X-ray Imager), which forms the top half of the WXI (Wide-band X-ray Imager), whose sensitivity spans 0.2-80 keV. The system is basically a variation of the Astro-E2 XIS, but the event extraction is much faster in order to fulfill the requirements arising from the large effective area and fast exposure period. At the same time, the data transfer lines between components are redesigned in order to reduce the number and mass of the wire harnesses that limit the flexibility of the component distribution.

  17. Evaluation of the visual performance of image processing pipes: information value of subjective image attributes

    NASA Astrophysics Data System (ADS)

    Nyman, G.; Häkkinen, J.; Koivisto, E.-M.; Leisti, T.; Lindroos, P.; Orenius, O.; Virtanen, T.; Vuori, T.

    2010-01-01

    Subjective image quality data for 9 image processing pipes and 8 image contents (taken with a mobile phone camera, 72 natural scene test images altogether) were collected from 14 test subjects. A triplet comparison setup and a hybrid qualitative/quantitative methodology were applied. MOS data and spontaneous, subjective image quality attributes for each test image were recorded. The use of positive and negative image quality attributes by the experimental subjects suggested a significant difference between the subjective spaces of low and high image quality. The robustness of the attribute data was shown by correlating DMOS data of the test images against their corresponding average subjective attribute vector length data. The findings demonstrate the information value of spontaneous, subjective image quality attributes in evaluating image quality at variable quality levels. We discuss the implications of these findings for the development of sensitive performance measures and methods in profiling image processing systems and their components, especially at high image quality levels.

  18. A simplified and powerful image processing method to separate Thai jasmine rice and sticky rice varieties

    NASA Astrophysics Data System (ADS)

    Khondok, Piyoros; Sakulkalavek, Aparporn; Suwansukho, Kajpanya

    2018-03-01

    A simplified and powerful image processing procedure to separate the paddy of KHAW DOK MALI 105 (Thai jasmine rice) from the paddy of the sticky rice variety RD6 is proposed. The procedure consists of image thresholding, image chain coding and curve fitting using a polynomial function. From the fitting, three parameters of each variety, the perimeter, area, and eccentricity, were calculated. Finally, the overall parameters were combined using principal component analysis. The results show that this procedure can effectively separate the two varieties.
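
    A sketch of the shape-parameter pipeline using scikit-image region properties followed by PCA; the input image and the area threshold are placeholders, and regionprops stands in for the chain-code measurements described above.

      # Sketch of the grain-shape pipeline: threshold, label each grain,
      # measure perimeter, area and eccentricity, then project the feature
      # vectors with PCA to separate varieties.
      import numpy as np
      from skimage import filters, measure
      from sklearn.decomposition import PCA

      image = np.random.rand(480, 640)                 # toy grayscale paddy image
      binary = image > filters.threshold_otsu(image)
      labels = measure.label(binary)

      props = measure.regionprops(labels)
      features = np.array([[p.perimeter, p.area, p.eccentricity]
                           for p in props if p.area > 50])
      if len(features) >= 2:
          scores = PCA(n_components=2).fit_transform(features)
          # varieties would appear as separate clusters in 'scores'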

  19. 77 FR 40082 - Certain Gaming and Entertainment Consoles, Related Software, and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-06

    ... Commission remands for the ALJ to (1) apply the Commission's opinion in Certain Electronic Devices With Image Processing Systems, Components Thereof, and Associated Software, Inv. No. 337-TA-724, Comm'n Op. (Dec. 21...

  20. MARS spectral molecular imaging of lamb tissue: data collection and image analysis

    NASA Astrophysics Data System (ADS)

    Aamir, R.; Chernoglazov, A.; Bateman, C. J.; Butler, A. P. H.; Butler, P. H.; Anderson, N. G.; Bell, S. T.; Panta, R. K.; Healy, J. L.; Mohr, J. L.; Rajendran, K.; Walsh, M. F.; de Ruiter, N.; Gieseg, S. P.; Woodfield, T.; Renaud, P. F.; Brooke, L.; Abdul-Majid, S.; Clyne, M.; Glendenning, R.; Bones, P. J.; Billinghurst, M.; Bartneck, C.; Mandalika, H.; Grasset, R.; Schleich, N.; Scott, N.; Nik, S. J.; Opie, A.; Janmale, T.; Tang, D. N.; Kim, D.; Doesburg, R. M.; Zainon, R.; Ronaldson, J. P.; Cook, N. J.; Smithies, D. J.; Hodge, K.

    2014-02-01

    Spectral molecular imaging is a new imaging technique able to discriminate and quantify different components of tissue simultaneously at high spatial and high energy resolution. Our MARS scanner is an x-ray based small animal CT system designed to be used in the diagnostic energy range (20-140 keV). In this paper, we demonstrate the use of the MARS scanner, equipped with the Medipix3RX spectroscopic photon-processing detector, to discriminate fat, calcium, and water in tissue. We present data collected from a sample of lamb meat including bone as an illustrative example of human tissue imaging. The data is analyzed using our 3D Algebraic Reconstruction Algorithm (MARS-ART) and by material decomposition based on a constrained linear least squares algorithm. The results presented here clearly show the quantification of lipid-like, water-like and bone-like components of tissue. However, it is also clear to us that better algorithms could extract more information of clinical interest from our data. Because we are one of the first to present data from multi-energy photon-processing small animal CT systems, we make the raw, partial and fully processed data available with the intention that others can analyze it using their familiar routines. The raw, partially processed and fully processed data of lamb tissue along with the phantom calibration data can be found at http://hdl.handle.net/10092/8531.
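
    One common form of constrained linear least squares decomposition is non-negative least squares; the sketch below uses invented basis attenuation values, not MARS calibration data.

      # Sketch of material decomposition by constrained (non-negative) linear
      # least squares: a voxel's multi-energy attenuation vector is expressed
      # as a non-negative mix of calibrated basis materials.
      import numpy as np
      from scipy.optimize import nnls

      # columns: attenuation of fat-, water- and bone-like material in 4 bins
      basis = np.array([[0.20, 0.25, 1.20],
                        [0.18, 0.22, 0.80],
                        [0.16, 0.20, 0.50],
                        [0.15, 0.19, 0.35]])

      voxel = basis @ np.array([0.3, 0.6, 0.1])        # synthetic measurement
      fractions, residual = nnls(basis, voxel)
      print("fat/water/bone-like fractions:", fractions)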

  1. Earth Orbiter 1: Wideband Advanced Recorder and Processor (WARP)

    NASA Technical Reports Server (NTRS)

    Smith, Terry; Kessler, John

    1999-01-01

    An advanced on-board spacecraft data system component is presented. The component is computer-based and provides science data acquisition, processing, storage, and base-band transmission functions. Specifically, the component is a very high rate solid state recorder, serving as a pathfinder for achieving the data handling requirements of next-generation hyperspectral imaging missions.

  2. yourSky: Custom Sky-Image Mosaics via the Internet

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph

    2003-01-01

    yourSky (http://yourSky.jpl.nasa.gov) is a computer program that supplies custom astronomical image mosaics of sky regions specified by requesters using client computers connected to the Internet. [yourSky is an upgraded version of the software reported in Software for Generating Mosaics of Astronomical Images (NPO-21121), NASA Tech Briefs, Vol. 25, No. 4 (April 2001), page 16a.] A requester no longer has to engage in the tedious process of determining what subset of images is needed, nor even to know how the images are indexed in image archives. Instead, in response to a requester's specification of the size and location of the sky area (and optionally of the desired set and type of data, resolution, coordinate system, projection, and image format), yourSky automatically retrieves the component image data from archives totaling tens of terabytes stored on computer tape and disk drives at multiple sites and assembles the component images into a mosaic image by use of a high-performance parallel code. yourSky runs on the server computer where the mosaics are assembled. Because yourSky includes a Web-interface component, no special client software is needed: ordinary Web browser software is sufficient.

  3. Pixel Perfect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perrine, Kenneth A.; Hopkins, Derek F.; Lamarche, Brian L.

    2005-09-01

    Biologists and computer engineers at Pacific Northwest National Laboratory have specified, designed, and implemented a hardware/software system for performing real-time, multispectral image processing on a confocal microscope. This solution is intended to extend the capabilities of the microscope, enabling scientists to conduct advanced experiments on cell signaling and other kinds of protein interactions. FRET (fluorescence resonance energy transfer) techniques are used to locate and monitor protein activity. In FRET, it is critical that spectral images be precisely aligned with each other despite disturbances in the physical imaging path caused by imperfections in lenses and cameras, and expansion and contraction of materials due to temperature changes. The central importance of this work is therefore automatic image registration. This runs in a framework that guarantees real-time performance (processing pairs of 1024x1024, 8-bit images at 15 frames per second) and enables the addition of other types of advanced image processing algorithms such as image feature characterization. The supporting system architecture consists of a Visual Basic front-end containing a series of on-screen interfaces for controlling various aspects of the microscope and a script engine for automation. One of the controls is an ActiveX component written in C++ for handling the control and transfer of images. This component interfaces with a pair of LVDS image capture boards and a PCI board containing a 6-million gate Xilinx Virtex-II FPGA. Several types of image processing are performed on the FPGA in a pipelined fashion, including the image registration. The FPGA offloads work that would otherwise need to be performed by the main CPU and has a guaranteed real-time throughput. Image registration is performed in the FPGA by applying a cubic warp on one image to precisely align it with the other image. Before each experiment, an automated calibration procedure is run in order to set up the cubic warp. During image acquisitions, the cubic warp is evaluated by way of forward differencing. Unwanted pixelation artifacts are minimized by bilinear sampling. The resulting system is state-of-the-art for biological imaging. Precisely registered images enable the reliable use of FRET techniques. In addition, real-time image processing performance allows computed images to be fed back and displayed to scientists immediately, and the pipelined nature of the FPGA allows additional image processing algorithms to be incorporated into the system without slowing throughput.
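
    A software analogue of the two numeric techniques named above, forward-differencing evaluation of a cubic and bilinear sampling, can be sketched in a few lines; the warp coefficients are illustrative only and are not the calibrated FPGA warp.

      # Evaluate a 1D cubic displacement along a scanline with forward
      # differences (three additions per pixel instead of full polynomial
      # evaluation), then sample the source image bilinearly at the warped
      # positions.
      import numpy as np

      def cubic_forward_diff(coeffs, n):
          # value and 1st/2nd/3rd forward differences of a*x^3+b*x^2+c*x+d
          # at x = 0 with unit step
          a, b, c, d = coeffs
          f, df, d2f, d3f = d, a + b + c, 6 * a + 2 * b, 6 * a
          out = np.empty(n)
          for i in range(n):
              out[i] = f
              f += df
              df += d2f
              d2f += d3f
          return out

      def bilinear(img, x, y):
          # clamp to the image interior so border samples stay valid
          h, w = img.shape
          x0 = min(max(int(np.floor(x)), 0), w - 2)
          y0 = min(max(int(np.floor(y)), 0), h - 2)
          fx, fy = x - x0, y - y0
          p = img[y0:y0 + 2, x0:x0 + 2]
          return ((1 - fx) * (1 - fy) * p[0, 0] + fx * (1 - fy) * p[0, 1] +
                  (1 - fx) * fy * p[1, 0] + fx * fy * p[1, 1])

      img = np.random.rand(1024, 1024).astype(np.float32)
      dx = cubic_forward_diff((1e-9, -1e-6, 0.001, 0.5), 1024)  # x-shifts
      row = [bilinear(img, x + dx[x], 100.0) for x in range(1024)]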

  4. Exploring the complementarity of THz pulse imaging and DCE-MRIs: Toward a unified multi-channel classification and a deep learning framework.

    PubMed

    Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S

    2016-12-01

    We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlying commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly, taking into consideration advances in multi-resolution analysis and model-based fractional-order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. PCA based clustering for brain tumor segmentation of T1w MRI images.

    PubMed

    Kaya, Irem Ersöz; Pehlivanlı, Ayça Çakmak; Sekizkardeş, Emine Gezmez; Ibrikci, Turgay

    2017-03-01

    Medical images are huge collections of information that are difficult to store and process, consuming extensive computing time. Therefore, reduction techniques are commonly used as a data pre-processing step to make the image data less complex, so that high-dimensional data can be identified by an appropriate low-dimensional representation. PCA is one of the most popular multivariate methods for data reduction. This paper is focused on clustering T1-weighted MRI images for brain tumor segmentation, with dimension reduction by different common Principal Component Analysis (PCA) algorithms. Our primary aim is to present a comparison between different variations of PCA algorithms on MRIs for two cluster methods. The five most common PCA algorithms, namely conventional PCA, Probabilistic Principal Component Analysis (PPCA), Expectation Maximization Based Principal Component Analysis (EM-PCA), the Generalized Hebbian Algorithm (GHA), and Adaptive Principal Component Extraction (APEX), were applied to reduce dimensionality in advance of two clustering algorithms, K-Means and Fuzzy C-Means. In the study, T1-weighted MRI images of the human brain with brain tumor were used for clustering. In addition to the original size of 512 lines and 512 pixels per line, three more sizes, 256 × 256, 128 × 128 and 64 × 64, were included in the study to examine their effect on the methods. The obtained results were compared in terms of both the reconstruction errors and the Euclidean distance errors among the clustered images containing the same number of principal components. According to the findings, the PPCA obtained the best results among all others. Furthermore, the EM-PCA and the PPCA assisted the K-Means algorithm to accomplish the best clustering performance in the majority of cases, as well as achieving significant results with both clustering algorithms for all sizes of T1w MRI images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
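
    A sketch of the conventional-PCA variant of this pipeline (dimension reduction, reconstruction error, then K-Means on the reconstructed intensities); scikit-learn stands in for the paper's implementations, the slice is synthetic, and the component count is arbitrary.

      # Sketch: reduce an image's rows with PCA, measure reconstruction error,
      # then cluster pixel intensities of the reconstruction with K-Means.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      mri = np.random.rand(256, 256)                    # toy T1w slice

      pca = PCA(n_components=32)                        # conventional PCA
      reduced = pca.fit_transform(mri)                  # rows as samples
      denoised = pca.inverse_transform(reduced)
      print("reconstruction error:", np.linalg.norm(mri - denoised))

      labels = KMeans(n_clusters=4, n_init=10).fit_predict(
          denoised.reshape(-1, 1)).reshape(mri.shape)   # intensity clusters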

  6. An improved K-means clustering algorithm in agricultural image segmentation

    NASA Astrophysics Data System (ADS)

    Cheng, Huifeng; Peng, Hui; Liu, Shanmei

    Image segmentation is the first important step in image analysis and image processing. In this paper, according to the characteristics of color crop images, we first transform the color space of the image from RGB to HSI, and then select proper initial cluster centers and the number of clusters by applying a mean-variance approach and rough set theory, followed by the clustering calculation, so as to rapidly and automatically segment the color components and accurately extract target objects from the background. This provides a reliable basis for identification, analysis, follow-up calculation and processing of crop images. Experimental results demonstrate that the improved k-means clustering algorithm is able to reduce the computation amount and enhance the precision and accuracy of clustering.
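
    A rough sketch of the color-space-then-cluster flow; OpenCV's HSV conversion stands in for HSI (which OpenCV does not provide), and the percentile-based seeds stand in for the mean-variance/rough-set initialization described above.

      # Sketch: convert a crop image to an HSV approximation of HSI, then
      # cluster pixels with K-Means seeded by explicit initial centers.
      import cv2
      import numpy as np
      from sklearn.cluster import KMeans

      bgr = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)  # toy image
      hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
      X = hsv.reshape(-1, 3).astype(np.float32)

      k = 3
      init = np.percentile(X, np.linspace(10, 90, k), axis=0)  # spread-out seeds
      labels = KMeans(n_clusters=k, init=init, n_init=1).fit_predict(X)
      mask = labels.reshape(bgr.shape[:2])              # segmented regions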

  7. The image-guided surgery toolkit IGSTK: an open source C++ software toolkit.

    PubMed

    Enquobahrie, Andinet; Cheng, Patrick; Gary, Kevin; Ibanez, Luis; Gobbi, David; Lindseth, Frank; Yaniv, Ziv; Aylward, Stephen; Jomier, Julien; Cleary, Kevin

    2007-11-01

    This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team has adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process that is philosophically similar to agile software methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining a users' and developers' mailing list, providing documentation (application programming interface reference document and book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences.
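
    A toy illustration of a state-machine-governed component in this spirit; the states, events and transitions below are invented for illustration and are not IGSTK's API.

      # Minimal sketch: every request is checked against a transition table,
      # so the component can never leave its set of valid states.
      class TrackerComponent:
          TRANSITIONS = {
              ("Idle", "initialize"): "Ready",
              ("Ready", "start_tracking"): "Tracking",
              ("Tracking", "stop_tracking"): "Ready",
          }

          def __init__(self):
              self.state = "Idle"

          def request(self, event):
              nxt = self.TRANSITIONS.get((self.state, event))
              if nxt is None:               # invalid transition: reject safely
                  raise RuntimeError(
                      f"'{event}' not allowed in state {self.state}")
              self.state = nxt

      tracker = TrackerComponent()
      tracker.request("initialize")
      tracker.request("start_tracking")     # a repeated "initialize" would raise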

  8. Single sensor processing to obtain high resolution color component signals

    NASA Technical Reports Server (NTRS)

    Glenn, William E. (Inventor)

    2010-01-01

    A method for generating color video signals representative of color images of a scene includes the following steps: focusing light from the scene on an electronic image sensor via a filter having a tri-color filter pattern; producing, from outputs of the sensor, first and second relatively low resolution luminance signals; producing, from outputs of the sensor, a relatively high resolution luminance signal; producing, from a ratio of the relatively high resolution luminance signal to the first relatively low resolution luminance signal, a high band luminance component signal; producing, from outputs of the sensor, relatively low resolution color component signals; and combining each of the relatively low resolution color component signals with the high band luminance component signal to obtain relatively high resolution color component signals.

  9. Demodulation Processes in Auditory Perception.

    DTIC Science & Technology

    1992-08-15

    not provide a fused image that the listener can process binaurally. A type of dichotic profile has been developed for this study in which the stimulus...the component frequencies between the two ears may allow the listener to develop a better fused image to be processed binaurally than in the...listener was seated facing a monitor and computer keyboard (Radio Shack Color Computer II). Signals were presented binaurally via Sennheiser HD414SL

  10. Applications of independent component analysis in SAR images

    NASA Astrophysics Data System (ADS)

    Huang, Shiqi; Cai, Xinhua; Hui, Weihua; Xu, Ping

    2009-07-01

    The detection of faint, small and hidden targets in synthetic aperture radar (SAR) images is still an issue for automatic target recognition (ATR) systems. How to effectively separate these targets from the complex background is the aim of this paper. Independent component analysis (ICA) theory can enhance SAR image targets and improve the signal-to-clutter ratio (SCR), which benefits the detection and recognition of faint targets. Therefore, this paper proposes a new SAR image target detection algorithm based on ICA. In the experiments, the fast ICA (FICA) algorithm is utilized. Finally, some real SAR image data is used to test the method. The experimental results verify that the algorithm is feasible, and that it can improve the SCR of SAR images and increase the detection rate for faint small targets.
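
    The unmixing step can be illustrated with scikit-learn's FastICA applied to a stack of co-registered looks; the stack below is synthetic, not SAR imagery, and the number of components is an arbitrary choice.

      # Sketch of ICA-based enhancement: several co-registered image looks are
      # unmixed with FastICA so that a weak target component can separate
      # from clutter.
      import numpy as np
      from sklearn.decomposition import FastICA

      h, w, n = 128, 128, 4
      stack = np.random.rand(h, w, n)                    # toy multi-look stack
      X = stack.reshape(-1, n)                           # pixels x observations

      ica = FastICA(n_components=n, random_state=0)
      components = ica.fit_transform(X).reshape(h, w, n) # independent images
      # inspect each component image for enhanced faint targets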

  11. Design, fabrication and testing of hierarchical micro-optical structures and systems

    NASA Astrophysics Data System (ADS)

    Cannistra, Aaron Thomas

    Micro-optical systems are becoming essential components in imaging, sensing, communications, computing, and other applications. Optically based designs are replacing electronic, chemical and mechanical systems for a variety of reasons, including low power consumption, reduced maintenance, and faster operation. However, as the number and variety of applications increases, micro-optical system designs are becoming smaller, more integrated, and more complicated. Micro and nano-optical systems found in nature, such as the imaging systems found in many insects and crustaceans, can have highly integrated optical structures that vary in size by orders of magnitude. These systems incorporate components such as compound lenses, anti-reflective lens surface structuring, spectral filters, and polarization selective elements. For animals, these hybrid optical systems capable of many optical functions in a compact package have been repeatedly selected during the evolutionary process. Understanding the advantages of these designs gives motivation for synthetic optical systems with comparable functionality. However, alternative fabrication methods that deviate from conventional processes are needed to create such systems. Further complicating the issue, the resulting device geometry may not be readily compatible with existing measurement techniques. This dissertation explores several nontraditional fabrication techniques for optical components with hierarchical geometries and measurement techniques to evaluate performance of such components. A micro-transfer molding process is found to produce high-fidelity micro-optical structures and is used to fabricate a spectral filter on a curved surface. By using a custom measurement setup we demonstrate that the spectral filter retains functionality despite the nontraditional geometry. A compound lens is fabricated using similar fabrication techniques and the imaging performance is analyzed. A spray coating technique for photoresist application to curved surfaces combined with interference lithography is also investigated. Using this technique, we generate polarizers on curved surfaces and measure their performance. This work furthers an understanding of how combining multiple optical components affects the performance of each component, the final integrated devices, and leads towards realization of biomimetically inspired imaging systems.

  12. Utilization of a balanced steady state free precession signal model for improved fat/water decomposition.

    PubMed

    Henze Bancroft, Leah C; Strigel, Roberta M; Hernando, Diego; Johnson, Kevin M; Kelcz, Frederick; Kijowski, Richard; Block, Walter F

    2016-03-01

    Chemical shift based fat/water decomposition methods such as IDEAL are frequently used in challenging imaging environments with large B0 inhomogeneity. However, they do not account for the signal modulations introduced by a balanced steady state free precession (bSSFP) acquisition. Here we demonstrate improved performance when the bSSFP frequency response is properly incorporated into the multipeak spectral fat model used in the decomposition process. Balanced SSFP allows for rapid imaging but also introduces a characteristic frequency response featuring periodic nulls and pass bands. Fat spectral components in adjacent pass bands will experience bulk phase offsets and magnitude modulations that change the expected constructive and destructive interference between the fat spectral components. A bSSFP signal model was incorporated into the fat/water decomposition process and used to generate images of a fat phantom, and bilateral breast and knee images in four normal volunteers at 1.5 Tesla. Incorporation of the bSSFP signal model into the decomposition process improved the performance of the fat/water decomposition. Incorporation of this model allows rapid bSSFP imaging sequences to use robust fat/water decomposition methods such as IDEAL. While only one set of imaging parameters were presented, the method is compatible with any field strength or repetition time. © 2015 Wiley Periodicals, Inc.

  13. An effective approach for iris recognition using phase-based image matching.

    PubMed

    Miyazawa, Kazuyuki; Ito, Koichi; Aoki, Takafumi; Kobayashi, Koji; Nakajima, Hiroshi

    2008-10-01

    This paper presents an efficient algorithm for iris recognition using phase-based image matching--an image matching technique using phase components in 2D Discrete Fourier Transforms (DFTs) of given images. Experimental evaluation using the CASIA iris image databases (versions 1.0 and 2.0) and the Iris Challenge Evaluation (ICE) 2005 database clearly demonstrates that the use of phase components of iris images makes it possible to achieve highly accurate iris recognition with a simple matching algorithm. This paper also discusses major implementation issues of our algorithm. In order to reduce the size of iris data and to prevent the visibility of iris images, we introduce the idea of a 2D Fourier Phase Code (FPC) for representing iris information. The 2D FPC is particularly useful for implementing compact iris recognition devices using state-of-the-art Digital Signal Processing (DSP) technology.
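
    The core of phase-based matching, phase-only correlation, can be sketched directly with NumPy FFTs; this estimates translation only, and the paper's full algorithm adds refinements beyond this sketch.

      # Phase-only correlation: the normalized cross-power spectrum keeps only
      # DFT phase, and its inverse transform peaks at the translation between
      # the two images.
      import numpy as np

      def phase_correlation(a, b):
          Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
          cross = Fa * np.conj(Fb)
          cross /= np.abs(cross) + 1e-12               # keep phase only
          corr = np.real(np.fft.ifft2(cross))
          return np.unravel_index(np.argmax(corr), corr.shape)

      a = np.random.rand(128, 128)
      b = np.roll(a, (5, 12), axis=(0, 1))             # shifted copy
      print(phase_correlation(b, a))                   # peak near (5, 12)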

  14. Digital watermarking algorithm research of color images based on quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    An, Mali; Wang, Weijiang; Zhao, Zhen

    2013-10-01

    A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by the QFFT, the watermark image is processed by compression and quantization coding, and then the processed watermark image is embedded into components of the transformed original image. The scheme achieves embedding and blind extraction of the watermark image. The experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness against Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering and image enhancement attacks than the traditional QIM algorithm.

  15. Application of digital image processing techniques to astronomical imagery, 1979

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.

    1979-01-01

    Several areas of applications of image processing to astronomy were identified and discussed. These areas include: (1) deconvolution for atmospheric seeing compensation; a comparison between maximum entropy and conventional Wiener algorithms; (2) polarization in galaxies from photographic plates; (3) time changes in M87 and methods of displaying these changes; (4) comparing emission line images in planetary nebulae; and (5) log intensity, hue saturation intensity, and principal component color enhancements of M82. Examples are presented of these techniques applied to a variety of objects.

  16. Image informative maps for component-wise estimating parameters of signal-dependent noise

    NASA Astrophysics Data System (ADS)

    Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem

    2013-01-01

    We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results obtained rest on the assumption that the image texture and noise parameters estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model is assumed for the purpose of describing the signal-dependent noise variance dependence on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.

  17. Automated assembly of fast-axis collimation (FAC) lenses for diode laser bar modules

    NASA Astrophysics Data System (ADS)

    Miesner, Jörn; Timmermann, Andre; Meinschien, Jens; Neumann, Bernhard; Wright, Steve; Tekin, Tolga; Schröder, Henning; Westphalen, Thomas; Frischkorn, Felix

    2009-02-01

    Laser diodes and diode laser bars are key components in high power semiconductor lasers and solid state laser systems. During manufacture, the assembly of the fast axis collimation (FAC) lens is a crucial step. The goal of our activities is to design an automated assembly system for high volume production. In this paper the results of an intermediate milestone will be reported: a demonstration system was designed, realized and tested to prove the feasibility of all of the system components and process features. The demonstration system consists of a high precision handling system, metrology for process feedback, a powerful digital image processing system and tooling for glue dispensing, UV curing and laser operation. The system components as well as their interaction with each other were tested in an experimental system in order to glean design knowledge for the fully automated assembly system. The adjustment of the FAC lens is performed by a series of predefined steps monitored by two cameras concurrently imaging the far field and the near field intensity distributions. Feedback from these cameras processed by a powerful and efficient image processing algorithm control a five axis precision motion system to optimize the fast axis collimation of the laser beam. Automated cementing of the FAC to the diode bar completes the process. The presentation will show the system concept, the algorithm of the adjustment as well as experimental results. A critical discussion of the results will close the talk.

  18. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    PubMed

    Park, Chulhee; Kang, Moon Gi

    2016-05-18

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
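
    The NIR-subtraction idea can be sketched as a per-channel weighted subtraction; the leakage weights below are invented placeholders for the sensor's measured spectral characteristics, and the decomposition in the paper is more elaborate.

      # Sketch: estimate how much of the N channel leaks into each RGB channel
      # and remove it to restore color saturation.
      import numpy as np

      rgbn = np.random.rand(480, 640, 4)               # toy R, G, B, N planes
      leak = np.array([0.35, 0.30, 0.25])              # assumed NIR leakage

      visible = rgbn[..., :3] - leak * rgbn[..., 3:4]  # subtract scaled N
      visible = np.clip(visible, 0.0, 1.0)             # restored RGB image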

  19. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    PubMed Central

    Park, Chulhee; Kang, Moon Gi

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381

  20. Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM

    NASA Astrophysics Data System (ADS)

    Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang

    2016-03-01

    A combined method using 2D pseudo Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for fusion of a low resolution multi-spectral (MS) image and a high resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with the generalized IHS transform. Then, the spectrum diagrams of the intensity component of the multi-spectral image and of the panchromatic image are obtained with 2D-PWVD. Different fusion rules are designed for different frequency information of the spectrum diagrams. The SSIM index is used to evaluate the high frequency information of the spectrum diagrams so as to assign the weights in the fusion processing adaptively. After the new spectrum diagram is obtained according to the fusion rule, the final fusion image can be computed by inverse 2D-PWVD and inverse GIHS transform. Experimental results show that the proposed method can obtain high quality fusion images.
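
    A minimal sketch of the SSIM-weighted blending idea, with two stand-ins: a Gaussian high-pass takes the place of the high-frequency part of the 2D-PWVD spectrum diagrams, and a single global SSIM value replaces the adaptive per-region weights; function and parameter names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity as ssim

def ssim_weighted_fusion(intensity, pan):
    """Blend high-frequency detail with an SSIM-derived weight.
    Inputs: co-registered floats in [0, 1]; a Gaussian high-pass stands
    in for the high-frequency part of the 2D-PWVD spectrum diagrams."""
    lo = gaussian_filter(intensity, sigma=2)
    hi_i = intensity - lo
    hi_p = pan - gaussian_filter(pan, sigma=2)
    w = (ssim(intensity, pan, data_range=1.0) + 1) / 2  # map [-1, 1] to [0, 1]
    return lo + w * hi_p + (1 - w) * hi_i
```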

  1. Multi exposure image fusion algorithm based on YCbCr space

    NASA Astrophysics Data System (ADS)

    Yang, T. T.; Fang, P. Y.

    2018-05-01

    To solve the problem that scene details and visual effects are difficult to optimize in high dynamic range image synthesis, we propose a multi-exposure image fusion algorithm that processes low dynamic range images in YCbCr space and performs weighted blending of the luminance and chrominance components separately. The experimental results show that the method can retain the color effect of the fused image while balancing details in the bright and dark regions of the high dynamic range image.
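
    A minimal sketch of per-pixel weighted blending in a Y/Cb/Cr representation, assuming a Gaussian "well-exposedness" weight derived from luminance; the paper blends luminance and chrominance with their own weights, which this simplified version does not reproduce.

```python
import numpy as np
import cv2

def fuse_exposures(images, sigma=0.2):
    """Per-pixel weighted blend of differently exposed LDR shots in
    Y/Cb/Cr space. The Gaussian 'well-exposedness' weight on luminance
    is an illustrative choice, not the paper's exact rule."""
    ycc = [cv2.cvtColor(im, cv2.COLOR_BGR2YCrCb).astype(np.float64) / 255.0
           for im in images]
    w = [np.exp(-((y[..., 0] - 0.5) ** 2) / (2 * sigma ** 2)) for y in ycc]
    norm = np.sum(w, axis=0) + 1e-12
    fused = sum(wi[..., None] * yi for wi, yi in zip(w, ycc)) / norm[..., None]
    fused8 = np.clip(fused * 255.0, 0, 255).astype(np.uint8)
    return cv2.cvtColor(fused8, cv2.COLOR_YCrCb2BGR)
```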

  2. A soft kinetic data structure for lesion border detection.

    PubMed

    Kockara, Sinan; Mete, Mutlu; Yip, Vincent; Lee, Brendan; Aydin, Kemal

    2010-06-15

    Medical imaging and image processing techniques, ranging from microscopic to macroscopic, have become main components of diagnostic procedures that assist dermatologists in their medical decision-making. Computer-aided segmentation and border detection in dermoscopic images is one of the core components of diagnostic procedures and therapeutic interventions for skin cancer. Automated assessment tools for dermoscopic images have become an important research field mainly because of inter- and intra-observer variations in human interpretation. In this study, a novel approach, the graph spanner, is proposed for automatic border detection in dermoscopic images. The approach uses a proximity graph representation of dermoscopic images to detect regions and borders in skin lesions. The graph spanner approach is examined on a set of 100 dermoscopic images whose borders, manually drawn by a dermatologist, are used as the ground truth. Error rates, false positives and false negatives, along with true positives and true negatives, are quantified by digitally comparing results with the manually determined borders. The results show that the highest precision and recall rates obtained for lesion boundaries are 100%, while accuracy averages 97.72% and the mean border error is 2.28% over the whole dataset.

  3. Comparing Independent Component Analysis with Principal Component Analysis in Detecting Alterations of Porphyry Copper Deposit (Case Study: Ardestan Area, Central Iran)

    NASA Astrophysics Data System (ADS)

    Mahmoudishadi, S.; Malian, A.; Hosseinali, F.

    2017-09-01

    Image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. Decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) was evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in part of the Central Iranian Volcanic Belt, which hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two approaches were applied to discriminate alteration zones from igneous bedrock using the major absorption features of indicator minerals from the alteration and mineralogy zones within the spectral range of the ASTER bands. Selected PC components (PC2, PC3 and PC6) were used to identify pyrite and the argillic and propylitic zones and to distinguish them from igneous bedrock in an RGB color composite image. According to the eigenvalues, components 2, 3 and 6 account for 4.26%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For discriminating the alteration and mineralogy zones of the porphyry copper deposit from bedrock, the corresponding ICA independent components (IC2, IC3 and IC6) separate these zones more accurately than the noisier PCA components. The results of the ICA method also conform to the locations of the lithological units of the Ardestan region.
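
    The band-to-component decomposition can be reproduced with standard tools; the sketch below uses random stand-in data in place of the ASTER VNIR/SWIR stack, and the choice of components 2, 3 and 6 follows the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# Random stand-in for an ASTER VNIR+SWIR stack (rows, cols, bands);
# real use would load the nine ASTER reflectance bands for the scene.
h, w, bands = 64, 64, 9
cube = np.random.rand(h, w, bands)
X = cube.reshape(-1, bands)

pcs = PCA(n_components=6).fit_transform(X).reshape(h, w, 6)
ics = FastICA(n_components=6, random_state=0).fit_transform(X).reshape(h, w, 6)

# components 2, 3 and 6 (indices 1, 2, 5) as an RGB colour composite
rgb_pca = np.dstack([pcs[..., 1], pcs[..., 2], pcs[..., 5]])
```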

  4. Developing tools for digital radar image data evaluation

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.; Raggam, J.

    1986-01-01

    The refinement of radar image analysis methods has led to a need for a systems approach to radar image processing software. Developments stimulated by satellite radar are combined with standard image processing techniques to create a user environment for manipulating and analyzing airborne and satellite radar images. One aim is to create radar products that make the contents of the original data easier for the user to understand. The results are called secondary image products and derive from the original digital images. Another aim is to support interactive SAR image analysis. Software methods permit use of a digital height model to create ortho images, synthetic images, stereo-ortho images, radar maps or color combinations of different component products. Efforts are ongoing to integrate individual tools into a combined hardware/software environment for interactive radar image analysis.

  5. Catalyst-layer ionomer imaging of fuel cells

    DOE PAGES

    Guetaz, Laure; Lopez-Haro, M.; Escribano, S.; ...

    2015-09-14

    Investigation of membrane/electrode assembly (MEA) microstructure has become an essential step in optimizing MEA components and manufacturing processes and in studying MEA degradation. For these investigations, transmission electron microscopy (TEM) is a tool of choice as it provides direct imaging of the different components. TEM is thus widely used for analyzing the catalyst nanoparticles and their carbon support. The ionomer inside the electrode, however, is more difficult to image. The difficulty arises because the ionomer forms an ultrathin layer surrounding the carbon particles and, in addition, these two components have similar density and thus present no difference in contrast. In this paper, we show how recent progress in TEM techniques such as spherical-aberration (Cs) corrected HRTEM, electron tomography and X-EDS elemental mapping provides new possibilities for imaging this ionomer network and, consequently, for studying its degradation.

  6. 76 FR 73677 - Investigations: Terminations, Modifications and Rulings: Certain Electronic Devices With Image...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-29

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-724] Investigations: Terminations, Modifications and Rulings: Certain Electronic Devices With Image Processing Systems, Components Thereof, and Associated Software AGENCY: U.S. International Trade Commission. ACTION: Notice. SUMMARY: Notice is hereby...

  7. Study on some useful Operators for Graph-theoretic Image Processing

    NASA Astrophysics Data System (ADS)

    Moghani, Ali; Nasiri, Parviz

    2010-11-01

    In this paper we describe a human-perception-based approach to pixel color segmentation, applied to color reconstruction by a numerical method associated with a graph-theoretic image processing algorithm that typically operates in grayscale. Fuzzy sets defined on the Hue, Saturation and Value components of the HSV color space provide a fuzzy logic model that aims to follow the human intuition of color classification.

  8. Comparison of DP3 Signals Evoked by Comfortable 3D Images and 2D Images — an Event-Related Potential Study using an Oddball Task

    NASA Astrophysics Data System (ADS)

    Ye, Peng; Wu, Xiang; Gao, Dingguo; Liang, Haowen; Wang, Jiahui; Deng, Shaozhi; Xu, Ningsheng; She, Juncong; Chen, Jun

    2017-02-01

    The horizontal binocular disparity is a critical factor in the visual fatigue induced by watching stereoscopic TVs. Stereoscopic images that possess disparity within the ‘comfort zones’ and remain still in the depth direction are considered as comfortable to viewers as 2D images are. However, the difference in brain activity between processing such comfortable stereoscopic images and processing 2D images remains little studied. The DP3 (differential P3) signal refers to an event-related potential (ERP) component indicating attentional processes, typically evoked by odd target stimuli among standard stimuli in an oddball task. The present study found that the DP3 signal elicited by the comfortable 3D images exhibits delayed peak latency and enhanced peak amplitude over the anterior and central scalp regions compared to the 2D images. The finding suggests that, compared to the processing of 2D images, more attentional resources are involved in the processing of stereoscopic images even when they are subjectively comfortable.

  9. Multi-focus image fusion based on window empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao

    2017-09-01

    In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD) whose decomposition process uses an added-window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the Sum-modified-Laplacian concept was used and a scheme based on visual feature contrast was adopted; for the residue coefficients, a pixel value based on local visibility was selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria against three other fusion methods. The experimental results show that the proposed fusion approach is effective and fuses multi-focus images better than some traditional methods.
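
    The Sum-modified-Laplacian criterion mentioned above admits a short sketch; window size, step and the per-pixel selection rule here are illustrative simplifications of the paper's visual-feature-contrast scheme.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img, step=1, window=5):
    """Sum-modified-Laplacian (SML): the modified Laplacian at each
    pixel, averaged over a local window, as a sharpness measure."""
    p = np.pad(img.astype(np.float64), step, mode="edge")
    c = p[step:-step, step:-step]
    ml = (np.abs(2 * c - p[2 * step:, step:-step] - p[:-2 * step, step:-step]) +
          np.abs(2 * c - p[step:-step, 2 * step:] - p[step:-step, :-2 * step]))
    return uniform_filter(ml, size=window)

def fuse_by_sml(a, b):
    """Pick, per pixel, the source whose neighbourhood is sharper."""
    return np.where(sum_modified_laplacian(a) >= sum_modified_laplacian(b), a, b)
```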

  10. TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brock, K; Mutic, S

    2014-06-15

    AAPM Task Group 132 was charged with reviewing current approaches and solutions for image registration in radiotherapy and providing recommendations for quality assurance and quality control of these clinical processes. As the results of image registration are always used as the input of another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and with the result of a specific registration. The recommendations of this task group, which at the time of abstract submission were under review by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of basic components of the image registration by commercial vendors is critical in this respect. The physicists should perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request and report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in the clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request and report system to ensure communication and documentation. Demonstrate a clinically efficient patient QA practice for evaluation of image registration.

  11. Low-rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging

    PubMed Central

    Ravishankar, Saiprasad; Moore, Brian E.; Nadakuditi, Raj Rao; Fessler, Jeffrey A.

    2017-01-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method. PMID:28092528
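
    The L+S split that LASSI extends can be sketched with a crude alternating scheme; thresholds and iteration count are illustrative, and the dictionary-learning part of LASSI is omitted.

```python
import numpy as np

def l_plus_s(M, lam=0.05, mu=1.0, n_iter=100):
    """Toy alternating split of a space-time matrix M (pixels x frames)
    into low-rank L (singular value thresholding) plus sparse S (soft
    thresholding). Thresholds are illustrative; LASSI additionally
    learns a spatiotemporal dictionary for the sparse part, omitted here."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(s - mu, 0.0)) @ Vt             # SVT step
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # soft threshold
    return L, S
```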

  12. Low-Rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging.

    PubMed

    Ravishankar, Saiprasad; Moore, Brian E; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-05-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method.

  13. Label-free observation of tissues by high-speed stimulated Raman spectral microscopy and independent component analysis

    NASA Astrophysics Data System (ADS)

    Ozeki, Yasuyuki; Otsuka, Yoichi; Sato, Shuya; Hashimoto, Hiroyuki; Umemura, Wataru; Sumimura, Kazuhiko; Nishizawa, Norihiko; Fukui, Kiichi; Itoh, Kazuyoshi

    2013-02-01

    We have developed a video-rate stimulated Raman scattering (SRS) microscope with frame-by-frame wavenumber tunability. The system uses a 76-MHz picosecond Ti:sapphire laser and a subharmonically synchronized, 38-MHz Yb fiber laser. The Yb fiber laser pulses are spectrally sliced by a fast wavelength-tunable filter, which consists of a galvanometer scanner, a 4-f optical system and a reflective grating. The spectral resolution of the filter is ~3 cm⁻¹. The wavenumber was scanned from 2800 to 3100 cm⁻¹ with an arbitrary waveform synchronized to the frame trigger. For imaging, we introduced an 8-kHz resonant scanner and a galvanometer scanner. We were able to acquire SRS images of 500 x 480 pixels at a frame rate of 30.8 frames/s. These images were then processed by principal component analysis followed by a modified algorithm of independent component analysis. This algorithm allows blind separation of constituents with overlapping Raman bands from SRS spectral images. The independent component (IC) spectra give spectroscopic information, and IC images can be used to produce pseudo-color images. We demonstrate various label-free imaging modalities such as 2D spectral imaging of the rat liver, two-color 3D imaging of a vessel in the rat liver, and spectral imaging of several sections of intestinal villi in the mouse. Various structures in the tissues such as lipid droplets, cytoplasm, fibrous texture, nuclei, and water-rich regions were successfully visualized.

  14. Parallel ICA and its hardware implementation in hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Du, Hongtao; Qi, Hairong; Peterson, Gregory D.

    2004-04-01

    Advances in hyperspectral imaging have dramatically boosted remote sensing applications by providing abundant information using hundreds of contiguous spectral bands. However, the high volume of information also results in an excessive computation burden. Since most materials have specific characteristics only in certain bands, much of this information is redundant. This property of hyperspectral images has motivated many researchers to study various dimensionality reduction algorithms, including Projection Pursuit (PP), Principal Component Analysis (PCA), wavelet transform, and Independent Component Analysis (ICA), where ICA is one of the most popular techniques. It searches for a linear or nonlinear transformation which minimizes the statistical dependence between spectral bands. Through this process, ICA can eliminate superfluous information while retaining practical information, given only the observations of hyperspectral images. One hurdle in applying ICA to hyperspectral image (HSI) analysis, however, is its long computation time, especially for high volume hyperspectral data sets. Even the most efficient method, FastICA, is very time-consuming. In this paper, we present a parallel ICA (pICA) algorithm derived from FastICA. During the unmixing process, pICA divides the estimation of the weight matrix into sub-processes which can be conducted in parallel on multiple processors. The decorrelation process is decomposed into internal decorrelation and external decorrelation, which perform weight vector decorrelations within individual processors and between cooperating processors, respectively. In order to further improve the performance of pICA, we seek hardware solutions for the implementation of pICA. Until now, there have been very few hardware designs for ICA-related processes due to the complicated and iterative computation. This paper discusses the capacity limitations of FPGA implementations of pICA in HSI analysis. A synthesis of an Application-Specific Integrated Circuit (ASIC) is designed for pICA-based dimensionality reduction in HSI analysis. The pICA design is implemented using standard-height cells and targets a TSMC 0.18 micron process. During the synthesis procedure, three ICA-related reconfigurable components were developed for reuse and retargeting purposes. Preliminary results show that standard-height-cell-based ASIC synthesis provides an effective solution for pICA and ICA-related processes in HSI analysis.

  15. Dimensionality and noise in energy selective x-ray imaging

    PubMed Central

    Alvarez, Robert E.

    2013-01-01

    Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases, or at best leaves the same, the variance. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10³. With the soft tissue component, it is 2.7 × 10⁴. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three dimension processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two dimension processing for adipose tissue is a factor of two and with the contrast agent as the third material for two or three dimensions is also a factor of two for both components. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems. PMID:24320442
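
    The dimensionality argument can be summarized with the standard linearized Gaussian form of the bound (a simplification; the paper's derivation handles the full energy-selective measurement model):

```latex
% Linearized model: measurements y = B a + n, noise covariance C_N.
% Fisher information and the bound on any unbiased estimator of the
% basis-set coefficients a:
\[
  \mathbf{F} = \mathbf{B}^{\mathsf{T}} \mathbf{C}_N^{-1} \mathbf{B},
  \qquad
  \operatorname{Cov}(\hat{\mathbf{a}}) \succeq \mathbf{F}^{-1}.
\]
% Adding a basis function appends a column to B; block-matrix inversion
% of the enlarged F shows the variances of the original components can
% only grow or stay the same, matching the monotonicity result above.
```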

  16. Visual enhancement of images of natural resources: Applications in geology

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Neto, G.; Araujo, E. O.; Mascarenhas, N. D. A.; Desouza, R. C. M.

    1980-01-01

    The principal components technique for use in multispectral scanner LANDSAT data processing results in optimum dimensionality reduction. A powerful tool for MSS image enhancement, the method provides a maximum impression of terrain ruggedness; this fact makes the technique well suited for geological analysis.

  17. Application of principal component analysis to multispectral imaging data for evaluation of pigmented skin lesions

    NASA Astrophysics Data System (ADS)

    Jakovels, Dainis; Lihacova, Ilze; Kuzmina, Ilona; Spigulis, Janis

    2013-11-01

    Non-invasive and fast primary diagnostics of pigmented skin lesions is required due to the frequent incidence of skin cancer (melanoma). The diagnostic potential of principal component analysis (PCA) for distant skin melanoma recognition is discussed. Processing of measured clinical multi-spectral images (31 melanomas and 94 nonmalignant pigmented lesions) in the wavelength range of 450-950 nm by means of PCA resulted in 87% sensitivity and 78% specificity for separation between malignant melanomas and pigmented nevi.

  18. Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts

    NASA Technical Reports Server (NTRS)

    Grau, David

    2012-01-01

    This process is designed to estimate the thickness change of a material through data analysis of a digitized version of an x-ray (or a digital x-ray) containing the material (with the thickness in question) and various tooling. Using this process, it is possible to estimate a material's thickness change in a region of the material or part that is thinner than the rest of the reference thickness. The same principle can be used to determine thickening of the material relative to a thinner reference region, or to develop contour plots of an entire part. Proper tooling must be used. An x-ray film with an S-shaped characteristic curve, or a digital x-ray device producing like characteristics, is necessary. A film with linear characteristics would be ideal; however, at the time of this reporting, no such film is known. Machined components (with known fractional thicknesses) of a like material (similar density) to that of the material to be measured are necessary. The machined components should have machined through-holes. For ease of use and better accuracy, the through-holes should be larger than 0.125 in. (3.2 mm). Standard components for this use are known as penetrameters or image quality indicators. Also needed is standard x-ray equipment, if film is used in place of digital equipment, or x-ray digitization equipment with proven conversion properties. Typical x-ray digitization equipment is commonly used in the medical industry and creates digital images of x-rays in DICOM format. It is recommended to scan the image in a 16-bit format; however, 12-bit and 8-bit resolutions are acceptable. Finally, x-ray analysis software that allows accurate digital image density calculations, such as the Image-J freeware, is needed. The actual procedure requires the test article to be placed on the raw x-ray, ensuring the region of interest is aligned for perpendicular x-ray exposure capture. One or multiple machined components of like material/density with known thicknesses are placed atop the part (preferably in a region of nominal and non-varying thickness) such that exposure of the combined part and machined-component lay-up is captured on the x-ray. Depending on the accuracy required, the machined component's thickness must be carefully chosen. Similarly, depending on the accuracy required, the lay-up must be exposed such that the regions of the x-ray to be analyzed have a density range between 1 and 4.5. After the exposure, the image is digitized, and the digital image can then be analyzed using the image analysis software.
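
    The interpolation at the heart of the procedure is simple to sketch; all numeric values below are hypothetical calibration readings, whereas the actual procedure works from the film's characteristic curve over the usable density range of 1 to 4.5.

```python
import numpy as np

# Hypothetical calibration readings: mean digital densities measured over
# machined steps of known thickness on the same exposure.
step_thickness_in = np.array([0.10, 0.15, 0.20, 0.25])  # known step thicknesses
step_density      = np.array([3.9, 3.1, 2.4, 1.8])      # measured image densities

roi_density = 2.75  # density measured over the region of interest

# Density falls as thickness grows, so reverse both arrays for np.interp,
# which requires ascending x values.
roi_thickness = np.interp(roi_density, step_density[::-1], step_thickness_in[::-1])
print(f"estimated thickness: {roi_thickness:.3f} in")
```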

  19. Using Graphical Processing Units to Accelerate Orthorectification, Atmospheric Correction and Transformations for Big Data

    NASA Astrophysics Data System (ADS)

    O'Connor, A. S.; Justice, B.; Harris, A. T.

    2013-12-01

    Graphics Processing Units (GPUs) are high-performance multiple-core processors capable of very high computational speeds and large data throughput. Modern GPUs are inexpensive and widely available commercially. These are general-purpose parallel processors with support for a variety of programming interfaces, including industry standard languages such as C. GPU implementations of algorithms that are well suited for parallel processing can often achieve speedups of several orders of magnitude over optimized CPU codes. Significant improvements in speed for imagery orthorectification, atmospheric correction, target detection and image transformations like Independent Components Analysis (ICA) have been achieved using GPU-based implementations. Additional optimizations, when factored in with GPU processing capabilities, can provide a 50x - 100x reduction in the time required to process large imagery. Exelis Visual Information Solutions (VIS) has implemented a CUDA-based GPU processing framework for accelerating ENVI and IDL processes that can best take advantage of parallelization. Testing performed by Exelis VIS shows that orthorectification can take as long as two hours with a WorldView-1 35,000 x 35,000 pixel image; with GPU orthorectification, the same process takes three minutes. By speeding up image processing, imagery can be used by first responders and by scientists making rapid discoveries with near-real-time data, and rapid processing provides an operational component to data centers needing to quickly process and disseminate data.

  20. Increased resting-state functional connectivity of visual- and cognitive-control brain networks after training in children with reading difficulties

    PubMed Central

    Horowitz-Kraus, Tzipi; DiFrancesco, Mark; Kay, Benjamin; Wang, Yingying; Holland, Scott K.

    2015-01-01

    The Reading Acceleration Program, a computerized reading-training program, increases activation in neural circuits related to reading. We examined the effect of the training on the functional connectivity between independent components related to visual processing, executive functions, attention, memory, and language during rest after the training. Children 8–12 years old with reading difficulties and typical readers participated in the study. Behavioral testing and functional magnetic resonance imaging were performed before and after the training. Imaging data were analyzed using an independent component analysis approach. After training, both reading groups showed increased single-word contextual reading and reading comprehension scores. Greater positive correlations between the visual-processing component and the executive functions, attention, memory, or language components were found after training in children with reading difficulties. Training-related increases in connectivity between the visual and attention components and between the visual and executive function components were positively correlated with increased word reading and reading comprehension, respectively. Our findings suggest that the effect of the Reading Acceleration Program on basic cognitive domains can be detected even in the absence of an ongoing reading task. PMID:26199874

  1. Increased resting-state functional connectivity of visual- and cognitive-control brain networks after training in children with reading difficulties.

    PubMed

    Horowitz-Kraus, Tzipi; DiFrancesco, Mark; Kay, Benjamin; Wang, Yingying; Holland, Scott K

    2015-01-01

    The Reading Acceleration Program, a computerized reading-training program, increases activation in neural circuits related to reading. We examined the effect of the training on the functional connectivity between independent components related to visual processing, executive functions, attention, memory, and language during rest after the training. Children 8-12 years old with reading difficulties and typical readers participated in the study. Behavioral testing and functional magnetic resonance imaging were performed before and after the training. Imaging data were analyzed using an independent component analysis approach. After training, both reading groups showed increased single-word contextual reading and reading comprehension scores. Greater positive correlations between the visual-processing component and the executive functions, attention, memory, or language components were found after training in children with reading difficulties. Training-related increases in connectivity between the visual and attention components and between the visual and executive function components were positively correlated with increased word reading and reading comprehension, respectively. Our findings suggest that the effect of the Reading Acceleration Program on basic cognitive domains can be detected even in the absence of an ongoing reading task.

  2. Research on assessment and improvement method of remote sensing image reconstruction

    NASA Astrophysics Data System (ADS)

    Sun, Li; Hua, Nian; Yu, Yanbo; Zhao, Zhanping

    2018-01-01

    Remote sensing image quality assessment and improvement is an important part of image processing. Generally, the use of compressive sampling theory in a remote sensing imaging system can compress images while sampling, which improves efficiency. A method based on two-dimensional principal component analysis (2DPCA) is proposed to reconstruct the remote sensing image and improve the quality of the compressed image; it retains the useful information of the image while restraining noise. Then, the factors influencing remote sensing image quality are analyzed, and evaluation parameters for quantitative assessment are introduced. On this basis, the quality of the reconstructed images is evaluated and the influence of the different factors on the reconstruction is analyzed, providing meaningful reference data for enhancing the quality of remote sensing images. The experimental results show that the evaluation results fit human visual perception, and the proposed method has good application value in the field of remote sensing image processing.

  3. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions.

    PubMed

    Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A

    2015-09-21

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.
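
    The measurement/null decomposition can be illustrated for a toy discretized operator (the paper works with continuous-to-discrete photon-processing operators, so this is only an analogy):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((40, 100))   # toy discretized imaging operator
f = rng.standard_normal(100)         # toy object vector

U, s, Vt = np.linalg.svd(H, full_matrices=True)
r = int(np.sum(s > 1e-10 * s[0]))    # numerical rank

f_meas = Vt[:r].T @ (Vt[:r] @ f)     # measurement component (visible to H)
f_null = f - f_meas                  # null component: invisible to H
assert np.allclose(H @ f_null, 0.0, atol=1e-8)
```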

  4. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Barrett, Harrison H.; Frey, Eric C.; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A.

    2015-09-01

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.

  5. Conceptual design of an on-board optical processor with components

    NASA Technical Reports Server (NTRS)

    Walsh, J. R.; Shackelford, R. G.

    1977-01-01

    The specification of components for a spacecraft on-board optical processor was investigated. A space oriented application of optical data processing and the investigation of certain aspects of optical correlators were examined. The investigation confirmed that real-time optical processing has made significant advances over the past few years, but that there are still critical components which will require further development for use in an on-board optical processor. The devices evaluated were the coherent light valve, the readout optical modulator, the liquid crystal modulator, and the image forming light modulator.

  6. Color Image Processing and Object Tracking System

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system, including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

  7. Identification of nodes and internodes of chopped biomass stems by Image analysis

    USDA-ARS's Scientific Manuscript database

    Separating the morphological components of biomass leads to better handling, more efficient processing, as well as value-added product generation, as these components vary in their chemical composition and can be preferentially utilized. Nodes and internodes of biomass stems have distinct chemical co...

  8. CNN Based Retinal Image Upscaling Using Zero Component Analysis

    NASA Astrophysics Data System (ADS)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high-quality image upscaling for noisy images, which are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and in textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods such as DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures such as blood vessels are preserved, the noise level is reduced, and no artifacts or non-existent details are added. These properties are essential in establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
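
    Zero Component Analysis in this context is ZCA whitening of the training patches; a minimal sketch, with the epsilon regularizer as an assumed parameter:

```python
import numpy as np

def zca_whiten(patches, eps=1e-5):
    """ZCA whitening of flattened image patches (one patch per row).
    Decorrelates pixel dimensions while staying close to the original
    pixel space; eps is an assumed regularizer for small eigenvalues."""
    mean = patches.mean(axis=0)
    Xc = patches - mean
    cov = Xc.T @ Xc / max(len(Xc) - 1, 1)
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T  # symmetric whitening matrix
    return Xc @ W, W, mean
```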

  9. Image Tracing: An Analysis of Its Effectiveness in Children's Pictorial Discrimination Learning

    ERIC Educational Resources Information Center

    Levin, Joel R.; And Others

    1977-01-01

    A total of 45 fifth grade students were the subjects of an experiment offering support for a component of learning strategy (memory imagery). Various theoretical explanations of the image-tracing phenomenon are considered, including depth of processing, dual coding and frequency. (MS)

  10. Implementation and Testing of Low Cost Uav Platform for Orthophoto Imaging

    NASA Astrophysics Data System (ADS)

    Brucas, D.; Suziedelyte-Visockiene, J.; Ragauskas, U.; Berteska, E.; Rudinskas, D.

    2013-08-01

    The use of Unmanned Aerial Vehicles for civilian applications is rapidly increasing. Technologies which were expensive and available only for military use have recently spread to the civilian market. A vast number of low cost open source components and systems are available for implementation on UAVs. Using low cost hobby and open source components considerably decreases UAV price, though in some cases it compromises reliability. At the Space Science and Technology Institute (SSTI), in collaboration with Vilnius Gediminas Technical University (VGTU), research has been performed on constructing and operating small UAVs composed of low cost open source components (and our own developments). The most obvious and simple application of such UAVs is orthophoto imaging with data download and processing after the flight. The construction and operation of the UAVs, flight experience, data processing and data use are covered in the paper and presentation.

  11. Terahertz multistatic reflection imaging.

    PubMed

    Dorney, Timothy D; Symes, William W; Baraniuk, Richard G; Mittleman, Daniel M

    2002-07-01

    We describe a new imaging method using single-cycle pulses of terahertz (THz) radiation. This technique emulates the data collection and image processing procedures developed for geophysical prospecting and is made possible by the availability of fiber-coupled THz receiver antennas. We use a migration procedure to solve the inverse problem; this permits us to reconstruct the location, the shape, and the refractive index of targets. We show examples for both metallic and dielectric model targets, and we perform velocity analysis on dielectric targets to estimate the refractive indices of imaged components. These results broaden the capabilities of THz imaging systems and also demonstrate the viability of the THz system as a test bed for the exploration of new seismic processing methods.

  12. Development of AN All-Purpose Free Photogrammetric Tool

    NASA Astrophysics Data System (ADS)

    González-Aguilera, D.; López-Fernández, L.; Rodriguez-Gonzalvez, P.; Guerrero, D.; Hernandez-Lopez, D.; Remondino, F.; Menna, F.; Nocerino, E.; Toschi, I.; Ballabeni, A.; Gaiani, M.

    2016-06-01

    Photogrammetry is currently facing some challenges and changes, mainly related to automation, ubiquitous processing and variety of applications. Within an ISPRS Scientific Initiative, a team of researchers from USAL, UCLM, FBK and UNIBO has developed an open photogrammetric tool, called GRAPHOS (inteGRAted PHOtogrammetric Suite). GRAPHOS makes it possible to obtain dense and metric 3D point clouds from terrestrial and UAV images. It integrates robust photogrammetric and computer vision algorithms with the following aims: (i) increase automation, allowing users to obtain dense 3D point clouds through a friendly and easy-to-use interface; (ii) increase flexibility, working with any type of images, scenarios and cameras; (iii) improve quality, guaranteeing high accuracy and resolution; (iv) preserve photogrammetric reliability and repeatability. Last but not least, GRAPHOS also has an educational component, reinforced with didactic explanations of the algorithms and their performance. The developments were carried out at different levels: GUI realization, image pre-processing, photogrammetric processing with weight parameters, dataset creation and system evaluation. The paper presents in detail the developments of GRAPHOS with all its photogrammetric components, along with evaluation analyses based on various image datasets. GRAPHOS is distributed free for research and educational needs.

  13. "Seeing" electroencephalogram through the skull: imaging prefrontal cortex with fast optical signal

    NASA Astrophysics Data System (ADS)

    Medvedev, Andrei V.; Kainerstorfer, Jana M.; Borisov, Sergey V.; Gandjbakhche, Amir H.; Vanmeter, John

    2010-11-01

    Near-infrared spectroscopy is a novel imaging technique potentially sensitive to both brain hemodynamics (slow signal) and neuronal activity (fast optical signal, FOS). The big challenge of measuring FOS noninvasively lies in the presumably low signal-to-noise ratio. Thus, detectability of the FOS has been controversially discussed. We present reliable detection of FOS from 11 individuals concurrently with electroencephalogram (EEG) during a Go-NoGo task. Probes were placed bilaterally over prefrontal cortex. Independent component analysis (ICA) was used for artifact removal. Correlation coefficient in the best correlated FOS-EEG ICA pairs was highly significant (p < 10⁻⁸), and event-related optical signal (EROS) was found in all subjects. Several EROS components were similar to the event-related potential (ERP) components. The most robust "optical N200" at t = 225 ms coincided with the N200 ERP; both signals showed significant difference between targets and nontargets, and their timing correlated with subject's reaction time. Correlation between FOS and EEG even in single trials provides further evidence that at least some FOS components "reflect" electrical brain processes directly. The data provide evidence for the early involvement of prefrontal cortex in rapid object recognition. EROS is highly localized and can provide cost-effective imaging tools for cortical mapping of cognitive processes.

  14. “Seeing” electroencephalogram through the skull: imaging prefrontal cortex with fast optical signal

    PubMed Central

    Medvedev, Andrei V.; Kainerstorfer, Jana M.; Borisov, Sergey V.; Gandjbakhche, Amir H.; VanMeter, John

    2010-01-01

    Near-infrared spectroscopy is a novel imaging technique potentially sensitive to both brain hemodynamics (slow signal) and neuronal activity (fast optical signal, FOS). The big challenge of measuring FOS noninvasively lies in the presumably low signal-to-noise ratio. Thus, detectability of the FOS has been controversially discussed. We present reliable detection of FOS from 11 individuals concurrently with electroencephalogram (EEG) during a Go-NoGo task. Probes were placed bilaterally over prefrontal cortex. Independent component analysis (ICA) was used for artifact removal. Correlation coefficient in the best correlated FOS–EEG ICA pairs was highly significant (p < 10⁻⁸), and event-related optical signal (EROS) was found in all subjects. Several EROS components were similar to the event-related potential (ERP) components. The most robust “optical N200” at t = 225 ms coincided with the N200 ERP; both signals showed significant difference between targets and nontargets, and their timing correlated with subject’s reaction time. Correlation between FOS and EEG even in single trials provides further evidence that at least some FOS components “reflect” electrical brain processes directly. The data provide evidence for the early involvement of prefrontal cortex in rapid object recognition. EROS is highly localized and can provide cost-effective imaging tools for cortical mapping of cognitive processes. PMID:21198150

  15. The Dark Energy Survey Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Morganson, E.; Gruendl, R. A.; Menanteau, F.; Carrasco Kind, M.; Chen, Y.-C.; Daues, G.; Drlica-Wagner, A.; Friedel, D. N.; Gower, M.; Johnson, M. W. G.; Johnson, M. D.; Kessler, R.; Paz-Chinchón, F.; Petravick, D.; Pond, C.; Yanny, B.; Allam, S.; Armstrong, R.; Barkhouse, W.; Bechtol, K.; Benoit-Lévy, A.; Bernstein, G. M.; Bertin, E.; Buckley-Geer, E.; Covarrubias, R.; Desai, S.; Diehl, H. T.; Goldstein, D. A.; Gruen, D.; Li, T. S.; Lin, H.; Marriner, J.; Mohr, J. J.; Neilsen, E.; Ngeow, C.-C.; Paech, K.; Rykoff, E. S.; Sako, M.; Sevilla-Noarbe, I.; Sheldon, E.; Sobreira, F.; Tucker, D. L.; Wester, W.; DES Collaboration

    2018-07-01

    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a ∼5000 deg2 survey of the southern sky in five optical bands (g, r, i, z, Y) to a depth of ∼24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g, r, i, z) over ∼27 deg2. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.

  16. Detection of hydroxyapatite in calcified cardiovascular tissues.

    PubMed

    Lee, Jae Sam; Morrisett, Joel D; Tung, Ching-Hsuan

    2012-10-01

    The objective of this study is to develop a method for selective detection of the calcific (hydroxyapatite) component in human aortic smooth muscle cells in vitro and in calcified cardiovascular tissues ex vivo. This method uses a novel optical molecular imaging contrast dye, Cy-HABP-19, to target calcified cells and tissues. A peptide that mimics the binding affinity of osteocalcin was used to label hydroxyapatite in vitro and ex vivo. Morphological changes in vascular smooth muscle cells were evaluated at an early stage of the mineralization process induced by extrinsic stimuli, osteogenic factors and a magnetic suspension cell culture. Hydroxyapatite components were detected in monolayers of these cells in the presence of osteogenic factors and a magnetic suspension environment. Atherosclerotic plaque contains multiple components including lipidic, fibrotic, thrombotic, and calcific materials. Using optical imaging and the Cy-HABP-19 molecular imaging probe, we demonstrated that hydroxyapatite components could be selectively distinguished from various calcium salts in human aortic smooth muscle cells in vitro and in calcified cardiovascular tissues, carotid endarterectomy samples and aortic valves, ex vivo. Hydroxyapatite deposits in cardiovascular tissues were selectively detected in the early stage of the calcification process using our Cy-HABP-19 probe. This new probe makes it possible to study the earliest events associated with vascular hydroxyapatite deposition at the cellular and molecular levels. This target-selective molecular imaging probe approach holds high potential for revealing early pathophysiological changes, leading to progression, regression, or stabilization of cardiovascular diseases.

  17. Detection of Hydroxyapatite in Calcified Cardiovascular Tissues

    PubMed Central

    Lee, Jae Sam; Morrisett, Joel D.; Tung, Ching-Hsuan

    2012-01-01

    Objective: The objective of this study is to develop a method for selective detection of the calcific (hydroxyapatite) component in human aortic smooth muscle cells in vitro and in calcified cardiovascular tissues ex vivo. This method uses a novel optical molecular imaging contrast dye, Cy-HABP-19, to target calcified cells and tissues. Methods: A peptide that mimics the binding affinity of osteocalcin was used to label hydroxyapatite in vitro and ex vivo. Morphological changes in vascular smooth muscle cells were evaluated at an early stage of the mineralization process induced by extrinsic stimuli, osteogenic factors and a magnetic suspension cell culture. Hydroxyapatite components were detected in monolayers of these cells in the presence of osteogenic factors and a magnetic suspension environment. Results: Atherosclerotic plaque contains multiple components including lipidic, fibrotic, thrombotic, and calcific materials. Using optical imaging and the Cy-HABP-19 molecular imaging probe, we demonstrated that hydroxyapatite components could be selectively distinguished from various calcium salts in human aortic smooth muscle cells in vitro and in calcified cardiovascular tissues, carotid endarterectomy samples and aortic valves, ex vivo. Conclusion: Hydroxyapatite deposits in cardiovascular tissues were selectively detected in the early stage of the calcification process using our Cy-HABP-19 probe. This new probe makes it possible to study the earliest events associated with vascular hydroxyapatite deposition at the cellular and molecular levels. This target-selective molecular imaging probe approach holds high potential for revealing early pathophysiological changes, leading to progression, regression, or stabilization of cardiovascular diseases. PMID:22877867

  18. EIR: enterprise imaging repository, an alternative imaging archiving and communication system.

    PubMed

    Bian, Jiang; Topaloglu, Umit; Lane, Cheryl

    2009-01-01

    The enormous number of studies performed at the Nuclear Medicine Department of the University of Arkansas for Medical Sciences (UAMS) generates a huge volume of PET/CT images daily. A DICOM workstation had been used as a "mini-PACS" to route all studies, an arrangement that has historically proven slow for various reasons. However, replacing the workstation with a commercial PACS server is not only cost inefficient; more often, the PACS vendors are reluctant to take responsibility for the final integration of these components. Therefore, in this paper, we propose an alternative imaging archiving and communication system called the Enterprise Imaging Repository (EIR). EIR consists of two distinct components: an image processing daemon and a user-friendly web interface. EIR not only reduces the overall waiting time for transferring a study from the modalities to radiologists' workstations, but also provides a preferable presentation.

  19. Reducing noise component on medical images

    NASA Astrophysics Data System (ADS)

    Semenishchev, Evgeny; Voronin, Viacheslav; Dub, Vladimir; Balabaeva, Oksana

    2018-04-01

    Medical visualization and the analysis of medical data form an active research direction. Medical images are used in microbiology, genetics, roentgenology, oncology, surgery, ophthalmology, etc. Initial data processing is a major step towards obtaining a good diagnostic result. This paper considers an approach that allows image filtering while preserving object borders. The proposed algorithm is based on sequential data processing. At the first stage, local areas are determined by threshold processing together with the classical ICI (intersection of confidence intervals) algorithm. The second stage uses a method based on two criteria, namely the L2 norm and the first-order squared difference. To preserve the boundaries of objects, the transition boundary and its local neighborhood are processed with a fixed-coefficient filtering algorithm. As examples, reconstructed images from CT, x-ray, and microbiological studies are shown. The test images demonstrate the effectiveness of the proposed algorithm and its applicability to many medical imaging applications.
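
    As a rough illustration of the two-stage idea, the following Python sketch (assuming NumPy and SciPy) substitutes a simple thresholded gradient map for the paper's ICI-based stage and applies a fixed-coefficient mean filter only away from object borders; it is a minimal stand-in, not the authors' algorithm.

      # Minimal sketch of two-stage, edge-preserving smoothing (illustrative;
      # a thresholded gradient map stands in for the ICI-based first stage).
      import numpy as np
      from scipy import ndimage

      def edge_preserving_smooth(img, grad_thresh=0.1, size=5):
          """Smooth homogeneous regions with a fixed-coefficient mean filter,
          leaving pixels near object borders (high gradient) untouched."""
          img = img.astype(float)
          # Stage 1: locate transition boundaries via gradient magnitude.
          gx = ndimage.sobel(img, axis=1)
          gy = ndimage.sobel(img, axis=0)
          mag = np.hypot(gx, gy)
          edges = mag > grad_thresh * mag.max()
          # Stage 2: fixed-coefficient (uniform) filtering away from borders.
          smoothed = ndimage.uniform_filter(img, size=size)
          return np.where(edges, img, smoothed)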

  20. Multi-template image matching using alpha-rooted biquaternion phase correlation with application to logo recognition

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2011-06-01

    Hypercomplex approaches are seeing increased application to signal and image processing problems. The use of multicomponent hypercomplex numbers, such as quaternions, enables the simultaneous co-processing of multiple signal or image components. This joint processing capability can provide improved exploitation of the information contained in the data, thereby leading to improved performance in detection and recognition problems. In this paper, we apply hypercomplex processing techniques to the logo image recognition problem. Specifically, we develop an image matcher by generalizing classical phase correlation to the biquaternion case. We further incorporate biquaternion Fourier domain alpha-rooting enhancement to create Alpha-Rooted Biquaternion Phase Correlation (ARBPC). We present the mathematical properties which justify use of ARBPC as an image matcher. We present numerical performance results of a logo verification problem using real-world logo data, demonstrating the performance improvement obtained using the hypercomplex approach. We compare results of the hypercomplex approach to standard multi-template matching approaches.
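
    For readers unfamiliar with the classical baseline that ARBPC generalizes, the following minimal Python sketch implements standard complex-valued phase correlation; the biquaternion generalization and the alpha-rooting enhancement described in the paper are not shown.

      # Classical (complex-valued) phase correlation; the peak of the
      # correlation surface gives the translation between the two images.
      import numpy as np

      def phase_correlation(a, b):
          A = np.fft.fft2(a)
          B = np.fft.fft2(b)
          cross = A * np.conj(B)
          cross /= np.abs(cross) + 1e-12   # keep phase information only
          return np.fft.ifft2(cross).real

      a = np.random.rand(64, 64)
      b = np.roll(a, (5, 3), axis=(0, 1))  # shifted copy of the template
      peak = np.unravel_index(np.argmax(phase_correlation(a, b)), a.shape)
      print(peak)                          # approximately (5, 3)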

  1. The effect of a brief mindfulness induction on processing of emotional images: an ERP study.

    PubMed

    Eddy, Marianna D; Brunyé, Tad T; Tower-Richardi, Sarah; Mahoney, Caroline R; Taylor, Holly A

    2015-01-01

    The ability to effectively direct one's attention is an important aspect of regulating emotions and a component of mindfulness. Mindfulness practices have been established as effective interventions for mental and physical illness; however, the underlying neural mechanisms of mindfulness and how they relate to emotional processing have not been explored in depth. The current study used a within-subjects repeated measures design to examine if focused breathing, a brief mindfulness induction, could modulate event-related potentials (ERPs) during emotional image processing relative to a control condition. We related ERP measures of processing positive, negative, and neutral images (the P300 and late positive potential - LPP) to state and trait mindfulness measures. Overall, the brief mindfulness induction condition did not influence ERPs reflecting emotional processing; however, in the brief mindfulness induction condition, those participants who reported feeling more decentered (a subscale of the Toronto Mindfulness Scale) after viewing the images had reduced P300 responses to negative versus neutral images.

  2. The effect of a brief mindfulness induction on processing of emotional images: an ERP study

    PubMed Central

    Eddy, Marianna D.; Brunyé, Tad T.; Tower-Richardi, Sarah; Mahoney, Caroline R.; Taylor, Holly A.

    2015-01-01

    The ability to effectively direct one’s attention is an important aspect of regulating emotions and a component of mindfulness. Mindfulness practices have been established as effective interventions for mental and physical illness; however, the underlying neural mechanisms of mindfulness and how they relate to emotional processing have not been explored in depth. The current study used a within-subjects repeated measures design to examine if focused breathing, a brief mindfulness induction, could modulate event-related potentials (ERPs) during emotional image processing relative to a control condition. We related ERP measures of processing positive, negative, and neutral images (the P300 and late positive potential – LPP) to state and trait mindfulness measures. Overall, the brief mindfulness induction condition did not influence ERPs reflecting emotional processing; however, in the brief mindfulness induction condition, those participants who reported feeling more decentered (a subscale of the Toronto Mindfulness Scale) after viewing the images had reduced P300 responses to negative versus neutral images. PMID:26441766

  3. Blind deconvolution with principal components analysis for wide-field and small-aperture telescopes

    NASA Astrophysics Data System (ADS)

    Jia, Peng; Sun, Rongyu; Wang, Weinan; Cai, Dongmei; Liu, Huigen

    2017-09-01

    Telescopes with a wide field of view (greater than 1°) and small apertures (less than 2 m) are workhorses for observations such as sky surveys and fast-moving object detection, and play an important role in time-domain astronomy. However, images captured by these telescopes are contaminated by optical system aberrations, atmospheric turbulence, tracking errors and wind shear. To increase the quality of images and maximize their scientific output, we propose a new blind deconvolution algorithm based on statistical properties of the point spread functions (PSFs) of these telescopes. In this new algorithm, we first construct the PSF feature space through principal component analysis, and then classify PSFs from different positions and times using a self-organizing map. According to the classification results, we group images with the same PSF type and select their PSFs to construct a prior PSF. The prior PSF is then used to restore these images. To investigate the improvement that this algorithm provides for data reduction, we process images of space debris captured by our small-aperture wide-field telescopes. Compared with the reduced results of the original images and of images processed with the standard Richardson-Lucy method, our method shows a promising improvement in astrometry accuracy.
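
    A minimal Python sketch of the first step, constructing a PSF feature space with principal component analysis, is given below (using scikit-learn); the self-organizing-map classification is omitted, and the grouping rule shown is a placeholder assumption for illustration.

      # Sketch: build a PSF feature space with PCA; the SOM classification
      # from the paper is replaced here by a naive placeholder grouping.
      import numpy as np
      from sklearn.decomposition import PCA

      # psfs: N postage stamps (e.g. 32x32 cutouts around stars)
      psfs = np.random.rand(500, 32, 32)
      X = psfs.reshape(len(psfs), -1)

      pca = PCA(n_components=10)          # low-dimensional PSF feature space
      features = pca.fit_transform(X)     # one feature vector per PSF

      # A prior PSF for one group could then be the mean of its members
      # (grouping by the sign of the first component is only a placeholder):
      prior_psf = X[features[:, 0] > 0].mean(axis=0).reshape(32, 32)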

  4. Tse computers. [ultrahigh speed optical processing for two dimensional binary image]

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.; Strong, J. P., III

    1977-01-01

    An ultra-high-speed computer that utilizes binary images as its basic computational entity is being developed. The basic logic components perform thousands of operations simultaneously. Technologies of the fiber optics, display, thin film, and semiconductor industries are being utilized in the building of the hardware.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guetaz, Laure; Lopez-Haro, M.; Escribano, S.

    Investigation of membrane/electrode assembly (MEA) microstructure has become an essential step in optimizing the MEA components and manufacturing processes and in studying MEA degradation. For these investigations, transmission electron microscopy (TEM) is a tool of choice as it provides direct imaging of the different components. TEM is thus widely used for analyzing the catalyst nanoparticles and their carbon support. However, the ionomer inside the electrode is more difficult to image. The difficulties come from the fact that the ionomer forms an ultrathin layer surrounding the carbon particles; in addition, these two components, having similar density, present no difference in contrast. In this paper, we show how recent progress in TEM techniques such as spherical aberration (Cs) corrected HRTEM, electron tomography and X-EDS elemental mapping provides new possibilities for imaging this ionomer network and consequently for studying its degradation.

  6. Holostrain system: a powerful tool for experimental mechanics

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1992-09-01

    A portable holographic interferometer that can be used to measure displacements and strains in all kinds of mechanical components and structures is described. The Holostrain system captures images on a TV camera that detects interference patterns produced by laser illumination. The video signals are digitized. The digitized interferograms are processed by a fast processing system. The outputs of the system are the strains or stresses of the observed mechanical component or structure.

  7. Improved GSO Optimized ESN Soft-Sensor Model of Flotation Process Based on Multisource Heterogeneous Information Fusion

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na

    2014-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by the improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, the color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, and to extract the nonlinear principal components in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has better generalization and prediction accuracy, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:24982935
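
    The KPCA reduction step can be sketched as follows in Python (using scikit-learn); the feature matrix here is a random placeholder standing in for the GLCM texture and color features plus the process data.

      # Sketch: kernel PCA to compress high-dimensional froth-image features
      # before feeding a soft-sensor model; feature values are placeholders.
      import numpy as np
      from sklearn.decomposition import KernelPCA

      # Rows: samples; columns: color/texture features (saturation,
      # brightness, angular second moment, ...) plus process data.
      X = np.random.rand(200, 15)

      kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.1)
      Z = kpca.fit_transform(X)   # nonlinear principal components
      print(Z.shape)              # (200, 5) -> lower-dimensional ESN input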

  8. DKIST visible broadband imager data processing pipeline

    NASA Astrophysics Data System (ADS)

    Beard, Andrew; Cowan, Bruce; Ferayorni, Andrew

    2014-07-01

    The Daniel K. Inouye Solar Telescope (DKIST) Data Handling System (DHS) provides the technical framework and building blocks for developing on-summit instrument quality assurance and data reduction pipelines. The DKIST Visible Broadband Imager (VBI) is a first light instrument that alone will create two data streams with a bandwidth of 960 MB/s each. The high data rate and data volume of the VBI require near-real time processing capability for quality assurance and data reduction, and will be performed on-summit using Graphics Processing Unit (GPU) technology. The VBI data processing pipeline (DPP) is the first designed and developed using the DKIST DHS components, and therefore provides insight into the strengths and weaknesses of the framework. In this paper we lay out the design of the VBI DPP, examine how the underlying DKIST DHS components are utilized, and discuss how integration of the DHS framework with GPUs was accomplished. We present our results of the VBI DPP alpha release implementation of the calibration, frame selection reduction, and quality assurance display processing nodes.

  9. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera

    PubMed Central

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-01

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by unmanned aerial vehicles’ camera (UAVs) and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth–map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable. PMID:29342908
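
    The compression of an image's feature points into three principal component points might look like the following Python sketch (using scikit-learn); the exact construction of the three summary points is an assumption made for illustration, not the paper's definition.

      # Sketch: compress each image's 2-D feature points into three
      # "principal component points" used to relate images to one another.
      import numpy as np
      from sklearn.decomposition import PCA

      def principal_points(points):
          """points: (N, 2) array of feature point coordinates."""
          pca = PCA(n_components=2).fit(points)
          center = points.mean(axis=0)
          # One point at the centroid, two along the first principal axis,
          # scaled by the spread along that axis (an assumption).
          axis = pca.components_[0] * np.sqrt(pca.explained_variance_[0])
          return np.stack([center, center + axis, center - axis])

      pts = np.random.rand(800, 2) * [1920, 1080]
      print(principal_points(pts))   # 3 summary points for this image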

  10. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.

    PubMed

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-14

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by unmanned aerial vehicles' camera (UAVs) and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.

  11. Steerable Principal Components for Space-Frequency Localized Images

    PubMed Central

    Landa, Boris; Shkolnisky, Yoel

    2017-01-01

    As modern scientific image datasets typically consist of a large number of images of high resolution, devising methods for their accurate and efficient processing is a central research task. In this paper, we consider the problem of obtaining the steerable principal components of a dataset, a procedure termed “steerable PCA” (steerable principal component analysis). The output of the procedure is the set of orthonormal basis functions which best approximate the images in the dataset and all of their planar rotations. To derive such basis functions, we first expand the images in an appropriate basis, for which the steerable PCA reduces to the eigen-decomposition of a block-diagonal matrix. If we assume that the images are well localized in space and frequency, then such an appropriate basis is the prolate spheroidal wave functions (PSWFs). We derive a fast method for computing the PSWFs expansion coefficients from the images' equally spaced samples, via a specialized quadrature integration scheme, and show that the number of required quadrature nodes is similar to the number of pixels in each image. We then establish that our PSWF-based steerable PCA is both faster and more accurate than existing methods, and more importantly, provides us with rigorous error bounds on the entire procedure. PMID:29081879

  12. Onboard spectral imager data processor

    NASA Astrophysics Data System (ADS)

    Otten, Leonard J.; Meigs, Andrew D.; Franklin, Abraham J.; Sears, Robert D.; Robison, Mark W.; Rafert, J. Bruce; Fronterhouse, Donald C.; Grotbeck, Ronald L.

    1999-10-01

    Previous papers have described the concept behind the MightySat II.1 program, the satellite's Fourier Transform imaging spectrometer's optical design, the design for the spectral imaging payload, and its initial qualification testing. This paper discusses the on board data processing designed to reduce the amount of downloaded data by an order of magnitude and provide a demonstration of a smart spaceborne spectral imaging sensor. Two custom components, a spectral imager interface 6U VME card that moves data at over 30 MByte/sec, and four TI C-40 processors mounted to a second 6U VME and daughter card, are used to adapt the sensor to the spacecraft and provide the necessary high speed processing. A system architecture that offers both on board real time image processing and high-speed post data collection analysis of the spectral data has been developed. In addition to the on board processing of the raw data into a usable spectral data volume, one feature extraction technique has been incorporated. This algorithm operates on the basic interferometric data. The algorithm is integrated within the data compression process to search for uploadable feature descriptions.

  13. Parallel Processing of Images in Mobile Devices using BOINC

    NASA Astrophysics Data System (ADS)

    Curiel, Mariela; Calle, David F.; Santamaría, Alfredo S.; Suarez, David F.; Flórez, Leonardo

    2018-04-01

    Medical image processing helps health professionals make decisions for the diagnosis and treatment of patients. Since some algorithms for processing images require substantial amounts of resources, one could take advantage of distributed or parallel computing. A mobile grid can be an adequate computing infrastructure for this problem. A mobile grid is a grid that includes mobile devices as resource providers. In a previous step of this research, we selected BOINC as the infrastructure to build our mobile grid. However, parallel processing of images on mobile devices poses at least two important challenges: the execution of standard image processing libraries and obtaining adequate performance compared to desktop computer grids. By the time we started our research, the use of BOINC on mobile devices also involved two issues: a) executing programs on mobile devices required modifying the code to insert calls to the BOINC API, and b) dividing the image among the mobile devices, as well as merging the results, required additional code in some BOINC components. This article presents answers to these four challenges.

  14. Image processing and Quality Control for the first 10,000 brain imaging datasets from UK Biobank.

    PubMed

    Alfaro-Almagro, Fidel; Jenkinson, Mark; Bangerter, Neal K; Andersson, Jesper L R; Griffanti, Ludovica; Douaud, Gwenaëlle; Sotiropoulos, Stamatios N; Jbabdi, Saad; Hernandez-Fernandez, Moises; Vallee, Emmanuel; Vidaurre, Diego; Webster, Matthew; McCarthy, Paul; Rorden, Christopher; Daducci, Alessandro; Alexander, Daniel C; Zhang, Hui; Dragonu, Iulius; Matthews, Paul M; Miller, Karla L; Smith, Stephen M

    2018-02-01

    UK Biobank is a large-scale prospective epidemiological study with all data accessible to researchers worldwide. It is currently in the process of bringing back 100,000 of the original participants for brain, heart and body MRI, carotid ultrasound and low-dose bone/fat x-ray. The brain imaging component covers 6 modalities (T1, T2 FLAIR, susceptibility weighted MRI, Resting fMRI, Task fMRI and Diffusion MRI). Raw and processed data from the first 10,000 imaged subjects has recently been released for general research access. To help convert this data into useful summary information we have developed an automated processing and QC (Quality Control) pipeline that is available for use by other researchers. In this paper we describe the pipeline in detail, following a brief overview of UK Biobank brain imaging and the acquisition protocol. We also describe several quantitative investigations carried out as part of the development of both the imaging protocol and the processing pipeline. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  15. The InSAR Scientific Computing Environment (ISCE): An Earth Science SAR Processing Framework, Toolbox, and Foundry

    NASA Astrophysics Data System (ADS)

    Agram, P. S.; Gurrola, E. M.; Lavalle, M.; Sacco, G. F.; Rosen, P. A.

    2016-12-01

    The InSAR Scientific Computing Environment (ISCE) provides both a modular, flexible, and extensible framework for building software components and applications that work together seamlessly as well as a toolbox for processing InSAR data into higher level geodetic image products from a diverse array of radar satellites and aircraft. ISCE easily scales to serve as the SAR processing engine at the core of the NASA JPL Advanced Rapid Imaging and Analysis (ARIA) Center for Natural Hazards as well as a software toolbox for individual scientists working with SAR data. ISCE is planned as the foundational element in processing NISAR data, enabling a new class of analyses that take greater advantage of the long time and large spatial scales of these data. ISCE in ARIA is also a SAR Foundry for development of new processing components and workflows to meet the needs of both large processing centers and individual users. The ISCE framework contains object-oriented Python components layered to construct Python InSAR components that manage legacy Fortran/C InSAR programs. The Python user interface enables both command-line deployment of workflows as well as an interactive "sand box" (the Python interpreter) where scientists can "play" with the data. Recent developments in ISCE include the addition of components to ingest Sentinel-1A SAR data (both stripmap and TOPS-mode) and a new workflow for processing the TOPS-mode data. New components are being developed to exploit polarimetric-SAR data to provide the ecosystem and land-cover/land-use change communities with rigorous and efficient tools to perform multi-temporal, polarimetric and tomographic analyses in order to generate calibrated, geocoded and mosaicked Level-2 and Level-3 products (e.g., maps of above-ground biomass or forest disturbance). ISCE has been downloaded by over 200 users under a license for WinSAR members through the Unavco.org website. Others may apply directly to JPL for a license at download.jpl.nasa.gov.

  16. Frequency analysis of gaze points with CT colonography interpretation using eye gaze tracking system

    NASA Astrophysics Data System (ADS)

    Tsutsumi, Shoko; Tamashiro, Wataru; Sato, Mitsuru; Okajima, Mika; Ogura, Toshihiro; Doi, Kunio

    2017-03-01

    It is important to investigate the eye-tracking gaze points of experts in order to assist trainees in understanding the image interpretation process. We investigated gaze points during CT colonography (CTC) interpretation and analyzed the difference in gaze points between experts and trainees. In this study, we attempted to understand how trainees can reach the level achieved by experts in viewing CTC. We used an eye gaze point sensing system, Gazefineder (JVCKENWOOD Corporation, Tokyo, Japan), which can detect the pupil point and corneal reflection point by dark pupil eye tracking. This system provides gaze point images and Excel file data. The subjects were radiological technologists experienced and inexperienced in reading CTC. We performed observer studies in reading virtual pathology images and examined the observers' image interpretation processes using gaze point data. Furthermore, we examined eye tracking frequency analysis using the Fast Fourier Transform (FFT). We were able to understand the difference in gaze points between experts and trainees by use of the frequency analysis. The results for the trainee contained large amounts of both high-frequency and low-frequency components. In contrast, both components for the expert were relatively low. Regarding the amount of eye movement every 0.02 seconds, we found that the expert tended to interpret images slowly and calmly, whereas the trainee moved their eyes quickly and searched over wide areas. We can assess the difference in gaze points on CTC between experts and trainees by use of the eye gaze point sensing system and the frequency analysis. The potential improvements in CTC interpretation for trainees can be evaluated by using gaze point data.
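
    A Python sketch of the FFT-based frequency analysis of a gaze trace sampled every 0.02 s is shown below; the 2 Hz split between low- and high-frequency bands and the synthetic trace are assumptions for illustration, not the study's settings.

      # Sketch: frequency analysis of a gaze trace sampled at 50 Hz,
      # comparing power in low- and high-frequency bands (synthetic data).
      import numpy as np

      dt = 0.02                                   # sampling interval (s)
      t = np.arange(0, 10, dt)
      gaze_x = np.cumsum(np.random.randn(t.size)) # synthetic gaze trace

      spectrum = np.abs(np.fft.rfft(gaze_x - gaze_x.mean())) ** 2
      freqs = np.fft.rfftfreq(t.size, d=dt)

      low = spectrum[(freqs > 0) & (freqs <= 2)].sum()   # slow scanning
      high = spectrum[freqs > 2].sum()                   # rapid movements
      print(f"low/high power ratio: {low / high:.2f}")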

  17. Finite Element Analysis of Film Stack Architecture for Complementary Metal-Oxide-Semiconductor Image Sensors.

    PubMed

    Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang

    2017-05-02

    Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components.

  18. Finite Element Analysis of Film Stack Architecture for Complementary Metal-Oxide–Semiconductor Image Sensors

    PubMed Central

    Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang

    2017-01-01

    Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components. PMID:28468324

  19. Ship Speed Retrieval From Single Channel TerraSAR-X Data

    NASA Astrophysics Data System (ADS)

    Soccorsi, Matteo; Lehner, Susanne

    2010-04-01

    A method to estimate the speed of a moving ship is presented. The technique, introduced in Kirscht (1998), is extended to marine applications and validated on TerraSAR-X High-Resolution (HR) data. The generation of a sequence of single-look SAR images from a single-channel image corresponds to an image time series with reduced resolution. This allows applying change detection techniques on the time series to evaluate the velocity components in range and azimuth of the ship. The evaluation of the displacement vector of a moving target in consecutive images of the sequence allows the estimation of the azimuth velocity component. The range velocity component is estimated by evaluating the variation of the signal amplitude during the sequence. In order to apply the technique on TerraSAR-X Spot Light (SL) data, a further processing step is needed: the phase has to be corrected as presented in Eineder et al. (2009) due to the SL acquisition mode; otherwise the image sequence cannot be generated. The analysis, where possible validated by the Automatic Identification System (AIS), was performed in the framework of the ESA project MARISS.

  20. Simultaneous acquisition of differing image types

    DOEpatents

    Demos, Stavros G

    2012-10-09

    A system in one embodiment includes an image forming device for forming an image from an area of interest containing different image components; an illumination device for illuminating the area of interest with light containing multiple components; at least one light source coupled to the illumination device, the at least one light source providing light to the illumination device containing different components, each component having distinct spectral characteristics and relative intensity; an image analyzer coupled to the image forming device, the image analyzer decomposing the image formed by the image forming device into multiple component parts based on type of imaging; and multiple image capture devices, each image capture device receiving one of the component parts of the image. A method in one embodiment includes receiving an image from an image forming device; decomposing the image formed by the image forming device into multiple component parts based on type of imaging; receiving the component parts of the image; and outputting image information based on the component parts of the image. Additional systems and methods are presented.

  1. Identification of Buried Objects in GPR Using Amplitude Modulated Signals Extracted from Multiresolution Monogenic Signal Analysis

    PubMed Central

    Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu

    2015-01-01

    It is necessary to detect the target reflections in ground penetrating radar (GPR) images so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called the Multiresolution Monogenic Signal Analysis (MMSA) system is applied to GPR images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image. The amplitude component enhances the target reflection and suppresses the direct wave and reflected wave to a large extent. Then we use the region-of-interest extraction method to separate genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. To find the apexes of the targets, a Hough transform is used in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146
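
    At a single scale, the amplitude component of the monogenic signal can be computed with an FFT-based Riesz transform, as in the following Python sketch; the multiresolution (wavelet) part of the MMSA is omitted here.

      # Sketch: local amplitude (envelope) of a B-scan via the monogenic
      # signal, using an FFT-based Riesz transform at a single scale.
      import numpy as np

      def monogenic_amplitude(img):
          img = img.astype(float)
          F = np.fft.fft2(img)
          u = np.fft.fftfreq(img.shape[0])[:, None]
          v = np.fft.fftfreq(img.shape[1])[None, :]
          norm = np.sqrt(u**2 + v**2)
          norm[0, 0] = 1.0                  # avoid division by zero at DC
          r1 = np.fft.ifft2(-1j * u / norm * F).real  # Riesz component 1
          r2 = np.fft.ifft2(-1j * v / norm * F).real  # Riesz component 2
          return np.sqrt(img**2 + r1**2 + r2**2)      # local amplitude

      bscan = np.random.rand(256, 256)      # placeholder B-scan image
      amp = monogenic_amplitude(bscan)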

  2. A data processing method based on tracking light spot for the laser differential confocal component parameters measurement system

    NASA Astrophysics Data System (ADS)

    Shao, Rongjun; Qiu, Lirong; Yang, Jiamiao; Zhao, Weiqian; Zhang, Xin

    2013-12-01

    We have previously proposed a component parameters measuring method based on differential confocal focusing theory. In order to improve the positioning precision of the laser differential confocal component parameters measurement system (LDDCPMS), this paper provides a data processing method based on tracking the light spot. To reduce the error caused by the light spot moving while the axial intensity signal is collected, an image centroiding algorithm is used to find and track the center of the Airy disk in the images collected by the laser differential confocal system. To weaken the influence of higher-harmonic noise during the measurement, a Gaussian filter is used to process the axial intensity signal. Ultimately, the zero point corresponding to the focus of the objective in the differential confocal system is obtained by linear fitting of the differential confocal axial intensity data. Preliminary experiments indicate that the method based on tracking the light spot can accurately collect the axial intensity response signal of the virtual pinhole and improve the anti-interference ability of the system, thus improving the system positioning accuracy.
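
    A common form of the image centroiding step is the intensity-weighted centroid, sketched below in Python on a synthetic spot; this is an illustration of the general technique, not the system's exact implementation.

      # Sketch: intensity-weighted centroid for tracking the Airy disk
      # center in each collected frame.
      import numpy as np

      def centroid(img, background=0.0):
          """Return the (row, col) intensity-weighted centroid of a patch."""
          img = np.clip(img.astype(float) - background, 0, None)
          total = img.sum()
          rows, cols = np.indices(img.shape)
          return (rows * img).sum() / total, (cols * img).sum() / total

      # Synthetic Airy-like spot centered near (40.0, 28.0)
      y, x = np.indices((80, 80))
      spot = np.exp(-((y - 40) ** 2 + (x - 28) ** 2) / 20.0)
      print(centroid(spot))   # approximately (40.0, 28.0)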

  3. CRT image recording evaluation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Performance capabilities and limitations of a fiber optic coupled line scan CRT image recording system were investigated. The test program evaluated the following components: (1). P31 phosphor CRT with EMA faceplate; (2). P31 phosphor CRT with clear clad faceplate; (3). Type 7743 semi-gloss dry process positive print paper; (4). Type 777 flat finish dry process positive print paper; (5). Type 7842 dry process positive film; and (6). Type 1971 semi-gloss wet process positive print paper. Detailed test procedures used in each test are provided along with a description of each test, the test data, and an analysis of the results.

  4. Dimensionality and noise in energy selective x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alvarez, Robert E.

    Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases, or at best leaves unchanged, the variance. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10^3. With the soft tissue component, it is 2.7 × 10^4. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three-dimensional processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two-dimensional processing for adipose tissue is a factor of two, and with the contrast agent as the third material the increase is also a factor of two for both components, for either two or three dimensions. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems.
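
    The CRLB machinery reduces to inverting a Fisher information matrix; a toy Python sketch with placeholder sensitivity and noise matrices follows, illustrating how adding a basis function changes the bound.

      # Sketch: Cramér-Rao lower bound for basis-material decomposition
      # under Gaussian measurement noise; all matrices are placeholders.
      import numpy as np

      # Sensitivity (Jacobian) of K energy measurements with respect to
      # the D basis-material line integrals, at an operating point.
      J = np.array([[1.0, 0.8, 0.3],
                    [0.7, 1.1, 0.2],
                    [0.5, 0.6, 0.9]])       # K=3 bins, D=3 basis functions
      C = np.diag([0.01, 0.02, 0.015])      # measurement noise covariance

      fisher = J.T @ np.linalg.inv(C) @ J   # Fisher information matrix
      crlb = np.linalg.inv(fisher)          # bound on estimator covariance
      print(np.diag(crlb))                  # minimum variance per component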

  5. Application of High Speed Digital Image Correlation in Rocket Engine Hot Fire Testing

    NASA Technical Reports Server (NTRS)

    Gradl, Paul R.; Schmidt, Tim

    2016-01-01

    Hot fire testing of rocket engine components and rocket engine systems is a critical aspect of the development process for understanding performance, reliability and system interactions. Ground testing provides the opportunity for highly instrumented development testing to validate analytical model predictions and determine necessary design changes and process improvements. To properly obtain discrete measurements for model validation, instrumentation must survive the highly dynamic and extreme-temperature environment of hot fire testing. Digital Image Correlation has been investigated and is being evaluated as a technique to augment traditional instrumentation during component and engine testing, providing further data for additional performance improvements and cost savings. The feasibility of digital image correlation techniques was demonstrated in subscale and full scale hot fire testing. This incorporated a pair of high speed cameras, installed and operated under the extreme environments present on the test stand, to measure three-dimensional, real-time displacements and strains. The development process, setup and calibrations, data collection, hot fire test data collection, and post-test analysis and results are presented in this paper.

  6. Digital image comparison by subtracting contextual transformations—percentile rank order differentiation

    USGS Publications Warehouse

    Wehde, M. E.

    1995-01-01

    The common method of digital image comparison by subtraction imposes various constraints on the image contents. Precise registration of images is required to assure proper evaluation of surface locations. The attribute being measured and the calibration and scaling of the sensor are also important to the validity and interpretability of the subtraction result. Influences of sensor gains and offsets complicate the subtraction process. The presence of any uniform systematic transformation component in one of two images to be compared distorts the subtraction results and requires analyst intervention to interpret or remove it. A new technique has been developed to overcome these constraints. Images to be compared are first transformed using the cumulative relative frequency as a transfer function. The transformed images represent the contextual relationship of each surface location with respect to all others within the image. The process of differentiating between the transformed images results in a percentile rank ordered difference. This process produces consistent terrain-change information even when the above requirements necessary for subtraction are relaxed. This technique may be valuable to an appropriately designed hierarchical terrain-monitoring methodology because it does not require human participation in the process.
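
    The transformation and differencing can be sketched in a few lines of Python (with SciPy); note how a uniform offset between the two dates vanishes after the percentile rank transform, matching the claim about uniform systematic components.

      # Sketch of percentile rank order differentiation: each image is
      # transformed by its own cumulative relative frequency (the
      # percentile rank of each pixel), then the results are differenced.
      import numpy as np
      from scipy import stats

      def percentile_rank(img):
          """Map each pixel to its percentile rank (0-100) in the image."""
          ranks = stats.rankdata(img.ravel(), method="average")
          return (100.0 * ranks / ranks.size).reshape(img.shape)

      date1 = np.random.rand(128, 128)
      date2 = date1 + 0.2          # uniform systematic offset between dates
      diff = percentile_rank(date2) - percentile_rank(date1)
      print(np.abs(diff).max())    # ~0: the uniform offset is removed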

  7. [Effect of body image in adolescent orthodontic treatment].

    PubMed

    Minghui, Peng; Jing, Kang; Xiao, Deng

    2017-10-01

    This study was designed to probe the psychological factors in adolescent orthodontic patients, the role of body image and self-esteem in the whole process of orthodontic treatment, and their impact on the efficacy of and satisfaction with orthodontic treatment. Five hundred and twenty-eight patients were selected for this study. The Aesthetic Component of the Index of Orthodontic Treatment Need (IOTN-AC), the Rosenberg Self-Esteem Scale (SES), the Negative Physical Self-General (NPS-G) scale and other body image scales were administered over the course of orthodontic treatment lasting 18-24 months to investigate the role of body image in adolescent orthodontic treatment. The esthetic evaluation of patients' teeth after correction was significantly improved, patients' self-evaluations on the IOTN-AC differed from the doctors' evaluations, and the Psychosocial Impact of Dental Aesthetics Questionnaire scores for dental confidence, aesthetic concern, psychological impact and social function were significantly improved. The improvement of the dental aesthetics component (doctor-evaluated IOTN-AC at T2) was positively correlated with the evaluation of the efficacy, and was significantly negatively correlated with the negative emotions of patients at baseline. The cognitive and affective components of negative body image (dental dissatisfaction), the overall negative body image, and negative emotions can predict patient satisfaction with treatment efficacy. Orthodontic treatment not only improves the self-aesthetic evaluation of adolescent patients, but also has a positive effect on their mental health.

  8. Quantitative imaging biomarkers: the application of advanced image processing and analysis to clinical and preclinical decision making.

    PubMed

    Prescott, Jeffrey William

    2013-02-01

    The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharmaceutical and medical device development. There is also a drive towards quantification of imaging findings by using quantitative imaging biomarkers, which can improve sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, which is a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.

  9. High data volume and transfer rate techniques used at NASA's image processing facility

    NASA Technical Reports Server (NTRS)

    Heffner, P.; Connell, E.; Mccaleb, F.

    1978-01-01

    Data storage and transfer operations at a new image processing facility are described. The equipment includes high density digital magnetic tape drives and specially designed controllers to provide an interface between the tape drives and computerized image processing systems. The controller performs the functions necessary to convert the continuous serial data stream from the tape drive to a word-parallel blocked data stream which then goes to the computer-based system. With regard to the tape packing density, 1.8 × 10^10 data bits are stored on a reel of one-inch tape. System components and their operation are surveyed, and studies on advanced storage techniques are summarized.

  10. Evaluation of skin melanoma in spectral range 450-950 nm using principal component analysis

    NASA Astrophysics Data System (ADS)

    Jakovels, D.; Lihacova, I.; Kuzmina, I.; Spigulis, J.

    2013-06-01

    Diagnostic potential of principal component analysis (PCA) of multi-spectral imaging data in the wavelength range 450- 950 nm for distant skin melanoma recognition is discussed. Processing of the measured clinical data by means of PCA resulted in clear separation between malignant melanomas and pigmented nevi.

  11. Color image encryption by using Yang-Gu mixture amplitude-phase retrieval algorithm in gyrator transform domain and two-dimensional Sine logistic modulation map

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Liu, Benqing; Wang, Qiang; Li, Ye; Liang, Junli

    2015-12-01

    A color image encryption scheme is proposed based on the Yang-Gu mixture amplitude-phase retrieval algorithm and a two-coupled logistic map in the gyrator transform domain. First, the color plaintext image is decomposed into red, green and blue components, which are scrambled individually by three random sequences generated using the two-dimensional Sine logistic modulation map. Second, each scrambled component is encrypted into a real-valued function with a stationary white noise distribution in the iterative amplitude-phase retrieval process in the gyrator transform domain, and the three obtained functions are then taken as the red, green and blue channels of the color ciphertext image. The ciphertext image is thus a real-valued function and more convenient to store and transmit. In the encryption and decryption processes, a chaotic random phase mask generated from the logistic map is employed as the phase key, which means that only the initial values are used as the private key, giving the cryptosystem highly convenient key management. Meanwhile, the security of the cryptosystem is greatly enhanced by the high sensitivity of the private keys. Simulation results are presented to prove the security and robustness of the proposed scheme.
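
    The scrambling principle (a key-driven chaotic sequence defining a pixel permutation) can be sketched as follows in Python; for simplicity a 1-D logistic map stands in for the paper's 2-D Sine logistic modulation map, so this is an illustration of the idea rather than the proposed scheme.

      # Sketch: chaos-driven pixel scrambling of one color channel.
      import numpy as np

      def logistic_sequence(x0, mu, n):
          xs = np.empty(n)
          for i in range(n):
              x0 = mu * x0 * (1.0 - x0)   # logistic map iteration
              xs[i] = x0
          return xs

      def scramble(channel, key=0.3456, mu=3.99):
          order = np.argsort(logistic_sequence(key, mu, channel.size))
          return channel.ravel()[order].reshape(channel.shape), order

      def unscramble(scrambled, order):
          flat = np.empty(scrambled.size, dtype=scrambled.dtype)
          flat[order] = scrambled.ravel()
          return flat.reshape(scrambled.shape)

      red = np.random.randint(0, 256, (64, 64))
      enc, order = scramble(red)
      assert np.array_equal(unscramble(enc, order), red)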

  12. Infrared fix pattern noise reduction method based on Shearlet Transform

    NASA Astrophysics Data System (ADS)

    Rong, Shenghui; Zhou, Huixin; Zhao, Dong; Cheng, Kuanhong; Qian, Kun; Qin, Hanlin

    2018-06-01

    Non-uniformity correction (NUC) is an effective way to reduce fixed pattern noise (FPN) and improve infrared image quality. The temporal high-pass NUC method is a practical NUC method because of its simple implementation. However, traditional temporal high-pass NUC methods rely deeply on scene motion and suffer from image ghosting and blurring. Thus, this paper proposes an improved NUC method based on the Shearlet Transform (ST). First, the raw infrared image is decomposed into multiscale and multi-orientation subbands by the ST, with the FPN component concentrated in certain high-frequency subbands. Then, the high-frequency subbands are processed by a temporal filter to extract the FPN, exploiting its low-frequency temporal characteristics. Besides, each subband has a confidence parameter that determines the degree of FPN, estimated adaptively from the variance of the subband. At last, the NUC is achieved by subtracting the estimated FPN component from the original subbands, and the corrected infrared image is obtained by the inverse ST. The performance of the proposed method is evaluated thoroughly with real and synthetic infrared image sequences. Experimental results indicate that the proposed method can heavily reduce FPN with less roughness and RMSE.
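
    A stripped-down Python sketch of the temporal high-pass idea on a single subband is given below; the Shearlet decomposition and the adaptive confidence weighting are omitted, and the recursive low-pass estimator is one common choice rather than the paper's exact filter.

      # Sketch: temporal high-pass non-uniformity correction on one subband.
      # A recursive low-pass filter estimates the static FPN, which is then
      # subtracted; the estimate converges only when the scene is moving.
      import numpy as np

      def temporal_highpass_nuc(frames, alpha=0.05):
          """frames: iterable of 2-D arrays (one subband over time)."""
          fpn = None
          corrected = []
          for f in frames:
              f = f.astype(float)
              fpn = f.copy() if fpn is None else (1 - alpha) * fpn + alpha * f
              corrected.append(f - fpn)   # remove slowly varying FPN
          return corrected

      video = [np.random.rand(64, 64) + 0.5 for _ in range(100)]  # synthetic
      clean = temporal_highpass_nuc(video)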

  13. Determination of Hydrodynamic Parameters on Two-Phase Flow Gas - Liquid in Pipes with Different Inclination Angles Using Image Processing Algorithm

    NASA Astrophysics Data System (ADS)

    Montoya, Gustavo; Valecillos, María; Romero, Carlos; Gonzáles, Dosinda

    2009-11-01

    In the present research, a digital image processing-based automated algorithm was developed in order to determine the phase height, holdup, and statistical distribution of the drop size in a two-phase water-air system using pipes with 0°, 10°, and 90° of inclination. Digital images were acquired with a high speed camera (up to 4500 fps), using an apparatus that consists of a system of three acrylic pipes with diameters of 1.905, 3.175, and 4.445 cm. Each pipe is arranged in two sections of 8 m of length. Various flow patterns were visualized for different superficial velocities of water and air. Finally, using the image processing program designed in Matlab/Simulink, the captured images were processed to establish the parameters previously mentioned. The image processing algorithm is based on frequency-domain analysis of the source pictures, which allows finding the phase interface as the edge between the water and air, through a Sobel filter that extracts the high-frequency components of the image. The drop size was found using the calculation of the Feret diameter. Three flow patterns were observed: annular, ST, and ST&MI.
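
    The edge-based interface location can be sketched in Python (with SciPy) as below; the synthetic frame and the per-column argmax rule are illustrative assumptions rather than the study's implementation.

      # Sketch: locating the gas-liquid interface per image column with a
      # Sobel high-pass filter, as in the edge-extraction step described.
      import numpy as np
      from scipy import ndimage

      img = np.zeros((100, 200))
      img[60:, :] = 1.0                    # synthetic frame: liquid below row 60
      img += 0.05 * np.random.randn(*img.shape)

      edges = np.abs(ndimage.sobel(img, axis=0))  # vertical intensity gradient
      interface_rows = edges.argmax(axis=0)       # strongest edge per column
      print(interface_rows.mean())                # ~60: recovered phase height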

  14. JIP: Java image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.

  15. In-TFT-array-process micro defect inspection using nonlinear principal component analysis.

    PubMed

    Liu, Yi-Hung; Wang, Chi-Kai; Ting, Yung; Lin, Wei-Zhi; Kang, Zhi-Hao; Chen, Ching-Shun; Hwang, Jih-Shang

    2009-11-20

    Defect inspection plays a critical role in thin film transistor liquid crystal display (TFT-LCD) manufacture, and has received much attention in the field of automatic optical inspection (AOI). Previously, most focus was put on the problems of macro-scale Mura-defect detection in the cell process, but it has recently been found that the defects which substantially influence the yield rate of LCD panels are actually those in the TFT array process, which is the first process in TFT-LCD manufacturing. Defect inspection in the TFT array process is therefore considered a difficult task. This paper presents a novel inspection scheme based on the kernel principal component analysis (KPCA) algorithm, which is a nonlinear version of the well-known PCA algorithm. The inspection scheme can not only detect the defects from the images captured from the surface of LCD panels, but also recognize the types of the detected defects automatically. Results, based on real images provided by an LCD manufacturer in Taiwan, indicate that the KPCA-based defect inspection scheme is able to achieve a defect detection rate of over 99% and a high defect classification rate of over 96% when the imbalanced support vector machine (ISVM) with 2-norm soft margin is employed as the classifier. More importantly, the inspection time is less than 1 s per input image.

  16. Excitation-resolved multispectral method for imaging pharmacokinetic parameters in dynamic fluorescent molecular tomography

    NASA Astrophysics Data System (ADS)

    Chen, Maomao; Zhou, Yuan; Su, Han; Zhang, Dong; Luo, Jianwen

    2017-04-01

    Imaging of the pharmacokinetic parameters in dynamic fluorescence molecular tomography (DFMT) can provide three-dimensional metabolic information for biological studies and drug development. However, owing to the ill-posed nature of the FMT inverse problem, the relatively low quality of the parametric images makes it difficult to investigate the different metabolic processes of fluorescent targets separated by small distances. An excitation-resolved multispectral DFMT method is proposed; it is based on the fact that fluorescent targets with different concentrations show different variations in the excitation spectral domain and can be considered independent signal sources. With an independent component analysis method, the spatial locations of different fluorescent targets can be decomposed, and the fluorescent yields of the targets at different time points can be recovered. Therefore, the metabolic process of each component can be independently investigated. Simulations and phantom experiments are carried out to evaluate the performance of the proposed method. The results demonstrate that the proposed excitation-resolved multispectral method can effectively improve the reconstruction accuracy of the parametric images in DFMT.
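
    A toy Python sketch of the ICA separation step follows (using scikit-learn's FastICA); the mixing matrix and time courses are synthetic stand-ins for the excitation-resolved measurements, not the paper's data.

      # Sketch: separating the time courses of two fluorescent targets with
      # independent component analysis on mixed (multi-excitation) signals.
      import numpy as np
      from sklearn.decomposition import FastICA

      t = np.linspace(0, 10, 400)
      s1 = np.exp(-0.3 * t)                 # fast washout target
      s2 = t * np.exp(-0.8 * t)             # uptake-then-washout target
      S = np.c_[s1, s2]

      A = np.array([[1.0, 0.4],
                    [0.6, 1.0],
                    [0.3, 0.8]])            # mixing over 3 excitations
      X = S @ A.T                           # observed mixed measurements

      ica = FastICA(n_components=2, random_state=0)
      recovered = ica.fit_transform(X)      # estimated independent sources
      print(recovered.shape)                # (400, 2)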

  17. Three-Component Decomposition of Polarimetric SAR Data Integrating Eigen-Decomposition Results

    NASA Astrophysics Data System (ADS)

    Lu, Da; He, Zhihua; Zhang, Huan

    2018-01-01

    This paper presents a novel three-component scattering power decomposition of polarimetric SAR data. There are two problems in the three-component decomposition method: overestimation of the volume scattering component in urban areas, and a model parameter that is artificially set to a fixed value. Although volume scattering overestimation can be partly solved by a deorientation process, volume scattering still dominates some oriented urban areas. The speckle-like decomposition results introduced by the artificially set value are not conducive to further image interpretation. This paper integrates the results of eigen-decomposition to solve the aforementioned problems. Two principal eigenvectors are used to substitute for the surface scattering model and the double-bounce scattering model. The decomposed scattering powers are obtained using a constrained linear least-squares method. The proposed method has been verified using an ESAR PolSAR image, and the results show that it has better performance in urban areas.
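
    The constrained linear least-squares step can be sketched with SciPy's bounded solver, as below; the model matrix is illustrative rather than the paper's eigenvector-derived scattering models.

      # Sketch: solving for three scattering powers under nonnegativity
      # constraints with bounded linear least squares.
      import numpy as np
      from scipy.optimize import lsq_linear

      # Columns: surface, double-bounce, volume scattering signatures
      # (placeholder values, not eigenvector-derived models).
      M = np.array([[0.90, 0.10, 0.33],
                    [0.05, 0.80, 0.33],
                    [0.05, 0.10, 0.34]])
      obs = np.array([0.5, 0.3, 0.2])   # observed polarimetric powers

      res = lsq_linear(M, obs, bounds=(0, np.inf))  # powers must be >= 0
      print(res.x)                                  # decomposed powers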

  18. An optical processor for object recognition and tracking

    NASA Technical Reports Server (NTRS)

    Sloan, J.; Udomkesmalee, S.

    1987-01-01

    The design and development of a miniaturized optical processor that performs real time image correlation are described. The optical correlator utilizes the Vander Lugt matched spatial filter technique. The correlation output, a focused beam of light, is imaged onto a CMOS photodetector array. In addition to performing target recognition, the device also tracks the target. The hardware, composed of optical and electro-optical components, occupies only 590 cu cm of volume. A complete correlator system would also include an input imaging lens. This optical processing system is compact, rugged, requires only 3.5 watts of operating power, and weighs less than 3 kg. It represents a major achievement in miniaturizing optical processors. When considered as a special-purpose processing unit, it is an attractive alternative to conventional digital image recognition processing. It is conceivable that the combined technology of both optical and digital processing could result in a very advanced robot vision system.

  19. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images and to synthesize images of the model virtually viewed from different angles, with natural shadows that suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from the front and side views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by a TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined using the face direction, namely the rotation angle, which is detected based on the extracted feature points. After the 3D model is established, new images are synthesized by mapping facial texture onto the model.

  20. LANDSAT-4 image data quality analysis for energy related applications. [nuclear power plant sites

    NASA Technical Reports Server (NTRS)

    Wukelic, G. E. (Principal Investigator)

    1983-01-01

    No usable LANDSAT 4 TM data were obtained for the Hanford site in the Columbia Plateau region, but TM simulator data for a Virginia Electric Company nuclear power plant were used to test image processing algorithms. Principal component analyses of this data set clearly indicated that thermal plumes in surface waters used for reactor cooling would be discernible. Image processing and analysis programs were successfully tested using the 7-band Arkansas test scene, and preliminary analysis of TM data for the Savannah River Plant shows that current interactive image enhancement, analysis, and integration techniques can be effectively used for LANDSAT 4 data. Thermal band data appear adequate for gross estimates of thermal changes occurring near operating nuclear facilities, especially in surface water bodies being used for reactor cooling. Additional image processing software was written and tested which provides for more rapid and effective analysis of the 7-band TM data.

  1. ID card number detection algorithm based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhu, Jian; Ma, Hanjie; Feng, Jie; Dai, Leiyan

    2018-04-01

    In this paper, a new detection algorithm based on a convolutional neural network is presented to realize fast and convenient ID information extraction in multiple scenarios. The algorithm uses a mobile device running the Android operating system to locate and extract the ID number: the distinctive color distribution of the ID card is used to select an appropriate channel component; image thresholding, noise processing, and morphological processing are applied to binarize the image; image rotation and projection are used for horizontal correction when the image is tilted; finally, single characters are extracted by the projection method and recognized by the convolutional neural network. Tests show that a single ID number image takes about 80 ms from extraction to identification, with an accuracy of about 99%, so the method can be applied in real production and living environments.
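
    The thresholding and projection steps are straightforward to prototype. Below is a hedged sketch of the column-projection character segmentation using OpenCV and NumPy; the CNN recognizer and the Android capture pipeline are out of scope, and "id_strip.png" is a hypothetical pre-cropped image of the number region.

```python
import cv2
import numpy as np

gray = cv2.imread("id_strip.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((2, 2), np.uint8))

# Column-wise projection: zero-sum columns are the gaps between characters.
profile = binary.sum(axis=0)
boxes, in_char, start = [], False, 0
for x, v in enumerate(profile):
    if v > 0 and not in_char:
        in_char, start = True, x
    elif v == 0 and in_char:
        in_char = False
        boxes.append((start, x))
if in_char:
    boxes.append((start, binary.shape[1]))

digits = [binary[:, s:e] for s, e in boxes]   # each crop goes to the CNN
```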

  2. Model-based error diffusion for high fidelity lenticular screening.

    PubMed

    Lau, Daniel; Smith, Trebor

    2006-04-17

    Digital halftoning is the process of converting a continuous-tone image into an arrangement of black and white dots for binary display devices such as digital ink-jet and electrophotographic printers. As printers achieve print resolutions exceeding 1,200 dots per inch, it is becoming increasingly important for halftoning algorithms to consider the variations and interactions in the size and shape of printed dots between neighboring pixels. In the case of lenticular screening, where statistically independent images are spatially multiplexed together, ignoring these variations and interactions, such as dot overlap, will result in poor lenticular image quality. To this end, we describe our use of model-based error diffusion for the lenticular screening problem, in which statistical independence between component images is achieved by restricting the diffusion of error to only those pixels of the same component image; to avoid instabilities, the proposed approach involves a novel error-clipping procedure.
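
    The restriction of error flow can be made concrete with a small example. The following is a minimal sketch, assuming K component images interleaved by column, with Floyd-Steinberg-style weights redirected to the nearest same-component neighbors; it illustrates the restricted-diffusion idea only, not the authors' printed-dot model or error-clipping procedure.

```python
import numpy as np

def lenticular_diffuse(img, k=3):
    """Binarize a grayscale image in [0, 1]; pixel (y, x) belongs to
    component x % k, and quantization error stays within that component."""
    work = img.astype(float).copy()
    out = np.zeros_like(work)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if work[y, x] >= 0.5 else 0.0
            err = work[y, x] - out[y, x]
            # Nearest same-component neighbors (k columns apart) receive the
            # classic 7/16, 3/16, 5/16, 1/16 weights.
            if x + k < w:
                work[y, x + k] += err * 7 / 16
            if y + 1 < h:
                if x - k >= 0:
                    work[y + 1, x - k] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + k < w:
                    work[y + 1, x + k] += err * 1 / 16
    return out
```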

  3. Segmenting texts from outdoor images taken by mobile phones using color features

    NASA Astrophysics Data System (ADS)

    Liu, Zongyi; Zhou, Hanning

    2011-01-01

    Recognizing text in low-resolution images taken by mobile phones has wide applications. It has been shown that good image binarization can substantially improve the performance of OCR engines. In this paper, we present a framework to segment text from outdoor images taken by mobile phones using color features. The framework consists of three steps: (i) initial processing, including image enhancement, binarization, and noise filtering, where we binarize the input images in each RGB channel and apply component-level noise filtering; (ii) grouping components into blocks using color features, where we compute component similarities by dynamically adjusting the weights of the RGB channels and merge groups hierarchically; and (iii) block selection, where we use run-length features and choose the Support Vector Machine (SVM) as the classifier. We tested the algorithm using 13 outdoor images taken by an old-style LG-64693 mobile phone with 640x480 resolution. We compared the segmentation results with Tsar's algorithm, a state-of-the-art camera text detection algorithm, and show that our algorithm is more robust, particularly in terms of false alarm rates. In addition, we evaluated the impact of our algorithm on ABBYY's FineReader, one of the most popular commercial OCR engines on the market.

  4. Reverse-time migration for subsurface imaging using single- and multi- frequency components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Kim, Y.; Kim, S.; Chung, W.; Shin, S.; Lee, D.

    2017-12-01

    Reverse-time migration is a seismic data processing method for obtaining accurate subsurface structure images from seismic data. This method has been applied to obtain more precise information on complex geological structures, including steep dips, by considering wave propagation characteristics based on two-way traveltime. Recently, various studies have reported the characteristics of datasets acquired from different types of media. In particular, because real subsurface media are composed of various types of structures, seismic data exhibit a variety of responses. Among these, frequency characteristics can be used as an important indicator for analyzing wave propagation in subsurface structures. All frequency components are utilized in conventional reverse-time migration, but analyzing each component separately is worthwhile because each contains inherent seismic response characteristics. In this study, we propose a reverse-time migration method that utilizes single- and multi-frequency components for subsurface imaging analysis. We performed a spectral decomposition to exploit the characteristics of non-stationary seismic data. We propose two types of imaging conditions, in which the decomposed signals are applied to complex and envelope traces. The SEG/EAGE Overthrust model was used to demonstrate the proposed method, and the first-derivative Gaussian function with a 10 Hz cutoff was used as the source signature. The results were more accurate and stable when relatively lower frequency components in the effective frequency range were used. By combining the gradients obtained from various frequency components, we confirmed that the results are clearer than those of the conventional method using all frequency components. Further study is required to combine the multi-frequency components effectively.
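
    The complex- and envelope-trace construction can be illustrated in a few lines. This is a hedged sketch using the Hilbert transform from SciPy; the migration operators and the imaging conditions themselves are beyond its scope.

```python
import numpy as np
from scipy.signal import hilbert

def complex_and_envelope(trace):
    """Return the analytic (complex) trace and its envelope for one seismic
    trace; these are the quantities the two proposed imaging conditions
    operate on."""
    analytic = hilbert(trace)          # trace + i * (Hilbert transform)
    return analytic, np.abs(analytic)  # complex trace, envelope trace
```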

  5. A TV Camera System Which Extracts Feature Points For Non-Contact Eye Movement Detection

    NASA Astrophysics Data System (ADS)

    Tomono, Akira; Iida, Muneo; Kobayashi, Yukio

    1990-04-01

    This paper proposes a highly efficient camera system which extracts, irrespective of background, feature points such as the pupil, the corneal reflection image, and dot-marks pasted on a human face in order to detect human eye movement by image processing. Two eye movement detection methods are suggested: one utilizing face orientation as well as pupil position, the other utilizing pupil and corneal reflection images. A method of extracting these feature points using LEDs as illumination devices and a new TV camera system designed to record eye movement are proposed. Two kinds of infra-red LEDs are used. These LEDs are set up a short distance apart and emit polarized light of different wavelengths. One light source beams from near the optical axis of the lens and the other is some distance from the optical axis. The LEDs are operated in synchronization with the camera. The camera includes 3 CCD image pick-up sensors and a prism system with 2 boundary layers. Incident rays are separated into 2 wavelengths by the first boundary layer of the prism. One set of rays forms an image on CCD-3. The other set is split by the half-mirror layer of the prism and forms an image including the regularly reflected component, by placing a polarizing filter in front of CCD-1, or an image not including that component, with no polarizing filter in front of CCD-2. Thus, three images with different reflection characteristics are obtained by the three CCDs. Experiments show that two kinds of subtraction operations between the three images output from the CCDs accentuate three kinds of feature points: the pupil and corneal reflection images and the dot-marks. Since the S/N ratio of the subtracted image is extremely high, the thresholding process is simple and allows reducing the intensity of the infra-red illumination. A high-speed image processing apparatus using this camera system is described. Real-time processing of the subtraction, thresholding, and gravity position calculation of the feature points is possible.
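
    A hedged sketch of the subtraction-threshold-centroid chain, assuming two of the co-registered CCD images are available as NumPy arrays; the optical arrangement above is what makes the difference image nearly background-free.

```python
import numpy as np

def feature_point(img_a, img_b, thresh=30):
    """Subtract two co-registered images, threshold, and return the
    centroid ("gravity position") of the accentuated feature region."""
    diff = np.abs(img_a.astype(int) - img_b.astype(int))
    mask = diff > thresh               # simple because the S/N is high
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```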

  6. Volumetric segmentation of range images for printed circuit board inspection

    NASA Astrophysics Data System (ADS)

    Van Dop, Erik R.; Regtien, Paul P. L.

    1996-10-01

    Conventional computer vision approaches to object recognition and pose estimation employ 2D grey-value or color imaging. As a consequence, these images contain information only about projections of a 3D scene. The subsequent image processing is then difficult, because object coordinates are represented with image coordinates alone. Only complicated low-level vision modules like depth from stereo or depth from shading can recover some of the surface geometry of the scene. Recent advances in fast range imaging have, however, paved the way towards 3D computer vision, since range data of the scene can now be obtained with sufficient accuracy and speed for object recognition and pose estimation purposes. This article proposes the coded-light range-imaging method together with superquadric segmentation to approach this task. Superquadric segments are volumetric primitives that describe global object properties with 5 parameters, which provide the main features for object recognition. Besides, the principal axes of a superquadric segment determine the pose of an object in the scene. The volumetric segmentation of a range image can be used to detect missing, false, or badly placed components on assembled printed circuit boards. Furthermore, this approach will be useful for recognizing and extracting valuable or toxic electronic components from printed circuit board scrap that currently burdens the environment during electronic waste processing. Results on synthetic range images, with errors constructed according to a verified noise model, illustrate the capabilities of this approach.

  7. TheHiveDB image data management and analysis framework.

    PubMed

    Muehlboeck, J-Sebastian; Westman, Eric; Simmons, Andrew

    2014-01-06

    The hive database system (theHiveDB) is a web-based brain imaging database, collaboration, and activity system which has been designed as an imaging workflow management system capable of handling cross-sectional and longitudinal multi-center studies. It can be used to organize and integrate existing data from heterogeneous projects as well as data from ongoing studies. It has been conceived to guide and assist the researcher throughout the entire research process, integrating all relevant types of data across modalities (e.g., brain imaging, clinical, and genetic data). TheHiveDB is a modern activity and resource management system capable of scheduling image processing on both private compute resources and the cloud. The activity component supports common image archival and management tasks as well as established pipeline processing (e.g., Freesurfer for extraction of scalar measures from magnetic resonance images). Furthermore, via theHiveDB activity system algorithm developers may grant access to virtual machines hosting versioned releases of their tools to collaborators and the imaging community. The application of theHiveDB is illustrated with a brief use case based on organizing, processing, and analyzing data from the publicly available Alzheimer Disease Neuroimaging Initiative.

  8. TheHiveDB image data management and analysis framework

    PubMed Central

    Muehlboeck, J-Sebastian; Westman, Eric; Simmons, Andrew

    2014-01-01

    The hive database system (theHiveDB) is a web-based brain imaging database, collaboration, and activity system which has been designed as an imaging workflow management system capable of handling cross-sectional and longitudinal multi-center studies. It can be used to organize and integrate existing data from heterogeneous projects as well as data from ongoing studies. It has been conceived to guide and assist the researcher throughout the entire research process, integrating all relevant types of data across modalities (e.g., brain imaging, clinical, and genetic data). TheHiveDB is a modern activity and resource management system capable of scheduling image processing on both private compute resources and the cloud. The activity component supports common image archival and management tasks as well as established pipeline processing (e.g., Freesurfer for extraction of scalar measures from magnetic resonance images). Furthermore, via theHiveDB activity system algorithm developers may grant access to virtual machines hosting versioned releases of their tools to collaborators and the imaging community. The application of theHiveDB is illustrated with a brief use case based on organizing, processing, and analyzing data from the publicly available Alzheimer Disease Neuroimaging Initiative. PMID:24432000

  9. Spatial super-resolution of colored images by micro mirrors

    NASA Astrophysics Data System (ADS)

    Dahan, Daniel; Yaacobi, Ami; Pinsky, Ephraim; Zalevsky, Zeev

    2018-06-01

    In this paper, we present two methods of dealing with the geometric resolution limit of color imaging sensors. It is possible to overcome the pixel size limit by adding a digital micro-mirror device on the intermediate image plane of an optical system and adapting its pattern in a computerized manner before sampling each frame. The full RGB image can be reconstructed from the Bayer camera either through a dedicated optical design or by adjusting the demosaicing process to the special format of the enhanced image.

  10. Three-dimensional imaging technology offers promise in medicine.

    PubMed

    Karako, Kenji; Wu, Qiong; Gao, Jianjun

    2014-04-01

    Medical imaging plays an increasingly important role in the diagnosis and treatment of disease. Currently, most medical equipment uses two-dimensional (2D) imaging systems. Although this conventional imaging largely satisfies clinical requirements, it cannot depict pathologic changes in three dimensions. The development of three-dimensional (3D) imaging technology has encouraged advances in medical imaging. Three-dimensional imaging technology offers doctors much more information on a pathology than 2D imaging, thus significantly improving diagnostic capability and the quality of treatment. Moreover, the combination of 3D imaging with augmented reality significantly improves the surgical navigation process. The advantages of 3D imaging technology have made it an important component of technological progress in the field of medical imaging.

  11. The Dark Energy Survey Image Processing Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morganson, E.; et al.

    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.

  12. Advanced image based methods for structural integrity monitoring: Review and prospects

    NASA Astrophysics Data System (ADS)

    Farahani, Behzad V.; Sousa, Pedro José; Barros, Francisco; Tavares, Paulo J.; Moreira, Pedro M. G. P.

    2018-02-01

    There is a growing trend in engineering to develop methods for structural integrity monitoring and for characterizing the in-service mechanical behaviour of components. The fast growth in recent years of image processing techniques and image-based sensing for experimental mechanics has brought about a paradigm change in how phenomena are sensed. Hence, several widely applicable optical approaches are playing a significant role in support of experimentation. The current review describes advanced image-based methods for structural integrity monitoring, focusing on Digital Image Correlation (DIC), Thermoelastic Stress Analysis (TSA), Electronic Speckle Pattern Interferometry (ESPI), and Speckle Pattern Shearing Interferometry (Shearography). These non-contact, full-field techniques rely on intensive image processing methods to measure mechanical behaviour, and they evolve even as reviews such as this are being written, which justifies a special effort to keep abreast of this progress.

  13. Emotional salience of the image component facilitates recall of the text of cigarette warning labels.

    PubMed

    Wang, An-Li; Shi, Zhenhao; Fairchild, Victoria P; Aronowitz, Catherine A; Langleben, Daniel D

    2018-04-26

    Graphic warning labels (GWLs) on cigarette packages, which combine textual warnings with emotionally salient images depicting the adverse health consequences of smoking, have been adopted in most European countries. In the US, the courts deemed the evidence justifying the inclusion of emotionally salient images in GWLs insufficient and put the implementation on hold. We conducted a controlled experimental study examining the effect of the emotional salience of GWL images on the recall of their text component. Seventy-three non-treatment-seeking daily smokers received cigarette packs carrying GWLs for a period of 4 weeks. Participants were randomly assigned to receive packs with GWLs previously rated as eliciting a high or low level of emotional reaction (ER). The two conditions differed with respect to images but used the same textual warning statements. Participants' recognition of GWL images and statements was tested separately at baseline and again after the 4-week repetitive exposure. Textual warning statements were recognized more accurately when paired with high-ER images than when paired with low-ER images, both at baseline and after daily exposure to GWLs over the 4-week period. The results suggest that the emotional salience of GWLs facilitates cognitive processing of the textual warnings, resulting in better retention of the information about the health hazards of smoking. Thus, high emotional salience of the pictorial component of GWLs is essential for their overall effectiveness.

  14. Spectral imaging applications: Remote sensing, environmental monitoring, medicine, military operations, factory automation and manufacturing

    NASA Technical Reports Server (NTRS)

    Gat, N.; Subramanian, S.; Barhen, J.; Toomarian, N.

    1996-01-01

    This paper reviews the activities at OKSI related to imaging spectroscopy, presenting current and future applications of the technology. The authors discuss the development of several systems, including hardware, signal processing, data classification algorithms, and benchmarking techniques to determine algorithm performance. Signal processing for each application is tailored by incorporating the phenomenology appropriate to the process into the algorithms. Pixel signatures are classified using techniques such as principal component analysis, generalized eigenvalue analysis, and novel very fast neural network methods. The major hyperspectral imaging systems developed at OKSI include the Intelligent Missile Seeker (IMS) demonstration project for real-time target/decoy discrimination, and the Thermal InfraRed Imaging Spectrometer (TIRIS) for detection and tracking of toxic plumes and gases. In addition, systems for applications in medical photodiagnosis, manufacturing technology, and crop monitoring are also under development.

  15. The Neurobiological Basis of Reading.

    ERIC Educational Resources Information Center

    Joseph, Jane; Noble, Kimberly; Eden, Guinevere

    2001-01-01

    This paper reviews studies using positron emission tomography and functional magnetic resonance imaging in adults to study the reading process and notes that general networks of regions seem to be uniquely associated with different components of the reading process. Findings are evaluated in light of technical and experimental limitations and…

  16. Polarization Imaging Apparatus

    NASA Technical Reports Server (NTRS)

    Zou, Yingyin K.; Chen, Qiushui

    2010-01-01

    A polarization imaging apparatus has shown promise as a prototype of instruments for medical imaging with contrast greater than that achievable by use of non-polarized light. The underlying principles of design and operation are derived from observations that light interacts with tissue ultrastructures that affect reflectance, scattering, absorption, and polarization of light. The apparatus utilizes high-speed electro-optical components for generating light properties and acquiring polarization images through aligned polarizers. These components include phase retarders made of OptoCeramic® material - a ceramic that has a high electro-optical coefficient. The apparatus includes a computer running a program that implements a novel algorithm for controlling the phase retarders, capturing image data, and computing the Stokes polarization images. Potential applications include imaging of superficial cancers and other skin lesions, early detection of diseased cells, and microscopic analysis of tissues. The high imaging speed of this apparatus could be beneficial for observing live cells or tissues, and could enable rapid identification of moving targets in astronomy and national defense. The apparatus could also be used as an analysis tool in material research and industrial processing.
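
    Stokes-parameter images can be formed from a handful of analyzer states. The following is a minimal sketch, assuming four registered intensity images taken through linear analyzers at 0°, 45°, and 90° plus a right-circular analyzer; the apparatus's electro-optic phase retarders and control algorithm are not modeled.

```python
import numpy as np

def stokes_images(i0, i45, i90, i_rcp):
    """Each input: a 2D intensity image behind the named analyzer.
    Returns the four Stokes-parameter images stacked along axis 0."""
    s0 = i0 + i90            # total intensity
    s1 = i0 - i90            # horizontal minus vertical linear
    s2 = 2.0 * i45 - s0      # +45 deg minus -45 deg linear
    s3 = 2.0 * i_rcp - s0    # right minus left circular
    return np.stack([s0, s1, s2, s3])
```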

  17. Computer-assisted image processing to detect spores from the fungus Pandora neoaphidis.

    PubMed

    Korsnes, Reinert; Westrum, Karin; Fløistad, Erling; Klingen, Ingeborg

    2016-01-01

    This contribution demonstrates an example of experimental automatic image analysis to detect spores prepared on microscope slides derived from trapping. The application is to monitor aerial spore counts of the entomopathogenic fungus Pandora neoaphidis, which may serve as a biological control agent for aphids. Automatic detection of such spores can therefore play a role in plant protection. The present approach for such detection is a modification of traditional manual microscopy of prepared slides, where autonomous image recording precedes computerised image analysis. The purpose of the present image analysis is to support human visual inspection of imagery data, not to replace it. The workflow has three components (a sketch of the blob-measurement step follows below):
    • Preparation of slides for microscopy.
    • Image recording.
    • Computerised image processing, where the initial part is, as usual, segmentation depending on the actual data product, followed by identification of blobs, calculation of the principal axes of blobs, symmetry operations, and projection onto a three-parameter egg-shape space.
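
    A minimal sketch of the blob-measurement step with scikit-image, assuming a binary segmentation mask is already available; the principal axes come from each blob's second-order image moments, which regionprops computes internally. The symmetry operations and egg-shape projection are specific to the paper and not shown.

```python
import numpy as np
from skimage.measure import label, regionprops

def blob_measurements(binary_mask):
    """Label connected blobs and return per-blob shape descriptors."""
    measurements = []
    for region in regionprops(label(binary_mask)):
        measurements.append({
            "centroid": region.centroid,
            "major_axis": region.major_axis_length,  # principal axes derived
            "minor_axis": region.minor_axis_length,  # from central moments
            "orientation": region.orientation,       # angle of the major axis
        })
    return measurements
```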

  18. Enhancement of chest radiographs using eigenimage processing

    NASA Astrophysics Data System (ADS)

    Bones, Philip J.; Butler, Anthony P. H.; Hurrell, Michael

    2006-08-01

    Frontal chest radiographs ("chest X-rays") are routinely used by medical personnel to assess patients for a wide range of suspected disorders. Often large numbers of images need to be analyzed. Furthermore, at times the images need to be analyzed ("reported") when no radiological expert is available. A system which enhances the images in such a way that abnormalities become more obvious is likely to reduce the chance that an abnormality goes unnoticed. The authors previously reported the use of principal components analysis to derive a basis set of eigenimages from a training set made up of images from normal subjects. The work is here extended to investigate how best to emphasize the abnormalities in chest radiographs. Results are also reported for various forms of image normalizing transformations used in performing the eigenimage processing.
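
    The eigenimage idea can be sketched in a few lines: project a radiograph onto a basis learned from normal subjects and inspect the residual, which emphasizes structure the normal basis cannot represent. This is a minimal illustration assuming registered, same-size images; the paper's normalizing transformations are omitted.

```python
import numpy as np

def abnormality_residual(normal_stack, test_img, n_components=20):
    """normal_stack: (n_images, h, w) array of registered normal radiographs."""
    X = normal_stack.reshape(len(normal_stack), -1).astype(float)
    mean = X.mean(axis=0)
    # Eigenimages are the leading right singular vectors of the centered set.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:n_components]
    x = test_img.ravel().astype(float) - mean
    projection = basis.T @ (basis @ x)     # best "normal" approximation
    return (x - projection).reshape(test_img.shape)
```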

  19. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    PubMed Central

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image while preserving brightness and details better than several other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, normalization is applied to the intensity levels, and the processed image is integrated with the input image. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm can not only enhance image information effectively but also preserve the brightness and details of the original image. PMID:29403529
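
    A hedged sketch of the segment-wise equalization, assuming the four segments are bounded by mean - std, mean, and mean + std of the luminance and that the final integration is a simple average; the paper's exact boundary rule, bin modification, and normalization steps are simplified.

```python
import numpy as np

def mvsihe_like(img):
    """img: 2D grayscale array. Equalize four luminance segments separately,
    then blend with the input to preserve overall brightness."""
    x = img.astype(float)
    m, s = x.mean(), x.std()
    bounds = [x.min(), max(x.min(), m - s), m, min(x.max(), m + s), x.max()]
    out = x.copy()
    for i, (lo, hi) in enumerate(zip(bounds[:-1], bounds[1:])):
        sel = (x >= lo) & ((x < hi) if i < 3 else (x <= hi))
        if sel.sum() < 2 or hi <= lo:
            continue
        ranks = np.argsort(np.argsort(x[sel])).astype(float)
        out[sel] = lo + (hi - lo) * ranks / ranks.max()  # per-segment CDF map
    return 0.5 * (out + x)
```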

  20. 32 CFR 286.30 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...

  1. 32 CFR 286.30 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...

  2. 32 CFR 286.30 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...

  3. 32 CFR 286.30 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...

  4. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], the Fourier transform [2], digital image correlation [3], and camera calibration [4], in which image processing is a critical and indispensable component. While it is not too difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets. At the time of writing, the package has been tested on Windows, Linux, and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, data fitting, 3D image processing, vector image processing, precision device control (rotary stages, PZT stages, etc.), point cloud to surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.

  5. The Pan-STARRS PS1 Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Magnier, E.

    The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It is also responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky, defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and occasionally uses the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system in operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel hardware configuration necessary to support PS1 operational requirements. Finally, results are presented of the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.

  6. PAT: From Western solid dosage forms to Chinese materia medica preparations using NIR-CI.

    PubMed

    Zhou, Luwei; Xu, Manfei; Wu, Zhisheng; Shi, Xinyuan; Qiao, Yanjiang

    2016-01-01

    Near-infrared chemical imaging (NIR-CI) is an emerging technology that combines traditional near-infrared spectroscopy with chemical imaging. NIR-CI can therefore extract spectral information from pharmaceutical products and simultaneously visualize the spatial distribution of chemical components. The rapid and non-destructive features of NIR-CI make it an attractive process analytical technology (PAT) for identifying and monitoring critical control parameters during the pharmaceutical manufacturing process. This review mainly focuses on the pharmaceutical applications of NIR-CI in each unit operation during the manufacturing processes, from Western solid dosage forms to Chinese materia medica preparations. Finally, future applications of chemical imaging in the pharmaceutical industry are discussed. Copyright © 2015 John Wiley & Sons, Ltd.

  7. LDRD final report on imaging self-organization of proteins in membranes by photocatalytic nano-tagging.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavadil, Kevin Robert; Shelnutt, John Allen; Sasaki, Darryl Yoshio

    We have developed a new nanotagging technology for detecting and imaging the self-organization of proteins and other components of membranes at nanometer resolution, for the purpose of investigating cell signaling and other membrane-mediated biological processes. We used protein-, lipid-, or drug-bound porphyrin photocatalysts to grow in-situ nanometer-sized metal particles, which reveal the location of the porphyrin-labeled molecules by electron microscopy. We initially used photocatalytic nanotagging to image assembled multi-component proteins and to monitor the distribution of lipids and porphyrin labels in liposomes. For example, by exchanging the heme molecules in hemoproteins with a photocatalytic tin porphyrin, a nanoparticle was grown at each heme site of the protein. The result obtained from electron microscopy for a tagged multi-subunit protein such as hemoglobin is a symmetric constellation of a specific number of nanoparticle tags, four in the case of the hemoglobin tetramer. Methods for covalently linking photocatalytic porphyrin labels to lipids and proteins were also developed to detect and image the self-organization of lipids, protein-protein supercomplexes, and membrane-protein complexes. Procedures for making photocatalytic porphyrin-drug, porphyrin-lipid, and porphyrin-protein hybrids for non-porphyrin-binding proteins and membrane components were pursued, and the first porphyrin-labeled lipids were investigated in liposomal membrane models. Our photocatalytic nanotagging technique may ultimately allow membrane self-organization and cell signaling processes to be imaged in living cells. Fluorescence and plasmonic spectra of the tagged proteins might also provide additional information about protein association and membrane organization. In addition, a porphyrin-aspirin or other NSAID hybrid may be used to grow metal nanotags for the pharmacologically important COX enzymes in membranes so that the distribution of the protein can be imaged at the nanometer scale.

  8. Solid state temperature-dependent NUC (non-uniformity correction) in uncooled LWIR (long-wave infrared) imaging system

    NASA Astrophysics Data System (ADS)

    Cao, Yanpeng; Tisse, Christel-Loic

    2013-06-01

    In uncooled LWIR microbolometer imaging systems, temperature fluctuations of the FPA (Focal Plane Array), as well as of lens and mechanical components placed along the optical path, result in thermal drift and spatial non-uniformity. These non-idealities generate undesirable FPN (Fixed-Pattern Noise) that is difficult to remove using traditional, individual shutterless and TEC-less (Thermo-Electric Cooling) techniques. In this paper we introduce a novel single-image based processing approach that marries the benefits of both statistical scene-based and calibration-based NUC algorithms, relying on neither an extra temperature reference nor accurate motion estimation, to compensate for the resulting temperature-dependent non-uniformities. Our method includes two subsequent image processing steps. Firstly, an empirical behavioral model is derived by calibration to characterize the spatio-temporal response of the microbolometric FPA to environmental and scene temperature fluctuations. Secondly, we experimentally establish that the FPN component caused by the optics creates a spatio-temporally continuous, low-frequency, low-magnitude variation of the image intensity. We propose to make use of this property and learn a prior on the spatial distribution of natural image gradients to infer the correction function for the entire image. The performance and robustness of the proposed temperature-adaptive NUC method are demonstrated with results obtained from a 640×512 pixel uncooled LWIR microbolometer imaging system operating over a broad range of temperatures and with rapid environmental temperature changes (i.e., from -5°C to 65°C within 10 minutes).

  9. Refinery evaluation of optical imaging to locate fugitive emissions.

    PubMed

    Robinson, Donald R; Luke-Boone, Ronke; Aggarwal, Vineet; Harris, Buzz; Anderson, Eric; Ranum, David; Kulp, Thomas J; Armstrong, Karla; Sommers, Ricky; McRae, Thomas G; Ritter, Karin; Siegell, Jeffrey H; Van Pelt, Doug; Smylie, Mike

    2007-07-01

    Fugitive emissions account for approximately 50% of total hydrocarbon emissions from process plants. Federal and state regulations aimed at controlling these emissions require refineries and petrochemical plants in the United States to implement a Leak Detection and Repair (LDAR) program. The current regulatory work practice, U.S. Environmental Protection Agency Method 21, requires designated components to be monitored individually at regular intervals. The annual costs of these LDAR programs in a typical refinery can exceed US$1,000,000. Previous studies have shown that a majority of controllable fugitive emissions come from a very small fraction of components. The Smart LDAR program aims to find cost-effective methods to monitor and reduce emissions from these large leakers. Optical gas imaging has been identified as one such technology that can help achieve this objective. This paper discusses a refinery evaluation of an instrument based on backscatter absorption gas imaging technology. This portable camera allows an operator to scan components more quickly and image gas leaks in real time. During the evaluation, the instrument was able to identify leaking components that were the source of 97% of the total mass emissions from the leaks detected. More than 27,000 components were monitored. This was achieved in far less time than it would have taken using Method 21. In addition, the instrument was able to find leaks from components that are not required to be monitored under current LDAR regulations. The technology principles and the parameters that affect instrument performance are also discussed in the paper.

  10. High Resolution X-Ray Micro-CT of Ultra-Thin Wall Space Components

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Rauser, R. W.; Bowman, Randy R.; Bonacuse, Peter; Martin, Richard E.; Locci, I. E.; Kelley, M.

    2012-01-01

    A high resolution micro-CT system has been assembled and is being used to provide optimal characterization of ultra-thin wall space components. The Glenn Research Center NDE Sciences Team, using this CT system, has assumed the role of inspection vendor for the Advanced Stirling Convertor (ASC) project at NASA. This article discusses many aspects of the development of CT scanning for this type of component, including a CT system overview; inspection requirements; process development; the software utilized and developed to visualize, process, and analyze results; calibration sample development; results on actual samples; correlation with optical/SEM characterization; CT modeling; and the development of automatic flaw recognition software. Keywords: Nondestructive Evaluation, NDE, Computed Tomography, Imaging, X-ray, Metallic Components, Thin Wall Inspection

  11. Lung dynamic MRI deblurring using low-rank decomposition and dictionary learning.

    PubMed

    Gou, Shuiping; Wang, Yueyue; Wu, Jiaolong; Lee, Percy; Sheng, Ke

    2015-04-01

    Lung dynamic MRI (dMRI) has emerged as an appealing tool to quantify lung motion for both planning and treatment guidance purposes. However, this modality can result in blurry images due to the intrinsically low signal-to-noise ratio in the lung and to spatial/temporal interpolation. The image blurring can adversely affect image processing that depends on the availability of fine landmarks. The purpose of this study is to reduce dMRI blurring using image postprocessing. To enhance the image quality and exploit the spatiotemporal continuity of dMRI sequences, a low-rank decomposition and dictionary learning (LDDL) method was employed to deblur lung dMRI and enhance the conspicuity of lung blood vessels. Fifty continuous 2D coronal dMRI frames acquired with a steady-state free precession sequence were obtained from five subjects, including two healthy volunteers and three lung cancer patients. In LDDL, the lung dMRI was decomposed into sparse and low-rank components. Dictionary learning was employed to estimate the blurring kernel based on the whole image, the low-rank component, or the sparse component of the first image in the lung MRI sequence. Deblurring was performed on the whole image sequence using deconvolution based on the estimated blur kernel. The deblurring results were quantified using an automated blood vessel extraction method based on the classification of Hessian-matrix-filtered images, with manual segmentation of the blood vessels as the ground truth. In the pilot study, LDDL based on the blurring kernel estimated from the sparse component outperformed the other ways of kernel estimation. LDDL consistently improved image contrast and fine-feature conspicuity of the original MRI without introducing artifacts. The accuracy of automated blood vessel extraction was increased by 16% on average. Image blurring in dMRI images can thus be effectively reduced using a low-rank decomposition and dictionary learning method with kernels estimated from the sparse component.
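
    The sparse/low-rank split at the heart of LDDL can be approximated simply. Below is a minimal sketch using a truncated SVD with soft-thresholding of the residual, a simple stand-in for the robust decomposition; the dictionary-learning kernel estimation and deconvolution steps are not shown.

```python
import numpy as np

def lowrank_sparse(frames, rank=3, thresh=0.05):
    """frames: (n_frames, h, w) array with intensities in [0, 1].
    Returns (low_rank, sparse) stacks of the same shape."""
    n, h, w = frames.shape
    X = frames.reshape(n, h * w)
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank]   # slowly varying anatomy
    residual = X - low_rank
    sparse = np.sign(residual) * np.maximum(np.abs(residual) - thresh, 0.0)
    return low_rank.reshape(n, h, w), sparse.reshape(n, h, w)
```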

  12. Quantification and Statistical Analysis Methods for Vessel Wall Components from Stained Images with Masson's Trichrome

    PubMed Central

    Hernández-Morera, Pablo; Castaño-González, Irene; Travieso-González, Carlos M.; Mompeó-Corredera, Blanca; Ortega-Santana, Francisco

    2016-01-01

    Purpose: To develop a digital image processing method to quantify structural components (smooth muscle fibers and extracellular matrix) in the vessel wall stained with Masson's trichrome, and a statistical method suitable for small sample sizes to analyze the results previously obtained. Methods: The quantification method comprises two stages. The pre-processing stage improves tissue image appearance and the vessel wall area is delimited. In the feature extraction stage, the vessel wall components are segmented by grouping pixels with a similar color. The area of each component is calculated by normalizing the number of pixels of each group by the vessel wall area. Statistical analyses are implemented by permutation tests, based on resampling without replacement from the set of observed data to obtain a sampling distribution of an estimator. The implementation can be parallelized on a multicore machine to reduce execution time. Results: The methods have been tested on 48 vessel wall samples of the internal saphenous vein stained with Masson's trichrome. The results show that the segmented areas are consistent with the perception of a team of doctors and demonstrate good correlation between the expert judgments and the measured parameters for evaluating vessel wall changes. Conclusion: The proposed methodology offers a powerful tool to quantify some components of the vessel wall. It is more objective, sensitive and accurate than the biochemical and qualitative methods traditionally used. The permutation tests are suitable statistical techniques to analyze the numerical measurements obtained when the underlying assumptions of the other statistical techniques are not met. PMID:26761643
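
    A minimal sketch of the resampling scheme described above: a two-sample permutation test on the difference in means, with resampling without replacement. Group labels and sizes are illustrative.

```python
import numpy as np

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sided p-value for the difference in means under permutation."""
    rng = np.random.default_rng(seed)
    observed = group_a.mean() - group_b.mean()
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(pooled)        # resample w/o replacement
        diff = shuffled[:n_a].mean() - shuffled[n_a:].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return (hits + 1) / (n_perm + 1)
```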

  13. An open architecture for medical image workstation

    NASA Astrophysics Data System (ADS)

    Liang, Liang; Hu, Zhiqiang; Wang, Xiangyun

    2005-04-01

    To deal with the difficulties of integrating various medical image viewing and processing technologies with a variety of clinical and departmental information systems, while overcoming the performance constraints of transferring and processing large-scale and ever-increasing image data in the healthcare enterprise, we design and implement a flexible, usable, and high-performance architecture for medical image workstations. This architecture is developed not for radiology only, but for workstations in any application environment that may need medical image retrieving, viewing, and post-processing. The architecture contains an infrastructure named Memory PACS and different kinds of image applications built on it. The Memory PACS is in charge of image data caching, pre-fetching, and management. It provides image applications with high-speed image data access and very reliable DICOM network I/O. For the image applications, we use dynamic component technology to separate the performance-constrained modules from the flexibility-constrained modules, so that different image viewing or processing technologies can be developed and maintained independently. We also develop a weakly coupled collaboration service, through which these image applications can communicate with each other or with third-party applications. We applied this architecture in developing our product line, and it works well. In our clinical sites, this architecture is applied not only in the Radiology Department, but also in Ultrasonic, Surgery, Clinics, and the Consultation Center. Given that each department has its particular requirements and business routines, and that they all have different image processing technologies and image display devices, our workstations are still able to maintain high performance and high usability.

  14. Hyperspectral Image Denoising Using a Nonlocal Spectral Spatial Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Li, D.; Xu, L.; Peng, J.; Ma, J.

    2018-04-01

    Hyperspectral image (HSI) denoising is a critical research area in image processing due to its importance in improving the quality of HSIs; noise has a negative impact on object detection, classification, and other tasks. In this paper, we develop a noise reduction method based on principal component analysis (PCA) for hyperspectral imagery, which depends on the assumption that the noise can be removed by selecting the leading principal components. The main contribution of the paper is to introduce the spectral-spatial structure and nonlocal similarity of the HSIs into the PCA denoising model. PCA with spectral-spatial structure can exploit the spectral and spatial correlation of an HSI by using 3D blocks instead of 2D patches. Nonlocal similarity refers to the similarity between the reference pixel and other pixels in a nonlocal area, where the Mahalanobis distance is used to estimate the spatial-spectral similarity by calculating the distance between 3D blocks. The proposed method is tested on both simulated and real hyperspectral images, and the results demonstrate that it is superior to several other popular methods in HSI denoising.

  15. Telerobotic Surgery: An Intelligent Systems Approach to Mitigate the Adverse Effects of Communication Delay. Chapter 4

    NASA Technical Reports Server (NTRS)

    Cardullo, Frank M.; Lewis, Harold W., III; Panfilov, Peter B.

    2007-01-01

    An extremely innovative approach is presented: the surgeon operates through a simulator running in real time, enhanced with an intelligent controller component, to improve the safety and efficiency of a remotely conducted operation. The use of a simulator enables the surgeon to operate in a virtual environment free from the impediments of telecommunication delay. The simulator functions as a predictor, and periodically the simulator state is corrected with truth data. Three major research areas must be explored to achieve the objectives: the simulator as predictor, image processing, and intelligent control. Each is equally necessary for the success of the project, and each involves a significant intelligent component. These are diverse, interdisciplinary areas of investigation, requiring a highly coordinated effort by all the members of our team to ensure an integrated system. The following is a brief discussion of those areas. Simulator as a predictor: The delays encountered in remote robotic surgery will be greater than any encountered in human-machine systems analysis, with the possible exception of remote operations in space. Therefore, novel compensation techniques will be developed, including the development of the real-time simulator, which is at the heart of our approach. The simulator will present real-time, stereoscopic images and artificial haptic stimuli to the surgeon. Image processing: Because of the delay and the possibility of insufficient bandwidth, a high level of novel image processing is necessary. This image processing will include several innovative aspects, including image interpretation, video-to-graphical conversion, texture extraction, geometric processing, image compression, and image generation at the surgeon station. Intelligent control: Since the approach we propose is in a sense predictor based, albeit with a very sophisticated predictor, a controller which not only optimizes end-effector trajectory but also avoids error is essential. We propose to investigate two different approaches to the controller design: one employs an optimal controller based on modern control theory; the other involves soft computing techniques, i.e., fuzzy logic, neural networks, genetic algorithms, and hybrids of these.

  16. Imaging system design and image interpolation based on CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), a CPLD (EPM7128AE), and a DSP (TMS320VC5509A). The CPLD implements the logic and timing control of the system, the SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed, and the hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, decreases computational complexity, and effectively preserves image edges.
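
    A hedged sketch of the non-edge path: bilinear interpolation of the green channel from an RGGB Bayer mosaic. An edge-oriented variant would instead compare horizontal and vertical gradients at each missing site and interpolate along the weaker gradient; the RGGB layout and the SciPy-based implementation here are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_green(bayer):
    """bayer: 2D float array holding the raw RGGB mosaic.
    Returns the full green channel, with missing sites filled bilinearly."""
    h, w = bayer.shape
    green_mask = np.zeros((h, w), dtype=bool)
    green_mask[0::2, 1::2] = True          # G sites in even rows (RGRG...)
    green_mask[1::2, 0::2] = True          # G sites in odd rows  (GBGB...)

    green = np.where(green_mask, bayer, 0.0)
    # At a non-green site, its 4-neighbors are all green: average them.
    kernel = np.array([[0, 1, 0],
                       [1, 0, 1],
                       [0, 1, 0]]) / 4.0
    filled = convolve(green, kernel, mode="mirror")
    return np.where(green_mask, bayer, filled)
```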

  17. Attention biases in preoccupation with body image: An ERP study of the role of social comparison and automaticity when processing body size.

    PubMed

    Uusberg, Helen; Peet, Krista; Uusberg, Andero; Akkermann, Kirsti

    2018-03-17

    Appearance-related attention biases are thought to contribute to body image disturbances. We investigated how preoccupation with body image is associated with attention biases to body size, focusing on the role of social comparison processes and automaticity. Thirty-six women varying in self-reported preoccupation compared their actual body size to size-modified images of either themselves or a figure-matched peer. Amplification of earlier (N170, P2) and later (P3, LPP) ERP components, recorded under low vs. high concurrent working memory load, was analyzed. Women with high preoccupation exhibited an earlier bias toward larger bodies of both self and peer. During later processing stages, they exhibited a stronger bias toward enlarged as well as reduced self-images and a lack of sensitivity to size modifications of the peer image. Working memory load did not affect these biases systematically. The current findings suggest that preoccupation with body image involves an earlier attention bias to weight-increase cues and later over-engagement with one's own figure. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Imaging mycobacterial growth and division with a fluorogenic probe.

    PubMed

    Hodges, Heather L; Brown, Robert A; Crooks, John A; Weibel, Douglas B; Kiessling, Laura L

    2018-05-15

    Control and manipulation of bacterial populations requires an understanding of the factors that govern growth, division, and antibiotic action. Fluorescent and chemically reactive small molecule probes of cell envelope components can visualize these processes and advance our knowledge of cell envelope biosynthesis (e.g., peptidoglycan production). Still, fundamental gaps remain in our understanding of the spatial and temporal dynamics of cell envelope assembly. Previously described reporters require steps that limit their use to static imaging. Probes that can be used for real-time imaging would advance our understanding of cell envelope construction. To this end, we synthesized a fluorogenic probe that enables continuous live cell imaging in mycobacteria and related genera. This probe reports on the mycolyltransferases that assemble the mycolic acid membrane. This peptidoglycan-anchored bilayer-like assembly functions to protect these cells from antibiotics and host defenses. Our probe, quencher-trehalose-fluorophore (QTF), is an analog of the natural mycolyltransferase substrate. Mycolyltransferases process QTF by diverting their normal transesterification activity to hydrolysis, a process that unleashes fluorescence. QTF enables high contrast continuous imaging and the visualization of mycolyltransferase activity in cells. QTF revealed that mycolyltransferase activity is augmented before cell division and localized to the septa and cell poles, especially at the old pole. This observed localization suggests that mycolyltransferases are components of extracellular cell envelope assemblies, in analogy to the intracellular divisomes and polar elongation complexes. We anticipate QTF can be exploited to detect and monitor mycobacteria in physiologically relevant environments.

  19. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera

    PubMed Central

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one is based on BAE Systems' SOCET SET classical commercial photogrammetric software and the other is built on Microsoft's Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but some artifacts were also detected. The point clouds from the Photosynth processing were sparser and noisier, to a large extent because the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for the properties of the imaging sensor, data collection, and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479

  1. Analysis of Minor Component Segregation in Ternary Powder Mixtures

    NASA Astrophysics Data System (ADS)

    Asachi, Maryam; Hassanpour, Ali; Ghadiri, Mojtaba; Bayly, Andrew

    2017-06-01

    In many powder handling operations, inhomogeneity in powder mixtures caused by segregation can have a significant adverse impact on the quality as well as the economics of production. Segregation of a minor component of a highly active substance can have serious deleterious effects; an example is the segregation of enzyme granules in detergent powders. In this study, the effects of particle properties and bulk cohesion on the segregation tendency of the minor component are analysed. The minor component is made sticky while not adversely affecting the flowability of the samples. The extent of segregation is evaluated using image processing of photographic records taken of the front face of the heap after the pouring process. The optimum average sieve cut size of components for which segregation could be reduced is reported. It is also shown that the extent of segregation is significantly reduced by applying a thin layer of liquid to the surfaces of the minor component, promoting an ordered mixture.

  2. Calibrating IR Cameras for In-Situ Temperature Measurement During the Electron Beam Melting Process using Inconel 718 and Ti-Al6-V4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dinwiddie, Ralph Barton; Lloyd, Peter D; Dehoff, Ryan R

    2016-01-01

    The Department of Energy's (DOE) Manufacturing Demonstration Facility (MDF) at Oak Ridge National Laboratory (ORNL) provides world-leading capabilities in advanced manufacturing (AM), leveraging previous and on-going government investments in materials science research and characterization. MDF contains systems for fabricating components with complex geometries using AM techniques (i.e., 3D printing). Various metal alloy printers, for example, use electron beam melting (EBM) systems for creating components that are otherwise extremely difficult, if not impossible, to machine. ORNL has partnered with manufacturers on improving the final part quality of components and developing new materials for further advancing these devices. One method being used to study AM processes in more depth relies on the advanced imaging capabilities at ORNL. High-performance mid-wave infrared (IR) cameras are used for in-situ process monitoring and temperature measurements. However, standard factory calibrations are insufficient due to the very low transmission of the leaded glass window required for X-ray absorption. Two techniques for temperature calibration will be presented and compared, and in-situ measurement of emittance will also be discussed. Ample information can be learned from in-situ IR process monitoring of the EBM process. Ultimately, these imaging systems have the potential for routine use in online quality assurance and feedback control.

  3. Mental visualization of objects from cross-sectional images

    PubMed Central

    Wu, Bing; Klatzky, Roberta L.; Stetten, George D.

    2011-01-01

    We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in-situ vs. ex-situ, which differed in whether cross sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex-situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level. PMID:22217386

  4. Seismological constraints on the crustal structures generated by continental rejuvenation in northeastern China

    PubMed Central

    Zheng, Tian-Yu; He, Yu-Mei; Yang, Jin-Hui; Zhao, Liang

    2015-01-01

    Crustal rejuvenation is a key process that has shaped the characteristics of current continental structures and components in tectonically active continental regions. Geological and geochemical observations have provided insights into crustal rejuvenation, although the crustal structural fabrics have not been well constrained. Here, we present a seismic image across the North China Craton (NCC) and Central Asian Orogenic Belt (CAOB) using a velocity structure imaging technique for receiver functions from a dense array. The Mesozoic crustal evolution of the eastern NCC is delineated by a dominantly low seismic wave velocity with velocity inversion, a relatively shallow Moho discontinuity, and a Moho offset beneath the Tanlu Fault Zone. The imaged structures and geochemical evidence, including changes in the components and ages of continental crusts and significant continental crustal growth during the Mesozoic, provide insight into the rejuvenation processes of the evolving crust in the eastern NCC caused by structural, magmatic and metamorphic processes in an extensional setting. The fossil structural fabric of the convergent boundary in the eastern CAOB indicates that the back-arc action of the Paleo-Pacific Plate subduction did not reach the hinterland of Asia. PMID:26443323

  5. Incompressible Deformation Estimation Algorithm (IDEA) from Tagged MR Images

    PubMed Central

    Liu, Xiaofeng; Abd-Elmoniem, Khaled Z.; Stone, Maureen; Murano, Emi Z.; Zhuo, Jiachen; Gullapalli, Rao P.; Prince, Jerry L.

    2013-01-01

    Measuring the three-dimensional motion of muscular tissues, e.g., the heart or the tongue, using magnetic resonance (MR) tagging is typically carried out by interpolating the two-dimensional motion information measured on orthogonal stacks of images. The incompressibility of muscle tissue is an important constraint on the reconstructed motion field and can significantly help to counter the sparsity and incompleteness of the available motion information. Previous methods utilizing this fact produced incompressible motions with limited accuracy. In this paper, we present an incompressible deformation estimation algorithm (IDEA) that reconstructs a dense representation of the three-dimensional displacement field from tagged MR images and the estimated motion field is incompressible to high precision. At each imaged time frame, the tagged images are first processed to determine components of the displacement vector at each pixel relative to the reference time. IDEA then applies a smoothing, divergence-free, vector spline to interpolate velocity fields at intermediate discrete times such that the collection of velocity fields integrate over time to match the observed displacement components. Through this process, IDEA yields a dense estimate of a three-dimensional displacement field that matches our observations and also corresponds to an incompressible motion. The method was validated with both numerical simulation and in vivo human experiments on the heart and the tongue. PMID:21937342

  6. In-TFT-Array-Process Micro Defect Inspection Using Nonlinear Principal Component Analysis

    PubMed Central

    Liu, Yi-Hung; Wang, Chi-Kai; Ting, Yung; Lin, Wei-Zhi; Kang, Zhi-Hao; Chen, Ching-Shun; Hwang, Jih-Shang

    2009-01-01

    Defect inspection plays a critical role in thin film transistor liquid crystal display (TFT-LCD) manufacture, and has received much attention in the field of automatic optical inspection (AOI). Previously, most attention was paid to macro-scale Mura-defect detection in the cell process, but it has recently been found that the defects which substantially influence the yield rate of LCD panels are actually those introduced in the TFT array process, the first process in TFT-LCD manufacturing. Defect inspection in the TFT array process is considered a difficult task. This paper presents a novel inspection scheme based on the kernel principal component analysis (KPCA) algorithm, a nonlinear version of the well-known PCA algorithm. The inspection scheme can not only detect defects in the images captured from the surface of LCD panels, but also recognize the types of the detected defects automatically. Results, based on real images provided by an LCD manufacturer in Taiwan, indicate that the KPCA-based defect inspection scheme is able to achieve a defect detection rate of over 99% and a high defect classification rate of over 96% when the imbalanced support vector machine (ISVM) with 2-norm soft margin is employed as the classifier. More importantly, the inspection time is less than 1 s per input image. PMID:20057957
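
    For orientation, a toy sketch of a kernel-PCA-plus-classifier pipeline using scikit-learn; the feature vectors, labels, kernel parameters, and the class-weighted SVC (standing in for the paper's imbalanced SVM) are placeholder assumptions:

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 64))           # stand-in feature vectors from panel images
        y = (rng.random(200) > 0.9).astype(int)  # imbalanced defect labels (~10% defects)

        kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.05)
        Z = kpca.fit_transform(X)                # nonlinear features for detection/classification

        clf = SVC(kernel="rbf", class_weight="balanced")  # imbalance-aware stand-in for ISVM
        clf.fit(Z, y)
        print(clf.score(Z, y))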

  7. Empirical validation of the triple-code model of numerical processing for complex math operations using functional MRI and group Independent Component Analysis of the mental addition and subtraction of fractions.

    PubMed

    Schmithorst, Vincent J; Brown, Rhonda Douglas

    2004-07-01

    The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.

  8. VIP: Vortex Image Processing Package for High-contrast Direct Imaging

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean

    2017-07-01

    We present the Vortex Image Processing (VIP) library, a python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. We also present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied by Jupyter notebook tutorials illustrating the main functionalities of the library.
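
    This is not the VIP API; the following numpy-only sketch illustrates the full-frame PCA/ADI subtraction idea the library implements, with the cube, parallactic angles, rotation sign convention, and rank K all being assumptions:

        import numpy as np
        from scipy.ndimage import rotate

        def pca_adi(cube, angles, K=5):
            """cube: (n_frames, ny, nx) ADI sequence; angles: parallactic angles in degrees."""
            n, ny, nx = cube.shape
            M = cube.reshape(n, -1)
            M = M - M.mean(axis=0)                     # remove the static stellar PSF component
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            lowrank = (U[:, :K] * s[:K]) @ Vt[:K]      # rank-K PSF model
            residual = (M - lowrank).reshape(n, ny, nx)
            # derotate residuals so the sky frame is aligned (sign convention assumed)
            derot = [rotate(residual[i], -angles[i], reshape=False) for i in range(n)]
            return np.median(derot, axis=0)            # combine residuals into the final frame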

  9. Digital holographic image fusion for a larger size object using compressive sensing

    NASA Astrophysics Data System (ADS)

    Tian, Qiuhong; Yan, Liping; Chen, Benyong; Yao, Jiabao; Zhang, Shihua

    2017-05-01

    Digital holographic image fusion for a larger-size object using compressive sensing is proposed. In this method, the high-frequency component of the digital hologram under the discrete wavelet transform is represented sparsely using compressive sensing, so that the data redundancy of digital holographic recording can be reduced effectively; the low-frequency component is retained in full to ensure image quality; and multiple reconstructed images, each with a different clear region corresponding to the laser spot size, are fused to produce a high-quality reconstructed image of a larger-size object. In addition, a filter combining high-pass and low-pass filters is designed to remove the zero-order term from the digital hologram effectively. A digital holographic experimental setup based on off-axis Fresnel digital holography was constructed, and feasibility and comparative experiments were carried out. The fused image was evaluated using the Tamura texture features. The experimental results demonstrate that the proposed method can improve the processing efficiency and visual characteristics of the fused image and effectively enlarge the size of the measured object.

  10. Tumor or abnormality identification from magnetic resonance images using statistical region fusion based segmentation.

    PubMed

    Subudhi, Badri Narayan; Thangaraj, Veerakumar; Sankaralingam, Esakkirajan; Ghosh, Ashish

    2016-11-01

    In this article, a statistical fusion based segmentation technique is proposed to identify different abnormalities in magnetic resonance images (MRI). The proposed scheme follows seed selection, region growing-merging and fusion of multiple image segments. In this process, an image is first divided into a number of blocks, and for each block the phase component of the Fourier transform is computed. The phase component of each block reflects the gray-level variation within the block but retains a large correlation with neighboring blocks. Hence a singular value decomposition (SVD) technique is applied to generate a singular value for each block, and a thresholding procedure is applied to these singular values to identify edgy and smooth regions, from which seed points for segmentation are selected. For each seed point a binary segmentation of the complete MRI is performed, so that all seed points together yield an equal number of binary images. A parcel based statistical fusion process is then used to fuse all the binary images into multiple segments. The effectiveness of the proposed scheme is tested on identifying different abnormalities: prostatic carcinoma, tuberculous granulomas, and intracranial neoplasms (brain tumors). The proposed technique is validated by comparing its results against seven state-of-the-art techniques using six performance evaluation measures. Copyright © 2016 Elsevier Inc. All rights reserved.
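
    A rough sketch of the described seed-selection step (per-block Fourier phase, one singular-value score per block, and a threshold); the block size, score normalization, and threshold are assumed values:

        import numpy as np

        def edgy_blocks(image, block=16, thresh=0.5):
            h, w = image.shape
            scores = np.zeros((h // block, w // block))
            for i in range(h // block):
                for j in range(w // block):
                    patch = image[i*block:(i+1)*block, j*block:(j+1)*block]
                    phase = np.angle(np.fft.fft2(patch))       # phase reflects gray-level variation
                    s = np.linalg.svd(phase, compute_uv=False)
                    scores[i, j] = s[0] / (s.sum() + 1e-12)    # energy in the leading singular value
            return scores > thresh                             # True = edgy block (seed candidate)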

  11. Continuous functional magnetic resonance imaging reveals dynamic nonlinearities of "dose-response" curves for finger opposition.

    PubMed

    Berns, G S; Song, A W; Mao, H

    1999-07-15

    Linear experimental designs have dominated the field of functional neuroimaging, but although successful at mapping regions of relative brain activation, the technique assumes that both cognition and brain activation are linear processes. To test these assumptions, we performed a continuous functional magnetic resonance imaging (MRI) experiment of finger opposition. Subjects performed a visually paced bimanual finger-tapping task. The frequency of finger tapping was continuously varied between 1 and 5 Hz, without any rest blocks. After continuous acquisition of fMRI images, the task-related brain regions were identified with independent components analysis (ICA). When the time courses of the task-related components were plotted against tapping frequency, nonlinear "dose-response" curves were obtained for most subjects. Nonlinearities appeared in both the static and dynamic sense, with hysteresis being prominent in several subjects. The ICA decomposition also demonstrated the spatial dynamics with different components active at different times. These results suggest that the brain response to tapping frequency does not scale linearly, and that it is history-dependent even after accounting for the hemodynamic response function. This implies that finger tapping, as measured with fMRI, is a nonstationary process. When analyzed with a conventional general linear model, a strong correlation to tapping frequency was identified, but the spatiotemporal dynamics were not apparent.

  12. Molecular imaging with radionuclides, a powerful technique for studying biological processes in vivo

    NASA Astrophysics Data System (ADS)

    Cisbani, E.; Cusanno, F.; Garibaldi, F.; Magliozzi, M. L.; Majewski, S.; Torrioli, S.; Tsui, B. M. W.

    2007-02-01

    Our team is carrying out a systematic study devoted to the design of a SPECT detector with submillimeter resolution and adequate sensitivity (1 cps/kBq). Such a system will be used for functional imaging of biological processes at the molecular level in small animals. The system requirements have been defined by two relevant applications: the characterization of atherosclerotic plaques, and stem cell diffusion and homing. In order to minimize costs and implementation time, the gamma detector will be based, as much as possible, on conventional components: a scintillator crystal and position-sensitive photomultipliers read by individual channel electronics. A coded aperture collimator will be adapted to maximize the efficiency. The optimal selection of the detector components is investigated by systematic use of Monte-Carlo simulations and laboratory validation tests; preliminary results are presented and discussed here.

  13. Detection of Pigment Networks in Dermoscopy Images

    NASA Astrophysics Data System (ADS)

    Eltayef, Khalid; Li, Yongmin; Liu, Xiaohui

    2017-02-01

    One of the most important structures in dermoscopy images is the pigment network, whose detection is one of the most challenging and fundamental tasks for dermatologists in the early detection of melanoma. This paper presents an automatic system to detect pigment networks in dermoscopy images. The proposed algorithm consists of four stages. First, a pre-processing algorithm is carried out to remove noise and improve the quality of the image. Second, a bank of directional filters and morphological connected component analysis are applied to detect the pigment networks. Third, features are extracted from the detected image for use in the subsequent stage. Fourth, classification is performed by applying a feed-forward neural network to label each region as either normal or abnormal skin. The method was tested on a dataset of 200 dermoscopy images from Hospital Pedro Hispano (Matosinhos) and produced better results than previous studies.
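
    An illustrative sketch of the first two stages (denoising, a bank of directional filters, and connected-component analysis) using scikit-image; the Gabor parameters, Otsu threshold, and size filter are assumptions rather than the paper's settings:

        import numpy as np
        from skimage.filters import gaussian, gabor, threshold_otsu
        from skimage.measure import label, regionprops

        def detect_network(gray):
            smooth = gaussian(gray, sigma=1)                        # pre-processing: denoise
            responses = [gabor(smooth, frequency=0.25, theta=t)[0]  # bank of directional filters
                         for t in np.linspace(0, np.pi, 8, endpoint=False)]
            lines = np.max(np.abs(np.stack(responses)), axis=0)     # strongest orientation response
            mask = lines > threshold_otsu(lines)
            labels = label(mask)                                    # connected component analysis
            return [r for r in regionprops(labels) if r.area > 50]  # keep network-sized components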

  14. Real-time machine vision system using FPGA and soft-core processor

    NASA Astrophysics Data System (ADS)

    Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad

    2012-06-01

    This paper presents a machine vision system for real-time computation of distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. Image component labeling and feature extraction modules were running in parallel having a total latency of 13ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through Fast Simplex Link (FSL). The latency for computing distance and angle of camera from the reference points was measured to be 2ms on the MicroBlaze, running at 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The FPGA based machine vision system that we propose has high frame speed, low latency and a power consumption that is much lower compared to commercially available smart camera solutions.

  15. Evaluation of multispectral middle infrared aircraft images for lithologic mapping the East Tintic Mountains, Utah( USA).

    USGS Publications Warehouse

    Kahle, A.B.; Rowan, L.C.

    1980-01-01

    Six channels of multispectral middle infrared (8 to 14 micrometres) aircraft scanner data were acquired over the East Tintic mining district, Utah. The digital image data were computer processed to create a color-composite image based on principal component transformations. When combined with a visible and near infrared color-composite image from a previous flight, and with limited field checking, it is possible to discriminate quartzite, carbonate rocks, quartz latitic and quartz monzonitic rocks, latitic and monzonitic rocks, silicified altered rocks, argillized altered rocks, and vegetation. -from Authors
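
    A minimal sketch of a principal-component color composite of the kind described: project the co-registered band stack onto its first three principal components and stretch them into RGB channels:

        import numpy as np

        def pc_composite(bands):
            """bands: (n_bands, h, w) co-registered multispectral channels."""
            n, h, w = bands.shape
            X = bands.reshape(n, -1).T                 # pixels x bands
            X = X - X.mean(axis=0)
            _, _, Vt = np.linalg.svd(X, full_matrices=False)
            pcs = (X @ Vt[:3].T).T.reshape(3, h, w)    # first three principal components
            lo = pcs.min(axis=(1, 2), keepdims=True)
            hi = pcs.max(axis=(1, 2), keepdims=True)
            return np.moveaxis((pcs - lo) / (hi - lo + 1e-12), 0, -1)  # h x w x 3 in [0, 1]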

  16. A novel weighted-direction color interpolation

    NASA Astrophysics Data System (ADS)

    Tao, Jin-you; Yang, Jianfeng; Xue, Bin; Liang, Xiaofen; Qi, Yong-hong; Wang, Feng

    2013-08-01

    A digital camera captures images through a sensor covered by a color filter array (CFA), so only one color sample is obtained at each pixel location. Demosaicking is the process of estimating the missing color components of each pixel to obtain a full-resolution image. In this paper, a new algorithm based on edge-adaptive interpolation with direction-dependent weighting factors is proposed. Our method effectively suppresses undesirable artifacts. Experimental results on the Kodak images show that the proposed algorithm obtains higher-quality images than other methods in both numerical and visual terms.

  17. Image processing for a tactile/vision substitution system using digital CNN.

    PubMed

    Lin, Chien-Nan; Yu, Sung-Nien; Hu, Jin-Cheng

    2006-01-01

    In view of the parallel processing and easy implementation properties of cellular neural networks (CNN), we propose using a digital CNN as the image processor of a tactile/vision substitution system (TVSS). The digital CNN processor executes wavelet down-sampling filtering and half-toning operations, aiming to extract important features from the images. A template combination method is used to embed the two image processing functions into a single CNN processor. The digital CNN processor is implemented as an intellectual property (IP) core on a XILINX VIRTEX II 2000 FPGA board. Experiments were designed to test the capability of the CNN processor in the recognition of characters and human subjects in different environments. The experiments demonstrate impressive results, which prove that the proposed digital CNN processor is a powerful component in the design of efficient tactile/vision substitution systems for visually impaired people.

  18. Intershot Analysis of Flows in DIII-D

    NASA Astrophysics Data System (ADS)

    Meyer, W. H.; Allen, S. L.; Samuell, C. M.; Howard, J.

    2016-10-01

    Analysis of the DIII-D flow diagnostic data requires demodulation of interference images and inversion of the resultant line-integrated emissivity and flow (phase) images. Four response matrices are pre-calculated: the emissivity line integral and the line integrals of the scalar product of the lines-of-sight with the orthogonal unit vectors of parallel flow. Equilibrium data determine the relative weight of the component matrices used in the final flow inversion matrix. Serial processing has been used for the 800x600 pixel image of the lower divertor viewing flow camera. The full cross section viewing camera will require parallel processing of its 2160x2560 pixel image. We will discuss using a Posix thread pool and a Tesla K40c GPU in the processing of these data. Prepared by LLNL under Contract DE-AC52-07NA27344. This material is based upon work supported by the U.S. DOE, Office of Science, Fusion Energy Sciences.

  19. A neural network ActiveX based integrated image processing environment.

    PubMed

    Ciuca, I; Jitaru, E; Alaicescu, M; Moisil, I

    2000-01-01

    The paper outlines an integrated image processing environment that uses neural network ActiveX technology for object recognition and classification. The image processing environment, which is Windows-based, encapsulates a Multiple-Document Interface (MDI) and is menu driven. Object (shape) parameter extraction focuses on features that are invariant under translation, rotation and scale transformations. The neural network models that can be incorporated as ActiveX components into the environment allow both clustering and classification of objects from the analysed image. Mapping neural networks perform an input sensitivity analysis on the extracted feature measurements and thus facilitate the removal of irrelevant features and improvements in the degree of generalisation. The program has been used to evaluate the dimensions of hydrocephalus in a study calculating the Evans index and the angle of the frontal horns of the ventricular system modifications.
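
    One common way to obtain translation-, rotation-, and scale-invariant shape parameters of the kind described is Hu moments; a short OpenCV sketch (the binary object mask and the log scaling are assumptions, not the paper's exact feature set):

        import cv2
        import numpy as np

        def shape_features(mask):
            """mask: uint8 binary image containing one segmented object."""
            m = cv2.moments(mask, binaryImage=True)
            hu = cv2.HuMoments(m).ravel()                    # 7 invariant moments
            # log-scale them so a classifier sees comparable magnitudes
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)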

  20. Handheld ultrasound array imaging device

    NASA Astrophysics Data System (ADS)

    Hwang, Juin-Jet; Quistgaard, Jens

    1999-06-01

    A handheld ultrasound imaging device, one that weighs less than five pounds, has been developed for diagnosing trauma in the combat battlefield as well as a variety of commercial mobile diagnostic applications. This handheld device consists of four component ASICs, each is designed using the state of the art microelectronics technologies. These ASICs are integrated with a convex array transducer to allow high quality imaging of soft tissues and blood flow in real time. The device is designed to be battery driven or ac powered with built-in image storage and cineloop playback capability. Design methodologies of a handheld device are fundamentally different to those of a cart-based system. As system architecture, signal and image processing algorithm as well as image control circuit and software in this device is deigned suitably for large-scale integration, the image performance of this device is designed to be adequate to the intent applications. To elongate the battery life, low power design rules and power management circuits are incorporated in the design of each component ASIC. The performance of the prototype device is currently being evaluated for various applications such as a primary image screening tool, fetal imaging in Obstetrics, foreign object detection and wound assessment for emergency care, etc.

  1. Modified interferometric imaging condition for reverse-time migration

    NASA Astrophysics Data System (ADS)

    Guo, Xue-Bao; Liu, Hong; Shi, Ying

    2018-01-01

    For reverse-time migration, high-resolution imaging depends mainly on the accuracy of the velocity model and on the imaging condition. In practice, however, the small-scale components of the velocity model cannot be estimated by tomographic methods; the wavefields are therefore not accurately reconstructed from the background velocity, and the imaging process generates artefacts. Some of this noise is due to cross-correlation of unrelated seismic events. The interferometric imaging condition suppresses imaging noise very effectively, especially that caused by unknown random disturbances of the small-scale part. In this study, the conventional interferometric imaging condition is extended to obtain a new imaging condition based on the pseudo-Wigner distribution function (WDF). Numerical examples show that the modified interferometric imaging condition improves imaging precision.
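
    Schematically, and in our notation rather than the paper's (S and R denote the source and receiver wavefields), the conventional zero-lag cross-correlation imaging condition and a window-averaged (pseudo-WDF) variant can be written as:

        % conventional zero-lag cross-correlation imaging condition
        I(\mathbf{x}) = \sum_{\mathrm{shots}} \sum_{t} S(\mathbf{x}, t)\, R(\mathbf{x}, t)
        % interferometric variant (schematic): average the correlation over a
        % small space-time window W centered at (x, t) before stacking
        I_W(\mathbf{x}) = \sum_{\mathrm{shots}} \sum_{t} \sum_{(\mathbf{x}', t') \in W(\mathbf{x}, t)} S(\mathbf{x}', t')\, R(\mathbf{x}', t')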

  2. A RESTful image gateway for multiple medical image repositories.

    PubMed

    Valente, Frederico; Viana-Ferreira, Carlos; Costa, Carlos; Oliveira, José Luis

    2012-05-01

    Mobile technologies are increasingly important components in telemedicine systems and are becoming powerful decision support tools. Universal access to data may already be achieved by resorting to the latest generation of tablet devices and smartphones. However, the protocols employed for communicating with image repositories are not suited to exchange data with mobile devices. In this paper, we present an extensible approach to solving the problem of querying and delivering data in a format that is suitable for the bandwidth and graphic capacities of mobile devices. We describe a three-tiered component-based gateway that acts as an intermediary between medical applications and a number of Picture Archiving and Communication Systems (PACS). The interface with the gateway is accomplished using Hypertext Transfer Protocol (HTTP) requests following a Representational State Transfer (REST) methodology, which relieves developers from dealing with complex medical imaging protocols and allows the processing of data on the server side.
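
    A hypothetical sketch of such a gateway endpoint (Flask and Pillow; the route, parameters, and backend fetch are invented for illustration and are not the paper's interface):

        import io
        from flask import Flask, request, send_file
        from PIL import Image

        app = Flask(__name__)

        def fetch_from_pacs(study_id):
            """Placeholder for the backend PACS/DICOM retrieval tier."""
            return Image.open(f"/archive/{study_id}.png")

        @app.route("/studies/<study_id>/image")
        def get_image(study_id):
            width = int(request.args.get("width", 512))   # client states its display capacity
            img = fetch_from_pacs(study_id)
            img.thumbnail((width, width))                 # scale down for bandwidth/screen
            buf = io.BytesIO()
            img.convert("RGB").save(buf, "JPEG", quality=80)
            buf.seek(0)
            return send_file(buf, mimetype="image/jpeg")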

  3. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    PubMed

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during image acquisition. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising. This strategy generates many noise-caused color artifacts in the demosaicking step, which are hard to remove in the subsequent denoising step. Few denoising schemes that work directly on CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can offer advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.

  4. Fusion of GFP and phase contrast images with complex shearlet transform and Haar wavelet-based energy rule.

    PubMed

    Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren

    2018-03-14

    Image fusion techniques can integrate information from different imaging modalities to obtain a composite image that is more suitable for human visual perception and further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of proteins and genome expression. A fusion method for GFP and phase contrast images based on the complex shearlet transform (CST) is proposed in this paper. First, the GFP image is converted to the IHS model and its intensity component is obtained. Second, the CST is performed on the intensity component and the phase contrast image to obtain the low-frequency and high-frequency subbands. The high-frequency subbands are then merged by the absolute-maximum rule, while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally, the fused image is obtained by performing the inverse CST on the merged subbands and conducting IHS-to-RGB conversion. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.
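
    Substituting a plain discrete wavelet transform for the complex shearlet transform, a short PyWavelets sketch of the fusion rules: absolute-maximum merging for the high-frequency subbands, with a simple average standing in for the paper's Haar-wavelet-based energy rule on the low-frequency subband:

        import numpy as np
        import pywt

        def fuse(intensity, phase_contrast, wavelet="db2", level=3):
            ca = pywt.wavedec2(intensity, wavelet, level=level)
            cb = pywt.wavedec2(phase_contrast, wavelet, level=level)
            fused = [(ca[0] + cb[0]) / 2]                    # low-frequency: averaged (stand-in rule)
            for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
                pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)  # absolute-maximum rule
                fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
            return pywt.waverec2(fused, wavelet)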

  5. Optimal focal-plane restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1989-01-01

    Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.

  6. Spatio-Chromatic Adaptation via Higher-Order Canonical Correlation Analysis of Natural Images

    PubMed Central

    Gutmann, Michael U.; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús

    2014-01-01

    Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis explains similarly chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation. PMID:24533049

  8. Directional analysis of cardiac motion field from gated fluorodeoxyglucose PET images using the Discrete Helmholtz Hodge Decomposition.

    PubMed

    Sims, J A; Giorgi, M C; Oliveira, M A; Meneghetti, J C; Gutierrez, M A

    2018-04-01

    The aim was to extract directional information related to left ventricular (LV) rotation and torsion from a 4D PET motion field using the Discrete Helmholtz Hodge Decomposition (DHHD). Synthetic motion fields were created by superposition of rotational and radial field components, and cardiac fields were produced using optical flow from a control image and a patient image. These were decomposed into curl-free (CF) and divergence-free (DF) components using the DHHD. Synthetic radial components appeared in the CF field and synthetic rotational components in the DF field, with each retaining its center position, direction of motion and diameter after decomposition. The directions of rotation at the apex and base of the control field were opposite during systole, reversing during diastole. The patient DF field showed little overall rotation, with several small rotators. The decomposition of the LV motion field into directional components could assist quantification of LV torsion, but further processing stages seem necessary. Copyright © 2017 Elsevier Ltd. All rights reserved.
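
    A 2-D, FFT-based sketch of the Helmholtz-Hodge split into curl-free and divergence-free parts (the paper applies the discrete decomposition to 4-D PET motion fields; periodic boundaries and folding the harmonic component into the divergence-free part are simplifying assumptions here):

        import numpy as np

        def helmholtz_2d(u, v):
            """u, v: components of a periodic 2-D vector field."""
            ny, nx = u.shape
            kx = np.fft.fftfreq(nx) * 2 * np.pi
            ky = np.fft.fftfreq(ny) * 2 * np.pi
            KX, KY = np.meshgrid(kx, ky)
            k2 = KX**2 + KY**2
            k2[0, 0] = 1.0                               # avoid division by zero at the DC mode
            U, V = np.fft.fft2(u), np.fft.fft2(v)
            div = KX * U + KY * V                        # k . F_hat (i factors cancel in the projection)
            cf_u = np.real(np.fft.ifft2(KX * div / k2))  # curl-free (gradient) part
            cf_v = np.real(np.fft.ifft2(KY * div / k2))
            return (cf_u, cf_v), (u - cf_u, v - cf_v)    # (curl-free, divergence-free)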

  9. Understanding Seismic Anisotropy in Hunt Well of Fort McMurray, Canada

    NASA Astrophysics Data System (ADS)

    Malehmir, R.; Schmitt, D. R.; Chan, J.

    2014-12-01

    Seismic imaging plays a vital role in developing geothermal systems as a sustainable energy resource. In this work, we acquired and processed zero-offset and walk-away VSP data, well logs, and surface seismic data in the Athabasca oil sands area, Alberta. The seismic data were extensively processed to better image the geothermal system. Through data processing, properties of natural fractures such as orientation and width were studied, and highly probable permeable zones were mapped along the well, which was drilled to a depth of 2363 m into crystalline basement rocks. In addition to the logging data, the seismic data were processed to build a reliable image of the subsurface. Velocity analysis of the high-resolution multi-component walk-away VSP data characterized the elastic anisotropy in place. Study of natural and induced fractures, together with the elastic anisotropy in the seismic data, allowed us to better map the stress regime around the wellbore. The seismic image and fracture map help optimize the stages of enhanced geothermal stimulation. Keywords: geothermal, anisotropy, VSP, logging, Hunt well, seismic

  10. Solution processed integrated pixel element for an imaging device

    NASA Astrophysics Data System (ADS)

    Swathi, K.; Narayan, K. S.

    2016-09-01

    We demonstrate the implementation of a solid state circuit/structure comprising of a high performing polymer field effect transistor (PFET) utilizing an oxide layer in conjunction with a self-assembled monolayer (SAM) as the dielectric and a bulk-heterostructure based organic photodiode as a CMOS-like pixel element for an imaging sensor. Practical usage of functional organic photon detectors requires on chip components for image capture and signal transfer as in the CMOS/CCD architecture rather than simple photodiode arrays in order to increase speed and sensitivity of the sensor. The availability of high performing PFETs with low operating voltage and photodiodes with high sensitivity provides the necessary prerequisite to implement a CMOS type image sensing device structure based on organic electronic devices. Solution processing routes in organic electronics offers relatively facile procedures to integrate these components, combined with unique features of large-area, form factor and multiple optical attributes. We utilize the inherent property of a binary mixture in a blend to phase-separate vertically and create a graded junction for effective photocurrent response. The implemented design enables photocharge generation along with on chip charge to voltage conversion with performance parameters comparable to traditional counterparts. Charge integration analysis for the passive pixel element using 2D TCAD simulations is also presented to evaluate the different processes that take place in the monolithic structure.

  11. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assessing the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses of different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow systematic and repeatable measurement of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a MATLAB-based image analysis toolkit to analyze CT-generated images of the phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates components with sparse loadings, used in conjunction with Hotelling's T2 statistical analysis to compare, qualify, and detect faults in the tested systems.
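
    A hypothetical sketch of the fault-detection step: sparse-loading principal component scores of the per-scan metric vectors, monitored with a Hotelling T2 statistic (scikit-learn's SparsePCA standing in for the authors' modified SPCA; sizes and control limits are assumptions):

        import numpy as np
        from sklearn.decomposition import SparsePCA

        rng = np.random.default_rng(1)
        X = rng.normal(size=(40, 88))            # 40 scans x 88 image-quality metrics (stand-in data)
        X = (X - X.mean(axis=0)) / X.std(axis=0)

        spca = SparsePCA(n_components=5, alpha=1.0, random_state=0)
        T = spca.fit_transform(X)                # sparse-loading component scores

        S_inv = np.linalg.inv(np.cov(T, rowvar=False))
        D = T - T.mean(axis=0)
        t2 = np.einsum("ij,jk,ik->i", D, S_inv, D)          # Hotelling T2 per scan
        print(np.where(t2 > np.percentile(t2, 95)))          # flag scans above the control limit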

  12. Tolerance assignment in optical design

    NASA Astrophysics Data System (ADS)

    Youngworth, Richard Neil

    2002-09-01

    Tolerance assignment is necessary in any engineering endeavor because fabricated systems---due to the stochastic nature of manufacturing and assembly processes---necessarily deviate from the nominal design. This thesis addresses the problem of optical tolerancing. The work can logically be split into three different components that all play an essential role. The first part addresses the modeling of manufacturing errors in contemporary fabrication and assembly methods. The second component is derived from the design aspect---the development of a cost-based tolerancing procedure. The third part addresses the modeling of image quality in an efficient manner that is conducive to the tolerance assignment process. The purpose of the first component, modeling manufacturing errors, is twofold---to determine the most critical tolerancing parameters and to understand better the effects of fabrication errors. Specifically, mid-spatial-frequency errors, typically introduced in sub-aperture grinding and polishing fabrication processes, are modeled. The implication is that improving process control and understanding better the effects of the errors makes the task of tolerance assignment more manageable. Conventional tolerancing methods do not directly incorporate cost. Consequently, tolerancing approaches tend to focus more on image quality. The goal of the second part of the thesis is to develop cost-based tolerancing procedures that facilitate optimum system fabrication by generating the loosest acceptable tolerances. This work has the potential to impact a wide range of optical designs. The third element, efficient modeling of image quality, is directly related to the cost-based optical tolerancing method. Cost-based tolerancing requires efficient and accurate modeling of the effects of errors on the performance of optical systems. Thus it is important to be able to compute the gradient and the Hessian, with respect to the parameters that need to be toleranced, of the figure of merit that measures the image quality of a system. An algebraic method for computing the gradient and the Hessian is developed using perturbation theory.

  13. Client/server approach to image capturing

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications and high-end CCD flatbed scanners and drum-scanners with photo multiplier technology. Each device and market segment has its own specific needs which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we make abstraction of the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven applications). This paper is structured as follows. In the introduction, we further motivate the need for a scan server-based architecture. In the second section, we give a brief architectural overview of the scan server and the other components it is connected to. The third chapter exposes the generic model for input devices as well as the image processing model; the fourth chapter reveals the different shapes the scanning applications (or modules) can have. In the last section, we briefly summarize the presented material and point out trends for future development.

  14. Image Size Variation Influence on Corrupted and Non-viewable BMP Image

    NASA Astrophysics Data System (ADS)

    Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah

    2017-08-01

    Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG files are lossy and compress well, which speeds up transmission. However, corrupted JPEG images are hard to recover due to the complexity of determining the corruption point. Bitmap (BMP) images are preferred in image processing compared to other formats because a BMP image contains all the image information in a simple format. Therefore, in order to investigate the corruption point in a JPEG file, the file is first converted into BMP format. Nevertheless, several factors can influence the corruption of a BMP image, such as changes to the stored image size that make the file non-viewable. In this paper, experiments examine how the size of a BMP file influences changes in the image itself under three conditions: deletion, replacement and insertion. From the experiments, we learned that correcting the file size can produce a viewable, if partial, file, which can then be investigated further to identify the corruption point.
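
    A small sketch of the "correcting the file size" repair, assuming the corruption is limited to the 4-byte size field of the BITMAPFILEHEADER: rewrite the stored size so it matches the actual byte count:

        import struct

        def fix_bmp_size(path):
            with open(path, "r+b") as f:
                data = f.read()
                if data[:2] != b"BM":
                    raise ValueError("not a BMP file")
                stored = struct.unpack_from("<I", data, 2)[0]   # declared file size (little-endian)
                if stored != len(data):
                    f.seek(2)
                    f.write(struct.pack("<I", len(data)))       # patch the header to the real size
                    return True                                  # file was repaired
                return False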

  15. Creation of a virtual cutaneous tissue bank

    NASA Astrophysics Data System (ADS)

    LaFramboise, William A.; Shah, Sujal; Hoy, R. W.; Letbetter, D.; Petrosko, P.; Vennare, R.; Johnson, Peter C.

    2000-04-01

    Cellular and non-cellular constituents of skin contain fundamental morphometric features and structural patterns that correlate with tissue function. High-resolution digital image acquisition is performed using an automated system and proprietary software to assemble adjacent images and create a contiguous, lossless, digital representation of individual microscope slide specimens. Serial extraction, evaluation and statistical analysis of cutaneous features is performed using an automated analysis system to derive normal cutaneous parameters for the essential structural skin components. Automated digital cutaneous analysis allows fast extraction of microanatomic data with accuracy approximating manual measurement. The process provides rapid assessment of features both within individual specimens and across sample populations. The images, component data, and statistical analyses comprise a bioinformatics database that serves as an architectural blueprint for skin tissue engineering and as a diagnostic standard of comparison for pathologic specimens.

  16. Signal to Noise Studies on Thermographic Data with Fabricated Defects for Defense Structures

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Rajic, Nik; Genest, Marc

    2006-01-01

    There is growing international interest in thermal inspection systems for asset life assessment and management of defense platforms. The efficacy of flash thermography is generally enhanced by applying image processing algorithms to the raw temperature observations. Improving the defect signal-to-noise ratio (SNR) is of primary interest, to reduce false calls and allow easier interpretation of a thermal inspection image. Several factors affecting defect SNR were studied, such as data compression and reconstruction using principal component analysis, and time-window processing.

  17. Contribution of non-negative matrix factorization to the classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Karoui, M. S.; Deville, Y.; Hosseini, S.; Ouamri, A.; Ducrot, D.

    2008-10-01

    Remote sensing has become an indispensable tool for better managing our environment, generally by producing land cover maps using classification techniques. The classification process requires some pre-processing, especially for data size reduction, and the most usual technique is Principal Component Analysis. Another approach consists of regarding each pixel of the multispectral image as a mixture of pure elements contained in the observed area. Using Blind Source Separation (BSS) methods, one can hope to unmix each pixel and to recognize the classes constituting the observed scene. Our contribution consists of using Non-negative Matrix Factorization (NMF) combined with sparse coding as a solution to BSS, in order to generate new images (which are at least partly separated) from HRV SPOT images of the Oran area, Algeria. These images are then used as inputs of a supervised classifier integrating textural information. The classification results for these "separated" images show a clear improvement (correct pixel classification rate improved by more than 20%) compared to classification of the initial (i.e., non-separated) images. These results demonstrate the value of NMF as an attractive pre-processing step for the classification of multispectral remote sensing imagery.
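
    The unmixing stage can be sketched with scikit-learn's NMF. The sparse-coding constraint the authors add is omitted here, and the data are random stand-ins for a SPOT scene:

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # X: (n_pixels, n_bands) non-negative multispectral reflectances.
    # NMF factorizes X ~= W @ H, where H holds the spectra of the pure
    # materials (sources) and W the per-pixel abundances, i.e. the
    # "partly separated" images fed to the supervised classifier.
    rng = np.random.default_rng(0)
    X = rng.random((10000, 4))                  # stand-in for a 100x100 scene

    model = NMF(n_components=3, init="nndsvda", max_iter=500)
    W = model.fit_transform(X)                  # abundances: (n_pixels, 3)
    H = model.components_                       # endmember spectra: (3, 4)
    abundance_maps = W.reshape(100, 100, 3)     # back to image geometry
    ```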

  18. Using a Multicore Processor for Rover Autonomous Science

    NASA Technical Reports Server (NTRS)

    Bornstein, Benjamin; Estlin, Tara; Clement, Bradley; Springer, Paul

    2011-01-01

    Multicore processing promises to be a critical component of future spacecraft. It provides immense increases in onboard processing power and an environment for directly supporting fault-tolerant computing. This paper discusses using a state-of-the-art multicore processor to efficiently perform image analysis onboard a Mars rover in support of autonomous science activities.

  19. Text analysis devices, articles of manufacture, and text analysis methods

    DOEpatents

    Turner, Alan E; Hetzler, Elizabeth G; Nakamura, Grant C

    2015-03-31

    Text analysis devices, articles of manufacture, and text analysis methods are described according to some aspects. In one aspect, a text analysis device includes a display configured to depict visible images, and processing circuitry coupled with the display and wherein the processing circuitry is configured to access a first vector of a text item and which comprises a plurality of components, to access a second vector of the text item and which comprises a plurality of components, to weight the components of the first vector providing a plurality of weighted values, to weight the components of the second vector providing a plurality of weighted values, and to combine the weighted values of the first vector with the weighted values of the second vector to provide a third vector.
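
    Stripped of the claim language, the combination step reads as a component-wise weighted sum of two vector representations of the same text item. A toy numpy illustration (the weights and vectors are invented, and this is not the patented implementation):

    ```python
    import numpy as np

    first = np.array([0.2, 0.7, 0.1])    # e.g. topic weights for a text item
    second = np.array([0.5, 0.1, 0.4])   # e.g. term-frequency components
    w1, w2 = 0.6, 0.4                    # per-vector weights (assumed values)

    # weight each vector's components, then combine into a third vector
    third = w1 * first + w2 * second
    ```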

  20. Light-leaking region segmentation of FOG fiber based on quality evaluation of infrared image

    NASA Astrophysics Data System (ADS)

    Liu, Haoting; Wang, Wei; Gao, Feng; Shan, Lianjie; Ma, Yuzhou; Ge, Wenqian

    2014-07-01

    To improve the assembly reliability of the Fiber Optic Gyroscope (FOG), a light leakage detection system and method are developed. First, an agile movement control platform is designed to implement pose control of the FOG optical path component in 6 Degrees of Freedom (DOF). Second, an infrared camera is employed to capture working-state images of the corresponding fibers in the optical path component after the manual assembly of the FOG; the entire light transmission process of the key sections in the light path can therefore be recorded. Third, an image quality evaluation based region segmentation method is developed for the light leakage images. In contrast to traditional methods, image quality metrics, including region contrast, edge blur, and image noise level, are first computed to characterize the infrared images; robust segmentation algorithms, including graph cut and flood fill, are then selected for region segmentation according to the measured image quality. Finally, after segmentation of the light leakage region, the typical light-leaking types, such as point defects, wedge defects, and surface defects, can be identified. By using the image quality based method, the applicability of our proposed system is improved dramatically. Extensive experimental results have demonstrated the validity and effectiveness of this method.
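
    A minimal sketch of such quality-driven selection with OpenCV is shown below; the two metrics, their thresholds, and the use of GrabCut as the graph-cut stage are our illustrative assumptions, not the authors' parameters:

    ```python
    import cv2
    import numpy as np

    def segment_leak(ir_img, seed, rect):
        """Pick flood fill or graph cut from simple quality metrics.
        ir_img: 8-bit grayscale infrared image; seed: (x, y) bright pixel;
        rect: (x, y, w, h) box around the suspected leak region."""
        noise = cv2.Laplacian(ir_img, cv2.CV_64F).var()   # noise/blur proxy
        contrast = float(ir_img.std())                    # contrast proxy
        if noise < 50.0 and contrast > 20.0:
            # clean, well-contrasted image: flood fill from a bright seed
            mask = np.zeros((ir_img.shape[0] + 2, ir_img.shape[1] + 2), np.uint8)
            cv2.floodFill(ir_img.copy(), mask, seed, 255, loDiff=10, upDiff=10)
            return (mask[1:-1, 1:-1] > 0).astype(np.uint8) * 255
        # noisy or low-contrast image: GrabCut (a graph-cut segmentation)
        bgr = cv2.cvtColor(ir_img, cv2.COLOR_GRAY2BGR)
        gc_mask = np.zeros(ir_img.shape, np.uint8)
        bgd = np.zeros((1, 65), np.float64)
        fgd = np.zeros((1, 65), np.float64)
        cv2.grabCut(bgr, gc_mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
        fg = (gc_mask == cv2.GC_FGD) | (gc_mask == cv2.GC_PR_FGD)
        return fg.astype(np.uint8) * 255
    ```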

  1. Hospital integrated parallel cluster for fast and cost-efficient image analysis: clinical experience and research evaluation

    NASA Astrophysics Data System (ADS)

    Erberich, Stephan G.; Hoppe, Martin; Jansen, Christian; Schmidt, Thomas; Thron, Armin; Oberschelp, Walter

    2001-08-01

    In the last few years more and more university hospitals as well as private hospitals have changed to digital information systems for patient records, diagnostic files and digital images. Not only does patient management become easier; it is also remarkable how much clinical research can profit from Picture Archiving and Communication Systems (PACS) and diagnostic databases, especially image databases. Although images are available at one's fingertips, difficulties arise when image data need to be processed, e.g. segmented, classified or co-registered, which usually demands a lot of computational power. Today's clinical environment supports PACS very well, but real image processing is still underdeveloped. The purpose of this paper is to introduce a parallel cluster of standard distributed systems and its software components, and to show how such a system can be integrated into a hospital environment. To demonstrate the cluster technique we present our clinical experience with the crucial but computationally expensive motion correction of clinical routine and research functional MRI (fMRI) data, as it is processed in our lab on a daily basis.

  2. Imaging of DNA and Protein by SFM and Combined SFM-TIRF Microscopy.

    PubMed

    Grosbart, Małgorzata; Ristić, Dejan; Sánchez, Humberto; Wyman, Claire

    2018-01-01

    Direct imaging is invaluable for understanding the mechanism of complex genome transactions where proteins work together to organize, transcribe, replicate and repair DNA. Scanning (or atomic) force microscopy is an ideal tool for this, providing 3D information on molecular structure at nm resolution from defined components. This is a convenient and practical addition to in vitro studies as readily obtainable amounts of purified proteins and DNA are required. The images reveal structural details on the size and location of DNA bound proteins as well as protein-induced arrangement of the DNA, which are directly correlated in the same complexes. In addition, even from static images, the different forms observed and their relative distributions can be used to deduce the variety and stability of different complexes that are necessarily involved in dynamic processes. Recently available instruments that combine fluorescence with topographic imaging allow the identification of specific molecular components in complex assemblies, which broadens the applications and increases the information obtained from direct imaging of molecular complexes. We describe here basic methods for preparing samples of proteins, DNA and complexes of the two for topographic imaging and quantitative analysis. We also describe special considerations for combined fluorescence and topographic imaging of molecular complexes.

  3. Sample preparation for SFM imaging of DNA, proteins, and DNA-protein complexes.

    PubMed

    Ristic, Dejan; Sanchez, Humberto; Wyman, Claire

    2011-01-01

    Direct imaging is invaluable for understanding the mechanism of complex genome transactions where proteins work together to organize, transcribe, replicate, and repair DNA. Scanning (or atomic) force microscopy is an ideal tool for this, providing 3D information on molecular structure at nanometer resolution from defined components. This is a convenient and practical addition to in vitro studies as readily obtainable amounts of purified proteins and DNA are required. The images reveal structural details on the size and location of DNA-bound proteins as well as protein-induced arrangement of the DNA, which are directly correlated in the same complexes. In addition, even from static images, the different forms observed and their relative distributions can be used to deduce the variety and stability of different complexes that are necessarily involved in dynamic processes. Recently available instruments that combine fluorescence with topographic imaging allow the identification of specific molecular components in complex assemblies, which broadens the applications and increases the information obtained from direct imaging of molecular complexes. We describe here basic methods for preparing samples of proteins, DNA, and complexes of the two for topographic imaging and quantitative analysis. We also describe special considerations for combined fluorescence and topographic imaging of molecular complexes.

  4. The development of a specialized processor for a space-based multispectral earth imager

    NASA Astrophysics Data System (ADS)

    Khedr, Mostafa E.

    2008-10-01

    This work was done in the Department of Computer Engineering, Lvov Polytechnic National University, Lvov, Ukraine, as a thesis entitled "Space Imager Computer System for Raw Video Data Processing" [1]. It describes the synthesis and practical implementation of a specialized computer system for raw data control and processing onboard a satellite multispectral earth imager. The computer system is intended for satellites with resolution in the range of one meter at 12-bit precision. The design is based mostly on general off-the-shelf components such as FPGAs, plus custom-designed software for interfacing with a PC and test equipment. The designed system was successfully manufactured and is now fully functional in orbit.

  5. SU-D-BRD-02: A Web-Based Image Processing and Plan Evaluation Platform (WIPPEP) for Future Cloud-Based Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, X; Liu, L; Xing, L

    Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and plan evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. Each independent server communicates with the others through HTTP requests. The web server is the key component: it provides visualizations and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, which can process data directly or via a C++ program DLL. Results: This software platform runs on a 32-core CPU server that virtually hosts the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize images and RT structures belonging to this patient, and perform image segmentation on the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation servers. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system clearly demonstrates the feasibility of performing image processing and plan evaluation through a web browser and exhibits potential for future cloud-based radiotherapy.

  6. Principal and independent component analysis of concomitant functional near infrared spectroscopy and magnetic resonance imaging data

    NASA Astrophysics Data System (ADS)

    Schelkanova, Irina; Toronov, Vladislav

    2011-07-01

    Although near infrared spectroscopy (NIRS) is now widely used both in emerging clinical techniques and in cognitive neuroscience, the development of the apparatuses and signal processing methods for these applications is still a hot research topic. The main unresolved problem in functional NIRS is the separation of functional signals from the contaminations by systemic and local physiological fluctuations. This problem was approached by using various signal processing methods, including blind signal separation techniques. In particular, principal component analysis (PCA) and independent component analysis (ICA) were applied to the data acquired at the same wavelength and at multiple sites on the human or animal heads during functional activation. These signal processing procedures resulted in a number of principal or independent components that could be attributed to functional activity but their physiological meaning remained unknown. On the other hand, the best physiological specificity is provided by broadband NIRS. Also, a comparison with functional magnetic resonance imaging (fMRI) allows determining the spatial origin of fNIRS signals. In this study we applied PCA and ICA to broadband NIRS data to distill the components correlating with the breath hold activation paradigm and compared them with the simultaneously acquired fMRI signals. Breath holding was used because it generates blood carbon dioxide (CO2) which increases the blood-oxygen-level-dependent (BOLD) signal as CO2 acts as a cerebral vasodilator. Vasodilation causes increased cerebral blood flow which washes deoxyhaemoglobin out of the cerebral capillary bed thus increasing both the cerebral blood volume and oxygenation. Although the original signals were quite diverse, we found very few different components which corresponded to fMRI signals at different locations in the brain and to different physiological chromophores.
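
    A minimal sketch of this kind of blind separation with scikit-learn's FastICA is shown below. The data are synthetic stand-ins: a block activation paradigm plus a cardiac-like oscillation and noise, mixed into 16 channels:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(1)
    t = np.linspace(0, 300, 3000)                         # 300 s of samples
    paradigm = (np.sin(2 * np.pi * t / 60) > 0.5) * 1.0   # toy breath-hold blocks
    sources = np.c_[paradigm,
                    np.sin(2 * np.pi * 1.1 * t),          # cardiac-like signal
                    rng.standard_normal(t.size)]          # broadband noise
    X = sources @ rng.random((3, 16))                     # mix into 16 channels

    ica = FastICA(n_components=3, random_state=0)
    ics = ica.fit_transform(X)                            # independent components
    # pick the component most correlated with the activation paradigm
    best = max(range(3),
               key=lambda i: abs(np.corrcoef(ics[:, i], paradigm)[0, 1]))
    ```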

  7. Spatially distributed effects of mental exhaustion on resting-state FMRI networks.

    PubMed

    Esposito, Fabrizio; Otto, Tobias; Zijlstra, Fred R H; Goebel, Rainer

    2014-01-01

    Brain activity during rest is spatially coherent over functional connectivity networks called resting-state networks. In resting-state functional magnetic resonance imaging, independent component analysis yields spatially distributed network representations reflecting distinct mental processes, such as intrinsic (default) or extrinsic (executive) attention, and sensory inhibition or excitation. These aspects can be related to different treatments or subjective experiences. Among these, exhaustion is a common psychological state induced by prolonged mental performance. Using repeated functional magnetic resonance imaging sessions and spatial independent component analysis, we explored the effect of several hours of sustained cognitive performance on the resting human brain. Resting-state functional magnetic resonance imaging was performed on the same healthy volunteers on two days, one with and one without an intensive psychological treatment (skill training and sustained practice with a flight simulator), with scans before, during and after the treatment. After each scan, subjects rated their level of exhaustion and performed an N-back task to evaluate any decrease in cognitive performance. Spatial maps of selected resting-state network components were statistically evaluated across time points to detect possible changes induced by the sustained mental performance. The intensive treatment had a significant effect on exhaustion and effort ratings, but no effect on N-back performance. Significant changes in the most exhausted state were observed in the early visual processing and anterior default mode networks (enhancement) and in the fronto-parietal executive networks (suppression), suggesting that mental exhaustion is associated with a more idling brain state in which internal attention processes are facilitated to the detriment of more extrinsic processes. The described application may inspire future indicators of the level of fatigue in the neural attention system.

  8. A symmetrical image encryption scheme in wavelet and time domain

    NASA Astrophysics Data System (ADS)

    Luo, Yuling; Du, Minghui; Liu, Junxiu

    2015-02-01

    There is increasing concern about effective storage and secure transmission of multimedia information over the Internet. A great variety of encryption schemes have been proposed to ensure information security during transmission, but most current approaches diffuse the data only in the spatial domain, which reduces storage efficiency. A lightweight image encryption strategy based on chaos is proposed in this paper, with the encryption process designed in the transform domain. The original image is decomposed into approximation and detail components using the integer wavelet transform (IWT); then, as the more important component of the image, the approximation coefficients are diffused by secret keys generated from a spatiotemporal chaotic system, followed by the inverse IWT to construct the diffused image; finally, a plain permutation driven by the logistic map is performed on the diffused image to further reduce the correlation between adjacent pixels. Experimental results and performance analysis demonstrate that the proposed scheme is an efficient, secure and robust encryption mechanism that also achieves effective coding compression, satisfying the storage requirements.
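
    The transform-domain diffusion can be sketched in a few lines: one level of integer Haar lifting along the rows, XOR of the approximation band with a chaotic keystream, and the exact inverse lifting. The logistic map below stands in for the paper's spatiotemporal chaotic system, and the final permutation stage is omitted:

    ```python
    import numpy as np

    def logistic_keystream(n, x0=0.61, mu=3.99):
        """Byte keystream from the logistic map x <- mu*x*(1-x)."""
        x, out = x0, np.empty(n, np.uint8)
        for i in range(n):
            x = mu * x * (1 - x)
            out[i] = int(x * 256) & 0xFF
        return out

    def iwt_rows(img):
        """One level of the integer Haar lifting transform along rows."""
        x = np.asarray(img, dtype=np.int64)
        x0, x1 = x[:, 0::2], x[:, 1::2]
        d = x1 - x0                 # detail band
        s = x0 + (d >> 1)           # integer approximation band
        return s, d

    def iwt_rows_inverse(s, d):
        x0 = s - (d >> 1)
        out = np.empty((s.shape[0], s.shape[1] * 2), np.int64)
        out[:, 0::2], out[:, 1::2] = x0, d + x0
        return out

    img = (np.random.default_rng(2).random((64, 64)) * 256).astype(np.uint8)
    s, d = iwt_rows(img)
    ks = logistic_keystream(s.size).reshape(s.shape)
    cipher = iwt_rows_inverse(s ^ ks, d)     # diffused (integer-valued) image

    s2, d2 = iwt_rows(cipher)                # lifting is exactly invertible,
    plain = iwt_rows_inverse(s2 ^ ks, d2)    # so XOR-ing again decrypts
    assert np.array_equal(plain, img.astype(np.int64))
    ```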

  9. Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.

    PubMed

    Xu, J

    2001-01-01

    In many morphological shape decomposition algorithms, either a shape can only be decomposed into shape components of extremely simple forms, or a time-consuming search process is employed to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives, chosen on the basis of shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions of the given shape at different levels of detail. The experiments show that the decomposition results produced by the algorithm seem to be in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations of the given shapes at low coding costs.
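
    For contrast, the "extremely simple forms" baseline the authors improve upon can be written down directly: decompose a shape into maximal disks centered on its medial axis, whose union reconstructs the shape. A scikit-image sketch of that baseline (exact up to discretization effects), not the paper's polygonal algorithm:

    ```python
    import numpy as np
    from scipy.ndimage import binary_dilation
    from skimage.morphology import disk, medial_axis

    def disk_decomposition(shape):
        """Decompose a binary shape into maximal disks: every skeleton pixel,
        dilated by a disk whose radius is its distance-transform value."""
        skel, dist = medial_axis(shape, return_distance=True)
        parts = []
        for r in np.unique(dist[skel].astype(int)):
            centers = skel & (dist.astype(int) == r)
            parts.append(binary_dilation(centers, structure=disk(r)))
        return parts

    shape = np.zeros((64, 64), bool)
    shape[10:40, 10:30] = True
    shape[25:55, 20:50] = True
    parts = disk_decomposition(shape)
    recon = np.logical_or.reduce(parts)   # ~= shape, up to discretization
    ```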

  10. Lensless on-chip imaging of cells provides a new tool for high-throughput cell-biology and medical diagnostics.

    PubMed

    Mudanyali, Onur; Erlinger, Anthony; Seo, Sungkyu; Su, Ting-Wei; Tseng, Derek; Ozcan, Aydogan

    2009-12-14

    Conventional optical microscopes image cells by use of objective lenses that work together with other lenses and optical components. While quite effective, this classical approach has certain limitations for miniaturization of the imaging platform to make it compatible with the advanced state of the art in microfluidics. In this report, we introduce experimental details of a lensless on-chip imaging concept termed LUCAS (Lensless Ultra-wide field-of-view Cell monitoring Array platform based on Shadow imaging) that does not require any microscope objectives or other bulky optical components to image a heterogeneous cell solution over an ultra-wide field of view that can span as large as approximately 18 cm². Moreover, unlike conventional microscopes, LUCAS can image a heterogeneous cell solution of interest over a depth of field of approximately 5 mm without the need for refocusing, which corresponds to a sample volume of up to approximately 9 mL. This imaging platform records the shadows (i.e., lensless digital holograms) of each cell of interest within its field of view, and automated digital processing of these cell shadows can determine the type, the count and the relative positions of cells within the solution. Because it does not require any bulky optical components or mechanical scanning stages, it offers a significantly miniaturized platform that at the same time reduces cost, which is especially important for point-of-care diagnostic tools. Furthermore, the imaging throughput of this platform is orders of magnitude better than that of conventional optical microscopes, which could be exceedingly valuable for high-throughput cell-biology experiments.

  11. Lensless On-chip Imaging of Cells Provides a New Tool for High-throughput Cell-Biology and Medical Diagnostics

    PubMed Central

    Mudanyali, Onur; Erlinger, Anthony; Seo, Sungkyu; Su, Ting-Wei; Tseng, Derek; Ozcan, Aydogan

    2009-01-01

    Conventional optical microscopes image cells by use of objective lenses that work together with other lenses and optical components. While quite effective, this classical approach has certain limitations for miniaturization of the imaging platform to make it compatible with the advanced state of the art in microfluidics. In this report, we introduce experimental details of a lensless on-chip imaging concept termed LUCAS (Lensless Ultra-wide field-of-view Cell monitoring Array platform based on Shadow imaging) that does not require any microscope objectives or other bulky optical components to image a heterogeneous cell solution over an ultra-wide field of view that can span as large as ~18 cm². Moreover, unlike conventional microscopes, LUCAS can image a heterogeneous cell solution of interest over a depth of field of ~5 mm without the need for refocusing, which corresponds to a sample volume of up to ~9 mL. This imaging platform records the shadows (i.e., lensless digital holograms) of each cell of interest within its field of view, and automated digital processing of these cell shadows can determine the type, the count and the relative positions of cells within the solution. Because it does not require any bulky optical components or mechanical scanning stages, it offers a significantly miniaturized platform that at the same time reduces cost, which is especially important for point-of-care diagnostic tools. Furthermore, the imaging throughput of this platform is orders of magnitude better than that of conventional optical microscopes, which could be exceedingly valuable for high-throughput cell-biology experiments. PMID:20010542

  12. Asteroid detection using a single multi-wavelength CCD scan

    NASA Astrophysics Data System (ADS)

    Melton, Jonathan

    2016-09-01

    Asteroid detection is a topic of great interest due to the possibility of diverting potentially dangerous asteroids or mining potentially lucrative ones. Currently, asteroid detection is generally performed by taking multiple images of the same patch of sky separated by 10-15 minutes and then subtracting the images to find movement. However, this is time-consuming because the same area must be revisited multiple times per night. This paper describes an algorithm that can detect asteroids using a single CCD camera scan, thus cutting down on the time and cost of an asteroid survey. The algorithm is based on the fact that some telescopes scan the sky at multiple wavelengths with a small time separation between the wavelength components. As a result, an object moving with sufficient speed will appear in different places in different wavelength components of the same image. Using image processing techniques, we detect the centroids of points of light in the first component and compare these positions to the centroids in the other components using a nearest neighbor algorithm. The algorithm was tested on a set of 49 images obtained from the Sloan telescope in New Mexico and found 100% of the known asteroids with only 3 false positives. This algorithm has the advantage of decreasing the amount of time required to perform an asteroid scan, thus allowing more sky to be scanned in the same amount of time or freeing the telescope for other pursuits.
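
    The per-component matching step can be sketched with scipy; the detection threshold and shift limits below are illustrative assumptions, not the paper's parameters:

    ```python
    import numpy as np
    from scipy import ndimage
    from scipy.spatial import cKDTree

    def centroids(band, threshold):
        """Centroids of bright sources in one wavelength component."""
        labels, n = ndimage.label(band > threshold)
        return np.array(ndimage.center_of_mass(band, labels, range(1, n + 1)))

    def moving_candidates(band1, band2, threshold=50.0, max_shift=5.0):
        """Nearest-neighbour matching between two components: a source whose
        nearest counterpart sits a small but nonzero distance away is a
        candidate mover."""
        c1 = centroids(band1, threshold)
        c2 = centroids(band2, threshold)
        dist, idx = cKDTree(c2).query(c1)
        moved = (dist > 0.5) & (dist < max_shift)   # shifted, not mismatched
        return c1[moved], c2[idx[moved]]
    ```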

  13. National Land Imaging Requirements (NLIR) Pilot Project summary report: summary of moderate resolution imaging user requirements

    USGS Publications Warehouse

    Vadnais, Carolyn; Stensaas, Gregory

    2014-01-01

    Under the National Land Imaging Requirements (NLIR) Project, the U.S. Geological Survey (USGS) is developing a functional capability to obtain, characterize, manage, maintain and prioritize all Earth observing (EO) land remote sensing user requirements. The goal is a better understanding of community needs that can be supported with land remote sensing resources, and a means to match needs with appropriate solutions in an effective and efficient way. The NLIR Project is composed of two components. The first component is focused on the development of the Earth Observation Requirements Evaluation System (EORES) to capture, store and analyze user requirements, whereas the second component is the mechanism and processes to elicit and document the user requirements that will populate the EORES. To develop the second component, the requirements elicitation methodology was exercised and refined through a pilot project conducted from June to September 2013. The pilot project focused specifically on applications and user requirements for moderate resolution imagery (5–120 meter resolution) as the test case for requirements development. The purpose of this summary report is to provide a high-level overview of the requirements elicitation process that was exercised through the pilot project and an early analysis of the moderate resolution imaging user requirements acquired to date to support ongoing USGS sustainable land imaging study needs. The pilot project engaged a limited set of Federal Government users from the operational and research communities, and the information captured therefore represents only a subset of all land imaging user requirements. However, based on a comparison of results, trends, and analysis, the pilot captured a strong baseline of typical application areas and user needs for moderate resolution imagery. Because these results are preliminary and represent only a sample of users and application areas, the information from this report should only be used to indicate general user needs for the applications covered. Users of the information are cautioned that use of specific numeric results may be inappropriate without additional research. Any information used or cited from this report should specifically be cited as preliminary findings.

  14. apART: system for the acquisition, processing, archiving, and retrieval of digital images in an open, distributed imaging environment

    NASA Astrophysics Data System (ADS)

    Schneider, Uwe; Strack, Ruediger

    1992-04-01

    apART reflects the structure of an open, distributed environment. In line with the general trend in imaging, network-capable, general-purpose workstations with open system image communication and image input capabilities are used. Several heterogeneous components like CCD cameras, slide scanners, and image archives can be accessed. The system is driven by an object-oriented user interface where devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives. These operations are processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution-dependent operations is enhanced significantly during interaction. An object-oriented database system (APRIL), which can be browsed, is integrated into the system. Attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general-purpose workstation, including Ethernet, magneto-optical disc, etc., as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support of different image interchange formats (GIF, TIFF, IIF, etc.); and consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.
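
    The audit-trail mechanism amounts to recording each interactive operation applied to the low-resolution preview and replaying the log on the full-resolution original. A minimal sketch, assuming resolution-independent operations (the class and operation names are ours, not apART's API):

    ```python
    import numpy as np

    class AuditTrail:
        """Record operations run on a preview; replay them later on the
        original high-resolution image."""
        def __init__(self):
            self.ops = []

        def apply(self, image, op, **params):
            self.ops.append((op, params))   # log the operation
            return op(image, **params)      # run it on the preview now

        def replay(self, image):
            for op, params in self.ops:     # deferred full-resolution pass
                image = op(image, **params)
            return image

    def flip_vertical(img):
        return img[::-1]

    def gamma(img, g):
        return ((img / 255.0) ** g * 255).astype(img.dtype)

    trail = AuditTrail()
    preview = np.zeros((64, 64), np.uint8)          # low-resolution derivative
    preview = trail.apply(preview, flip_vertical)
    preview = trail.apply(preview, gamma, g=0.8)
    full = trail.replay(np.zeros((4096, 4096), np.uint8))   # original raster
    ```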

  15. Digital photography for the light microscope: results with a gated, video-rate CCD camera and NIH-image software.

    PubMed

    Shaw, S L; Salmon, E D; Quatrano, R S

    1995-12-01

    In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated, video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images and, with gated on-chip integration, can also record low-light-level fluorescence images. The basic components of the digital photography system are described, and examples of fluorescence and bright-field micrographs are presented. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.

  16. Detection of micro solder balls using active thermography and probabilistic neural network

    NASA Astrophysics Data System (ADS)

    He, Zhenzhi; Wei, Li; Shao, Minghui; Lu, Xingning

    2017-03-01

    Micro solder balls/bumps are widely used in electronic packaging. Inspecting these structures is challenging because the solder balls/bumps are often embedded between the component and the substrate, especially in flip-chip packaging. In this paper, a detection method for micro solder balls/bumps based on active thermography and a probabilistic neural network is investigated. A VH680 infrared imager is used to capture thermal images of the test vehicle, SFA10 packages. The temperature curves are processed with a moving-average technique to remove peak noise, and principal component analysis (PCA) is adopted to reconstruct the thermal images. Missing solder balls can be recognized explicitly in the second principal component image. A probabilistic neural network (PNN) is then established to identify defective bumps intelligently. The hot spots corresponding to the solder balls are segmented from the PCA-reconstructed image, and statistical parameters are calculated. To characterize the thermal properties of solder bumps quantitatively, three representative features are selected and used as the input vector for PNN clustering. The results show that the actual outputs and the expected outputs are consistent in the identification of the missing solder balls, and all the bumps were recognized accurately, which demonstrates the viability of the PNN for effective defect inspection in high-density microelectronic packaging.
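
    A PNN is essentially a kernel density classifier: one Gaussian kernel per training pattern, class score equal to the mean kernel activation, prediction by argmax. A compact numpy sketch (the three features and the smoothing width are invented placeholders):

    ```python
    import numpy as np

    def pnn_predict(X_train, y_train, X_test, sigma=0.1):
        """Probabilistic neural network: score each class by the mean
        Gaussian kernel activation over its training patterns."""
        classes = np.unique(y_train)
        scores = np.empty((len(X_test), len(classes)))
        for j, c in enumerate(classes):
            P = X_train[y_train == c]                          # class patterns
            d2 = ((X_test[:, None, :] - P[None, :, :]) ** 2).sum(-1)
            scores[:, j] = np.exp(-d2 / (2 * sigma ** 2)).mean(-1)
        return classes[np.argmax(scores, axis=1)]

    # three features per bump (e.g. mean, peak, area of its PCA hot spot)
    rng = np.random.default_rng(3)
    good = rng.normal(0.8, 0.05, (40, 3))
    missed = rng.normal(0.3, 0.05, (12, 3))
    X = np.vstack([good, missed])
    y = np.array([0] * 40 + [1] * 12)
    print(pnn_predict(X, y, rng.normal(0.3, 0.05, (3, 3))))    # -> [1 1 1]
    ```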

  17. Quantum Color Image Encryption Algorithm Based on A Hyper-Chaotic System and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong

    2016-12-01

    To improve on the slow processing speed of classical image encryption algorithms and enhance the security of private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which sequences generated by Chen's hyper-chaotic system are used to scramble and diffuse the three components of the original color image. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses a large key space to resist illegal attacks, sensitive dependence on initial keys, a uniform distribution of gray values in the encrypted image, and weak correlation between adjacent pixels in the cipher-image.
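
    For readers unfamiliar with the transform: the quantum Fourier transform on n qubits is the unitary F[j,k] = ω^(jk)/√N with ω = e^(2πi/N) and N = 2^n. A classical simulation of that unitary is a few lines of numpy (state preparation and the chaotic scrambling/diffusion stages are not shown):

    ```python
    import numpy as np

    def qft_matrix(n_qubits):
        """Unitary of the quantum Fourier transform on n qubits:
        F[j, k] = omega**(j*k) / sqrt(N), omega = exp(2*pi*i/N)."""
        N = 2 ** n_qubits
        j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

    F = qft_matrix(3)
    assert np.allclose(F @ F.conj().T, np.eye(8))   # unitarity check
    ```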

  18. Teaching resources for dermatology on the WWW--quiz system and dynamic lecture scripts using a HTTP-database demon.

    PubMed Central

    Bittorf, A.; Diepgen, T. L.

    1996-01-01

    The World Wide Web (WWW) is becoming the major way of acquiring information in all scientific disciplines as well as in business. It is very well suited to the fast distribution and exchange of up-to-date teaching resources. However, to date most teaching applications on the Web do not use its full power by integrating interactive components. We have set up a computer based training (CBT) framework for Dermatology, which consists of dynamic lecture scripts, case reports, an atlas and a quiz system. All these components rely heavily on an underlying image database that permits the creation of dynamic documents. We used a demon process that keeps the database open and can be accessed via HTTP, achieving better performance and avoiding the overhead of starting CGI processes. The result of our evaluation was very encouraging. PMID:8947625

  19. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. Accordingly, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  20. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. Accordingly, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system.

  1. Macula segmentation and fovea localization employing image processing and heuristic based clustering for automated retinal screening.

    PubMed

    R, GeethaRamani; Balasubramanian, Lakshmi

    2018-07-01

    Macula segmentation and fovea localization are among the primary tasks in retinal analysis, as these structures are responsible for detailed vision. Existing approaches require the segmentation of other retinal structures, viz. the optic disc and blood vessels, for this purpose. This work avoids relying on knowledge of other retinal structures and applies data mining techniques to segment the macula. An unsupervised clustering algorithm is exploited for this purpose. The selection of initial cluster centres has a great impact on the performance of clustering algorithms; the proposed methodology therefore incorporates a heuristic based clustering in which the initial centres are selected based on measures describing the statistical distribution of the data. The initial phase of the proposed framework includes image cropping, green channel extraction, contrast enhancement and application of mathematical closing. The pre-processed image is then subjected to heuristic based clustering, yielding a binary map. The binary image is post-processed to eliminate unwanted components. Finally, the component with the minimum intensity is selected as the macula, and its centre constitutes the fovea. The proposed approach outperforms existing works, with 100% of HRF, 100% of DRIVE, 96.92% of DIARETDB0, 97.75% of DIARETDB1, 98.81% of HEI-MED, 90% of STARE and 99.33% of MESSIDOR images satisfying the 1R criterion, a standard adopted for evaluating the performance of macula and fovea identification. The proposed system thus helps ophthalmologists identify the macula, thereby facilitating the detection of any abnormality present within the macular region. Copyright © 2018 Elsevier B.V. All rights reserved.
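
    The heuristic initialization idea can be sketched as k-means on pixel intensities with the initial centres placed at fixed percentiles of the intensity distribution rather than at random; the percentile rule below is our assumption, not the paper's exact statistic:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def heuristic_kmeans(gray, k=3):
        """Cluster pixel intensities with deterministic, statistics-driven
        initial centres; return a binary map of the darkest cluster (the
        macula candidate in the pre-processed green channel)."""
        x = gray.reshape(-1, 1).astype(float)
        init = np.percentile(x, np.linspace(5, 95, k)).reshape(-1, 1)
        km = KMeans(n_clusters=k, init=init, n_init=1).fit(x)
        labels = km.labels_.reshape(gray.shape)
        darkest = np.argmin(km.cluster_centers_)   # macula: minimum intensity
        return (labels == darkest).astype(np.uint8)
    ```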

  2. Color enhancement of landsat agricultural imagery: JPL LACIE image processing support task

    NASA Technical Reports Server (NTRS)

    Madura, D. P.; Soha, J. M.; Green, W. B.; Wherry, D. B.; Lewis, S. D.

    1978-01-01

    Color enhancement techniques were applied to LACIE LANDSAT segments to determine whether such enhancement can assist analysts in crop identification. The procedure increases the color range by removing correlation between components: first, a principal component transformation is performed, followed by contrast enhancement to equalize component variances, followed by an inverse transformation to restore familiar color relationships. Filtering was applied to lower-order components to reduce color speckle in the enhanced products. The use of single-acquisition and multiple-acquisition statistics to control the enhancement was compared, and the effects of normalization were investigated. Evaluation is left to LACIE personnel.
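
    The procedure described here is the classic decorrelation stretch; a compact numpy sketch (the target standard deviation is arbitrary, and the speckle filtering of lower-order components is omitted):

    ```python
    import numpy as np

    def decorrelation_stretch(img, target_sigma=50.0):
        """Forward principal-component transform, equalize component
        variances, inverse transform back to the original band space."""
        flat = img.reshape(-1, img.shape[-1]).astype(float)
        mean = flat.mean(axis=0)
        cov = np.cov(flat - mean, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)            # principal axes
        pcs = (flat - mean) @ eigvecs                     # rotate to PC space
        pcs *= target_sigma / np.sqrt(eigvals + 1e-12)    # equalize variances
        out = pcs @ eigvecs.T + mean                      # restore band space
        return out.reshape(img.shape)
    ```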

  3. High resolution imaging of latent fingerprints by localized corrosion on brass surfaces.

    PubMed

    Goddard, Alex J; Hillman, A Robert; Bond, John W

    2010-01-01

    The Atomic Force Microscope (AFM) is capable of imaging fingerprint ridges on polished brass substrates at an unprecedented level of detail. While exposure to elevated humidity at ambient or slightly raised temperatures does not change the image appreciably, subsequent brief heating in a flame results in complete loss of the sweat deposit and the appearance of pits and trenches. Localized elemental analysis (using EDAX, coupled with SEM imaging) shows the presence of the constituents of salt in the initial deposits. Together with water and atmospheric oxygen--and with thermal enhancement--these are capable of driving a surface corrosion process. This process is sufficiently localized that it has the potential to generate a durable negative topographical image of the fingerprint. AFM examination of surface regions between ridges revealed small deposits (probably microscopic "spatter" of sweat components or transferred particulates) that may ultimately limit the level of ridge detail analysis.

  4. Heuristic Enhancement of Magneto-Optical Images for NDE

    NASA Astrophysics Data System (ADS)

    Cacciola, Matteo; Megali, Giuseppe; Pellicanò, Diego; Calcagno, Salvatore; Versaci, Mario; Morabito, FrancescoCarlo

    2010-12-01

    The quality of measurements in nondestructive testing and evaluation plays a key role in assessing the reliability of different inspection techniques. Each technique, like the magneto-optic imaging treated here, is affected by particular types of noise related to the specific device used for acquisition. Therefore, the design of ever more accurate image processing is often required by relevant applications, for instance, in implementing integrated solutions for flaw detection and characterization. The aim of this paper is to propose a preprocessing procedure based on independent component analysis (ICA) to ease the detection of rivets and/or flaws in the specimens under test. A comparison of the proposed approach with several other advanced image processing methodologies used for denoising magneto-optic images (MOIs) is carried out, in order to show the advantages and weaknesses of ICA in improving the accuracy and performance of rivet/flaw detection.

  5. EEG artifact elimination by extraction of ICA-component features using image processing algorithms.

    PubMed

    Radüntz, T; Scouten, J; Hochmuth, O; Meffert, B

    2015-03-30

    Artifact rejection is a central issue when dealing with electroencephalogram recordings. Although independent component analysis (ICA) separates data into linearly independent components (ICs), the classification of these components as artifact or EEG signal still requires visual inspection by experts. In this paper, we achieve automated artifact elimination using linear discriminant analysis (LDA) for the classification of feature vectors extracted from ICA components via image processing algorithms. We compare the performance of this automated classifier to visual classification by experts and identify range filtering as a feature extraction method with great potential for automated IC artifact recognition (accuracy rate 88%). We obtain almost the same level of recognition performance for geometric features and local binary pattern (LBP) features. Compared to existing automated solutions, the proposed method has two main advantages: first, it does not depend on direct recording of artifact signals, which then, e.g., have to be subtracted from the contaminated EEG; second, it is not limited to a specific number or type of artifact. In summary, the present method is an automatic, reliable, real-time capable and practical tool that reduces the time-intensive manual selection of ICs for artifact removal. The results are very promising despite the relatively small channel resolution of 25 electrodes. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
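
    The final classification stage maps directly onto scikit-learn; the sketch below trains LDA on feature vectors from synthetic placeholders standing in for the paper's range-filter features extracted from IC topography maps:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Feature vectors extracted from IC topography images via image
    # processing (range filtering, geometric or LBP features); the data
    # here are synthetic placeholders, not real EEG features.
    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(0.0, 1.0, (60, 10)),    # brain-signal ICs
                   rng.normal(1.5, 1.0, (60, 10))])   # artifact ICs
    y = np.array([0] * 60 + [1] * 60)

    lda = LinearDiscriminantAnalysis().fit(X[::2], y[::2])   # train on half
    accuracy = lda.score(X[1::2], y[1::2])                   # test on the rest
    ```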

  6. Flexible distributed architecture for semiconductor process control and experimentation

    NASA Astrophysics Data System (ADS)

    Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.

    1997-01-01

    Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD interferometry based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that utilizes the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: the specific implementation of any one task does not restrict the implementation of another. The low-level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server manages connections from internet/intranet (web) based clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers independent of hardware or software platform.
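
    A predefined socket message set of this kind can be as simple as newline-delimited JSON over TCP. The sketch below shows one such exchange; the message schema, port, and example request are our assumptions, not the MIT system's actual protocol:

    ```python
    import json
    import socket

    def send_message(host, port, msg_type, payload):
        """Send one newline-delimited JSON message and return the reply."""
        with socket.create_connection((host, port), timeout=5) as sock:
            line = json.dumps({"type": msg_type, "payload": payload}) + "\n"
            sock.sendall(line.encode())
            reply = sock.makefile().readline()
        return json.loads(reply)

    # e.g. the cell controller asking the sensor controller for an etch
    # depth estimate from the latest CCD interferometry frame:
    # send_message("sensor-host", 5000, "GET_ETCH_DEPTH", {"wafer": 42})
    ```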

  7. MO-DE-207-04: Imaging educational program on solutions to common pediatric imaging challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamurthy, R.

    This imaging educational program will focus on solutions to common pediatric imaging challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children’s hospitals. The educational program will begin with a detailed discussion of the optimal configuration of fluoroscopes for general pediatric procedures. Following this introduction will be a focused discussion of the utility of Dual Energy CT for imaging children. The third lecture will address the substantial challenge of obtaining consistent image post-processing in pediatric digital radiography. The fourth and final lecture will address best practices in pediatric MRI, including a discussion of ancillary methods to reduce sedation and anesthesia rates. Learning Objectives: To learn techniques for optimizing radiation dose and image quality in pediatric fluoroscopy. To become familiar with the unique challenges and applications of Dual Energy CT in pediatric imaging. To learn solutions for consistent post-processing quality in pediatric digital radiography. To understand the key components of an effective MRI safety and quality program for the pediatric practice.

  8. New opportunities for quality enhancing of images captured by passive THz camera

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2014-10-01

    As is well known, a passive THz camera allows seeing concealed objects without contact with a person, and the camera is not dangerous to a person. Obviously, the efficiency of using a passive THz camera depends on its temperature resolution. This characteristic determines the detection capabilities for concealed objects: the minimal size of the object, the maximal detection distance, and the image quality. Computer processing of THz images may improve image quality many times over without any additional engineering effort; developing modern computer code for application to THz images is therefore an urgent problem. Using appropriate new methods, one may expect a temperature resolution that allows seeing a banknote in a person's pocket without any physical contact. Modern algorithms for computer processing of THz images also allow seeing objects inside the human body using a temperature trace on the human skin. This circumstance essentially enhances the opportunities for passive THz camera applications in counterterrorism. We demonstrate the capabilities achieved at present for detecting both concealed objects and clothing components through computer processing of images captured by passive THz cameras manufactured by various companies. Another important result discussed in the paper is the observation of both THz radiation emitted by an incandescent lamp and an image reflected from a ceramic floor plate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for the computer processing of the THz images considered in this paper were developed by the Russian members of the author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.

  9. Software and Algorithms for Biomedical Image Data Processing and Visualization

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Lambert, James; Lam, Raymond

    2004-01-01

    New software equipped with novel image processing algorithms and graphical-user-interface (GUI) tools has been designed for automated analysis and processing of large amounts of biomedical image data. The software, called PlaqTrak, has been specifically used for the analysis of plaque on the teeth of patients. New algorithms have been developed and implemented to segment teeth of interest from the surrounding gum, and a real-time image-based morphing procedure is used to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from the surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The PlaqTrak system integrates these components into a single software suite with an easy-to-use GUI (see Figure 1) that allows users to do an end-to-end run of a patient's record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image. The automated and accurate processing of the captured images to segment each tooth [see Figure 2(a)] and then detect plaque on a tooth-by-tooth basis is a critical component of the PlaqTrak system for conducting clinical trials and analysis with minimal human intervention. These features offer distinct advantages over other competing systems that analyze groups of teeth or synthetic teeth. PlaqTrak divides each segmented tooth into eight regions using an advanced graphics morphing procedure [see results on a chipped tooth in Figure 2(b)], and a pattern recognition classifier is then used to locate plaque [red regions in Figure 2(d)] and enamel regions. The morphing allows analysis within regions of teeth, thereby facilitating detailed statistical analysis such as the amount of plaque present on the biting surfaces of teeth. This software system is applicable to a host of biomedical applications, such as cell analysis and life detection, or robotic applications, such as product inspection or assembly of parts in space and industry.

  10. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural and convenient advantages, has received more and more attention. This paper investigates a face recognition system comprising face detection, feature extraction and face recognition, mainly by researching the related theory and key technology of various preprocessing methods in the face detection process; using the KPCA method, it focuses on the different recognition results obtained with different preprocessing methods. We choose the YCbCr color space for skin segmentation and integral projection for face location. We use erosion and dilation (the opening and closing operations) and an illumination compensation method to preprocess the face images, and then use a face recognition method based on kernel principal component analysis for analysis and research; the experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that integrating the kernel method into the PCA algorithm makes, under certain conditions, the extracted features represent the original image information better owing to the nonlinear feature extraction, which can yield a higher recognition rate. In the image preprocessing stage, we found that different operations on the images may produce different results, and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the power of the polynomial kernel function can affect the recognition result.
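
    The KPCA feature extraction step maps to a few lines of scikit-learn; the polynomial degree is the "power of the polynomial function" the abstract reports as affecting recognition rate. The face data below are random stand-ins for preprocessed images:

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(5)
    faces = rng.random((100, 32 * 32))           # flattened preprocessed faces

    # KPCA with a polynomial kernel; degree is the tunable kernel power
    kpca = KernelPCA(n_components=40, kernel="poly", degree=3)
    features = kpca.fit_transform(faces)         # nonlinear features (gallery)

    # nearest-neighbour recognition of a probe image in feature space
    probe = kpca.transform(rng.random((1, 32 * 32)))
    nearest = np.argmin(np.linalg.norm(features - probe, axis=1))
    ```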

  11. In-vivo segmentation and quantification of coronary lesions by optical coherence tomography images for a lesion type definition and stenosis grading.

    PubMed

    Celi, Simona; Berti, Sergio

    2014-10-01

    Optical coherence tomography (OCT) is a catheter-based medical imaging technique that produces cross-sectional images of blood vessels. This technique is particularly useful for studying coronary atherosclerosis. In this paper, we present a new framework that allows segmentation and quantification of OCT images of coronary arteries to define the plaque type and stenosis grading. These analyses are usually carried out on-line on the OCT workstation, where measuring is mainly operator-dependent and mouse-based. The aim of this program is to simplify and improve the processing of OCT images for morphometric investigations and to present a fast procedure for obtaining 3D geometrical models that can also be used for external purposes, such as finite element simulations. The main phases of our toolbox are the lumen segmentation and the identification of the main tissues in the artery wall. We validated the proposed method against identification and segmentation performed manually by expert OCT readers. The method was evaluated on ten datasets from clinical routine, and the validation was performed on 210 images randomly extracted from the pullbacks. Our results show that automated segmentation of the vessel and of the tissue components is possible off-line with a precision that is comparable to manual segmentation for the tissue components and to the proprietary OCT console for the lumen segmentation. Several OCT sections have been processed to provide clinical outcomes. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Automatic 3D segmentation of multiphoton images: a key step for the quantification of human skin.

    PubMed

    Decencière, Etienne; Tancrède-Bohin, Emmanuelle; Dokládal, Petr; Koudoro, Serge; Pena, Ana-Maria; Baldeweck, Thérèse

    2013-05-01

    Multiphoton microscopy has emerged in the past decade as a useful noninvasive imaging technique for in vivo human skin characterization. However, it has not been used until now in evaluation clinical trials, mainly because of the lack of specific image processing tools that would allow the investigator to extract pertinent quantitative three-dimensional (3D) information from the different skin components. We propose a 3D automatic segmentation method for multiphoton images, which is a key step for epidermis and dermis quantification. This method, based on the morphological watershed and graph cuts algorithms, takes into account the real shape of the skin surface and of the dermal-epidermal junction, and allows separating the epidermis and the superficial dermis in 3D. The automatic segmentation method and the associated quantitative measurements have been developed and validated on a clinical database designed for aging characterization. The segmentation achieves its goal of epidermis-dermis separation and allows quantitative measurements inside the different skin compartments with sufficient relevance. This study shows that multiphoton microscopy associated with specific image processing tools provides access to new quantitative measurements on the various skin components. The proposed 3D automatic segmentation method will contribute to building a powerful tool for characterizing human skin condition. To our knowledge, this is the first 3D approach to the segmentation and quantification of these original images. © 2013 John Wiley & Sons A/S. Published by Blackwell Publishing Ltd.

  13. The interactive processes of accommodation and vergence.

    PubMed

    Semmlow, J L; Bérard, P V; Vercher, J L; Putteman, A; Gauthier, G M

    1994-01-01

    A near target generates two different, though related, stimuli: image disparity and image blur. Fixation of that near target evokes three motor responses: the so-called oculomotor "near triad". It has long been known that disparity and blur stimuli are each capable of independently generating all three responses, and a recent theory of near-triad control (the Dual Interactive Theory) describes how these stimulus components normally work together in aid of near vision. However, this theory also indicates that when the system becomes unbalanced, as with the high AC/A ratios of some accommodative esotropes, the two components become antagonistic. In this situation, the interaction between the blur- and disparity-driven components exaggerates the imbalance created in the vergence motor output. Conversely, restoration is enhanced when the AC/A ratio is effectively reduced surgically.

  14. Characterization of selected elementary motion detector cells to image primitives.

    PubMed

    Benson, Leslie A; Barrett, Steven F; Wright, Cameron H G

    2008-01-01

    Developing a visual sensing system, complete with motion-processing hardware and software, would have many applications in current technology. It could be mounted on many autonomous vehicles to provide information about the navigational environment, as well as obstacle-avoidance features. Incorporating the motion-processing capabilities into the sensor requires a new approach to the algorithm implementation. This research, like that of many others, has turned to nature for inspiration. Elementary motion detector (EMD) cells are involved in a biological preprocessing network that provides information to the motion-processing lobes of the housefly Musca domestica. This paper describes the response of the photoreceptor inputs to the EMDs. The inputs to the EMD components are tested as they are stimulated with varying image primitives. This is the first of many steps in characterizing the EMD response to image primitives.
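
    The EMD referenced above has a standard textbook form, the Hassenstein-Reichardt correlator: delay one photoreceptor's signal, correlate it with its neighbour's, and subtract the mirrored product. The sketch below implements that generic model in NumPy; it is not the specific sensor hardware described in the paper:

    ```python
    # Classic Hassenstein-Reichardt elementary motion detector (EMD)
    # in discrete time.
    import numpy as np

    def reichardt_emd(left, right, tau=5):
        """left, right: 1-D photoreceptor signals; tau: delay in samples."""
        def delay(sig, n):
            out = np.zeros_like(sig)
            out[n:] = sig[:-n]
            return out
        # Opponent subtraction yields a signed, direction-selective output.
        return delay(left, tau) * right - left * delay(right, tau)

    # A rightward-moving pattern (the right receptor sees the left
    # receptor's signal delayed) gives a positive mean response:
    # t = np.arange(200); s = np.sin(0.2 * t)
    # print(reichardt_emd(s, np.roll(s, 3), tau=3).mean())
    ```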

  15. The diffusion tensor imaging (DTI) component of the NIH MRI study of normal brain development (PedsDTI).

    PubMed

    Walker, Lindsay; Chang, Lin-Ching; Nayak, Amritha; Irfanoglu, M Okan; Botteron, Kelly N; McCracken, James; McKinstry, Robert C; Rivkin, Michael J; Wang, Dah-Jyuu; Rumsey, Judith; Pierpaoli, Carlo

    2016-01-01

    The NIH MRI Study of normal brain development sought to characterize typical brain development in a population of infants, toddlers, children and adolescents/young adults, covering the socio-economic and ethnic diversity of the population of the United States. The study began in 1999, with data collection commencing in 2001 and concluding in 2007. The study was designed with the final goal of providing a controlled-access database, open to qualified researchers and clinicians, which could serve as a powerful tool for elucidating typical brain development and identifying deviations associated with brain-based disorders and diseases, and as a resource for developing computational methods and image processing tools. This paper focuses on the DTI component of the NIH MRI study of normal brain development. In this work, we describe the DTI data acquisition protocols, data processing steps, quality assessment procedures, and data included in the database, along with database access requirements. For more details, visit http://www.pediatricmri.nih.gov. This longitudinal DTI dataset includes raw and processed diffusion data from 498 low-resolution (3 mm) DTI datasets from 274 unique subjects, and 193 high-resolution (2.5 mm) DTI datasets from 152 unique subjects. Subjects range in age from 10 days (from date of birth) through 22 years. Additionally, a set of age-specific DTI templates is included. This forms one component of the larger NIH MRI study of normal brain development, which also includes T1-, T2-, proton density-weighted, and proton magnetic resonance spectroscopy (MRS) imaging data, and demographic, clinical and behavioral data. Published by Elsevier Inc.

  16. Detection of defects in laser powder deposition (LPD) components by pulsed laser transient thermography

    NASA Astrophysics Data System (ADS)

    Santospirito, S. P.; Słyk, Kamil; Luo, Bin; Łopatka, Rafał; Gilmour, Oliver; Rudlin, John

    2013-05-01

    Detection of defects in Laser Powder Deposition (LPD) produced components has been achieved by laser thermography. An automatic in-process NDT defect detection software system has been developed for the analysis of laser thermography, to automatically detect, reliably measure and then sentence defects in individual beads of LPD components. A deposition path profile definition has been introduced so that all laser powder deposition beads can be modeled, and the inspection system has been developed to automatically generate an optimized inspection plan in which sampling images follow the deposition track, and to automatically control and communicate with robot arms, the source laser and cameras to implement image acquisition. Algorithms were developed so that defect sizes can be correctly evaluated, and these have been confirmed using test samples. Individual inspection images can also be stitched together for a single bead, a layer of beads or multiple layers of beads, so that defects can be mapped through the additive process. A mathematical model was built to analyze and evaluate the movement of heat throughout the inspected bead. Inspection processes were developed, and positional and temporal gradient algorithms have been used to measure the flaw sizes. Defect analysis is then performed to determine whether the defect(s) can be further classified (crack, lack of fusion, porosity), and the sentencing engine then compares the most significant defect or group of defects against the acceptance criteria - independent of human decisions. Testing on manufactured defects from the EC-funded INTRAPID project successfully detected and correctly sentenced all samples.

  17. Multispectral Digital Image Analysis of Varved Sediments in Thin Sections

    NASA Astrophysics Data System (ADS)

    Jäger, K.; Rein, B.; Dietrich, S.

    2006-12-01

    An update of the recently developed method COMPONENTS (Rein, 2003; Rein & Jäger, subm.) for the discrimination of sediment components in thin sections is presented here. COMPONENTS uses a six-band (multispectral) image analysis. To derive six-band spectral information on the sediments, thin sections are scanned with a digital camera mounted on a polarizing microscope. The thin sections are scanned twice, under polarized and under unpolarized plain light. During each run, RGB images are acquired which are subsequently stacked into a six-band file. The first three bands (Blue=1, Green=2, Red=3) result from the spectral behaviour in the blue, green and red bands under unpolarized light, and bands 4 to 6 (Blue=4, Green=5, Red=6) from the polarized-light run. The next step is the discrimination of the sediment components by their transmission behaviour. Automatic classification algorithms broadly used in remote-sensing applications cannot be applied, because unavoidable variations in sediment particle or thin-section thickness change the absolute grey values of the sediment components. Thus, we use an approach based on band ratios, also known as indices. By using band ratios, the grey values measured in different bands are normalized against each other and illumination variations (e.g. thickness variations) are eliminated. By combining specific ratios we are able to detect all seven major components in the investigated sediments (carbonates, diatoms, fine clastic material, plant rests, pyrite, quartz and resin). Then, the classification results (compositional maps) are validated. Although the automatic and the analogue classifications show high concordance, some systematic errors could be identified. For example, the transition zone between the sediment and resin-filled cracks is classified as fine clastic material, and very coarse carbonates are partly classified as quartz, because coarse carbonates can be very bright and their spectra partly saturated (grey value 255). With reduced illumination intensity, "carbonate image pixels" become unsaturated and can be well distinguished from quartz grains. During the evaluation process we identify all falsely classified areas using neighbourhood matrices and reclassify them. Finally, we use filter techniques to derive downcore component frequencies from the classified thin-section images for virtual samples of variable thickness. The filter conducts neighbourhood analyses: after filtering, each pixel of the filtered image carries the frequency of any given component in a defined neighbourhood around it (virtual sampling). References: Rein, B. (2003) In-situ Reflektionsspektroskopie und digitale Bildanalyse - Gewinnung hochauflösender Paläoumweltdaten mit fernerkundlichen Methoden. Habilitation Thesis, Univ. Mainz, 104 p. Jäger, K. and Rein, B. (2005) Identifying varve components using digital image analysis techniques. In: Haas, H., Ramseyer, K. and Schlunegger, F. (eds.), Sediment 2005 (18th-20th July 2005), Schriftenreihe der Deutschen Gesellschaft für Geowissenschaften, 38, p. 81. Rein, B. and Jäger, K. (subm.) COMPONENTS - Sediment component detection in thin sections by multispectral digital image analysis. Sedimentology.
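
    As a hedged sketch of the band-ratio idea (the specific component rules belong to Rein & Jäger and are not reproduced here), the NumPy fragment below stacks the two RGB scans into a six-band image and computes a ratio that cancels thickness-driven brightness variation; the example threshold is invented for illustration:

    ```python
    # Six-band stack and band-ratio classification sketch. Bands 1-3:
    # unpolarized B, G, R; bands 4-6: polarized B, G, R (as in the text).
    import numpy as np

    def six_band_stack(rgb_plain, rgb_polarized):
        """Inputs: (H, W, 3) arrays in RGB channel order."""
        # Reverse channels to B, G, R so band numbering matches the text.
        return np.dstack([rgb_plain[..., ::-1],
                          rgb_polarized[..., ::-1]]).astype(float)

    def band_ratio(stack, a, b, eps=1e-6):
        # Ratio of band a to band b (1-indexed as in the text); the
        # ratio normalizes away thickness-driven brightness changes.
        return stack[..., a - 1] / (stack[..., b - 1] + eps)

    # Hypothetical rule: flag pixels whose polarized/unpolarized green
    # ratio exceeds 1.2 as a candidate component map.
    # mask = band_ratio(stack, 5, 2) > 1.2
    ```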

  18. Development and Performance Evaluation of Image-Based Robotic Waxing System for Detailing Automobiles

    PubMed Central

    Lin, Chi-Ying; Hsu, Bing-Cheng

    2018-01-01

    Waxing is an important aspect of automobile detailing, aimed at protecting the finish of the car and preventing rust. At present, this delicate work is conducted manually due to the need for iterative adjustments to achieve acceptable quality. This paper presents a robotic waxing system in which surface images are used to evaluate the quality of the finish. An RGB-D camera is used to build a point cloud that details the sheet metal components to enable path planning for a robot manipulator. The robot is equipped with a multi-axis force sensor to measure and control the forces involved in the application and buffing of wax. Images of sheet metal components that were waxed by experienced car detailers were analyzed using image processing algorithms. A Gaussian distribution function and its parameterized values were obtained from the images for use as a performance criterion in evaluating the quality of surfaces prepared by the robotic waxing system. Waxing force and dwell time were optimized using a mathematical model based on the image-based criterion used to measure waxing performance. Experimental results demonstrate the feasibility of the proposed robotic waxing system and image-based performance evaluation scheme. PMID:29757940
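
    The Gaussian-based criterion can be illustrated with a simplified, assumed variant (not the paper's exact formulation): fit a Gaussian to the gray-level histogram of a waxed-surface image and treat the fitted parameters as the quality score.

    ```python
    # Sketch: fit a Gaussian to the gray-level histogram of a surface
    # image; the fitted (amplitude, mean, sigma) serve as candidate
    # quality metrics. Assumes an 8-bit grayscale image.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amp, mu, sigma):
        return amp * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

    def fit_surface_histogram(gray_image):
        counts, edges = np.histogram(gray_image, bins=64, range=(0, 255))
        centers = 0.5 * (edges[:-1] + edges[1:])
        # Initial guess: histogram peak location and a broad spread.
        p0 = (counts.max(), centers[np.argmax(counts)], 20.0)
        params, _ = curve_fit(gaussian, centers, counts, p0=p0)
        return params
    ```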

  19. Understanding and improving optical coherence tomography imaging depth in selective laser sintering nylon 12 parts and powder

    NASA Astrophysics Data System (ADS)

    Lewis, Adam D.; Katta, Nitesh; McElroy, Austin; Milner, Thomas; Fish, Scott; Beaman, Joseph

    2018-04-01

    Optical coherence tomography (OCT) has shown promise as a process sensor in selective laser sintering (SLS) due to its ability to yield depth-resolved data not attainable with conventional sensors. However, OCT images of nylon 12 powder and nylon 12 components fabricated via SLS contain artifacts that have not been previously investigated in the literature. A better understanding of light interactions with SLS powder and components is foundational for further research expanding the utility of OCT imaging in SLS and other additive manufacturing (AM) sensing applications. Specifically, in this work, nylon powder and sintered parts were imaged in air and in an index matching liquid. Subsequent image analysis revealed the cause of "signal-tail" OCT image artifacts to be a combination of both inter and intraparticle multiple-scattering and reflections. Then, the OCT imaging depth of nylon 12 powder and the contrast-to-noise ratio of a sintered part were improved through the use of an index matching liquid. Finally, polymer crystals were identified as the main source of intraparticle scattering in nylon 12 powder. Implications of these results on future research utilizing OCT in SLS are also given.

  20. Field-Portable Pixel Super-Resolution Colour Microscope

    PubMed Central

    Greenbaum, Alon; Akbari, Najva; Feizi, Alborz; Luo, Wei; Ozcan, Aydogan

    2013-01-01

    Based on partially-coherent digital in-line holography, we report a field-portable microscope that can render lensfree colour images over a wide field-of-view of e.g., >20 mm². This computational holographic microscope weighs less than 145 grams with dimensions smaller than 17×6×5 cm, making it especially suitable for field settings and point-of-care use. In this lensfree imaging design, we merged a colorization algorithm with a source shifting based multi-height pixel super-resolution technique to mitigate ‘rainbow’ like colour artefacts that are typical in holographic imaging. This image processing scheme is based on transforming the colour components of an RGB image into YUV colour space, which separates colour information from brightness component of an image. The resolution of our super-resolution colour microscope was characterized using a USAF test chart to confirm sub-micron spatial resolution, even for reconstructions that employ multi-height phase recovery to handle dense and connected objects. To further demonstrate the performance of this colour microscope Papanicolaou (Pap) smears were also successfully imaged. This field-portable and wide-field computational colour microscope could be useful for tele-medicine applications in resource poor settings. PMID:24086742
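
    The RGB-to-YUV separation mentioned above is a standard linear transform; a minimal sketch with the usual BT.601-style coefficients (an assumption, since the record does not list the exact matrix) is:

    ```python
    # RGB -> YUV: Y carries brightness (where super-resolution is applied),
    # U and V carry chroma.
    import numpy as np

    RGB_TO_YUV = np.array([
        [ 0.299,  0.587,  0.114],   # Y: luminance
        [-0.147, -0.289,  0.436],   # U: blue-difference chroma
        [ 0.615, -0.515, -0.100],   # V: red-difference chroma
    ])

    def rgb_to_yuv(rgb):
        """rgb: (..., 3) float array in [0, 1] -> same-shaped YUV array."""
        return rgb @ RGB_TO_YUV.T
    ```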

  1. Field-portable pixel super-resolution colour microscope.

    PubMed

    Greenbaum, Alon; Akbari, Najva; Feizi, Alborz; Luo, Wei; Ozcan, Aydogan

    2013-01-01

    Based on partially-coherent digital in-line holography, we report a field-portable microscope that can render lensfree colour images over a wide field-of-view of e.g., >20 mm². This computational holographic microscope weighs less than 145 grams with dimensions smaller than 17×6×5 cm, making it especially suitable for field settings and point-of-care use. In this lensfree imaging design, we merged a colorization algorithm with a source shifting based multi-height pixel super-resolution technique to mitigate 'rainbow' like colour artefacts that are typical in holographic imaging. This image processing scheme is based on transforming the colour components of an RGB image into YUV colour space, which separates colour information from brightness component of an image. The resolution of our super-resolution colour microscope was characterized using a USAF test chart to confirm sub-micron spatial resolution, even for reconstructions that employ multi-height phase recovery to handle dense and connected objects. To further demonstrate the performance of this colour microscope Papanicolaou (Pap) smears were also successfully imaged. This field-portable and wide-field computational colour microscope could be useful for tele-medicine applications in resource poor settings.

  2. Development and Performance Evaluation of Image-Based Robotic Waxing System for Detailing Automobiles.

    PubMed

    Lin, Chi-Ying; Hsu, Bing-Cheng

    2018-05-14

    Waxing is an important aspect of automobile detailing, aimed at protecting the finish of the car and preventing rust. At present, this delicate work is conducted manually due to the need for iterative adjustments to achieve acceptable quality. This paper presents a robotic waxing system in which surface images are used to evaluate the quality of the finish. An RGB-D camera is used to build a point cloud that details the sheet metal components to enable path planning for a robot manipulator. The robot is equipped with a multi-axis force sensor to measure and control the forces involved in the application and buffing of wax. Images of sheet metal components that were waxed by experienced car detailers were analyzed using image processing algorithms. A Gaussian distribution function and its parameterized values were obtained from the images for use as a performance criterion in evaluating the quality of surfaces prepared by the robotic waxing system. Waxing force and dwell time were optimized using a mathematical model based on the image-based criterion used to measure waxing performance. Experimental results demonstrate the feasibility of the proposed robotic waxing system and image-based performance evaluation scheme.

  3. Novel medical image enhancement algorithms

    NASA Astrophysics Data System (ADS)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
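
    A single stage of unsharp masking is easy to sketch; the cascaded variant below chains several Gaussian scales, with the paper's modified adaptive contrast enhancement replaced by a plain gain `alpha` for brevity (an assumption):

    ```python
    # Cascaded unsharp masking sketch: each stage isolates a
    # high-frequency residual with a Gaussian low-pass and re-injects
    # it with a gain.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def cascaded_unsharp(image, sigmas=(1.0, 2.0, 4.0), alpha=1.5):
        out = image.astype(float)
        for sigma in sigmas:
            low = gaussian_filter(out, sigma)
            high = out - low          # high-frequency component at this scale
            out = low + alpha * high  # boosted detail re-injected
        return np.clip(out, 0, 255)
    ```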

  4. Animal Detection in Natural Images: Effects of Color and Image Database

    PubMed Central

    Zhu, Weina; Drewes, Jan; Gegenfurtner, Karl R.

    2013-01-01

    The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a bigger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other commonly used databases. PMID:24130744

  5. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion

    PubMed Central

    Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang

    2016-01-01

    Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot, due to various disturbances existing in the background of the image. The bottleneck to robust fruit recognition is reducing the influence of the two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, which combined the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to obtain the optimal threshold. The final segmentation result was processed by a morphology operation to reduce a small amount of noise. In the detection tests, 93% of the 200 target tomato samples were recognized. This indicates that the proposed tomato recognition method is suitable for low-cost robotic tomato harvesting in uncontrolled environments. PMID:26840313
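
    Extracting the two feature images is straightforward with scikit-image; the sketch below shows only that step, omitting the wavelet fusion, adaptive thresholding, and morphology stages described above:

    ```python
    # Feature-image extraction sketch: the a* channel of CIELAB
    # (red-green axis, highlights ripe tomatoes) and the I channel of YIQ.
    from skimage import color

    def tomato_features(rgb):
        """rgb: (H, W, 3) float image in [0, 1]."""
        a_star = color.rgb2lab(rgb)[..., 1]   # a*-component image
        i_chan = color.rgb2yiq(rgb)[..., 1]   # I (in-phase) component image
        return a_star, i_chan
    ```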

  6. Tracking Equilibrium and Nonequilibrium Shifts in Data with TREND.

    PubMed

    Xu, Jia; Van Doren, Steven R

    2017-01-24

    Principal component analysis (PCA) discovers patterns in multivariate data that include spectra, microscopy, and other biophysical measurements. Direct application of PCA to crowded spectra, images, and movies (without selecting peaks or features) was shown recently to identify their equilibrium or temporal changes. To enable the community to utilize these capabilities with a wide range of measurements, we have developed multiplatform software named TREND to Track Equilibrium and Nonequilibrium population shifts among two-dimensional Data frames. TREND can also carry this out by independent component analysis. We highlight a few examples of finding concurrent processes. TREND extracts dual phases of binding to two sites directly from the NMR spectra of the titrations. In a cardiac movie from magnetic resonance imaging, TREND resolves principal components (PCs) representing breathing and the cardiac cycle. TREND can also reconstruct the series of measurements from selected PCs, as illustrated for a biphasic, NMR-detected titration and the cardiac MRI movie. Fidelity of reconstruction of series of NMR spectra or images requires more PCs than needed to plot the largest population shifts. TREND reads spectra from many spectroscopies in the most common formats (JCAMP-DX and NMR) and multiple movie formats. The TREND package thus provides convenient tools to resolve the processes recorded by diverse biophysical methods. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
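
    The core decomposition behind this kind of analysis can be sketched in a few lines. The fragment below is generic PCA over flattened frames, not TREND's own code, and omits its decoding of physical populations and its reconstruction features:

    ```python
    # PCA trajectory sketch: flatten each spectrum/image frame to a row,
    # centre, and read population shifts off the leading components.
    import numpy as np

    def pca_trajectory(frames, n_components=2):
        """frames: (n_frames, ...) array of spectra or images."""
        X = frames.reshape(len(frames), -1).astype(float)
        X -= X.mean(axis=0)                    # centre each variable
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        scores = U[:, :n_components] * S[:n_components]
        return scores  # one point per frame; plot vs. titration or time
    ```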

  7. MTI science, data products, and ground-data processing overview

    NASA Astrophysics Data System (ADS)

    Szymanski, John J.; Atkins, William H.; Balick, Lee K.; Borel, Christoph C.; Clodius, William B.; Christensen, R. Wynn; Davis, Anthony B.; Echohawk, J. C.; Galbraith, Amy E.; Hirsch, Karen L.; Krone, James B.; Little, Cynthia K.; McLachlan, Peter M.; Morrison, Aaron; Pollock, Kimberly A.; Pope, Paul A.; Novak, Curtis; Ramsey, Keri A.; Riddle, Emily E.; Rohde, Charles A.; Roussel-Dupre, Diane C.; Smith, Barham W.; Smith, Kathy; Starkovich, Kim; Theiler, James P.; Weber, Paul G.

    2001-08-01

    The mission of the Multispectral Thermal Imager (MTI) satellite is to demonstrate the efficacy of highly accurate multispectral imaging for passive characterization of urban and industrial areas, as well as sites of environmental interest. The satellite makes top-of-atmosphere radiance measurements that are subsequently processed into estimates of surface properties such as vegetation health, temperatures, material composition and others. The MTI satellite also provides simultaneous data for atmospheric characterization at high spatial resolution. To utilize these data the MTI science program has several coordinated components, including modeling, comprehensive ground-truth measurements, image acquisition planning, data processing and data interpretation and analysis. Algorithms have been developed to retrieve a multitude of physical quantities and these algorithms are integrated in a processing pipeline architecture that emphasizes automation, flexibility and programmability. In addition, the MTI science team has produced detailed site, system and atmospheric models to aid in system design and data analysis. This paper provides an overview of the MTI research objectives, data products and ground data processing.

  8. Spectral and Geological Characterization of Beach Components in Northern Puerto Rico

    NASA Astrophysics Data System (ADS)

    Caraballo Álvarez, I. O.; Torres-Perez, J. L.; Barreto, M.

    2015-12-01

    Understanding how changes in beach components may reflect beach processes is essential since variations along beach profiles can shed light on river and ocean processes influencing beach sedimentation and beachrock formation. It is likely these influences are related to beach proximity within the Río Grande de Manatí river mouth. Therefore, this study focuses on characterizing beach components at two sites in Manatí, Puerto Rico. Playa Machuca and Playa Tombolo, which are separated by eolianites, differ greatly in sediment size, mineralogy, and beachrock morphology. Several approaches were taken to geologically and spectrally characterize main beach components at each site. These approaches included field and microscopic laboratory identification, granulometry, and a comparison between remote sensing reflectance (Rrs) obtained with a field spectroradiometer and pre-existing spectral library signatures. Preliminary results indicate a positive correlation between each method. This study may help explore the possibility of using only Rrs to characterize beach and shallow submarine components for detailed image analysis and management of coastal features.This study focuses on characterizing beach components at two sites in Manatí, Puerto Rico. Playa Machuca and Playa Tombolo, two beaches that are separated by eolianites, differ greatly in sediment size and mineralogy, as well as in beachrock morphology. Understanding how changes in beach components may reflect beach processes is essential, since it is likely that differences are mostly related to each beaches' proximity to the Río Grande de Manatí river mouth. Hence, changes in components along beach profiles can shed light on the river's and the ocean's influence on beach sedimentation and beachrock formation. Several approaches were taken to properly geologically and spectrally characterize the main beach components at each site. These approaches included field and microscopic laboratory identification, granulometry, and a comparison between remote sensing reflectance (Rrs) obtained with a field spectroradiometer and the ENVI spectral library. Preliminary results show a positive correlation between each method. This study may help explore the possibility of using only Rrs to characterize beach and shallow submarine components for detailed image analysis and management of coastal features.

  9. Hierarchical content-based image retrieval by dynamic indexing and guided search

    NASA Astrophysics Data System (ADS)

    You, Jane; Cheung, King H.; Liu, James; Guo, Linong

    2003-12-01

    This paper presents a new approach to content-based image retrieval by using dynamic indexing and guided search in a hierarchical structure, and extending data mining and data warehousing techniques. The proposed algorithms include: a wavelet-based scheme for multiple image feature extraction, the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing, an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best matching. A series of case studies are reported, which include a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features.

  10. Advances in optical information processing IV; Proceedings of the Meeting, Orlando, FL, Apr. 18-20, 1990

    NASA Astrophysics Data System (ADS)

    Pape, Dennis R.

    1990-09-01

    The present conference discusses topics in optical image processing, optical signal processing, acoustooptic spectrum analyzer systems and components, and optical computing. Attention is given to tradeoffs in nonlinearly recorded matched filters, miniature spatial light modulators, detection and classification using higher-order statistics of optical matched filters, rapid traversal of an image data base using binary synthetic discriminant filters, wideband signal processing for emitter location, an acoustooptic processor for autonomous SAR guidance, and sampling of Fresnel transforms. Also discussed are an acoustooptic RF signal-acquisition system, scanning acoustooptic spectrum analyzers, the effects of aberrations on acoustooptic systems, fast optical digital arithmetic processors, information utilization in analog and digital processing, optical processors for smart structures, and a self-organizing neural network for unsupervised learning.

  11. Adaptive non-local means on local principle neighborhood for noise/artifacts reduction in low-dose CT images.

    PubMed

    Zhang, Yuanke; Lu, Hongbing; Rong, Junyan; Meng, Jing; Shang, Junliang; Ren, Pinghong; Zhang, Junying

    2017-09-01

    The low-dose CT (LDCT) technique can reduce x-ray radiation exposure to patients, at the cost of degraded images with severe noise and artifacts. Non-local means (NLM) filtering has shown its potential in improving LDCT image quality. However, most current NLM-based approaches employ a weighted average operation directly on all neighbor pixels, with a fixed filtering parameter throughout the NLM filtering process, ignoring the non-stationary nature of the noise in LDCT images. In this paper, an adaptive NLM filtering scheme on local principle neighborhoods (PC-NLM) is proposed for structure-preserving noise/artifact reduction in LDCT images. Instead of using neighboring patches directly, the PC-NLM scheme first applies principal component analysis (PCA) to the local neighboring patches of the target patch to decompose them into uncorrelated principal components (PCs); NLM filtering is then used to regularize each PC of the target patch, and finally the regularized components are transformed back to the image domain to obtain the filtered target patch. In particular, the filtering parameter is estimated adaptively from the local noise level of the neighborhood as well as the signal-to-noise ratio (SNR) of the corresponding PC, which guarantees "weaker" NLM filtering on PCs with higher SNR and "stronger" filtering on PCs with lower SNR. The PC-NLM procedure is performed iteratively several times for better removal of noise and artifacts, and an adaptive iteration strategy is developed to reduce the computational load by determining whether a patch should be processed in the next round of PC-NLM filtering. The effectiveness of the presented PC-NLM algorithm is validated by experimental phantom studies and clinical studies. The results show that it can achieve promising gains over some state-of-the-art methods in terms of artifact suppression and structure preservation. With the use of PCA on local neighborhoods to extract principal structural components, and adaptive NLM filtering of the PCs of the target patch with a filtering parameter estimated from the local noise level and corresponding SNR, the proposed PC-NLM method shows its efficacy in preserving fine anatomical structures and suppressing noise/artifacts in LDCT images. © 2017 American Association of Physicists in Medicine.
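
    A single-patch sketch of the PC-NLM idea follows, with loud caveats: the full method applies NLM filtering to each PC and iterates with an adaptive parameter, whereas the fragment below replaces that with a simple Wiener-style shrinkage and assumes the noise variance is known:

    ```python
    # Simplified per-patch sketch: project a target patch onto the PCs of
    # its local patch neighbourhood, attenuate low-SNR components, and
    # reconstruct. Not the full PC-NLM algorithm.
    import numpy as np

    def pc_shrink_patch(target, neighbours, noise_var):
        """target: (p,) patch vector; neighbours: (n, p) local patches."""
        mean = neighbours.mean(axis=0)
        cov = np.cov(neighbours - mean, rowvar=False)
        eigval, eigvec = np.linalg.eigh(cov)        # PCs of the neighbourhood
        coeff = eigvec.T @ (target - mean)
        snr = np.maximum(eigval - noise_var, 0) / noise_var
        coeff *= snr / (snr + 1.0)                  # weaker filtering at high SNR
        return mean + eigvec @ coeff
    ```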

  12. Processing of odor mixtures in the zebrafish olfactory bulb.

    PubMed

    Tabor, Rico; Yaksi, Emre; Weislogel, Jan-Marek; Friedrich, Rainer W

    2004-07-21

    Components of odor mixtures often are not perceived individually, suggesting that neural representations of mixtures are not simple combinations of the representations of the components. We studied odor responses to binary mixtures of amino acids and food extracts at different processing stages in the olfactory bulb (OB) of zebrafish. Odor-evoked input to the OB was measured by imaging Ca2+ signals in afferents to olfactory glomeruli. Activity patterns evoked by mixtures were predictable within narrow limits from the component patterns, indicating that mixture interactions in the peripheral olfactory system are weak. OB output neurons, the mitral cells (MCs), were recorded extra- and intracellularly and responded to odors with stimulus-dependent temporal firing rate modulations. Responses to mixtures of amino acids often were dominated by one of the component responses. Responses to mixtures of food extracts, in contrast, were more distinct from both component responses. These results show that mixture interactions can result from processing in the OB. Moreover, our data indicate that mixture interactions in the OB become more pronounced with increasing overlap of input activity patterns evoked by the components. Emerging from these results are rules of mixture interactions that may explain behavioral data and provide a basis for understanding the processing of natural odor stimuli in the OB.

  13. A mixture neural net for multispectral imaging spectrometer processing

    NASA Technical Reports Server (NTRS)

    Casasent, David; Slagle, Timothy

    1990-01-01

    Each spatial region viewed by an imaging spectrometer contains a mixture of various elements. The elements present, and the amount of each, are to be determined. A neural net solution is considered. Initial optical neural net hardware is described. The first simulations of the component requirements of a neural net are considered. The pseudoinverse solution is shown not to suffice; i.e., a neural net solution is required.
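
    The linear-mixture setup behind this problem is worth making explicit. In the toy fragment below (the endmember matrix `A` and observed spectrum `x` are invented), the pseudoinverse gives the unconstrained least-squares abundances; because nothing forces them to be non-negative or to sum to one, such estimates can be physically invalid, which motivates a learned solution:

    ```python
    # Pseudoinverse spectral unmixing on a toy 2-band, 2-element model:
    # each pixel spectrum x is modelled as A @ s for abundances s.
    import numpy as np

    A = np.array([[0.9, 0.2],   # toy endmember spectra (columns = elements)
                  [0.1, 0.8]])
    x = np.array([0.5, 0.5])    # observed mixed spectrum

    s = np.linalg.pinv(A) @ x   # unconstrained estimate; may go negative
    print(s, s.sum())
    ```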

  14. Workflow-enabled distributed component-based information architecture for digital medical imaging enterprises.

    PubMed

    Wong, Stephen T C; Tjandra, Donny; Wang, Huili; Shen, Weimin

    2003-09-01

    Few information systems today offer a flexible means to define and manage the automated part of radiology processes, which provide clinical imaging services for the entire healthcare organization. Even fewer provide a coherent architecture that can easily cope with heterogeneity and the inevitable local adaptation of applications, and that can integrate clinical and administrative information to aid better clinical, operational, and business decisions. We describe an innovative enterprise architecture of image information management systems to fill these needs. Such a system is based on the interplay of production workflow management, distributed object computing, Java and Web techniques, and in-depth domain knowledge of radiology operations. Our design adapts the "4+1" architectural-view approach. In this new architecture, PACS and RIS become one, while user interaction can be automated by customized workflow processes. Clinical service applications are implemented as active components. They can be reasonably substituted by locally adapted applications and can be multiplied for fault tolerance and load balancing. Furthermore, the workflow-enabled digital radiology system provides powerful query and statistical functions for managing resources and improving productivity. This work potentially opens a new direction in image information management. We illustrate the innovative design with examples taken from an implemented system.

  15. Fast principal component analysis for stacking seismic data

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Bai, Min

    2018-04-01

    Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is not sensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the method readily applicable to industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
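
    The PCA-stacking idea can be sketched directly: treat the aligned traces of a gather as rows and take the dominant singular vector as the coherent waveform. The fragment below uses a full SVD for clarity; the paper's contribution is a faster variant suited to massive data:

    ```python
    # PCA-based stacking sketch on a pre-aligned (e.g. NMO-corrected)
    # gather: the first principal component captures the coherent signal.
    import numpy as np

    def pca_stack(traces):
        """traces: (n_traces, n_samples) pre-aligned gather."""
        X = traces - traces.mean(axis=0)
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        pc1 = Vt[0]                          # dominant waveform across traces
        weights = U[:, 0] * S[0]             # per-trace contribution
        return weights.mean() * pc1          # scaled stacked trace
    ```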

  16. A physics based method for combining multiple anatomy models with application to medical simulation.

    PubMed

    Zhu, Yanong; Magee, Derek; Ratnalingam, Rishya; Kessel, David

    2009-01-01

    We present a physics based approach to the construction of anatomy models by combining components from different sources; different image modalities, protocols, and patients. Given an initial anatomy, a mass-spring model is generated which mimics the physical properties of the solid anatomy components. This helps maintain valid spatial relationships between the components, as well as the validity of their shapes. Combination can be either replacing/modifying an existing component, or inserting a new component. The external forces that deform the model components to fit the new shape are estimated from Gradient Vector Flow and Distance Transform maps. We demonstrate the applicability and validity of the described approach in the area of medical simulation, by showing the processes of non-rigid surface alignment, component replacement, and component insertion.

  17. What Not To Do: Anti-patterns for Developing Scientific Workflow Software Components

    NASA Astrophysics Data System (ADS)

    Futrelle, J.; Maffei, A. R.; Sosik, H. M.; Gallager, S. M.; York, A.

    2013-12-01

    Scientific workflows promise to enable efficient scaling-up of researcher code to handle large datasets and workloads, as well as documentation of scientific processing via standardized provenance records. Workflow systems and related frameworks for coordinating the execution of otherwise separate components are limited, however, in their ability to overcome software engineering design problems commonly encountered in pre-existing components, such as scripts developed externally by scientists in their laboratories. In practice, this often means that components must be rewritten or replaced in a time-consuming, expensive process. In the course of an extensive workflow development project involving large-scale oceanographic image processing, we have begun to identify and codify 'anti-patterns'--problematic design characteristics of software--that make components fit poorly into complex automated workflows. We have gone on to develop and document low-effort solutions and best practices that efficiently address the anti-patterns we have identified. The issues, solutions, and best practices can be used to evaluate and improve existing code, as well as to guide the development of new components. For example, we have identified a common anti-pattern we call 'batch-itis', in which a script fails and then cannot perform more work, even if that work is not precluded by the failure. The solution we have identified--removing unnecessary looping over independent units of work--is often easier to code than the anti-pattern, as it eliminates the need for complex control-flow logic in the component. Other anti-patterns we have identified are similarly easy to spot and often easy to fix. We have drawn upon experience working with three science teams at Woods Hole Oceanographic Institution, each of which has designed novel imaging instruments and associated image analysis code. By developing use cases and prototypes within these teams, we have undertaken formal evaluations of software components developed by programmers with widely varying levels of expertise, and have been able to discover and characterize a number of anti-patterns. Our evaluation methodology and testbed have also enabled us to assess the efficacy of strategies to address these anti-patterns according to scientifically relevant metrics, such as the ability of algorithms to run faster than the rate of data acquisition and the accuracy of workflow component output relative to ground truth. The set of anti-patterns and solutions we have identified augments the body of better-known software engineering anti-patterns by addressing additional concerns that arise when a software component has to function as part of a workflow assembled out of independently developed codebases. Our experience shows that identifying and resolving these anti-patterns reduces development time and improves performance without reducing component reusability.
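
    The 'batch-itis' anti-pattern and its fix can be shown side by side; the function names below are hypothetical:

    ```python
    # 'Batch-itis' anti-pattern vs. per-item error isolation.

    def process_batch_antipattern(items, process):
        results = []
        for item in items:
            results.append(process(item))   # one bad item kills the whole batch
        return results

    def process_each(items, process):
        """Failures are recorded per work unit; independent work continues."""
        results, failures = [], []
        for item in items:
            try:
                results.append(process(item))
            except Exception as exc:
                failures.append((item, exc))
        return results, failures
    ```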

  18. Novel techniques with multiphoton microscopy: Deep-brain imaging with microprisms, neurometabolism of epilepsy, and counterfeit paper money detection

    NASA Astrophysics Data System (ADS)

    Chia, Thomas H.

    Multiphoton microscopy is a laser-scanning fluorescence imaging method with extraordinary potential. We describe three innovative multiphoton microscopy techniques across various disciplines. Traditional in vivo fluorescence microscopy of the mammalian brain has a limited penetration depth (<400 µm). We present a method of imaging 1 mm deep into mouse neocortex by using a glass microprism to relay the excitation and emission light. This technique enables simultaneous imaging of multiple cortical layers, including layer V, at an angle typical of slice preparations. At high-magnification imaging using an objective with 1 mm of coverglass correction, resolution was sufficient to resolve dendritic spines on layer V GFP neurons. Functional imaging of blood flow at various neocortical depths is also presented, allowing for quantification of red blood cell flux and velocity. Multiphoton fluorescence lifetime imaging (FLIM) of NADH reveals information on neurometabolism. NADH, an intrinsic fluorescent molecule and ubiquitous metabolic coenzyme, has a lifetime dependent on enzymatic binding. A novel NADH FLIM algorithm is presented that produces images showing spatially distinct NADH fluorescence lifetimes in mammalian brain slices. This program provides advantages over traditional FLIM processing of multi-component lifetime data. We applied this technique to a GFP-GFAP pilocarpine mouse model of temporal lobe epilepsy. Results indicated significant changes in the neurometabolism of astrocytes and neuropil in the cell and dendritic layers of the hippocampus when compared to control tissue. Data obtained with NADH FLIM were subsequently interpreted based on the abnormal activity reported in epileptic tissue. Genuine U.S. Federal Reserve Notes have a consistent, two-component intrinsic fluorescence lifetime. This allows for detection of counterfeit paper money because of its significant differences in fluorescence lifetime when compared to genuine paper money. We used scanning multiphoton laser excitation to sample a ~4 mm² region from 54 genuine Reserve Notes. Three types of counterfeit samples were tested. Four out of the nine counterfeit samples fit to a one-component decay. Five out of nine counterfeit samples fit to a two-component model, but are identified as counterfeit due to significant deviations in the longer lifetime component compared to genuine bills.

  19. Computed Tomography and Thermography Increases CMC Material and Process Development Efficiency and Testing Effectiveness

    NASA Technical Reports Server (NTRS)

    Effinger, Michael; Beshears, Ron; Hufnagle, David; Walker, James; Russell, Sam; Stowell, Bob; Myers, David

    2002-01-01

    Nondestructive characterization techniques have been used to steer development and testing of CMCs. Computed tomography is used to determine the volumetric integrity of the CMC plates and components. Thermography is used to determine the near-surface integrity of the CMC plates and components. For process and material development, information such as density uniformity, part delamination, and dimensional tolerance conformity is generated. The information from the thermography and computed tomography is correlated, and then specimen cutting maps are superimposed on the thermography images. This enables tighter correlation of test data and potential explanation of off-nominal test data. Examples of using nondestructive characterization to make decisions in process and material development and testing are presented.

  20. Delay-Encoded Harmonic Imaging (DE-HI) in Multiplane-Wave Compounding.

    PubMed

    Gong, Ping; Song, Pengfei; Chen, Shigao

    2017-04-01

    The development of ultrafast ultrasound imaging brings great opportunities to improve imaging technologies such as shear wave elastography and ultrafast Doppler imaging. In ultrafast imaging, several tilted plane or diverging wave images are coherently combined to form a compounded image, leading to trade-offs among image signal-to-noise ratio (SNR), resolution, and post-compounded frame rate. Multiplane wave (MW) imaging is proposed to solve this trade-off by encoding multiple plane waves with a Hadamard matrix during one transmission event (i.e. pulse-echo event), to improve image SNR without sacrificing resolution or frame rate. However, it suffers from stronger reverberation artifacts in B-mode images compared to standard plane wave compounding, due to longer transmitted pulses. If harmonic imaging can be combined with MW imaging, the reverberation artifacts and other clutter noise such as sidelobes and multipath scattering clutter should be suppressed. The challenge, however, is that the Hadamard codes used in MW imaging cannot encode the 2nd harmonic component by inverting the pulse polarity. In this paper, we propose a delay-encoded harmonic imaging (DE-HI) technique to encode the 2nd harmonic with a one-quarter-period delay calculated at the transmit center frequency, rather than reversing the pulse polarity during multiplane wave emissions. Received DE-HI signals can then be decoded in the frequency domain to recover the signals as in single plane wave emissions, but mainly with improved SNR at the 2nd harmonic component instead of the fundamental component. DE-HI was tested experimentally with a point target, a B-mode imaging phantom, and in-vivo human liver imaging. Improvements in image contrast-to-noise ratio (CNR), spatial resolution, and lesion signal-to-noise ratio (lSNR) have been achieved compared to standard plane wave compounding, MW imaging, and standard harmonic imaging (maximal improvements of 116% in CNR and 115% in lSNR as compared to standard HI around 55 mm depth in the B-mode imaging phantom study). The potential high frame rate and the stability of the encoding and decoding processes of DE-HI were also demonstrated, which make DE-HI promising for a wide spectrum of imaging applications.
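
    The Hadamard encode/decode at the heart of MW imaging is compact enough to sketch; the fragment below shows only that fundamental-component machinery, not DE-HI's quarter-period delay encoding of the 2nd harmonic:

    ```python
    # Hadamard encoding/decoding sketch for multiplane-wave imaging:
    # N single-transmit echoes are mixed by a +1/-1 Hadamard matrix and
    # recovered exactly, since H @ H.T = N * I.
    import numpy as np
    from scipy.linalg import hadamard

    N = 4
    H = hadamard(N)                       # +1/-1 transmit polarities

    def encode(plane_waves):
        """plane_waves: (N, n_samples) single-transmit echoes -> MW echoes."""
        return H @ plane_waves

    def decode(mw_echoes):
        return (H.T @ mw_echoes) / N      # exact recovery of each transmit
    ```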

  1. Temporally flickering nanoparticles for compound cellular imaging and super resolution

    NASA Astrophysics Data System (ADS)

    Ilovitsh, Tali; Danan, Yossef; Meir, Rinat; Meiri, Amihai; Zalevsky, Zeev

    2016-03-01

    This work presents the use of flickering nanoparticles for imaging biological samples. The method has high noise immunity and enables the detection of overlapping types of GNPs at significantly sub-diffraction distances, making it attractive for super-resolving localization microscopy techniques. The method utilizes a lock-in technique in which the sample is imaged with laser illumination time-modulated at as many distinct frequencies as there are types of gold nanoparticles (GNPs) labeling the sample, exciting temporal flickering of the scattered light at known frequencies. The final image, in which the GNPs are spatially separated, is obtained by post-processing in which the spectral components corresponding to the different modulation frequencies are extracted. This allows the simultaneous super-resolved imaging of multiple types of GNPs that label targets of interest within biological samples. Additionally, applying the K-factor image decomposition algorithm in post-processing can further improve the performance of the proposed approach.
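
    The lock-in separation step can be sketched as per-pixel Fourier sampling at the known modulation frequencies; the frame rate and frequency values in the fragment below are assumptions:

    ```python
    # Lock-in demodulation sketch: sample the per-pixel FFT of a movie at
    # each known modulation frequency to build one image per GNP type.
    import numpy as np

    def demodulate(stack, fps, freqs):
        """stack: (n_frames, H, W) movie; freqs: modulation frequencies in Hz."""
        spectrum = np.fft.rfft(stack, axis=0)
        fft_freqs = np.fft.rfftfreq(stack.shape[0], d=1.0 / fps)
        images = []
        for f in freqs:
            k = np.argmin(np.abs(fft_freqs - f))   # nearest FFT bin
            images.append(np.abs(spectrum[k]))     # amplitude image, this GNP
        return images
    ```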

  2. Distortion correction and cross-talk compensation algorithm for use with an imaging spectrometer based spatially resolved diffuse reflectance system

    NASA Astrophysics Data System (ADS)

    Cappon, Derek J.; Farrell, Thomas J.; Fang, Qiyin; Hayward, Joseph E.

    2016-12-01

    Optical spectroscopy of human tissue has been widely applied within the field of biomedical optics to allow rapid, in vivo characterization and analysis of the tissue. When designing an instrument of this type, an imaging spectrometer is often employed to allow for simultaneous analysis of distinct signals. This is especially important when performing spatially resolved diffuse reflectance spectroscopy. In this article, an algorithm is presented that allows for the automated processing of 2-dimensional images acquired from an imaging spectrometer. The algorithm automatically defines distinct spectrometer tracks and adaptively compensates for distortion introduced by optical components in the imaging chain. Crosstalk resulting from the overlap of adjacent spectrometer tracks in the image is detected and subtracted from each signal. The algorithm's performance is demonstrated in the processing of spatially resolved diffuse reflectance spectra recovered from an Intralipid and ink liquid phantom and is shown to increase the range of wavelengths over which usable data can be recovered.

  3. Photoacoustic imaging of lymphatic pumping

    NASA Astrophysics Data System (ADS)

    Forbrich, Alex; Heinmiller, Andrew; Zemp, Roger J.

    2017-10-01

    The lymphatic system is responsible for fluid homeostasis and immune cell trafficking and has been implicated in several diseases, including obesity, diabetes, and cancer metastasis. Despite its importance, the lack of suitable in vivo imaging techniques has hampered our understanding of the lymphatic system. This is, in part, due to the limited contrast of lymphatic fluids and structures. Photoacoustic imaging, in combination with optically absorbing dyes or nanoparticles, has great potential for noninvasively visualizing the lymphatic vessels deep in tissues. Multispectral photoacoustic imaging is capable of separating the components; however, the slow wavelength switching speed of most laser systems is inadequate for imaging lymphatic pumping without motion artifacts being introduced into the processed images. We investigate two approaches for visualizing lymphatic processes in vivo. First, single-wavelength differential photoacoustic imaging is used to visualize lymphatic pumping in the hindlimb of a mouse in real time. Second, a fast-switching multiwavelength photoacoustic imaging system was used to assess the propulsion profile of dyes through the lymphatics in real time. These approaches may have profound impacts in noninvasively characterizing and investigating the lymphatic system.

  4. An optimized digital watermarking algorithm in wavelet domain based on differential evolution for color image.

    PubMed

    Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai

    2018-01-01

    In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is more suitable for the human visual system. Then, a three-level discrete wavelet transformation is applied to the luminance component Y, generating four different frequency sub-bands. After that, singular value decomposition is performed on these sub-bands. In the watermark embedding process, the watermark image is scrambled for encryption and then processed with the discrete wavelet transformation. Our new algorithm uses a differential evolution algorithm with adaptive optimization to choose the right scaling factors. Experimental results show that the proposed algorithm has a better performance in terms of invisibility and robustness.
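
    A minimal sketch of the embedding core follows, under stated simplifications: one DWT level instead of three, a fixed scaling factor instead of the differential-evolution search, and SVD applied to the approximation sub-band only:

    ```python
    # DWT + SVD watermark embedding sketch on the Y channel: the singular
    # values of the approximation sub-band are additively modified by the
    # watermark with scaling factor alpha.
    import numpy as np
    import pywt

    def embed(y_channel, watermark, alpha=0.05):
        cA, (cH, cV, cD) = pywt.dwt2(y_channel, 'haar')
        U, S, Vt = np.linalg.svd(cA, full_matrices=False)
        # Assumes a 1-D watermark at least S.size long.
        S_marked = S + alpha * watermark[:S.size]
        cA_marked = (U * S_marked) @ Vt
        return pywt.idwt2((cA_marked, (cH, cV, cD)), 'haar')
    ```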

  5. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, a cost-effective hardware core was developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system achieves 1270 × 792 resolution with the addition of extra components and can demonstrate each DSP function.

  6. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    NASA Astrophysics Data System (ADS)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery, due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as the eyes, nose, eyebrows, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which depends on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and on the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems, and outperforms the existing plastic surgery face recognition methods reported in the literature.

  7. Design and fabrication of vertically-integrated CMOS image sensors.

    PubMed

    Skorka, Orit; Joseph, Dileepan

    2011-01-01

    Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors.

  8. Design and Fabrication of Vertically-Integrated CMOS Image Sensors

    PubMed Central

    Skorka, Orit; Joseph, Dileepan

    2011-01-01

    Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors. PMID:22163860

  9. Non-Destructive Monitoring of Charge-Discharge Cycles on Lithium Ion Batteries using 7Li Stray-Field Imaging

    PubMed Central

    Tang, Joel A.; Dugar, Sneha; Zhong, Guiming; Dalal, Naresh S.; Zheng, Jim P.; Yang, Yong; Fu, Riqiang

    2013-01-01

    Magnetic resonance imaging provides a noninvasive method for in situ monitoring of electrochemical processes involved in charge/discharge cycling of batteries. Determining how the electrochemical processes become irreversible, ultimately resulting in degraded battery performance, will aid in developing new battery materials and designing better batteries. Here we introduce the use of an alternative in situ diagnostic tool to monitor the electrochemical processes. Utilizing a very large field gradient in the fringe field of a magnet, the stray-field imaging (STRAFI) technique significantly improves the image resolution. These STRAFI images enable real-time monitoring of the electrodes at the micron level. Two prototype half-cells, graphite∥Li and LiFePO4∥Li, demonstrate that high-resolution 7Li STRAFI profiles allow one to visualize in situ Li-ion transfer between the electrodes during charge/discharge cycling, as well as the formation and evolution of irreversible microstructures of the Li components; in particular, they reveal a non-uniform Li-ion distribution in the graphite. PMID:24005580

  10. Flexible imaging payload for real-time fluorescent biological imaging in parabolic, suborbital and space analog environments

    NASA Astrophysics Data System (ADS)

    Bamsey, Matthew T.; Paul, Anna-Lisa; Graham, Thomas; Ferl, Robert J.

    2014-10-01

    Fluorescent imaging offers the ability to monitor biological functions, in this case biological responses to space-related environments. For plants, fluorescent imaging can include general health indicators such as chlorophyll fluorescence as well as specific metabolic indicators such as engineered fluorescent reporters. This paper describes the Flex Imager, a fluorescent imaging payload designed for Middeck Locker deployment and now tested on multiple flight and flight-related platforms. The Flex Imager and associated payload elements have been developed with a focus on 'flexibility', allowing for multiple imaging modalities and change-out of individual imaging or control components in the field. The imaging platform is contained within the standard Middeck Locker spaceflight form factor, with components affixed to a baseplate that permits easy rearrangement and fine adjustment of components. The Flex Imager utilizes standard software packages to simplify operation, operator training, and evaluation by flight provider flight test engineers, or by researchers processing the raw data. Images are obtained using a commercial cooled CCD image sensor, with light-emitting diodes for excitation and a suite of filters that allow biological samples to be imaged over wavelength bands of interest. Although baselined for the monitoring of green fluorescent protein and chlorophyll fluorescence from Arabidopsis samples, the Flex Imager payload permits imaging of any biological sample contained within a standard 10 cm by 10 cm square Petri plate. A sample holder was developed to secure sample plates under different flight profiles while permitting sample change-out should crewed operations be possible. In addition to crew-directed imaging, autonomous or telemetric operation of the payload is also a viable operational mode. An infrared camera has also been integrated into the Flex Imager payload to allow concurrent fluorescent and thermal imaging of samples. The Flex Imager has been utilized to assess, in real time, the response of plants to novel environments, including several spaceflight analogs such as parabolic flight as well as hypobaric plant growth chambers. Basic performance results obtained under these operational environments, as well as laboratory-based tests, are described. The Flex Imager has also been designed to be compatible with emerging suborbital platforms.

  11. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    PubMed

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
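
    The conclusion lends itself to a compact sketch. The following is an illustrative outline (not the authors' code) of descriptor-based alignment between two images assumed to have already been converted to linear radiant power maps; ORB descriptors stand in here for whichever exposure-robust descriptor is used:

    ```python
    import cv2
    import numpy as np

    def align_radiance_images(ref, mov):
        """Align mov to ref by matching feature descriptors on radiant power maps."""
        to8 = lambda im: cv2.normalize(im, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(to8(ref), None)
        k2, d2 = orb.detectAndCompute(to8(mov), None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # robust to mismatches
        return cv2.warpPerspective(mov, H, (ref.shape[1], ref.shape[0]))
    ```

    Because both inputs are in linear radiance units, the descriptors see comparable intensity structure despite the cameras' different exposure settings.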

  12. Image Alignment for Multiple Camera High Dynamic Range Microscopy

    PubMed Central

    Eastwood, Brian S.; Childs, Elisabeth C.

    2012-01-01

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera. PMID:22545028

  13. Wear Scar Similarities between Retrieved and Simulator-Tested Polyethylene TKR Components: An Artificial Neural Network Approach

    PubMed Central

    2016-01-01

    The aim of this study was to determine how representative wear scars of simulator-tested polyethylene (PE) inserts compare with retrieved PE inserts from total knee replacement (TKR). By means of a nonparametric self-organizing feature map (SOFM), wear scar images of 21 postmortem- and 54 revision-retrieved components were compared with six simulator-tested components that were tested either in displacement or in load control according to ISO protocols. The SOFM network was then trained with the wear scar images of postmortem-retrieved components, since those are considered well-functioning at the time of retrieval. Based on this training process, eleven clusters were established, suggesting considerable variability among wear scars despite an uncomplicated loading history inside their hosts. The remaining components (revision-retrieved and simulator-tested) were then assigned to these established clusters. Five of the six simulator components were clustered together, suggesting that the network was able to identify similarities in loading history. However, the simulator-tested components ended up in a cluster at the fringe of the map containing only 10.8% of retrieved components. This may suggest that current ISO testing protocols are not fully representative of this TKR population, and protocols that better resemble patients' gait after TKR, containing activities other than walking, may be warranted. PMID:27597955

  14. Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.

    1997-12-01

    Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to another, and performance often falls short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool, which enables application programmers to specify at a high level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables the efficient combination of parallel storage access routines and sequential image processing operations. This paper shows how processing- and I/O-intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP-specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP-specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.
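
    CAP itself is a dedicated specification tool, but the underlying idea of overlapping data access with computation can be illustrated in a few lines of Python; `load_tile` and `process_tile` below are hypothetical stand-ins for a storage access routine and a sequential image operation:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def load_tile(path):                  # I/O-bound stage (placeholder reader)
        with open(path, "rb") as f:
            return f.read()

    def process_tile(raw):                # CPU-bound stage (placeholder operation)
        return raw[::-1]

    def pipeline(paths, workers=4):
        """Overlap tile loading with processing: the executor keeps fetching
        upcoming tiles while the main thread processes the current one."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return [process_tile(raw) for raw in pool.map(load_tile, paths)]
    ```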

  15. KAMEDIN: a telemedicine system for computer supported cooperative work and remote image analysis in radiology.

    PubMed

    Handels, H; Busch, C; Encarnação, J; Hahn, C; Kühn, V; Miehe, J; Pöppl, S I; Rinast, E; Rossmanith, C; Seibert, F; Will, A

    1997-03-01

    The software system KAMEDIN (Kooperatives Arbeiten und MEdizinische Diagnostik auf Innovativen Netzen) is a multimedia telemedicine system for exchange, cooperative diagnostics, and remote analysis of digital medical image data. It provides components for visualisation, processing, and synchronised audio-visual discussion of medical images. Techniques of computer supported cooperative work (CSCW) synchronise user interactions during a teleconference. Visibility of both the local and remote cursor on the conference workstations facilitates telepointing and reinforces the conference partner's telepresence. Audio communication during teleconferences is supported by an integrated audio component. Furthermore, brain tissue segmentation with artificial neural networks can be performed on an external supercomputer as a remote image analysis procedure. KAMEDIN is designed as a low-cost CSCW tool for ISDN-based telecommunication. However, it can be used on any TCP/IP-supporting network. In a field test, KAMEDIN was installed in 15 clinics and medical departments to validate the system's usability. The telemedicine system KAMEDIN has been developed, tested, and evaluated within a research project sponsored by German Telekom.

  16. Digital staining for histopathology multispectral images by the combined application of spectral enhancement and spectral transformation.

    PubMed

    Bautista, Pinky A; Yagi, Yukako

    2011-01-01

    In this paper we introduce a digital staining method for histopathology images captured with an n-band multispectral camera. The method consists of two major processes: enhancement of the original spectral transmittance and transformation of the enhanced transmittance to its target spectral configuration. Enhancement is accomplished by shifting the original transmittance by the scaled difference between the original transmittance and the transmittance estimated with m dominant principal component (PC) vectors; the m PC vectors are determined from the transmittance samples of the background image. Transformation of the enhanced transmittance to the target spectral configuration is done using an n×n transformation matrix, which is derived by applying a least-squares method to the enhanced and target spectral training data samples of the different tissue components. Experimental results on the digital conversion of a hematoxylin and eosin (H&E) stained multispectral image to its Masson's trichrome (MT) stained equivalent show the viability of the method.
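
    Both steps reduce to small linear-algebra operations. A minimal numpy sketch under the assumption that rows are per-pixel transmittance spectra and `pcs` holds the m orthonormal background PC vectors (all names hypothetical):

    ```python
    import numpy as np

    def enhance(T_orig, pcs, alpha=1.0):
        """Shift each spectrum by the scaled residual left after projecting it
        onto the m dominant background PC vectors (pcs: m x n, orthonormal rows)."""
        T_hat = T_orig @ pcs.T @ pcs              # estimate from m PCs
        return T_orig + alpha * (T_orig - T_hat)  # residual-based enhancement

    def fit_transform_matrix(X_enh, Y_target):
        """Least-squares n x n matrix M mapping enhanced training spectra (rows
        of X_enh) to target-stain spectra (rows of Y_target)."""
        M, *_ = np.linalg.lstsq(X_enh, Y_target, rcond=None)
        return M      # digitally re-stain new pixels with X_new @ M
    ```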

  17. A secure online image trading system for untrusted cloud environments.

    PubMed

    Munadi, Khairul; Arnia, Fitri; Syaryadhi, Mohd; Fujiyoshi, Masaaki; Kiya, Hitoshi

    2015-01-01

    In conventional image trading systems, images are usually stored unprotected on a server, rendering them vulnerable to untrusted server providers and malicious intruders. This paper proposes a conceptual image trading framework that enables secure storage and retrieval over Internet services. The process involves three parties: an image publisher, a server provider, and an image buyer. The aim is to facilitate secure storage and retrieval of original images for commercial transactions, while preventing untrusted server providers and unauthorized users from gaining access to the true content. The framework exploits the Discrete Cosine Transform (DCT) coefficients and the moment invariants of images. Original images are visually protected in the DCT domain and stored on a repository server. Small representations of the original images, called thumbnails, are generated and made publicly accessible for browsing. When a buyer is interested in a thumbnail, he/she sends a query to retrieve the visually protected image. The thumbnails and protected images are matched using the DC component of the DCT coefficients and the moment invariant feature. After the matching process, the server returns the corresponding protected image to the buyer. However, the image remains visually protected unless a key is granted. Our target application is the online market, where publishers sell their stock images over the Internet using public cloud servers.
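
    The DC-based matching step is easy to sketch. Assuming, as in typical DCT-domain protection schemes, that scrambling touches only the AC coefficients, the per-block DC terms form a signature that survives protection; the helper names below are illustrative only:

    ```python
    import numpy as np
    from scipy.fftpack import dct

    def block_dc_signature(img, block=8):
        """DC coefficients of non-overlapping 8x8 DCT blocks of a grayscale image."""
        h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
        blocks = img[:h, :w].reshape(h // block, block, w // block, block)
        coeffs = dct(dct(blocks, axis=1, norm='ortho'), axis=3, norm='ortho')
        return coeffs[:, 0, :, 0].ravel()       # DC = top-left coefficient per block

    def best_match(query_sig, protected_sigs):
        """Index of the protected image whose DC signature is nearest the query's."""
        return int(np.argmin([np.linalg.norm(query_sig - s) for s in protected_sigs]))
    ```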

  18. Computer vision for microscopy diagnosis of malaria.

    PubMed

    Tek, F Boray; Dempster, Andrew G; Kale, Izzet

    2009-07-13

    This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose only partial solutions, and a critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective on future work toward the realization of automated microscopy diagnosis of malaria is provided.

  19. Infrared spectroscopic imaging for noninvasive detection of latent fingerprints.

    PubMed

    Crane, Nicole J; Bartick, Edward G; Perlman, Rebecca Schwartz; Huffman, Scott

    2007-01-01

    The capability of Fourier transform infrared (FTIR) spectroscopic imaging to provide detailed images of unprocessed latent fingerprints while also preserving important trace evidence is demonstrated. Unprocessed fingerprints were developed on various porous and nonporous substrates. Data-processing methods used to extract the latent fingerprint ridge pattern from the background material included basic infrared spectroscopic band intensities, addition and subtraction of band intensity measurements, principal components analysis (PCA) and calculation of second derivative band intensities, as well as combinations of these various techniques. Additionally, trace evidence within the fingerprints was recovered and identified.

  20. Sensing system for detection and control of deposition on pendant tubes in recovery and power boilers

    DOEpatents

    Kychakoff, George [Maple Valley, WA; Afromowitz, Martin A [Mercer Island, WA; Hogle, Richard E [Olympia, WA

    2008-10-14

    A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit-monitoring sensors operating in infrared regions of about 4 or 8.7 microns. These sensors either directly produce images of the interior of the boiler or feed signals to a data processing system that supplies information to the distributed control system by which the boilers are operated, enabling more efficient operation. The data processing system includes an image pre-processing circuit in which the 2-D image formed by the video input is captured, with a low-pass filter for noise filtering of the video input. It also includes an image compensation system that performs array compensation to correct for pixel variation, dead cells, and similar defects, and that corrects geometric distortion. An image segmentation module receives the cleaned image from the pre-processing circuit and separates the image of the recovery boiler interior into background, pendant tubes, and deposition. It performs thresholding/clustering on gray scale and texture, applies morphological transforms to smooth regions, and identifies regions by connected components. An image-understanding unit receives the segmented image from the segmentation module, matches the derived regions to a 3-D model of the boiler, derives the 3-D structure of the deposition on the pendant tubes, and provides the deposit information to the plant distributed control system for more efficient operation of the plant pendant tube cleaning and operating systems.
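
    The stages named in the patent map onto a conventional segmentation pipeline. As a rough software sketch only (the patent describes dedicated hardware, not this code), the same sequence in OpenCV:

    ```python
    import cv2

    def segment_boiler_frame(frame_gray):
        """Low-pass filtering, thresholding, morphological smoothing, and
        connected-component labeling, mirroring the stages described above."""
        smooth = cv2.GaussianBlur(frame_gray, (5, 5), 0)          # noise filtering
        _, mask = cv2.threshold(smooth, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # smooth regions
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        return labels, stats          # region labels and per-region statistics
    ```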

  1. An Intelligent Architecture Based on Field Programmable Gate Arrays Designed to Detect Moving Objects by Using Principal Component Analysis

    PubMed Central

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high-rate background segmentation of images. The classical sequential execution of the different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method, and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA and based on the developed PCA core. It consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image in the PCA linear subspace previously obtained as a background model. The proposal achieves a high processing rate (up to 120 frames per second) and high-quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406
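
    Before committing such a design to hardware, the algorithm is easy to validate in software. A minimal numpy sketch of the same idea (illustrative only, not the paper's fixed-point FPGA implementation):

    ```python
    import numpy as np

    def fit_background_subspace(frames, k=8):
        """Learn a k-dimensional PCA subspace from background-only frames
        (frames: N x P matrix, one flattened image per row)."""
        mean = frames.mean(axis=0)
        _, _, Vt = np.linalg.svd(frames - mean, full_matrices=False)
        return mean, Vt[:k]                      # top-k principal directions

    def detect_motion(frame, mean, basis, thresh):
        """Project the frame onto the background subspace; pixels the model
        cannot explain (large residuals) are flagged as motion."""
        x = frame.ravel() - mean
        recon = basis.T @ (basis @ x)            # reconstruction from subspace
        residual = np.abs(x - recon).reshape(frame.shape)
        return residual > thresh                 # boolean motion mask
    ```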

  2. LONI visualization environment.

    PubMed

    Dinov, Ivo D; Valentino, Daniel; Shin, Bae Cheol; Konstantinidis, Fotios; Hu, Guogang; MacKenzie-Graham, Allan; Lee, Erh-Fang; Shattuck, David; Ma, Jeff; Schwartz, Craig; Toga, Arthur W

    2006-06-01

    Over the past decade, the use of informatics to solve complex neuroscientific problems has increased dramatically. Many of these research endeavors involve examining large amounts of imaging, behavioral, genetic, neurobiological, and neuropsychiatric data. Superimposing, processing, visualizing, or interpreting such a complex cohort of datasets frequently becomes a challenge. We developed a new software environment that allows investigators to integrate multimodal imaging data, hierarchical brain ontology systems, on-line genetic and phylogenic databases, and 3D virtual data reconstruction models. The Laboratory of Neuro Imaging visualization environment (LONI Viz) consists of the following components: a sectional viewer for imaging data, an interactive 3D display for surface and volume rendering of imaging data, a brain ontology viewer, and an external database query system. The synchronization of all components according to stereotaxic coordinates, region name, hierarchical ontology, and genetic labels is achieved via a comprehensive BrainMapper functionality, which directly maps between position, structure name, database, and functional connectivity information. This environment is freely available, portable, and extensible, and may prove very useful for neurobiologists, neurogeneticists, brain mappers, and other clinical, pedagogical, and research endeavors.

  3. An intelligent architecture based on Field Programmable Gate Arrays designed to detect moving objects by using Principal Component Analysis.

    PubMed

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high-rate background segmentation of images. The classical sequential execution of the different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method, and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA and based on the developed PCA core. It consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image in the PCA linear subspace previously obtained as a background model. The proposal achieves a high processing rate (up to 120 frames per second) and high-quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices.

  4. A methodology for the semi-automatic digital image analysis of fragmental impactites

    NASA Astrophysics Data System (ADS)

    Chanou, A.; Osinski, G. R.; Grieve, R. A. F.

    2014-04-01

    A semi-automated digital image analysis method is developed for the comparative textural study of impact melt-bearing breccias. This method uses the freeware ImageJ developed by the National Institutes of Health (NIH). Digital image analysis is performed on scans of hand samples (10-15 cm across), based on macroscopic interpretations of the rock components. All image processing and segmentation are done semi-automatically, with the least possible manual intervention. The areal fraction of components is estimated and modal abundances can be deduced, where the physical optical properties (e.g., contrast, color) of the samples allow it. Other parameters that can be measured include, for example, clast size, clast-preferred orientations, average box-counting dimension or fragment shape complexity, and nearest neighbor distances (NnD). This semi-automated method allows the analysis of a larger number of samples in a relatively short time. Textures, granulometry, and shape descriptors are of considerable importance in rock characterization. The methodology is used to determine the variations in the physical characteristics of some examples of fragmental impactites.

  5. Varying-energy CT imaging method based on EM-TV

    NASA Astrophysics Data System (ADS)

    Chen, Ping; Han, Yan

    2016-11-01

    For complicated structural components with wide x-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot obtain all the structural information. This limitation results in a shortage of CT information because the effective thickness of the components along the direction of x-ray penetration exceeds the dynamic range of the x-ray imaging system. To address this problem, a varying-energy x-ray CT imaging method is proposed. In this new method, the tube voltage is stepped several times in small fixed increments. Next, grey-consistency fusion and logarithmic demodulation are applied to obtain a complete, lower-noise projection with a high dynamic range (HDR). In addition, to address the noise sensitivity of the analytical reconstruction method, EM-TV (expectation maximization-total variation) iterative reconstruction is used. In the process of iteration, the reconstruction result obtained at one x-ray energy is used as the initial condition of the next iteration. An accompanying experiment demonstrates that this EM-TV reconstruction can also extend the dynamic range of x-ray imaging systems and provide higher reconstruction quality relative to the fusion reconstruction method.
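
    One common form of EM-TV alternates a multiplicative ML-EM update with a TV denoising step. The sketch below is a simplified illustration of that scheme for a generic linear system y = Ax, not the paper's exact algorithm; the previous energy's reconstruction would be passed in as x0:

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def em_tv(A, y, x0, shape, n_iter=50, tv_weight=0.05):
        """Alternate ML-EM updates with TV regularization.
        A: system matrix (M x N), y: measured projections (M,),
        x0: initial image estimate flattened to (N,)."""
        x = x0.copy()
        sens = A.T @ np.ones_like(y)                    # sensitivity image
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, 1e-12)        # data-fidelity ratio
            x = x / np.maximum(sens, 1e-12) * (A.T @ ratio)     # EM update
            x = denoise_tv_chambolle(x.reshape(shape),
                                     weight=tv_weight).ravel()  # TV step
        return x.reshape(shape)
    ```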

  6. 77 FR 4059 - Certain Electronic Devices for Capturing and Transmitting Images, and Components Thereof; Receipt...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-26

    ... Images, and Components Thereof; Receipt of Complaint; Solicitation of Comments Relating to the Public... Devices for Capturing and Transmitting Images, and Components Thereof, DN 2869; the Commission is... importation of certain electronic devices for capturing and transmitting images, and components thereof. The...

  7. Finger crease pattern recognition using Legendre moments and principal component analysis

    NASA Astrophysics Data System (ADS)

    Luo, Rongfang; Lin, Tusheng

    2007-03-01

    The finger joint lines, defined as finger creases, and their distribution can identify a person. In this paper, we propose a new finger crease pattern recognition method based on Legendre moments and principal component analysis (PCA). After obtaining the region of interest (ROI) for each finger image in the pre-processing stage, Legendre moments under the Radon transform are applied to construct a moment feature matrix from the ROI, which greatly decreases the dimensionality of the ROI and represents the principal components of the finger creases quite well. Then, an approach to finger crease pattern recognition is designed based on the Karhunen-Loeve (K-L) transform. The method applies PCA to the moment feature matrix rather than the original image matrix to obtain the feature vector. The proposed method has been tested on a database of 824 images from 103 individuals using the nearest neighbor classifier. An accuracy of up to 98.584% is obtained when using 4 samples per class for training. The experimental results demonstrate that our proposed approach is feasible and effective in biometrics.
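
    For reference, the moment features themselves are inexpensive to compute. A minimal sketch of plain 2D Legendre moments of an ROI (the paper additionally applies them under the Radon transform, which is omitted here):

    ```python
    import numpy as np
    from numpy.polynomial.legendre import legval

    def legendre_moments(roi, order=10):
        """2D Legendre moments L_pq of an image mapped onto [-1, 1] x [-1, 1]."""
        h, w = roi.shape
        x, y = np.linspace(-1, 1, w), np.linspace(-1, 1, h)
        # Row p holds the degree-p Legendre polynomial sampled along the axis.
        Px = np.stack([legval(x, [0] * p + [1]) for p in range(order + 1)])
        Py = np.stack([legval(y, [0] * p + [1]) for p in range(order + 1)])
        norm = np.outer(2 * np.arange(order + 1) + 1,
                        2 * np.arange(order + 1) + 1) / 4.0
        dx, dy = 2.0 / (w - 1), 2.0 / (h - 1)            # pixel area on [-1, 1]^2
        return norm * dx * dy * (Py @ roi @ Px.T)        # (order+1)^2 moment matrix
    ```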

  8. An Open Source Low-Cost Automatic System for Image-Based 3d Digitization

    NASA Astrophysics Data System (ADS)

    Menna, F.; Nocerino, E.; Morabito, D.; Farella, E. M.; Perini, M.; Remondino, F.

    2017-11-01

    3D digitization of heritage artefacts, reverse engineering of industrial components, and rapid prototyping-driven design are key topics today. Indeed, millions of archaeological finds all over the world need to be surveyed in 3D, either to allow convenient investigation by researchers, because they are inaccessible to visitors and scientists, or, unfortunately, because they are seriously endangered by wars and terrorist attacks. On the other hand, in the case of industrial and design components there is often the need for deformation analyses or physical replicas starting from reality-based 3D digitisations. The paper is aligned with these needs and presents the realization of the ORION (arduinO Raspberry pI rOtating table for image based 3D reconstructioN) prototype system, with its hardware and software components, providing critical insights into its modular design. ORION is an image-based 3D reconstruction system based on automated photogrammetric acquisitions and processing. The system is being developed under a collaborative educational project between FBK Trento, the University of Trento, and internship programs with high schools in the Trentino province (Italy).

  9. A component-based system for agricultural drought monitoring by remote sensing.

    PubMed

    Dong, Heng; Li, Jun; Yuan, Yanbin; You, Lin; Chen, Chao

    2017-01-01

    In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, drought-related software and platform development lags behind the theoretical research. Current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially models based on multi-dimensional feature space. In view of the above problems, this paper aims to fill this gap with a component-based system named RSDMS to facilitate the application of drought monitoring by remote sensing. The system is designed and developed based on the Component Object Model (COM) to ensure the flexibility and extendibility of modules. RSDMS realizes general image-related functions such as data management, image display, spatial reference management, image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China's Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provides efficient and scalable support to agricultural drought monitoring.

  10. A component-based system for agricultural drought monitoring by remote sensing

    PubMed Central

    Yuan, Yanbin; You, Lin; Chen, Chao

    2017-01-01

    In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, drought-related software and platform development lags behind the theoretical research. Current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially models based on multi-dimensional feature space. In view of the above problems, this paper aims to fill this gap with a component-based system named RSDMS to facilitate the application of drought monitoring by remote sensing. The system is designed and developed based on the Component Object Model (COM) to ensure the flexibility and extendibility of modules. RSDMS realizes general image-related functions such as data management, image display, spatial reference management, image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China's Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provides efficient and scalable support to agricultural drought monitoring. PMID:29236700

  11. Extracting morphologies from third harmonic generation images of structurally normal human brain tissue.

    PubMed

    Zhang, Zhiqing; Kuzmin, Nikolay V; Groot, Marie Louise; de Munck, Jan C

    2017-06-01

    The morphologies contained in 3D third harmonic generation (THG) images of human brain tissue can report on the pathological state of the tissue. However, the complexity of THG brain images makes the use of modern image processing tools, especially those for image filtering, segmentation, and validation, to extract this information challenging. We developed a salient edge-enhancing model of anisotropic diffusion for image filtering, based on higher-order statistics. We split the intrinsic 3-phase segmentation problem into two 2-phase segmentation problems, each of which we solved with a dedicated model, an active contour weighted by prior extreme. We applied the proposed algorithms to THG images of structurally normal ex-vivo human brain tissue, revealing key tissue components (brain cells, microvessels, and neuropil) and enabling statistical characterization of these components. Comprehensive comparison to manually delineated ground truth validated the proposed algorithms. Quantitative comparison to second harmonic generation/auto-fluorescence images, acquired simultaneously from the same tissue area, confirmed the correctness of the main THG features detected. The software and test datasets are available from the authors.

  12. In-process 3D assessment of micromoulding features

    NASA Astrophysics Data System (ADS)

    Whiteside, B. R.; Spares, R.; Coates, P. D.

    2006-04-01

    Micro injection moulding (micromoulding) technology has recently emerged as a viable manufacturing route for polymer, metal and ceramic components with micro-scale features and surface textures. With a cycle time for production of a single component of just a few seconds, the process offers the capability for mass production of microscale devices at a low marginal cost. However, the extreme stresses, strain rates and temperature gradients characteristic of the process mean that a slight fluctuation in material properties or moulding conditions can have a significant impact on the dimensional or structural properties of the resulting component, and in-line process monitoring is highly desirable. This paper describes the development of an in-process, high-speed 3-dimensional measurement system for evaluation of every component manufactured during the process. A high-speed camera and microscope lens coupled with a linear stage are used to create a stack of images, which are subsequently processed using extended depth of field techniques to form a virtual 3-dimensional contour of the component. This data can then be used to visually verify the quality of the moulding on-screen, or standard machine vision algorithms can be employed to allow fully automated quality inspection and filtering of sub-standard products. Good results have been obtained for a range of materials and geometries, and measurement accuracy has been verified through comparison with data obtained using a Wyko NT1100 white light interferometer.
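
    The extended depth of field step has a simple core: pick, per pixel, the stack slice where a local focus measure peaks. A minimal sketch under that assumption, using a Laplacian-energy focus measure (the paper's exact measure is not specified here):

    ```python
    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def focus_stack(stack, z_positions, win=9):
        """All-in-focus composite plus height map from a focal image stack.
        stack: (Z, H, W) grayscale frames captured at heights z_positions."""
        sharp = np.stack([uniform_filter(laplace(f.astype(float)) ** 2, win)
                          for f in stack])         # local focus measure per slice
        best = sharp.argmax(axis=0)                # sharpest slice per pixel
        rows, cols = np.indices(best.shape)
        composite = stack[best, rows, cols]        # extended depth of field image
        height = np.asarray(z_positions)[best]     # 3-D surface contour
        return composite, height
    ```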

  13. An improved image alignment procedure for high-resolution transmission electron microscopy.

    PubMed

    Lin, Fang; Liu, Yan; Zhong, Xiaoyan; Chen, Jianghua

    2010-06-01

    Image alignment is essential for image processing methods such as through-focus exit-wavefunction reconstruction and image averaging in high-resolution transmission electron microscopy. Relative image displacements exist in any experimentally recorded image series due to specimen drift and image shifts, hence image alignment to correct the image displacements has to be done prior to any further image processing. The image displacement between two successive images is determined from the correlation function of the two relatively shifted images. Here it is shown that more accurate image alignment can be achieved by using an appropriate aperture to filter out the high-frequency components of the images being aligned, especially for a crystalline specimen with little non-periodic information. For image series of crystalline specimens with little amorphous material, the radius of the filter aperture should be as small as possible, so long as it covers the innermost lattice reflections. Testing with an experimental through-focus series of Si[110] images, the accuracies of image alignment with different correlation functions are compared with respect to the error functions in through-focus exit-wavefunction reconstruction based on the maximum-likelihood method. Testing with image averaging over noisy experimental images from graphene and carbon-nanotube samples, clear and sharp crystal lattice fringes are recovered after applying optimal image alignment.
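
    The aperture-filtered correlation is straightforward to express in the Fourier domain. A minimal numpy sketch of the idea (the radius and correlation variant are placeholders, not the paper's exact settings):

    ```python
    import numpy as np

    def estimate_shift(img_a, img_b, radius):
        """Displacement between two images from the peak of their cross-correlation,
        computed after masking out spatial frequencies beyond the given radius."""
        Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
        fy = np.fft.fftfreq(img_a.shape[0])[:, None] * img_a.shape[0]
        fx = np.fft.fftfreq(img_a.shape[1])[None, :] * img_a.shape[1]
        aperture = (fx ** 2 + fy ** 2) <= radius ** 2    # low-pass filter aperture
        xcorr = np.fft.ifft2(Fa * np.conj(Fb) * aperture).real
        peak = np.unravel_index(xcorr.argmax(), xcorr.shape)
        return tuple(p if p <= s // 2 else p - s         # wrap to signed offsets
                     for p, s in zip(peak, xcorr.shape))
    ```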

  14. Automatic Image Processing Workflow for the Keck/NIRC2 Vortex Coronagraph

    NASA Astrophysics Data System (ADS)

    Xuan, Wenhao; Cook, Therese; Ngo, Henry; Zawol, Zoe; Ruane, Garreth; Mawet, Dimitri

    2018-01-01

    The Keck/NIRC2 camera, equipped with the vortex coronagraph, is an instrument targeted at the high contrast imaging of extrasolar planets. To uncover a faint planet signal from the overwhelming starlight, we utilize the Vortex Image Processing (VIP) library, which carries out principal component analysis to model and remove the stellar point spread function. To bridge the gap between data acquisition and data reduction, we implement a workflow that 1) downloads, sorts, and processes data with VIP, 2) stores the analysis products into a database, and 3) displays the reduced images, contrast curves, and auxiliary information on a web interface. Both angular differential imaging and reference star differential imaging are implemented in the analysis module. A real-time version of the workflow runs during observations, allowing observers to make educated decisions about time distribution on different targets, hence optimizing science yield. The post-night version performs a standardized reduction after the observation, building up a valuable database that not only helps uncover new discoveries, but also enables a statistical study of the instrument itself. We present the workflow, and an examination of the contrast performance of the NIRC2 vortex with respect to factors including target star properties and observing conditions.

  15. On techniques for angle compensation in nonideal iris recognition.

    PubMed

    Schuckers, Stephanie A C; Schmid, Natalia A; Abhyankar, Aditya; Dorairaj, Vivekanand; Boyce, Christopher K; Hornak, Lawrence A

    2007-10-01

    The popularity of the iris biometric has grown considerably over the past two to three years. Most research has been focused on the development of new iris processing and recognition algorithms for frontal-view iris images. However, a few challenging directions in iris research have been identified, including processing of a nonideal iris and iris at a distance. In this paper, we describe two nonideal iris recognition systems and analyze their performance. The word "nonideal" is used in the sense of compensating for off-angle, occluded iris images. The system is designed to process nonideal iris images in two steps: 1) compensation for off-angle gaze direction and 2) processing and encoding of the rotated iris image. Two approaches are presented to account for angular variations in the iris images. In the first approach, we use Daugman's integrodifferential operator as an objective function to estimate the gaze direction. After the angle is estimated, the off-angle iris image undergoes geometric transformations involving the estimated angle and is further processed as if it were a frontal-view image. The encoding technique developed for a frontal image is based on the application of global independent component analysis. The second approach uses an angular deformation calibration model. The angular deformations are modeled, and calibration parameters are calculated. The proposed method consists of a closed-form solution, followed by an iterative optimization procedure. The images are projected onto the plane closest to the base calibrated plane. Biorthogonal wavelets are used for encoding to perform iris recognition. We use a special dataset of off-angle iris images to quantify the performance of the designed systems. A series of receiver operating characteristics demonstrates various effects on the performance of the nonideal-iris-based recognition system.

  16. Challenges of implementing digital technology in motion picture distribution and exhibition: testing and evaluation methodology

    NASA Astrophysics Data System (ADS)

    Swartz, Charles S.

    2003-05-01

    The process of distributing and exhibiting a motion picture has changed little since the Lumière brothers presented the first motion picture to an audience in 1895. While this analog photochemical process is capable of producing screen images of great beauty and expressive power, more often the consumer experience is diminished by third generation prints and by the wear and tear of the mechanical process. Furthermore, the film industry globally spends approximately $1B annually manufacturing and shipping prints. Alternatively, distributing digital files would theoretically yield great benefits in terms of image clarity and quality, lower cost, greater security, and more flexibility in the cinema (e.g., multiple language versions). In order to understand the components of the digital cinema chain and evaluate the proposed technical solutions, the Entertainment Technology Center at USC in 2000 established the Digital Cinema Laboratory as a critical viewing environment, with the highest quality film and digital projection equipment. The presentation describes the infrastructure of the Lab, test materials, and testing methodologies developed for compression evaluation, and lessons learned up to the present. In addition to compression, the Digital Cinema Laboratory plans to evaluate other components of the digital cinema process as well.

  17. Acousto-optic laser projection systems for displaying TV information

    NASA Astrophysics Data System (ADS)

    Gulyaev, Yu V.; Kazaryan, M. A.; Mokrushin, Yu M.; Shakin, O. V.

    2015-04-01

    This review addresses various approaches to television projection imaging on large screens using lasers. Results are presented of theoretical and experimental studies of an acousto-optic projection system operating on the principle of projecting an image of an entire amplitude-modulated television line in a single laser pulse. We consider characteristic features of image formation in such a system and the requirements for its individual components. Particular attention is paid to nonlinear distortions of the image signal, which show up most severely at low modulation signal frequencies. We discuss the feasibility of improving the process efficiency and image quality using acousto-optic modulators and pulsed lasers. Real-time projectors with pulsed line imaging can be used for controlling high-intensity laser radiation.

  18. Quality of care indicators in inflammatory bowel disease in a tertiary referral center with open access and objective assessment policies.

    PubMed

    Gonczi, Lorant; Kurti, Zsuzsanna; Golovics, Petra Anna; Lovasz, Barbara Dorottya; Menyhart, Orsolya; Seres, Anna; Sumegi, Liza Dalma; Gal, Alexander; Ilias, Akos; Janos, Papp; Gecse, Krisztina Barbara; Bessisow, Talat; Afif, Waqqas; Bitton, Alain; Vegh, Zsuzsanna; Lakatos, Peter Laszlo

    2018-01-01

    In the management of inflammatory bowel diseases, there is considerable variation in quality of care. The aim of this study was to evaluate structural, access/process components and outcome quality indicators in our tertiary referral IBD center. In the first phase, structural/process components were assessed, followed by a second phase of formal evaluation of access and management on a set of consecutive IBD patients with and without active disease (248 CD/125 UC patients, median age 35/39 years). Structural/process components of our IBD center met the international recommendations. At or around the time of diagnosis, the usual procedures were full colonoscopy in all patients, with ileocolonoscopy/gastroscopy/CT/MRI in 81.8/45.5/66.1/49.6% of CD patients. A total of 86.7% of CD patients had some follow-up imaging evaluation or endoscopy. The median waiting time for non-emergency endoscopy/CT/MRI was 16/14/22 days. During the observational period, patients with flares (CD/UC: 50.6/54.6%) were seen by a specialist at the IBD clinic within a median of 1 day, with same-day laboratory assessment, abdominal US, CT scan/surgical consult, and change in therapy if needed. Surgery and hospitalization rates were 20.1/1.4% and 17.3/3.2% of CD/UC patients. Our results highlight that the structural components and processes applied in our center are in line with international recommendations, including an open clinic concept and fast-track access to specialist consultation, endoscopy and imaging.

  19. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to perform indexing of the appropriate object-centered model from image-centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state space vector where fields in the vector correspond to ordered component objects and relative, object-based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image-based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object-based and the image-based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image-based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition-by-components. It also seems to support Marr's notions of hierarchical indexing (i.e., the specificity, adjunct, and parent indices). It supports the notion that multiple canonical views of an object may have to be stored in memory to enable its efficient identification. The use of variable fields in the state space vectors appears to keep the number of required nodes in the network down to a tractable number while imposing a semantic value on different areas of the state space. This semantic imposition supports an interface between the analogical aspects of neural networks and the propositional paradigms of symbolic processing.

  20. Design of Instrument Control Software for Solar Vector Magnetograph at Udaipur Solar Observatory

    NASA Astrophysics Data System (ADS)

    Gosain, Sanjay; Venkatakrishnan, P.; Venugopalan, K.

    2004-04-01

    A magnetograph is an instrument which measures the solar magnetic field by measuring Zeeman-induced polarization in solar spectral lines. A typical filter-based magnetograph has three main modules, namely a polarimeter, a narrow-band spectrometer (filter), and an imager (CCD camera). For successful operation of the magnetograph it is essential that these modules work in synchronization with each other. Here, we describe the design of the instrument control system implemented for the Solar Vector Magnetograph under development at Udaipur Solar Observatory. The control software is written in Visual Basic and exploits Component Object Model (COM) components for fast and flexible application development. The user can interact with the instrument modules through a Graphical User Interface (GUI) and can program the sequence of magnetograph operations. The integration of Interactive Data Language (IDL) ActiveX components in the interface provides a powerful tool for online visualization, analysis and processing of images.

  1. Department of Defense picture archiving and communication system acceptance testing: results and identification of problem components.

    PubMed

    Allison, Scott A; Sweet, Clifford F; Beall, Douglas P; Lewis, Thomas E; Monroe, Thomas

    2005-09-01

    The PACS implementation process is complicated, requiring a tremendous amount of time, resources, and planning. The Department of Defense (DOD) has significant experience in developing and refining PACS acceptance testing (AT) protocols that assure contract compliance, clinical safety, and functionality. The DOD's AT experience under the initial Medical Diagnostic Imaging Support System contract led to the current Digital Imaging Network-Picture Archiving and Communications Systems (DIN-PACS) contract AT protocol. To identify the most common system and component deficiencies under the current DIN-PACS AT protocol, 14 tri-service sites were evaluated during 1998-2000. Sixteen system deficiency citations with 154 separate types of limitations were noted, with problems involving the workstation, interfaces, and the Radiology Information System comprising more than 50% of the citations. Larger PACS deployments were associated with a higher number of deficiencies. The most commonly cited system deficiencies involved some of the most expensive components of the PACS.

  2. 76 FR 59737 - In the Matter of Certain Digital Photo Frames and Image Display Devices and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-27

    ... Frames and Image Display Devices and Components Thereof; Notice of Institution of Investigation... United States after importation of certain digital photo frames and image display devices and components... certain digital photo frames and image display devices and components thereof that infringe one or more of...

  3. Quantitative Image Feature Engine (QIFE): an Open-Source, Modular Engine for 3D Quantitative Feature Extraction from Volumetric Medical Images.

    PubMed

    Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy

    2017-10-06

    The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to GitHub, and (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using one core, and in 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
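
    The stage-pipeline-with-swappable-components pattern and object-level parallelism are easy to picture in code. A toy Python sketch of the pattern only (QIFE itself is MATLAB; every stage name below is hypothetical):

    ```python
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    # Each stage is a callable that takes and returns a dict, so the pipeline
    # is a composition chosen at run time: swap a stage to customize a run.
    def input_stage(case_id):
        return {"case": case_id, "voxels": np.random.rand(64, 64, 16)}  # stand-in loader

    def preprocess_stage(obj):
        v = obj["voxels"]
        obj["voxels"] = (v - v.mean()) / v.std()       # intensity normalization
        return obj

    def feature_stage(obj):
        v = obj["voxels"]
        return {"case": obj["case"], "p90": float(np.percentile(v, 90))}

    PIPELINE = [input_stage, preprocess_stage, feature_stage]

    def run_one(case_id):
        obj = case_id
        for stage in PIPELINE:                         # fixed order, swappable parts
            obj = stage(obj)
        return obj

    def run_all(case_ids, workers=4):
        """Object-level parallelism: each tumor flows through the whole
        pipeline independently on its own worker process."""
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(run_one, case_ids))
    ```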

  4. Effective use of principal component analysis with high resolution remote sensing data to delineate hydrothermal alteration and carbonate rocks

    NASA Technical Reports Server (NTRS)

    Feldman, Sandra C.

    1987-01-01

    Methods of applying principal component (PC) analysis to high-resolution remote sensing imagery were examined. Using Airborne Imaging Spectrometer (AIS) data, PC analysis was found to be useful for removing the effects of albedo and noise and for isolating the significant information on argillic alteration, zeolite, and carbonate minerals. An effective technique for PC analysis uses as input the first 16 AIS bands, 7 intermediate bands, and the last 16 AIS bands from the 32 flat-field-corrected bands between 2048 and 2337 nm. Most of the significant mineralogical information resided in the second PC. PC color composites and density-sliced images provided a good mineralogical separation when applied to an AIS data set. Although computer intensive, the advantage of PC analysis is that it employs algorithms which already exist on most image processing systems.

  5. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide-field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images are consistent and show that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously developed strata-based DV restoration algorithm demonstrates that the proposed method improves performance by 50% in terms of accuracy and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
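
    The underlying representation idea, compressing a stack of depth-varying PSFs into a few orthonormal basis images, can be sketched with a plain SVD (this mirrors the PCA representation only, not the paper's full restoration algorithm):

    ```python
    import numpy as np

    def psf_basis(psf_stack, k=4):
        """Compress depth-variant PSFs: flatten each depth's PSF into a row,
        keep the top-k SVD components as an orthonormal basis."""
        Z, H, W = psf_stack.shape
        M = psf_stack.reshape(Z, H * W)
        mean = M.mean(axis=0)
        U, S, Vt = np.linalg.svd(M - mean, full_matrices=False)
        coeffs = U[:, :k] * S[:k]          # per-depth expansion coefficients
        return mean, Vt[:k], coeffs        # PSF(z) ~ mean + coeffs[z] @ basis

    def psf_at_depth(mean, basis, coeffs, z, shape):
        """Reconstruct the PSF at depth index z from the compact representation."""
        return (mean + coeffs[z] @ basis).reshape(shape)
    ```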

  6. Magnetoencephalographic imaging of deep corticostriatal network activity during a rewards paradigm.

    PubMed

    Kanal, Eliezer Y; Sun, Mingui; Ozkurt, Tolga E; Jia, Wenyan; Sclabassi, Robert

    2009-01-01

    The human rewards network is a complex system spanning both cortical and subcortical regions. While much is known about the functions of the various components of the network, research on the behavior of the network as a whole has been stymied due to an inability to detect signals at a high enough temporal resolution from both superficial and deep network components simultaneously. In this paper, we describe the application of magnetoencephalographic imaging (MEG) combined with advanced signal processing techniques to this problem. Using data collected while subjects performed a rewards-related gambling paradigm demonstrated to activate the rewards network, we were able to identify neural signals which correspond to deep network activity. We also show that this signal was not observable prior to filtration. These results suggest that MEG imaging may be a viable tool for the detection of deep neural activity.

  7. Image processing analysis of traditional Gestalt vision experiments

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2002-06-01

    In the late 19th century, Gestalt psychology rebelled against the popular new science of psychophysics. The Gestalt revolution used many fascinating visual examples to illustrate that the whole is greater than the sum of its parts. Color constancy was an important example. The physical interpretation of sensations and their quantification by JNDs and Weber fractions were met with innumerable examples in which two 'identical' physical stimuli did not look the same. The fact that large changes in the color of the illumination failed to change color appearance in real scenes demanded something more than quantifying the psychophysical response of a single pixel. The debate continues today between proponents of physical, pixel-based colorimetry and perceptual, image-based cognitive interpretations. Modern instrumentation has made colorimetric pixel measurement universal. As well, new examples of unconscious inference continue to be reported in the literature. Image processing provides a new way of analyzing familiar Gestalt displays. Since the pioneering experiments by Fergus Campbell and Land, we know that human vision has independent spatial channels and independent color channels. Color matching data from color constancy experiments agree with spatial comparison analysis. In this analysis, simple spatial processes can explain the different appearances of 'identical' stimuli by analyzing the multiresolution spatial properties of their surrounds. Benary's Cross, White's Effect, the Checkerboard Illusion and the Dungeon Illusion can all be understood by the analysis of their low-spatial-frequency components. Just as with color constancy, these Gestalt images are most simply described by the analysis of spatial components. Simple spatial mechanisms account for the appearance of 'identical' stimuli in complex scenes; calculating appearances in familiar Gestalt experiments does not require complex cognitive processes.

  8. Low-rank plus sparse decomposition for exoplanet detection in direct-imaging ADI sequences. The LLSG algorithm

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, C. A.; Absil, O.; Absil, P.-A.; Van Droogenbroeck, M.; Mawet, D.; Surdej, J.

    2016-05-01

    Context. Data processing constitutes a critical component of high-contrast exoplanet imaging. Its role is almost as important as the choice of a coronagraph or a wavefront control system, and it is intertwined with the chosen observing strategy. Among the data processing techniques for angular differential imaging (ADI), the most recent is the family of algorithms based on principal component analysis (PCA), a widely used statistical tool developed during the first half of the past century. PCA serves, in this case, as a subspace projection technique for constructing a reference point spread function (PSF) that can be subtracted from the science data to boost the detectability of potential companions present in the data. Unfortunately, when building this reference PSF from the science data itself, PCA comes with certain limitations, such as the sensitivity of the lower-dimensional orthogonal subspace to non-Gaussian noise. Aims: Inspired by recent advances in machine learning algorithms such as robust PCA, we propose a localized subspace projection technique that surpasses current PCA-based post-processing algorithms in terms of the detectability of companions at near real-time speed, a quality that will be useful for future direct imaging surveys. Methods: We used randomized low-rank approximation methods recently proposed in the machine learning literature, coupled with entry-wise thresholding, to decompose an ADI image sequence locally into low-rank, sparse, and Gaussian noise components (LLSG). This local three-term decomposition separates the starlight and the associated speckle noise from the planetary signal, which mostly remains in the sparse term. We tested the performance of the new algorithm on a long ADI sequence obtained on β Pictoris with VLT/NACO. Results: Compared to a standard PCA approach, LLSG decomposition reaches a higher signal-to-noise ratio and has an overall better performance in the receiver operating characteristic space. The detectability boost over full-frame standard PCA is especially pronounced in the small inner working angle region, where complex speckle noise prevents PCA from discerning true companions from noise.
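
    A heavily simplified numpy sketch of a three-term split in the spirit of LLSG, using a truncated SVD for the low-rank part and entry-wise thresholding for the sparse part; it operates globally rather than on the local annular patches of the actual algorithm, and the rank and threshold values are illustrative.

    ```python
    import numpy as np

    def low_rank_sparse(frames, rank=5, thresh=3.0):
        # frames: (n_frames, ny, nx) ADI sequence; flatten each frame to a row.
        n, ny, nx = frames.shape
        m = frames.reshape(n, -1).astype(float)
        u, s, vt = np.linalg.svd(m, full_matrices=False)
        low = (u[:, :rank] * s[:rank]) @ vt[:rank]   # starlight + speckle term
        resid = m - low
        sigma = resid.std()
        # Entry-wise threshold: large residuals go to the sparse (planet) term.
        sparse = np.where(np.abs(resid) > thresh * sigma, resid, 0.0)
        noise = resid - sparse                        # approx. Gaussian term
        return (low.reshape(n, ny, nx),
                sparse.reshape(n, ny, nx),
                noise.reshape(n, ny, nx))
    ```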

  9. Innovations in Small-Animal PET/MR Imaging Instrumentation.

    PubMed

    Tsoumpas, Charalampos; Visvikis, Dimitris; Loudos, George

    2016-04-01

    Multimodal imaging has led to a more detailed exploration of different physiologic processes, with integrated PET/MR imaging being the most recent entry. Although the clinical need is still questioned, it is well recognized that PET/MR represents one of the most active and promising fields of medical imaging research in terms of both software and hardware. Hardware developments have moved from small detector components to high-performance PET inserts and new concepts in full systems, while software efforts focus on performing the necessary corrections efficiently without the use of CT data. The most recent developments in both directions are reviewed. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Homographic Patch Feature Transform: A Robustness Registration for Gastroscopic Surgery.

    PubMed

    Hu, Weiling; Zhang, Xu; Wang, Bin; Liu, Jiquan; Duan, Huilong; Dai, Ning; Si, Jianmin

    2016-01-01

    Image registration is a key component of computer assistance in image-guided surgery, and it is a challenging topic in endoscopic environments. In this study, we present an image registration method named Homographic Patch Feature Transform (HPFT) for matching gastroscopic images. HPFT can be used for lesion tracking and augmented-reality applications during gastroscopy. Furthermore, an overall evaluation scheme is proposed to validate the precision, robustness and uniformity of the registration results, which provides a standard for rejecting false matching pairs from the corresponding results. Finally, HPFT is applied to in vivo gastroscopic data. The experimental results show that HPFT has stable performance in gastroscopic applications.

  11. An automated dose tracking system for adaptive radiation therapy.

    PubMed

    Liu, Chang; Kim, Jinkoo; Kumarasiri, Akila; Mayyas, Essa; Brown, Stephen L; Wen, Ning; Siddiqui, Farzan; Chetty, Indrin J

    2018-02-01

    The implementation of adaptive radiation therapy (ART) into routine clinical practice is technically challenging and requires significant resources to perform and validate each process step. The objective of this report is to identify the key components of ART, to illustrate how a specific automated procedure improves efficiency, and to facilitate the routine clinical application of ART. Patient image data were exported from a clinical database and converted to an intermediate format for point-wise dose tracking and accumulation. The process was automated using in-house developed software containing three modularized components: an ART engine, user interactive tools, and integration tools. The ART engine conducts computing tasks using the following modules: data importing, image pre-processing, dose mapping, dose accumulation, and reporting. In addition, custom graphical user interfaces (GUIs) were developed to allow user interaction with select processes such as deformable image registration (DIR). A commercial scripting application programming interface was used to incorporate automated dose calculation for application in routine treatment planning. Each module was considered an independent program, written in C++ or C#, running in a distributed Windows environment, scheduled and monitored by integration tools. The automated tracking system was retrospectively evaluated for 20 patients with prostate cancer and 96 patients with head and neck cancer, under institutional review board (IRB) approval. In addition, the system was evaluated prospectively using 4 patients with head and neck cancer. Altogether, 780 prostate dose fractions and 2586 head and neck cancer dose fractions were processed, including DIR and dose mapping. On average, daily cumulative dose was computed in 3 h, and manual work was limited to 13 min per case, with approximately 10% of cases requiring an additional 10 min for image registration refinement. An efficient and convenient dose tracking system for ART in the clinical setting is presented. The software and automated processes were rigorously evaluated and validated using patient image datasets. Automation of the various procedures has improved efficiency significantly, allowing for the routine clinical application of ART for improving radiation therapy effectiveness. Copyright © 2017 Elsevier B.V. All rights reserved.
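
    A minimal 2D sketch of the dose mapping and accumulation step, assuming displacement fields from DIR are already available; the in-house system described above is written in C++/C#, so this numpy/scipy fragment is purely illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def accumulate_dose(daily_doses, deformations):
        # daily_doses: list of (ny, nx) fraction dose grids.
        # deformations: matching list of (2, ny, nx) displacement fields that
        # map reference-image coordinates into each fraction's frame (from DIR).
        total = np.zeros(daily_doses[0].shape, dtype=float)
        grid = np.indices(total.shape).astype(float)
        for dose, disp in zip(daily_doses, deformations):
            coords = grid + disp   # where each reference voxel lands
            total += map_coordinates(dose, coords, order=1, mode="nearest")
        return total               # point-wise cumulative dose on the reference
    ```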

  12. Comparison of remote sensing image processing techniques to identify tornado damage areas from Landsat TM data

    USGS Publications Warehouse

    Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.

    2008-01-01

    Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event, in both near real-time and post-event analyses. This paper aims to compare the accuracy of common image processing techniques to detect tornado damage tracks from Landsat TM data. We employed the direct change detection approach, using two sets of images acquired before and after the tornado event to produce principal component composite images and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient, calculated from error matrices which cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.

  13. Comparison of Remote Sensing Image Processing Techniques to Identify Tornado Damage Areas from Landsat TM Data

    PubMed Central

    Myint, Soe W.; Yuan, May; Cerveny, Randall S.; Giri, Chandra P.

    2008-01-01

    Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event, in both near real-time and post-event analyses. This paper aims to compare the accuracy of common image processing techniques to detect tornado damage tracks from Landsat TM data. We employed the direct change detection approach, using two sets of images acquired before and after the tornado event to produce principal component composite images and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient, calculated from error matrices which cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. PMID:27879757
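
    A hedged numpy sketch of the two change detection inputs named in these records, image-difference bands and a principal component composite of the stacked pre- and post-event bands; the band arrays are placeholders.

    ```python
    import numpy as np

    def change_inputs(pre, post, n_pcs=3):
        # pre, post: (bands, ny, nx) co-registered Landsat TM stacks.
        diff = post.astype(float) - pre.astype(float)        # difference bands
        stacked = np.concatenate([pre, post]).astype(float)  # temporal stack
        b, ny, nx = stacked.shape
        x = stacked.reshape(b, -1)
        x -= x.mean(axis=1, keepdims=True)
        evals, evecs = np.linalg.eigh(np.cov(x))
        order = np.argsort(evals)[::-1][:n_pcs]              # leading PCs
        pcs = (evecs[:, order].T @ x).reshape(n_pcs, ny, nx) # PC composite
        return diff, pcs

    # Either product can then feed a supervised, unsupervised, or
    # object-oriented classifier as in the comparison above.
    ```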

  14. Image formation of thick three-dimensional objects in differential-interference-contrast microscopy.

    PubMed

    Trattner, Sigal; Kashdan, Eugene; Feigin, Micha; Sochen, Nir

    2014-05-01

    The differential-interference-contrast (DIC) microscope is of widespread use in life sciences as it enables noninvasive visualization of transparent objects. The goal of this work is to model the image formation process of thick three-dimensional objects in DIC microscopy. The model is based on the principles of electromagnetic wave propagation and scattering. It simulates light propagation through the components of the DIC microscope to the image plane using a combined geometrical and physical optics approach and replicates the DIC image of the illuminated object. The model is evaluated by comparing simulated images of three-dimensional spherical objects with the recorded images of polystyrene microspheres. Our computer simulations confirm that the model captures the major DIC image characteristics of the simulated object, and it is sensitive to the defocusing effects.

  15. V-Sipal - a Virtual Laboratory for Satellite Image Processing and Analysis

    NASA Astrophysics Data System (ADS)

    Buddhiraju, K. M.; Eeti, L.; Tiwari, K. K.

    2011-09-01

    In this paper a virtual laboratory for Satellite Image Processing and Analysis (v-SIPAL), being developed at the Indian Institute of Technology Bombay, is described. v-SIPAL comprises a set of experiments that are normally carried out by students learning digital processing and analysis of satellite images using commercial software. Currently, the experiments available on the server include Image Viewer, Image Contrast Enhancement, Image Smoothing, Edge Enhancement, Principal Component Transform, Texture Analysis by the Co-occurrence Matrix method, Image Indices, Color Coordinate Transforms, Fourier Analysis, Mathematical Morphology, Unsupervised Image Classification, Supervised Image Classification and Accuracy Assessment. The virtual laboratory includes a theory module for each option of every experiment, a description of the procedure to perform each experiment, the menu to choose and perform the experiment, a module on interpretation of results when performed with a given image and pre-specified options, a bibliography, links to useful internet resources, and user feedback. Users can upload their own images for performing the experiments and can also reuse outputs of one experiment in another experiment where applicable. Some of the other experiments currently under development include georeferencing of images, data fusion, feature evaluation by divergence and J-M distance, image compression, wavelet image analysis and change detection. Additions to the theory module include self-assessment quizzes, audio-video clips on selected concepts, and a discussion of elements of visual image interpretation. v-SIPAL is at the stage of internal evaluation within IIT Bombay and will soon be open to selected educational institutions in India for evaluation.

  16. Manufacture of micro fluidic devices by laser welding using thermal transfer printing techniques

    NASA Astrophysics Data System (ADS)

    Klein, R.; Klein, K. F.; Tobisch, T.; Thoelken, D.; Belz, M.

    2016-03-01

    Micro-fluidic devices are widely used today in the areas of medical diagnostics and drug research, as well as for applications within the process, electronics and chemical industries. Microliters of fluids or single cell-to-cell interactions can be conveniently analyzed with such devices using fluorescence imaging, phase contrast microscopy or spectroscopic techniques. Typical micro-fluidic devices consist of a thermoplastic base component with chambers and channels, covered by a hermetically sealed, fluid- and gas-tight lid component. Both components are usually made from the same or similar thermoplastic materials. Different mechanical, adhesive or thermal joining processes can be used to assemble the base component and lid. Today, laser beam welding shows the potential to become a novel manufacturing option for midsize and large-scale production of micro-fluidic devices, offering excellent processing quality through localized heat input and low thermal stress to the device during processing. For laser welding, the optical absorption of the resin has to be matched to the laser wavelength for proper joining. This paper focuses on a new approach to preparing micro-fluidic channels in such devices using a thermal transfer printing process, in which an optically absorbing layer absorbs the laser energy. Advantages of this process are discussed in combination with laser welding of optically transparent micro-fluidic devices.

  17. Elimination of autofluorescence background from fluorescence tissue images by use of time-gated detection and the AzaDiOxaTriAngulenium (ADOTA) fluorophore.

    PubMed

    Rich, Ryan M; Stankowska, Dorota L; Maliwal, Badri P; Sørensen, Thomas Just; Laursen, Bo W; Krishnamoorthy, Raghu R; Gryczynski, Zygmunt; Borejdo, Julian; Gryczynski, Ignacy; Fudala, Rafal

    2013-02-01

    Sample autofluorescence (fluorescence of inherent components of tissue and fixative-induced fluorescence) is a significant problem in direct imaging of molecular processes in biological samples. A large variety of naturally occurring fluorescent components in tissue results in broad emission that overlaps the emission of typical fluorescent dyes used for tissue labeling. In addition, autofluorescence is characterized by a complex fluorescence intensity decay composed of multiple components whose lifetimes range from sub-nanoseconds to a few nanoseconds. For these reasons, the real fluorescence signal of the probe is difficult to separate from the unwanted autofluorescence. Here we present a method for reducing the autofluorescence problem by utilizing an azadioxatriangulenium (ADOTA) dye with a fluorescence lifetime of approximately 15 ns, much longer than those of most components of autofluorescence. A probe with such a long lifetime enables us to use time-gated intensity imaging to separate the signal of the targeting dye from the autofluorescence. We have shown experimentally that by discarding photons detected within the first 20 ns after the excitation pulse, the signal-to-background ratio is improved fivefold. This time-gating eliminates over 96% of the autofluorescence. Analysis using a variable time-gate may enable quantitative determination of the bound probe without contributions from the background.
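
    A toy numpy illustration of the gating arithmetic; the amplitudes and lifetimes are invented, and because real autofluorescence decays multi-exponentially, the idealized mono-exponential numbers here overstate the gain relative to the measured fivefold improvement.

    ```python
    import numpy as np

    # Toy mono-exponential decays (invented amplitudes and lifetimes).
    t = np.linspace(0.0, 100.0, 2001)          # ns after the excitation pulse
    probe = np.exp(-t / 15.0)                  # ADOTA-like ~15 ns lifetime
    auto = 5.0 * np.exp(-t / 2.0)              # short-lived autofluorescence

    gate = t >= 20.0                           # discard the first 20 ns
    auto_kept = np.trapz(auto[gate], t[gate]) / np.trapz(auto, t)
    sbr_ungated = np.trapz(probe, t) / np.trapz(auto, t)
    sbr_gated = np.trapz(probe[gate], t[gate]) / np.trapz(auto[gate], t[gate])

    print(f"autofluorescence surviving the gate: {auto_kept:.2e}")
    print(f"signal-to-background improvement: {sbr_gated / sbr_ungated:.0f}x")
    ```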

  18. Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis

    NASA Astrophysics Data System (ADS)

    Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.

    2016-07-01

    In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list-mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory, and between CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) an estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11; and (3) variance estimates based on multiple bootstrap realisations of (1) and (2), assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre-of-mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.
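
    A hedged CPU-side sketch of the bootstrap idea, resampling list-mode events with replacement and histogramming them into sinogram bins; the actual software streams GPU kernels, and the bin indexing here is a placeholder.

    ```python
    import numpy as np

    def bootstrap_sinograms(event_bins, n_bins, n_realisations=20, seed=0):
        # event_bins: 1-D int array giving each list-mode event's sinogram bin.
        rng = np.random.default_rng(seed)
        n_events = event_bins.size
        sinos = np.empty((n_realisations, n_bins))
        for r in range(n_realisations):
            # Resample events with replacement, then histogram into bins.
            sample = rng.integers(0, n_events, n_events)
            sinos[r] = np.bincount(event_bins[sample], minlength=n_bins)
        # Bin-wise mean and variance across bootstrap realisations.
        return sinos.mean(axis=0), sinos.var(axis=0)
    ```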

  19. Principal Component Analysis in the Spectral Analysis of the Dynamic Laser Speckle Patterns

    NASA Astrophysics Data System (ADS)

    Ribeiro, K. M.; Braga, R. A., Jr.; Horgan, G. W.; Ferreira, D. D.; Safadi, T.

    2014-02-01

    Dynamic laser speckle is an optical phenomenon observed when a changing surface is illuminated with coherent light; the dynamic change of the speckle patterns caused by biological material is known as biospeckle. Usually, these patterns of optical interference evolving in time are analyzed by graphical or numerical methods; analysis in the frequency domain has also been an option, but it involves large computational requirements, which demands new approaches to filtering the images in time. Principal component analysis (PCA) works by statistically decorrelating the data and can be used for data filtering. In this context, the present work evaluated the PCA technique for filtering biospeckle image data in time, aiming to reduce computing time and improve the robustness of the filtering. Sixty-four biospeckle images observed in time on a maize seed were used. The images were arranged in a data matrix and statistically decorrelated by the PCA technique, and the reconstructed signals were analyzed using the routine graphical and numerical methods for biospeckle analysis. Results showed the potential of the PCA tool for filtering dynamic laser speckle data, with the definition of markers of principal components related to the biological phenomena, and with the advantage of fast computational processing.
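
    A minimal numpy sketch of PCA-based temporal filtering of a biospeckle stack: decorrelate the frames in time, retain selected components, and reconstruct before the usual graphical or numerical analysis. The component selection is illustrative.

    ```python
    import numpy as np

    def pca_time_filter(stack, keep):
        # stack: (n_frames, ny, nx) speckle images in time;
        # keep: indices of the temporal components to retain.
        n, ny, nx = stack.shape
        x = stack.reshape(n, -1).astype(float)
        mean = x.mean(axis=0)
        u, s, vt = np.linalg.svd(x - mean, full_matrices=False)
        s_f = np.zeros_like(s)
        s_f[list(keep)] = s[list(keep)]        # retain selected temporal PCs
        filtered = (u * s_f) @ vt + mean
        return filtered.reshape(n, ny, nx)

    # e.g. keep low-order components carrying the biological activity:
    # filtered = pca_time_filter(speckle_stack, keep=range(1, 8))
    ```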

  20. The mitigating effect of repeated memory reactivations on forgetting

    NASA Astrophysics Data System (ADS)

    MacLeod, Sydney; Reynolds, Michael G.; Lehmann, Hugo

    2018-12-01

    Memory reactivation is a process whereby cueing or recalling a long-term memory makes it enter a new active and labile state. Substantial evidence suggests that during this state the memory can be updated (e.g., adding information) and can become more vulnerable to disruption (e.g., brain insult). Memory reactivations can also prevent memory decay or forgetting. However, it is unclear whether cueing recall of a feature or component of the memory can benefit retention similarly to promoting recall of the entire memory. We examined this possibility by having participants view a series of neutral images and then randomly assigning them to one of four reactivation groups: control (no reactivation), distractor (reactivation of experimental procedures), component (image category reactivation), and descriptive (effortful description of the images). The experiment also included three retention intervals: 1 h, 9 days, and 28 days. Importantly, the participants received three reactivations equally spaced within their respective retention interval. At the end of the interval, all the participants were given an in-lab free-recall test in which they were asked to write down each image they remembered with as many details as possible. The data revealed that both the participants in the descriptive reactivation and component reactivation groups remembered significantly more than the participants in the control groups, with the effect being most pronounced in the 28-day retention interval condition. These findings suggest that memory reactivation, even component reactivation of a memory, makes memories more resistant to decay.

  1. Dual-modality brain PET-CT image segmentation based on adaptive use of functional and anatomical information.

    PubMed

    Xia, Yong; Eberl, Stefan; Wen, Lingfeng; Fulham, Michael; Feng, David Dagan

    2012-01-01

    Dual medical imaging modalities, such as PET-CT, are now a routine component of clinical practice. Medical image segmentation methods, however, have generally only been applied to single modality images. In this paper, we propose the dual-modality image segmentation model to segment brain PET-CT images into gray matter, white matter and cerebrospinal fluid. This model converts PET-CT image segmentation into an optimization process controlled simultaneously by PET and CT voxel values and spatial constraints. It is innovative in the creation and application of the modality discriminatory power (MDP) coefficient as a weighting scheme to adaptively combine the functional (PET) and anatomical (CT) information on a voxel-by-voxel basis. Our approach relies upon allowing the modality with higher discriminatory power to play a more important role in the segmentation process. We compared the proposed approach to three other image segmentation strategies, including PET-only based segmentation, combination of the results of independent PET image segmentation and CT image segmentation, and simultaneous segmentation of joint PET and CT images without an adaptive weighting scheme. Our results in 21 clinical studies showed that our approach provides the most accurate and reliable segmentation for brain PET-CT images. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Distributed and Dynamic Neural Encoding of Multiple Motion Directions of Transparently Moving Stimuli in Cortical Area MT

    PubMed Central

    Xiao, Jianbo

    2015-01-01

    Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle—less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons. SIGNIFICANCE STATEMENT Natural scenes often contain multiple entities. The ability to segment visual scenes into distinct objects and surfaces is fundamental to sensory processing and is crucial for generating the perception of our environment. Because cortical neurons are broadly tuned to a given visual feature, segmenting two stimuli that differ only slightly is a challenge for the visual system. In this study, we discovered that many neurons in the visual cortex are capable of representing individual components of slightly different stimuli by selectively and nonlinearly pooling the responses elicited by the stimulus components. We also show for the first time that the neural representation of individual stimulus components developed over a period of ∼70–100 ms, revealing a dynamic process of image segmentation. PMID:26658869

  3. Review of image processing fundamentals

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1985-01-01

    Image processing through convolution, transform coding, spatial frequency alterations, sampling, and interpolation is considered. It is noted that convolution in one domain (real or frequency) is equivalent to multiplication in the other (frequency or real), and that the relative amplitudes of the Fourier components must be retained to reproduce any waveshape. It is suggested that all digital systems may be considered equivalent, with a frequency content approximately at the Nyquist limit and with a Gaussian frequency response. An optimized cubic version of the interpolation continuum image is derived as a set of cubic splines. Pixel replication has been employed to enlarge the visible area of digital samples; however, suitable elimination of the extraneous high frequencies involved in the visible edges, by defocusing, is necessary to allow the underlying object represented by the data values to be seen.
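
    The convolution-multiplication duality stated above can be checked numerically in a few lines (1-D, zero-padded so that linear and circular convolution agree):

    ```python
    import numpy as np

    x = np.random.default_rng(1).normal(size=64)    # arbitrary signal
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # smoothing kernel
    n = len(x) + len(h) - 1                         # pad to avoid wrap-around

    direct = np.convolve(x, h)                      # convolution, real domain
    via_fft = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real

    assert np.allclose(direct, via_fft)             # multiplication, frequency domain
    ```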

  4. A network-based training environment: a medical image processing paradigm.

    PubMed

    Costaridou, L; Panayiotakis, G; Sakellaropoulos, P; Cavouras, D; Dimopoulos, J

    1998-01-01

    The capability of interactive multimedia and Internet technologies is investigated with respect to the implementation of a distance learning environment. The system is built according to a client-server architecture, based on the Internet infrastructure, composed of server nodes conceptually modelled as WWW sites. Sites are implemented by customization of available components. The environment integrates network-delivered interactive multimedia courses, network-based tutoring, SIG support, information databases of professional interest, as well as course and tutoring management. This capability has been demonstrated by means of an implemented system, validated with digital image processing content, specifically image enhancement. Image enhancement methods are theoretically described and applied to mammograms. Emphasis is given to the interactive presentation of the effects of algorithm parameters on images. End-user access to the system depends on available bandwidth, so high-speed access can be achieved via LAN or local ISDN connections. Network-based training offers new means of improved access and sharing of learning resources and expertise, as a promising supplement to training.

  5. Imaging articular cartilage using second harmonic generation microscopy

    NASA Astrophysics Data System (ADS)

    Mansfield, Jessica C.; Winlove, C. Peter; Knapp, Karen; Matcher, Stephen J.

    2006-02-01

    Subcellular-resolution images of equine articular cartilage have been obtained using both second harmonic generation microscopy (SHGM) and two-photon fluorescence microscopy (TPFM). The SHGM images clearly map the distribution of the collagen II fibers within the extracellular matrix, while the TPFM images show the distribution of endogenous two-photon fluorophores in both the cells and the extracellular matrix, highlighting especially the pericellular matrix and bright 2-3 μm diameter features within the cells. To investigate whether the two-photon fluorescence (TPF) in the extracellular matrix originates from the proteoglycans, pure solutions of the proteoglycans hyaluronan, chondroitin sulfate and aggrecan were imaged; only aggrecan produced any TPF, and its intensity was not great enough to account for the TPF in the extracellular matrix. Cartilage samples were also subjected to a process that removes proteoglycans and cellular components. After this process, the TPF from the samples had decreased by a factor of two with respect to the SHG intensity.

  6. Application of the EM algorithm to radiographic images.

    PubMed

    Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J

    1992-01-01

    The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior when compared to unsharp mask filtering and global contrast enhancement for radiographic images which contain objects smaller than 4 mm.
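
    For Poisson noise the EM restoration iteration takes the familiar Richardson-Lucy form; the numpy sketch below stands in for, rather than reproduces, the paper's implementation, and the PSF and iteration count are illustrative.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def em_restore(observed, psf, n_iter=25, eps=1e-12):
        # Richardson-Lucy: the EM iteration for image restoration
        # under a Poisson noise model (no reconstruction involved).
        estimate = np.full_like(observed, observed.mean(), dtype=float)
        psf_flipped = psf[::-1, ::-1]
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = observed / (blurred + eps)          # data / model
            estimate *= fftconvolve(ratio, psf_flipped, mode="same")
        return estimate
    ```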

  7. Time-alternating method based on single-sideband holography with half-zone-plate processing for the enlargement of viewing zones.

    PubMed

    Mishina, T; Okano, F; Yuyama, I

    1999-06-10

    The single-sideband method of holography, as is well known, cuts off beams that come from conjugate images for holograms produced in the Fraunhofer region and from objects with no phase components. The single-sideband method with half-zone-plate processing is also effective in the Fresnel region for beams from an object that has phase components. However, this method restricts the viewing zone to a narrow range. We propose a method to relax this restriction by time-alternating switching of hologram patterns and a spatial filter set on the focal plane of a reconstruction lens.

  8. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. Using facial expression characteristics in a biometric system makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features for the expression parameters happy, sad, neutral, angry, fear, and disgusted. A Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is then used for the facial expression classification process. The MELS-SVM model, evaluated on 185 different expression images of 10 persons, achieved an accuracy of 99.998% using an RBF kernel.
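
    A hedged scikit-learn sketch of the two-stage pipeline; a standard RBF-kernel SVC stands in for the paper's ensemble least-squares SVM, and the data arrays are placeholders.

    ```python
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def build_classifier(n_components=40):
        # PCA extracts eigenface-style expression features;
        # the RBF-kernel SVC replaces MELS-SVM in this sketch.
        return make_pipeline(StandardScaler(),
                             PCA(n_components=n_components),
                             SVC(kernel="rbf", C=10.0, gamma="scale"))

    # X: (n_images, n_pixels) flattened face crops; y: expression labels
    # (happy, sad, neutral, angry, fear, disgusted).
    # clf = build_classifier().fit(X_train, y_train); clf.score(X_test, y_test)
    ```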

  9. Investigation of Space Interferometer Control Using Imaging Sensor Output Feedback

    NASA Technical Reports Server (NTRS)

    Leitner, Jesse A.; Cheng, Victor H. L.

    2003-01-01

    Numerous space interferometry missions are planned for the next decade to verify different enabling technologies towards very-long-baseline interferometry to achieve high-resolution imaging and high-precision measurements. These objectives will require coordinated formations of spacecraft separately carrying optical elements comprising the interferometer. High-precision sensing and control of the spacecraft and the interferometer-component payloads are necessary to deliver sub-wavelength accuracy to achieve the scientific objectives. For these missions, the primary scientific product of interferometer measurements may be the only source of data available at the precision required to maintain the spacecraft and interferometer-component formation. A concept is studied for detecting the interferometer's optical configuration errors based on information extracted from the interferometer sensor output. It enables precision control of the optical components, and, in cases of space interferometers requiring formation flight of spacecraft that comprise the elements of a distributed instrument, it enables the control of the formation-flying vehicles because independent navigation or ranging sensors cannot deliver the high-precision metrology over the entire required geometry. Since the concept can act on the quality of the interferometer output directly, it can detect errors outside the capability of traditional metrology instruments, and provide the means needed to augment the traditional instrumentation to enable enhanced performance. Specific analyses performed in this study include the application of signal-processing and image-processing techniques to solve the problems of interferometer aperture baseline control, interferometer pointing, and orientation of multiple interferometer aperture pairs.

  10. Early forest fire detection using principal component analysis of infrared video

    NASA Astrophysics Data System (ADS)

    Saghri, John A.; Radjabi, Ryan; Jacobs, John T.

    2011-09-01

    A land-based early forest fire detection scheme which exploits the infrared (IR) temporal signature of the fire plume is described. Unlike common land-based and/or satellite-based techniques which rely on measurement and discrimination of the fire plume directly from its infrared and/or visible reflectance imagery, this scheme is based on exploitation of the fire plume's temporal signature, i.e., temperature fluctuations over the observation period. The method is simple and relatively inexpensive to implement, and the false alarm rate is expected to be lower than that of existing methods. Land-based infrared (IR) cameras are installed in a step-stare-mode configuration in potential fire-prone areas. The sequence of IR video frames from each camera is digitally processed to determine if there is a fire within the camera's field of view (FOV). The process involves applying a principal component transformation (PCT) to each non-overlapping sequence of video frames from the camera to produce a corresponding sequence of temporally uncorrelated principal component (PC) images. Since pixels that form a fire plume exhibit statistically similar temporal variation (i.e., have a unique temporal signature), PCT conveniently renders the footprint/trace of the fire plume in low-order PC images. The PC image which best reveals the trace of the fire plume is then selected and spatially filtered via simple threshold and median filter operations to remove background clutter, such as traces of tree branches moving in the wind.
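
    A hedged numpy/scipy sketch of the per-sequence processing chain described above; the choice of PC image, threshold, and filter size are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def fire_trace(ir_frames, pc_index=1, k_sigma=3.0):
        # ir_frames: (n_frames, ny, nx) non-overlapping IR video sequence.
        n, ny, nx = ir_frames.shape
        x = ir_frames.reshape(n, -1).astype(float)
        x -= x.mean(axis=0)                     # remove each pixel's temporal mean
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        pc_image = vt[pc_index].reshape(ny, nx) # temporally uncorrelated PC image
        mask = np.abs(pc_image) > k_sigma * pc_image.std()   # threshold
        return median_filter(mask.astype(np.uint8), size=3)  # remove clutter
    ```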

  11. Architecture and data processing alternatives for Tse computer. Volume 1: Tse logic design concepts and the development of image processing machine architectures

    NASA Technical Reports Server (NTRS)

    Rickard, D. A.; Bodenheimer, R. E.

    1976-01-01

    Digital computer components which perform two dimensional array logic operations (Tse logic) on binary data arrays are described. The properties of Golay transforms which make them useful in image processing are reviewed, and several architectures for Golay transform processors are presented with emphasis on the skeletonizing algorithm. Conventional logic control units developed for the Golay transform processors are described. One is a unique microprogrammable control unit that uses a microprocessor to control the Tse computer. The remaining control units are based on programmable logic arrays. Performance criteria are established and utilized to compare the various Golay transform machines developed. A critique of Tse logic is presented, and recommendations for additional research are included.

  12. Implementation Analysis of Cutting Tool Carbide with Cast Iron Material S45 C on Universal Lathe

    NASA Astrophysics Data System (ADS)

    Junaidi; hestukoro, Soni; yanie, Ahmad; Jumadi; Eddy

    2017-12-01

    A cutting tool is the working tool of a lathe. The cutting process of a carbide tool on S45C cast iron material on a universal lathe is commonly analyzed in terms of several aspects, namely cutting force, cutting speed, cutting power, indicated cutting power, and the temperatures in zone 1 and zone 2. The purpose of this study was to determine the magnitude of the cutting speed, cutting power, electromotor power, and zone 1 and zone 2 temperatures involved in driving the carbide cutting tool in the process of turning cast iron material. The cutting force was obtained from image analysis of the relationship between the recommended cutting force components and the plane of the cut, and the cutting speed was obtained from image analysis of the relationship between the recommended cutting speed and feed rate.

  13. Research on an autonomous vision-guided helicopter

    NASA Technical Reports Server (NTRS)

    Amidi, Omead; Mesaki, Yuji; Kanade, Takeo

    1994-01-01

    Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.

  14. Digital image processing based identification of nodes and internodes of chopped biomass stems

    USDA-ARS?s Scientific Manuscript database

    Chemical composition of biomass feedstock is an important parameter for optimizing the yield and economics of various bioconversion pathways. Although understandably, the chemical composition of biomass varies among species, varieties, and plant components, there is distinct variation even among ste...

  15. A Biologically Plausible Architecture for Shape Recognition

    DTIC Science & Technology

    2006-06-30

    between curves. Information Processing Letters, 64, 1997. [4] Irving Biederman. Recognition-by-components: A theory of human image understanding. Psychological Review, 94(2):115–147, 1987. [5] C. Cadieu, M. Kouh, M. Riesenhuber, and T. Poggio. Shape representation in V4: Investigating position

  16. Repeatability and Reproducibility of Virtual Subjective Refraction.

    PubMed

    Perches, Sara; Collados, M Victoria; Ares, Jorge

    2016-10-01

    To establish the repeatability and reproducibility of a virtual refraction process using simulated retinal images. With simulation software, aberrated images corresponding to each step of the refraction process were calculated following the typical protocol of conventional subjective refraction. Fifty external examiners judged simulated retinal images until the best sphero-cylindrical refraction and the best visual acuity were achieved, starting from the aberrometry data of three patients. Data analyses were performed to assess the repeatability and reproducibility of the virtual refraction as a function of pupil size and the aberrometric profile of different patients. SD values achieved in the three components of refraction (M, J0, and J45) are lower than 0.25 D in the repeatability analysis. Regarding reproducibility, we found SD values lower than 0.25 D in most cases. When the results of virtual refraction with different pupil diameters (4 and 6 mm) were compared, the mean of differences (MoD) obtained was not clinically significant (less than 0.25 D). Only one of the aberrometry profiles, with high uncorrected astigmatism, shows poor results for the M component in the reproducibility and pupil size dependence analyses. In all cases, the vision achieved was better than 0 logMAR. As an example of this application, a comparison between the compensation obtained with virtual and conventional subjective refraction was made, showing good-quality retinal images in both processes. The present study shows that virtual refraction has similar levels of precision to conventional subjective refraction. Moreover, virtual refraction has also shown that when high low-order astigmatism is present, the refraction result is less precise and highly dependent on pupil size.
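
    The three components reported here are the standard power-vector description of a sphero-cylindrical refraction; the sketch below uses one common convention (signs vary between authors).

    ```python
    import numpy as np

    def power_vector(sphere, cylinder, axis_deg):
        # Convert sphere/cylinder/axis notation to power-vector components.
        theta = np.deg2rad(axis_deg)
        m = sphere + cylinder / 2.0                   # spherical equivalent (M)
        j0 = -(cylinder / 2.0) * np.cos(2 * theta)    # 0/90-degree astigmatism
        j45 = -(cylinder / 2.0) * np.sin(2 * theta)   # oblique astigmatism
        return m, j0, j45

    # e.g. -2.00 DS / -1.00 DC x 180 -> M = -2.50 D, J0 = +0.50 D, J45 = 0.00 D
    ```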

  17. Principal components technique analysis for vegetation and land use discrimination. [Brazilian cerrados

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Formaggio, A. R.; Dossantos, J. R.; Dias, L. A. V.

    1984-01-01

    An automatic pre-processing technique called Principal Components (PRINCO) was evaluated for analyzing digitized LANDSAT data on land use and vegetation cover of the Brazilian cerrados. The chosen pilot area, 223/67 of MSS/LANDSAT 3, was classified on a GE Image-100 System through a maximum-likelihood algorithm (MAXVER). The same procedure was applied to the PRINCO-treated image. PRINCO consists of a linear transformation performed on the original bands in order to eliminate the information redundancy of the LANDSAT channels. After PRINCO, only two channels were used, thus reducing the computational effort. The grey levels of the original channels and of the PRINCO channels for the five identified classes (grassland, cerrado, burned areas, anthropic areas, and gallery forest) were obtained through the MAXVER algorithm, which also provided the average performance for both cases. In order to evaluate the results, the Jeffreys-Matusita distance (JM-distance) between classes was computed. The classification matrix obtained through MAXVER after PRINCO pre-processing showed approximately the same average performance in class separability.
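
    For reference, a numpy sketch of the Jeffreys-Matusita distance between two Gaussian class models via the Bhattacharyya distance, using one common scaling in which JM ranges from 0 to 2:

    ```python
    import numpy as np

    def jm_distance(mu1, cov1, mu2, cov2):
        # Bhattacharyya distance between two Gaussian classes, then the
        # Jeffreys-Matusita distance JM = 2 * (1 - exp(-B)).
        cov = (cov1 + cov2) / 2.0
        diff = mu1 - mu2
        term1 = diff @ np.linalg.solve(cov, diff) / 8.0
        _, logdet = np.linalg.slogdet(cov)
        _, logdet1 = np.linalg.slogdet(cov1)
        _, logdet2 = np.linalg.slogdet(cov2)
        b = term1 + 0.5 * (logdet - 0.5 * (logdet1 + logdet2))
        return 2.0 * (1.0 - np.exp(-b))   # 2 indicates fully separable classes
    ```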

  18. Military microwaves '84; Proceedings of the Conference, London, England, October 24-26, 1984

    NASA Astrophysics Data System (ADS)

    The present conference on microwave-frequency electronic warfare and military sensor equipment developments considers radar warning receivers, optical frequency spread spectrum systems, mobile digital communications troposcatter effects, wideband bulk encryption, long range air defense radars (such as the AR320, W-2000 and Martello), multistatic radars, and multimode airborne and interceptor radars. IR system and subsystem component topics encompass thermal imaging and active IR countermeasures, class 1 modules, and diamond coatings, while additional radar-related topics include radar clutter in airborne maritime reconnaissance systems, microstrip antennas with dual polarization capability, the synthesis of shaped beam antenna patterns, planar phased arrays, radar signal processing, radar cross section measurement techniques, and radar imaging and pattern analysis. Attention is also given to optical control and signal processing, mm-wave control technology and EW systems, W-band operations, planar mm-wave arrays, mm-wave monolithic solid state components, mm-wave sensor technology, GaAs monolithic ICs, and dielectric resonator and wideband tunable oscillators.

  19. The Airborne Visible / Infrared Imaging Spectrometer AVIS: Design, Characterization and Calibration.

    PubMed

    Oppelt, Natascha; Mauser, Wolfram

    2007-09-14

    The Airborne Visible / Infrared Imaging Spectrometer AVIS is a hyperspectral imager designed for environmental monitoring purposes. The sensor, which was constructed entirely from commercially available components, has been successfully deployed during several experiments between 1999 and 2007. We describe the instrument design and present the results of laboratory characterization and calibration of the system's second generation, AVIS-2, which is currently being operated. The processing of the data is described and examples of remote sensing reflectance data are presented.

  20. Non-chemically amplified 193-nm top surface imaging photoresist development: polymer substituent and polydispersity effects

    NASA Astrophysics Data System (ADS)

    Kim, Myoung-Soo; Kim, Hyoung-Gi; Kim, Hyeong-Soo; Baik, Ki-Ho; Johnson, Donald W.; Cernigliaro, George J.; Minsek, David W.

    1999-06-01

    Thin film imaging processes such as top surface imaging (TSI) are candidates for sub-150-nm lithography at 193 nm. Single-component, non-chemically amplified, positive-tone TSI photoresists based on phenolic polymers demonstrate good post-etch contrast, resolution, and minimal line edge roughness, in addition to being the most straightforward thin film imaging approach. In this approach, ArF laser exposure results directly in radiation-induced crosslinking of the phenolic polymer, followed by formation of a thin etch mask at the surface of the unexposed regions by vapor-phase silylation, followed by reactive ion etching of the non-silylated regions. However, single-component resists based on poly(para-hydroxystyrene) (PHS), such as MicroChem's Nano MX-P7, suffer from slow photospeed as well as low silylation contrast, which can cause reproducibility and line-edge-roughness problems. We report that selected aromatic substitution of the poly(para-hydroxystyrene) polymer can increase the photospeed by up to a factor of four relative to unsubstituted PHS. In this paper we report the synthesis and lithographic evaluation of four experimental TSI photoresists: MX-EX-1, MX-EX-2, MX-EX-3 and MX-EX-4 are non-chemically amplified resists based on aromatic substitution of PHS with chloro- and hydroxymethyl groups. We report optimized lithographic processing conditions, line edge roughness, and silylation contrast, and compare the results to the parent PHS photoresist.

  1. Pleiades image quality: from users' needs to products definition

    NASA Astrophysics Data System (ADS)

    Kubik, Philippe; Pascal, Véronique; Latry, Christophe; Baillarin, Simon

    2005-10-01

    Pleiades is the highest resolution civilian Earth observing system ever developed in Europe. This imagery programme is conducted by the French National Space Agency, CNES. Starting in 2008-2009, it will operate two agile satellites designed to provide optical images to civilian and defence users. Images will be acquired simultaneously in panchromatic (PA) and multispectral (XS) mode, which allows delivery, under nadir acquisition conditions, of 20-km-wide false- or natural-color scenes with a 70 cm ground sampling distance after PA+XS fusion. Imaging capabilities have been highly optimized in order to acquire along-track mosaics, stereo pairs and triplets, and multiple targets. To fulfill the operational requirements and ensure quick access to information, ground processing has to perform the radiometric and geometric corrections automatically. Since ground processing capabilities were taken into account very early in the programme development, it has been possible to relax some costly on-board component requirements in order to achieve a cost-effective on-board/ground compromise. Starting from an overview of the system characteristics, this paper deals with the image product definitions (raw level, perfect sensor, orthoimage and along-track orthomosaics) and the main processing steps. It shows how each system performance results from the satellite performance followed by appropriate ground processing. Finally, it focuses on the radiometric performance of the final products, which is intimately linked to the following processing steps: radiometric corrections, PA restoration, image resampling and PAN-sharpening.

  2. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
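
    As a hedged empirical analogue of the idea (the paper derives the kernel analytically from an end-to-end system model), the MSE-optimal kernel of fixed small support can be estimated by linear least squares from a representative truth/degraded image pair:

    ```python
    import numpy as np

    def optimal_small_kernel(degraded, truth, half=2):
        # Solve min_K || truth - K * degraded ||^2 for a (2*half+1)^2 kernel
        # by regressing each truth pixel on a window of degraded pixels.
        size = 2 * half + 1
        cols = []
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                cols.append(np.roll(degraded, (dy, dx), axis=(0, 1)).ravel())
        a = np.stack(cols, axis=1)                  # (n_pixels, size*size)
        k, *_ = np.linalg.lstsq(a, truth.ravel(), rcond=None)
        return k.reshape(size, size)

    # Applying the learned kernel by convolution (e.g. scipy.ndimage.convolve)
    # then restores the image with its small, efficiently implemented support.
    ```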

  3. Simultaneous cathodoluminescence and electron microscopy cytometry of cellular vesicles labeled with fluorescent nanodiamonds

    NASA Astrophysics Data System (ADS)

    Nagarajan, Sounderya; Pioche-Durieu, Catherine; Tizei, Luiz H. G.; Fang, Chia-Yi; Bertrand, Jean-Rémi; Le Cam, Eric; Chang, Huan-Cheng; Treussart, François; Kociak, Mathieu

    2016-06-01

    Light and Transmission Electron Microscopies (LM and TEM) hold potential in bioimaging owing to the advantages of fast imaging of multiple cells with LM and ultrastructure resolution offered by TEM. Integrated or correlated LM and TEM are the current approaches to combine the advantages of both techniques. Here we propose an alternative in which the electron beam of a scanning TEM (STEM) is used to excite concomitantly the luminescence of nanoparticle labels (a process known as cathodoluminescence, CL), and image the cell ultrastructure. This CL-STEM imaging allows obtaining luminescence spectra and imaging ultrastructure simultaneously. We present a proof of principle experiment, showing the potential of this technique in image cytometry of cell vesicular components. To label the vesicles we used fluorescent diamond nanocrystals (nanodiamonds, NDs) of size ~150 nm coated with different cationic polymers, known to trigger different internalization pathways. Each polymer was associated with a type of ND with a different emission spectrum. With CL-STEM, for each individual vesicle, we were able to measure (i) their size with nanometric resolution, (ii) their content in different ND labels, and realize intracellular component cytometry. In contrast to the recently reported organelle flow cytometry technique that requires cell sonication, CL-STEM-based image cytometry preserves the cell integrity and provides a much higher resolution in size. Although this novel approach is still limited by a low throughput, the automatization of data acquisition and image analysis, combined with improved intracellular targeting, should facilitate applications in cell biology at the subcellular level. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr01908k

  4. The interactive astronomical data analysis facility - image enhancement techniques applied to Comet Halley

    NASA Astrophysics Data System (ADS)

    Klinglesmith, D. A.

    1981-10-01

    A PDP 11/40 computer is at the heart of a general-purpose interactive data analysis facility designed to permit easy access to data in both visual imagery and graphic representations. The major components consist of the 11/40 CPU with 256 K bytes of 16-bit memory; two TU10 tape drives; 20 million bytes of disk storage; three user terminals; and the COMTAL image processing display system. The application of image enhancement techniques to two sequences of photographs of Comet Halley, taken in Egypt in 1910, provides evidence for eruptions from the comet's nucleus.

  5. Puncture-proof picture archiving and communication system.

    PubMed

    Willis, C E; McCluggage, C W; Orand, M R; Parker, B R

    2001-06-01

    As we become increasingly dependent on our picture archiving and communications system (PACS) for the clinical practice of medicine, the demand for improved reliability becomes urgent. Borrowing principles from the discipline of Reliability Engineering, we have identified components of our system that constitute single points of failure and have endeavored to eliminate these through redundant components and manual work-around procedures. To assess the adequacy of our preparations, we have identified a set of plausible events that could interfere with the function of one or more of our PACS components. These events could be as simple as the loss of the network connection to a single component or as broad as the loss of our central data center. We have identified the need to continue to operate during adverse conditions, as well as the requirement to recover rapidly from major disruptions in service. This assessment led us to modify the physical locations of central PACS components within our physical plant. We are also taking advantage of actual disruptive events, coincident with a major expansion of our facility, to test our recovery procedures. Based on our recognition of the vital nature of our electronic images for patient care, we are now recording electronic images in two copies on disparate media. The image database is critical to both continued operations and recovery. Restoration of the database from periodic tape backups with a 24-hour cycle time may not support our clinical scenario: acquisition modalities have limited local storage capacity, some of which will not hold the daily workload. Restoration of the database from the archived media is an exceedingly slow process that will likely not meet our requirement to restore clinical operations without significant delay. Our PACS vendor is working on concurrent image databases that would be capable of nearly immediate switchover and recovery.

  6. Object extraction method for image synthesis

    NASA Astrophysics Data System (ADS)

    Inoue, Seiki

    1991-11-01

    The extraction of component objects from images is fundamentally important for image synthesis. In TV program production, one useful method is the Video-Matte technique for specifying the necessary boundary of an object. This, however, involves intricate and tedious manual processes. A new method proposed in this paper can reduce the needed level of operator skill and simplify object extraction. The object is automatically extracted from just a simple drawing of a thick boundary line. The basic principle involves thinning the thick-boundary-line binary image using the edge intensity of the original image. This method has many practical advantages, including the simplicity of specifying an object, the high accuracy of the thinned-out boundary line, its ease of application to moving images, and the lack of any need for adjustment.

  7. Acousto-optic laser projection systems for displaying TV information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gulyaev, Yu V; Kazaryan, M A; Mokrushin, Yu M

    2015-04-30

    This review addresses various approaches to television projection imaging on large screens using lasers. Results are presented of theoretical and experimental studies of an acousto-optic projection system operating on the principle of projecting an image of an entire amplitude-modulated television line in a single laser pulse. We consider characteristic features of image formation in such a system and the requirements for its individual components. Particular attention is paid to nonlinear distortions of the image signal, which show up most severely at low modulation signal frequencies. We discuss the feasibility of improving the process efficiency and image quality using acousto-optic modulators and pulsed lasers. Real-time projectors with pulsed line imaging can be used for controlling high-intensity laser radiation. (review)

  8. Non-destructive evaluation of chlorophyll content in quinoa and amaranth leaves by simple and multiple regression analysis of RGB image components.

    PubMed

    Riccardi, M; Mele, G; Pulvento, C; Lavini, A; d'Andria, R; Jacobsen, S-E

    2014-06-01

    Leaf chlorophyll content provides valuable information about physiological status of plants; it is directly linked to photosynthetic potential and primary production. In vitro assessment by wet chemical extraction is the standard method for leaf chlorophyll determination. This measurement is expensive, laborious, and time consuming. Over the years alternative methods, rapid and non-destructive, have been explored. The aim of this work was to evaluate the applicability of a fast and non-invasive field method for estimation of chlorophyll content in quinoa and amaranth leaves based on RGB components analysis of digital images acquired with a standard SLR camera. Digital images of leaves from different genotypes of quinoa and amaranth were acquired directly in the field. Mean values of each RGB component were evaluated via image analysis software and correlated to leaf chlorophyll provided by standard laboratory procedure. Single and multiple regression models using RGB color components as independent variables have been tested and validated. The performance of the proposed method was compared to that of the widely used non-destructive SPAD method. Sensitivity of the best regression models for different genotypes of quinoa and amaranth was also checked. Color data acquisition of the leaves in the field with a digital camera was quick, more effective, and lower cost than SPAD. The proposed RGB models provided better correlation (highest R²) and prediction (lowest RMSEP) of the true value of foliar chlorophyll content and had a lower amount of noise in the whole range of chlorophyll studied compared with SPAD and other leaf image processing based models when applied to quinoa and amaranth.
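
    As an illustration only: a minimal sketch of the regression step this record describes, fitting chlorophyll content against mean RGB components with ordinary least squares. The arrays below are hypothetical placeholders, not the study's data.

    ```python
    # Sketch of multiple regression of chlorophyll on mean RGB components.
    # The data values are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error

    # Mean R, G, B of each leaf image (rows), assumed to come from prior
    # image analysis, and matching lab-measured chlorophyll (placeholder units).
    rgb_means = np.array([[92., 141., 60.],
                          [105., 150., 72.],
                          [80., 128., 55.],
                          [118., 160., 84.]])
    chlorophyll = np.array([412., 354., 468., 301.])

    model = LinearRegression().fit(rgb_means, chlorophyll)
    pred = model.predict(rgb_means)
    r2 = model.score(rgb_means, chlorophyll)
    rmsep = mean_squared_error(chlorophyll, pred) ** 0.5
    print(f"R^2 = {r2:.3f}, RMSEP = {rmsep:.1f}")
    ```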

  9. Process simulation in digital camera system

    NASA Astrophysics Data System (ADS)

    Toadere, Florin

    2012-06-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear shift invariant and axial. The light propagation is orthogonal to the system. We use a spectral image processing algorithm in order to simulate the radiometric properties of a digital camera. In the algorithm we take into consideration the transmittances of the light source, lenses, and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the different point spread functions of the optical components. We use a Cooke triplet, the aperture, the light fall-off and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion, and JPEG compression. We reconstruct the noisy, blurred image by blending differently exposed images to reduce photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then we have the color processing blocks: white balancing, color correction, gamma correction, and conversion from XYZ color space to RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.

  10. Development of fluorescence based handheld imaging devices for food safety inspection

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Kim, Moon S.; Chao, Kuanglin; Lefcourt, Alan M.; Chan, Diane E.

    2013-05-01

    For sanitation inspection in food processing environments, fluorescence imaging can be a very useful method because many organic materials reveal unique fluorescence emissions when excited by UV or violet radiation. Although some fluorescence-based automated inspection instrumentation has been developed for food products, there remains a need for devices that can assist on-site inspectors performing visual sanitation inspection of the surfaces of food processing/handling equipment. This paper reports the development of an inexpensive handheld imaging device designed to visualize fluorescence emissions and intended to help detect the presence of fecal contaminants, organic residues, and bacterial biofilms at multispectral fluorescence emission bands. The device consists of a miniature camera, multispectral (interference) filters, and high-power LED illumination. With WiFi communication, live inspection images from the device can be displayed on smartphone or tablet devices. This imaging device could be a useful tool for assessing the effectiveness of sanitation procedures and for helping processors to minimize food safety risks or determine potential problem areas. This paper presents the design and development, including evaluation and optimization, of the hardware components of the imaging devices.

  11. Removing Beam Current Artifacts in Helium Ion Microscopy: A Comparison of Image Processing Techniques.

    PubMed

    Barlow, Anders J; Portoles, Jose F; Sano, Naoko; Cumpson, Peter J

    2016-10-01

    The development of the helium ion microscope (HIM) enables the imaging of both hard, inorganic materials and soft, organic or biological materials. Advantages include outstanding topographical contrast, superior resolution down to <0.5 nm at high magnification, high depth of field, and no need for conductive coatings. The instrument relies on helium atom adsorption and ionization at a cryogenically cooled tip that is atomically sharp. Under ideal conditions this arrangement provides a beam of ions that is stable for days to weeks, with beam currents on the order of picoamperes. Over time, however, this stability is lost as gaseous contamination builds up in the source region, leading to adsorbed atoms of species other than helium, which ultimately results in beam current fluctuations. This manifests itself as horizontal stripe artifacts in HIM images. We investigate post-processing methods to remove these artifacts from HIM images, such as median filtering, Gaussian blurring, fast Fourier transforms, and principal component analysis. We arrive at a simple method for completely removing beam current fluctuation effects from HIM images while maintaining the full integrity of the information within the image.
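
    The record does not spell out which post-processing variant performed best, so the following is only a plausible sketch of one simple stripe-removal idea consistent with the description: normalizing each scan row's median brightness. It is not the authors' published algorithm.

    ```python
    # One simple way to suppress horizontal stripes caused by row-to-row beam
    # current fluctuations: rescale each row's median to the global median.
    import numpy as np

    def remove_row_stripes(img: np.ndarray) -> np.ndarray:
        """Rescale each scan row so its median matches the global median."""
        row_medians = np.median(img, axis=1, keepdims=True)
        return img * (np.median(img) / np.maximum(row_medians, 1e-12))

    # Synthetic demo: identical rows corrupted by per-row gain fluctuations.
    rng = np.random.default_rng(0)
    clean = np.tile(np.hanning(256), (256, 1)) + 0.1
    striped = clean * (1.0 + 0.2 * rng.standard_normal((256, 1)))
    fixed = remove_row_stripes(striped)
    print("row-median scatter before:", np.std(np.median(striped, axis=1)))
    print("row-median scatter after: ", np.std(np.median(fixed, axis=1)))
    ```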

  12. Laser scanning stereomicroscopy for fast volumetric imaging with two-photon excitation and scanned Bessel beams

    NASA Astrophysics Data System (ADS)

    Yang, Yanlong; Zhou, Xing; Li, Runze; Van Horn, Mark; Peng, Tong; Lei, Ming; Wu, Di; Chen, Xun; Yao, Baoli; Ye, Tong

    2015-03-01

    Bessel beams have been used in many applications due to their unique optical property of maintaining their intensity profiles during propagation. In imaging applications, Bessel beams have been successfully used to provide extended focuses for volumetric imaging and uniform illumination planes in light-sheet microscopy. Coupled with two-photon excitation, Bessel beams have been successfully used to realize fluorescence projected volumetric imaging. We previously demonstrated a stereoscopic solution, two-photon fluorescence stereomicroscopy (TPFSM), for recovering the depth information in volumetric imaging with Bessel beams. In TPFSM, tilted Bessel beams were used to generate stereoscopic images on a laser scanning two-photon fluorescence microscope; after post-processing, the acquired volume images provided 3D perception when viewed through anaglyph 3D glasses. However, the tilted Bessel beams were generated by shifting either an axicon or an objective laterally; the slow imaging speed and severe aberrations made the approach hard to use in real-time volume imaging. In this article, we report recent improvements of TPFSM with a newly designed scanner and imaging software, which allow 3D stereoscopic imaging without moving any optical components on the setup. These improvements have dramatically increased focusing quality and imaging speed, so that TPFSM can potentially be performed in real time to provide 3D visualization in scattering media without post image processing.

  13. Objective and expert-independent validation of retinal image registration algorithms by a projective imaging distortion model.

    PubMed

    Lee, Sangyeol; Reinhardt, Joseph M; Cattin, Philippe C; Abràmoff, Michael D

    2010-08-01

    Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image registration technique to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating simulated retinal images by modeling the geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation process that can be used for any retinal image registration method by tracing through the distortion path and assessing the geometric misalignment in the coordinate system of the reference standard. The proposed method can be used to perform an accuracy evaluation over the whole image, so that distortion in the non-overlapping regions of the montage components can be easily assessed. We demonstrate the technique by generating test image sets with a variety of overlap conditions and compare the accuracy of several retinal image registration models. Copyright 2010 Elsevier B.V. All rights reserved.

  14. Real-time embedded atmospheric compensation for long-range imaging using the average bispectrum speckle method

    NASA Astrophysics Data System (ADS)

    Curt, Petersen F.; Bodnar, Michael R.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.

    2009-02-01

    While imaging over long distances is critical to a number of security and defense applications, such as homeland security and launch tracking, current optical systems are limited in resolving power. This is largely a result of the turbulent atmosphere in the path between the region under observation and the imaging system, which can severely degrade captured imagery. There are a variety of post-processing techniques capable of recovering this obscured image information; however, the computational complexity of such approaches has prohibited real-time deployment and hampers the usability of these technologies in many scenarios. To overcome this limitation, we have designed and manufactured an embedded image processing system based on commodity hardware which can compensate for these atmospheric disturbances in real-time. Our system consists of a reformulation of the average bispectrum speckle method coupled with a high-end FPGA processing board, and employs modular I/O capable of interfacing with most common digital and analog video transport methods (composite, component, VGA, DVI, SDI, HD-SDI, etc.). By leveraging the custom, reconfigurable nature of the FPGA, we have achieved performance twenty times faster than a modern desktop PC, in a form-factor that is compact, low-power, and field-deployable.

  15. The Use of Multidimensional Image-Based Analysis to Accurately Monitor Cell Growth in 3D Bioreactor Culture

    PubMed Central

    Baradez, Marc-Olivier; Marshall, Damian

    2011-01-01

    The transition from traditional culture methods towards bioreactor based bioprocessing to produce cells in commercially viable quantities for cell therapy applications requires the development of robust methods to ensure the quality of the cells produced. Standard methods for measuring cell quality parameters such as viability provide only limited information, making process monitoring and optimisation difficult. Here we describe a 3D image-based approach to develop cell distribution maps which can be used to simultaneously measure the number, confluency and morphology of cells attached to microcarriers in a stirred tank bioreactor. The accuracy of the cell distribution measurements is validated using in silico modelling of synthetic image datasets and is shown to have an accuracy >90%. Using the cell distribution mapping process and principal component analysis we show how cell growth can be quantitatively monitored over a 13-day bioreactor culture period and how changes to manufacturing processes such as initial cell seeding density can significantly influence cell morphology and the rate at which cells are produced. Taken together, these results demonstrate how image-based analysis can be incorporated in cell quality control processes, facilitating the transition towards bioreactor based manufacture for clinical grade cells. PMID:22028809

  16. The use of multidimensional image-based analysis to accurately monitor cell growth in 3D bioreactor culture.

    PubMed

    Baradez, Marc-Olivier; Marshall, Damian

    2011-01-01

    The transition from traditional culture methods towards bioreactor based bioprocessing to produce cells in commercially viable quantities for cell therapy applications requires the development of robust methods to ensure the quality of the cells produced. Standard methods for measuring cell quality parameters such as viability provide only limited information, making process monitoring and optimisation difficult. Here we describe a 3D image-based approach to develop cell distribution maps which can be used to simultaneously measure the number, confluency and morphology of cells attached to microcarriers in a stirred tank bioreactor. The accuracy of the cell distribution measurements is validated using in silico modelling of synthetic image datasets and is shown to have an accuracy >90%. Using the cell distribution mapping process and principal component analysis we show how cell growth can be quantitatively monitored over a 13-day bioreactor culture period and how changes to manufacturing processes such as initial cell seeding density can significantly influence cell morphology and the rate at which cells are produced. Taken together, these results demonstrate how image-based analysis can be incorporated in cell quality control processes, facilitating the transition towards bioreactor based manufacture for clinical grade cells.

  17. Two-dimensional PCA-based human gait identification

    NASA Astrophysics Data System (ADS)

    Chen, Jinyan; Wu, Rongteng

    2012-11-01

    Automatic person recognition through visual surveillance is highly desirable for public security. Human gait-based identification focuses on automatically recognizing a person from walking video using computer vision and image processing approaches. As a potential biometric measure, human gait identification has attracted more and more researchers. Current human gait identification methods can be divided into two categories: model-based methods and motion-based methods. In this paper, a human gait identification method based on two-dimensional principal component analysis (2DPCA) and temporal-space analysis is proposed. Using background estimation and image subtraction we obtain a sequence of binary images from the surveillance video. Differencing adjacent images in this gait sequence yields a sequence of binary difference images, each of which captures the body's motion pattern during walking. Temporal-space features are extracted as follows: projecting one difference image onto the Y axis and the X axis gives two vectors; projecting every difference image in the sequence gives two matrices, which together characterize the walking style. 2DPCA is then used to transform these two matrices into two vectors while preserving maximum separability. Finally, the similarity of two gait sequences is computed as the Euclidean distance between the corresponding vectors. The performance of our method is illustrated using the CASIA Gait Database.
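
    A hedged sketch of the projection-plus-2DPCA feature pipeline this record describes, assuming binary silhouettes are already extracted; array shapes and data are illustrative.

    ```python
    # Difference images from a silhouette sequence are projected onto the
    # Y axis, stacked into a matrix, and reduced with 2DPCA.
    import numpy as np

    def gait_projection_matrix(silhouettes: np.ndarray) -> np.ndarray:
        """silhouettes: (T, H, W) binary frames. Returns a (T-1, H) matrix
        whose rows are Y-axis projections of successive frame differences."""
        diffs = np.abs(np.diff(silhouettes.astype(int), axis=0))
        return diffs.sum(axis=2)  # project each difference image onto Y

    def two_d_pca(samples: list, k: int) -> list:
        """2DPCA: eigendecompose the image covariance matrix and project
        each sample matrix onto the top-k eigenvectors."""
        mean = np.mean(samples, axis=0)
        G = sum((a - mean).T @ (a - mean) for a in samples) / len(samples)
        eigvals, eigvecs = np.linalg.eigh(G)
        top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
        return [a @ top for a in samples]

    # Demo: dissimilarity between two synthetic gait sequences.
    rng = np.random.default_rng(1)
    seqs = [rng.integers(0, 2, (20, 64, 48)) for _ in range(2)]
    feats = two_d_pca([gait_projection_matrix(s) for s in seqs], k=4)
    print(np.linalg.norm(feats[0] - feats[1]))  # gait distance
    ```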

  18. Depth Structure from Asymmetric Shading Supports Face Discrimination

    PubMed Central

    Chen, Chien-Chung; Chen, Chin-Mei; Tyler, Christopher W.

    2013-01-01

    To examine the effect of illumination direction on the ability of observers to discriminate between faces, we manipulated the direction of illumination on scanned 3D face models. In order to dissociate the surface reflectance and illumination components of front-view face images, we introduce a symmetry algorithm that can separate the symmetric and asymmetric components of the face in both low and high spatial frequency bands. Based on this approach, hybrid face stimuli were constructed with different combinations of symmetric and asymmetric spatial content. Discrimination results with these images showed that asymmetric illumination information biased face perception toward the structure of the shading component, while the symmetric illumination information had little, if any, effect. Measures of perceived depth showed that this property increased systematically with the asymmetric but not the symmetric low-spatial-frequency component. Together, these results suggest that (1) the asymmetric 3D shading information dramatically affects both the perceived facial information and the perceived depth of the facial structure; and (2) these effects both increase as the illumination direction is shifted to the side. Thus, our results support the hypothesis that face processing has a strong 3D component. PMID:23457484

  19. Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real time and displayed on monitors on board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. However, achieving this goal requires several different stages of processing, including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
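
    A hedged sketch of the fusion stage only: a per-pixel weighted sum of two already-registered, already-enhanced frames. The weight value and frame names are assumptions, not the flight system's actual parameters.

    ```python
    # Weighted-sum fusion of two co-registered images with values in [0, 1].
    import numpy as np

    def fuse(frame_a: np.ndarray, frame_b: np.ndarray, w: float = 0.5) -> np.ndarray:
        """Blend two frames with weight w on frame_a and (1 - w) on frame_b."""
        return np.clip(w * frame_a + (1.0 - w) * frame_b, 0.0, 1.0)

    # Example: favor one sensor slightly over the other (weights assumed).
    lwir = np.random.default_rng(2).random((480, 640))
    swir = np.random.default_rng(3).random((480, 640))
    display = fuse(lwir, swir, w=0.6)
    print(display.shape, display.min(), display.max())
    ```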

  20. Imaging in Central Nervous System Drug Discovery.

    PubMed

    Gunn, Roger N; Rabiner, Eugenii A

    2017-01-01

    The discovery and development of central nervous system (CNS) drugs is an extremely challenging process requiring large resources, long timelines, and high associated costs. The high rate of failure makes the process extremely risky. Over the past couple of decades PET imaging has become a central component of the CNS drug-development process, enabling decision-making in phase I studies, where early discharge of risk provides increased confidence to progress a candidate to more costly later-phase testing at the right dose level, or alternatively to kill a compound that fails to meet key criteria. The so-called "3 pillars" of drug survival, namely tissue exposure, target engagement, and pharmacologic activity, are particularly well suited for evaluation by PET imaging. This review introduces the process of CNS drug development before considering how PET imaging of the "3 pillars" has advanced to provide valuable tools for decision-making on the critical path of CNS drug development. Finally, we review the advances in PET science of biomarker development and analysis that enable sophisticated drug-development studies in man. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. FBI Fingerprint Image Capture System High-Speed-Front-End throughput modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rathke, P.M.

    1993-09-01

    The Federal Bureau of Investigation (FBI) has undertaken a major modernization effort called the Integrated Automated Fingerprint Identification System (IAFIS). This system will provide centralized identification services using automated fingerprint, subject descriptor, mugshot, and document processing. A high-speed Fingerprint Image Capture System (FICS) is under development as part of the IAFIS program. The FICS will capture digital and microfilm images of FBI fingerprint cards for input into a central database. One FICS design supports two front-end scanning subsystems, known as the High-Speed-Front-End (HSFE) and Low-Speed-Front-End, to supply image data to a common data processing subsystem. The production rate of the HSFE is critical to meeting the FBI's fingerprint card processing schedule. A model of the HSFE has been developed to help identify the issues driving the production rate, assist in the development of component specifications, and guide the evolution of an operations plan. A description of the model development is given, the assumptions are presented, and some HSFE throughput analysis is performed.

  2. High-Fidelity Microstructural Characterization and Performance Modeling of Aluminized Composite Propellant

    DOE PAGES

    Kosiba, Graham D.; Wixom, Ryan R.; Oehlschlaeger, Matthew A.

    2017-10-27

    Image processing and stereological techniques were used to characterize the heterogeneity of composite propellant and inform a predictive burn rate model. Composite propellant samples made up of ammonium perchlorate (AP), hydroxyl-terminated polybutadiene (HTPB), and aluminum (Al) were faced with an ion mill and imaged with a scanning electron microscope (SEM) and x-ray tomography (micro-CT). Properties of both the bulk and individual components of the composite propellant were determined from a variety of image processing tools. An algebraic model, based on the improved Beckstead-Derr-Price model developed by Cohen and Strand, was used to predict the steady-state burning of the aluminized composite propellant. In the presented model the presence of aluminum particles within the propellant was introduced. The thermal effects of aluminum particles are accounted for at the solid-gas propellant surface interface and aluminum combustion is considered in the gas phase using a single global reaction. In conclusion, properties derived from image processing were used directly as model inputs, leading to a sample-specific predictive combustion model.

  3. High-Fidelity Microstructural Characterization and Performance Modeling of Aluminized Composite Propellant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosiba, Graham D.; Wixom, Ryan R.; Oehlschlaeger, Matthew A.

    Image processing and stereological techniques were used to characterize the heterogeneity of composite propellant and inform a predictive burn rate model. Composite propellant samples made up of ammonium perchlorate (AP), hydroxyl-terminated polybutadiene (HTPB), and aluminum (Al) were faced with an ion mill and imaged with a scanning electron microscope (SEM) and x-ray tomography (micro-CT). Properties of both the bulk and individual components of the composite propellant were determined from a variety of image processing tools. An algebraic model, based on the improved Beckstead-Derr-Price model developed by Cohen and Strand, was used to predict the steady-state burning of the aluminized composite propellant. In the presented model the presence of aluminum particles within the propellant was introduced. The thermal effects of aluminum particles are accounted for at the solid-gas propellant surface interface and aluminum combustion is considered in the gas phase using a single global reaction. In conclusion, properties derived from image processing were used directly as model inputs, leading to a sample-specific predictive combustion model.

  4. A new programming metaphor for image processing procedures

    NASA Technical Reports Server (NTRS)

    Smirnov, O. M.; Piskunov, N. E.

    1992-01-01

    Most image processing systems, besides an Application Program Interface (API) which lets users write their own image processing programs, also feature a higher level of programmability. Traditionally, this is a command or macro language, which can be used to build large procedures (scripts) out of simple programs or commands. This approach, a legacy of the teletypewriter, has serious drawbacks: a command language is clumsy when (and if!) it attempts to utilize the capabilities of a multitasking or multiprocessor environment, it is barely adequate for real-time data acquisition and processing, it has a fairly steep learning curve, and its user interface is very inefficient, especially when compared to a graphical user interface (GUI) that systems running under X11 or Windows should otherwise be able to provide. All these difficulties stem from one basic problem: a command language is not a natural metaphor for an image processing procedure. A more natural metaphor, an image processing factory, is described in detail. A factory is a set of programs (applications) that execute separate operations on images, connected by pipes that carry data (images and parameters) between them. The programs function concurrently, processing images as they arrive along pipes, and querying the user for whatever other input they need. From the user's point of view, programming (constructing) factories is a lot like playing with LEGO blocks - much more intuitive than writing scripts. Focus is on some of the difficulties of implementing factory support, most notably the design of an appropriate API. It is also shown that factories retain all the functionality of a command language (including loops and conditional branches), while suffering from none of the drawbacks outlined above. Other benefits of factory programming include self-tuning factories and the process of encapsulation, which lets a factory take the shape of a standard application both from the system's and the user's point of view, and thus be used as a component of other factories. A bare-bones prototype of factory programming was implemented under the PcIPS image processing system, and a complete version (on a multitasking platform) is under development.
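
    A toy rendering of the factory metaphor, assuming stages connected by pipes; Python generators stand in for the concurrently executing programs. This is not the PcIPS implementation.

    ```python
    # Each "program" consumes images from an input pipe and emits results,
    # so wiring stages together resembles snapping LEGO blocks.
    import numpy as np

    def source(n_images, shape=(64, 64)):
        rng = np.random.default_rng(4)
        for _ in range(n_images):
            yield rng.random(shape)          # emit raw images onto the pipe

    def denoise(pipe):
        for img in pipe:
            yield np.clip(img, 0.05, 0.95)   # stand-in denoising operation

    def threshold(pipe, level=0.5):
        for img in pipe:
            yield (img > level).astype(np.uint8)

    # Construct the factory by connecting stages, then let images flow.
    factory = threshold(denoise(source(3)), level=0.6)
    for result in factory:
        print(result.mean())
    ```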

  5. Inverting Image Data For Optical Testing And Alignment

    NASA Technical Reports Server (NTRS)

    Shao, Michael; Redding, David; Yu, Jeffrey W.; Dumont, Philip J.

    1993-01-01

    Data from images produced by slightly incorrectly figured concave primary mirror in telescope processed into estimate of spherical aberration of mirror, by use of algorithm finding nonlinear least-squares best fit between actual images and synthetic images produced by multiparameter mathematical model of telescope optical system. Estimated spherical aberration, in turn, converted into estimate of deviation of reflector surface from nominal precise shape. Algorithm devised as part of effort to determine error in surface figure of primary mirror of Hubble space telescope, so corrective lens designed. Modified versions of algorithm also used to find optical errors in other components of telescope or of other optical systems, for purposes of testing, alignment, and/or correction.

  6. Diffusion Weighted Image Denoising Using Overcomplete Local PCA

    PubMed Central

    Manjón, José V.; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D. Louis; Robles, Montserrat

    2013-01-01

    Diffusion Weighted Images (DWI) normally show a low Signal to Noise Ratio (SNR) due to the presence of noise from the measurement process that complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889
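
    A simplified sketch of the core idea, shrinking less significant principal components of local patches; the published filter is overcomplete and multicomponent, which this single-image toy omits.

    ```python
    # Denoise a 2D image by zeroing all but `keep` principal components of
    # its non-overlapping patch ensemble (a simplification of the method).
    import numpy as np

    def local_pca_denoise(img, patch=8, keep=4):
        h, w = (d - d % patch for d in img.shape)   # crop to patch multiple
        blocks = (img[:h, :w]
                  .reshape(h // patch, patch, w // patch, patch)
                  .swapaxes(1, 2)
                  .reshape(-1, patch * patch))       # one row per patch
        mean = blocks.mean(axis=0)
        u, s, vt = np.linalg.svd(blocks - mean, full_matrices=False)
        s[keep:] = 0.0                               # shrink weak components
        denoised = (u * s) @ vt + mean
        out = img.copy()
        out[:h, :w] = (denoised.reshape(h // patch, w // patch, patch, patch)
                       .swapaxes(1, 2).reshape(h, w))
        return out

    noisy = np.random.default_rng(5).random((64, 64))
    print(local_pca_denoise(noisy).shape)
    ```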

  7. Dental digital radiographic imaging.

    PubMed

    Mauriello, S M; Platin, E

    2001-01-01

    Radiographs are an important adjunct to providing oral health care for the total patient. Historically, radiographic images have been produced using film-based systems. However, in recent years, with the arrival of new technologies, many practitioners have begun to incorporate digital radiographic imaging into their practices. Since dental hygienists are primarily responsible for exposing and processing radiographs in the provision of dental hygiene care, it is imperative that they become knowledgeable on the use and application of digital imaging in patient care and record keeping. The purpose of this course is to provide a comprehensive overview of digital radiography in dentistry. Specific components addressed are technological features, diagnostic software, advantages and disadvantages, technique procedures, and legal implications.

  8. Spatial and spectral analysis of corneal epithelium injury using hyperspectral images

    NASA Astrophysics Data System (ADS)

    Md Noor, Siti Salwa; Michael, Kaleena; Marshall, Stephen; Ren, Jinchang

    2017-12-01

    Eye assessment is essential in preventing blindness. Currently, the existing methods to assess corneal epithelium injury are complex and require expert knowledge. Hence, we have introduced a non-invasive technique using hyperspectral imaging (HSI) and an image analysis algorithm for corneal epithelium injury. Three groups of images were compared and analyzed: healthy eyes, injured eyes, and injured eyes with stain. Dimensionality reduction using principal component analysis (PCA) was applied to reduce the massive data volume and redundancy. The first 10 principal components (PCs) were selected for further processing. Mean vectors for all 45 pairwise combinations of the 10 PCs were computed and sent to two classifiers. A quadratic Bayes normal classifier (QDC) and a support vector classifier (SVC) were used in this study to classify the eleven eyes into three groups. The combined QDC and SVC classifier showed optimal performance with 2D PCA features (2DPCA-QDSVC) and was utilized to classify normal and abnormal tissues using color image segmentation. The result was compared with human segmentation. The outcome showed that the proposed algorithm produced extremely promising results to assist the clinician in quantifying corneal injury.
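
    An illustrative sketch of the PCA-then-classifier pipeline under stated assumptions: a synthetic cube, random labels, and per-pixel classification standing in for the study's per-eye classification and segmentation.

    ```python
    # Reduce a hyperspectral cube with PCA, then classify with an SVC.
    # All data here are synthetic placeholders.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    rng = np.random.default_rng(6)
    cube = rng.random((50, 50, 120))                # H x W x spectral bands
    pixels = cube.reshape(-1, 120)
    labels = rng.integers(0, 3, pixels.shape[0])    # healthy / injured / stained

    scores = PCA(n_components=10).fit_transform(pixels)  # first 10 PCs
    clf = SVC(kernel="rbf").fit(scores, labels)
    pred_map = clf.predict(scores).reshape(50, 50)  # per-pixel class map
    print(pred_map.shape)
    ```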

  9. Flexible real-time magnetic resonance imaging framework.

    PubMed

    Santos, Juan M; Wright, Graham A; Pauly, John M

    2004-01-01

    The extension of MR imaging to new applications has demonstrated the limitations of the architecture of current real-time systems. Traditional real-time implementations provide continuous acquisition of data and modification of basic sequence parameters on the fly. We have extended the concept of real-time MRI by designing a system that drives examinations from a real-time localizer and is then reconfigured for different imaging modes. Upon operator request or automatic feedback, the system can immediately generate a new pulse sequence or change fundamental aspects of the acquisition such as gradient waveforms, excitation pulses, and scan planes. This framework has been implemented by connecting a data processing and control workstation to a conventional clinical scanner. Key components of the design of this framework are the data communication and control mechanisms, reconstruction algorithms optimized for real-time performance and adaptability, a flexible user interface, and extensible user interaction. In this paper we describe the various components that comprise this system. Applications implemented in this framework include real-time catheter tracking embedded in high-frame-rate real-time imaging and immediate switching between a real-time localizer and high-resolution volume imaging for coronary angiography applications.

  10. Application of radiative image transfer theory to the assessment of the overall OTF and contrast degradation of an image in an inhomogeneous turbulent and turbid atmosphere

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1992-01-01

    A perturbation-theoretic approximation of the radiative transfer equation which neglects photon dispersion is used as a modeling basis for the propagation of the image of a self-luminous target through a turbulent atmosphere which also possesses inhomogeneously distributed turbidity along the propagation path. A contrast ratio is then introduced which provides an indicator of the relative contribution of the unscattered or coherent image component to that of the scattered or incoherent image component. Analytical expressions for the contrast ratio are then derived from the approximate form of the radiative transfer equation for the case of an inhomogeneously dispersed Joss thunderstorm rain distribution in the presence of turbulence. A clear case is made for the need to identify the points of demarcation at which the dominant roles of the scattering processes due to turbidity and turbulence are exchanged. Such a measure can provide a performance parameter for the application of adaptive optics methods that are specific to the particular dominant scattering mechanism, given the prevailing target size, total propagation length, and overall propagation parameters.

  11. Integration of image capture and processing: beyond single-chip digital camera

    NASA Astrophysics Data System (ADS)

    Lim, SukHwan; El Gamal, Abbas

    2001-05-01

    An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to perform applications such as real-time optical flow estimation.
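
    A small sketch of the multiple-capture idea for dynamic range extension: sample several exposures within one frame and keep, per pixel, the longest non-saturated sample rescaled to a common exposure. Exposure values and the saturation threshold are illustrative.

    ```python
    # Combine several exposures of one scene into a high-dynamic-range image.
    import numpy as np

    def multiple_capture_hdr(scene, exposures, full_well=1.0):
        out = np.zeros_like(scene)
        for t in sorted(exposures):                    # short to long exposure
            sample = np.clip(scene * t, 0, full_well)  # what the sensor reads
            ok = sample < full_well                    # not yet saturated
            out[ok] = sample[ok] / t                   # rescale to unit exposure
        return out

    scene = np.random.default_rng(7).random((4, 4)) * 10  # wide-range radiance
    print(multiple_capture_hdr(scene, exposures=[1.0, 0.25, 0.0625]))
    ```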

  12. Time-lapse Raman imaging of osteoblast differentiation

    PubMed Central

    Hashimoto, Aya; Yamaguchi, Yoshinori; Chiu, Liang-da; Morimoto, Chiaki; Fujita, Katsumasa; Takedachi, Masahide; Kawata, Satoshi; Murakami, Shinya; Tamiya, Eiichi

    2015-01-01

    Osteoblastic mineralization occurs during the early stages of bone formation. During this mineralization, hydroxyapatite (HA), a major component of bone, is synthesized, generating hard tissue. Many of the mechanisms driving biomineralization remain unclear because the traditional biochemical assays used to investigate them are destructive techniques incompatible with viable cells. To determine the temporal changes in mineralization-related biomolecules at mineralization spots, we performed time-lapse Raman imaging of mouse osteoblasts at a subcellular resolution throughout the mineralization process. Raman imaging enabled us to analyze the dynamics of the related biomolecules at mineralization spots throughout the entire process of mineralization. Here, we stimulated KUSA-A1 cells to differentiate into osteoblasts and conducted time-lapse Raman imaging on them every 4 hours for 24 hours, beginning 5 days after the stimulation. The HA and cytochrome c Raman bands were used as markers for osteoblastic mineralization and apoptosis. From the Raman images successfully acquired throughout the mineralization process, we found that β-carotene acts as a biomarker that indicates the initiation of osteoblastic mineralization. A fluctuation of cytochrome c concentration, which indicates cell apoptosis, was also observed during mineralization. We expect time-lapse Raman imaging to help us to further elucidate osteoblastic mineralization mechanisms that have previously been unobservable. PMID:26211729

  13. Digital image processing techniques for the analysis of fuel sprays global pattern

    NASA Astrophysics Data System (ADS)

    Zakaria, Rami; Bryanston-Cross, Peter; Timmerman, Brenda

    2017-12-01

    We studied the fuel atomization process of two fuel injectors to be fitted in a new small rotary engine design. The aim was to improve the efficiency of the engine by optimizing the fuel injection system. Fuel sprays were visualised by an optical diagnostic system. Images of fuel sprays were produced under various testing conditions, by changing the line pressure, nozzle size, injection frequency, etc. The atomisers were a high-frequency microfluidic dispensing system and a standard low flow-rate fuel injector. A series of image processing procedures were developed in order to acquire information from the laser-scattering images. This paper presents the macroscopic characterisation of jet fuel (JP8) sprays. We observed the droplet density distribution, tip velocity, and spray-cone angle against line pressure and nozzle size. The analysis was performed for low line pressure (up to 10 bar) and short injection periods (1-2 ms). Local velocity components were measured by applying particle image velocimetry (PIV) to double-exposure images. The discharge velocity was lower in the micro-dispensing nozzle sprays, and the tip penetration slowed down at higher rates compared with the gasoline injector. The PIV test confirmed that the gasoline injector produced sprays with higher-velocity elements in the centre and tip regions.

  14. Time-lapse Raman imaging of osteoblast differentiation

    NASA Astrophysics Data System (ADS)

    Hashimoto, Aya; Yamaguchi, Yoshinori; Chiu, Liang-Da; Morimoto, Chiaki; Fujita, Katsumasa; Takedachi, Masahide; Kawata, Satoshi; Murakami, Shinya; Tamiya, Eiichi

    2015-07-01

    Osteoblastic mineralization occurs during the early stages of bone formation. During this mineralization, hydroxyapatite (HA), a major component of bone, is synthesized, generating hard tissue. Many of the mechanisms driving biomineralization remain unclear because the traditional biochemical assays used to investigate them are destructive techniques incompatible with viable cells. To determine the temporal changes in mineralization-related biomolecules at mineralization spots, we performed time-lapse Raman imaging of mouse osteoblasts at a subcellular resolution throughout the mineralization process. Raman imaging enabled us to analyze the dynamics of the related biomolecules at mineralization spots throughout the entire process of mineralization. Here, we stimulated KUSA-A1 cells to differentiate into osteoblasts and conducted time-lapse Raman imaging on them every 4 hours for 24 hours, beginning 5 days after the stimulation. The HA and cytochrome c Raman bands were used as markers for osteoblastic mineralization and apoptosis. From the Raman images successfully acquired throughout the mineralization process, we found that β-carotene acts as a biomarker that indicates the initiation of osteoblastic mineralization. A fluctuation of cytochrome c concentration, which indicates cell apoptosis, was also observed during mineralization. We expect time-lapse Raman imaging to help us to further elucidate osteoblastic mineralization mechanisms that have previously been unobservable.

  15. 78 FR 68091 - Certain Marine Sonar Imaging Devices, Products Containing the Same, and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-898] Certain Marine Sonar Imaging Devices... importation of certain marine sonar imaging devices, products containing the same, and components thereof by... marine sonar imaging devices, products containing the same, and components thereof by reason of...

  16. A philosophy for CNS radiotracer design.

    PubMed

    Van de Bittner, Genevieve C; Ricq, Emily L; Hooker, Jacob M

    2014-10-21

    Decades after its discovery, positron emission tomography (PET) remains the premier tool for imaging neurochemistry in living humans. Technological improvements in radiolabeling methods, camera design, and image analysis have kept PET in the forefront. In addition, the use of PET imaging has expanded because researchers have developed new radiotracers that visualize receptors, transporters, enzymes, and other molecular targets within the human brain. However, of the thousands of proteins in the central nervous system (CNS), researchers have successfully imaged fewer than 40 human proteins. To address the critical need for new radiotracers, this Account expounds on the decisions, strategies, and pitfalls of CNS radiotracer development based on our current experience in this area. We discuss the five key components of radiotracer development for human imaging: choosing a biomedical question, selection of a biological target, design of the radiotracer chemical structure, evaluation of candidate radiotracers, and analysis of preclinical imaging. It is particularly important to analyze the market of scientists or companies who might use a new radiotracer and to carefully select relevant biomedical questions for that audience. In the selection of a specific biological target, we emphasize how target localization and identity can constrain this process and discuss the optimal target density and affinity ratios needed for binding-based radiotracers. In addition, we discuss various PET test-retest variability requirements for monitoring changes in density, occupancy, or functionality for new radiotracers. In the synthesis of new radiotracer structures, high-throughput, modular syntheses have proved valuable, and these processes provide compounds with sites for late-stage radioisotope installation. As a result, researchers can manage the time constraints associated with the limited half-lives of isotopes. In order to evaluate brain uptake, a number of methods are available to predict bioavailability, blood-brain barrier (BBB) permeability, and the associated issues of nonspecific binding and metabolic stability. To evaluate the synthesized chemical library, researchers need to consider high-throughput affinity assays, the analysis of specific binding, and the importance of fast binding kinetics. Finally, we describe how we initially assess preclinical radiotracer imaging, using brain uptake, specific binding, and preliminary kinetic analysis to identify promising radiotracers that may be useful for human brain imaging. Although we discuss these five design components separately and linearly in this Account, in practice we apply them nonlinearly and iteratively to develop new compounds in the most efficient way possible.

  17. X-ray Tomography and Chemical Imaging within Butterfly Wing Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Jianhua; Lee Yaochang; Tang, M.-T.

    2007-01-19

    The rainbow-like color of butterfly wings is associated with the internal and surface structures of the wing scales. While the photonic structure of the scales is believed to diffract specific wavelengths at different angles, no probe has adequately resolved the 3-D structures with sufficient spatial resolution. The NSRRC nano-transmission x-ray microscope (nTXM), with tens-of-nanometers spatial resolution, is able to image biological specimens without the artifacts usually introduced by sophisticated sample staining processes. With the intrinsic deep penetration of x-rays, the nTXM is capable of nondestructively investigating the internal structures of fragile and soft samples. In this study, we imaged the structure of butterfly wing scales in 3-D with 60 nm spatial resolution. In addition, synchrotron-radiation-based Fourier transform infrared (FT-IR) microspectroscopy was employed to analyze the chemical components, with spatial information, of the butterfly wing scales. Based on the infrared spectral images, we suggest that the major components of the scale structure are rich in protein and polysaccharide.

  18. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575

  19. Underwater image mosaicking and visual odometry

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz; Tangirala, Sekhar; Sorber, Scott

    2017-05-01

    This paper summarizes the results of studies in underwater odometry using a video camera for estimating the velocity of an unmanned underwater vehicle (UUV). Underwater vehicles are usually equipped with sonar and an Inertial Measurement Unit (IMU), an integrated sensor package that combines multiple accelerometers and gyros to produce a three-dimensional measurement of both specific force and angular rate with respect to an inertial reference frame for navigation. In this study, we investigate the use of odometry information obtainable from a video camera mounted on a UUV to extract vehicle velocity relative to the ocean floor. A key challenge with this process is the seemingly bland (i.e., featureless) nature of video data obtained underwater, which could make conventional approaches to image-based motion estimation difficult. To address this problem, we perform image enhancement, followed by frame-to-frame image transformation, registration, and mosaicking/stitching. With this approach the velocity components associated with the moving sensor (vehicle) are readily obtained from (i) the components of the transform matrix at each frame; (ii) information about the height of the vehicle above the seabed; and (iii) the sensor resolution. Preliminary results are presented.
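
    A hedged sketch of the velocity estimate from steps (i)-(iii): recover the per-frame pixel translation (here by FFT phase correlation, a stand-in for the paper's mosaicking transform) and scale by an altitude-derived ground sample distance and the frame rate. The frame rate, altitude, and angular-resolution values are assumptions.

    ```python
    # Velocity from frame-to-frame translation, altitude, and sensor resolution.
    import numpy as np

    def pixel_shift(f0, f1):
        """Integer translation of f1 relative to f0 via phase correlation."""
        cross = np.fft.ifft2(np.conj(np.fft.fft2(f0)) * np.fft.fft2(f1))
        peak = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
        # Fold peaks in the upper half of each axis to negative shifts.
        return [p - s if p > s // 2 else p for p, s in zip(peak, cross.shape)]

    fps = 10.0                   # camera frame rate, Hz (assumed)
    altitude_m = 3.0             # height above the seabed (from altimeter)
    ifov_rad = 0.001             # per-pixel angular resolution (sensor spec)
    gsd = altitude_m * ifov_rad  # ground sample distance, metres per pixel

    f0 = np.random.default_rng(8).random((128, 128))
    f1 = np.roll(f0, (3, -5), axis=(0, 1))           # simulate vehicle motion
    dy, dx = pixel_shift(f0, f1)
    speed = np.hypot(dx, dy) * gsd * fps
    print(dy, dx, f"{speed:.3f} m/s")
    ```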

  20. Scientific and industrial challenges of developing nanoparticle-based theranostics and multiple-modality contrast agents for clinical application

    NASA Astrophysics Data System (ADS)

    Wáng, Yì Xiáng J.; Idée, Jean-Marc; Corot, Claire

    2015-10-01

    Theranostics and dual- or multi-modality contrast agents are currently two of the hottest topics in biotechnology and biomaterials science. However, for single-entity theranostics, the right ratio of diagnostic component to therapeutic component may not always be realized in a composite suitable for clinical application. For dual/multiple-modality molecular imaging agents, there is an optimal time window for imaging after in vivo administration: when an agent is imaged by one modality, its pharmacokinetics may not allow imaging by another modality. Due to reticuloendothelial system clearance, efficient in vivo delivery of nanoparticles to the lesion site is sometimes difficult. The toxicity of these entities also remains poorly understood. While the medical need for theranostics is acknowledged, the business model remains to be established. There is an urgent need for a global and internationally harmonized re-evaluation of the approval and marketing processes for theranostics. However, a reasonable expectation exists that, in the near future, the current obstacles will be removed, thus allowing the wide use of these very promising agents.

  1. Detection of fresh bruises in apples by structured-illumination reflectance imaging

    NASA Astrophysics Data System (ADS)

    Lu, Yuzhen; Li, Richard; Lu, Renfu

    2016-05-01

    Detection of fresh bruises in apples remains a challenging task due to the absence of visual symptoms and significant chemical alterations of fruit tissues during the initial stage after the fruit have been bruised. This paper reports on a new structured-illumination reflectance imaging (SIRI) technique for enhanced detection of fresh bruises in apples. Using a digital light projector engine, sinusoidally modulated illumination at spatial frequencies of 50, 100, 150, and 200 cycles/m was generated. A digital camera was then used to capture the reflectance images from `Gala' and `Jonagold' apples immediately after they had been subjected to two levels of bruising by impact tests. A conventional three-phase demodulation (TPD) scheme was applied to the acquired images to obtain the planar (direct component, or DC) and amplitude (alternating component, or AC) images. Bruises were identified in the amplitude images with varying image contrast, depending on spatial frequency. The bruise visibility was further enhanced through post-processing of the amplitude images. Furthermore, three spiral phase transform (SPT)-based demodulation methods, using a single image, two images, or two phase-shifted images, were proposed for obtaining AC images. Results showed that the demodulation methods greatly enhanced the contrast and spatial resolution of the AC images, making it feasible to detect fresh bruises that could not otherwise be detected by conventional imaging with planar or uniform illumination. The effectiveness of image enhancement, however, varied with spatial frequency. Both the 2-image and 2-phase SPT methods achieved performance similar to that of conventional TPD. The SIRI technique has demonstrated the capability of detecting fresh bruises in apples and has potential as a new imaging modality for enhancing food quality and safety detection.
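
    A minimal sketch of the conventional three-phase demodulation (TPD) referenced above: three captures under sinusoidal illumination shifted by 120 degrees yield the DC (planar) and AC (amplitude) images. The fruit images here are synthetic stand-ins.

    ```python
    # Standard three-phase demodulation of structured-illumination images.
    import numpy as np

    def three_phase_demodulate(i1, i2, i3):
        dc = (i1 + i2 + i3) / 3.0
        ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
            (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
        return dc, ac

    # Simulate three phase-shifted captures of a scene under fringe lighting.
    x = np.linspace(0, 4 * np.pi, 256)
    scene = np.random.default_rng(9).random((256, 256))
    frames = [scene * (1 + 0.5 * np.sin(x + k * 2 * np.pi / 3))
              for k in range(3)]
    dc, ac = three_phase_demodulate(*frames)
    print(dc.mean(), ac.mean())
    ```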

  2. Binding Isotherms and Time Courses Readily from Magnetic Resonance.

    PubMed

    Xu, Jia; Van Doren, Steven R

    2016-08-16

    Evidence is presented that binding isotherms, simple or biphasic, can be extracted directly from noninterpreted, complex 2D NMR spectra using principal component analysis (PCA) to reveal the largest trend(s) across the series. This approach renders peak picking unnecessary for tracking population changes. In 1:1 binding, the first principal component captures the binding isotherm from NMR-detected titrations in fast, slow, and even intermediate and mixed exchange regimes, as illustrated for phospholigand associations with proteins. Although the sigmoidal shifts and line broadening of intermediate exchange distort binding isotherms constructed conventionally, applying PCA directly to these spectra along with Pareto scaling overcomes the distortion. Applying PCA to time-domain NMR data also yields binding isotherms from titrations in fast or slow exchange. The algorithm also readily extracts time courses, such as breathing and heart rate in chest imaging, from magnetic resonance imaging movies. Similarly, two-step binding processes detected by NMR are easily captured by principal components 1 and 2. PCA obviates the customary focus on specific peaks or regions of images. Applied directly to a series of complex data, it will easily delineate binding isotherms, equilibrium shifts, and time courses of reactions or fluctuations.
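
    A minimal sketch of this use of PCA, assuming the titration series is arranged as one flattened spectrum per row (the names and the simple Pareto-scaling step are illustrative, not the authors' code):

        # Minimal sketch: the binding isotherm as the first principal
        # component of a titration series of spectra. Each row of `spectra`
        # is one flattened 2D NMR spectrum at one titration point.
        import numpy as np

        def isotherm_from_spectra(spectra):
            x = spectra - spectra.mean(axis=0)       # mean-center across the series
            # Pareto scaling tempers intensity distortions (e.g. line broadening).
            x = x / np.sqrt(spectra.std(axis=0) + 1e-12)
            u, s, vt = np.linalg.svd(x, full_matrices=False)
            return u[:, 0] * s[0]                    # PC1 scores track the bound population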

  3. Automated motion artifact removal for intravital microscopy, without a priori information.

    PubMed

    Lee, Sungon; Vinegoni, Claudio; Sebas, Matthew; Weissleder, Ralph

    2014-03-28

    Intravital fluorescence microscopy, through extended penetration depth and imaging resolution, provides the ability to image at cellular and subcellular resolution in live animals, presenting an opportunity for new insights into in vivo biology. Unfortunately, physiologically induced motion components due to respiration and cardiac activity are major sources of image artifacts and impose severe limitations on the effective imaging resolution that can ultimately be achieved in vivo. Here we present a novel imaging methodology capable of automatically removing motion artifacts during intravital microscopy imaging of organs and orthotopic tumors. The method is universally applicable to different laser scanning modalities, including confocal and multiphoton microscopy, and offers artifact-free reconstructions independent of the physiological motion source and imaged organ. The methodology, which is based on raw data acquisition followed by image processing, is demonstrated here for both cardiac and respiratory motion compensation in the mouse heart, kidney, liver, pancreas, and dorsal window chamber.

  4. Automated motion artifact removal for intravital microscopy, without a priori information

    PubMed Central

    Lee, Sungon; Vinegoni, Claudio; Sebas, Matthew; Weissleder, Ralph

    2014-01-01

    Intravital fluorescence microscopy, through extended penetration depth and imaging resolution, provides the ability to image at cellular and subcellular resolution in live animals, presenting an opportunity for new insights into in vivo biology. Unfortunately, physiologically induced motion components due to respiration and cardiac activity are major sources of image artifacts and impose severe limitations on the effective imaging resolution that can ultimately be achieved in vivo. Here we present a novel imaging methodology capable of automatically removing motion artifacts during intravital microscopy imaging of organs and orthotopic tumors. The method is universally applicable to different laser scanning modalities, including confocal and multiphoton microscopy, and offers artifact-free reconstructions independent of the physiological motion source and imaged organ. The methodology, which is based on raw data acquisition followed by image processing, is demonstrated here for both cardiac and respiratory motion compensation in the mouse heart, kidney, liver, pancreas, and dorsal window chamber. PMID:24676021

  5. Online damage inspection of optics for ATP system

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Jiang, Yu; Mao, Yao; Gan, Xun; Liu, Qiong

    2016-09-01

    In an electro-optical acquisition-tracking-pointing (ATP) system, the optical components are gradually damaged by several influencing factors, and the damage rate increases sharply once the damage reaches a certain extent. Because of the complex processing techniques and long processing cycles of optical components, such damage greatly increases the development cost and schedule of the system. It is therefore important to detect laser-induced damage in the ATP system. At present, most research on online damage detection for optical components targets large optical systems. The associated detection systems have complicated structures with many components and require reserved installation space, which makes them unsuitable for ATP systems. To solve this problem, this paper uses a machine-vision method to detect damage online in an existing ATP system. First, a CCD camera and a PC are used for image acquisition. Second, smoothing filters are used to suppress false damage points produced by noise. Then, using the shape features contained in the damage image, Otsu's method, which selects the best segmentation threshold automatically, is used to locate the damage regions. Finally, by analyzing damage data such as damage area and damage position, we can offer guidance on the lifetime of the optical components. The method requires few detectors and has a simple structure, so it can be installed without any changes to the original light path. Experimental results show that the method is stable and effective for detecting damage to optical components online in an ATP system.
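
    A minimal sketch of the described pipeline (smoothing, automatic Otsu thresholding, and per-region statistics) might look as follows; the kernel size and the assumption that damage appears darker than its surroundings are illustrative, not taken from the paper:

        # Minimal sketch: smooth to suppress false damage points, segment
        # damage regions with Otsu's automatic threshold, report area and
        # centroid of each region.
        import cv2

        def detect_damage(gray):
            blur = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress noise
            # Otsu's method selects the segmentation threshold automatically
            # (THRESH_BINARY_INV assumes damage is darker than background).
            _, mask = cv2.threshold(blur, 0, 255,
                                    cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
            # Skip label 0 (background); collect (area, centroid) per region.
            return [(stats[i, cv2.CC_STAT_AREA], tuple(centroids[i]))
                    for i in range(1, n)]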

  6. Biologically Inspired Model for Visual Cognition Achieving Unsupervised Episodic and Semantic Feature Learning.

    PubMed

    Qiao, Hong; Li, Yinlin; Li, Fengfu; Xi, Xuanyang; Wu, Wei

    2016-10-01

    Recently, many biologically inspired visual computational models have been proposed. The design of these models follows related biological mechanisms and structures, and the models provide new solutions for visual recognition tasks. In this paper, based on recent biological evidence, we propose a framework to mimic the active and dynamic learning and recognition process of the primate visual cortex. From the point of view of principle, the main contribution is that the framework achieves unsupervised learning of episodic features (including key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher-level cognition of an object. From the point of view of performance, the advantages of the framework are as follows: 1) learning episodic features without supervision: for a class of objects without prior knowledge, the key components, their spatial relations, and their cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features: within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming general knowledge of a class of objects: the general knowledge of a class of objects, mainly including the key components, their spatial relations, and average semantic values, can be formed as a concise description of the class; and 4) achieving higher-level cognition and dynamic updating: for a test image, the model can produce a classification and subclass semantic descriptions, and test samples with high confidence are selected to dynamically update the whole model. Experiments are conducted on face images, and good performance is achieved in each layer of the DNN and in the semantic description learning process. Furthermore, the model can be generalized to recognition tasks for other objects with learning ability.

  7. Visualization of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization serves the dual purposes of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process, which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e., real-time, interactive, or batch) and by the source data affect each of these visualization system components. The special requirements imposed by the wide variety and size of the source data produced by numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components, including specific visualization techniques, and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.

  8. Do you also have problems with the file format syndrome?

    PubMed

    De Cuyper, B; Nyssen, E; Christophe, Y; Cornelis, J

    1991-11-01

    In a biomedical data processing environment, an essential requirement is the ability to integrate a large class of standard modules for the acquisition, processing, and display of the (image) data. Our approach to the management and manipulation of the different data formats is based on the specification of a common standard for the representation of data formats, called 'data nature descriptions' to emphasise that this representation specifies not only the structure but also the contents of data objects (files). The idea behind this concept is to associate each hardware and software component that produces or uses medical data with a description of the data objects manipulated by that component. In our approach a special software module (a format convertor generator) takes care of the appropriate data format conversions required when two or more components of the system exchange data.
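
    A toy sketch of the idea (not the authors' system): each component declares the "data nature" it produces or consumes, and a converter registry supplies the conversion whenever two components with different descriptions exchange data. All names here are hypothetical:

        # Toy registry mapping (source nature, destination nature) pairs to
        # converter functions; convert() dispatches when natures differ.
        from typing import Callable, Dict, Tuple

        Converter = Callable[[bytes], bytes]
        registry: Dict[Tuple[str, str], Converter] = {}

        def register(src: str, dst: str, fn: Converter) -> None:
            registry[(src, dst)] = fn

        def convert(data: bytes, src: str, dst: str) -> bytes:
            # Identity when the natures match; otherwise look up a converter.
            return data if src == dst else registry[(src, dst)](data)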

  9. Split image optical display

    DOEpatents

    Veligdan, James T.

    2005-05-31

    A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.

  10. Split image optical display

    DOEpatents

    Veligdan, James T [Manorville, NY

    2007-05-29

    A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.

  11. An evolution of image source camera attribution approaches.

    PubMed

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing evidence about, and distinguishing characteristics of, the origin of a digital image. It allows the forensic analyser to find the possible source camera that captured the image under investigation. In real-world applications, however, these approaches face many challenges due to the large volume of multimedia data publicly available through photo-sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive, and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews source camera attribution techniques more comprehensively in the domain of image forensics, together with a presentation of ongoing developments within the specified area. The classification of existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods used to extract such artifacts. More recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four classes: optical aberration based, sensor camera fingerprint based, processing statistics based, and processing regularities based. Furthermore, this paper investigates the challenging problems and proposed strategies of such schemes based on the suggested taxonomy, to plot the evolution of source camera attribution approaches with respect to subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness, and computational efficiency of source camera brand, model, or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
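
    To illustrate one of the four classes named above, a minimal sketch of sensor-fingerprint (PRNU-style) attribution follows; the Gaussian denoiser and all names are simplifying assumptions rather than any surveyed method's implementation:

        # Minimal sketch: estimate a camera fingerprint by averaging noise
        # residuals of images from a known camera, then attribute a query
        # image by normalized correlation of its residual with the fingerprint.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def residual(img):
            # Noise residual W = I - F(I), with a simple Gaussian denoiser F.
            return img - gaussian_filter(img, sigma=1.5)

        def fingerprint(images):
            # Assumes same-size grayscale images from one camera.
            return np.mean([residual(i.astype(np.float64)) for i in images], axis=0)

        def correlate(query, k):
            w = residual(query.astype(np.float64))
            a, b = w - w.mean(), k - k.mean()
            return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())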

  12. How Does Adult Attachment Affect Human Recognition of Love-related and Sex-related Stimuli: An ERP Study

    PubMed Central

    Hou, Juan; Chen, Xin; Liu, Jinqun; Yao, Fangshu; Huang, Jiani; Ndasauka, Yamikani; Ma, Ru; Zhang, Yuting; Lan, Jing; Liu, Lu; Fang, Xiaoyi

    2016-01-01

    In the present study, we investigated the relationship among three emotion-motivation systems (adult attachment, romantic love, and sex). We recorded event-related potentials in 37 healthy volunteers who had experienced romantic love while they viewed SEX, LOVE, FRIEND, SPORT, and NEUTRAL images. We also measured adult attachment styles, level of passionate love, and sexual attitudes. As expected, results showed that, first, electrophysiological responses to love-related and sex-related image stimuli differed significantly in the N1, N2, and positive slow wave (PSW) components. Second, adult attachment styles affected individuals' recognition processing of love-related and, especially, sex-related images. Further analysis showed that, for sex-related images, voltages elicited in individuals with a fearful attachment style were significantly lower than those elicited in individuals with secure and dismissing attachment styles at frontal sites, in the N1 and N2 components. Third, the behavioral data showed that adult attachment styles were not significantly related to any dimension of sexual attitudes but were significantly related to passionate love scale (PLS) total scores; thus, the behavioral results were not in line with the electrophysiological results. The present study suggests that adult attachment styles may mediate individuals' lust and attraction systems. PMID:27199830

  13. How Does Adult Attachment Affect Human Recognition of Love-related and Sex-related Stimuli: An ERP Study.

    PubMed

    Hou, Juan; Chen, Xin; Liu, Jinqun; Yao, Fangshu; Huang, Jiani; Ndasauka, Yamikani; Ma, Ru; Zhang, Yuting; Lan, Jing; Liu, Lu; Fang, Xiaoyi

    2016-01-01

    In the present study, we investigated the relationship among three emotion-motivation systems (adult attachment, romantic love, and sex). We recorded event-related potentials in 37 healthy volunteers who had experienced romantic love while they viewed SEX, LOVE, FRIEND, SPORT, and NEUTRAL images. We also measured adult attachment styles, level of passionate love, and sexual attitudes. As expected, results showed that, first, electrophysiological responses to love-related and sex-related image stimuli differed significantly in the N1, N2, and positive slow wave (PSW) components. Second, adult attachment styles affected individuals' recognition processing of love-related and, especially, sex-related images. Further analysis showed that, for sex-related images, voltages elicited in individuals with a fearful attachment style were significantly lower than those elicited in individuals with secure and dismissing attachment styles at frontal sites, in the N1 and N2 components. Third, the behavioral data showed that adult attachment styles were not significantly related to any dimension of sexual attitudes but were significantly related to passionate love scale (PLS) total scores; thus, the behavioral results were not in line with the electrophysiological results. The present study suggests that adult attachment styles may mediate individuals' lust and attraction systems.

  14. SU-F-J-172: Hybrid MR/CT Compatible Phantom for MR-Only Based Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, M; Lee, S; Song, K

    2016-06-15

    Purpose: The development of a hybrid MR/CT-compatible phantom is introduced to help fully establish MR-image-only radiation treatment. The suggested technique, using images of the in-house developed hybrid MR/CT-compatible phantom, would be used to generate radiation treatment plans and perform dose calculations without a multi-modal registration process or generation of pseudo-CT. Methods: The fundamental requirements for the hybrid MR/CT-compatible phantom were established: relaxation times equivalent to human tissue, appropriate dielectric properties, homogeneous relaxation times, sufficient strength to fabricate a torso, ease of handling, a wide variety of density materials for calibration, and chemical and physical stability over an extended time. To meet these requirements, the chemical composition of each tested plug, intended to be tissue-equivalent on MR and CT images, was formulated, and the phantom body and plugs were fabricated. The chemical components were: agarose, GdCl3, NaN3, NaCl, K2CO3, and deionized-distilled water. Various mixtures of the chemical components intended to simulate human tissue on both MR and CT images were tested by measuring T1 and T2 relaxation times and signal intensity (SI) on MR images and Hounsfield units (HU) on CT, and the values were compared. A hybrid MR/CT-compatible phantom with 14 plugs was designed and built; its total height and external diameter were determined by the internal size of a 32-channel MR head coil. Results: Tissue-equivalent chemical component materials and the hybrid MR/CT-compatible phantom were developed. The ranges of T1 and T2 relaxation times and SI on MR images, and of HU on CT, were acquired and could be adjusted to correspond to simulated human tissue. Conclusion: The current results show the feasibility of MR-only based radiotherapy, and the best mixing ratios of the chemical components for tissue-equivalent images on MR and CT were found. However, additional technical issues remain to be overcome: conversion of SI on MR images into HU, and dose calculation based on the converted MRI, are in progress.

  15. Development of an optical parallel logic device and a half-adder circuit for digital optical processing

    NASA Technical Reports Server (NTRS)

    Athale, R. A.; Lee, S. H.

    1978-01-01

    The paper describes the fabrication and operation of an optical parallel logic (OPAL) device that performs Boolean algebraic operations on binary images. Several logic operations on two input binary images were demonstrated using an 8 x 8 device with a CdS photoconductor and a twisted nematic liquid crystal. Two such OPAL devices can be interconnected to form a half-adder circuit, one of the essential components of a CPU in a digital signal processor.
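
    The half-adder logic can be stated pixelwise: the sum bit is the XOR of the two input images and the carry bit is their AND. A minimal sketch emulating this in software (array sizes are illustrative; the paper realizes these operations optically):

        # Toy emulation of half-adder logic on binary images: sum = XOR,
        # carry = AND, applied pixelwise to 8x8 binary arrays.
        import numpy as np

        a = np.random.randint(0, 2, (8, 8), dtype=np.uint8)   # input binary image A
        b = np.random.randint(0, 2, (8, 8), dtype=np.uint8)   # input binary image B
        sum_bit = np.bitwise_xor(a, b)     # one device configured for XOR
        carry_bit = np.bitwise_and(a, b)   # second device configured for AND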

  16. Automatic image equalization and contrast enhancement using Gaussian mixture modeling.

    PubMed

    Celik, Turgay; Tjahjadi, Tardi

    2012-01-01

    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast of an input image. The algorithm uses a Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast-equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval, according to the dominant Gaussian component and the cumulative distribution function of the input interval. To account for the hypothesis that homogeneous regions in the image represent homogeneous silences (or sets of Gaussian components) in the image histogram, Gaussian components with small variances are weighted with smaller values than Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping from input interval to output interval. Experimental results show that the proposed algorithm produces enhanced images that are better than or comparable to those of several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
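
    As a rough illustration of the partition step only (a simplified sketch, not the authors' algorithm), the following fits a mixture to the gray-level samples and approximates the crossing points of adjacent weighted components; the component count and the 0-255 gray range are assumptions:

        # Simplified sketch: fit a 1D GMM to gray levels and locate the
        # approximate crossings of adjacent weighted component densities,
        # which bound the input gray-level intervals.
        import numpy as np
        from scipy.stats import norm
        from sklearn.mixture import GaussianMixture

        def interval_bounds(gray, n_components=4):
            gmm = GaussianMixture(n_components=n_components, random_state=0)
            gmm.fit(gray.reshape(-1, 1).astype(float))
            means = gmm.means_.ravel()
            stds = np.sqrt(gmm.covariances_.ravel())
            w = gmm.weights_
            order = np.argsort(means)
            grid = np.arange(256.0)
            # Weighted density of each component, sorted by mean.
            dens = np.stack([w[k] * norm.pdf(grid, means[k], stds[k]) for k in order])
            bounds = []
            for a in range(len(order) - 1):
                lo, hi = int(means[order[a]]), int(means[order[a + 1]])
                if hi <= lo:
                    continue
                # Approximate crossing of adjacent densities between the two means.
                bounds.append(lo + int(np.argmin(np.abs(dens[a, lo:hi] - dens[a + 1, lo:hi]))))
            return bounds  # interval boundaries partitioning the dynamic range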

  17. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    NASA Astrophysics Data System (ADS)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and missed detections, which limits the final classification accuracy. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compare three feature-level fusion algorithms (Gram-Schmidt spectral sharpening, Principal Component Analysis transform, and Brovey transform) and then select the best fused image for the classification experiments. In the classification process, we chose four image classification algorithms (Minimum Distance, Mahalanobis Distance, Support Vector Machine, and ISODATA) for contrast experiments. We use overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria and analyse the four classification results of the fused image. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image has the best applicability to Support Vector Machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial for improving the accuracy and stability of remote sensing image classification.
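
    Of the three fusion algorithms compared, the Principal Component Analysis transform can be sketched briefly. The following is a minimal, hedged illustration (not the paper's implementation): PC1 of the co-registered multispectral bands is replaced by a statistics-matched high-resolution band before the transform is inverted; the array names and shapes are assumptions:

        # Minimal sketch of PCA-based fusion: substitute a matched
        # high-resolution band for PC1 of the multispectral stack.
        import numpy as np

        def pca_fusion(ms, pan):
            # ms: (bands, rows, cols) multispectral; pan: (rows, cols) high-res band
            n, r, c = ms.shape
            x = ms.reshape(n, -1).astype(np.float64)
            mu = x.mean(axis=1, keepdims=True)
            cov = np.cov(x - mu)
            vals, vecs = np.linalg.eigh(cov)
            vecs = vecs[:, ::-1]                    # descending eigenvalue order
            pcs = vecs.T @ (x - mu)
            p = pan.reshape(-1).astype(np.float64)
            # Match the pan band's statistics to PC1 before substitution.
            p = (p - p.mean()) / (p.std() + 1e-12) * pcs[0].std() + pcs[0].mean()
            pcs[0] = p
            return (vecs @ pcs + mu).reshape(n, r, c)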

  18. Elastic and acoustic wavefield decompositions and application to reverse time migrations

    NASA Astrophysics Data System (ADS)

    Wang, Wenlong

    P- and S-waves coexist in elastic wavefields, and separation between them is an essential step in elastic reverse-time migration (RTM). Unlike the traditional separation methods that use curl and divergence operators, which do not preserve the wavefield's vector-component information, we propose and compare two vector decomposition methods that preserve the same vector components that exist in the input elastic wavefield. The amplitude and phase information is automatically preserved, so no amplitude or phase corrections are required. The decoupled propagation method is extended from elastic to viscoelastic wavefields. To use the decomposed P and S vector wavefields to generate PP and PS images, we create a new 2D migration context for isotropic elastic RTM that includes P/S vector decomposition; the propagation directions of both incident and reflected P- and S-waves are calculated directly from the stress and particle-velocity definitions of the decomposed P- and S-wave Poynting vectors. An excitation-amplitude imaging condition that scales the receiver wavelet by the source vector magnitude then produces angle-dependent images of PP and PS reflection coefficients with the correct polarities, polarization, and amplitudes. This simplifies the process of obtaining PP and PS angle-domain common-image gathers (ADCIGs); it is less effort to generate ADCIGs from vector data than from scalar data. Besides P- and S-wave decomposition, separation of up- and down-going waves is also part of the processing of multi-component recorded data and propagating wavefields. A complex-trace-based up/down separation approach is extended from acoustic to elastic wavefields and combined with P- and S-wave decomposition by decoupled propagation. This eliminates the need for a Fourier transform over time, thereby significantly reducing the storage cost and improving computational efficiency. Wavefield decomposition is applied to both synthetic elastic VSP data and propagating wavefield snapshots. Poynting vectors obtained from the particle-velocity and stress fields after P/S and up/down decomposition are much more accurate than those obtained without decomposition. The up/down separation algorithm is also applicable in acoustic RTM, where both the (forward-time extrapolated) source and (reverse-time extrapolated) receiver wavefields are decomposed into up-going and down-going parts. Together with the crosscorrelation imaging condition, four images (down-up, up-down, up-up, and down-down) are generated, which facilitates analysis of the artifacts and imaging ability of the four images. Artifacts may exist in all the decomposed images, but their positions and types differ. The causes of artifacts in the different images are explained and illustrated with sketches and numerical tests.
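
    The Poynting-vector computation referenced above can be written compactly. A minimal sketch for a 2D elastic wavefield, assuming stress (txx, tzz, txz) and particle-velocity (vx, vz) snapshots are available as arrays (illustrative names), using the standard relation S = -tau.v:

        # Minimal sketch: elastic Poynting vector from stress and
        # particle-velocity snapshots, giving a local propagation direction;
        # the sign of sz distinguishes up-going from down-going energy.
        import numpy as np

        def poynting_2d(txx, tzz, txz, vx, vz):
            sx = -(txx * vx + txz * vz)      # horizontal energy-flux component
            sz = -(txz * vx + tzz * vz)      # vertical energy-flux component
            angle = np.arctan2(sz, sx)       # local propagation direction
            return sx, sz, angle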

  19. Non-contact Real-time heart rate measurements based on high speed circuit technology research

    NASA Astrophysics Data System (ADS)

    Wu, Jizhe; Liu, Xiaohua; Kong, Lingqin; Shi, Cong; Liu, Ming; Hui, Mei; Dong, Liquan; Zhao, Yuejin

    2015-08-01

    In recent years, the morbidity and mortality of cardiovascular and cerebrovascular diseases, which greatly threaten human health, have increased year by year. Heart rate is an important index for these diseases. To address this situation, this paper puts forward a non-contact heart rate measurement method with a simple structure and easy operation that is suitable for daily monitoring of large populations. In this method, imaging equipment records video of sensitive skin regions. Changes in reflected light intensity, caused by changes in blood volume, are captured in the average grayscale of the image. We record video of the subject's face, including the regions of interest (ROI), and use a high-speed processing circuit to save the video in AVI format to memory. After processing the whole video over a period of time, we plot the curve of each color channel with frame number as the horizontal axis and then obtain the heart rate from the curve. We use independent component analysis (ICA) to suppress noise from motion interference, realizing accurate extraction of the heart rate signal while the subject is moving. We designed an algorithm, based on the high-speed processing circuit, for face recognition and tracking that automatically obtains the face region. We apply grayscale averaging to the recognized image, obtain three RGB grayscale curves, extract a clearer pulse wave curve through independent component analysis, and then obtain the heart rate under motion. Finally, comparing our system with a fingertip pulse oximeter shows that the system achieves accurate measurement, with an error of less than 3 beats per minute.
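
    A minimal sketch of the ICA step described above (not the authors' circuit implementation), assuming per-frame ROI averages of the three color channels and an illustrative heart-rate band:

        # Minimal sketch: unmix per-frame RGB means with ICA and take the
        # spectral peak of the most periodic component as the heart rate.
        import numpy as np
        from sklearn.decomposition import FastICA

        def heart_rate_bpm(rgb_means, fps):
            # rgb_means: (frames, 3) per-frame ROI averages of R, G, B channels
            x = rgb_means - rgb_means.mean(axis=0)
            src = FastICA(n_components=3, random_state=0).fit_transform(x)
            freqs = np.fft.rfftfreq(len(src), d=1.0 / fps)
            band = (freqs > 0.75) & (freqs < 4.0)   # 45-240 beats per minute
            best_bpm, best_power = 0.0, -1.0
            for s in src.T:
                spec = np.abs(np.fft.rfft(s)) ** 2
                k = np.argmax(spec[band])
                if spec[band][k] > best_power:
                    best_power, best_bpm = spec[band][k], freqs[band][k] * 60.0
            return best_bpm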

  20. Time course of influence on the allocation of attentional resources caused by unconscious fearful faces.

    PubMed

    Jiang, Yunpeng; Wu, Xia; Saab, Rami; Xiao, Yi; Gao, Xiaorong

    2018-05-01

    Emotionally affective stimuli have priority in our visual processing even in the absence of conscious processing. However, the influence of unconscious emotional stimuli on our attentional resources remains unclear. Using the continuous flash suppression (CFS) paradigm, we concurrently recorded and analyzed visual event-related potential (ERP) components evoked by images of suppressed fearful and neutral faces, and the steady-state visual evoked potential (SSVEP) elicited by dynamic Mondrian pictures. Fearful faces, relative to neutral faces, elicited larger late ERP components at parietal electrodes, indicating emotional expression processing without consciousness. More importantly, the presentation of a suppressed fearful face in the CFS resulted in a significantly greater decrease in SSVEP amplitude, which started about 1-1.2 s after the face images first appeared. This suggests that the attentional bias occurs at about 1 s after the appearance of the fearful face and demonstrates that unconscious fearful faces may influence attentional resource allocation. Moreover, we propose a new method that can eliminate the interaction of ERPs and SSVEPs when they are recorded concurrently. Copyright © 2018 Elsevier Ltd. All rights reserved.
