Sample records for structure objects imaged

  1. Modeling Of Object- And Scene-Prototypes With Hierarchically Structured Classes

    NASA Astrophysics Data System (ADS)

    Ren, Z.; Jensch, P.; Ameling, W.

    1989-03-01

    The success of knowledge-based image analysis methodology and implementation tools depends largely on an appropriately and efficiently built model in which the domain-specific context information about, and the inherent structure of, the observed image scene has been encoded. For identifying an object in an application environment, a computer vision system needs to know, first, the description of the object to be found in an image or in an image sequence and, second, the corresponding relationships between object descriptions within the image sequence. This paper presents models of image objects and scenes by means of hierarchically structured classes. Using the topovisual formalism of graphs and higraphs, we are currently studying principally the relational aspect and data abstraction of the modeling in order to visualize the structural nature resident in image objects and scenes and to formalize their descriptions. The goal is to expose the structure of the image scene and the correspondence of image objects in the low-level image interpretation process. The object-based system design approach has been applied to build the model base. We utilize the object-oriented programming language C++ for designing, testing and implementing the abstracted entity classes and the operation structures which have been modeled topovisually. The reference images used for modeling prototypes of objects and scenes are from industrial environments as well as medical applications.

  2. Mobile and embedded fast high resolution image stitching for long length rectangular monochromatic objects with periodic structure

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Tropin, Daniil; Savelyev, Boris; Mamay, Igor; Nikolaev, Dmitry

    2018-04-01

    In this paper we describe a stitching protocol that makes it possible to obtain high-resolution images of long monochromatic objects with periodic structure. This protocol can be used for long documents or for human-made objects in satellite images of uninhabitable regions such as the Arctic. The length of such objects can be considerable, while modern camera sensors have limited resolution and cannot provide a good enough image of the whole object for further processing, e.g. use in an OCR system. The idea of the proposed method is to acquire a video stream containing the full object in high resolution and to use image stitching. We expect the scanned object to have straight boundaries and periodic structure, which allows us to introduce regularization into the stitching problem and to adapt the algorithm to the limited computational power of mobile and embedded CPUs. With the help of the detected boundaries and structure we estimate the homography between frames and use this information to reduce the complexity of stitching. We demonstrate our algorithm on a mobile device and show an image processing speed of 2 fps on a Samsung Exynos 5422 processor.
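
    The following minimal sketch illustrates the generic frame-to-frame step (keypoint matching and homography estimation) on which such a stitching pipeline rests; it is a plain OpenCV example, not the authors' boundary- and period-regularized algorithm, and the frame file names are hypothetical.

        # Minimal two-frame stitching sketch using OpenCV (generic approach, not the
        # authors' regularized algorithm); frame paths are hypothetical.
        import cv2
        import numpy as np

        def stitch_pair(img1, img2):
            orb = cv2.ORB_create(2000)                      # fast detector suited to mobile CPUs
            k1, d1 = orb.detectAndCompute(img1, None)
            k2, d2 = orb.detectAndCompute(img2, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
            src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # frame-to-frame homography
            h, w = img2.shape[:2]
            canvas = cv2.warpPerspective(img1, H, (w * 2, h))     # warp frame 1 into frame 2's plane
            canvas[0:h, 0:w] = img2
            return canvas

        pano = stitch_pair(cv2.imread("frame_000.png"), cv2.imread("frame_001.png"))
        cv2.imwrite("stitched.png", pano)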

  3. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  4. FAST TRACK COMMUNICATION Far-field x-ray phase contrast imaging has no detailed information on the object

    NASA Astrophysics Data System (ADS)

    Kohn, V. G.; Argunova, T. S.; Je, J. H.

    2010-11-01

    We show that x-ray phase contrast images of some objects with a small cross-section diameter d satisfy a condition for the far-field approximation, d ≪ r1, where r1 = (λz)^(1/2), λ is the x-ray wavelength, and z is the distance from the object to the detector. In this case the size of the image does not match the size of the object, contrary to the edge detection technique. Moreover, the structure of the central fringes of the image is universal, i.e. it is independent of the object cross-section structure. Therefore, these images have no detailed information on the object.
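
    As a quick worked example of the far-field criterion quoted above, the snippet below evaluates r1 = (λz)^(1/2) for illustrative values; the wavelength and distance are assumptions, not values taken from the paper.

        # Quick numeric check of the far-field criterion d << r1 = sqrt(lambda * z);
        # the wavelength and distance below are illustrative values only.
        import math

        wavelength = 1.0e-10      # 0.1 nm, roughly a 12.4 keV X-ray beam
        z = 1.0                   # object-to-detector distance in metres
        r1 = math.sqrt(wavelength * z)
        print(f"r1 = {r1 * 1e6:.1f} um")   # ~10 um: only objects much thinner than this are 'far-field'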

  5. Systems and methods for estimating the structure and motion of an object

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dani, Ashwin P; Dixon, Warren

    2015-11-03

    In one embodiment, the structure and motion of a stationary object are determined using two images and a linear velocity and linear acceleration of a camera. In another embodiment, the structure and motion of a stationary or moving object are determined using an image and linear and angular velocities of a camera.

  6. Method and apparatus for evaluating structural weakness in polymer matrix composites

    DOEpatents

    Wachter, E.A.; Fisher, W.G.

    1996-01-09

    A method and apparatus for evaluating structural weaknesses in polymer matrix composites is described. An object to be studied is illuminated with laser radiation and fluorescence emanating therefrom is collected and filtered. The fluorescence is then imaged and the image is studied to determine fluorescence intensity over the surface of the object being studied and the wavelength of maximum fluorescent intensity. Such images provide a map of the structural integrity of the part being studied and weaknesses, particularly weaknesses created by exposure of the object to heat, are readily visible in the image. 6 figs.

  7. Method and apparatus for evaluating structural weakness in polymer matrix composites

    DOEpatents

    Wachter, Eric A.; Fisher, Walter G.

    1996-01-01

    A method and apparatus for evaluating structural weaknesses in polymer matrix composites is described. An object to be studied is illuminated with laser radiation and fluorescence emanating therefrom is collected and filtered. The fluorescence is then imaged and the image is studied to determine fluorescence intensity over the surface of the object being studied and the wavelength of maximum fluorescent intensity. Such images provide a map of the structural integrity of the part being studied and weaknesses, particularly weaknesses created by exposure of the object to heat, are readily visible in the image.

  8. Compton imaging tomography technique for NDE of large nonuniform structures

    NASA Astrophysics Data System (ADS)

    Grubsky, Victor; Romanov, Volodymyr; Patton, Ned; Jannson, Tomasz

    2011-09-01

    In this paper we describe a new nondestructive evaluation (NDE) technique called Compton Imaging Tomography (CIT) for reconstructing the complete three-dimensional internal structure of an object, based on the registration of multiple two-dimensional Compton-scattered x-ray images of the object. CIT provides high resolution and sensitivity with virtually any material, including lightweight structures and organics, which normally pose problems in conventional x-ray computed tomography because of low contrast. The CIT technique requires only one-sided access to the object, has no limitation on the object's size, and can be applied to high-resolution real-time in situ NDE of large aircraft/spacecraft structures and components. Theoretical and experimental results will be presented.

  9. Salient Object Detection via Structured Matrix Decomposition.

    PubMed

    Peng, Houwen; Li, Bing; Ling, Haibin; Hu, Weiming; Xiong, Weihua; Maybank, Stephen J

    2016-05-04

    Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.

  10. 3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor

    PubMed Central

    Zhang, Haopeng; Wei, Quanmao; Jiang, Zhiguo

    2017-01-01

    In this paper, a novel 3D reconstruction framework is proposed to recover the 3D structural model of a space object from its multi-view images captured by a visible sensor. Given an image sequence, this framework first estimates the relative camera poses and recovers the depths of the surface points by the structure from motion (SFM) method, then the patch-based multi-view stereo (PMVS) algorithm is utilized to generate a dense 3D point cloud. To resolve the wrong matches arising from the symmetric structure and repeated textures of space objects, a new strategy is introduced, in which images are added to SFM in imaging order. Meanwhile, a refining process exploiting the structural prior knowledge that most sub-components of artificial space objects are composed of basic geometric shapes is proposed and applied to the recovered point cloud. The proposed reconstruction framework is tested on both simulated image datasets and real image datasets. Experimental results illustrate that the recovered point cloud models of space objects are accurate and have a complete coverage of the surface. Moreover, outliers and points with severe noise are effectively filtered out by the refinement, resulting in a distinct improvement of the structure and visualization of the recovered points. PMID:28737675
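
    For orientation, the sketch below shows one two-view step of a generic SfM pipeline (feature matching, essential-matrix estimation, relative pose recovery) using OpenCV; the intrinsic matrix and image paths are assumptions, and the paper's full incremental SFM/PMVS chain and refinement are not reproduced.

        # Illustrative two-view SfM step; K and the image paths are hypothetical.
        import cv2
        import numpy as np

        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

        img1 = cv2.imread("view_01.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("view_02.png", cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]   # Lowe's ratio test
        p1 = np.float32([k1[m.queryIdx].pt for m in good])
        p2 = np.float32([k2[m.trainIdx].pt for m in good])
        E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, p1, p2, K)       # relative rotation and translation direction
        print("relative rotation:\n", R, "\ntranslation direction:", t.ravel())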

  11. Underwater binocular imaging of aerial objects versus the position of eyes relative to the flat water surface.

    PubMed

    Barta, András; Horváth, Gábor

    2003-12-01

    The apparent position, size, and shape of aerial objects viewed binocularly from water change as a result of the refraction of light at the water surface. Earlier studies of the refraction-distorted structure of the aerial binocular visual field of underwater observers were restricted to either vertically or horizontally oriented eyes. Here we calculate the position of the binocular image point of an aerial object point viewed by two arbitrarily positioned underwater eyes when the water surface is flat. Assuming that binocular image fusion is performed by appropriate vergent eye movements to bring the object's image onto the foveae, the structure of the aerial binocular visual field is computed and visualized as a function of the relative positions of the eyes. We also analyze two erroneous representations of the underwater imaging of aerial objects that have occurred in the literature. It is demonstrated that the structure of the aerial binocular visual field of underwater observers distorted by refraction is more complex than has been thought previously.
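
    The underlying geometry is Snell's law at the flat surface; the short sketch below (standard refractive indices, flat-surface assumption) shows how aerial directions are compressed into the underwater Snell window, the monocular effect on which the paper's binocular analysis builds.

        # Snell's-law sketch for a flat water surface; refractive indices are standard values.
        import math

        N_AIR, N_WATER = 1.000, 1.333

        def apparent_angle_underwater(theta_air_deg):
            """Angle from the vertical at which an underwater eye sees a ray that
            arrives in air at theta_air_deg from the vertical."""
            theta_air = math.radians(theta_air_deg)
            theta_water = math.asin(N_AIR * math.sin(theta_air) / N_WATER)
            return math.degrees(theta_water)

        for a in (0, 30, 60, 89):   # even near-horizontal aerial rays stay inside ~48.6 degrees
            print(a, "->", round(apparent_angle_underwater(a), 2))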

  12. Digital 3D Microstructure Analysis of Concrete using X-Ray Micro Computed Tomography SkyScan 1173: A Preliminary Study

    NASA Astrophysics Data System (ADS)

    Latief, F. D. E.; Mohammad, I. H.; Rarasati, A. D.

    2017-11-01

    Digital imaging of a concrete sample using high-resolution tomographic imaging by means of X-ray micro computed tomography (μ-CT) has been conducted to assess the characteristics of the sample's structure. A standard procedure of image acquisition, reconstruction, and image processing using a particular scanning device, i.e., the Bruker SkyScan 1173 High Energy Micro-CT, is elaborated. A qualitative and a quantitative analysis were briefly performed on the sample to convey some basic ideas of the capability of the system and the bundled software package. Calculations of total VOI volume, object volume, percent object volume, total VOI surface, object surface, object surface/volume ratio, object surface density, structure thickness, structure separation, and total porosity were conducted and analysed. This paper serves as a brief description of how the device can produce the preferred image quality, as well as of the ability of the bundled software packages to help in performing qualitative and quantitative analysis.
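
    A minimal sketch of how a few of the listed indices (total VOI volume, object volume, percent object volume, total porosity) can be computed from a binarized VOI is given below; the file name, threshold and voxel size are assumptions, and the snippet does not reproduce the SkyScan/CTAn software itself.

        # Hedged sketch of basic volumetric indices from a binarized micro-CT VOI;
        # the input file, threshold and voxel size are placeholders.
        import numpy as np

        voxel_size_mm = 0.01                       # 10 um isotropic voxels (illustrative)
        volume = np.load("concrete_voi.npy")       # 3D array of grey values for the VOI
        binary = volume > 120                      # simple global threshold: True = solid

        voi_voxels = binary.size
        obj_voxels = int(binary.sum())
        voxel_volume = voxel_size_mm ** 3
        print("total VOI volume  [mm^3]:", voi_voxels * voxel_volume)
        print("object volume     [mm^3]:", obj_voxels * voxel_volume)
        print("percent object volume  %:", 100.0 * obj_voxels / voi_voxels)
        print("total porosity         %:", 100.0 * (1.0 - obj_voxels / voi_voxels))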

  13. Drawing skill is related to the efficiency of encoding object structure.

    PubMed

    Perdreau, Florian; Cavanagh, Patrick

    2014-01-01

    Accurate drawing calls on many skills beyond simple motor coordination. A good internal representation of the target object's structure is necessary to capture its proportion and shape in the drawing. Here, we assess two aspects of the perception of object structure and relate them to participants' drawing accuracy. First, we assessed drawing accuracy by computing the geometrical dissimilarity of their drawing to the target object. We then used two tasks to evaluate the efficiency of encoding object structure. First, to examine the rate of temporal encoding, we varied presentation duration of a possible versus impossible test object in the fovea using two different test sizes (8° and 28°). More skilled participants were faster at encoding an object's structure, but this difference was not affected by image size. A control experiment showed that participants skilled in drawing did not have a general advantage that might have explained their faster processing for object structure. Second, to measure the critical image size for accurate classification in the periphery, we varied image size with possible versus impossible object tests centered at two different eccentricities (3° and 8°). More skilled participants were able to categorise object structure at smaller sizes, and this advantage did not change with eccentricity. A control experiment showed that the result could not be attributed to differences in visual acuity, leaving attentional resolution as a possible explanation. Overall, we conclude that drawing accuracy is related to faster encoding of object structure and better access to crowded details.

  14. Drawing skill is related to the efficiency of encoding object structure

    PubMed Central

    Perdreau, Florian; Cavanagh, Patrick

    2014-01-01

    Accurate drawing calls on many skills beyond simple motor coordination. A good internal representation of the target object's structure is necessary to capture its proportion and shape in the drawing. Here, we assess two aspects of the perception of object structure and relate them to participants' drawing accuracy. First, we assessed drawing accuracy by computing the geometrical dissimilarity of their drawing to the target object. We then used two tasks to evaluate the efficiency of encoding object structure. First, to examine the rate of temporal encoding, we varied presentation duration of a possible versus impossible test object in the fovea using two different test sizes (8° and 28°). More skilled participants were faster at encoding an object's structure, but this difference was not affected by image size. A control experiment showed that participants skilled in drawing did not have a general advantage that might have explained their faster processing for object structure. Second, to measure the critical image size for accurate classification in the periphery, we varied image size with possible versus impossible object tests centered at two different eccentricities (3° and 8°). More skilled participants were able to categorise object structure at smaller sizes, and this advantage did not change with eccentricity. A control experiment showed that the result could not be attributed to differences in visual acuity, leaving attentional resolution as a possible explanation. Overall, we conclude that drawing accuracy is related to faster encoding of object structure and better access to crowded details. PMID:25469216

  15. Contour sensitive saliency and depth application in image retargeting

    NASA Astrophysics Data System (ADS)

    Lu, Hongju; Yue, Pengfei; Zhao, Yanhui; Liu, Rui; Fu, Yuanbin; Zheng, Yuanjie; Cui, Jia

    2018-04-01

    Image retargeting techniques must preserve important information and limit edge distortion while increasing or decreasing image size. The major existing content-aware methods perform well; however, two problems remain: slight distortion appears at object edges, and structural distortion occurs in non-salient areas. According to psychological theories, people evaluate image quality based on multi-level judgments and on comparison between different areas, covering both image content and image structure. This paper proposes a new criterion: structure preservation in the non-salient area. Observation and image analysis show that slight blur is generally present at the edges of objects. This blur feature is used to estimate the depth cue, named the blur depth descriptor, which can be used in the saliency computation to balance the image retargeting result. In order to keep the structure information in the non-salient area, a salient edge map is introduced into the Seam Carving process instead of a field-based saliency computation. The derivative saliency from the x- and y-directions avoids redundant energy seams around salient objects that cause structure distortion. Comparison experiments between classical approaches and ours demonstrate the feasibility of our algorithm.

  16. Extraction of object skeletons in multispectral imagery by the orthogonal regression fitting

    NASA Astrophysics Data System (ADS)

    Palenichka, Roman M.; Zaremba, Marek B.

    2003-03-01

    Accurate and automatic extraction of the skeletal shape of objects of interest from satellite images provides an efficient solution to such image analysis tasks as object detection, object identification, and shape description. The problem of skeletal shape extraction can be effectively solved in three basic steps: intensity clustering (i.e. segmentation) of objects, extraction of a structural graph of the object shape, and refinement of the structural graph by orthogonal regression fitting. The objects of interest are segmented from the background by a clustering transformation of primary features (spectral components) with respect to each pixel. The structural graph is composed of connected skeleton vertices and represents the topology of the skeleton. In the general case, it is a quite rough piecewise-linear representation of object skeletons. The positions of skeleton vertices on the image plane are adjusted by means of orthogonal regression fitting. It consists of changing the positions of existing vertices according to the minimum of the mean orthogonal distances and, eventually, adding new vertices in between if a given accuracy is not yet satisfied. Vertices of the initial piecewise-linear skeletons are extracted using a multi-scale image relevance function. The relevance function is an image local operator that has local maxima at the centers of the objects of interest.
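
    The core refinement operation is an orthogonal (total least squares) fit; the sketch below fits a line to synthetic 2D points by SVD and reports the mean orthogonal distance used as the accuracy criterion. It is a generic illustration, not the authors' multi-scale implementation.

        # Orthogonal-regression (total least squares) line fit to 2D points; the point set is synthetic.
        import numpy as np

        pts = np.array([[0.0, 0.1], [1.0, 1.05], [2.0, 1.9], [3.0, 3.1], [4.0, 4.0]])
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid)    # first right singular vector = line direction
        direction = vt[0]
        normal = vt[1]
        residuals = (pts - centroid) @ normal       # signed orthogonal distances to the fitted line
        print("line point:", centroid, "direction:", direction)
        print("mean orthogonal distance:", np.abs(residuals).mean())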

  17. A light and faster regional convolutional neural network for object detection in optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Ding, Peng; Zhang, Ye; Deng, Wei-Jian; Jia, Ping; Kuijper, Arjan

    2018-07-01

    Detection of objects from satellite optical remote sensing images is very important for many commercial and governmental applications. With the development of deep convolutional neural networks (deep CNNs), the field of object detection has seen tremendous advances. Currently, objects in satellite remote sensing images can be detected using deep CNNs. In general, optical remote sensing images contain many dense and small objects, and the use of the original Faster Regional CNN framework does not yield a suitably high precision. Therefore, after careful analysis we adopt dense convolutional networks, a multi-scale representation and various combinations of improvement schemes to enhance the structure of the base VGG16-Net for improving the precision. We propose an approach to reduce the test-time (detection time) and memory requirements. To validate the effectiveness of our approach, we perform experiments using satellite remote sensing image datasets of aircraft and automobiles. The results show that the improved network structure can detect objects in satellite optical remote sensing images more accurately and efficiently.

  18. Automatic structural matching of 3D image data

    NASA Astrophysics Data System (ADS)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as the aerospace photographs, and it turned out to be sufficiently robust and reliable for matching successfully the pictures of natural landscapes taken in differing seasons from differing aspect angles by differing sensors (the visible optical, IR, and SAR pictures, as well as the depth maps and geographical vector-type maps). At present (in the reported version), the algorithm is enhanced based on additional use of information on third spatial coordinates of observed points of object surfaces. Thus, it is now capable of matching the images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and the examples of image matching are presented.

  19. Fast and objective detection and analysis of structures in downhole images

    NASA Astrophysics Data System (ADS)

    Wedge, Daniel; Holden, Eun-Jung; Dentith, Mike; Spadaccini, Nick

    2017-09-01

    Downhole acoustic and optical televiewer images and formation microimager (FMI) logs are important datasets for structural and geotechnical analyses in the mineral and petroleum industries. Within these data, dipping planar structures appear as sinusoids, often in incomplete form and in abundance. Their detection is a labour-intensive and hence expensive task, and as such is a significant bottleneck in data processing, as companies may have hundreds of kilometres of logs to process each year. We present an image analysis system that harnesses the power of automated image analysis and provides an interactive user interface to support the analysis of televiewer images by users with different objectives. Our algorithm rapidly produces repeatable, objective results. We have embedded it in an interactive workflow to complement geologists' intuition and experience in interpreting data, to improve efficiency and to assist, rather than replace, the geologist. The main contributions include a new image quality assessment technique for highlighting image areas most suited to automated structure detection and for detecting boundaries of geological zones, and a novel sinusoid detection algorithm for detecting and selecting sinusoids with given confidence levels. Further tools are provided to perform rapid analysis and further detection of structures, e.g. detection limited to specific orientations.
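
    As an illustration of the sinusoid model behind planar-dip detection, the sketch below fits depth = a*sin(az) + b*cos(az) + c to synthetic picks along an unwrapped borehole image by linear least squares; it is not the paper's confidence-based detection algorithm.

        # Least-squares sinusoid fit to synthetic picks along an unwrapped borehole trace.
        import numpy as np

        az = np.linspace(0, 2 * np.pi, 36, endpoint=False)            # azimuth around the hole
        depth = 12.0 + 1.5 * np.sin(az - 0.6) + 0.05 * np.random.randn(az.size)
        A = np.column_stack([np.sin(az), np.cos(az), np.ones_like(az)])
        (a, b, c), *_ = np.linalg.lstsq(A, depth, rcond=None)
        amplitude = np.hypot(a, b)                                      # relates to dip magnitude
        phase = np.arctan2(b, a)                                        # relates to dip azimuth
        print(round(amplitude, 3), round(np.degrees(phase), 1), round(c, 2))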

  20. Information Object Definition–based Unified Modeling Language Representation of DICOM Structured Reporting

    PubMed Central

    Tirado-Ramos, Alfredo; Hu, Jingkun; Lee, K.P.

    2002-01-01

    Supplement 23 to DICOM (Digital Imaging and Communications for Medicine), Structured Reporting, is a specification that supports a semantically rich representation of image and waveform content, enabling experts to share image and related patient information. DICOM SR supports the representation of textual and coded data linked to images and waveforms. Nevertheless, the medical information technology community needs models that work as bridges between the DICOM relational model and open object-oriented technologies. The authors assert that representations of the DICOM Structured Reporting standard, using object-oriented modeling languages such as the Unified Modeling Language, can provide a high-level reference view of the semantically rich framework of DICOM and its complex structures. They have produced an object-oriented model to represent the DICOM SR standard and have derived XML-exchangeable representations of this model using World Wide Web Consortium specifications. They expect the model to benefit developers and system architects who are interested in developing applications that are compliant with the DICOM SR specification. PMID:11751804

  1. A neighboring structure reconstructed matching algorithm based on LARK features

    NASA Astrophysics Data System (ADS)

    Xue, Taobei; Han, Jing; Zhang, Yi; Bai, Lianfa

    2015-11-01

    Aimed at the low contrast ratio and high noise of infrared images, and at the randomness and ambient occlusion of their objects, this paper presents a neighboring structure reconstructed matching (NSRM) algorithm based on LARK features. The neighboring structure relationships of the local window are considered, based on a non-negative linear reconstruction method, to build a neighboring structure relationship matrix. The LARK feature matrix and the NSRM matrix are then processed separately to obtain two different similarity images. By fusing and analyzing the two similarity images, infrared objects are detected and marked by non-maximum suppression. The NSRM approach is extended to detect infrared objects with incompact structure. High performance is demonstrated on an infrared body dataset, indicating a lower false detection rate than conventional methods in complex natural scenes.

  2. Single-shot three-dimensional reconstruction based on structured light line pattern

    NASA Astrophysics Data System (ADS)

    Wang, ZhenZhou; Yang, YongMing

    2018-07-01

    Reconstruction of an object from a single shot is of great importance in many applications in which the object is moving or its shape is non-rigid and changes irregularly. In this paper, we propose a single-shot structured light 3D imaging technique that calculates the phase map from a distorted line pattern. The technique uses image processing to segment and cluster the projected structured light line pattern from a single captured image. The coordinates of the clustered lines are extracted to form a low-resolution phase matrix, which is then transformed to a full-resolution phase map by spline interpolation. The 3D shape of the object is computed from the full-resolution phase map and the 2D camera coordinates. Experimental results show that the proposed method is able to reconstruct the three-dimensional shape of the object robustly from a single image.
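
    A hedged sketch of the interpolation step described above is shown below: a low-resolution phase matrix is upsampled to full resolution with cubic-spline interpolation. The array sizes and the phase-to-height factor are placeholders, not values from the paper.

        # Sketch of phase-map upsampling by cubic-spline interpolation; numbers are placeholders.
        import numpy as np
        from scipy.ndimage import zoom

        phase_lowres = np.random.rand(24, 32) * 2 * np.pi    # stand-in for clustered-line phases
        full_h, full_w = 480, 640
        phase_full = zoom(phase_lowres,
                          (full_h / phase_lowres.shape[0], full_w / phase_lowres.shape[1]),
                          order=3)                            # cubic spline interpolation
        height = 0.05 * phase_full                            # placeholder phase-to-height conversion
        print(phase_full.shape, height.min(), height.max())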

  3. Object Recognition and Random Image Structure Evolution

    ERIC Educational Resources Information Center

    Sadr, Jvid; Sinha, Pawan

    2004-01-01

    We present a technique called Random Image Structure Evolution (RISE) for use in experimental investigations of high-level visual perception. Potential applications of RISE include the quantitative measurement of perceptual hysteresis and priming, the study of the neural substrates of object perception, and the assessment and detection of subtle…

  4. System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    NASA Technical Reports Server (NTRS)

    Guan, Chun (Inventor); Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor)

    2008-01-01

    A technique, associated system and program code, for retrieving depth information about at least one surface of an object. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns, at the object; capturing an image reflected from the surface; and recovering pattern information from the reflected image, for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite, by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern, is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping to be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interface with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.

  5. Method and apparatus for molecular imaging using x-rays at resonance wavelengths

    DOEpatents

    Chapline, G.F. Jr.

    Holographic x-ray images are produced representing the molecular structure of a microscopic object, such as a living cell, by directing a beam of coherent x-rays upon the object to produce scattering of the x-rays by the object, producing interference on a recording medium between the scattered x-rays from the object and unscattered coherent x-rays and thereby producing holograms on the recording surface, and establishing the wavelength of the coherent x-rays to correspond with a molecular resonance of a constituent of such object and thereby greatly improving the contrast, sensitivity and resolution of the holograms as representations of molecular structures involving such constituent. For example, the coherent x-rays may be adjusted to the molecular resonant absorption line of nitrogen at about 401.3 eV to produce holographic images featuring molecular structures involving nitrogen.

  6. Method and apparatus for molecular imaging using X-rays at resonance wavelengths

    DOEpatents

    Chapline, Jr., George F.

    1985-01-01

    Holographic X-ray images are produced representing the molecular structure of a microscopic object, such as a living cell, by directing a beam of coherent X-rays upon the object to produce scattering of the X-rays by the object, producing interference on a recording medium between the scattered X-rays from the object and unscattered coherent X-rays and thereby producing holograms on the recording surface, and establishing the wavelength of the coherent X-rays to correspond with a molecular resonance of a constituent of such object and thereby greatly improving the contrast, sensitivity and resolution of the holograms as representations of molecular structures involving such constituent. For example, the coherent X-rays may be adjusted to the molecular resonant absorption line of nitrogen at about 401.3 eV to produce holographic images featuring molecular structures involving nitrogen.
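
    For reference, the quoted nitrogen resonance energy converts to wavelength via λ = hc/E; the short computation below (hc ≈ 1239.84 eV·nm) gives roughly 3.1 nm.

        # Energy-to-wavelength conversion for the nitrogen resonance quoted above.
        HC_EV_NM = 1239.84              # h*c in eV*nm
        E_eV = 401.3
        print(f"lambda = {HC_EV_NM / E_eV:.2f} nm")   # ~3.09 nm soft X-rays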

  7. Analysis of objects in binary images. M.S. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Leonard, Desiree M.

    1991-01-01

    Digital image processing techniques are typically used to produce improved digital images through the application of successive enhancement techniques to a given image or to generate quantitative data about the objects within that image. In support of and to assist researchers in a wide range of disciplines, e.g., interferometry, heavy rain effects on aerodynamics, and structure recognition research, it is often desirable to count objects in an image and compute their geometric properties. Therefore, an image analysis application package, focusing on a subset of image analysis techniques used for object recognition in binary images, was developed. This report describes the techniques and algorithms utilized in three main phases of the application and are categorized as: image segmentation, object recognition, and quantitative analysis. Appendices provide supplemental formulas for the algorithms employed as well as examples and results from the various image segmentation techniques and the object recognition algorithm implemented.

  8. Looking into the water with oblique head tilting: revision of the aerial binocular imaging of underwater objects.

    PubMed

    Horváth, Gábor; Buchta, Krisztián; Varjú, Dezsö

    2003-06-01

    It is a well-known phenomenon that when we look into the water with two aerial eyes, both the apparent position and the apparent shape of underwater objects are different from the real ones because of refraction at the water surface. Earlier studies of the refraction-distorted structure of the underwater binocular visual field of aerial observers were restricted to either vertically or horizontally oriented eyes. We investigate a generalized version of this problem: We calculate the position of the binocular image point of an underwater object point viewed by two arbitrarily positioned aerial eyes, including oblique orientations of the eyes relative to the flat water surface. Assuming that binocular image fusion is performed by appropriate vergent eye movements to bring the object's image onto the foveas, the structure of the underwater binocular visual field is computed and visualized in different ways as a function of the relative positions of the eyes. We show that a revision of certain earlier treatments of the aerial imaging of underwater objects is necessary. We analyze and correct some widespread erroneous or incomplete representations of this classical geometric optical problem that occur in different textbooks. Improving the theory of aerial binocular imaging of underwater objects, we demonstrate that the structure of the underwater binocular visual field of aerial observers distorted by refraction is more complex than has been thought previously.

  9. Information object definition-based unified modeling language representation of DICOM structured reporting: a case study of transcoding DICOM to XML.

    PubMed

    Tirado-Ramos, Alfredo; Hu, Jingkun; Lee, K P

    2002-01-01

    Supplement 23 to DICOM (Digital Imaging and Communications for Medicine), Structured Reporting, is a specification that supports a semantically rich representation of image and waveform content, enabling experts to share image and related patient information. DICOM SR supports the representation of textual and coded data linked to images and waveforms. Nevertheless, the medical information technology community needs models that work as bridges between the DICOM relational model and open object-oriented technologies. The authors assert that representations of the DICOM Structured Reporting standard, using object-oriented modeling languages such as the Unified Modeling Language, can provide a high-level reference view of the semantically rich framework of DICOM and its complex structures. They have produced an object-oriented model to represent the DICOM SR standard and have derived XML-exchangeable representations of this model using World Wide Web Consortium specifications. They expect the model to benefit developers and system architects who are interested in developing applications that are compliant with the DICOM SR specification.

  10. Orientation estimation of anatomical structures in medical images for object recognition

    NASA Astrophysics Data System (ADS)

    Bağci, Ulaş; Udupa, Jayaram K.; Chen, Xinjian

    2011-03-01

    Recognition of anatomical structures is an important step in model-based medical image segmentation. It provides pose estimation of objects and information about roughly where the objects are in the image, distinguishing them from other object-like entities. In [1], we presented a general method of model-based multi-object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated grey images. The goal was to place the model, in a single shot, close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. Unlike position and scale parameters, we observe that orientation parameters require more attention when estimating the pose of the model, as even small differences in orientation parameters can lead to inappropriate recognition. Motivated by the non-Euclidean nature of the pose information, we propose in this paper the use of non-Euclidean metrics to estimate the orientation of anatomical structures for more accurate recognition and segmentation. We statistically analyze and evaluate the following metrics for orientation estimation: Euclidean, Log-Euclidean, Root-Euclidean, Procrustes Size-and-Shape, and mean Hermitian metrics. The results show that the mean Hermitian and Cholesky decomposition metrics provide more accurate orientation estimates than the other Euclidean and non-Euclidean metrics.
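
    As a small illustration of one of the non-Euclidean metrics named above, the snippet below computes a Log-Euclidean distance between two synthetic symmetric positive-definite matrices; the matrices are examples only, and the paper's statistical evaluation is not reproduced.

        # Log-Euclidean distance between two synthetic SPD "orientation" matrices.
        import numpy as np
        from scipy.linalg import logm

        A = np.array([[2.0, 0.3], [0.3, 1.0]])
        B = np.array([[1.5, 0.1], [0.1, 1.2]])
        d_log_euclidean = np.linalg.norm(logm(A) - logm(B), "fro")
        print(d_log_euclidean)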

  11. Visualization and manipulating the image of a formal data structure (FDS)-based database

    NASA Astrophysics Data System (ADS)

    Verdiesen, Franc; de Hoop, Sylvia; Molenaar, Martien

    1994-08-01

    A vector map is a terrain representation with a vector-structured geometry. Molenaar formulated an object-oriented formal data structure (FDS) for 3D single-valued vector maps. This FDS is implemented in a database (Oracle). In this study we describe a methodology for visualizing an FDS-based database and manipulating the image. A data set retrieved by querying the database is converted into an import file for a drawing application. An objective of this study is that an end-user can alter and add terrain objects in the image. The drawing application creates an export file that is compared with the import file. Differences between these files result in updating the database, which involves checks on consistency. In this study Autocad is used for visualizing and manipulating the image of the data set. A computer program has been written for the data exchange and conversion between Oracle and Autocad. The data structure of the FDS is compared to the data structure of Autocad, and the FDS data are converted into an Autocad structure equivalent to the FDS.

  12. Reflective type objective based spectral-domain phase-sensitive optical coherence tomography for high-sensitive structural and functional imaging of cochlear microstructures through intact bone of an excised guinea pig cochlea

    NASA Astrophysics Data System (ADS)

    Subhash, Hrebesh M.; Wang, Ruikang K.; Chen, Fangyi; Nuttall, Alfred L.

    2013-03-01

    Most optical coherence tomography (OCT) systems for high-resolution imaging of biological specimens are based on refractive microscope objectives, which are optimized for a specific wavelength of the optical source. In this study, we present the feasibility of using a commercially available reflective objective for highly sensitive, high-resolution structural and functional imaging of the cochlear microstructures of an excised guinea pig cochlea through the intact temporal bone. Unlike conventional refractive microscope objectives, reflective objectives are free from chromatic aberrations due to their all-reflecting nature and can support a broad spectral band with very high light collection efficiency.

  13. Fringe image processing based on structured light series

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Li, Hongyan

    2009-11-01

    Code analysis of the fringe image plays a vital role in the data acquisition of structured light systems, affecting the precision, computational speed and reliability of the measurement process. Based on the self-normalizing characteristic, a fringe image processing method using a structured light series is proposed. In this method, a series of projected patterns is used to detect the fringe order of the image pixels. The structured light system geometry is presented; it consists of a white light projector and a digital camera, the former projecting sinusoidal fringe patterns onto the object and the latter acquiring the fringe patterns that are deformed by the object's shape. Binary images with distinct white and black strips can then be obtained, and the ability to resist image noise is improved greatly. The proposed method can be implemented easily and applied to profile measurement based on a special binary code in a wide field.

  14. Method of synthesis of abstract images with high self-similarity

    NASA Astrophysics Data System (ADS)

    Matveev, Nikolay V.; Shcheglov, Sergey A.; Romanova, Galina E.; Koneva, Tatiana A.

    2017-06-01

    Abstract images with high self-similarity could be used for drug-free stress therapy. This is based on the fact that a complex visual environment has a high affective appraisal. To create such an image we can use a setup based on three laser sources of small power and different colors (red, green, blue); the image is the pattern resulting from reflection and refraction by an object of complicated form placed into the laser ray paths. Images obtained experimentally in this way showed a good therapeutic effect. However, finding and choosing the object that gives the needed image structure is very difficult and requires many trials. The goal of this work is to develop a method and procedure for finding the object form which, if placed into the ray paths, can provide the necessary structure of the image. In effect, the task means obtaining the necessary irradiance distribution on a given surface. Traditionally such problems are solved using non-imaging optics methods. In the given case this task is very complicated because of the complicated structure of the illuminance distribution and its high non-linearity. An alternative way is to use the projected image of a mask with a given structure. We consider both ways and discuss how they can help to speed up the synthesis procedure for a given abstract image of high self-similarity for drug-free therapy setups.

  15. Imaging of sub-wavelength structures radiating coherently near microspheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maslov, Alexey V., E-mail: avmaslov@yandex.ru; Astratov, Vasily N., E-mail: astratov@uncc.edu

    2016-02-01

    Using a two-dimensional model, we show that the optical images of a sub-wavelength object depend strongly on the excitation of its electromagnetic modes. There exist modes that enable the resolution of the object features smaller than the classical diffraction limit, in particular, due to the destructive interference. We propose to use such modes for super-resolution of resonant structures such as coupled cavities, metal dimers, or bowties. A dielectric microsphere in contact with the object forms its magnified image in a wide range of the virtual image plane positions. It is also suggested that the resonances may significantly affect the resolution quantification in recent experimental studies.

  16. Image Analysis and Modeling

    DTIC Science & Technology

    1976-03-01

    This report summarizes the results of the research program on Image Analysis and Modeling supported by the Defense Advanced Research Projects Agency...The objective is to achieve a better understanding of image structure and to use this knowledge to develop improved image models for use in image ... analysis and processing tasks such as information extraction, image enhancement and restoration, and coding. The ultimate objective of this research is

  17. Edge detection based on computational ghost imaging with structured illuminations

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng; Xiang, Dong; Liu, Xuemei; Zhou, Xin; Bing, Pibin

    2018-03-01

    Edge detection is one of the most important tools to recognize the features of an object. In this paper, we propose an optical edge detection method based on computational ghost imaging (CGI) with structured illuminations which are generated by an interference system. The structured intensity patterns are designed to make the edge of an object be directly imaged from detected data in CGI. This edge detection method can extract the boundaries for both binary and grayscale objects in any direction at one time. We also numerically test the influence of distance deviations in the interference system on edge extraction, i.e., the tolerance of the optical edge detection system to distance deviation. Hopefully, it may provide a guideline for scholars to build an experimental system.
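
    The sketch below shows the basic CGI correlation that such an edge-detection scheme builds on: illumination patterns are correlated with single-pixel (bucket) measurements to recover the object. The structured interference patterns of the paper are replaced here by random ones for brevity.

        # Minimal computational ghost imaging reconstruction with random patterns (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)
        obj = np.zeros((32, 32)); obj[8:24, 8:24] = 1.0          # simple binary test object
        n_patterns = 4000
        patterns = rng.random((n_patterns, 32, 32))
        bucket = (patterns * obj).sum(axis=(1, 2))                # single-pixel (bucket) measurements

        # second-order correlation <I*B> - <I><B> recovers the object
        ghost = (patterns * bucket[:, None, None]).mean(axis=0) - patterns.mean(axis=0) * bucket.mean()
        print("correlation with ground truth:", np.corrcoef(ghost.ravel(), obj.ravel())[0, 1])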

  18. Multi-object segmentation framework using deformable models for medical imaging analysis.

    PubMed

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

    Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures on medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation, combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing a suitable combination to be selected in different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed framework has a wide range of applications, especially in the presence of adjacent structures of interest or intra-structure inhomogeneities, and gives excellent quantitative results.

  19. Objective Quality Assessment for Color-to-Gray Image Conversion.

    PubMed

    Ma, Kede; Zhao, Tiesong; Zeng, Kai; Wang, Zhou

    2015-12-01

    Color-to-gray (C2G) image conversion is the process of transforming a color image into a grayscale one. Despite its wide usage in real-world applications, little work has been dedicated to compare the performance of C2G conversion algorithms. Subjective evaluation is reliable but is also inconvenient and time consuming. Here, we make one of the first attempts to develop an objective quality model that automatically predicts the perceived quality of C2G converted images. Inspired by the philosophy of the structural similarity index, we propose a C2G structural similarity (C2G-SSIM) index, which evaluates the luminance, contrast, and structure similarities between the reference color image and the C2G converted image. The three components are then combined depending on image type to yield an overall quality measure. Experimental results show that the proposed C2G-SSIM index has close agreement with subjective rankings and significantly outperforms existing objective quality metrics for C2G conversion. To explore the potentials of C2G-SSIM, we further demonstrate its use in two applications: 1) automatic parameter tuning for C2G conversion algorithms and 2) adaptive fusion of C2G converted images.
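
    For intuition, the snippet below computes global SSIM-style luminance, contrast and structure terms between a reference luminance image and a C2G result; it uses global statistics and synthetic data, not the local windows, constants or type-dependent pooling of the actual C2G-SSIM index.

        # SSIM-style luminance/contrast/structure terms (global, illustrative; not C2G-SSIM itself).
        import numpy as np

        def ssim_terms(x, y, c1=1e-4, c2=9e-4):
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()
            cov = ((x - mx) * (y - my)).mean()
            luminance = (2 * mx * my + c1) / (mx**2 + my**2 + c1)
            contrast  = (2 * np.sqrt(vx * vy) + c2) / (vx + vy + c2)
            structure = (cov + c2 / 2) / (np.sqrt(vx * vy) + c2 / 2)
            return luminance, contrast, structure

        ref_luma = np.random.rand(64, 64)        # stand-in for the reference image's luminance
        c2g      = np.clip(ref_luma + 0.05 * np.random.randn(64, 64), 0, 1)
        print(ssim_terms(ref_luma, c2g))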

  20. A novel 3D shape descriptor for automatic retrieval of anatomical structures from medical images

    NASA Astrophysics Data System (ADS)

    Nunes, Fátima L. S.; Bergamasco, Leila C. C.; Delmondes, Pedro H.; Valverde, Miguel A. G.; Jackowski, Marcel P.

    2017-03-01

    Content-based image retrieval (CBIR) aims at retrieving from a database objects that are similar to an object provided by a query, by taking into consideration a set of extracted features. While CBIR has been widely applied in the two-dimensional image domain, the retrieval of 3D objects from medical image datasets using CBIR remains to be explored. In this context, the development of descriptors that can capture information specific to organs or structures is desirable. In this work, we focus on the retrieval of two anatomical structures commonly imaged by Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) techniques, the left ventricle of the heart and blood vessels. Towards this aim, we developed the Area-Distance Local Descriptor (ADLD), a novel 3D local shape descriptor that employs mesh geometry information, namely facet area and distance from centroid to surface, to identify shape changes. Because ADLD only considers surface meshes extracted from volumetric medical images, it substantially diminishes the amount of data to be analyzed. A 90% precision rate was obtained when retrieving both convex (left ventricle) and non-convex structures (blood vessels), allowing for detection of abnormalities associated with changes in shape. Thus, ADLD has the potential to aid in the diagnosis of a wide range of vascular and cardiac diseases.

  1. A laboratory system for element specific hyperspectral X-ray imaging.

    PubMed

    Jacques, Simon D M; Egan, Christopher K; Wilson, Matthew D; Veale, Matthew C; Seller, Paul; Cernik, Robert J

    2013-02-21

    X-ray tomography is a ubiquitous tool used, for example, in medical diagnosis, explosives detection, and checks of the structural integrity of complex engineered components. Conventional tomographic images are formed by measuring many transmitted X-rays and later mathematically reconstructing the object; however, the structural and chemical information carried by scattered X-rays of different wavelengths is not utilised in any way. We show how a very simple, laboratory-based, high-energy X-ray system can capture these scattered X-rays to deliver 3D images with structural or chemical information in each voxel. This type of imaging can be used to separate and identify chemical species in bulk objects with no special sample preparation. We demonstrate the capability of hyperspectral imaging by examining an electronic device, where we can clearly distinguish the atomic composition of the circuit board components in both fluorescence and transmission geometries. We are not only able to obtain attenuation contrast but also to image chemical variations in the object, potentially opening up a very wide range of applications from security to medical diagnostics.

  2. Multi-energy method of digital radiography for imaging of biological objects

    NASA Astrophysics Data System (ADS)

    Ryzhikov, V. D.; Naydenov, S. V.; Opolonin, O. D.; Volkov, V. G.; Smith, C. F.

    2016-03-01

    This work has been dedicated to the search for a new possibility of using multi-energy digital radiography (MER) for medical applications. Our work has included both theoretical and experimental investigations of 2-energy (2E) and 3-energy (3E) radiography for imaging the structure of biological objects. Using special simulation methods and digital analysis based on the X-ray interaction energy dependence for each element of importance to medical applications in the X-ray energy range up to 150 keV, we have implemented a quasi-linear approximation for the energy dependence of the X-ray linear mass absorption coefficient μm(E) that permits us to determine the intrinsic structure of biological objects. Our measurements utilize multiple X-ray tube voltages (50, 100, and 150 kV) with Al and Cu filters of different thicknesses to achieve 3-energy X-ray examination of objects. By doing so, we are able to achieve significantly improved imaging quality of the structure of the subject biological objects. To reconstruct and visualize the final images, we use both two-dimensional (2D) and three-dimensional (3D) palettes of identification. The result is a 2E and/or 3E representation of the object with color coding of each pixel according to the data outputs. Following the experimental measurements and post-processing, we produce a 3D image of the biological object - in the case of our trials, fragments or parts of chicken and turkey.
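
    A toy two-energy decomposition in the Beer-Lambert model is sketched below: log-attenuations at two tube settings are solved for two basis-material thicknesses. The coefficients and intensities are placeholders, not the calibrated quasi-linear model of the paper.

        # Two-energy Beer-Lambert sketch; all numbers are illustrative placeholders.
        import numpy as np

        # columns: material 1, material 2; rows: low-energy, high-energy effective mu (1/cm)
        M = np.array([[0.60, 1.80],
                      [0.25, 0.45]])
        I0 = np.array([1.0, 1.0])
        I  = np.array([0.30, 0.55])                # measured transmitted intensities (illustrative)
        log_att = np.log(I0 / I)                   # Beer-Lambert: ln(I0/I) = sum_i mu_i * t_i
        thicknesses = np.linalg.solve(M, log_att)  # per-material path lengths in cm
        print(thicknesses)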

  3. Integrated VR platform for 3D and image-based models: a step toward interactive image-based virtual environments

    NASA Astrophysics Data System (ADS)

    Yoon, Jayoung; Kim, Gerard J.

    2003-04-01

    Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representations, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al., these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, a "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing distance and image space criteria are used; however, the switching between the image and the 3D model occurs at a distance from the user where the user starts to perceive the object's internal depth. Also, during interaction, regardless of the viewing distance, a 3D representation is used if it exists. Before rendering, objects are conservatively culled from the view frustum using the representation with the largest volume. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.

  4. Enhancement of the visibility of objects located below the surface of a scattering medium

    DOEpatents

    Demos, Stavros

    2013-11-19

    Techniques are provided for enhancing the visibility of objects located below the surface of a scattering medium such as tissue, water and smoke. Examples of such an object include a vein located below the skin, a mine located below the surface of the sea and a human in a location covered by smoke. The enhancement of the image contrast of a subsurface structure is based on the utilization of structured illumination. In the specific application of this invention to image the veins in the arm or other part of the body, the issue of how to control the intensity of the image of a metal object (such as a needle) that must be inserted into the vein is also addressed.

  5. Objective quality assessment for multiexposure multifocus image fusion.

    PubMed

    Hassen, Rania; Wang, Zhou; Salama, Magdy M A

    2015-09-01

    There has been a growing interest in image fusion technologies, but how to objectively evaluate the quality of fused images has not been fully understood. Here, we propose a method for objective quality assessment of multiexposure multifocus image fusion based on the evaluation of three key factors of fused image quality: 1) contrast preservation; 2) sharpness; and 3) structure preservation. Subjective experiments are conducted to create an image fusion database, based on which, performance evaluation shows that the proposed fusion quality index correlates well with subjective scores, and gives a significant improvement over the existing fusion quality measures.
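    As a loose illustration of the three factors named above (not the authors' actual index), one can compute a contrast-preservation term, a sharpness term, and a structure-preservation term from the fused image and its source images; the specific formulas below are simplified placeholders.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_std(img, k=7):
    """Local standard deviation over k x k windows (valid region only)."""
    return sliding_window_view(img.astype(float), (k, k)).std(axis=(-1, -2))

def fusion_quality(fused, sources, k=7):
    """Toy three-factor score: contrast preservation, sharpness, structure."""
    # 1) Contrast preservation: how much of the best source contrast survives.
    src_std = np.max([local_std(s, k) for s in sources], axis=0)
    contrast = np.minimum(local_std(fused, k), src_std).sum() / (src_std.sum() + 1e-9)
    # 2) Sharpness: mean gradient magnitude of the fused image.
    gy, gx = np.gradient(fused.astype(float))
    sharpness = np.hypot(gx, gy).mean()
    # 3) Structure preservation: best global correlation with any source image.
    structure = max(np.corrcoef(fused.ravel(), s.ravel())[0, 1] for s in sources)
    return contrast, sharpness, structure
```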

  6. ARCADIA: a system for the integration of angiocardiographic data and images by an object-oriented DBMS.

    PubMed

    Pinciroli, F; Combi, C; Pozzi, G

    1995-02-01

    Use of data base techniques to store medical records has been going on for more than 40 years. Some aspects still remain unresolved, e.g., the management of textual data and image data within a single system. Object-orientation techniques applied to a database management system (DBMS) allow the definition of suitable data structures (e.g., to store digital images): some facilities allow the use of predefined structures when defining new ones. Currently available object-oriented DBMS, however, still need improvements both in the schema update and in the query facilities. This paper describes a prototype of a medical record that includes some multimedia features, managing both textual and image data. The prototype here described considers data from the medical records of patients subjected to percutaneous transluminal coronary artery angioplasty. We developed it on a Sun workstation with a Unix operating system and ONTOS as an object-oriented DBMS.

  7. Depth-resolved ballistic imaging in a low-depth-of-field optical Kerr gated imaging system

    NASA Astrophysics Data System (ADS)

    Zheng, Yipeng; Tan, Wenjiang; Si, Jinhai; Ren, YuHu; Xu, Shichao; Tong, Junyi; Hou, Xun

    2016-09-01

    We demonstrate depth-resolved imaging in a ballistic imaging system, in which a heterodyned femtosecond optical Kerr gate is introduced to extract useful imaging photons for detecting an object hidden in turbid media and a compound lens is proposed to ensure both the depth-resolved imaging capability and the long working distance. Two objects of about 15-μm widths hidden in a polystyrene-sphere suspension have been successfully imaged with approximately 600-μm depth resolution. Modulation-transfer-function curves with the object in and away from the object plane have also been measured to confirm the depth-resolved imaging capability of the low-depth-of-field (low-DOF) ballistic imaging system. This imaging approach shows potential for application in research of the internal structure of highly scattering fuel spray.

  8. Depth-resolved ballistic imaging in a low-depth-of-field optical Kerr gated imaging system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yipeng; Tan, Wenjiang, E-mail: tanwenjiang@mail.xjtu.edu.cn; Si, Jinhai

    2016-09-07

    We demonstrate depth-resolved imaging in a ballistic imaging system, in which a heterodyned femtosecond optical Kerr gate is introduced to extract useful imaging photons for detecting an object hidden in turbid media and a compound lens is proposed to ensure both the depth-resolved imaging capability and the long working distance. Two objects of about 15-μm widths hidden in a polystyrene-sphere suspension have been successfully imaged with approximately 600-μm depth resolution. Modulation-transfer-function curves with the object in and away from the object plane have also been measured to confirm the depth-resolved imaging capability of the low-depth-of-field (low-DOF) ballistic imaging system. This imaging approach shows potential for application in research of the internal structure of highly scattering fuel spray.

  9. System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    NASA Technical Reports Server (NTRS)

    Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor); Guan, Chun (Inventor)

    2010-01-01

    A technique, associated system and program code, for retrieving depth information about at least one surface of an object, such as an anatomical feature. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns, at the anatomical feature; capturing an image reflected from the surface; and recovering pattern information from the reflected image, for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite, by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern, is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping to be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interface with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.
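    A minimal sketch of the modulation/demodulation idea, assuming sinusoidal carriers along one image axis as the distinct signal waveforms (the patent does not prescribe these particular carriers or parameters): each pattern is multiplied by its own carrier, the products are summed into one composite projection, and each pattern is recovered by synchronous demodulation with the matching carrier followed by a low-pass filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

H, W = 256, 256
y = np.arange(H)[:, None]
x = np.arange(W)[None, :]

# Two hypothetical low-frequency structured-light patterns (vertical stripes).
patterns = [0.5 + 0.5 * np.cos(2 * np.pi * f * x / W) for f in (3, 5)]

# Each pattern is modulated by its own carrier along y; the modulated patterns
# are summed into a single composite projection image.
carriers = [np.cos(2 * np.pi * fc * y / H) for fc in (40, 70)]
composite = sum(p * c for p, c in zip(patterns, carriers))

# Demodulation: multiply by the matching carrier, then low-pass filter along y
# (synchronous AM demodulation; the factor 2 restores the pattern amplitude).
recovered = [gaussian_filter1d(2.0 * composite * c, sigma=6, axis=0)
             for c in carriers]
```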

  10. a Low-Cost and Portable System for 3d Reconstruction of Texture-Less Objects

    NASA Astrophysics Data System (ADS)

    Hosseininaveh, A.; Yazdan, R.; Karami, A.; Moradi, M.; Ghorbani, F.

    2015-12-01

    The optical methods for 3D modelling of objects can be classified into two categories: image-based and range-based methods. Structure from Motion is one of the image-based methods implemented in commercial software. In this paper, a low-cost and portable system for 3D modelling of texture-less objects is proposed. This system includes a rotating table designed and developed by using a stepper motor and a very light rotation plate. The system also has eight laser light sources with very dense and strong beams which provide a relatively appropriate pattern on texture-less objects. In this system, images are taken semi-automatically by a camera at each step of the stepper motor. The images can be used in Structure from Motion procedures implemented in Agisoft software. To evaluate the performance of the system, two dark objects were used. Reference point clouds of these objects were first obtained by spraying a light powder on the objects and scanning them with a GOM laser scanner. Then these objects were placed on the proposed turntable. Several convergent images were taken from each object while the laser light sources were projecting the pattern on the objects. Afterward, the images were imported into VisualSFM, a fully automatic software package, to generate an accurate and complete point cloud. Finally, the obtained point clouds were compared to the point clouds generated by the GOM laser scanner. The results showed the ability of the proposed system to produce a complete 3D model of texture-less objects.

  11. Compton imaging tomography for nondestructive evaluation of spacecraft thermal protection systems

    NASA Astrophysics Data System (ADS)

    Romanov, Volodymyr; Burke, Eric; Grubsky, Victor

    2017-02-01

    Here we present new results of in situ nondestructive evaluation (NDE) of spacecraft thermal protection system materials obtained with POC-developed NDE tool based on a novel Compton Imaging Tomography (CIT) technique recently pioneered and patented by Physical Optics Corporation (POC). In general, CIT provides high-resolution three-dimensional Compton scattered X-ray imaging of the internal structure of evaluated objects, using a set of acquired two-dimensional Compton scattered X-ray images of consecutive cross sections of these objects. Unlike conventional computed tomography, CIT requires only one-sided access to objects, has no limitation on the dimensions and geometry of the objects, and can be applied to large multilayer non-uniform objects with complicated geometries. Also, CIT does not require any contact with the objects being imaged during its application.

  12. Regional shape-based feature space for segmenting biomedical images using neural networks

    NASA Astrophysics Data System (ADS)

    Sundaramoorthy, Gopal; Hoford, John D.; Hoffman, Eric A.

    1993-07-01

    In biomedical images, structures of interest, particularly soft-tissue structures such as the heart, airways, and bronchial and arterial trees, often have grey-scale and textural characteristics similar to other structures in the image, making it difficult to segment them using only grey-scale and texture information. However, these objects can be visually recognized by their unique shapes and sizes. In this paper we discuss what we believe to be a novel, simple scheme for extracting features based on regional shapes. To test the effectiveness of these features for image segmentation (classification), we use an artificial neural network and a statistical cluster analysis technique. The proposed shape-based feature extraction algorithm computes regional shape vectors (RSVs) for all pixels that meet a certain threshold criterion. The distance from each such pixel to a boundary is computed in 8 directions (or in 26 directions for a 3-D image). Together, these 8 (or 26) values represent the pixel's (or voxel's) RSV. All RSVs from an image are used to train a multi-layered perceptron neural network which uses these features to 'learn' a suitable classification strategy. To clearly distinguish the desired object from other objects within an image, several examples from inside and outside the desired object are used for training. Several examples are presented to illustrate the strengths and weaknesses of our algorithm. Both synthetic and actual biomedical images are considered. Future extensions to this algorithm are also discussed.
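    A minimal sketch of the regional shape vector (RSV) computation described above, for the 2-D, 8-direction case: for a pixel inside a thresholded region, step outward in each direction until the region boundary is reached and record the number of steps. Function and variable names are illustrative.

```python
import numpy as np

# 8 directions (dy, dx): N, NE, E, SE, S, SW, W, NW.
DIRS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def regional_shape_vector(mask, y, x):
    """Distance (in steps) from pixel (y, x) to the region boundary in each of
    the 8 directions; the region is the set of True pixels in `mask`."""
    h, w = mask.shape
    rsv = []
    for dy, dx in DIRS:
        d, yy, xx = 0, y, x
        while 0 <= yy + dy < h and 0 <= xx + dx < w and mask[yy + dy, xx + dx]:
            yy, xx, d = yy + dy, xx + dx, d + 1
        rsv.append(d)
    return np.array(rsv)

mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True                       # a 16 x 16 square "object"
print(regional_shape_vector(mask, 15, 15))    # roughly symmetric distances
```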

  13. Automation of Hessian-Based Tubularity Measure Response Function in 3D Biomedical Images.

    PubMed

    Dzyubak, Oleksandr P; Ritman, Erik L

    2011-01-01

    Blood vessels and nerve trees consist of tubular objects interconnected into a complex tree- or web-like structure whose structural scale ranges from 5 μm diameter capillaries to the 3 cm aorta. This large range of scales presents two major problems: one is just making the measurements, and the other is the exponential increase in the number of components with decreasing scale. With the remarkable increase in the volume imaged by, and resolution of, modern-day 3D imagers, it is almost impossible to track the complex multiscale parameters manually in those large image data sets. In addition, manual tracking is quite subjective and unreliable. We propose an automated, adaptive, unsupervised system for tracking tubular objects based on a multiscale framework and a Hessian-based object shape detector, incorporating the National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) image processing libraries.
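    A single-scale, 2-D sketch of a Hessian-based tubularity measure in the spirit described above (a simplified Frangi-style vesselness, not the authors' ITK implementation); parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tubularity_2d(img, sigma=2.0, beta=0.5, c=15.0):
    """Simplified single-scale vesselness from Hessian eigenvalues. Bright
    tubular structures give |l1| small and l2 strongly negative."""
    img = img.astype(float)
    Hxx = gaussian_filter(img, sigma, order=(0, 2))   # d2/dx2 (x = axis 1)
    Hyy = gaussian_filter(img, sigma, order=(2, 0))   # d2/dy2
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 symmetric Hessian, then sort by absolute value.
    tmp = np.sqrt(((Hxx - Hyy) / 2.0) ** 2 + Hxy ** 2)
    l1 = (Hxx + Hyy) / 2.0 + tmp
    l2 = (Hxx + Hyy) / 2.0 - tmp
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)   # |l1| <= |l2|
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)        # blob-vs-line ratio
    s = np.sqrt(l1 ** 2 + l2 ** 2)                # second-order "structureness"
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    return np.where(l2 < 0, v, 0.0)               # keep bright ridges only
```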

  14. Single-shot ultrafast tomographic imaging by spectral multiplexing

    NASA Astrophysics Data System (ADS)

    Matlis, N. H.; Axley, A.; Leemans, W. P.

    2012-10-01

    Computed tomography has profoundly impacted science, medicine and technology by using projection measurements scanned over multiple angles to permit cross-sectional imaging of an object. The application of computed tomography to moving or dynamically varying objects, however, has been limited by the temporal resolution of the technique, which is set by the time required to complete the scan. For objects that vary on ultrafast timescales, traditional scanning methods are not an option. Here we present a non-scanning method capable of resolving structure on femtosecond timescales by using spectral multiplexing of a single laser beam to perform tomographic imaging over a continuous range of angles simultaneously. We use this technique to demonstrate the first single-shot ultrafast computed tomography reconstructions and obtain previously inaccessible structure and position information for laser-induced plasma filaments. This development enables real-time tomographic imaging for ultrafast science, and offers a potential solution to the challenging problem of imaging through scattering surfaces.

  15. Basic level scene understanding: categories, attributes and structures

    PubMed Central

    Xiao, Jianxiong; Hays, James; Russell, Bryan C.; Patterson, Genevieve; Ehinger, Krista A.; Torralba, Antonio; Oliva, Aude

    2013-01-01

    A longstanding goal of computer vision is to build a system that can automatically understand a 3D scene from a single image. This requires extracting semantic concepts and 3D information from 2D images which can depict an enormous variety of environments that comprise our visual world. This paper summarizes our recent efforts toward these goals. First, we describe the richly annotated SUN database which is a collection of annotated images spanning 908 different scene categories with object, attribute, and geometric labels for many scenes. This database allows us to systematically study the space of scenes and to establish a benchmark for scene and object recognition. We augment the categorical SUN database with 102 scene attributes for every image and explore attribute recognition. Finally, we present an integrated system to extract the 3D structure of the scene and objects depicted in an image. PMID:24009590

  16. Using composite sinusoidal patterns in structured-illumination reflectance imaging (SIRI) for enhanced detection of defects in food

    USDA-ARS?s Scientific Manuscript database

    Structured-illumination reflectance imaging (SIRI) provides a new means for enhanced detection of defects in horticultural products. Implementing the technique relies on retrieving amplitude images by illuminating the object with sinusoidal patterns of single spatial frequencies, which, however, are...

  17. Modelling the structure of sludge aggregates

    PubMed Central

    Smoczyński, Lech; Ratnaweera, Harsha; Kosobucka, Marta; Smoczyński, Michał; Kalinowski, Sławomir; Kvaal, Knut

    2016-01-01

    ABSTRACT The structure of sludge is closely associated with the process of wastewater treatment. Synthetic dyestuff wastewater and sewage were coagulated using the PAX and PIX methods, and electro-coagulated on aluminium electrodes. The processes of wastewater treatment were supported with an organic polymer. The images of surface structures of the investigated sludge were obtained using scanning electron microscopy (SEM). The software image analysis permitted obtaining plots log A vs. log P, wherein A is the surface area and P is the perimeter of the object, for individual objects comprised in the structure of the sludge. The resulting database confirmed the ‘self-similarity’ of the structural objects in the studied groups of sludge, which enabled calculating their fractal dimension and proposing models for these objects. A quantitative description of the sludge aggregates permitted proposing a mechanism of the processes responsible for their formation. In the paper, also, the impact of the structure of the investigated sludge on the process of sedimentation, and dehydration of the thickened sludge after sedimentation, was discussed. PMID:26549812
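    The fractal dimension estimate implied by the log A versus log P plots can be sketched as follows, assuming the usual perimeter-area relation P ∝ A^(D/2) for self-similar objects; the numbers are synthetic.

```python
import numpy as np

def perimeter_area_fractal_dimension(areas, perimeters):
    """Estimate a fractal dimension from the perimeter-area relation
    P ~ A**(D/2): D is twice the slope of log P against log A."""
    slope, _ = np.polyfit(np.log(areas), np.log(perimeters), 1)
    return 2.0 * slope

# Toy data: self-similar objects whose perimeters grow as A**0.65 (D ~ 1.3).
areas = np.array([50.0, 120.0, 400.0, 1500.0, 5200.0])
perimeters = 4.0 * areas ** 0.65
print(perimeter_area_fractal_dimension(areas, perimeters))   # ~1.3
```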

  18. A spherical aberration-free microscopy system for live brain imaging.

    PubMed

    Ue, Yoshihiro; Monai, Hiromu; Higuchi, Kaori; Nishiwaki, Daisuke; Tajima, Tetsuya; Okazaki, Kenya; Hama, Hiroshi; Hirase, Hajime; Miyawaki, Atsushi

    2018-06-02

    The high-resolution in vivo imaging of mouse brain for quantitative analysis of fine structures, such as dendritic spines, requires objectives with high numerical apertures (NAs) and long working distances (WDs). However, this imaging approach is often hampered by spherical aberration (SA) that results from the mismatch of refractive indices in the optical path and becomes more severe with increasing depth of target from the brain surface. Whereas a revolving objective correction collar has been designed to compensate SA, its adjustment requires manual operation and is inevitably accompanied by considerable focal shift, making it difficult to acquire the best image of a given fluorescent object. To solve the problems, we have created an objective-attached device and formulated a fast iterative algorithm for the realization of an automatic SA compensation system. The device coordinates the collar rotation and the Z-position of an objective, enabling correction collar adjustment while stably focusing on a target. The algorithm provides the best adjustment on the basis of the calculated contrast of acquired images. Together, they enable the system to compensate SA at a given depth. As proof of concept, we applied the SA compensation system to in vivo two-photon imaging with a 25 × water-immersion objective (NA, 1.05; WD, 2 mm). It effectively reduced SA regardless of location, allowing quantitative and reproducible analysis of fine structures of YFP-labeled neurons in the mouse cerebral cortical layers. Interestingly, although the cortical structure was optically heterogeneous along the z-axis, the refractive index of each layer could be assessed on the basis of the compensation degree. It was also possible to make fully corrected three-dimensional reconstructions of YFP-labeled neurons in live brain samples. Our SA compensation system, called Deep-C, is expected to bring out the best in all correction-collar-equipped objectives for imaging deep regions of heterogeneous tissues. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Retrieving high-resolution images over the Internet from an anatomical image database

    NASA Astrophysics Data System (ADS)

    Strupp-Adams, Annette; Henderson, Earl

    1999-12-01

    The Visible Human Data Set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100 KB for browser-viewable rendered images to 1 GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; the Java application server interface to the database, which organizes data returned to the user; and its distribution engine, which allows users to download image files individually and/or in batch mode.

  20. Laser-induced fluorescence imaging of subsurface tissue structures with a volume holographic spatial-spectral imaging system.

    PubMed

    Luo, Yuan; Gelsinger-Austin, Paul J; Watson, Jonathan M; Barbastathis, George; Barton, Jennifer K; Kostuk, Raymond K

    2008-09-15

    A three-dimensional imaging system incorporating multiplexed holographic gratings to visualize fluorescence tissue structures is presented. Holographic gratings formed in volume recording materials such as a phenanthrenquinone poly(methyl methacrylate) photopolymer have narrowband angular and spectral transmittance filtering properties that enable obtaining spatial-spectral information within an object. We demonstrate this imaging system's ability to obtain multiple depth-resolved fluorescence images simultaneously.

  1. Determination of the object surface function by structured light: application to the study of spinal deformities

    NASA Astrophysics Data System (ADS)

    Buendía, M.; Salvador, R.; Cibrián, R.; Laguia, M.; Sotoca, J. M.

    1999-01-01

    The projection of structured light is a technique frequently used to determine the surface shape of an object. In this paper, a new procedure is described that efficiently resolves the correspondence between the knots of the projected grid and those obtained on the object when the projection is made. The method is based on the use of three images of the projected grid. In two of them the grid is projected over a flat surface placed, respectively, before and behind the object; both images are used for calibration. In the third image the grid is projected over the object. It is not reliant on accurate determination of the camera and projector pair relative to the grid and object. Once the method is calibrated, we can obtain the surface function by just analysing the projected grid on the object. The procedure is especially suitable for the study of objects without discontinuities or large depth gradients. It can be employed for determining, in a non-invasive way, the patient's back surface function. Symmetry differences permit a quantitative diagnosis of spinal deformities such as scoliosis.

  2. Evanescent-Wave Filtering in Images Using Remote Terahertz Structured Illumination

    NASA Astrophysics Data System (ADS)

    Flammini, M.; Pontecorvo, E.; Giliberti, V.; Rizza, C.; Ciattoni, A.; Ortolani, M.; DelRe, E.

    2017-11-01

    Imaging with structured illumination allows for the retrieval of subwavelength features of an object by conversion of evanescent waves into propagating waves. In conditions in which the object plane and the structured-illumination plane do not coincide, this conversion process is subject to progressive filtering of the components with high spatial frequency when the distance between the two planes increases, until the diffraction-limited lateral resolution is restored when the distance exceeds the extension of evanescent waves. We study the progressive filtering of evanescent waves by developing a remote super-resolution terahertz imaging system operating at a wavelength λ = 1.00 mm, based on a freestanding knife edge and a reflective confocal terahertz microscope. In the images recorded with increasing knife-edge-to-object-plane distance, we observe the transition from a super-resolution of λ/17 ≃ 60 μm to the diffraction-limited lateral resolution of Δx ≃ λ expected for our confocal microscope. The extreme nonparaxial conditions are analyzed in detail, exploiting the fact that, in the terahertz frequency range, the knife edge can be positioned at a variable subwavelength distance from the object plane. Electromagnetic simulations of radiation scattering by the knife edge reproduce the experimental super-resolution achieved.

  3. VizieR Online Data Catalog: M33 SNR candidates properties (Lee+, 2014)

    NASA Astrophysics Data System (ADS)

    Lee, J. H.; Lee, M. G.

    2017-04-01

    We utilized the Hα and [S II] images in the LGGS to find new M33 remnants. The LGGS covered three 36' square fields of M33. We subtracted continuum sources from the narrowband images using R-band images. We smoothed the images with better seeing to match the point-spread function in the images with worse seeing, using the IRAF task psfmatch. We then scaled and subtracted the resulting continuum images from narrowband images. We selected M33 remnants considering three criteria: emission-line ratio ([S II]/Hα), the morphological structure, and the absence of blue stars inside the sources. Details are described in L14 (Lee et al. 2014ApJ...786..130L). We detected objects with [S II]/Hα>0.4 in emission-line ratio maps, and selected objects with round or shell structures in each narrowband image. As a result, we chose 435 sources. (2 data files).
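    A toy sketch of the selection pipeline described above: scale and subtract the R-band continuum from each narrow-band image, form the [S II]/Hα ratio map, and keep pixels above the 0.4 cut. The scale factors and parameter names are placeholders, not the catalogue's calibration.

```python
import numpy as np

def snr_candidate_mask(halpha, sii, rband, scale_ha=1.0, scale_sii=1.0,
                       ratio_cut=0.4, min_flux=0.0):
    """Toy continuum subtraction and [S II]/Halpha ratio selection.
    `scale_*` are placeholder factors matching the R-band continuum to each
    narrow-band image."""
    ha_line = halpha - scale_ha * rband      # continuum-subtracted Halpha
    sii_line = sii - scale_sii * rband       # continuum-subtracted [S II]
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(ha_line > min_flux, sii_line / ha_line, 0.0)
    return ratio > ratio_cut                 # candidate shock-ionized pixels
```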

  4. Tradeoff between noise reduction and inartificial visualization in a model-based iterative reconstruction algorithm on coronary computed tomography angiography.

    PubMed

    Hirata, Kenichiro; Utsunomiya, Daisuke; Kidoh, Masafumi; Funama, Yoshinori; Oda, Seitaro; Yuki, Hideaki; Nagayama, Yasunori; Iyama, Yuji; Nakaura, Takeshi; Sakabe, Daisuke; Tsujita, Kenichi; Yamashita, Yasuyuki

    2018-05-01

    We aimed to evaluate the image quality performance of coronary CT angiography (CTA) under different settings of a forward-projected model-based iterative reconstruction solution (FIRST). Thirty patients undergoing coronary CTA were included. Each image was reconstructed using filtered back projection (FBP), adaptive iterative dose reduction 3D (AIDR-3D), and two model-based iterative reconstructions, FIRST-body and FIRST-cardiac sharp (CS). CT number and noise were measured in the coronary vessels and plaque. Subjective image-quality scores were obtained for noise and structure visibility. In the objective image analysis, FIRST-body produced a significantly higher contrast-to-noise ratio than the other reconstructions. Regarding subjective image quality, FIRST-CS had the highest score for structure visibility, although its image noise score was inferior to that of FIRST-body. In conclusion, FIRST provides significant improvements in objective and subjective image quality compared with FBP and AIDR-3D. FIRST-body effectively reduces image noise, but structure visibility with FIRST-CS was superior to that with FIRST-body.

  5. An efficient direct method for image registration of flat objects

    NASA Astrophysics Data System (ADS)

    Nikolaev, Dmitry; Tihonkih, Dmitrii; Makovetskii, Artyom; Voronin, Sergei

    2017-09-01

    Image alignment of rigid surfaces is a rapidly developing area of research and has many practical applications. Alignment methods can be roughly divided into two types: feature-based methods and direct methods. The well-known SURF and SIFT algorithms are examples of feature-based methods. Direct methods are those that exploit the pixel intensities without resorting to image features; image-based deformations are a general direct method for aligning images of deformable objects in 3D space. Nevertheless, this approach is not well suited to the registration of images of rigid 3D objects, since the underlying structure cannot be directly evaluated. In this article, we propose a model that is suitable for image alignment of rigid flat objects under various illumination models. The brightness consistency assumption is used to reconstruct the optimal geometric transformation. Computer simulation results are provided to illustrate the performance of the proposed algorithm for computing the correspondence between pixels of two images.
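    A minimal sketch of a direct, brightness-constancy-based alignment for the simplest case of a pure 2-D translation (the article's model covers more general transformations and illumination changes): linearize the intensity difference with image gradients, solve a small least-squares problem, and iterate.

```python
import numpy as np

def estimate_translation(ref, mov, iters=25):
    """Estimate the (row, col) shift that aligns `mov` to `ref` by least
    squares on the brightness-constancy equation (direct method, iterated;
    suitable for small displacements, integer-pixel warping for brevity)."""
    ref = ref.astype(float)
    mov = mov.astype(float)
    ty = tx = 0.0
    for _ in range(iters):
        shifted = np.roll(mov, (int(round(ty)), int(round(tx))), axis=(0, 1))
        gy, gx = np.gradient(shifted)
        A = np.stack([gy.ravel(), gx.ravel()], axis=1)
        err = (shifted - ref).ravel()
        dy, dx = np.linalg.lstsq(A, err, rcond=None)[0]
        ty, tx = ty + dy, tx + dx
        if abs(dy) < 1e-3 and abs(dx) < 1e-3:
            break
    return ty, tx   # shift (np.roll convention) that maps mov onto ref
```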

  6. Automatic archaeological feature extraction from satellite VHR images

    NASA Astrophysics Data System (ADS)

    Jahjah, Munzer; Ulivieri, Carlo

    2010-05-01

    Archaeological applications need a methodological approach, applicable at variable scales, able to satisfy both intra-site (excavation) and inter-site (survey, environmental research) requirements. The increased availability of high-resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for reconstructing archaeological landscapes from remotely sensed data. Feature extraction from multispectral remote sensing images is an important task before any further processing. High-resolution remote sensing data, especially panchromatic data, is an important input for the analysis of various types of image characteristics; it plays an important role in visual systems for recognition and interpretation of the given data. The methods proposed rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms. It is mathematical in the sense that the analysis is based on set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique; two-dimensional grey-tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element at every possible position of the space and testing, for each position, whether the structuring element is included in or has a nonempty intersection with the studied set. The shape and size of the structuring element must be selected according to the morphology of the searched image structures. Two other feature extraction techniques, eCognition and the ENVI software module, were used in order to compare the results. These techniques were applied to different archaeological sites in Turkmenistan (Nisa) and in Iraq (Babylon); a further change detection analysis was applied to the Babylon site using two HR images acquired before and after the second Gulf War. The outputs differed, since the operative scale of the sensed data determines the final result of the processing and the quality of the extracted information, and each technique was sensitive to specific shapes in each input image; we mapped linear and nonlinear objects, updated the archaeological cartography, and performed an automatic change detection analysis for the Babylon site. The discussion of these techniques aims to provide the archaeological team with new instruments for the orientation and planning of a remote sensing application.
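    A minimal sketch of the morphological idea described above, using a binary opening with a structuring element whose shape and size match the searched structures; the input map, element size, and threshold are illustrative, not the paper's processing chain.

```python
import numpy as np
from scipy.ndimage import binary_opening

# Hypothetical binary map of candidate built structures (e.g., from a
# threshold on a panchromatic band); sizes and values are illustrative.
rng = np.random.default_rng(1)
candidates = rng.random((200, 200)) > 0.97      # scattered noise responses
candidates[50:80, 60:120] = True                # one compact rectangular feature

# Structuring element chosen to match the shape/size of the searched
# structures: a 5 x 5 square keeps compact rectangular features and removes
# isolated pixels and thin, irregular responses.
structuring_element = np.ones((5, 5), dtype=bool)
man_made_like = binary_opening(candidates, structure=structuring_element)
```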

  7. Multidimensional Shape Similarity in the Development of Visual Object Classification

    ERIC Educational Resources Information Center

    Mash, Clay

    2006-01-01

    The current work examined age differences in the classification of novel object images that vary in continuous dimensions of structural shape. The structural dimensions employed are two that share a privileged status in the visual analysis and representation of objects: the shape of discrete prominent parts and the attachment positions of those…

  8. Soil structure characterized using computed tomographic images

    Treesearch

    Zhanqi Cheng; Stephen H. Anderson; Clark J. Gantzer; J. W. Van Sambeek

    2003-01-01

    Fractal analysis of soil structure is a relatively new method for quantifying the effects of management systems on soil properties and quality. The objective of this work was to explore several methods of studying images to describe and quantify structure of soils under forest management. This research uses computed tomography and a topological method called Multiple...

  9. Science Objectives for a Soft X-ray Mission

    NASA Astrophysics Data System (ADS)

    Sibeck, D. G.; Connor, H. K.; Collier, M. R.; Collado-Vega, Y. M.; Walsh, B.

    2016-12-01

    When high charge state solar wind ions exchange electrons with exospheric neutrals, soft X-rays are emitted. In conjunction with flight- proven wide field-of-view soft X-ray imagers employing lobster-eye optics, recent simulations demonstrate the feasibility of imaging magnetospheric density structures such as the bow shock, magnetopause, and cusps. This presentation examines the Heliospheric scientific objectives that such imagers can address. Principal amongst these is the nature of reconnection at the dayside magnetopause: steady or transient, widespread or localized, component or antiparallel as a function of solar wind conditions. However, amongst many other objectives, soft X-ray imagers can provide crucial information concerning the structure of the bow shock as a function of solar wind Mach number and IMF orientation, the presence or absence of a depletion layer, the occurrence of Kelvin-Helmholtz or pressure-pulse driven magnetopause boundary waves, and the effects of radial IMF orientations and the foreshock upon bow shock and magnetopause location.

  10. Stereoscopic radiographic images with thermal neutrons

    NASA Astrophysics Data System (ADS)

    Silvani, M. I.; Almeida, G. L.; Rogers, J. D.; Lopes, R. T.

    2011-10-01

    Spatial structure of an object can be perceived by the stereoscopic vision provided by eyes or by the parallax produced by movement of the object with regard to the observer. For an opaque object, a technique to render it transparent should be used, in order to make visible the spatial distribution of its inner structure, for any of the two approaches used. In this work, a beam of thermal neutrons at the main port of the Argonauta research reactor of the Instituto de Engenharia Nuclear in Rio de Janeiro/Brazil has been used as radiation to render the inspected objects partially transparent. A neutron sensitive Imaging Plate has been employed as a detector and after exposure it has been developed by a reader using a 0.5 μm laser beam, which defines the finest achievable spatial resolution of the acquired digital image. This image, a radiographic attenuation map of the object, does not represent any specific cross-section but a convoluted projection for each specific attitude of the object with regard to the detector. After taking two of these projections at different object attitudes, they are properly processed and the final image is viewed by a red and green eyeglass. For monochromatic images this processing involves transformation of black and white radiographies into red and white and green and white ones, which are afterwards merged to yield a single image. All the processes are carried out with the software ImageJ. Divergence of the neutron beam unfortunately spoils both spatial and contrast resolutions, which become poorer as object-detector distance increases. Therefore, in order to evaluate the range of spatial resolution corresponding to the 3D image being observed, a curve expressing spatial resolution against object-detector gap has been deduced from the Modulation Transfer Functions experimentally. Typical exposure times, under a reactor power of 170 W, were 6 min for both quantitative and qualitative measurements. In spite of its intrinsic constraints, this simple technique may provide valuable information about the object otherwise available only through more refined and expensive 3D tomography.
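    The red/green merging step described above can be sketched as follows: two grayscale radiographs taken at different object attitudes are normalized and placed in the red and green channels of one RGB image for viewing through red/green eyeglasses. This is a generic anaglyph sketch, not the exact ImageJ procedure used in the paper.

```python
import numpy as np

def red_green_anaglyph(view_a, view_b):
    """Merge two grayscale radiographs (two object attitudes) into one RGB
    anaglyph: first view in the red channel, second in the green channel."""
    a = view_a.astype(float)
    b = view_b.astype(float)
    a = (a - a.min()) / (np.ptp(a) + 1e-9)   # normalize to [0, 1]
    b = (b - b.min()) / (np.ptp(b) + 1e-9)
    rgb = np.zeros(a.shape + (3,))
    rgb[..., 0] = a   # red channel
    rgb[..., 1] = b   # green channel
    return rgb
```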

  11. A knowledge-based object recognition system for applications in the space station

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1988-01-01

    A knowledge-based three-dimensional (3D) object recognition system is being developed. The system uses primitive-based hierarchical relational and structural matching for the recognition of 3D objects in the two-dimensional (2D) image for interpretation of the 3D scene. At present, the pre-processing, low-level preliminary segmentation, rule-based segmentation, and feature extraction are completed. The data structure of the primitive viewing knowledge-base (PVKB) is also completed. Algorithms and programs based on attribute-tree matching for decomposing the segmented data into valid primitives were developed. The frame-based structural and relational descriptions of some objects were created and stored in a knowledge-base. This knowledge-base of frame-based descriptions was developed on the MICROVAX-AI microcomputer in a LISP environment. Both a simulated 3D scene of simple non-overlapping objects and real camera images of low-complexity 3D objects have been successfully interpreted.

  12. Apparatus and method to achieve high-resolution microscopy with non-diffracting or refracting radiation

    DOEpatents

    Tobin, Jr., Kenneth W.; Bingham, Philip R.; Hawari, Ayman I.

    2012-11-06

    An imaging system employing a coded aperture mask having multiple pinholes is provided. The coded aperture mask is placed at a radiation source to pass the radiation through. The radiation impinges on, and passes through an object, which alters the radiation by absorption and/or scattering. Upon passing through the object, the radiation is detected at a detector plane to form an encoded image, which includes information on the absorption and/or scattering caused by the material and structural attributes of the object. The encoded image is decoded to provide a reconstructed image of the object. Because the coded aperture mask includes multiple pinholes, the radiation intensity is greater than a comparable system employing a single pinhole, thereby enabling a higher resolution. Further, the decoding of the encoded image can be performed to generate multiple images of the object at different distances from the detector plane. Methods and programs for operating the imaging system are also disclosed.
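    A toy sketch of the encode/decode idea: the detector image is modeled as the object convolved with the multi-pinhole mask pattern, and a reconstruction is obtained by correlating the encoded image with the mask. The random mask and simple correlation decoder below are illustrative; practical systems use dedicated mask families and decoders.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# Hypothetical random pinhole mask (1 = open pinhole) and a small test object.
mask = (rng.random((31, 31)) < 0.5).astype(float)
obj = np.zeros((64, 64))
obj[20:44, 30:34] = 1.0                        # a bar-like internal feature

# Encoding: the detector records the object convolved with the mask pattern.
encoded = fftconvolve(obj, mask, mode="same")

# Decoding: correlate the encoded image with the mask (a generic decoder;
# dedicated mask families such as MURA have exact inverse patterns).
decoded = fftconvolve(encoded, mask[::-1, ::-1], mode="same")
decoded /= decoded.max()
```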

  13. Man-made objects cuing in satellite imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skurikhin, Alexei N

    2009-01-01

    We present a multi-scale framework for man-made structure cuing in satellite image regions. The approach is based on a hierarchical image segmentation followed by structural analysis. A hierarchical segmentation produces an image pyramid that contains a stack of irregular image partitions, represented as polygonized pixel patches, of successively reduced levels of detail (LODs). We are jumping off from the over-segmented image represented by polygons attributed with spectral and texture information. The image is represented as a proximity graph with vertices corresponding to the polygons and edges reflecting polygon relations. This is followed by iterative graph contraction based on Boruvka's Minimum Spanning Tree (MST) construction algorithm. The graph contractions merge the patches based on their pairwise spectral and texture differences. Concurrently with the construction of the irregular image pyramid, structural analysis is done on the agglomerated patches. Man-made object cuing is based on the analysis of shape properties of the constructed patches and their spatial relations. The presented framework can be used as a pre-scanning tool for wide area monitoring to quickly guide further analysis to regions of interest.

  14. Detecting overlapping instances in microscopy images using extremal region trees.

    PubMed

    Arteta, Carlos; Lempitsky, Victor; Noble, J Alison; Zisserman, Andrew

    2016-01-01

    In many microscopy applications the images may contain both regions of low and high cell densities corresponding to different tissues or colonies at different stages of growth. This poses a challenge to most previously developed automated cell detection and counting methods, which are designed to handle either the low-density scenario (through cell detection) or the high-density scenario (through density estimation or texture analysis). The objective of this work is to detect all the instances of an object of interest in microscopy images. The instances may be partially overlapping and clustered. To this end we introduce a tree-structured discrete graphical model that is used to select and label a set of non-overlapping regions in the image by a global optimization of a classification score. Each region is labeled with the number of instances it contains - for example regions can be selected that contain two or three object instances, by defining separate classes for tuples of objects in the detection process. We show that this formulation can be learned within the structured output SVM framework and that the inference in such a model can be accomplished using dynamic programming on a tree structured region graph. Furthermore, the learning only requires weak annotations - a dot on each instance. The candidate regions for the selection are obtained as extremal region of a surface computed from the microscopy image, and we show that the performance of the model can be improved by considering a proxy problem for learning the surface that allows better selection of the extremal regions. Furthermore, we consider a number of variations for the loss function used in the structured output learning. The model is applied and evaluated over six quite disparate data sets of images covering: fluorescence microscopy, weak-fluorescence molecular images, phase contrast microscopy and histopathology images, and is shown to exceed the state of the art in performance. Copyright © 2015 Elsevier B.V. All rights reserved.
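    The tree-structured selection can be sketched as a small dynamic program: each candidate region carries scores for "contains k instances", selecting a region excludes its nested children, and the optimum at a node is either its own best labelling or the sum of the optima of its children. The data layout and scores below are illustrative, not the learned structured-output model.

```python
# Each node is a dict: {'scores': {k: score}, 'children': [subnodes]} where k
# is the number of instances the region is claimed to contain.

def best_selection(node):
    """Return (best_value, chosen_regions), where chosen_regions is a list of
    (node, label) pairs forming a non-overlapping labelling of the tree."""
    child_results = [best_selection(c) for c in node["children"]]
    children_value = sum(v for v, _ in child_results)
    children_choice = [r for _, regs in child_results for r in regs]

    own_label, own_value = max(node["scores"].items(), key=lambda kv: kv[1])
    if own_value >= children_value:
        return own_value, [(node, own_label)]   # take this region, skip children
    return children_value, children_choice      # take the children instead

leaf_a = {"scores": {0: 0.1, 1: 0.9}, "children": []}
leaf_b = {"scores": {0: 0.2, 1: 0.7}, "children": []}
root = {"scores": {2: 1.2, 3: 0.3}, "children": [leaf_a, leaf_b]}
value, regions = best_selection(root)
print(value, [label for _, label in regions])   # children win: 1.6, labels [1, 1]
```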

  15. The fundamentals of average local variance--Part I: Detecting regular patterns.

    PubMed

    Bøcher, Peder Klith; McCloy, Keith R

    2006-02-01

    The method of average local variance (ALV) computes the mean of the standard deviation values derived for a 3 x 3 moving window on a successively coarsened image to produce a function of ALV versus spatial resolution. In developing ALV, the authors used approximately a doubling of the pixel size at each coarsening of the image. They hypothesized that ALV is low when the pixel size is smaller than the size of scene objects, because the pixels on an object will have similar response values. When the pixels and objects are of similar size, they will tend to vary in response and the ALV values will increase. As the size of the pixels increases further, more objects will be contained in a single pixel and ALV will decrease. The authors showed that various cover types produced single-peak ALV functions that peaked when the pixel size was 1/2 to 3/4 of the object size. This paper reports on work done to explore the characteristics of the various forms of the ALV function and to understand the location of the peaks that occur in this function. The work was conducted using synthetically generated image data. The investigation showed that the hypothesis as originally proposed is not adequate. A new hypothesis is proposed: the ALV function has peak locations that are related to the geometric size of pattern structures in the scene. These structures are not always the same as scene objects. Only in cases where the size of and separation between scene objects are equal does the ALV function detect the size of the objects. In situations where the distance between scene objects is larger than their size, the ALV function has a peak at the object separation, not at the object size. This work has also shown that multiple object structures of different sizes and distances in the image produce multiple peaks in the ALV function, and that some of these structures are not readily recognized as such from our perspective. However, the magnitude of these peaks depends on the response mix in the structures, complicating their interpretation and analysis. The analysis of the ALV function is, thus, more complex than that generally reported in the literature.
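    A minimal sketch of the ALV computation as described above: at each level, take the mean of the 3 x 3 local standard deviation, then coarsen the image by averaging 2 x 2 blocks (approximately doubling the pixel size) and repeat. Function names and the number of levels are illustrative.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def coarsen(img):
    """Aggregate 2 x 2 pixel blocks by their mean (doubling the pixel size)."""
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def alv_curve(img, levels=6):
    """Average local variance: mean of the 3 x 3 local standard deviation,
    computed on successively coarsened versions of the image."""
    img = img.astype(float)
    curve = []
    for _ in range(levels):
        windows = sliding_window_view(img, (3, 3))
        curve.append(windows.std(axis=(-1, -2)).mean())
        if min(img.shape) < 8:
            break
        img = coarsen(img)
    return curve
```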

  16. Three-dimensional reconstruction with x-ray shape-from-silhouette

    NASA Astrophysics Data System (ADS)

    Simioni, E.; Ratti, F.; Calliari, I.; Poletto, L.

    2010-09-01

    In the field of restoration of ancient handworks, X-ray tomography is a powerful method for reconstructing the internal structure of an object in a non-invasive way. In some cases, such as small objects made entirely of hard metals and completely hidden by clay or oxidation products, tomography, although necessary to obtain the 3D appearance of the object, does not give any additional information on its internal monolithic structure. We present here the application of the shape-from-silhouette technique to X-ray images to reconstruct the 3D profile of handworks. The acquisition technique is similar to tomography, since several X-ray images are taken while the object is rotated. Some reference points are placed on a structure co-rotating with the object and are acquired in the images for calibration and registration. The shape-from-silhouette algorithm finally gives the 3D appearance of the handwork. We present the analysis of a tin pendant of the VI-VIII century B.C. (Venetian area) completely hidden by solid ground. The 3D reconstruction surprisingly shows that the pendant is a very elaborate piece, with two embracing figures that were completely invisible before restoration.

  17. Method and apparatus for detecting internal structures of bulk objects using acoustic imaging

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2002-01-01

    Apparatus for producing an acoustic image of an object according to the present invention may comprise an excitation source for vibrating the object to produce at least one acoustic wave therein. The acoustic wave results in the formation of at least one surface displacement on the surface of the object. A light source produces an optical object wavefront and an optical reference wavefront and directs the optical object wavefront toward the surface of the object to produce a modulated optical object wavefront. A modulator operatively associated with the optical reference wavefront modulates the optical reference wavefront in synchronization with the acoustic wave to produce a modulated optical reference wavefront. A sensing medium positioned to receive the modulated optical object wavefront and the modulated optical reference wavefront combines the modulated optical object and reference wavefronts to produce an image related to the surface displacement on the surface of the object. A detector detects the image related to the surface displacement produced by the sensing medium. A processing system operatively associated with the detector constructs an acoustic image of interior features of the object based on the phase and amplitude of the surface displacement on the surface of the object.

  18. Interactive High-Relief Reconstruction for Organic and Double-Sided Objects from a Photo.

    PubMed

    Yeh, Chih-Kuo; Huang, Shi-Yang; Jayaraman, Pradeep Kumar; Fu, Chi-Wing; Lee, Tong-Yee

    2017-07-01

    We introduce an interactive user-driven method to reconstruct high-relief 3D geometry from a single photo. Particularly, we consider two novel but challenging reconstruction issues: i) common non-rigid objects whose shapes are organic rather than polyhedral/symmetric, and ii) double-sided structures, where front and back sides of some curvy object parts are revealed simultaneously on image. To address these issues, we develop a three-stage computational pipeline. First, we construct a 2.5D model from the input image by user-driven segmentation, automatic layering, and region completion, handling three common types of occlusion. Second, users can interactively mark-up slope and curvature cues on the image to guide our constrained optimization model to inflate and lift up the image layers. We provide real-time preview of the inflated geometry to allow interactive editing. Third, we stitch and optimize the inflated layers to produce a high-relief 3D model. Compared to previous work, we can generate high-relief geometry with large viewing angles, handle complex organic objects with multiple occluded regions and varying shape profiles, and reconstruct objects with double-sided structures. Lastly, we demonstrate the applicability of our method on a wide variety of input images with human, animals, flowers, etc.

  19. Estimation of object motion parameters from noisy images.

    PubMed

    Broida, T J; Chellappa, R

    1986-01-01

    An approach is presented for the estimation of object motion parameters based on a sequence of noisy images. The problem considered is that of a rigid body undergoing unknown rotational and translational motion. The measurement data consists of a sequence of noisy image coordinates of two or more object correspondence points. By modeling the object dynamics as a function of time, estimates of the model parameters (including motion parameters) can be extracted from the data using recursive and/or batch techniques. This permits a desired degree of smoothing to be achieved through the use of an arbitrarily large number of images. Some assumptions regarding object structure are presently made. Results are presented for a recursive estimation procedure: the case considered here is that of a sequence of one dimensional images of a two dimensional object. Thus, the object moves in one transverse dimension, and in depth, preserving the fundamental ambiguity of the central projection image model (loss of depth information). An iterated extended Kalman filter is used for the recursive solution. Noise levels of 5-10 percent of the object image size are used. Approximate Cramer-Rao lower bounds are derived for the model parameter estimates as a function of object trajectory and noise level. This approach may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images (10 to 20 or more) are available.

  20. Combining 3D structure of real video and synthetic objects

    NASA Astrophysics Data System (ADS)

    Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon

    1998-04-01

    This paper presents a new approach to combining real video and synthetic objects. The purpose of this work is to use the proposed technology in the fields of advanced animation, virtual reality, games, and so forth. Computer graphics has been used in the fields previously mentioned. Recently, some applications have added real video to graphic scenes for the purpose of augmenting the realism that computer graphics alone lacks. This approach, called augmented or mixed reality, can produce a more realistic environment than the exclusive use of computer graphics. Our approach differs from virtual reality and augmented reality in that computer-generated graphic objects are combined with a 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of 3D depth followed by the construction of a height map. Graphic objects are then combined with the height map. The realization of our proposed approach is carried out in the following steps: (1) We derive the 3D structure from test image sequences. The extraction of the 3D structure requires the estimation of depth and the construction of a height map. Due to the contents of the test sequence, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or a Bezier surface and each planar surface is texture-mapped. (3) Finally, graphic objects are combined with the height map. Because the 3D structure of the height map is already known, step (3) is easily performed. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.

  1. Parallel object-oriented data mining system

    DOEpatents

    Kamath, Chandrika; Cantu-Paz, Erick

    2004-01-06

    A data mining system uncovers patterns, associations, anomalies and other statistically significant structures in data. Data files are read and displayed. Objects in the data files are identified. Relevant features for the objects are extracted. Patterns among the objects are recognized based upon the features. Data from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) sky survey was used to search for bent doubles. This test was conducted on data from the Very Large Array in New Mexico which seeks to locate a special type of quasar (radio-emitting stellar object) called bent doubles. The FIRST survey has generated more than 32,000 images of the sky to date. Each image is 7.1 megabytes, yielding more than 100 gigabytes of image data in the entire data set.

  2. A novel lobster-eye imaging system based on Schmidt-type objective for X-ray-backscattering inspection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Jie; Wang, Xin, E-mail: wangx@tongji.edu.cn, E-mail: mubz@tongji.edu.cn; Zhan, Qi

    This paper presents a novel lobster-eye imaging system for X-ray-backscattering inspection. The system was designed by modifying the Schmidt geometry into a treble-lens structure in order to reduce the resolution difference between the vertical and horizontal directions, as indicated by ray-tracing simulations. The lobster-eye X-ray imaging system is capable of operating over a wide range of photon energies up to 100 keV. In addition, the optics of the lobster-eye X-ray imaging system was tested to verify that they meet the requirements. X-ray-backscattering imaging experiments were performed in which T-shaped polymethyl-methacrylate objects were imaged by the lobster-eye X-ray imaging system based on both the double-lens and treble-lens Schmidt objectives. The results show similar resolution of the treble-lens Schmidt objective in both the vertical and horizontal directions. Moreover, imaging experiments were performed using a second treble-lens Schmidt objective with higher resolution. The results show that for a field of view of over 200 mm and with a 500 mm object distance, this lobster-eye X-ray imaging system based on a treble-lens Schmidt objective offers a spatial resolution of approximately 3 mm.

  3. A novel lobster-eye imaging system based on Schmidt-type objective for X-ray-backscattering inspection

    NASA Astrophysics Data System (ADS)

    Xu, Jie; Wang, Xin; Zhan, Qi; Huang, Shengling; Chen, Yifan; Mu, Baozhong

    2016-07-01

    This paper presents a novel lobster-eye imaging system for X-ray-backscattering inspection. The system was designed by modifying the Schmidt geometry into a treble-lens structure in order to reduce the resolution difference between the vertical and horizontal directions, as indicated by ray-tracing simulations. The lobster-eye X-ray imaging system is capable of operating over a wide range of photon energies up to 100 keV. In addition, the optics of the lobster-eye X-ray imaging system was tested to verify that they meet the requirements. X-ray-backscattering imaging experiments were performed in which T-shaped polymethyl-methacrylate objects were imaged by the lobster-eye X-ray imaging system based on both the double-lens and treble-lens Schmidt objectives. The results show similar resolution of the treble-lens Schmidt objective in both the vertical and horizontal directions. Moreover, imaging experiments were performed using a second treble-lens Schmidt objective with higher resolution. The results show that for a field of view of over 200 mm and with a 500 mm object distance, this lobster-eye X-ray imaging system based on a treble-lens Schmidt objective offers a spatial resolution of approximately 3 mm.

  4. Embodied memory allows accurate and stable perception of hidden objects despite orientation change.

    PubMed

    Pan, Jing Samantha; Bingham, Ned; Bingham, Geoffrey P

    2017-07-01

    Rotating a scene in a frontoparallel plane (rolling) yields a change in orientation of constituent images. When using only information provided by static images to perceive a scene after orientation change, identification performance typically decreases (Rock & Heimer, 1957). However, rolling generates optic flow information that relates the discrete, static images (before and after the change) and forms an embodied memory that aids recognition. The embodied memory hypothesis predicts that upon detecting a continuous spatial transformation of image structure, or in other words, seeing the continuous rolling process and the objects undergoing rolling, observers should accurately perceive objects during and after motion. Thus, in this case, orientation change should not affect performance. We tested this hypothesis in three experiments and found that (a) using combined optic flow and image structure, participants identified locations of previously perceived but currently occluded targets with great accuracy and stability (Experiment 1); (b) using combined optic flow and image structure information, participants identified hidden targets equally well with or without 30° orientation changes (Experiment 2); and (c) when the rolling was unseen, identification of hidden targets after orientation change became worse (Experiment 3). Furthermore, when rolling was unseen, although target identification was better when participants were told about the orientation change than when they were not told, performance was still worse than when there was no orientation change. Therefore, combined optic flow and image structure information, not mere knowledge about the rolling, enables accurate and stable perception despite orientation change. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Iterative optimizing quantization method for reconstructing three-dimensional images from a limited number of views

    DOEpatents

    Lee, Heung-Rae

    1997-01-01

    A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object.

  6. Fourier Plane Image Combination by Feathering

    NASA Astrophysics Data System (ADS)

    Cotton, W. D.

    2017-09-01

    Astronomical objects frequently exhibit structure over a wide range of scales whereas many telescopes, especially interferometer arrays, only sample a limited range of spatial scales. To properly image these objects, images from a set of instruments covering the range of scales may be needed. These images then must be combined in a manner to recover all spatial scales. This paper describes the feathering technique for image combination in the Fourier transform plane. Implementations in several packages are discussed and example combinations of single dish and interferometric observations of both simulated and celestial radio emission are given.
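
    As a rough illustration of the feathering idea, the sketch below combines a single-dish image and an interferometer image in the Fourier plane with a Gaussian crossover weight. It assumes numpy, that both images are already on the same pixel grid and flux scale, and a hypothetical crossover_pix parameter; practical implementations additionally rescale by the ratio of the beam areas.

    ```python
    import numpy as np

    def feather(single_dish, interferometer, crossover_pix=20.0):
        """Fourier-plane combination (feathering) of two co-registered images.

        single_dish    : low-resolution image sampling the large spatial scales
        interferometer : high-resolution image missing the short spacings
        crossover_pix  : radius (Fourier-plane pixels) of the Gaussian taper
                         controlling where the transition happens
        """
        F_low = np.fft.fft2(single_dish)
        F_high = np.fft.fft2(interferometer)

        ny, nx = single_dish.shape
        fy = np.fft.fftfreq(ny) * ny
        fx = np.fft.fftfreq(nx) * nx
        r2 = fx[np.newaxis, :] ** 2 + fy[:, np.newaxis] ** 2

        # Low spatial frequencies come mostly from the single dish,
        # high spatial frequencies mostly from the interferometer.
        w_low = np.exp(-r2 / (2.0 * crossover_pix ** 2))
        combined = F_low * w_low + F_high * (1.0 - w_low)
        return np.fft.ifft2(combined).real
    ```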

  7. Recognizing 3D Objects from 2D Images Using Structural Knowledge Base of Genetic Views

    DTIC Science & Technology

    1988-08-31

    technical report. [BIE85] I. Biederman, "Human image understanding: Recent research and a theory", Computer Vision, Graphics, and Image Processing, vol. ... model bases", Technical Report 87-85, COINS Dept., University of Massachusetts, Amherst, MA 01003, August 1987. [BUR87b] Burns, J. B. and L. J. Kitchen, ... "Recognition in 2D images of 3D objects from large model bases using prediction hierarchies", Proc. IJCAI-10, 1987. [BUR89] J. B. Burns, forthcoming

  8. Ruby-Helix: an implementation of helical image processing based on object-oriented scripting language.

    PubMed

    Metlagel, Zoltan; Kikkawa, Yayoi S; Kikkawa, Masahide

    2007-01-01

    Helical image analysis in combination with electron microscopy has been used to study three-dimensional structures of various biological filaments or tubes, such as microtubules, actin filaments, and bacterial flagella. A number of packages have been developed to carry out helical image analysis. Some biological specimens, however, have a symmetry break (seam) in their three-dimensional structure, even though their subunits are mostly arranged in a helical manner. We refer to these objects as "asymmetric helices". All the existing packages are designed for helically symmetric specimens, and do not allow analysis of asymmetric helical objects, such as microtubules with seams. Here, we describe Ruby-Helix, a new set of programs for the analysis of "helical" objects with or without a seam. Ruby-Helix is built on top of the Ruby programming language and is the first implementation of asymmetric helical reconstruction for practical image analysis. It also allows easier and semi-automated analysis, performing iterative unbending and accurate determination of the repeat length. As a result, Ruby-Helix enables us to analyze motor-microtubule complexes with higher throughput to higher resolution.

  9. Stochastic HKMDHE: A multi-objective contrast enhancement algorithm

    NASA Astrophysics Data System (ADS)

    Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Maity, Srideep; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2018-02-01

    This contribution proposes a novel extension of the existing `Hyper Kurtosis based Modified Duo-Histogram Equalization' (HKMDHE) algorithm for multi-objective contrast enhancement of biomedical images. A novel modified objective function is formulated by jointly optimizing the individual histogram equalization objectives. The adequacy of the proposed methodology is experimentally validated with respect to image quality metrics such as brightness preservation, peak signal-to-noise ratio (PSNR), the Structural Similarity Index (SSIM), and the universal image quality metric. A comparative performance analysis of the proposed Stochastic HKMDHE against existing histogram equalization methods, namely Global Histogram Equalization (GHE) and Contrast Limited Adaptive Histogram Equalization (CLAHE), is also provided.
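
    The abstract does not name an implementation, so as a minimal sketch of how such image quality metrics can be computed, the snippet below scores an enhanced grayscale image against its original using scikit-image (an assumed dependency); the mean-brightness shift stands in for the brightness-preservation check.

    ```python
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def contrast_enhancement_scores(original, enhanced):
        """Quality metrics for an enhanced grayscale image (uint8 arrays assumed)."""
        psnr = peak_signal_noise_ratio(original, enhanced, data_range=255)
        ssim = structural_similarity(original, enhanced, data_range=255)
        # Mean-brightness difference as a crude brightness-preservation check
        brightness_shift = float(enhanced.mean()) - float(original.mean())
        return {"PSNR": psnr, "SSIM": ssim, "brightness_shift": brightness_shift}
    ```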

  10. Spatial and symbolic queries for 3D image data

    NASA Astrophysics Data System (ADS)

    Benson, Daniel C.; Zick, Gregory L.

    1992-04-01

    We present a query system for an object-oriented biomedical imaging database containing 3-D anatomical structures and their corresponding 2-D images. The graphical interface facilitates the formation of spatial queries, nonspatial or symbolic queries, and combined spatial/symbolic queries. A query editor is used for the creation and manipulation of 3-D query objects as volumes, surfaces, lines, and points. Symbolic predicates are formulated through a combination of text fields and multiple choice selections. Query results, which may include images, image contents, composite objects, graphics, and alphanumeric data, are displayed in multiple views. Objects returned by the query may be selected directly within the views for further inspection or modification, or for use as query objects in subsequent queries. Our image database query system provides visual feedback and manipulation of spatial query objects, multiple views of volume data, and the ability to combine spatial and symbolic queries. The system allows for incremental enhancement of existing objects and the addition of new objects and spatial relationships. The query system is designed for databases containing symbolic and spatial data. This paper discusses its application to data acquired in biomedical 3-D image reconstruction, but it is applicable to other areas such as CAD/CAM, geographical information systems, and computer vision.

  11. Structured-illumination reflectance imaging (SIRI) for enhanced detection of fresh bruises in apples

    USDA-ARS?s Scientific Manuscript database

    A structured-illumination reflectance imaging technique was developed for the detection of fresh bruises in apples. Experiments were first conducted on a strongly scattering nylon sample embedded with foreign objects of different sizes at different depths, and then on apples of two different cultiva...

  12. What Images Reveal: a Comparative Study of Science Images between Australian and Taiwanese Junior High School Textbooks

    NASA Astrophysics Data System (ADS)

    Ge, Yun-Ping; Unsworth, Len; Wang, Kuo-Hua; Chang, Huey-Por

    2017-07-01

    From a social semiotic perspective, image designs in science textbooks are inevitably influenced by the sociocultural context in which the books are produced. The learning environments of Australia and Taiwan vary greatly. Drawing on social semiotics and cognitive science, this study compares classificational images in Australian and Taiwanese junior high school science textbooks. Classificational images are important kinds of images, which can represent taxonomic relations among objects as reported by Kress and van Leeuwen (Reading images: the grammar of visual design, 2006). An analysis of the images from sample chapters in Australian and Taiwanese high school science textbooks showed that the majority of the Taiwanese images are covert taxonomies, which represent hierarchical relations implicitly. In contrast, Australian classificational images included diversified designs, but particularly types with a tree structure which depicted overt taxonomies, explicitly representing hierarchical super-ordinate and subordinate relations. Many of the Taiwanese images are reminiscent of the specimen images in eighteenth century science texts representing "what truly is", while more Australian images emphasize structural objectivity. Moreover, Australian images support cognitive functions which facilitate reading comprehension. The relationships between image designs and learning environments are discussed and implications for textbook research and design are addressed.

  13. Electron Tomography: A Three-Dimensional Analytic Tool for Hard and Soft Materials Research.

    PubMed

    Ercius, Peter; Alaidi, Osama; Rames, Matthew J; Ren, Gang

    2015-10-14

    Three-dimensional (3D) structural analysis is essential to understand the relationship between the structure and function of an object. Many analytical techniques, such as X-ray diffraction, neutron spectroscopy, and electron microscopy imaging, are used to provide structural information. Transmission electron microscopy (TEM), one of the most popular analytic tools, has been widely used for structural analysis in both physical and biological sciences for many decades, in which 3D objects are projected into two-dimensional (2D) images. In many cases, 2D-projection images are insufficient to understand the relationship between the 3D structure and the function of nanoscale objects. Electron tomography (ET) is a technique that retrieves 3D structural information from a tilt series of 2D projections, and is gradually becoming a mature technology with sub-nanometer resolution. Distinct methods to overcome sample-based limitations have been separately developed in both physical and biological science, although they share some basic concepts of ET. This review discusses the common basis for 3D characterization, and specifies difficulties and solutions regarding both hard and soft materials research. It is hoped that novel solutions based on current state-of-the-art techniques for advanced applications in hybrid matter systems can be motivated. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Electron Tomography: A Three-Dimensional Analytic Tool for Hard and Soft Materials Research

    PubMed Central

    Alaidi, Osama; Rames, Matthew J.

    2016-01-01

    Three-dimensional (3D) structural analysis is essential to understand the relationship between the structure and function of an object. Many analytical techniques, such as X-ray diffraction, neutron spectroscopy, and electron microscopy imaging, are used to provide structural information. Transmission electron microscopy (TEM), one of the most popular analytic tools, has been widely used for structural analysis in both physical and biological sciences for many decades, in which 3D objects are projected into two-dimensional (2D) images. In many cases, 2D-projection images are insufficient to understand the relationship between the 3D structure and the function of nanoscale objects. Electron tomography (ET) is a technique that retrieves 3D structural information from a tilt series of 2D projections, and is gradually becoming a mature technology with sub-nanometer resolution. Distinct methods to overcome sample-based limitations have been separately developed in both physical and biological science, although they share some basic concepts of ET. This review discusses the common basis for 3D characterization, and specifies difficulties and solutions regarding both hard and soft materials research. It is hoped that novel solutions based on current state-of-the-art techniques for advanced applications in hybrid matter systems can be motivated. PMID:26087941

  15. Motion estimation of subcellular structures from fluorescence microscopy images.

    PubMed

    Vallmitjana, A; Civera-Tregon, A; Hoenicka, J; Palau, F; Benitez, R

    2017-07-01

    We present an automatic image processing framework to study moving intracellular structures from live cell fluorescence microscopy. The system includes the identification of static and dynamic structures from time-lapse images using data clustering as well as the identification of the trajectory of moving objects with a probabilistic tracking algorithm. The method has been successfully applied to study mitochondrial movement in neurons. The approach provides excellent performance under different experimental conditions and is robust to common sources of noise including experimental, molecular and biological fluctuations.

  16. Tracking multiple particles in fluorescence time-lapse microscopy images via probabilistic data association.

    PubMed

    Godinez, William J; Rohr, Karl

    2015-02-01

    Tracking subcellular structures as well as viral structures displayed as 'particles' in fluorescence microscopy images yields quantitative information on the underlying dynamical processes. We have developed an approach for tracking multiple fluorescent particles based on probabilistic data association. The approach combines a localization scheme that uses a bottom-up strategy based on the spot-enhancing filter as well as a top-down strategy based on an ellipsoidal sampling scheme that uses the Gaussian probability distributions computed by a Kalman filter. The localization scheme yields multiple measurements that are incorporated into the Kalman filter via a combined innovation, where the association probabilities are interpreted as weights calculated using an image likelihood. To track objects in close proximity, we compute the support of each image position relative to the neighboring objects of a tracked object and use this support to recalculate the weights. To cope with multiple motion models, we integrated the interacting multiple model algorithm. The approach has been successfully applied to synthetic 2-D and 3-D images as well as to real 2-D and 3-D microscopy images, and the performance has been quantified. In addition, the approach was successfully applied to the 2-D and 3-D image data of the recent Particle Tracking Challenge at the IEEE International Symposium on Biomedical Imaging (ISBI) 2012.
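
    The tracker builds on the Kalman filter predict/update cycle. Below is a minimal constant-velocity sketch for one particle in 2-D (numpy assumed); the combined innovation, association probabilities, and interacting multiple model parts of the published approach are not shown, and the noise covariances are illustrative assumptions.

    ```python
    import numpy as np

    dt = 1.0  # frame interval; state = [x, y, vx, vy]
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # constant-velocity transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # only position is measured
    Q = 0.01 * np.eye(4)                         # process noise (assumed)
    R = 1.0 * np.eye(2)                          # measurement noise (assumed)

    def kalman_step(x, P, z):
        """One predict/update cycle: previous state x, covariance P, measurement z."""
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R                 # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)    # correct with the innovation
        P_new = (np.eye(4) - K @ H) @ P_pred
        return x_new, P_new
    ```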

  17. Vibration mode imaging.

    PubMed

    Zhang, Xiaoming; Zeraati, Mohammad; Kinnick, Randall R; Greenleaf, James F; Fatemi, Mostafa

    2007-06-01

    A new method for imaging the vibration mode of an object is investigated. The radiation force of ultrasound is used to scan the object at a resonant frequency of the object. The vibration of the object is measured by laser and the resulting acoustic emission from the object is measured by a hydrophone. It is shown that the measured signal is proportional to the value of the mode shape at the focal point of the ultrasound beam. Experimental studies are carried out on a mechanical heart valve and arterial phantoms. The mode images on the valve are made by the hydrophone measurement and confirmed by finite-element method simulations. Compared with conventional B-scan imaging on arterial phantoms, the mode imaging can show not only the interface of the artery and the gelatin, but also the vibration modes of the artery. The images taken on the phantom surface suggest that an image of an interior artery can be made by vibration measurements on the surface of the body. However, the image of the artery can be improved if the vibration of the artery is measured directly. Imaging of the structure in the gelatin or tissue can be enhanced by small bubbles and contrast agents.

  18. Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text.

    PubMed

    Anderson, Andrew James; Bruni, Elia; Lopopolo, Alessandro; Poesio, Massimo; Baroni, Marco

    2015-10-15

    Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual-specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and conversely text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Image variance and spatial structure in remotely sensed scenes. [South Dakota, California, Missouri, Kentucky, Louisiana, Tennessee, District of Columbia, and Oregon]

    NASA Technical Reports Server (NTRS)

    Woodcock, C. E.; Strahler, A. H.

    1984-01-01

    Digital images derived by scanning air photos and through acquiring aircraft and spacecraft scanner data were studied. Results show that spatial structure in scenes can be measured and logically related to texture and image variance. The imagery used covered a South Dakota forest; a housing development in Canoga Park, California; an agricultural area in Mississippi, Louisiana, Kentucky, and Tennessee; the city of Washington, D.C.; and the Klamath National Forest. Local variance, measured as the average standard deviation of brightness values within a three-by-three moving window, reaches a peak at a resolution cell size about two-thirds to three-fourths the size of the objects within the scene. If objects are smaller than the resolution cell size of the image, this peak does not occur and local variance simply decreases with increasing resolution as spatial averaging occurs. Variograms can also reveal the size, shape, and density of objects in the scene.
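
    The local variance measure described above is straightforward to reproduce; a minimal sketch (scipy assumed) that returns the average standard deviation of brightness within a moving window:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_variance(image, window=3):
        """Average standard deviation within a moving window (default 3-by-3)."""
        img = image.astype(float)
        mean = uniform_filter(img, size=window)
        mean_sq = uniform_filter(img * img, size=window)
        std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
        return std.mean()
    ```

    Evaluating this measure on the same scene resampled to progressively coarser cell sizes and locating the peak gives the cell size at which scene objects are best resolved, as described in the abstract.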

  20. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.

  1. Spatial image modulation to improve performance of computed tomography imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Bearman, Gregory H. (Inventor); Wilson, Daniel W. (Inventor); Johnson, William R. (Inventor)

    2010-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having patterns for imposing spatial structure are provided. The pattern may be imposed either directly on the object scene being imaged or at the field stop aperture. The use of the pattern improves the accuracy of the captured spatial and spectral information.

  2. Bayesian Multiscale Modeling of Closed Curves in Point Clouds

    PubMed Central

    Gu, Kelvin; Pati, Debdeep; Dunson, David B.

    2014-01-01

    Modeling object boundaries based on image or point cloud data is frequently necessary in medical and scientific applications ranging from detecting tumor contours for targeted radiation therapy, to the classification of organisms based on their structural information. In low-contrast images or sparse and noisy point clouds, there is often insufficient data to recover local segments of the boundary in isolation. Thus, it becomes critical to model the entire boundary in the form of a closed curve. To achieve this, we develop a Bayesian hierarchical model that expresses highly diverse 2D objects in the form of closed curves. The model is based on a novel multiscale deformation process. By relating multiple objects through a hierarchical formulation, we can successfully recover missing boundaries by borrowing structural information from similar objects at the appropriate scale. Furthermore, the model’s latent parameters help interpret the population, indicating dimensions of significant structural variability and also specifying a ‘central curve’ that summarizes the collection. Theoretical properties of our prior are studied in specific cases and efficient Markov chain Monte Carlo methods are developed, evaluated through simulation examples and applied to panorex teeth images for modeling teeth contours and also to a brain tumor contour detection problem. PMID:25544786

  3. Learning-based stochastic object models for use in optimizing imaging systems

    NASA Astrophysics Data System (ADS)

    Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua

    2017-03-01

    It is widely known that the optimization of imaging systems based on objective, or task-based, measures of image quality via computer simulation requires use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in anatomy within a specified ensemble of patients remains a challenging task. Because they are established by use of image data corresponding to a single patient, previously reported numerical anatomical models lack the ability to accurately model inter-patient variations in anatomy. In certain applications, however, databases of high-quality volumetric images are available that can facilitate this task. In this work, a novel and tractable methodology for learning a SOM from a set of volumetric training images is developed. The proposed method is based upon geometric attribute distribution (GAD) models, which characterize the inter-structural centroid variations and the intra-structural shape variations of each individual anatomical structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations learned from training data. By use of the GAD models, random organ shapes and positions can be generated and integrated to form an anatomical phantom. The randomness in organ shape and position will reflect the variability of anatomy present in the training data. To demonstrate the methodology, a SOM corresponding to the pelvis of an adult male was computed and a corresponding ensemble of phantoms was created. Additionally, computer-simulated X-ray projection images corresponding to the phantoms were computed, from which tomographic images were reconstructed.

  4. Combining Automatic Tube Current Modulation with Adaptive Statistical Iterative Reconstruction for Low-Dose Chest CT Screening

    PubMed Central

    Chen, Jiang-Hong; Jin, Er-Hu; He, Wen; Zhao, Li-Qin

    2014-01-01

    Objective To reduce radiation dose while maintaining image quality in low-dose chest computed tomography (CT) by combining adaptive statistical iterative reconstruction (ASIR) and automatic tube current modulation (ATCM). Methods Patients undergoing cancer screening (n = 200) were subjected to 64-slice multidetector chest CT scanning with ASIR and ATCM. Patients were divided into groups 1, 2, 3, and 4 (n = 50 each), with a noise index (NI) of 15, 20, 30, and 40, respectively. Each image set was reconstructed with 4 ASIR levels (0% ASIR, 30% ASIR, 50% ASIR, and 80% ASIR) in each group. Two radiologists assessed subjective image noise, image artifacts, and visibility of the anatomical structures. Objective image noise and signal-to-noise ratio (SNR) were measured, and effective dose (ED) was recorded. Results Increased NI was associated with increased subjective and objective image noise results (P<0.001), and SNR decreased with increasing NI (P<0.001). These values improved with increased ASIR levels (P<0.001). Images from all 4 groups were clinically diagnosable. Images with NI = 30 and 50% ASIR had average subjective image noise scores and nearly average anatomical structure visibility scores, with a mean objective image noise of 23.42 HU. The EDs for groups 1, 2, 3 and 4 were 2.79±1.17, 1.69±0.59, 0.74±0.29, and 0.37±0.22 mSv, respectively. Compared to group 1 (NI = 15), the ED reductions were 39.43%, 73.48%, and 86.74% for groups 2, 3, and 4, respectively. Conclusions Using NI = 30 with 50% ASIR in the chest CT protocol, we obtained average or above-average image quality but a reduced ED. PMID:24691208
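
    The reported dose reductions follow directly from the group effective doses; a quick arithmetic check with group 1 (NI = 15) as the reference:

    ```python
    # Mean effective doses (mSv) reported for the four noise-index groups
    ed = {"NI=15": 2.79, "NI=20": 1.69, "NI=30": 0.74, "NI=40": 0.37}

    reference = ed["NI=15"]
    for group, dose in ed.items():
        reduction = 100.0 * (reference - dose) / reference
        print(f"{group}: ED = {dose:.2f} mSv, reduction = {reduction:.2f}%")
    # -> 0.00%, 39.43%, 73.48%, 86.74%, matching the values quoted above
    ```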

  5. Terahertz holography for imaging amplitude and phase objects.

    PubMed

    Hack, Erwin; Zolliker, Peter

    2014-06-30

    A non-monochromatic THz Quantum Cascade Laser and an uncooled micro-bolometer array detector with VGA resolution are used in a beam-splitter free holographic set-up to measure amplitude and phase objects in transmission. Phase maps of the diffraction pattern are retrieved using the Fourier transform carrier fringe method; while a Fresnel-Kirchhoff back propagation algorithm is used to reconstruct the complex object image. A lateral resolution of 280 µm and a relative phase sensitivity of about 0.5 rad are estimated from reconstructed images of a metallic Siemens star and a polypropylene test structure, respectively. Simulations corroborate the experimental results.
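
    The paper reconstructs the complex object image with a Fresnel-Kirchhoff back-propagation algorithm; as a hedged stand-in, the sketch below uses the closely related angular-spectrum propagator (numpy assumed, square pixels) to propagate a recorded complex field numerically, where passing a negative distance z back-propagates toward the object plane.

    ```python
    import numpy as np

    def angular_spectrum_propagate(field, wavelength, pixel_pitch, z):
        """Propagate a complex field by distance z (negative z back-propagates).
        Evanescent components are suppressed."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pixel_pitch)
        fy = np.fft.fftfreq(ny, d=pixel_pitch)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
        H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)  # free-space transfer function
        return np.fft.ifft2(np.fft.fft2(field) * H)
    ```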

  6. Iterative optimizing quantization method for reconstructing three-dimensional images from a limited number of views

    DOEpatents

    Lee, H.R.

    1997-11-18

    A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object. 5 figs.

  7. Tomographic image via background subtraction using an x-ray projection image and a priori computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Jin; Yi Byongyong; Lasio, Giovanni

    Kilovoltage x-ray projection images (kV images for brevity) are increasingly available in image guided radiotherapy (IGRT) for patient positioning. These images are two-dimensional (2D) projections of a three-dimensional (3D) object along the x-ray beam direction. Projecting a 3D object onto a plane may lead to ambiguities in the identification of anatomical structures and to poor contrast in kV images. Therefore, the use of kV images in IGRT is mainly limited to bony landmark alignments. This work proposes a novel subtraction technique that isolates a slice of interest (SOI) from a kV image with the assistance of a priori information from a previous CT scan. The method separates structural information within a preselected SOI by suppressing contributions to the unprocessed projection from out-of-SOI-plane structures. Up to a five-fold increase in the contrast-to-noise ratios (CNRs) was observed in selected regions of the isolated SOI, when compared to the original unprocessed kV image. The tomographic image via background subtraction (TIBS) technique aims to provide a quick snapshot of the slice of interest with greatly enhanced image contrast over conventional kV x-ray projections for fast and accurate image guidance of radiation therapy. With further refinements, TIBS could, in principle, provide real-time tumor localization using gantry-mounted x-ray imaging systems without the need for implanted markers.

  8. X-Ray Backscatter Imaging for Aerospace Applications

    NASA Astrophysics Data System (ADS)

    Shedlock, Daniel; Edwards, Talion; Toh, Chin

    2011-06-01

    Scatter x-ray imaging (SXI) is a real time, digital, x-ray backscatter imaging technique that allows radiographs to be taken from one side of an object. This x-ray backscatter imaging technique offers many advantages over conventional transmission radiography that include single-sided access and extremely low radiation fields compared to conventional open source industrial radiography. Examples of some applications include the detection of corrosion, foreign object debris, water intrusion, cracking, impact damage and leak detection in a variety of material such as aluminum, composites, honeycomb structures, and titanium.

  9. A simple and low-cost structured illumination microscopy using a pico-projector

    NASA Astrophysics Data System (ADS)

    Özgürün, Baturay

    2018-02-01

    Here, the development of a low-cost structured illumination microscopy (SIM) system based on a pico-projector is presented. The pico-projector consists of independent red, green, and blue LEDs that remove the need for an external illumination source. Moreover, the display element of the pico-projector serves as a pattern-generating spatial light modulator. A simple lens group is employed to couple light from the projector to an epi-illumination port of a commercial microscope system. 2D sub-SIM images are acquired and synthesized to surpass the diffraction limit using a 40x (0.75 NA) objective. The resolution of the reconstructed SIM images is verified with a dye-and-object object and a fixed cell sample.

  10. Structural investigation of the Grenville Province by radar and other imaging and nonimaging sensors

    NASA Technical Reports Server (NTRS)

    Lowman, P. D., Jr.; Blodget, H. W.; Webster, W. J., Jr.; Paia, S.; Singhroy, V. H.; Slaney, V. R.

    1984-01-01

    The structural investigation of the Canadian Shield by orbital radar and LANDSAT is outlined. The area includes parts of the central metasedimentary belt and the Ontario gneiss belt, and major structures are well expressed topographically. The primary objective is to apply SIR-B data to the mapping of this key part of the Grenville orogen, specifically ductile fold structures and associated features, and igneous, metamorphic, and sedimentary rock (including glacial and recent sediments). Secondary objectives are to support the Canadian RADARSAT project by evaluating the baseline parameters of a Canadian imaging radar satellite planned for late in the decade. The baseline parameters include optimum incidence and azimuth angles. A further aim of the experiment is to develop techniques for the use of multiple data sets.

  11. A study on high NA and evanescent imaging with polarized illumination

    NASA Astrophysics Data System (ADS)

    Yang, Seung-Hune

    Simulation techniques are developed for high NA polarized microscopy with Babinet's principle, partial coherence and vector diffraction for non-periodic geometries. A mathematical model for the Babinet approach is developed and interpreted. Simulation results of the Babinet's principle approach are compared with those of Rigorous Coupled Wave Theory (RCWT) for periodic structures to investigate the accuracy of this approach and its limitations. A microscope system using a special solid immersion lens (SIL) is introduced to image Blu-Ray (BD) optical disc samples without removing the protective cover layer. Aberration caused by the cover layer is minimized with a truncated SIL. Sub-surface imaging simulation is achieved by RCWT, partial coherence, vector diffraction and Babinet's principle. Simulated results are compared with experimental images and atomic force microscopy (AFM) measurements. A technique for obtaining native and induced polarization images using a significant amount of evanescent energy is described for a solid immersion lens (SIL) microscope. Characteristics of native and induced polarization images for different object structures and materials are studied in detail. Experiments are conducted with an NA = 1.48 microscope at lambda = 550 nm. Near-field images are simulated and analyzed with an RCWT approach. Contrast curve versus object spatial frequency calculations are compared with experimental measurements. Dependencies of contrast versus source polarization angles and air gap for native and induced polarization image profiles are evaluated. By using the relationship between induced polarization and topographical structure, an induced polarization image of an alternating phase shift mask (PSM) is converted into a topographical image, which shows very good agreement with AFM measurement. Images of other material structures include a dielectric grating, chrome-on-glass grating, silicon CPU structure, BD-R and BD-ROM.

  12. An active seismic experiment at Tenerife Island (Canary Island, Spain): Imaging an active volcano edifice

    NASA Astrophysics Data System (ADS)

    Garcia-Yeguas, A.; Ibañez, J. M.; Rietbrock, A.; Tom-Teidevs, G.

    2008-12-01

    An active seismic experiment to study the internal structure of Teide Volcano was carried out on Tenerife, a volcanic island in Spain's Canary Islands. The main objective of the TOM-TEIDEVS experiment is to obtain a 3-dimensional structural image of Teide Volcano using seismic tomography and seismic reflection/refraction imaging techniques. At present, knowledge of the deeper structure of Teide and Tenerife is very limited, with proposed structural models mainly based on sparse geophysical and geological data. This multinational experiment which involves institutes from Spain, Italy, the United Kingdom, Ireland, and Mexico will generate a unique high resolution structural image of the active volcano edifice and will further our understanding of volcanic processes.

  13. Imaging an Active Volcano Edifice at Tenerife Island, Spain

    NASA Astrophysics Data System (ADS)

    Ibáñez, Jesús M.; Rietbrock, Andreas; García-Yeguas, Araceli

    2008-08-01

    An active seismic experiment to study the internal structure of Teide volcano is being carried out on Tenerife, a volcanic island in Spain's Canary Islands archipelago. The main objective of the Tomography at Teide Volcano Spain (TOM-TEIDEVS) experiment, begun in January 2007, is to obtain a three-dimensional (3-D) structural image of Teide volcano using seismic tomography and seismic reflection/refraction imaging techniques. At present, knowledge of the deeper structure of Teide and Tenerife is very limited, with proposed structural models based mainly on sparse geophysical and geological data. The multinational experiment-involving institutes from Spain, the United Kingdom, Italy, Ireland, and Mexico-will generate a unique high-resolution structural image of the active volcano edifice and will further our understanding of volcanic processes.

  14. Mental visualization of objects from cross-sectional images

    PubMed Central

    Wu, Bing; Klatzky, Roberta L.; Stetten, George D.

    2011-01-01

    We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in-situ vs. ex-situ, which differed in whether cross sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex-situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level. PMID:22217386

  15. Perceptually relevant grouping of image tokens on the basis of constraint propagation from local binary patterns

    NASA Astrophysics Data System (ADS)

    Behlim, Sadaf Iqbal; Syed, Tahir Qasim; Malik, Muhammad Yameen; Vigneron, Vincent

    2016-11-01

    Grouping image tokens is an intermediate step needed to arrive at meaningful image representation and summarization. Usually, perceptual cues, for instance gestalt properties, inform token grouping. However, they do not take into account structural continuities that could be derived from other tokens belonging to similar structures irrespective of their location. We propose an image representation that encodes structural constraints emerging from local binary patterns (LBP), which provides a long-distance measure of similarity but in a structurally connected way. Our representation provides a grouping of pixels or larger image tokens that is free of numeric similarity measures and could therefore be extended to nonmetric spaces. The representation lends itself nicely to ubiquitous image processing applications such as connected component labeling and segmentation. We test our proposed representation on the perceptual grouping or segmentation task on the popular Berkeley segmentation dataset (BSD500), achieving an average F-measure of 0.559 with respect to human-segmented images. Our algorithm achieves a high average recall of 0.787 and is therefore well-suited to other applications such as object retrieval and category-independent object recognition. The proposed merging heuristic based on levels of singular tree component has shown promising results on the BSD500 dataset and currently ranks 12th among all benchmarked algorithms, but contrary to the others, it requires no data-driven training or specialized preprocessing.
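
    For reference, a minimal 8-neighbour LBP code computation is sketched below (numpy assumed); the constraint-propagation and grouping stages of the proposed representation are not reproduced here.

    ```python
    import numpy as np

    def lbp8(image):
        """8-neighbour local binary pattern codes; border pixels are left as zero."""
        img = image.astype(float)
        codes = np.zeros(img.shape, dtype=np.uint8)
        # Neighbour offsets enumerated clockwise from the top-left
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        center = img[1:-1, 1:-1]
        acc = np.zeros(center.shape, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(offsets):
            neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                            1 + dx:img.shape[1] - 1 + dx]
            acc |= (neighbour >= center).astype(np.uint8) << bit
        codes[1:-1, 1:-1] = acc
        return codes
    ```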

  16. Hybrid-coded 3D structured illumination imaging with Bayesian estimation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Chen, Hsi-Hsun; Luo, Yuan; Singh, Vijay R.

    2016-03-01

    Light-induced fluorescence microscopy has long been used to observe and understand objects at the microscale, such as cellular samples. However, the transfer function of a lens-based imaging system limits the resolution, so the fine, detailed structure of a sample cannot be identified clearly. Resolution-enhancement techniques aim to break the resolution limit of a given objective. In the past decades, resolution-enhanced imaging has been investigated through a variety of strategies, including photoactivated localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), stimulated emission depletion (STED), and structured illumination microscopy (SIM). Among these methods, only SIM can intrinsically improve the resolution limit of a system without taking the structural properties of the object into account. In this paper, we develop a SIM method combined with Bayesian estimation and with the optical sectioning capability provided by HiLo processing, yielding high resolution throughout a 3D volume. This 3D SIM provides optical sectioning and resolution enhancement, and is robust to noise owing to the proposed data-driven Bayesian estimation reconstruction. To validate the 3D SIM, we show simulation results of the algorithm and experimental results demonstrating the 3D resolution enhancement.

  17. The BioImage Database Project: organizing multidimensional biological images in an object-relational database.

    PubMed

    Carazo, J M; Stelzer, E H

    1999-01-01

    The BioImage Database Project collects and structures multidimensional data sets recorded by various microscopic techniques relevant to modern life sciences. It provides, as precisely as possible, the circumstances in which the sample was prepared and the data were recorded. It grants access to the actual data and maintains links between related data sets. In order to promote the interdisciplinary approach of modern science, it offers a large set of key words, which covers essentially all aspects of microscopy. Nonspecialists can, therefore, access and retrieve significant information recorded and submitted by specialists in other areas. A key issue of the undertaking is to exploit the available technology and to provide a well-defined yet flexible structure for dealing with data. Its pivotal element is, therefore, a modern object relational database that structures the metadata and ameliorates the provision of a complete service. The BioImage database can be accessed through the Internet. Copyright 1999 Academic Press.

  18. Generating standardized image data for testing and calibrating quantification of volumes, surfaces, lengths, and object counts in fibrous and porous materials using X-ray microtomography.

    PubMed

    Jiřík, Miroslav; Bartoš, Martin; Tomášek, Petr; Malečková, Anna; Kural, Tomáš; Horáková, Jana; Lukáš, David; Suchý, Tomáš; Kochová, Petra; Hubálek Kalbáčová, Marie; Králíčková, Milena; Tonar, Zbyněk

    2018-06-01

    Quantification of the structure and composition of biomaterials using micro-CT requires image segmentation due to the low contrast and overlapping radioopacity of biological materials. The amount of bias introduced by segmentation procedures is generally unknown. We aim to develop software that generates three-dimensional models of fibrous and porous structures with known volumes, surfaces, lengths, and object counts in fibrous materials and to provide a software tool that calibrates quantitative micro-CT assessments. Virtual image stacks were generated using the newly developed software TeIGen, enabling the simulation of micro-CT scans of unconnected tubes, connected tubes, and porosities. A realistic noise generator was incorporated. Forty image stacks were evaluated using micro-CT, and the error between the true known and estimated data was quantified. Starting with geometric primitives, the error of the numerical estimation of surfaces and volumes was eliminated, thereby enabling the quantification of volumes and surfaces of colliding objects. Analysis of the sensitivity of the thresholding upon parameters of generated testing image sets revealed the effects of decreasing resolution and increasing noise on the accuracy of the micro-CT quantification. The size of the error increased with decreasing resolution when the voxel size exceeded 1/10 of the typical object size, which simulated the effect of the smallest details that could still be reliably quantified. Open-source software for calibrating quantitative micro-CT assessments by producing and saving virtually generated image data sets with known morphometric data was made freely available to researchers involved in morphometry of three-dimensional fibrillar and porous structures in micro-CT scans. © 2018 Wiley Periodicals, Inc.

  19. A theory of phase singularities for image representation and its applications to object tracking and image matching.

    PubMed

    Qiao, Yu; Wang, Wei; Minematsu, Nobuaki; Liu, Jianzhuang; Takeda, Mitsuo; Tang, Xiaoou

    2009-10-01

    This paper studies phase singularities (PSs) for image representation. We show that PSs calculated with Laguerre-Gauss filters contain important information and provide a useful tool for image analysis. PSs are invariant to image translation and rotation. We introduce several invariant features to characterize the core structures around PSs and analyze the stability of PSs to noise addition and scale change. We also study the characteristics of PSs in a scale space, which lead to a method to select key scales along phase singularity curves. We demonstrate two applications of PSs: object tracking and image matching. In object tracking, we use the iterative closest point algorithm to determine the correspondences of PSs between two adjacent frames. The use of PSs allows us to precisely determine the motions of tracked objects. In image matching, we combine PSs and scale-invariant feature transform (SIFT) descriptor to deal with the variations between two images and examine the proposed method on a benchmark database. The results indicate that our method can find more correct matching pairs with higher repeatability rates than some well-known methods.

  20. Fibre optic confocal imaging (FOCI) for subsurface microscopy of the colon in vivo.

    PubMed Central

    Delaney, P M; King, R G; Lambert, J R; Harris, M R

    1994-01-01

    Fibre optic confocal imaging (FOCI) is a new type of microscopy which has been recently developed (Delaney et al. 1993). In contrast to conventional light microscopy, FOCI and other confocal techniques allow clear imaging of subsurface structures within translucent objects. However, unlike conventional confocal microscopes which are bulky (because of a need for accurate alignment of large components) FOCI allows the imaging end to be miniaturised and relatively mobile. FOCI is thus particularly suited for clear subsurface imaging of structures within living animals or subjects. The aim of the present study was to assess the suitability of using FOCI for imaging of subsurface structures within the colon, both in vitro (human and rat biopsies) and in vivo (in rats). Images were obtained in fluorescence mode (excitation 488 nm, detection above 515 nm) following topical application of fluorescein. By this technique the glandular structure of the colon was imaged. FOCI is thus suitable for subsurface imaging of the colon in vivo. PMID:8157487

  1. Structural Neuroimaging in Adolescents with a First Psychotic Episode

    ERIC Educational Resources Information Center

    Moreno, Dolores; Burdalo, Maite; Reig, Santiago; Parellada, Mara; Zabala, Arantzazu; Desco, Manuel; Baca-Baldomero, Enrique; Arango, Celso

    2005-01-01

    Objective: The objective of the present study is to replicate findings in first-episode psychosis reporting a smaller volume in brain structures in a population with adolescent onset. Method: Magnetic resonance imaging studies were performed on 23 psychotic adolescents (12-18 years old, 17 males, 6 females) consecutively admitted to an adolescent…

  2. Segmentation of white rat sperm image

    NASA Astrophysics Data System (ADS)

    Bai, Weiguo; Liu, Jianguo; Chen, Guoyuan

    2011-11-01

    The segmentation of sperm images has a profound influence on the analysis of sperm morphology, which plays a significant role in research on animal infertility and reproduction. To overcome the low contrast and heavy noise of microscope images and to obtain better segmentation results, this paper presents a multi-scale gradient operator combined with a multi-structuring element for micro-spermatozoa images of the white rat: the multi-scale gradient operator smooths the noise of the image, while the multi-structuring element retains more shape details of the sperms. We then use the Otsu method to segment the modified gradient image, whose gray scale is strong in the sperms and weak in the background, converting it into a binary sperm image. Because the obtained binary image contains impurities whose shapes are not similar to sperms, we use a form factor to filter out objects whose form factor value is larger than a selected critical value and retain those whose value is not, giving the final binary image of the segmented sperms. Experiments show the advantage of this method for segmentation of micro-spermatozoa images.
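
    A hedged sketch of the thresholding and shape-filtering steps is given below using scikit-image as an assumed dependency; the multi-scale gradient operator and the multi-structuring element are not reproduced, and the critical form-factor value is illustrative.

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    def segment_and_filter(gradient_image, max_form_factor=0.6):
        """Binarize with Otsu's threshold, then keep only objects whose form
        factor 4*pi*A/P**2 stays below a critical value (elongated, sperm-like
        shapes score low; round impurities score high)."""
        binary = gradient_image > threshold_otsu(gradient_image)
        labels = label(binary)
        keep = np.zeros(binary.shape, dtype=bool)
        for region in regionprops(labels):
            if region.perimeter == 0:
                continue
            form_factor = 4.0 * np.pi * region.area / region.perimeter ** 2
            if form_factor <= max_form_factor:
                keep[labels == region.label] = True
        return keep
    ```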

  3. Can state-of-the-art HVS-based objective image quality criteria be used for image reconstruction techniques based on ROI analysis?

    NASA Astrophysics Data System (ADS)

    Dostal, P.; Krasula, L.; Klima, M.

    2012-06-01

    Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Spatial non-uniformity means that different locations in an image are of different importance in terms of perception of the image. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest. The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM based on image structural information, VIF based on the information that the human brain can ideally gain from the reference image, or FSIM utilizing low-level features to assign different importance to each location in the image. But still none of these objective metrics utilizes the analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROIs were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROI in fine quality while the rest of the image is reconstructed with low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.

  4. Toward Imaging of Small Objects with XUV Radiation

    NASA Astrophysics Data System (ADS)

    Sayrac, Muhammed; Kolomenski, Alexandre A.; Boran, Yakup; Schuessler, Hans

    The coherent diffraction imaging (CDI) technique has the potential to capture high-resolution images of nano- or micron-sized structures when using XUV radiation obtained by the high harmonic generation (HHG) process. When a small object is exposed to XUV radiation, a diffraction pattern of the object is created. Advances in coherent HHG make it possible to obtain a photon flux sufficient for XUV imaging. Diffractive imaging with coherent tabletop XUV beams has made nanometer-scale resolution imaging possible by replacing the imaging optics with a computer reconstruction algorithm. In this study, we present our initial work on diffractive imaging using a tabletop XUV source. An initial investigation of imaging a micron-sized mesh with an optimized HHG source is demonstrated. This work was supported in part by the Robert A. Welch Foundation Grant No. A1546 and the Qatar Foundation under the grant NPRP 8-735-1-154. M. Sayrac acknowledges support from the Ministry of National Education of the Republic of Turkey.

  5. Detecting objects in radiographs for homeland security

    NASA Astrophysics Data System (ADS)

    Prasad, Lakshman; Snyder, Hans

    2005-05-01

    We present a general scheme for segmenting a radiographic image into polygons that correspond to visual features. This decomposition provides a vectorized representation that is a high-level description of the image. The polygons correspond to objects or object parts present in the image. This characterization of radiographs allows the direct application of several shape recognition algorithms to identify objects. In this paper we describe the use of constrained Delaunay triangulations as a uniform foundational tool to achieve multiple visual tasks, namely image segmentation, shape decomposition, and parts-based shape matching. Shape decomposition yields parts that serve as tokens representing local shape characteristics. Parts-based shape matching enables the recognition of objects in the presence of occlusions, which commonly occur in radiographs. The polygonal representation of image features affords the efficient design and application of sophisticated geometric filtering methods to detect large-scale structural properties of objects in images. Finally, the representation of radiographs via polygons results in significant reduction of image file sizes and permits the scalable graphical representation of images, along with annotations of detected objects, in the SVG (scalable vector graphics) format that is proposed by the world wide web consortium (W3C). This is a textual representation that can be compressed and encrypted for efficient and secure transmission of information over wireless channels and on the Internet. In particular, our methods described here provide an algorithmic framework for developing image analysis tools for screening cargo at ports of entry for homeland security.

  6. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates for local search within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which adapts naturally to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box in which the SIFT feature is located, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
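
    A minimal sketch of the indexing idea, assuming per-dimension thresholds (e.g., medians of training descriptors) and a plain Python dictionary as the inverted file; this is a simplified stand-in for the paper's binary quantization, not its actual coding scheme.

      import numpy as np
      from collections import defaultdict

      def binarize(desc, thresholds):
          """Quantize a 128-D SIFT descriptor into a bit-vector by per-dimension
          thresholding, then pack the first 64 bits into an integer key."""
          bits = (desc > thresholds).astype(np.uint8)
          return int("".join(map(str, bits[:64])), 2)

      class InvertedFile:
          def __init__(self, thresholds):
              self.thresholds = thresholds
              self.table = defaultdict(list)   # key -> [(image_id, box_id), ...]

          def add(self, desc, image_id, box_id):
              self.table[binarize(desc, self.thresholds)].append((image_id, box_id))

          def query(self, desc):
              return self.table.get(binarize(desc, self.thresholds), [])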

  7. Foreign object detection and removal to improve automated analysis of chest radiographs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogeweg, Laurens; Sanchez, Clara I.; Melendez, Jaime

    2013-07-15

    Purpose: Chest radiographs commonly contain projections of foreign objects, such as buttons, brassier clips, jewellery, or pacemakers and wires. The presence of these structures can substantially affect the output of computer analysis of these images. An automated method is presented to detect, segment, and remove foreign objects from chest radiographs. Methods: Detection is performed using supervised pixel classification with a kNN classifier, resulting in a probability estimate per pixel of belonging to a projected foreign object. Segmentation is performed by grouping and post-processing pixels with a probability above a certain threshold. Next, the objects are replaced by texture inpainting. Results: The method is evaluated in experiments on 257 chest radiographs. The detection at pixel level is evaluated with receiver operating characteristic analysis on pixels within the unobscured lung fields and an A_z value of 0.949 is achieved. Free-response operating characteristic analysis is performed at the object level, and 95.6% of objects are detected with on average 0.25 false positive detections per image. To investigate the effect of removing the detected objects through inpainting, a texture analysis system for tuberculosis detection is applied to images with and without pathology and with and without foreign object removal. Unprocessed, the texture analysis abnormality score of normal images with foreign objects is comparable to those with pathology. After removing foreign objects, the texture score of normal images with and without foreign objects is similar, while abnormal images, whether they contain foreign objects or not, achieve on average higher scores. Conclusions: The authors conclude that removal of foreign objects from chest radiographs is feasible and beneficial for automated image analysis.
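
    A hedged sketch of the detect-then-inpaint pipeline, with cv2.inpaint standing in for the paper's texture inpainting and the assumption that class 1 of the kNN classifier denotes "foreign object" (feature extraction and training data are outside the sketch).

      import numpy as np
      import cv2
      from sklearn.neighbors import KNeighborsClassifier

      def remove_foreign_objects(image_u8, pixel_features, train_X, train_y,
                                 prob_threshold=0.5, knn_k=15):
          """Per-pixel kNN probability of 'foreign object', threshold to a mask,
          then replace the detected pixels by inpainting (simplified substitute
          for texture inpainting)."""
          clf = KNeighborsClassifier(n_neighbors=knn_k).fit(train_X, train_y)
          h, w = image_u8.shape[:2]
          prob = clf.predict_proba(pixel_features)[:, 1].reshape(h, w)  # assumes class 1 = object
          mask = (prob > prob_threshold).astype(np.uint8) * 255
          return cv2.inpaint(image_u8, mask, 3, cv2.INPAINT_TELEA)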

  8. Salient man-made structure detection in infrared images

    NASA Astrophysics Data System (ADS)

    Li, Dong-jie; Zhou, Fu-gen; Jin, Ting

    2013-09-01

    Target detection, segmentation and recognition is a hot research topic in the field of image processing and pattern recognition, and salient area or object detection is one of the core technologies of precision guided weapons. Many theories have been proposed for this problem. In this paper, we detect salient objects in a series of input infrared images by using the classical feature integration theory and Itti's visual attention system. In order to find the salient object in an image accurately, we present a new method to solve the edge blur problem by calculating and using an edge mask. We also greatly improve the computing speed by improving the center-surround differences method: unlike the traditional algorithm, we calculate the center-surround differences through rows and columns separately. Experimental results show that our method is effective in detecting salient objects accurately and rapidly.
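
    The separable center-surround computation might look roughly like the following sketch, assuming a grayscale input and illustrative center/surround sizes (not the paper's parameters).

      import numpy as np
      from scipy.ndimage import uniform_filter1d

      def center_surround(img, center_size=3, surround_size=21):
          """Separable approximation of Itti-style center-surround differences:
          1-D averaging along rows, then along columns, instead of a 2-D filter."""
          def sep_blur(x, size):
              return uniform_filter1d(uniform_filter1d(x, size, axis=0), size, axis=1)
          x = img.astype(np.float32)
          return np.abs(sep_blur(x, center_size) - sep_blur(x, surround_size))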

  9. Real-Time Imaging of Plant Cell Wall Structure at Nanometer Scale, with Respect to Cellulase Accessibility and Degradation Kinetics (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, S. Y.

    Presentation on real-time imaging of plant cell wall structure at nanometer scale. Objectives are to develop tools to measure biomass at the nanometer scale; elucidate the molecular bases of biomass deconstruction; and identify factors that affect the conversion efficiency of biomass-to-biofuels.

  10. Particle detection, number estimation, and feature measurement in gene transfer studies: optical fractionator stereology integrated with digital image processing and analysis.

    PubMed

    King, Michael A; Scotty, Nicole; Klein, Ronald L; Meyer, Edwin M

    2002-10-01

    Assessing the efficacy of in vivo gene transfer often requires a quantitative determination of the number, size, shape, or histological visualization characteristics of biological objects. The optical fractionator has become a choice stereological method for estimating the number of objects, such as neurons, in a structure, such as a brain subregion. Digital image processing and analytic methods can increase detection sensitivity and quantify structural and/or spectral features located in histological specimens. We describe a hardware and software system that we have developed for conducting the optical fractionator process. A microscope equipped with a video camera and motorized stage and focus controls is interfaced with a desktop computer. The computer contains a combination live video/computer graphics adapter with a video frame grabber and controls the stage, focus, and video via a commercial imaging software package. Specialized macro programs have been constructed with this software to execute command sequences requisite to the optical fractionator method: defining regions of interest, positioning specimens in a systematic uniform random manner, and stepping through known volumes of tissue for interactive object identification (optical dissectors). The system affords the flexibility to work with count regions that exceed the microscope image field size at low magnifications and to adjust the parameters of the fractionator sampling to best match the demands of particular specimens and object types. Digital image processing can be used to facilitate object detection and identification, and objects that meet criteria for counting can be analyzed for a variety of morphometric and optical properties. Copyright 2002 Elsevier Science (USA)
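
    The optical fractionator estimate itself is a simple product of the raw dissector counts and the inverse sampling fractions; a minimal sketch with illustrative numbers (not from this study) is given below.

      def fractionator_estimate(counts, ssf, asf, tsf):
          """Optical fractionator population estimate:
          N = (sum of objects counted in the dissectors) x 1/ssf x 1/asf x 1/tsf, where
            ssf = section sampling fraction,
            asf = counting-frame area / x-y sampling-grid area,
            tsf = dissector height / mean section thickness."""
          return sum(counts) * (1.0 / ssf) * (1.0 / asf) * (1.0 / tsf)

      # e.g. 320 cells counted, every 6th section, 50x50 um frame on a 200x200 um grid,
      # 10 um dissector in 25 um thick sections (illustrative values only):
      n_hat = fractionator_estimate([320], ssf=1/6, asf=(50*50)/(200*200), tsf=10/25)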

  11. Multimodal and synthetic aperture approach to full-field 3D shape and displacement measurements

    NASA Astrophysics Data System (ADS)

    Kujawińska, M.; Sitnik, R.

    2017-08-01

    Recently, most measurement tasks in industry, civil engineering and cultural heritage applications require archiving, characterization and monitoring of 3D objects and structures and their performance under changing conditions. These requirements can be met if a multimodal measurement (MM) strategy is applied. It relies on effectively combining the structured light method and 3D digital image correlation with laser scanning/ToF, thermal imaging, multispectral imaging and BRDF measurements. In the case of large and/or complicated objects, MM has to be combined with hierarchical or synthetic aperture (SA) measurements. The new solutions in MM and SA strategies are presented and their applicability is shown in cultural heritage and civil engineering applications.

  12. Characterization of the Interior Density Structure of Near Earth Objects with Muons

    NASA Astrophysics Data System (ADS)

    Prettyman, T. H.; Sykes, M. V.; Miller, R. S.; Pinsky, L. S.; Empl, A.; Nolan, M. C.; Koontz, S. L.; Lawrence, D. J.; Mittlefehldt, D. W.; Reddell, B. D.

    2015-12-01

    Near Earth Objects (NEOs) are a diverse population of short-lived asteroids originating from the main belt and Jupiter family comets. Some have orbits that are easy to access from Earth, making them attractive as targets for science and exploration as well as a potential resource. Some pose a potential impact threat. NEOs have undergone extensive collisional processing, fragmenting and re-accreting to form rubble piles, which may be compositionally heterogeneous (e.g., like 2008 TC3, the precursor to Almahata Sitta). At present, little is known about their interior structure or how these objects are held together. The wide range of inferred NEO macroporosities hint at complex interiors. Information about their density structure would aid in understanding their formation and collisional histories, the risks they pose to human interactions with their surfaces, the constraints on industrial processing of NEO resources, and the selection of hazard mitigation strategies (e.g., kinetic impactor vs nuclear burst). Several methods have been proposed to characterize asteroid interiors, including radar imaging, seismic tomography, and muon imaging (muon radiography and tomography). Of these, only muon imaging has the potential to determine interior density structure, including the relative density of constituent fragments. Muons are produced by galactic cosmic ray showers within the top meter of asteroid surfaces. High-energy muons can traverse large distances through rock with little deflection. Muons transmitted through an Itokawa-sized asteroid can be imaged using a compact hodoscope placed on or near the surface. Challenges include background rejection and correction for variations in muon production with surface density. The former is being addressed by hodoscope design. Surface density variations can be determined via radar or muon limb imaging. The performance of muon imaging is evaluated for prospective NEO interior-mapping missions.

  13. A multi-object statistical atlas adaptive for deformable registration errors in anomalous medical image segmentation

    NASA Astrophysics Data System (ADS)

    Botter Martins, Samuel; Vallin Spina, Thiago; Yasuda, Clarissa; Falcão, Alexandre X.

    2017-02-01

    Statistical atlases have played an important role in automated medical image segmentation. However, a challenge has been to make the atlas more adaptable to possible errors in deformable registration of anomalous images, given that the body structures of interest for segmentation might present significant differences in shape and texture. Recently, deformable registration errors have been accounted for by a method that locally translates the statistical atlas over the test image, after registration, and evaluates candidate objects from a delineation algorithm in order to choose the best one as the final segmentation. In this paper, we improve its delineation algorithm and extend the model to a multi-object statistical atlas, built from control images and adaptable to anomalous images, by incorporating a texture classifier. In order to provide a first proof of concept, we instantiate the new method for segmenting, object-by-object and all objects simultaneously, the left and right brain hemispheres and the cerebellum, without the brainstem, and evaluate it on MR T1 images of epilepsy patients before and after brain surgery, which removed portions of the temporal lobe. The results show an efficiency gain and statistically significantly higher accuracy, measured by the mean Average Symmetric Surface Distance, with respect to the original approach.

  14. The dark side of gloss.

    PubMed

    Kim, Juno; Marlow, Phillip J; Anderson, Barton L

    2012-11-01

    Our visual system relies on the image structure generated by the interaction of light with objects to infer their material properties. One widely studied surface property is gloss, which can provide information that an object is smooth, shiny or wet. Studies have historically focused on the role of specular highlights in modulating perceived gloss. Here we show in human observers that glossy surfaces can generate both bright specular highlights and dark specular 'lowlights', and that the presence of either is sufficient to generate compelling percepts of gloss. We show that perceived gloss declines when the image structure generated by specular lowlights is blurred or misaligned with surrounding surface shading and that perceived gloss can arise from the presence of lowlights in surface regions isolated from highlights. These results suggest that the image structure generated by specular highlights and lowlights is used to construct our experience of surface gloss.

  15. Invisibility cloak with image projection capability

    PubMed Central

    Banerjee, Debasish; Ji, Chengang; Iizuka, Hideo

    2016-01-01

    Investigations of invisibility cloaks have been led by rigorous theories and such cloak structures, in general, require extreme material parameters. Consequently, it is challenging to realize them, particularly in the full visible region. Due to the insensitivity of human eyes to the polarization and phase of light, cloaking a large object in the full visible region has been recently realized by a simplified theory. Here, we experimentally demonstrate a device concept where a large object can be concealed in a cloak structure and at the same time any images can be projected through it by utilizing a distinctively different approach; the cloaking via one polarization and the image projection via the other orthogonal polarization. Our device structure consists of commercially available optical components such as polarizers and mirrors, and therefore, provides a significant further step towards practical application scenarios such as transparent devices and see-through displays. PMID:27958334

  16. Invisibility cloak with image projection capability

    NASA Astrophysics Data System (ADS)

    Banerjee, Debasish; Ji, Chengang; Iizuka, Hideo

    2016-12-01

    Investigations of invisibility cloaks have been led by rigorous theories and such cloak structures, in general, require extreme material parameters. Consequently, it is challenging to realize them, particularly in the full visible region. Due to the insensitivity of human eyes to the polarization and phase of light, cloaking a large object in the full visible region has been recently realized by a simplified theory. Here, we experimentally demonstrate a device concept where a large object can be concealed in a cloak structure and at the same time any images can be projected through it by utilizing a distinctively different approach; the cloaking via one polarization and the image projection via the other orthogonal polarization. Our device structure consists of commercially available optical components such as polarizers and mirrors, and therefore, provides a significant further step towards practical application scenarios such as transparent devices and see-through displays.

  17. Invisibility cloak with image projection capability.

    PubMed

    Banerjee, Debasish; Ji, Chengang; Iizuka, Hideo

    2016-12-13

    Investigations of invisibility cloaks have been led by rigorous theories and such cloak structures, in general, require extreme material parameters. Consequently, it is challenging to realize them, particularly in the full visible region. Due to the insensitivity of human eyes to the polarization and phase of light, cloaking a large object in the full visible region has been recently realized by a simplified theory. Here, we experimentally demonstrate a device concept where a large object can be concealed in a cloak structure and at the same time any images can be projected through it by utilizing a distinctively different approach; the cloaking via one polarization and the image projection via the other orthogonal polarization. Our device structure consists of commercially available optical components such as polarizers and mirrors, and therefore, provides a significant further step towards practical application scenarios such as transparent devices and see-through displays.

  18. Multiview hyperspectral topography of tissue structural and functional characteristics

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Huang, Jiwei; Zhang, Shiwu; Xu, Ronald X.

    2016-01-01

    Accurate and in vivo characterization of structural, functional, and molecular characteristics of biological tissue will facilitate quantitative diagnosis, therapeutic guidance, and outcome assessment in many clinical applications, such as wound healing, cancer surgery, and organ transplantation. We introduced and tested a multiview hyperspectral imaging technique for noninvasive topographic imaging of cutaneous wound oxygenation. The technique integrated a multiview module and a hyperspectral module in a single portable unit. Four plane mirrors were cohered to form a multiview reflective mirror set with a rectangular cross section. The mirror set was placed between a hyperspectral camera and the target biological tissue. For a single image acquisition task, a hyperspectral data cube with five views was obtained. The five-view hyperspectral image consisted of a main objective image and four reflective images. Three-dimensional (3-D) topography of the scene was achieved by correlating the matching pixels between the objective image and the reflective images. 3-D mapping of tissue oxygenation was achieved using a hyperspectral oxygenation algorithm. The multiview hyperspectral imaging technique was validated in a wound model, a tissue-simulating blood phantom, and in vivo biological tissue. The experimental results demonstrated the technical feasibility of using multiview hyperspectral imaging for 3-D topography of tissue functional properties.

  19. The Role of Binocular Disparity in Stereoscopic Images of Objects in the Macaque Anterior Intraparietal Area

    PubMed Central

    Romero, Maria C.; Van Dromme, Ilse C. L.; Janssen, Peter

    2013-01-01

    Neurons in the macaque Anterior Intraparietal area (AIP) encode depth structure in random-dot stimuli defined by gradients of binocular disparity, but the importance of binocular disparity in real-world objects for AIP neurons is unknown. We investigated the effect of binocular disparity on the responses of AIP neurons to images of real-world objects during passive fixation. We presented stereoscopic images of natural and man-made objects in which the disparity information was congruent or incongruent with disparity gradients present in the real-world objects, and images of the same objects where such gradients were absent. Although more than half of the AIP neurons were significantly affected by binocular disparity, the great majority of AIP neurons remained image selective even in the absence of binocular disparity. AIP neurons tended to prefer stimuli in which the depth information derived from binocular disparity was congruent with the depth information signaled by monocular depth cues, indicating that these monocular depth cues have an influence upon AIP neurons. Finally, in contrast to neurons in the inferior temporal cortex, AIP neurons do not represent images of objects in terms of categories such as animate-inanimate, but utilize representations based upon simple shape features including aspect ratio. PMID:23408970

  20. Quadratic grating apodized photon sieves for simultaneous multiplane microscopy

    NASA Astrophysics Data System (ADS)

    Cheng, Yiguang; Zhu, Jiangping; He, Yu; Tang, Yan; Hu, Song; Zhao, Lixin

    2017-10-01

    We present a new type of imaging device, named quadratic grating apodized photon sieve (QGPS), used as the objective for simultaneous multiplane imaging in X-rays. The proposed QGPS is structured based on the combination of two concepts: photon sieves and quadratic gratings. Its design principles are also expounded in detail. Analysis of imaging properties of QGPS in terms of point-spread function shows that QGPS can image multiple layers within an object field onto a single image plane. Simulated and experimental results in visible light both demonstrate the feasibility of QGPS for simultaneous multiplane imaging, which is extremely promising to detect dynamic specimens by X-ray microscopy in the physical and life sciences.

  1. Skeletonization applied to magnetic resonance angiography images

    NASA Astrophysics Data System (ADS)

    Nystroem, Ingela

    1998-06-01

    When interpreting and analyzing magnetic resonance angiography images, the 3D overall tree structure and the thickness of the blood vessels are of interest. This shape information may be easier to obtain from the skeleton of the blood vessels. Skeletonization of digital volume objects denotes either reduction to a 2D structure consisting of 3D surfaces, and curves, or reduction to a 1D structure consisting of 3D curves only. Thin elongated objects, such as blood vessels, are well suited for reduction to curve skeletons. Our results indicate that the tree structure of the vascular system is well represented by the skeleton. Positions for possible artery stenoses may be identified by locating local minima in curve skeletons, where the skeletal voxels are labeled with the distance to the original background.
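
    A minimal sketch of curve skeletonization with distance labeling, assuming a binary vessel volume and using scikit-image/scipy as stand-ins for the authors' implementation (in recent scikit-image versions the plain skeletonize function also accepts 3-D input).

      import numpy as np
      from scipy.ndimage import distance_transform_edt
      from skimage.morphology import skeletonize_3d

      def vessel_skeleton_radii(volume):
          """Reduce a binary vessel volume to a curve skeleton and label each skeletal
          voxel with its distance to the original background (a local radius estimate);
          local minima of these radii along a branch are candidate stenosis positions."""
          skel = skeletonize_3d(volume.astype(bool))
          radii = distance_transform_edt(volume)          # distance to background, in voxels
          coords = np.argwhere(skel)
          return coords, radii[skel.astype(bool)]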

  2. Joint multi-object registration and segmentation of left and right cardiac ventricles in 4D cine MRI

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Jan; Kepp, Timo; Schmidt-Richberg, Alexander; Handels, Heinz

    2014-03-01

    The diagnosis of cardiac function based on cine MRI requires the segmentation of cardiac structures in the images, but the problem of automatic cardiac segmentation is still open, due to the imaging characteristics of cardiac MR images and the anatomical variability of the heart. In this paper, we present a variational framework for joint segmentation and registration of multiple structures of the heart. To enable the simultaneous segmentation and registration of multiple objects, a shape prior term is introduced into a region competition approach for multi-object level set segmentation. The proposed algorithm is applied for simultaneous segmentation of the myocardium as well as the left and right ventricular blood pool in short axis cine MRI images. Two experiments are performed: first, intra-patient 4D segmentation with a given initial segmentation for one time-point in a 4D sequence, and second, a multi-atlas segmentation strategy is applied to unseen patient data. Evaluation of segmentation accuracy is done by overlap coefficients and surface distances. An evaluation based on clinical 4D cine MRI images of 25 patients shows the benefit of the combined approach compared to sole registration and sole segmentation.

  3. In vivo measurements of structure/electrode position changes during respiration for Electrical Impedance Tomography

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Qin, Lihong; Allen, Tadashi; Patterson, Robert

    2010-04-01

    For pulmonary applications of EIT systems, the electrodes are placed around the chest in a 2D ring, and the images are reconstructed based on the assumptions that the object is rigid and that the measured resistivity change in EIT images is caused only by the actual resistivity change of tissue. Structural changes are rarely considered. Previous studies have shown that structural changes which result in tissue/organ and electrode position change tend to introduce artifacts into EIT images of the thorax. Since EIT reconstruction is an ill-posed inverse problem, any inaccurate assumptions about the object may cause large artifacts in reconstructed images. Accurate information on structure/electrode position changes is necessary to understand the factors contributing to the measured resistivity changes and to improve EIT reconstruction algorithms. In this study, in vivo structure/electrode position changes in a healthy male volunteer are investigated during the respiration cycle at two levels, the nipple line level and the level approximately 5 cm below. For each level, sixteen fiduciary markers are equally spaced around the surface, matching the electrode placement for EIT measurements. An MR scanner with respiratory gating capability is used to acquire images of the thorax. MR thoracic images are prospectively acquired corresponding temporally to specific time periods within the respiration cycle (FRC, mid tidal volume, tidal volume). The chest expansion in the anterior-posterior and lateral directions and internal tissue/organ position changes are then analyzed. The electrode position changes corresponding to different phases of the respiration cycle are also measured.

  4. TRENCADIS--a WSRF grid MiddleWare for managing DICOM structured reporting objects.

    PubMed

    Blanquer, Ignacio; Hernandez, Vicente; Segrelles, Damià

    2006-01-01

    The adoption of digital processing of medical data, especially in radiology, has led to the availability of millions of records (images and reports). However, this information is mainly used at the patient level and is organised according to administrative criteria, which makes the extraction of knowledge difficult. Moreover, legal constraints make the direct integration of information systems complex or even impossible. On the other side, the widespread adoption of the DICOM format has led to the inclusion of other information beyond the radiological images themselves. The possibility of coding radiology reports in a structured form, adding semantic information about the data contained in the DICOM objects, eases the process of structuring images according to content. DICOM Structured Reporting (DICOM-SR) is a specification of tags and sections to code and integrate radiology reports, with seamless references to findings and regions of interest of the associated images, movies, waveforms, signals, etc. The work presented in this paper aims at developing a framework to efficiently and securely share medical images and radiology reports, as well as to provide high-throughput processing services. This system is based on an architecture previously developed in the framework of the TRENCADIS project, and uses other components such as the security system and the Grid processing service developed in previous activities. The work presented here introduces a semantic structuring and an ontology framework to organise medical images according to standard terminology and disease coding formats (SNOMED, ICD9, LOINC, ...).

  5. Holographic Imaging of Evolving Laser-Plasma Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Downer, Michael; Shvets, G.

    In the 1870s, English photographer Eadweard Muybridge captured motion pictures within one cycle of a horse’s gallop, which settled a hotly debated question of his time by showing that the horse became temporarily airborne. In the 1940s, Manhattan project photographer Berlin Brixner captured a nuclear blast at a million frames per second, and resolved a dispute about the explosion’s shape and speed. In this project, we developed methods to capture detailed motion pictures of evolving, light-velocity objects created by a laser pulse propagating through matter. These objects include electron density waves used to accelerate charged particles, laser-induced refractive index changes used for micromachining, and ionization tracks used for atmospheric chemical analysis, guide star creation and ranging. Our “movies”, like Muybridge’s and Brixner’s, are obtained in one shot, since the laser-created objects of interest are insufficiently repeatable for accurate stroboscopic imaging. Our high-speed photographs have begun to resolve controversies about how laser-created objects form and evolve, questions that previously could be addressed only by intensive computer simulations based on estimated initial conditions. Resolving such questions helps develop better tabletop particle accelerators, atmospheric ranging devices and many other applications of laser-matter interactions. Our photographic methods all begin by splitting one or more “probe” pulses from the laser pulse that creates the light-speed object. A probe illuminates the object and obtains information about its structure without altering it. We developed three single-shot visualization methods that differ in how the probes interact with the object of interest or are recorded. (1) Frequency-Domain Holography (FDH). In FDH, there are two probes, like “object” and “reference” beams in conventional holography. Our “object” probe surrounds the light-speed object, like fleas swarming around a sprinting animal. The object modifies the probe, imprinting information about its structure. Meanwhile, our “reference” probe co-propagates ahead of the object, free of its influence. After the interaction, object and reference combine to record a hologram. For technical reasons, our recording device is a spectrometer (a frequency-measuring device), hence the name “frequency-domain” holography. We read the hologram electronically to obtain a “snapshot” of the object’s average structure as it transits the medium. Our published work shows numerous snapshots of electron density waves (“laser wakes”) in ionized gas (“plasma”), analogous to a water wake behind a boat. Such waves are the basis of tabletop particle accelerators, in which charged particles surf on the light-speed wave, gaining energy. Comparing our snapshots to computer simulations deepens understanding of laser wakes. FDH takes snapshots of objects that are quasi-static --- i.e. like Muybridge’s horse standing still on a treadmill. If the object changes shape, FDH images blur, as when a subject moves while a camera shutter is open. Many laser-generated objects of interest do evolve as they propagate. To overcome this limit of FDH, we developed … (2) Frequency-Domain Tomography (FDT). In FDT, 5 to 10 probe pulses are fired simultaneously across the object’s path at different angles, like a crossfire of bullets.
    The object imprints a “streaked” record of its evolution on each probe, which we record as in FDH, then recover a multi-frame “movie” of the object’s evolving structure using algorithms of computerized tomography. When propagation distance exceeds a few millimeters, reconstructed FDT images distort. This is because the lenses that image probes to detector have limited depth of field, like cameras that cannot focus simultaneously on both nearby and distant objects. But some laser-generated objects of interest propagate over meters. For these applications we developed … (3) Multi-Object-Plane Phase-Contrast Imaging (MOP-PCI). In MOP-PCI, we image FDT-like probes to the detector from multiple “object planes” --- like recording an event simultaneously with several cameras, some focused on nearby, others on distant, objects. To increase sensitivity, we exploit a phase-contrast imaging technique developed by Dutch Nobel laureate Frits Zernike in the 1930s. Using MOP-PCI we recorded single-shot movies of laser pulse tracks through more than 10 cm of air. We plan to record images of meter-long tracks of electron bunches propagating through plasma in an experiment at the Stanford Linear Accelerator Center (SLAC). This will help SLAC scientists understand, optimize and scale small plasma-based particle accelerators that have applications in medicine, industry, materials science and high-energy physics.

  6. USAKA Long Range Planning Study

    DTIC Science & Technology

    1990-03-01

    effects. Thus, the additional metric potential of RV imaging is not being realized. 3.3.2 Location Determination The location determination function...deceleration), and radiometric measurements allowing determination of object thermal dynamics and modulation by e.g., tumbling. Key issues involved in these... imaging mode, which is based on ISAR principles, allows determination of object structure and free-body and reentry dynamics, while the metric mode again

  7. Visiting two objects in the field of the ring galaxy HRG 2302

    NASA Astrophysics Data System (ADS)

    Faúndez-Abans, M.; Reshetnikov, V. P.; de Oliveira-Abans, M.; Krabbe, A. C.; da Rocha-Poppe, P. C.; Fernandes-Martin, V. A.; Amôres, E. B.; Freitas-Lemes, P.

    2015-02-01

    Aims: We investigate the nature of two galaxies that are located in the field of the ring galaxy HRG 2302. Methods: This study is based on direct BVRI imaging and long-slit spectrophotometric data in the range of 4000-9500 Å obtained with the 1.6 m telescope of the Observatório do Pico dos Dias, Brazil. The spectra were used to determine the radial velocity. Results: The primary objective of the retrieval of the photometric data was to identify the fine structures of objects H and I. In addition, we performed image processing and made a photometric analysis to obtain the integrated standard BVRI magnitudes. The contour maps show evidence of material connecting both galaxies, suggesting that they might be interacting close companions. We estimated redshifts of z = 0.0689 and z = 0.0692. The spectra of the two galaxies resemble those of an early-type galaxy. The fact that the objects have a small radial-velocity difference and the structures around object I suggest an ongoing tidal interaction between the two galaxies. Conclusions: The H-I system seems to be composed of two early-type spiral galaxies (S0/Sa). Galaxy I shows evidence of tidal perturbation: an off-centered bulge, some material extending along the NE direction, and structures that have been enhanced by image filtering procedures. There are some dwarf objects around it. Neither object shows evidence of nuclear activity. Based on observations carried out at the Observatório do Pico dos Dias (OPD), which is operated by LNA/MCTI, and public data from the LNA database.

  8. High resolution Talbot self-imaging applied to structural characterization of self-assembled monolayers of microspheres.

    PubMed

    Garcia-Sucerquia, J; Alvarez-Palacio, D C; Kreuzer, H J

    2008-09-10

    We report the observation of the Talbot self-imaging effect in high resolution digital in-line holographic microscopy (DIHM) and its application to structural characterization of periodic samples. Holograms of self-assembled monolayers of micron-sized polystyrene spheres are reconstructed at different image planes. The point-source method of DIHM and the consequent high lateral resolution allows the true image (object) plane to be identified. The Talbot effect is then exploited to improve the evaluation of the pitch of the assembly and to examine defects in its periodicity.

  9. Visualization index for image-enabled medical records

    NASA Astrophysics Data System (ADS)

    Dong, Wenjie; Zheng, Weilin; Sun, Jianyong; Zhang, Jianguo

    2011-03-01

    With the widespread use of healthcare information technology in hospitals, patients' medical records are becoming more and more complex. To transform text- or image-based medical information into a form that is easily understandable and acceptable to humans, we designed and developed an innovative indexing method that assigns an anatomical 3D structure object to every patient visually, storing indexes of the patient's basic information, historical examination images and RIS report information. When a doctor wants to review a patient's historical records, he or she can first load the anatomical structure object and then view the 3D index of this object using a digital human model toolkit. This prototype system helps doctors easily and visually obtain the complete historical healthcare status of patients, including large amounts of medical data, and quickly locate detailed information, including both reports and images, from medical information systems. In this way, doctors can save time that may be better used to understand information, obtain a more comprehensive understanding of their patients' situations, and provide better healthcare services to patients.

  10. LASER BIOLOGY AND MEDICINE: Visualisation of details of a complicated inner structure of model objects by the method of diffusion optical tomography

    NASA Astrophysics Data System (ADS)

    Tret'yakov, Evgeniy V.; Shuvalov, Vladimir V.; Shutov, I. V.

    2002-11-01

    An approximate algorithm is tested for solving the problem of diffusion optical tomography in experiments on the visualisation of details of the inner structure of strongly scattering model objects containing scattering and semitransparent inclusions, as well as absorbing inclusions located inside other optical inhomogeneities. The stability of the algorithm to errors is demonstrated, which allows its use for a rapid (2 — 3 min) image reconstruction of the details of objects with a complicated inner structure.

  11. Adaptive removal of background and white space from document images using seam categorization

    NASA Astrophysics Data System (ADS)

    Fillion, Claude; Fan, Zhigang; Monga, Vishal

    2011-03-01

    Document images are obtained regularly by rasterization of document content and as scans of printed documents. Resizing via background and white space removal is often desired for better consumption of these images, whether on displays or in print. While white space and background are easy to identify in images, existing methods such as naïve removal and content aware resizing (seam carving) each have limitations that can lead to undesirable artifacts, such as uneven spacing between lines of text or poor arrangement of content. An adaptive method based on image content is hence needed. In this paper we propose an adaptive method to intelligently remove white space and background content from document images. Document images are different from pictorial images in structure. They typically contain objects (text letters, pictures and graphics) separated by uniform background, which include both white paper space and other uniform color background. Pixels in uniform background regions are excellent candidates for deletion if resizing is required, as they introduce less change in document content and style, compared with deletion of object pixels. We propose a background deletion method that exploits both local and global context. The method aims to retain the document structural information and image quality.
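
    A much-simplified sketch of whitespace-row removal for a grayscale document image is shown below; it assumes a uniform background level and is only a baseline illustration, not the paper's seam-categorization method.

      import numpy as np

      def shrink_whitespace_rows(img, bg_tol=8, keep_gap=10):
          """Detect rows containing only (near-)background pixels and keep at most
          keep_gap consecutive ones, so text lines stay separated while large
          empty bands shrink. img is a 2-D grayscale array."""
          bg = np.median(img)                              # assume background dominates
          is_bg_row = np.all(np.abs(img.astype(int) - bg) < bg_tol, axis=1)
          keep, run = [], 0
          for r, bgrow in enumerate(is_bg_row):
              run = run + 1 if bgrow else 0
              if not bgrow or run <= keep_gap:
                  keep.append(r)
          return img[keep]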

  12. Applications of Micro-CT scanning in medicine and dentistry: Microstructural analyses of a Wistar Rat mandible and a urinary tract stone

    NASA Astrophysics Data System (ADS)

    Latief, F. D. E.; Sari, D. S.; Fitri, L. A.

    2017-08-01

    High-resolution tomographic imaging by means of x-ray micro-computed tomography (μCT) has been widely utilized for morphological evaluations in dentistry and medicine. The use of μCT follows a standard procedure: image acquisition, reconstruction, processing, evaluation using image analysis, and reporting of results. This paper discusses methods of μCT using a specific scanning device, the Bruker SkyScan 1173 High Energy Micro-CT. We present a description of the general workflow, information on terminology for the measured parameters and corresponding units, and further analyses that can potentially be conducted with this technology. Brief qualitative and quantitative analyses, including basic image processing (VOI selection and thresholding) and measurement of several morphometrical variables (total VOI volume, object volume, percentage of total volume, total VOI surface, object surface, object surface/volume ratio, object surface density, structure thickness, structure separation, total porosity) were conducted on two samples, the mandible of a Wistar rat and a urinary tract stone, to illustrate the abilities of this device and its accompanying software package. The results of these analyses for both samples are reported, along with a discussion of the types of analyses that are possible using digital images obtained with a μCT scanning device, paying particular attention to non-diagnostic ex vivo research applications.

  13. Model-based object classification using unification grammars and abstract representations

    NASA Astrophysics Data System (ADS)

    Liburdy, Kathleen A.; Schalkoff, Robert J.

    1993-04-01

    The design and implementation of a high level computer vision system which performs object classification is described. General object labelling and functional analysis require models of classes which display a wide range of geometric variations. A large representational gap exists between abstract criteria such as `graspable' and current geometric image descriptions. The vision system developed and described in this work addresses this problem and implements solutions based on a fusion of semantics, unification, and formal language theory. Object models are represented using unification grammars, which provide a framework for the integration of structure and semantics. A methodology for the derivation of symbolic image descriptions capable of interacting with the grammar-based models is described and implemented. A unification-based parser developed for this system achieves object classification by determining if the symbolic image description can be unified with the abstract criteria of an object model. Future research directions are indicated.

  14. An adaptive block-based fusion method with LUE-SSIM for multi-focus images

    NASA Astrophysics Data System (ADS)

    Zheng, Jianing; Guo, Yongcai; Huang, Yukun

    2016-09-01

    Because of the lenses' limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem, but block-based multi-focus fusion methods often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is proposed. In this method, the image quality metric LUE-SSIM is first introduced; it utilizes characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm with LUE-SSIM as the objective function is used to optimize the block size used to construct the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus blurred images. In addition, a multi-focus image fusion experiment is carried out to evaluate the proposed fusion method both visually and quantitatively. The results show that the proposed method performs better than some other block-based methods, especially in reducing blocking artifacts in the fused image, and it effectively preserves undistorted edge details in the in-focus regions of the source images.
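
    For contrast with the proposed method, a plain block-based fusion baseline (fixed block size, variance of the Laplacian as focus measure, no LUE-SSIM or PSO) can be sketched as follows, assuming two registered grayscale source images.

      import numpy as np
      import cv2

      def block_fuse(src_a, src_b, block=32):
          """Baseline block-based multi-focus fusion: for each block, keep the source
          with the higher focus measure (variance of the Laplacian)."""
          fused = src_a.copy()
          for y in range(0, src_a.shape[0], block):
              for x in range(0, src_a.shape[1], block):
                  a = src_a[y:y+block, x:x+block]
                  b = src_b[y:y+block, x:x+block]
                  if cv2.Laplacian(b, cv2.CV_64F).var() > cv2.Laplacian(a, cv2.CV_64F).var():
                      fused[y:y+block, x:x+block] = b
          return fused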

  15. A unified framework for penalized statistical muon tomography reconstruction with edge preservation priors of lp norm type

    NASA Astrophysics Data System (ADS)

    Yu, Baihui; Zhao, Ziran; Wang, Xuewu; Wu, Dufan; Zeng, Zhi; Zeng, Ming; Wang, Yi; Cheng, Jianping

    2016-01-01

    The Tsinghua University MUon Tomography facilitY (TUMUTY) has been built and is used to reconstruct special objects with complex structure. Since fine images are required, the conventional Maximum Likelihood Scattering and Displacement (MLSD) algorithm is employed. However, due to the statistical characteristics of muon tomography and the data incompleteness, the reconstruction is often unstable and accompanied by severe noise. In this paper, we propose a Maximum a Posteriori (MAP) algorithm for muon tomography regularization, in which an edge-preserving prior on the scattering density image is introduced into the objective function. The prior takes the lp norm (p>0) of the image gradient magnitude, where p=1 and p=2 correspond to the well-known total-variation (TV) and Gaussian priors, respectively. The optimization transfer principle is utilized to minimize the objective function in a unified framework. At each iteration the problem is reduced to solving a cubic equation through paraboloidal surrogates. To validate the method, the French Test Object (FTO) is imaged by both numerical simulation and TUMUTY. The proposed algorithm is used for the reconstruction, and different norms are studied in detail, including l2, l1, l0.5, and an l2-0.5 mixture norm. Compared with the MLSD method, MAP achieves better image quality in both structure preservation and noise reduction. Furthermore, compared with previous work where a one-dimensional image was acquired, we achieve relatively clear three-dimensional images of the FTO, in which the inner air hole and the tungsten shell are visible.
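
    Schematically, the penalized objective has the following form, where lambda is the scattering-density image, D the measured scattering/displacement data, and beta a regularization weight (the symbol names are illustrative, not taken from the paper):

      \hat{\lambda} \;=\; \arg\min_{\lambda \ge 0}
      \Big[\, -\log L(\mathcal{D} \mid \lambda) \;+\; \beta \sum_{j} \big|(\nabla \lambda)_j\big|^{p} \,\Big],
      \qquad p > 0,

    with p = 1 giving the total-variation prior and p = 2 the Gaussian prior mentioned above.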

  16. Segmentation of touching mycobacterium tuberculosis from Ziehl-Neelsen stained sputum smear images

    NASA Astrophysics Data System (ADS)

    Xu, Chao; Zhou, Dongxiang; Liu, Yunhui

    2015-12-01

    Touching Mycobacterium tuberculosis objects in Ziehl-Neelsen stained sputum smear images present different shapes and invisible boundaries in the adhesion areas, which increases the difficulty of object recognition and counting. In this paper, we present a segmentation method that combines hierarchy tree analysis with a gradient vector flow snake to address this problem. The skeletons of the objects are used for structure analysis based on the hierarchy tree. The gradient vector flow snake is used to estimate the object edges. Experimental results show that the single objects composing the touching objects are successfully segmented by the proposed method. This work will improve the accuracy and practicability of the computer-aided diagnosis of tuberculosis.

  17. Most Detailed Image of the Crab Nebula

    NASA Image and Video Library

    2005-12-01

    The Crab Nebula is one of the most intricately structured and highly dynamical objects ever observed. The new Hubble image of the Crab was assembled from 24 individual exposures taken with the NASA/ESA Hubble Space Telescope.

  18. Improved biliary detection and diagnosis through intelligent machine analysis.

    PubMed

    Logeswaran, Rajasvaran

    2012-09-01

    This paper reports on work undertaken to improve automated detection of bile ducts in magnetic resonance cholangiopancreatography (MRCP) images, with the objective of conducting preliminary classification of the images for diagnosis. The proposed I-BDeDIMA (Improved Biliary Detection and Diagnosis through Intelligent Machine Analysis) scheme is a multi-stage framework consisting of successive phases of image normalization, denoising, structure identification, object labeling, feature selection and disease classification. A combination of multiresolution wavelet, dynamic intensity thresholding, segment-based region growing, region elimination, statistical analysis and neural networks, is used in this framework to achieve good structure detection and preliminary diagnosis. Tests conducted on over 200 clinical images with known diagnosis have shown promising results of over 90% accuracy. The scheme outperforms related work in the literature, making it a viable framework for computer-aided diagnosis of biliary diseases. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  19. New approaches in renal microscopy: volumetric imaging and superresolution microscopy.

    PubMed

    Kim, Alfred H J; Suleiman, Hani; Shaw, Andrey S

    2016-05-01

    Histologic and electron microscopic analysis of the kidney has provided tremendous insight into structures such as the glomerulus and nephron. Recent advances in imaging, such as deep volumetric approaches and superresolution microscopy, have the capacity to dramatically enhance our current understanding of the structure and function of the kidney. Volumetric imaging can generate images millimeters below the surface of the intact kidney. Superresolution microscopy breaks the diffraction barrier inherent in traditional light microscopy, enabling the visualization of fine structures. Here, we describe new approaches to deep volumetric and superresolution microscopy of the kidney. Rapid advances in lasers, microscope objectives, and tissue preparation have transformed our ability to perform deep volumetric imaging of the kidney. Innovations in sample preparation have allowed for superresolution imaging with electron microscopy correlation, providing unprecedented insight into the structures within the glomerulus. Technological advances in imaging have revolutionized our capacity to image both large volumes of tissue and the finest structural details of a cell. These new advances have the potential to provide additional profound observations into the normal and pathologic functions of the kidney.

  20. Super-resolved Mirau digital holography by structured illumination

    NASA Astrophysics Data System (ADS)

    Ganjkhani, Yasaman; Charsooghi, Mohammad A.; Akhlaghi, Ehsan A.; Moradi, Ali-Reza

    2017-12-01

    In this paper, we apply structured illumination toward super-resolved 3D imaging in a common-path digital holography arrangement. Digital holographic microscopy (DHM) provides non-invasive 3D images of transparent samples as well as 3D profiles of reflective surfaces. A compact and vibration-immune arrangement for DHM may be obtained through the use of a Mirau microscope objective. However, high-magnification Mirau objectives have a low working distance and are expensive. Low-magnification ones, on the other hand, suffer from low lateral resolution. Structured illumination has been widely used for resolution improvement of intensity images, but the technique can also be readily applied to DHM. We apply structured illumination to Mirau DHM by implementing successive sinusoidal gratings with different orientations onto a spatial light modulator (SLM) and forming its image on the specimen. Moreover, we show that, instead of different orientations of 1D gratings, alternative single 2D gratings, e.g. checkerboard or hexagonal patterns, can provide resolution enhancement in multiple directions. Our results show a 35% improvement in the resolution power of the DHM. The presented arrangement has the potential to serve as a table-top device for high resolution holographic microscopy.
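
    Generating the sinusoidal fringe patterns sent to the SLM is straightforward; a sketch with an illustrative period and set of orientations (not the paper's values) follows.

      import numpy as np

      def slm_grating(shape, period_px, angle_deg, phase=0.0):
          """Generate one 8-bit sinusoidal fringe pattern for display on an SLM."""
          h, w = shape
          y, x = np.mgrid[0:h, 0:w]
          th = np.deg2rad(angle_deg)
          carrier = np.sin(2 * np.pi * (x * np.cos(th) + y * np.sin(th)) / period_px + phase)
          return np.uint8(255 * 0.5 * (1.0 + carrier))

      # three orientations, as in classical structured-illumination schemes
      patterns = [slm_grating((1080, 1920), period_px=12, angle_deg=a) for a in (0, 60, 120)]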

  1. Friend or foe: exploiting sensor failures for transparent object localization and classification

    NASA Astrophysics Data System (ADS)

    Seib, Viktor; Barthen, Andreas; Marohn, Philipp; Paulus, Dietrich

    2017-02-01

    In this work we address the problem of detecting and recognizing transparent objects using depth images from an RGB-D camera. Using this type of sensor usually prohibits the localization of transparent objects, since the structured light pattern of these cameras is not reflected by transparent surfaces. Instead, transparent surfaces often appear as undefined values in the resulting images. However, these erroneous sensor readings form characteristic patterns that we exploit in the presented approach. The sensor data is fed into a deep convolutional neural network that is trained to classify and localize drinking glasses. We evaluate our approach with four different types of transparent objects. To the best of our knowledge, no datasets offering depth images of transparent objects exist so far. With this work we aim to close this gap by providing our data to the public.
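
    A minimal preprocessing sketch, assuming the sensor marks failed readings as NaN or zero, that exposes the characteristic failure pattern as an extra input channel for a network; the exact input representation used in the paper is not specified here.

      import numpy as np

      def invalid_depth_mask(depth):
          """Turn the undefined readings of an RGB-D depth map into a binary map and
          stack it with the (cleaned) depth values as a 2-channel input."""
          mask = np.logical_or(np.isnan(depth), depth == 0).astype(np.float32)
          return np.stack([np.nan_to_num(depth, nan=0.0), mask], axis=-1)  # H x W x 2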

  2. Target recognition and scene interpretation in image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition when the object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to the target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexity by using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. The biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible basis for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computation of 3-dimensional models. Network-Symbolic transformations derive abstract structures, which allows for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground, and perceptual grouping are special kinds of network-symbolic transformations. Such Image/Video Understanding Systems will reliably recognize targets.

  3. Progress Implementing a Model-Based Iterative Reconstruction Algorithm for Ultrasound Imaging of Thick Concrete

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almansouri, Hani; Johnson, Christi R; Clayton, Dwight A

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.

  4. Progress implementing a model-based iterative reconstruction algorithm for ultrasound imaging of thick concrete

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Johnson, Christi; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2017-02-01

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.
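
    A generic MBIR toy example is sketched below (dense matrix forward model, quadratic smoothness prior, plain gradient descent); it is not the authors' ultrasound model, but it shows how the forward and prior models combine into a single cost function that is then optimized.

      import numpy as np

      def mbir_reconstruct(A, y, beta=0.1, step=1e-3, iters=500):
          """Minimize  0.5*||y - A x||^2 + beta * 0.5*||D x||^2  by gradient descent,
          where the first term comes from the measurement (forward) model and the
          second from a simple first-difference smoothness prior."""
          n = A.shape[1]
          D = np.diff(np.eye(n), axis=0)     # first-difference operator as the prior
          x = np.zeros(n)
          for _ in range(iters):
              grad = A.T @ (A @ x - y) + beta * (D.T @ (D @ x))
              x -= step * grad
          return x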

  5. Hardy Objects in Saturn's F Ring

    NASA Image and Video Library

    2017-02-24

    As NASA's Cassini spacecraft continues its weekly ring-grazing orbits, diving just past the outside of Saturn's F ring, it is tracking several small, persistent objects there. These images show two such objects that Cassini originally detected in spring 2016, as the spacecraft transitioned from more equatorial orbits to orbits with increasingly high inclination relative to the planet's equator. Imaging team members studying these objects gave them the informal designations F16QA (right image) and F16QB (left image). The researchers have observed that objects such as these occasionally crash through the F ring's bright core, producing spectacular collisional structures. While these objects may be mostly loose agglomerations of tiny ring particles, scientists suspect that small, fairly solid bodies lurk within each object, given that they have survived several collisions with the ring since their discovery. The faint retinue of dust around them is likely the result of the most recent collision each underwent before these images were obtained. The researchers think these objects originally form as loose clumps in the F ring core as a result of perturbations triggered by Saturn's moon Prometheus. If they survive subsequent encounters with Prometheus, their orbits can evolve, eventually leading to core-crossing clumps that produce spectacular features, even though they collide with the ring at low speeds. The images were obtained using the Cassini spacecraft narrow-angle camera on Feb. 5, 2017, at a distance of 610,000 miles (982,000 kilometers, left image) and 556,000 miles (894,000 kilometers, right image) from the F ring. Image scale is about 4 miles (6 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA21432

  6. Intraoperative virtual brain counseling

    NASA Astrophysics Data System (ADS)

    Jiang, Zhaowei; Grosky, William I.; Zamorano, Lucia J.; Muzik, Otto; Diaz, Fernando

    1997-06-01

    Our objective is to offer online, real-time intelligent guidance to the neurosurgeon. Different from traditional image-guidance technologies that offer intra-operative visualization of medical images or atlas images, virtual brain counseling goes one step further. It can distinguish related brain structures and provide information about them intra-operatively. Virtual brain counseling is the foundation for surgical planning optimization and on-line surgical reference. It can provide a warning system that alerts the neurosurgeon if the chosen trajectory will pass through eloquent brain areas. In order to fulfill this objective, tracking techniques are employed for intra-operative use. Most importantly, a 3D virtual brain environment, different from traditional 3D digitized atlases, is an object-oriented model of the brain that stores information about different brain structures together with their related information. An object-oriented hierarchical hyper-voxel space (HHVS) is introduced to integrate anatomical and functional structures. Spatial queries based on position of interest, line segment of interest, and volume of interest are introduced in this paper. The virtual brain environment is integrated with existing surgical pre-planning and intra-operative tracking systems to provide information for planning optimization and on-line surgical guidance. The neurosurgeon is alerted automatically if the planned treatment affects any critical structures. Architectures such as HHVS, and algorithms such as spatial querying, normalizing, and warping, are presented in the paper. A prototype has shown that the virtual brain is intuitive in its hierarchical 3D appearance. It also showed that HHVS, as the key structure for virtual brain counseling, efficiently integrates multi-scale brain structures based on their spatial relationships. This is a promising development for optimization of treatment plans and online surgical intelligent guidance.

  7. The Space Infrared Interferometric Telescope (SPIRIT): High-Resolution Imaging and Spectroscopy in the Far-Infrared (Preprint)

    DTIC Science & Technology

    2007-01-01

    Primary scientific objectives: (1) learn how planetary systems form from protostellar disks, and how they acquire their inhomogeneous composition; (2) characterize the family of extrasolar planetary systems by imaging the structure in debris disks to understand how and where planets of different…

  8. Automatic segmentation of colon glands using object-graphs.

    PubMed

    Gunduz-Demir, Cigdem; Kandemir, Melih; Tosun, Akif Burak; Sokmensuer, Cenk

    2010-02-01

    Gland segmentation is an important step in automating the analysis of biopsies that contain glandular structures. However, this remains a challenging problem, as variation in staining, fixation, and sectioning procedures leads to a considerable amount of artifacts and variance in tissue sections, which may result in large variations in gland appearance. In this work, we report a new approach for gland segmentation. This approach decomposes the tissue image into a set of primitive objects and segments glands making use of the organizational properties of these objects, which are quantified with the definition of object-graphs. As opposed to the previous literature, the proposed approach employs object-based information for the gland segmentation problem, instead of using pixel-based information alone. Working with images of colon tissues, our experiments demonstrate that the proposed object-graph approach yields high segmentation accuracies for the training and test sets and significantly improves the segmentation performance of its pixel-based counterparts. The experiments also show that the object-based structure of the proposed approach provides more tolerance to artifacts and variances in tissues.

  9. A new evaluation method research for fusion quality of infrared and visible images

    NASA Astrophysics Data System (ADS)

    Ge, Xingguo; Ji, Yiguo; Tao, Zhongxiang; Tian, Chunyan; Ning, Chengda

    2017-03-01

    In order to objectively evaluate the fusion effect of infrared and visible images, a fusion evaluation method based on energy-weighted average structural similarity and an edge information retention value is proposed to address the drawbacks of existing evaluation methods. The evaluation index of the method is given, and infrared and visible image fusion results obtained under different algorithms and environments are evaluated on the basis of this index. The experimental results show that the objective evaluation index is consistent with the subjective evaluation results, which indicates that the method is a practical and effective fusion image quality evaluation method.
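
    As a rough illustration of an energy-weighted structural-similarity index of the kind described above, the sketch below computes local SSIM maps of each source image against the fused image and weights them by the local energy of the sources. The window size, the specific weighting scheme, and the omission of the edge-retention term are assumptions made for illustration, not the authors' exact index.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.metrics import structural_similarity

def energy_weighted_ssim(ir, vis, fused, win=7):
    # ir, vis, fused: registered 2-D float arrays scaled to [0, 1].
    def local_energy(img):
        return uniform_filter(img ** 2, size=win)

    # Full SSIM maps of each source image against the fused image.
    _, s_ir = structural_similarity(ir, fused, data_range=1.0, full=True)
    _, s_vis = structural_similarity(vis, fused, data_range=1.0, full=True)
    # Weight each map by the local energy of its source image.
    e_ir, e_vis = local_energy(ir), local_energy(vis)
    w = e_ir / (e_ir + e_vis + 1e-12)
    return float(np.mean(w * s_ir + (1.0 - w) * s_vis))
```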

  10. Medical image segmentation using 3D MRI data

    NASA Astrophysics Data System (ADS)

    Voronin, V.; Marchuk, V.; Semenishchev, E.; Cen, Yigang; Agaian, S.

    2017-05-01

    Precise segmentation of three-dimensional (3D) magnetic resonance imaging (MRI) data can be a very useful computer-aided diagnosis (CAD) tool in clinical routines. Accurate automatic extraction of a 3D component from images obtained by magnetic resonance imaging is a challenging segmentation problem due to the small size of the objects of interest (e.g., blood vessels, bones) in each 2D MRA slice and the complex surrounding anatomical structures. Our objective is to develop a segmentation scheme for accurately extracting parts of bones from MRI images. In this paper, we use a segmentation algorithm based on a modified active contour method to extract the parts of bones from MRI data sets. The proposed method demonstrates good accuracy in a comparison with existing segmentation approaches on real MRI data.

  11. The image enhancement and region of interest extraction of lobster-eye X-ray dangerous material inspection system

    NASA Astrophysics Data System (ADS)

    Zhan, Qi; Wang, Xin; Mu, Baozhong; Xu, Jie; Xie, Qing; Li, Yaran; Chen, Yifan; He, Yanan

    2016-10-01

    Dangerous materials inspection is an important technique for confirming crimes involving dangerous materials, and it has a significant impact on prohibiting such crimes and preventing the spread of dangerous materials. The Lobster-Eye Optical Imaging System is a dangerous-materials detection device that mainly takes advantage of backscattered X-rays. The strength of the system is that it requires access to only one side of an object and can detect dangerous materials without disturbing the surroundings of the target material. The device uses Compton-scattered X-rays to create computerized outlines of suspected objects during the security detection process. Because of the grid structure of the bionic object glass, which imitates the eye of a lobster, the grids contribute the main image noise during imaging. At the same time, when the system is used to inspect structured or dense materials, the image is plagued by superposition artifacts and limited by attenuation and noise. With the goal of achieving high-quality images that can be used for dangerous-materials detection and further analysis, we developed effective image processing methods for the system. The first aspect of the image processing is denoising and edge-contrast enhancement, in which a deconvolution algorithm is applied to remove the grids and other noise, yielding a high signal-to-noise-ratio image. The second part is reconstructing images acquired under low-dose X-ray exposure conditions, for which we developed an interpolation method. The last aspect is region-of-interest (ROI) extraction, which helps identify dangerous materials mixed with complex backgrounds. The methods demonstrated in the paper have the potential to improve the sensitivity and quality of X-ray backscatter imaging.

  12. The Influence of University Image on Student Behaviour

    ERIC Educational Resources Information Center

    Alves, Helena; Raposo, Mario

    2010-01-01

    Purpose: The purpose of this paper is to analyse the influence of image on student satisfaction and loyalty. Design/methodology/approach: In order to accomplish the objectives proposed, a model reflecting the influence of image on student satisfaction and loyalty is applied. The model is tested through use of structural equations and the final…

  13. 3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles

    NASA Astrophysics Data System (ADS)

    Doerschuk, Peter C.; Johnson, John E.

    2000-11-01

    A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.

  14. Identifying and Assessing Self-Images in Drawings by Delinquent Adolescents (in 2 Parts).

    ERIC Educational Resources Information Center

    Silver, Rawley; Ellison, JoAnne

    1995-01-01

    Examines assumption that art therapists can objectively identify self-images in drawings by troubled adolescents without talking to these youth. Findings suggest that discussion, though preferable, is not required for identifying self-images. Analysis of adolescents' drawings indicates that structured art assessment can be useful in evaluating…

  15. A novel method to detect shadows on multispectral images

    NASA Astrophysics Data System (ADS)

    Daǧlayan Sevim, Hazan; Yardımcı ćetin, Yasemin; Özışık Başkurt, Didem

    2016-10-01

    Shadowing occurs when the direct light coming from a light source is obstructed by tall man-made structures, mountains, or clouds. Since shadow regions are illuminated only by scattered light, the true spectral properties of the objects are not observed in such regions. Therefore, many object classification and change detection problems utilize shadow detection as a preprocessing step. Besides, shadows are useful for obtaining 3D information about objects, such as estimating the height of buildings. With the pervasiveness of remote sensing images, shadow detection is ever more important. This study aims to develop a shadow detection method for multispectral images based on a transformation of the C1C2C3 color space and the contribution of NIR bands. The proposed method is tested on WorldView-2 images covering Ankara, Turkey, acquired at different times. The new index is applied to these 8-band multispectral images with two NIR bands, and the method is compared with methods in the literature.
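
    The C1C2C3 transform referenced above maps each RGB pixel to angles that are relatively insensitive to illumination intensity; shadow pixels tend to show a high C3 response and a weak NIR response. The sketch below shows the standard transform and a simple thresholded combination with an NIR band. The thresholds and the particular combination rule are illustrative assumptions, not the index proposed in the paper.

```python
import numpy as np

def c1c2c3(rgb):
    # rgb: float array of shape (H, W, 3); returns the three invariant channels.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-12
    c1 = np.arctan(r / (np.maximum(g, b) + eps))
    c2 = np.arctan(g / (np.maximum(r, b) + eps))
    c3 = np.arctan(b / (np.maximum(r, g) + eps))
    return c1, c2, c3

def shadow_mask(rgb, nir, c3_thresh=0.9, nir_thresh=0.15):
    # Toy shadow rule: strong C3 response (bluish, indirectly lit) combined
    # with a weak NIR response. Threshold values are illustrative only.
    _, _, c3 = c1c2c3(rgb)
    return (c3 > c3_thresh) & (nir < nir_thresh)
```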

  16. CART V: recent advancements in computer-aided camouflage assessment

    NASA Astrophysics Data System (ADS)

    Müller, Thomas; Müller, Markus

    2011-05-01

    In order to facilitate systematic, computer-aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was developed for the camouflage assessment of objects in multispectral image sequences (see contributions to SPIE 2007-2010 [1], [2], [3], [4]). It comprises semi-automatic marking of target objects (ground truth generation), including their propagation over the image sequence, evaluation via user-defined feature extractors, and methods to assess the object's movement conspicuity. In this fifth part of an annual series at the SPIE conference in Orlando, this paper presents the enhancements of the past year and addresses the camouflage assessment of static and moving objects in multispectral image data that can show noise or image artefacts. The presented methods explore the correlations between image processing and camouflage assessment. A novel algorithm based on template matching is presented to assess the structural inconspicuity of an object objectively and quantitatively. The results can easily be combined with an MTI (moving target indication) based movement conspicuity assessment function in order to explore the influence of object movement on the camouflage effect in different environments. As the results show, the presented methods provide a significant benefit in the field of camouflage assessment.
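
    A minimal version of a template-matching conspicuity score can be sketched as follows: the target patch is matched against the surrounding scene, and a high best-match score away from the target itself indicates that similar structure exists in the background. This is only an illustration of the idea using OpenCV's normalized cross-correlation, not the CART algorithm; the bounding-box interface and the self-match suppression are assumptions.

```python
import cv2

def structural_inconspicuity(image, target_bbox):
    # image: single-channel uint8 or float32 array; target_bbox = (x, y, w, h).
    x, y, w, h = target_bbox
    template = image[y:y + h, x:x + w]
    # Normalized cross-correlation of the target patch against the whole scene.
    response = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    # Suppress the trivial self-match around the target's own location.
    response[max(0, y - h):y + h, max(0, x - w):x + w] = -1.0
    # A high best score elsewhere means similar structure exists in the background.
    return float(response.max())
```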

  17. Grid point extraction and coding for structured light system

    NASA Astrophysics Data System (ADS)

    Song, Zhan; Chung, Ronald

    2011-09-01

    A structured light system simplifies three-dimensional reconstruction by projecting a specially designed pattern onto the target object, thereby generating a distinct texture on it for imaging and further processing. Success of the system hinges upon what features are to be coded in the projected pattern, extracted in the captured image, and matched between the projector's display panel and the camera's image plane. The codes have to be such that they are largely preserved in the image data upon illumination from the projector, reflection from the target object, and projective distortion in the imaging process. The features also need to be reliably extracted in the image domain. In this article, a two-dimensional pseudorandom pattern consisting of rhombic color elements is proposed, and the grid points between the pattern elements are chosen as the feature points. We describe how a type classification of the grid points plus the pseudorandomness of the projected pattern can equip each grid point with a unique label that is preserved in the captured image. We also present a grid point detector that extracts the grid points without the need to segment the pattern elements, and that localizes the grid points with subpixel accuracy. Extensive experiments are presented to illustrate that, with the proposed pattern feature definition and feature detector, more feature points can be reconstructed with higher accuracy in comparison with existing pseudorandomly encoded structured light systems.

  18. Structured Activities in Perceptual Training to Aid Retention of Visual and Auditory Images.

    ERIC Educational Resources Information Center

    Graves, James W.; And Others

    The experimental program in structured activities in perceptual training was said to have two main objectives: to train children in retention of visual and auditory images and to increase the children's motivation to learn. Eight boys and girls participated in the program for two hours daily for a 10-week period. The age range was 7.0 to 12.10…

  19. Semi-Supervised Tensor-Based Graph Embedding Learning and Its Application to Visual Discriminant Tracking.

    PubMed

    Hu, Weiming; Gao, Jin; Xing, Junliang; Zhang, Chao; Maybank, Stephen

    2017-01-01

    An appearance model adaptable to changes in object appearance is critical in visual object tracking. In this paper, we treat an image patch as a second-order tensor which preserves the original image structure. We design two graphs for characterizing the intrinsic local geometrical structure of the tensor samples of the object and the background. Graph embedding is used to reduce the dimensions of the tensors while preserving the structure of the graphs. Then, a discriminant embedding space is constructed. We prove two propositions for finding the transformation matrices which are used to map the original tensor samples to the tensor-based graph embedding space. In order to encode more discriminant information in the embedding space, we propose a transfer-learning-based semi-supervised strategy to iteratively adjust the embedding space, into which discriminative information obtained from earlier times is transferred. We apply the proposed semi-supervised tensor-based graph embedding learning algorithm to visual tracking. The new tracking algorithm captures an object's appearance characteristics during tracking and uses a particle filter to estimate the optimal object state. Experimental results on the CVPR 2013 benchmark dataset demonstrate the effectiveness of the proposed tracking algorithm.

  20. Laser-based structural sensing and surface damage detection

    NASA Astrophysics Data System (ADS)

    Guldur, Burcu

    Damage due to age or accumulated damage from hazards on existing structures poses a worldwide problem. In order to evaluate the current status of aging, deteriorating and damaged structures, it is vital to accurately assess the present conditions. It is possible to capture the in situ condition of structures by using laser scanners that create dense three-dimensional point clouds. This research investigates the use of high resolution three-dimensional terrestrial laser scanners with image capturing abilities as tools to capture geometric range data of complex scenes for structural engineering applications. Laser scanning technology is continuously improving, with commonly available scanners now capturing over 1,000,000 texture-mapped points per second with an accuracy of ~2 mm. However, automatically extracting meaningful information from point clouds remains a challenge, and the current state-of-the-art requires significant user interaction. The first objective of this research is to use widely accepted point cloud processing steps such as registration, feature extraction, segmentation, surface fitting and object detection to divide laser scanner data into meaningful object clusters and then apply several damage detection methods to these clusters. This required establishing a process for extracting important information from raw laser-scanned data sets such as the location, orientation and size of objects in a scanned region, and location of damaged regions on a structure. For this purpose, first a methodology for processing range data to identify objects in a scene is presented and then, once the objects from the model library are correctly detected and fitted into the captured point cloud, these fitted objects are compared with the as-is point cloud of the investigated object to locate defects on the structure. The algorithms are demonstrated on synthetic scenes and validated on range data collected from test specimens and test-bed bridges. The second objective of this research is to combine useful information extracted from laser scanner data with color information, which provides information in the fourth dimension that enables detection of damage types such as cracks, corrosion, and related surface defects that are generally difficult to detect using only laser scanner data; moreover, the color information also helps to track volumetric changes on structures such as spalling. Although using images with varying resolution to detect cracks is an extensively researched topic, damage detection using laser scanners with and without color images is a new research area that holds many opportunities for enhancing the current practice of visual inspections. The aim is to combine the best features of laser scans and images to create an automatic and effective surface damage detection method, which will reduce the need for skilled labor during visual inspections and allow automatic documentation of related information. This work enables developing surface damage detection strategies that integrate existing condition rating criteria for a wide range of damage types that are collected under three main categories: small deformations already existing on the structure (cracks); damage types that induce larger deformations, but where the initial topology of the structure has not changed appreciably (e.g., bent members); and large deformations where localized changes in the topology of the structure have occurred (e.g., rupture, discontinuities and spalling). The effectiveness of the developed damage detection algorithms is validated by comparing the detection results with the measurements taken from test specimens and test-bed bridges.
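
    One building block of such a pipeline, fitting a geometric primitive to a scanned point cloud and flagging points that deviate from it, can be sketched with a basic RANSAC plane fit. This is a simplified stand-in for the thesis' object-fitting and damage-localization steps, not the actual method; thresholds and iteration counts are illustrative.

```python
import numpy as np

def ransac_plane(points, n_iter=500, dist_thresh=0.005, seed=0):
    # points: (N, 3) array. Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                      # degenerate (collinear) sample
            continue
        normal = normal / norm
        d = -normal @ p1
        dist = np.abs(points @ normal + d)    # point-to-plane distances
        inliers = dist < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    normal, d = best_model
    return normal, d, best_inliers

# Points outside the inlier mask (far from the fitted surface) would be
# candidate damage regions in a simplified surface-damage check.
```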

  1. Image volume analysis of omnidirectional parallax regular-polyhedron three-dimensional displays.

    PubMed

    Kim, Hwi; Hahn, Joonku; Lee, Byoungho

    2009-04-13

    Three-dimensional (3D) displays having regular-polyhedron structures are proposed and their imaging characteristics are analyzed. Four types of conceptual regular-polyhedron 3D displays, i.e., hexahedron, octahedron, dodecahedron, and icosahedron, are considered. In principle, a regular-polyhedron 3D display can present omnidirectional full-parallax 3D images. Design conditions for structural factors such as the viewing angle of the facet panels and the observation distance for a 3D display with omnidirectional full parallax are studied. As a main issue, the image volumes containing virtual 3D objects represented by the four types of regular-polyhedron displays are comparatively analyzed.

  2. (In) Sensitivity to spatial distortion in natural scenes

    PubMed Central

    Bex, Peter J.

    2010-01-01

    The perception of object structure in the natural environment is remarkably stable under large variation in image size and projection, especially given our insensitivity to spatial position outside the fovea. Sensitivity to periodic spatial distortions that were introduced into one quadrant of gray-scale natural images was measured in a 4AFC task. Observers were able to detect the presence of distortions in unfamiliar images even though they did not significantly affect the amplitude spectrum. Sensitivity depended on the spatial period of the distortion and on the image structure at the location of the distortion. The results suggest that the detection of distortion involves decisions made in the late stages of image perception and is based on an expectation of the typical structure of natural scenes. PMID:20462324

  3. Model-based occluded object recognition using Petri nets

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Hura, Gurdeep S.

    1998-09-01

    This paper discusses the use of Petri nets to model the process of object matching between an image and a model under different 2D geometric transformations. This matching finds applications in sensor-based robot control, flexible manufacturing systems, industrial inspection, etc. A description approach for object structure is presented based on its topological structure relation, called the Point-Line Relation Structure (PLRS). It is shown how Petri nets can be used to model the matching process, and an optimal or near-optimal matching can be obtained by tracking the reachability graph of the net. The experimental results show that objects can be successfully identified and located under 2D transformations such as translations, rotations, scale changes, and distortions due to partial occlusion of the object.

  4. Object-based modeling, identification, and labeling of medical images for content-based retrieval by querying on intervals of attribute values

    NASA Astrophysics Data System (ADS)

    Thies, Christian; Ostwald, Tamara; Fischer, Benedikt; Lehmann, Thomas M.

    2005-04-01

    The classification and measuring of objects in medical images are important in radiological diagnostics and education, especially when using large databases as knowledge resources, for instance a picture archiving and communication system (PACS). The main challenge is the modeling of medical knowledge and the diagnostic context to label the sought objects. This task is referred to as closing the semantic gap between low-level pixel information and high-level application knowledge. This work describes an approach which allows labeling of a-priori unknown objects in an intuitive way. Our approach consists of four main components. First, an image is completely decomposed into all visually relevant partitions on different scales. This provides a hierarchically organized set of regions. Afterwards, for each of the obtained regions a set of descriptive features is computed. In this data structure, objects are represented by regions with characteristic attributes. The actual object identification is the formulation of a query, which consists of attributes on which intervals are defined describing those regions that correspond to the sought objects. Since the objects are a-priori unknown, they are described by a medical expert by means of an intuitive graphical user interface (GUI). This GUI is the fourth component. It enables complex object definitions by browsing the data structure and examining the attributes to formulate the query. The query is executed, and if the sought objects have not been identified, its parameterization is refined. Using this heuristic approach, object models for hand radiographs have been developed to extract bones from a single hand in different anatomical contexts. This demonstrates the applicability of the labeling concept. Using a rule for metacarpal bones on a series of 105 images, this type of bone could be retrieved with a precision of 0.53% and a recall of 0.6%.

  5. Proton radiography for inline treatment planning and positioning verification of small animals.

    PubMed

    Müller, Johannes; Neubert, Christian; von Neubeck, Cläre; Baumann, Michael; Krause, Mechthild; Enghardt, Wolfgang; Bütof, Rebecca; Dietrich, Antje; Lühr, Armin

    2017-11-01

    As proton therapy becomes increasingly well established, there is a need for high-quality clinically relevant in vivo data to gain better insight into the radiobiological effects of proton irradiation on both healthy and tumor tissue. This requires the development of easily applicable setups that allow for efficient, fractionated, image-guided proton irradiation of small animals, the most widely used pre-clinical model. Here, a method is proposed to perform dual-energy proton radiography for inline positioning verification and treatment planning. Dual-energy proton radiography exploits the differential enhancement of object features in two successively measured two-dimensional (2D) dose distributions at two different proton energies. The two raw images show structures that are dominated by energy absorption (absorption mode) or scattering (scattering mode) of protons in the object, respectively. Data post-processing allowed for the separation of both signal contributions in the respective images. The images were evaluated regarding recognizable object details and feasibility of rigid registration to acquired planar X-ray scans. Robust, automated rigid registration of proton radiography and planar X-ray images in scattering mode could be reliably achieved with the animal bedding unit used as registration landmark. Distinguishable external and internal features of the imaged mouse included the outer body contour, the skull with substructures, the lung, abdominal structures and the hind legs. Image analysis based on the combined information of both imaging modes allowed image enhancement and calculation of 2D water-equivalent path length (WEPL) maps of the object along the beam direction. Fractionated irradiation of exposed target volumes (e.g., subcutaneous tumor model or brain) can be realized with the suggested method being used for daily positioning and range determination. Robust registration of X-ray and proton radiography images allows for the irradiation of tumor entities that require conventional computed tomography (CT)-based planning, such as orthotopic lung or brain tumors, similar to conventional patient treatment.

  6. Multi-object model-based multi-atlas segmentation for rodent brains using dense discrete correspondences

    NASA Astrophysics Data System (ADS)

    Lee, Joohwi; Kim, Sun Hyung; Styner, Martin

    2016-03-01

    The delineation of rodent brain structures is challenging due to low-contrast multiple cortical and subcortical organs that are closely interfacing with each other. Atlas-based segmentation has been widely employed due to its ability to delineate multiple organs at the same time via image registration. The use of multiple atlases and subsequent label fusion techniques has further improved the robustness and accuracy of atlas-based segmentation. However, the accuracy of atlas-based segmentation is still prone to registration errors; for example, the segmentation of in vivo MR images can be less accurate and robust against image artifacts than the segmentation of post mortem images. In order to improve the accuracy and robustness of atlas-based segmentation, we propose a multi-object, model-based, multi-atlas segmentation method. We first establish spatial correspondences across atlases using a set of dense pseudo-landmark particles. We build a multi-object point distribution model using those particles in order to capture inter- and intra-subject variation among brain structures. The segmentation is obtained by fitting the model to a subject image, followed by a label fusion process. Our results show that the proposed method achieves greater accuracy than comparable segmentation methods, including a widely used ANTs registration tool.
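
    The label fusion step mentioned above can be illustrated in its simplest form, per-voxel majority voting across registered atlas segmentations. The paper's method adds a multi-object point-distribution model and model fitting on top of a step of this kind; the sketch below is only the baseline fusion.

```python
import numpy as np

def majority_vote_fusion(label_maps):
    # label_maps: integer array of shape (n_atlases, *image_shape), all maps
    # already registered to the subject image.
    labels = np.unique(label_maps)
    # Count, for each candidate label, how many atlases vote for it per voxel.
    votes = np.stack([(label_maps == lab).sum(axis=0) for lab in labels])
    # Pick the label with the most votes at every voxel.
    return labels[np.argmax(votes, axis=0)]
```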

  7. Combining automatic tube current modulation with adaptive statistical iterative reconstruction for low-dose chest CT screening.

    PubMed

    Chen, Jiang-Hong; Jin, Er-Hu; He, Wen; Zhao, Li-Qin

    2014-01-01

    The objective was to reduce radiation dose while maintaining image quality in low-dose chest computed tomography (CT) by combining adaptive statistical iterative reconstruction (ASIR) and automatic tube current modulation (ATCM). Patients undergoing cancer screening (n = 200) were subjected to 64-slice multidetector chest CT scanning with ASIR and ATCM. Patients were divided into groups 1, 2, 3, and 4 (n = 50 each), with a noise index (NI) of 15, 20, 30, and 40, respectively. Each image set was reconstructed with 4 ASIR levels (0% ASIR, 30% ASIR, 50% ASIR, and 80% ASIR) in each group. Two radiologists assessed subjective image noise, image artifacts, and visibility of the anatomical structures. Objective image noise and signal-to-noise ratio (SNR) were measured, and effective dose (ED) was recorded. Increased NI was associated with increased subjective and objective image noise (P<0.001), and SNR decreased with increasing NI (P<0.001). These values improved with increased ASIR levels (P<0.001). Images from all 4 groups were clinically diagnosable. Images with NI = 30 and 50% ASIR had average subjective image noise scores and nearly average anatomical structure visibility scores, with a mean objective image noise of 23.42 HU. The EDs for groups 1, 2, 3 and 4 were 2.79 ± 1.17, 1.69 ± 0.59, 0.74 ± 0.29, and 0.37 ± 0.22 mSv, respectively. Compared to group 1 (NI = 15), the ED reductions were 39.43%, 73.48%, and 86.74% for groups 2, 3, and 4, respectively. Using NI = 30 with 50% ASIR in the chest CT protocol, we obtained average or above-average image quality at a reduced ED.

  8. A Model-Based Approach for Microvasculature Structure Distortion Correction in Two-Photon Fluorescence Microscopy Images

    PubMed Central

    Dao, Lam; Glancy, Brian; Lucotte, Bertrand; Chang, Lin-Ching; Balaban, Robert S; Hsu, Li-Yueh

    2015-01-01

    This paper investigates a post-processing approach to correct spatial distortion in two-photon fluorescence microscopy images for vascular network reconstruction. It is aimed at in vivo imaging of large field-of-view, deep-tissue studies of vascular structures. Based on simple geometric modeling of the object-of-interest, a distortion function is directly estimated from the image volume by deconvolution analysis. This distortion function is then applied to sub-volumes of the image stack to adaptively adjust for spatially varying distortion and reduce the image blurring through blind deconvolution. The proposed technique was first evaluated in phantom imaging of fluorescent microspheres that are comparable in size to the underlying capillary vascular structures. The effectiveness of restoring the three-dimensional spherical geometry of the microspheres using the estimated distortion function was compared with an empirically measured point-spread function. Next, the proposed approach was applied to in vivo vascular imaging of mouse skeletal muscle to reduce the image distortion of the capillary structures. We show that the proposed method effectively improves the image quality and reduces the spatially varying distortion that occurs in large field-of-view deep-tissue vascular datasets. The proposed method will help in qualitative interpretation and quantitative analysis of vascular structures from fluorescence microscopy images. PMID:26224257
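
    A highly simplified version of the deconvolution step can be sketched with Richardson-Lucy deconvolution and an assumed Gaussian point-spread function standing in for the distortion function that the paper estimates from the data itself. Image values are assumed to be floats in [0, 1]; the PSF size and sigma are illustrative, and per-sub-volume adaptation is omitted.

```python
import numpy as np
from skimage.restoration import richardson_lucy

def gaussian_psf(size=9, sigma=1.5):
    # Isotropic Gaussian kernel standing in for the estimated distortion function.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def deblur_slice(img_slice, sigma=1.5, n_iter=30):
    # img_slice: 2-D float array scaled to [0, 1]. The third argument is the
    # iteration count (its keyword name differs across scikit-image versions,
    # so it is passed positionally here).
    return richardson_lucy(img_slice, gaussian_psf(sigma=sigma), n_iter)
```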

  9. Mining Very High Resolution INSAR Data Based On Complex-GMRF Cues And Relevance Feedback

    NASA Astrophysics Data System (ADS)

    Singh, Jagmal; Popescu, Anca; Soccorsi, Matteo; Datcu, Mihai

    2012-01-01

    With the increase in the number of remote sensing satellites, the number of image-data scenes in our repositories is also increasing, and a large fraction of these scenes is never retrieved or used. Thus automatic retrieval of desired image data using query by image content, to fully utilize the huge repository volume, is becoming of great interest. Generally, different users are interested in scenes containing different kinds of objects and structures, so it is important to analyze all the image information mining (IIM) methods so that it is easier for a user to select a method depending upon his/her requirements. We concentrate our study only on high-resolution SAR images, and we propose to use InSAR observations instead of only single look complex (SLC) images for mining scenes containing coherent objects such as high-rise buildings. However, in the case of objects with less coherence, such as areas with vegetation cover, SLC images exhibit better performance. We demonstrate an IIM performance comparison using complex Gauss-Markov Random Fields as texture descriptors for image patches and SVM relevance feedback.

  10. Multiview hyperspectral topography of tissue structural and functional characteristics

    NASA Astrophysics Data System (ADS)

    Zhang, Shiwu; Liu, Peng; Huang, Jiwei; Xu, Ronald

    2012-12-01

    Accurate and in vivo characterization of structural, functional, and molecular characteristics of biological tissue will facilitate quantitative diagnosis, therapeutic guidance, and outcome assessment in many clinical applications, such as wound healing, cancer surgery, and organ transplantation. However, many clinical imaging systems have limitations and fail to provide noninvasive, real time, and quantitative assessment of biological tissue in an operation room. To overcome these limitations, we developed and tested a multiview hyperspectral imaging system. The multiview hyperspectral imaging system integrated the multiview and the hyperspectral imaging techniques in a single portable unit. Four plane mirrors are cohered together as a multiview reflective mirror set with a rectangular cross section. The multiview reflective mirror set was placed between a hyperspectral camera and the measured biological tissue. For a single image acquisition task, a hyperspectral data cube with five views was obtained. The five-view hyperspectral image consisted of a main objective image and four reflective images. Three-dimensional topography of the scene was achieved by correlating the matching pixels between the objective image and the reflective images. Three-dimensional mapping of tissue oxygenation was achieved using a hyperspectral oxygenation algorithm. The multiview hyperspectral imaging technique is currently under quantitative validation in a wound model, a tissue-simulating blood phantom, and an in vivo biological tissue model. The preliminary results have demonstrated the technical feasibility of using multiview hyperspectral imaging for three-dimensional topography of tissue functional properties.

  11. Perceiving environmental structure from optical motion

    NASA Technical Reports Server (NTRS)

    Lappin, Joseph S.

    1991-01-01

    Generally speaking, one of the most important sources of optical information about environmental structure is known to be the deforming optical patterns produced by the movements of the observer (pilot) or environmental objects. As an observer moves through a rigid environment, the projected optical patterns of environmental objects are systematically transformed according to their orientations and positions in 3D space relative to those of the observer. The detailed characteristics of these deforming optical patterns carry information about the 3D structure of the objects and about their locations and orientations relative to those of the observer. The specific geometrical properties of moving images that may constitute visually detected information about the shapes and locations of environmental objects is examined.

  12. Concave omnidirectional imaging device for cylindrical object based on catadioptric panoramic imaging

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojun; Wu, Yumei; Wen, Peizhi

    2018-03-01

    To obtain information on the outer surface of a cylindrical object, we propose a catadioptric panoramic imaging system based on the principle of uniform spatial resolution for vertical scenes. First, the influence of the projection-equation coefficients on the spatial resolution and astigmatism of the panoramic system is discussed. Through parameter optimization, we obtain appropriate coefficients for the projection equation, so that the imaging quality of the entire imaging system reaches an optimum. Finally, the system projection equation is calibrated, and an undistorted rectangular panoramic image is obtained using the cylindrical-surface projection expansion method. The proposed 360-deg panoramic-imaging device overcomes the shortcomings of existing surface panoramic-imaging methods, and it has the advantages of low cost, simple structure, high imaging quality, and small distortion. The experimental results show the effectiveness of the proposed method.

  13. Range and egomotion estimation from compound photodetector arrays with parallel optical axis using optical flow techniques.

    PubMed

    Chahl, J S

    2014-01-20

    This paper describes an application for arrays of narrow-field-of-view sensors with parallel optical axes. These devices exhibit some complementary characteristics with respect to conventional perspective projection or angular projection imaging devices. Conventional imaging devices measure rotational egomotion directly by measuring the angular velocity of the projected image. Translational egomotion cannot be measured directly by these devices because the induced image motion depends on the unknown range of the viewed object. On the other hand, a known translational motion generates image velocities which can be used to recover the ranges of objects and hence the three-dimensional (3D) structure of the environment. A new method is presented for computing egomotion and range using the properties of linear arrays of independent narrow-field-of-view optical sensors. An approximate parallel projection can be used to measure translational egomotion in terms of the velocity of the image. On the other hand, a known rotational motion of the paraxial sensor array generates image velocities, which can be used to recover the 3D structure of the environment. Results of tests of an experimental array confirm these properties.

  14. A contour-based shape descriptor for biomedical image classification and retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-12-01

    Contours, object blobs, and specific feature points are utilized to represent object shapes and extract shape descriptors that can then be used for object detection or image classification. In this research we develop a shape descriptor for biomedical image type (or, modality) classification. We adapt a feature extraction method used in optical character recognition (OCR) for character shape representation, and apply various image preprocessing methods to successfully adapt the method to our application. The proposed shape descriptor is applied to radiology images (e.g., MRI, CT, ultrasound, X-ray, etc.) to assess its usefulness for modality classification. In our experiment we compare our method with other visual descriptors such as CEDD, CLD, Tamura, and PHOG that extract color, texture, or shape information from images. The proposed method achieved the highest classification accuracy of 74.1% among all other individual descriptors in the test, and when combined with CSD (color structure descriptor) showed better performance (78.9%) than using the shape descriptor alone.

  15. Neurons in the human hippocampus and amygdala respond to both low- and high-level image properties

    PubMed Central

    Cabrales, Elaine; Wilson, Michael S.; Baker, Christopher P.; Thorp, Christopher K.; Smith, Kris A.; Treiman, David M.

    2011-01-01

    A large number of studies have demonstrated that structures within the medial temporal lobe, such as the hippocampus, are intimately involved in declarative memory for objects and people. Although these items are abstractions of the visual scene, specific visual details can change the speed and accuracy of their recall. By recording from 415 neurons in the hippocampus and amygdala of human epilepsy patients as they viewed images drawn from 10 image categories, we showed that the firing rates of 8% of these neurons encode image illuminance and contrast, low-level properties not directly pertinent to task performance, whereas in 7% of the neurons, firing rates encode the category of the item depicted in the image, a high-level property pertinent to the task. This simultaneous representation of high- and low-level image properties within the same brain areas may serve to bind separate aspects of visual objects into a coherent percept and allow episodic details of objects to influence mnemonic performance. PMID:21471400

  16. Do we understand high-level vision?

    PubMed

    Cox, David Daniel

    2014-04-01

    'High-level' vision lacks a single, agreed-upon definition, but it might usefully be defined as those stages of visual processing that transition from analyzing local image structure to analyzing the structure of the external world that produced those images. Much work in the last several decades has focused on object recognition as a framing problem for the study of high-level visual cortex, and much progress has been made in this direction. This approach presumes that the operational goal of the visual system is to read out the identity of an object (or objects) in a scene, in spite of variation in position, size, and lighting and the presence of other nearby objects. However, while object recognition as an operational framing of high-level vision is intuitively appealing, it is by no means the only task that visual cortex might perform, and the study of object recognition is beset by challenges in building stimulus sets that adequately sample the infinite space of possible stimuli. Here I review the successes and limitations of this work, and ask whether we should reframe our approaches to understanding high-level vision.

  17. [Research on Spectral Polarization Imaging System Based on Static Modulation].

    PubMed

    Zhao, Hai-bo; Li, Huan; Lin, Xu-ling; Wang, Zheng

    2015-04-01

    The main disadvantages of traditional spectral polarization imaging systems are a complex structure, moving parts, and low throughput. A novel spectral polarization imaging method is discussed, based on static polarization intensity modulation combined with Savart polariscope interference imaging. The imaging system can obtain real-time spectral information and the four Stokes polarization parameters. Compared with conventional methods, the advantages of the imaging system are compactness, low mass, no moving parts, no electrical control, no slit, and high throughput. The system structure and the basic theory are introduced. The experimental system was established in the laboratory and consists of reimaging optics, a polarization intensity module, an interference imaging module, and a CCD data collection and processing module. The spectral range is visible and near-infrared (480-950 nm). A white board and a toy plane were imaged using the experimental system, verifying the ability to obtain spectral polarization imaging information. A calibration system for the static polarization modulation was set up; the statistical error of degree-of-polarization detection is less than 5%. The validity and feasibility of the basic principle are demonstrated by the experimental results. The spectral polarization data captured by the system can be applied to object identification, object classification, and remote sensing detection.

  18. Recognition of upper airway and surrounding structures at MRI in pediatric PCOS and OSAS

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, J. K.; Odhner, D.; Sin, Sanghun; Arens, Raanan

    2013-03-01

    Obstructive Sleep Apnea Syndrome (OSAS) is common in obese children, with a risk 4.5-fold that of normal control subjects. Polycystic Ovary Syndrome (PCOS) has recently been shown to be associated with OSAS, which may further lead to significant cardiovascular and neuro-cognitive deficits. We are investigating image-based biomarkers to understand the architectural and dynamic changes in the upper airway and the surrounding hard and soft tissue structures via MRI in obese teenage children, in order to study OSAS. At previous SPIE conferences, we presented methods underlying Fuzzy Object Models (FOMs) for Automatic Anatomy Recognition (AAR) based on CT images of the thorax and the abdomen. The purpose of this paper is to demonstrate that the AAR approach is applicable to a different body region and image modality combination, namely the study of upper airway structures via MRI. FOMs were built hierarchically, the smaller sub-objects forming the offspring of larger parent objects. FOMs encode the uncertainty and variability present in the form and relationships among the objects over a study population. In total, 11 basic objects (17 including composite objects) were modeled. Automatic recognition of the best pose of the FOMs in a given image was implemented using four methods: a one-shot method that does not require search, and three search-based methods, namely a Fisher Linear Discriminant (FLD) method, a b-scale energy optimization strategy, and an optimum-threshold recognition method. In all, 30 multi-fold cross-validation experiments based on 15 patient MRI data sets were carried out to assess the accuracy of recognition. The results indicate that the objects can be recognized with an average location error of less than 5 mm, or 2-3 voxels. The iterative relative fuzzy connectedness (IRFC) algorithm was then adopted for delineation of the target organs based on the recognition results. The delineation results showed an overall FP and TP volume fraction of 0.02 and 0.93.

  19. Retinal vessel enhancement based on the Gaussian function and image fusion

    NASA Astrophysics Data System (ADS)

    Moraru, Luminita; Obreja, Cristian Dragoş

    2017-01-01

    The Gaussian function is essential in the construction of the Frangi and COSFIRE (combination of shifted filter responses) filters. Connecting broken vessels and accurately extracting the vascular structure are the main goals of this study. Thus, the outputs of the Frangi and COSFIRE edge detection algorithms are fused using the Dempster-Shafer algorithm with the aim of improving detection and enhancing the retinal vascular structure. For objective results, the average diameters of the retinal vessels provided by the Frangi, COSFIRE, and Dempster-Shafer fusion algorithms are measured. These experimental values are compared to ground-truth values provided by manually segmented retinal images. We demonstrate the superiority of the fusion algorithm in terms of image quality using the figure-of-merit objective metric, which correlates the effects of all post-processing techniques.
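
    A rough sketch of the fusion idea is given below: two multiscale Frangi vesselness responses are converted to per-pixel mass assignments and combined with Dempster's rule. The second Frangi response merely stands in for the COSFIRE detector (no COSFIRE implementation is assumed), and assigning the residual mass to uncertainty rather than to 'background' is a simplifying assumption that removes conflict from the combination.

```python
import numpy as np
from skimage.filters import frangi

def vessel_mass(response, alpha=0.9):
    # Normalize a vesselness response and use it as the mass assigned to 'vessel';
    # the remaining mass goes to the whole frame (uncertainty).
    r = response / (response.max() + 1e-12)
    return alpha * r

def fused_vesselness(image):
    # image: 2-D float array (e.g. the green channel of a fundus photograph).
    m1 = vessel_mass(frangi(image, sigmas=range(1, 4)))   # fine scales
    m2 = vessel_mass(frangi(image, sigmas=range(3, 8)))   # coarser scales
    u1, u2 = 1.0 - m1, 1.0 - m2
    # With no mass assigned to 'background' there is no conflict, so Dempster's
    # rule reduces to: belief(vessel) = m1*m2 + m1*u2 + u1*m2.
    return m1 * m2 + m1 * u2 + u1 * m2
```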

  20. Field methods to measure surface displacement and strain with the Video Image Correlation method

    NASA Technical Reports Server (NTRS)

    Maddux, Gary A.; Horton, Charles M.; Mcneill, Stephen R.; Lansing, Matthew D.

    1994-01-01

    The objective of this project was to develop methods and application procedures to measure displacement and strain fields during the structural testing of aerospace components using paint speckle in conjunction with the Video Image Correlation (VIC) system.

  1. Imaging the inside of thick structures using cosmic rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guardincerri, E., E-mail: elenaguardincerri@lanl.gov; Durham, J. M.; Morris, C.

    2016-01-15

    The authors present here a new method to image reinforcement elements inside thick structures and the results of a demonstration measurement performed on a mock-up wall built at Los Alamos National Laboratory. The method, referred to as “multiple scattering muon radiography”, relies on the use of cosmic-ray muons as probes. The work described in this article was performed to prove the viability of the technique as a means to image the interior of the dome of Florence Cathedral Santa Maria del Fiore, one of the UNESCO World Heritage sites and among the highest profile buildings in existence. Its result shows the effectiveness of the technique as a tool to radiograph thick structures and image denser objects inside them.

  2. Imaging the inside of thick structures using cosmic rays

    DOE PAGES

    Guardincerri, E.; Durham, J. M.; Morris, C.; ...

    2016-01-01

    Here, we present a new method to image reinforcement elements inside thick structures and the results of a demonstration measurement performed on a mock-up wall built at Los Alamos National Laboratory. The method, referred to as “multiple scattering muon radiography”, relies on the use of cosmic-ray muons as probes. Our work was performed to prove the viability of the technique as a means to image the interior of the dome of Florence Cathedral Santa Maria del Fiore, one of the UNESCO World Heritage sites and among the highest profile buildings in existence. This result shows the effectiveness of the technique as a tool to radiograph thick structures and image denser objects inside them.

  3. Adaptive structured dictionary learning for image fusion based on group-sparse-representation

    NASA Astrophysics Data System (ADS)

    Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei

    2018-04-01

    Dictionary learning is the key process of sparse representation, which is one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group structure information and the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm maximum fusion rule that innovatively utilizes grouped sparse coefficients to merge the images. In the dictionary learning algorithm, we do not need prior knowledge about any group structure of the dictionary. By using the characteristics of the dictionary in expressing the signal, our algorithm can automatically find the desired potential structure information that is hidden in the dictionary. The fusion rule exploits the physical meaning of the group-structured dictionary and makes activity-level judgments on the structure information when the images are merged. Therefore, the fused image can retain more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that the dictionary learning algorithm and the fusion rule both outperform others in terms of several objective evaluation metrics.
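
    A generic sparse-representation fusion baseline with an l1-norm maximum rule can be sketched as below using scikit-learn's dictionary learning and OMP coding. It does not implement the adaptive group-structured dictionary or the group-sparse coding proposed in the paper; the patch size, number of atoms, and sparsity level are illustrative, and the two inputs are assumed to be registered, same-size grayscale float arrays.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import (extract_patches_2d,
                                               reconstruct_from_patches_2d)

def fuse_with_learned_dictionary(img_a, img_b, patch=(8, 8), n_atoms=64):
    # Extract all overlapping patches and flatten them into vectors.
    pa = extract_patches_2d(img_a, patch).reshape(-1, patch[0] * patch[1])
    pb = extract_patches_2d(img_b, patch).reshape(-1, patch[0] * patch[1])
    # Learn a single dictionary from the patches of both source images.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0)
    D = dico.fit(np.vstack([pa, pb])).components_
    # Sparse-code every patch of each source with OMP.
    ca = sparse_encode(pa, D, algorithm='omp', n_nonzero_coefs=4)
    cb = sparse_encode(pb, D, algorithm='omp', n_nonzero_coefs=4)
    # l1-norm maximum rule: keep, per patch position, the coefficient vector
    # with the larger l1 norm (i.e. the more "active" source).
    keep_a = np.abs(ca).sum(axis=1) >= np.abs(cb).sum(axis=1)
    fused_coeff = np.where(keep_a[:, None], ca, cb)
    # Rebuild patches from the fused coefficients and average the overlaps.
    fused_patches = (fused_coeff @ D).reshape(-1, patch[0], patch[1])
    return reconstruct_from_patches_2d(fused_patches, img_a.shape)
```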

  4. Documenting the information content of images.

    PubMed Central

    Bidgood, W. D.

    1997-01-01

    A standards-based message and terminology architecture has been specified to enable large-scale open and non-proprietary interchange of imaging-procedure descriptions and image-interpretation reports providing semantically-rich linkage of linguistic and non-linguistic information. The DICOM Structured Reporting Supplement, now available for trial use, embodies this interdependent message/terminology architecture. A DICOM structured report object is a self-describing information structure that can be tailored to support diverse clinical observation reporting applications by utilization of templates and context-dependent terminology from an external message/terminology mapping resource such as the SNOMED DICOM Microglossary (SDM), HL7 Vocabulary, or Terminology Resource for Message Standards (TeRMS). PMID:9357661

  5. Processing And Display Of Medical Three Dimensional Arrays Of Numerical Data Using Octree Encoding

    NASA Astrophysics Data System (ADS)

    Amans, Jean-Louis; Darier, Pierre

    1986-05-01

    Imaging modalities such as X-ray computerized tomography (CT), nuclear medicine, and nuclear magnetic resonance can produce three-dimensional (3-D) arrays of numerical data describing the internal structures of medical objects. The analysis of 3-D data by synthetic generation of realistic images is an important area of computer graphics and imaging.

  6. Three-dimensional surface profile intensity correction for spatially modulated imaging

    NASA Astrophysics Data System (ADS)

    Gioux, Sylvain; Mazhar, Amaan; Cuccia, David J.; Durkin, Anthony J.; Tromberg, Bruce J.; Frangioni, John V.

    2009-05-01

    We describe a noncontact profile correction technique for quantitative, wide-field optical measurement of tissue absorption (μa) and reduced scattering (μs') coefficients, based on geometric correction of the sample's Lambertian (diffuse) reflectance intensity. Because the projection of structured light onto an object is the basis for both phase-shifting profilometry and modulated imaging, we were able to develop a single instrument capable of performing both techniques. In so doing, the surface of the three-dimensional object could be acquired and used to extract the object's optical properties. The optical properties of flat polydimethylsiloxane (silicone) phantoms with homogenous tissue-like optical properties were extracted, with and without profilometry correction, after vertical translation and tilting of the phantoms at various angles. Objects having a complex shape, including a hemispheric silicone phantom and human fingers, were acquired and similarly processed, with vascular constriction of a finger being readily detectable through changes in its optical properties. Using profilometry correction, the accuracy of extracted absorption and reduced scattering coefficients improved from two- to ten-fold for surfaces having height variations as much as 3 cm and tilt angles as high as 40 deg. These data lay the foundation for employing structured light for quantitative imaging during surgery.
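
    The profilometry half of such an instrument commonly recovers a wrapped phase map from a small number of shifted fringe images; the standard three-step formula is sketched below. The specific phase-shifting scheme used by the authors is not assumed, and height recovery additionally requires phase unwrapping and calibration.

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    # i1, i2, i3: fringe images captured with phase shifts of 0, 2*pi/3, 4*pi/3.
    # Returns the wrapped phase; unwrapping and calibration are needed to map
    # the phase to surface height.
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```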

  7. A simultaneous beta and coincidence-gamma imaging system for plant leaves

    NASA Astrophysics Data System (ADS)

    Ranjbar, Homayoon; Wen, Jie; Mathews, Aswin J.; Komarov, Sergey; Wang, Qiang; Li, Ke; O'Sullivan, Joseph A.; Tai, Yuan-Chuan

    2016-05-01

    Positron emitting isotopes, such as 11C, 13N, and 18F, can be used to label molecules. The tracers, such as 11CO2, are delivered to plants to study their biological processes, particularly metabolism and photosynthesis, which may contribute to the development of plants that have a higher yield of crops and biomass. Measurements and resulting images from PET scanners are not quantitative in young plant structures or in plant leaves due to poor positron annihilation in thin objects. To address this problem we have designed, assembled, modeled, and tested a nuclear imaging system (simultaneous beta-gamma imager). The imager can simultaneously detect positrons (β+) and coincidence-gamma rays (γ). The imaging system employs two planar detectors; one is a regular gamma detector which has a LYSO crystal array, and the other is a phoswich detector which has an additional BC-404 plastic scintillator for beta detection. A forward model for positrons is proposed along with a joint image reconstruction formulation to utilize the beta and coincidence-gamma measurements for estimating radioactivity distribution in plant leaves. The joint reconstruction algorithm first reconstructs beta and gamma images independently to estimate the thickness component of the beta forward model and afterward jointly estimates the radioactivity distribution in the object. We have validated the physics model and reconstruction framework through a phantom imaging study and imaging a tomato leaf that has absorbed 11CO2. The results demonstrate that the simultaneously acquired beta and coincidence-gamma data, combined with our proposed joint reconstruction algorithm, improved the quantitative accuracy of estimating radioactivity distribution in thin objects such as leaves. We used the structural similarity (SSIM) index for comparing the leaf images from the simultaneous beta-gamma imager with the ground truth image. The jointly reconstructed images yield SSIM indices of 0.69 and 0.63, whereas the separately reconstructed beta alone and gamma alone images had indices of 0.33 and 0.52, respectively.

  8. A simultaneous beta and coincidence-gamma imaging system for plant leaves.

    PubMed

    Ranjbar, Homayoon; Wen, Jie; Mathews, Aswin J; Komarov, Sergey; Wang, Qiang; Li, Ke; O'Sullivan, Joseph A; Tai, Yuan-Chuan

    2016-05-07

    Positron emitting isotopes, such as (11)C, (13)N, and (18)F, can be used to label molecules. The tracers, such as (11)CO2, are delivered to plants to study their biological processes, particularly metabolism and photosynthesis, which may contribute to the development of plants that have a higher yield of crops and biomass. Measurements and resulting images from PET scanners are not quantitative in young plant structures or in plant leaves due to poor positron annihilation in thin objects. To address this problem we have designed, assembled, modeled, and tested a nuclear imaging system (simultaneous beta-gamma imager). The imager can simultaneously detect positrons (β+) and coincidence-gamma rays (γ). The imaging system employs two planar detectors; one is a regular gamma detector which has a LYSO crystal array, and the other is a phoswich detector which has an additional BC-404 plastic scintillator for beta detection. A forward model for positrons is proposed along with a joint image reconstruction formulation to utilize the beta and coincidence-gamma measurements for estimating radioactivity distribution in plant leaves. The joint reconstruction algorithm first reconstructs beta and gamma images independently to estimate the thickness component of the beta forward model and afterward jointly estimates the radioactivity distribution in the object. We have validated the physics model and reconstruction framework through a phantom imaging study and imaging a tomato leaf that has absorbed (11)CO2. The results demonstrate that the simultaneously acquired beta and coincidence-gamma data, combined with our proposed joint reconstruction algorithm, improved the quantitative accuracy of estimating radioactivity distribution in thin objects such as leaves. We used the structural similarity (SSIM) index for comparing the leaf images from the simultaneous beta-gamma imager with the ground truth image. The jointly reconstructed images yield SSIM indices of 0.69 and 0.63, whereas the separately reconstructed beta alone and gamma alone images had indices of 0.33 and 0.52, respectively.
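
    For reference, an SSIM index of the kind used in the comparison above can be computed with scikit-image; the sketch below uses synthetic arrays in place of the reconstructed and ground-truth leaf images (names and data are illustrative only):

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Placeholder arrays standing in for a ground-truth image and a reconstruction.
rng = np.random.default_rng(0)
ground_truth = rng.random((128, 128))
reconstructed = ground_truth + 0.05 * rng.standard_normal((128, 128))

score = ssim(ground_truth, reconstructed,
             data_range=reconstructed.max() - reconstructed.min())
print(f"SSIM index: {score:.2f}")   # 1.0 would mean identical structure
```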

  9. A multiple-point spatially weighted k-NN method for object-based classification

    NASA Astrophysics Data System (ADS)

    Tang, Yunwei; Jing, Linhai; Li, Hui; Atkinson, Peter M.

    2016-10-01

    Object-based classification, commonly referred to as object-based image analysis (OBIA), is now commonly regarded as able to produce more appealing classification maps, often of greater accuracy, than pixel-based classification and its application is now widespread. Therefore, improvement of OBIA using spatial techniques is of great interest. In this paper, multiple-point statistics (MPS) is proposed for object-based classification enhancement in the form of a new multiple-point k-nearest neighbour (k-NN) classification method (MPk-NN). The proposed method first utilises a training image derived from a pre-classified map to characterise the spatial correlation between multiple points of land cover classes. The MPS borrows spatial structures from other parts of the training image, and then incorporates this spatial information, in the form of multiple-point probabilities, into the k-NN classifier. Two satellite sensor images with a fine spatial resolution were selected to evaluate the new method. One is an IKONOS image of the Beijing urban area and the other is a WorldView-2 image of the Wolong mountainous area, in China. The images were object-based classified using the MPk-NN method and several alternatives, including the k-NN, the geostatistically weighted k-NN, the Bayesian method, the decision tree classifier (DTC), and the support vector machine classifier (SVM). It was demonstrated that the new spatial weighting based on MPS can achieve greater classification accuracy relative to the alternatives and it is, thus, recommended as appropriate for object-based classification.

  10. Plasmonics and metamaterials based super-resolution imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Liu, Zhaowei

    2017-05-01

    In recent years, surface imaging of various biological dynamics and biomechanical phenomena has seen a surge of interest. Imaging of processes such as exocytosis and kinesin motion is most effective when depth is limited to a very thin region of interest at the edge of the cell or specimen. However, many objects and processes of interest are of size scales below the diffraction limit for safe, visible wavelength illumination. Super-resolution imaging methods such as structured illumination microscopy and others have offered various compromises between resolution, imaging speed, and bio-compatibility. In this talk, I will present our most recent progress in plasmonic structured illumination microscopy (PSIM) and localized plasmonic structured illumination microscopy (LPSIM), and their applications in bio-imaging. We have achieved wide-field surface imaging with resolution down to 75 nm while maintaining reasonable speed and compatibility with biological specimens. These plasmon-enhanced super-resolution techniques offer a unique route to obtaining 50 nm spatial resolution and wide-field imaging speeds of 50 frames per second at the same time.

  11. Super-resolution differential interference contrast microscopy by structured illumination.

    PubMed

    Chen, Jianling; Xu, Yan; Lv, Xiaohua; Lai, Xiaomin; Zeng, Shaoqun

    2013-01-14

    We propose a structured illumination differential interference contrast (SI-DIC) microscopy, breaking the diffraction resolution limit of differential interference contrast (DIC) microscopy. SI-DIC extends the bandwidth of coherent transfer function of the DIC imaging system, thus the resolution is improved. With 0.8 numerical aperture condenser and objective, the reconstructed SI-DIC image of 53 nm polystyrene beads reveals lateral resolution of approximately 190 nm, doubling that of the conventional DIC image. We also demonstrate biological observations of label-free cells with improved spatial resolution. The SI-DIC microscopy can provide sub-diffraction resolution and high contrast images with marker-free specimens, and has the potential for achieving sub-diffraction resolution quantitative phase imaging.

  12. Electric Potential and Electric Field Imaging with Applications

    NASA Technical Reports Server (NTRS)

    Generazio, Ed

    2016-01-01

    The technology and techniques for remote quantitative imaging of electrostatic potentials and electrostatic fields in and around objects and in free space is presented. Electric field imaging (EFI) technology may be applied to characterize intrinsic or existing electric potentials and electric fields, or an externally generated electrostatic field may be used for (illuminating) volumes to be inspected with EFI. The baseline sensor technology, electric field sensor (e-sensor), and its construction, optional electric field generation (quasistatic generator), and current e-sensor enhancements (ephemeral e-sensor) are discussed. Demonstrations for structural, electronic, human, and memory applications are shown. This new EFI capability is demonstrated to reveal characterization of electric charge distribution, creating a new field of study that embraces areas of interest including electrostatic discharge mitigation, crime scene forensics, design and materials selection for advanced sensors, dielectric morphology of structures, inspection of containers, inspection for hidden objects, tether integrity, organic molecular memory, and medical diagnostic and treatment efficacy applications such as cardiac polarization wave propagation and electromyography imaging.

  13. Semantics and technologies in modern design of interior stairs

    NASA Astrophysics Data System (ADS)

    Kukhta, M.; Sokolov, A.; Pelevin, E.

    2015-10-01

    Use of metal in the design of interior stairs offers new possibilities for shaping and can be implemented using different technologies. The article discusses the design features and production technologies of a forged-metal spiral staircase, considering image semantics based on historical and cultural heritage. To achieve this objective, a structural-semantic method was applied (to identify the organization of the structure and the semantic features of the artistic image), together with engineering methods (to justify the construction of the object), anthropometry and ergonomics (to ensure usability), and comparative analysis (to reveal how the staircase has been treated in different periods of culture). The research results are as follows. The influence of semantics, based on the World Tree image, on the design of the interior staircase was revealed. A rational calculation of the steps was suggested to ensure the required strength. Finally, a technology providing the realization of the artistic image was presented. The practical part of the work presents a version of the forged staircase.

  14. Faint Object Camera observations of M87 - The jet and nucleus

    NASA Technical Reports Server (NTRS)

    Boksenberg, A.; Macchetto, F.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.; Kamperman, T. M.

    1992-01-01

    UV and optical images of the central region and jet of the nearby elliptical galaxy M87 have been obtained with about 0.1 arcsec resolution in several spectral bands with the Faint Object Camera (FOC) on the HST, including polarization images. Deconvolution enhances the contrast of the complex structure and filamentary patterns in the jet already evident in the aberrated images. Morphologically there is close similarity between the FOC images of the extended jet and the best 2-cm radio maps obtained at similar resolution, and the magnetic field vectors from the UV and radio polarimetric data also correspond well. We observe structure in the inner jet within a few tenths arcsec of the nucleus which also has been well studied at radio wavelengths. Our UV and optical photometry of regions along the jet shows little variation in spectral index from the value 1.0 between markedly different regions and no trend to a steepening spectrum with distance along the jet.

  15. Measurement of spatial refractive index distributions of fusion spliced optical fibers by digital holographic microtomography

    NASA Astrophysics Data System (ADS)

    Pan, Feng; Deng, Yating; Ma, Xichao; Xiao, Wen

    2017-11-01

    Digital holographic microtomography is improved and applied to the measurements of three-dimensional refractive index distributions of fusion spliced optical fibers. Tomographic images are reconstructed from full-angle phase projection images obtained with a setup-rotation approach, in which the laser source, the optical system and the image sensor are arranged on an optical breadboard and synchronously rotated around the fixed object. For retrieving high-quality tomographic images, a numerical method is proposed to compensate the unwanted movements of the object in the lateral, axial and vertical directions during rotation. The compensation is implemented on the two-dimensional phase images instead of the sinogram. The experimental results exhibit distinctly the internal structures of fusion splices between a single-mode fiber and other fibers, including a multi-mode fiber, a panda polarization maintaining fiber, a bow-tie polarization maintaining fiber and a photonic crystal fiber. In particular, the internal structure distortion in the fusion areas can be intuitively observed, such as the expansion of the stress zones of polarization maintaining fibers, the collapse of the air holes of photonic crystal fibers, etc.

  16. Monte-Carlo simulation of OCT structural images of human skin using experimental B-scans and voxel based approach to optical properties distribution

    NASA Astrophysics Data System (ADS)

    Frolov, S. V.; Potlov, A. Yu.; Petrov, D. A.; Proskurin, S. G.

    2017-03-01

    A method for reconstructing optical coherence tomography (OCT) structural images using Monte Carlo simulations is described. The biological object is considered as a set of 3D elements (voxels), which allows simulation of media whose structure cannot be described analytically. Each voxel is characterized by its refractive index, anisotropy parameter, and scattering and absorption coefficients. B-scans of the inner structure are used to reconstruct a simulated image instead of an analytical representation of the boundary geometry. The Henyey-Greenstein scattering function, the Beer-Lambert-Bouguer law and the Fresnel equations are used to describe photon transport. The efficiency of the described technique is verified by comparing simulated and experimentally acquired A-scans.
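
    A hedged sketch of one step in such a voxel-based photon transport simulation is sampling the scattering angle from the Henyey-Greenstein phase function; the code below uses the standard sampling formula (not the authors' implementation), with illustrative names:

```python
import numpy as np

def sample_hg_cos_theta(g, rng):
    """Draw cos(theta) for one scattering event from the Henyey-Greenstein distribution."""
    xi = rng.random()
    if abs(g) < 1e-6:                        # isotropic scattering limit
        return 2.0 * xi - 1.0
    tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - tmp * tmp) / (2.0 * g)

rng = np.random.default_rng(42)
samples = [sample_hg_cos_theta(0.9, rng) for _ in range(5)]
print(samples)   # strongly forward-peaked (values near 1) for anisotropy g = 0.9
```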

  17. Phase contrast portal imaging using synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Umetani, K.; Kondoh, T.

    2014-07-01

    Microbeam radiation therapy is an experimental form of radiation treatment with great potential to improve the treatment of many types of cancer. We applied a synchrotron radiation phase contrast technique to portal imaging to improve targeting accuracy for microbeam radiation therapy in experiments using small animals. An X-ray imaging detector was installed 6.0 m downstream from an object to produce a high-contrast edge enhancement effect in propagation-based phase contrast imaging. Images of a mouse head sample were obtained using therapeutic white synchrotron radiation with a mean beam energy of 130 keV. Compared to conventional portal images, remarkably clear images of bones surrounding the cerebrum were acquired in an air environment for positioning brain lesions with respect to the skull structure without confusion with overlapping surface structures.

  18. Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating

    NASA Astrophysics Data System (ADS)

    Heintzmann, Rainer; Cremer, Christoph G.

    1999-01-01

    High spatial frequencies in the illuminating light of microscopes lead to a shift of the object spatial frequencies detectable through the objective lens. If a suitable procedure is found for evaluation of the measured data, a microscopic image with a higher resolution than under flat illumination can be obtained. A simple method for generation of a laterally modulated illumination pattern is discussed here. A specially constructed diffraction grating was inserted in the illumination beam path at the conjugate object plane (position of the adjustable aperture) and projected through the objective into the object. Microscopic beads were imaged with this method and evaluated with an algorithm based on the structure of the Fourier space. The results indicate an improvement of resolution.
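
    A toy numpy illustration (not from the paper) of the frequency-shifting principle described above: multiplying the object by a sinusoidal illumination of spatial frequency k0 creates copies of the object spectrum shifted by ±k0, so object frequencies otherwise outside the passband appear at lower, detectable frequencies:

```python
# Product spectrum contains |k_obj - k_illum| and k_obj + k_illum
# in addition to the original frequencies.
import numpy as np

n = 512
x = np.arange(n)
k_obj, k_illum = 60, 50                            # cycles per field of view
obj = 1.0 + np.cos(2 * np.pi * k_obj * x / n)      # object with one spatial frequency
illum = 1.0 + np.cos(2 * np.pi * k_illum * x / n)  # sinusoidal illumination pattern

spectrum = np.abs(np.fft.rfft(obj * illum))
peaks = np.argsort(spectrum)[::-1][:5]             # five strongest frequency bins
print(np.sort(peaks))                              # [0 10 50 60 110]: 10 is the down-shifted copy
```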

  19. Super-resolution photoacoustic microscopy using joint sparsity

    NASA Astrophysics Data System (ADS)

    Burgholzer, P.; Haltmeier, M.; Berer, T.; Leiss-Holzinger, E.; Murray, T. W.

    2017-07-01

    We present an imaging method that uses the random optical speckle patterns that naturally emerge as light propagates through strongly scattering media as a structured illumination source for photoacoustic imaging. Our approach, termed blind structured illumination photoacoustic microscopy (BSIPAM), was inspired by recent work in fluorescence microscopy where super-resolution imaging was demonstrated using multiple unknown speckle illumination patterns. We extend this concept to the multiple scattering domain using photoacoustics (PA), with the speckle pattern serving to generate ultrasound. The optical speckle pattern that emerges as light propagates through diffuse media provides structured illumination to an object placed behind a scattering wall. The photoacoustic signal produced by such illumination is detected using a focused ultrasound transducer. We demonstrate through both simulation and experiment that, by acquiring multiple photoacoustic images, each produced by a different random and unknown speckle pattern, an image of an absorbing object can be reconstructed with a spatial resolution far exceeding that of the ultrasound transducer. We experimentally and numerically demonstrate a gain in resolution of more than a factor of two by using multiple speckle illuminations. The variations in the photoacoustic signals generated with random speckle patterns are utilized in BSIPAM using a novel reconstruction algorithm. Exploiting joint sparsity, this algorithm is capable of reconstructing the absorbing structure from measured PA signals with a resolution close to the speckle size. Another way to generate random excitation for photoacoustic imaging is with small absorbing particles, including contrast agents, which flow through small vessels. For such a set-up, the joint sparsity is generated by the fact that all the particles move in the same vessels. Structured illumination is not necessary in that case.

  20. Standardizing Quality Assessment of Fused Remotely Sensed Images

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Moellmann, J.; Fries, K.

    2017-09-01

    The multitude of available operational remote sensing satellites has led to the development of many image fusion techniques to provide high spatial, spectral and temporal resolution images. The comparison of different techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: 1. qualitatively, by visual interpretation, and 2. quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective and a quantitative assessment is done by different criteria. Depending on the criteria and indices, the result varies. Therefore it is necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective evaluation of image fusion quality. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process to objectively compare fused image quality. First, established image fusion quality assessment protocols, i.e. Quality with No Reference (QNR) and Khan's protocol, were compared on various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports on the results of the comparison and provides recommendations for future research.
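
    As an example of the quantitative side, one index widely used in this literature (and a building block of QNR-style protocols) is the Wang-Bovik universal image quality index Q; the hedged sketch below computes it globally over a single band rather than over sliding windows, with synthetic data standing in for a fused and a reference band:

```python
import numpy as np

def quality_index_q(x, y):
    """Wang-Bovik universal image quality index between two bands (global version)."""
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(1)
band = rng.random((64, 64)) * 255
print(quality_index_q(band, band))          # 1.0 for identical images
print(quality_index_q(band, band + 20.0))   # drops when radiometry is distorted
```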

  1. Low-dose dynamic myocardial perfusion CT image reconstruction using pre-contrast normal-dose CT scan induced structure tensor total variation regularization

    NASA Astrophysics Data System (ADS)

    Gong, Changfei; Han, Ce; Gan, Guanghui; Deng, Zhenxiang; Zhou, Yongqiang; Yi, Jinling; Zheng, Xiaomin; Xie, Congying; Jin, Xiance

    2017-04-01

    Dynamic myocardial perfusion CT (DMP-CT) imaging provides quantitative functional information for diagnosis and risk stratification of coronary artery disease by calculating myocardial perfusion hemodynamic parameter (MPHP) maps. However, the level of radiation delivered by the dynamic sequential scan protocol can be potentially high. The purpose of this work is to develop a pre-contrast normal-dose scan induced structure tensor total variation regularization based on the penalized weighted least-squares (PWLS) criterion to improve the image quality of DMP-CT with a low-mAs CT acquisition. For simplicity, the present approach is termed ‘PWLS-ndiSTV’. Specifically, the ndiSTV regularization takes into account the spatial-temporal structure information of DMP-CT data and further exploits the higher order derivatives of the objective images to enhance denoising performance. Subsequently, an effective optimization algorithm based on the split-Bregman approach was adopted to minimize the associated objective function. Evaluations with a modified dynamic XCAT phantom and preclinical porcine datasets have demonstrated that the proposed PWLS-ndiSTV approach can achieve promising gains over other existing approaches in terms of noise-induced artifact mitigation, edge detail preservation, and accurate MPHP map calculation.

  2. Binocular stereo matching method based on structure tensor

    NASA Astrophysics Data System (ADS)

    Song, Xiaowei; Yang, Manyi; Fan, Yubo; Yang, Lei

    2016-10-01

    In a binocular visual system, to recover the three-dimensional information of the object, the most important step is to acquire matching points. Structure tensor is the vector representation of each point in its local neighborhood. Therefore, structure tensor performs well in region detection of local structure, and it is very suitable for detecting specific graphics such as pedestrians, cars and road signs in the image. In this paper, the structure tensor is combined with the luminance information to form the extended structure tensor. The directional derivatives of luminance in x and y directions are calculated, so that the local structure of the image is more prominent. Meanwhile, the Euclidean distance between the eigenvectors of key points is used as the similarity determination metric of key points in the two images. By matching, the coordinates of the matching points in the detected target are precisely acquired. In this paper, experiments were performed on the captured left and right images. After the binocular calibration, image matching was done to acquire the matching points, and then the target depth was calculated according to these matching points. By comparison, it is proved that the structure tensor can accurately acquire the matching points in binocular stereo matching.
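
    A minimal numpy/scipy sketch (an assumed implementation, not the paper's) of the 2-D structure tensor computation described above: directional derivatives of luminance are computed, and their locally smoothed outer products form the tensor whose eigenstructure characterizes each pixel's neighbourhood:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor(image, sigma=1.5):
    """Return the locally averaged structure tensor components Jxx, Jxy, Jyy."""
    ix = sobel(image.astype(float), axis=1)   # derivative along x (columns)
    iy = sobel(image.astype(float), axis=0)   # derivative along y (rows)
    jxx = gaussian_filter(ix * ix, sigma)     # smoothed outer products of the gradient
    jxy = gaussian_filter(ix * iy, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    return jxx, jxy, jyy

img = np.zeros((64, 64))
img[:, 32:] = 1.0                             # vertical step edge
jxx, jxy, jyy = structure_tensor(img)
print(jxx[32, 32] > jyy[32, 32])              # True: gradient energy along x dominates at the edge
```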

  3. Photogrammetry in 3d Modelling of Human Bone Structures from Radiographs

    NASA Astrophysics Data System (ADS)

    Hosseinian, S.; Arefi, H.

    2017-05-01

    Photogrammetry can have a great impact on the success of medical processes for diagnosis, treatment and surgery. Precise 3D models, which can be achieved by photogrammetry, considerably improve the results of orthopedic surgeries and processes. The usual 3D imaging techniques, computed tomography (CT) and magnetic resonance imaging (MRI), have some limitations, such as being used only in non-weight-bearing positions, cost, high radiation dose (for CT), and the limitations of MRI for patients with ferromagnetic implants or objects in their bodies. 3D reconstruction of bony structures from biplanar X-ray images is a reliable and accepted alternative for achieving accurate 3D information with a low radiation dose in weight-bearing positions. The information can be obtained from multi-view radiographs by using photogrammetry. The primary step for 3D reconstruction of human bone structures from medical X-ray images is calibration, which is done by applying the principles of photogrammetry. After the calibration step, 3D reconstruction can be done using efficient methods with different levels of automation. Because X-ray images differ in nature from optical images, the calibration step of stereoradiography poses distinct challenges in medical applications. In this paper, after demonstrating the general steps and principles of 3D reconstruction from X-ray images, calibration methods for 3D reconstruction from radiographs are compared and assessed from a photogrammetric point of view by considering various criteria such as their camera models, calibration objects, accuracy, availability, patient-friendliness and cost.

  4. Blind Bayesian restoration of adaptive optics telescope images using generalized Gaussian Markov random field models

    NASA Astrophysics Data System (ADS)

    Jeffs, Brian D.; Christou, Julian C.

    1998-09-01

    This paper addresses post processing for resolution enhancement of sequences of short exposure adaptive optics (AO) images of space objects. The unknown residual blur is removed using Bayesian maximum a posteriori blind image restoration techniques. In the problem formulation, both the true image and the unknown blur psf's are represented by the flexible generalized Gaussian Markov random field (GGMRF) model. The GGMRF probability density function provides a natural mechanism for expressing available prior information about the image and blur. Incorporating such prior knowledge in the deconvolution optimization is crucial for the success of blind restoration algorithms. For example, space objects often contain sharp edge boundaries and geometric structures, while the residual blur psf in the corresponding partially corrected AO image is spectrally band limited and exhibits smoothed, random, texture-like features on a peaked central core. By properly choosing parameters, GGMRF models can accurately represent both the blur psf and the object, and serve to regularize the deconvolution problem. These two GGMRF models also serve as discriminator functions to separate blur and object in the solution. Algorithm performance is demonstrated with examples from synthetic AO images. Results indicate significant resolution enhancement when applied to partially corrected AO images. An efficient computational algorithm is described.
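
    To make the prior concrete, a hedged sketch of the negative log-density of a GGMRF over a 4-neighbour pixel lattice is given below; the shape parameter p interpolates between an edge-preserving, Laplacian-like prior (p near 1) and a Gaussian prior (p = 2). The neighbourhood choice, weights and names are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def ggmrf_neg_log_prior(x, p=1.2, sigma=1.0):
    """Negative log-prior of a GGMRF over horizontal and vertical neighbour pairs."""
    dx = np.abs(np.diff(x, axis=1)) ** p           # horizontal neighbour differences
    dy = np.abs(np.diff(x, axis=0)) ** p           # vertical neighbour differences
    return (dx.sum() + dy.sum()) / (p * sigma ** p)

smooth = np.tile(np.linspace(0, 1, 32), (32, 1))   # gentle ramp: low prior cost
rough = np.random.default_rng(0).random((32, 32))  # rough image: high prior cost
print(ggmrf_neg_log_prior(smooth) < ggmrf_neg_log_prior(rough))   # True
```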

  5. Techniques of noninvasive optical tomographic imaging

    NASA Astrophysics Data System (ADS)

    Rosen, Joseph; Abookasis, David; Gokhler, Mark

    2006-01-01

    Recently invented methods of optical tomographic imaging through scattering and absorbing media are presented. In one method, the three-dimensional structure of an object hidden between two biological tissues is recovered from many noisy speckle pictures obtained on the output of a multi-channeled optical imaging system. Objects are recovered from many speckled images observed by a digital camera through two stereoscopic microlens arrays. Each microlens in each array generates a speckle image of the object buried between the layers. In the computer each image is Fourier transformed jointly with an image of the speckled point-like source captured under the same conditions. A set of the squared magnitudes of the Fourier-transformed pictures is accumulated to form a single average picture. This final picture is again Fourier transformed, resulting in the three-dimensional reconstruction of the hidden object. In the other method, the effect of spatial longitudinal coherence is used for imaging through an absorbing layer with different thickness, or different index of refraction, along the layer. The technique is based on synthesis of multiple peak spatial degree of coherence. This degree of coherence enables us to scan simultaneously different sample points on different altitudes, and thus decreases the acquisition time. The same multi peak degree of coherence is also used for imaging through the absorbing layer. Our entire experiments are performed with a quasi-monochromatic light source. Therefore problems of dispersion and inhomogeneous absorption are avoided.

  6. Application of Laser Imaging for Bio/geophysical Studies

    NASA Technical Reports Server (NTRS)

    Hummel, J. R.; Goltz, S. M.; Depiero, N. L.; Degloria, D. P.; Pagliughi, F. M.

    1992-01-01

    SPARTA, Inc. has developed a low-cost, portable laser imager that, among other applications, can be used in bio/geophysical applications. In the application to be discussed here, the system was utilized as an imaging system for background features in a forested locale. The SPARTA mini-ladar system was used at the International Paper Northern Experimental Forest near Howland, Maine to assist in a project designed to study the thermal and radiometric phenomenology at forest edges. The imager was used to obtain data from three complex sites, a 'seed' orchard, a forest edge, and a building. The goal of the study was to demonstrate the usefulness of the laser imager as a tool to obtain geometric and internal structure data about complex 3-D objects in a natural background. The data from these images have been analyzed to obtain information about the distributions of the objects in a scene. A range detection algorithm has been used to identify individual objects in a laser image and an edge detection algorithm then applied to highlight the outlines of discrete objects. An example of an image processed in such a manner is shown. Described here are the results from the study. In addition, results are presented outlining how the laser imaging system could be used to obtain other important information about bio/geophysical systems, such as the distribution of woody material in forests.

  7. Three-dimensional nanoscale imaging by plasmonic Brownian microscopy

    NASA Astrophysics Data System (ADS)

    Labno, Anna; Gladden, Christopher; Kim, Jeongmin; Lu, Dylan; Yin, Xiaobo; Wang, Yuan; Liu, Zhaowei; Zhang, Xiang

    2017-12-01

    Three-dimensional (3D) imaging at the nanoscale is key to the understanding of nanomaterials and complex systems. While scanning probe microscopy (SPM) has been the workhorse of nanoscale metrology, its slow scanning speed with a single probe tip can limit the application of SPM to wide-field imaging of complex 3D nanostructures. Both electron microscopy and optical tomography allow 3D imaging, but are limited, respectively, to use in a vacuum environment due to electron scattering and to micron-scale optical resolution. Here we demonstrate plasmonic Brownian microscopy (PBM) as a way to improve the imaging speed of SPM. Unlike photonic force microscopy, where a single trapped particle is used for serial scanning, PBM utilizes a massive number of plasmonic nanoparticles (NPs) under Brownian diffusion in solution to scan in parallel around the unlabeled sample object. The motion of NPs under an evanescent field is three-dimensionally localized to reconstruct the super-resolution topology of 3D dielectric objects. Our method allows high throughput imaging of complex 3D structures over a large field of view, even with internal structures such as cavities that cannot be accessed by conventional mechanical tips in SPM.

  8. Comparison of the Diagnostic Image Quality of the Canine Maxillary Dentoalveolar Structures Obtained by Cone Beam Computed Tomography and 64-Multidetector Row Computed Tomography.

    PubMed

    Soukup, Jason W; Drees, Randi; Koenig, Lisa J; Snyder, Christopher J; Hetzel, Scott; Miles, Chanda R; Schwarz, Tobias

    2015-01-01

    The objective of this blinded study was to validate the use of cone beam computed tomography (CT) for imaging of the canine maxillary dentoalveolar structures by comparing its diagnostic image quality with that of 64-multidetector row CT. Sagittal slices of a tooth-bearing segment of the maxilla of a commercially purchased dog skull embedded in methylmethacrylate were obtained along a line parallel with the dental arch using a commercial histology diamond saw. The slice of tooth-bearing bone that best depicted the dentoalveolar structures was chosen and photographed. The maxillary segment was imaged with cone beam CT and 64-multidetector row CT. Four blinded evaluators compared the cone beam CT and 64-multidetector row CT images, and image quality was scored as it related to the anatomy of dentoalveolar structures. Trabecular bone, enamel, dentin, pulp cavity, periodontal ligament space, and lamina dura were scored. In addition, a score depicting the evaluators' overall impression of the image was recorded. Images acquired with cone beam CT were found to be significantly superior in image quality to images acquired with 64-multidetector row CT overall, and in all scored categories. In our study setting, cone beam CT was found to be a valid and clinically superior imaging modality for the canine maxillary dentoalveolar structures when compared to 64-multidetector row CT.

  9. Comparison of the Diagnostic Image Quality of the Canine Maxillary Dentoalveolar Structures Obtained by Cone Beam Computed Tomography and 64-Multidetector Row Computed Tomography

    PubMed Central

    Soukup, Jason W.; Drees, Randi; Koenig, Lisa J.; Snyder, Christopher J.; Hetzel, Scott; Miles, Chanda R.; Schwarz, Tobias

    2016-01-01

    Summary The objective of this blinded study was to validate the use of cone beam computed tomography (CT) for imaging of the canine maxillary dentoalveolar structures by comparing its diagnostic image quality with that of 64-multidetector row CT. Sagittal slices of a tooth-bearing segment of the maxilla of a commercially purchased dog skull embedded in methyl methacrylate were obtained along a line parallel with the dental arch using a commercial histology diamond saw. The slice of tooth-bearing bone that best depicted the dentoalveolar structures was chosen and photographed. The maxilla segment was imaged with cone beam CT and 64-multidetector row CT. Four blinded evaluators compared the cone beam CT and 64-multidetector row CT images and image quality was scored as it related to the anatomy of dentoalveolar structures. Trabecular bone, enamel, dentin, pulp cavity, periodontal ligament space, and lamina dura were scored. In addition, a score depicting the evaluators overall impression of the image was recorded. Images acquired with cone beam CT were found to be significantly superior in image quality to images acquired with 64-multidetector row CT overall, and in all scored categories. In our study setting, cone beam CT was found to be a valid and clinically superior imaging modality for the canine maxillary dentoalveolar structures when compared to 64-multidetector row CT. PMID:26415384

  10. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound (US) medical imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first-fold process reduces speckle effectively but also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet-coefficient fusion of the object in the original US image and in the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with a normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a notable improvement in visual quality with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal to noise ratio (PSNR), normalized cross correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by AMMA hospital radiology labs at Vijayawada, India.
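
    A minimal sketch of the first-fold idea (wavelet-domain soft thresholding of a speckled image), assuming PyWavelets is available; the paper applies thresholding per non-overlapping block and adds the adaptive fusion stage, which is omitted here, and the threshold value and phantom are illustrative:

```python
import numpy as np
import pywt

def wavelet_soft_denoise(img, wavelet="db4", thresh=0.1):
    """Soft-threshold the detail coefficients of a 2-level 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(img, wavelet, level=2)
    denoised = [coeffs[0]]                                   # keep the approximation band
    for detail_level in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(d, thresh, mode="soft")
                              for d in detail_level))
    return pywt.waverec2(denoised, wavelet)

rng = np.random.default_rng(3)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # multiplicative noise
print(wavelet_soft_denoise(speckled).shape)
```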

  11. Investigating the performance of reconstruction methods used in structured illumination microscopy as a function of the illumination pattern's modulation frequency

    NASA Astrophysics Data System (ADS)

    Shabani, H.; Sánchez-Ortiga, E.; Preza, C.

    2016-03-01

    Surpassing the resolution of optical microscopy defined by the Abbe diffraction limit, while simultaneously achieving optical sectioning, is a challenging problem, particularly for live cell imaging of thick samples. Among a few developing techniques, structured illumination microscopy (SIM) addresses this challenge by imposing higher frequency information into the observable frequency band confined by the optical transfer function (OTF) of a conventional microscope, either doubling the spatial resolution or filling the missing cone, depending on the spatial frequency of the pattern, when the patterned illumination is two-dimensional. Standard reconstruction methods for SIM decompose the low and high frequency components from the recorded low-resolution images and then combine them to reach a high-resolution image. In contrast, model-based approaches rely on iterative optimization to minimize the error between estimated and forward images. In this paper, we study the performance of both groups of methods by simulating fluorescence microscopy images from different types of objects (ranging from simulated two-point sources to extended objects). These simulations are used to investigate the methods' effectiveness in restoring objects with various types of power spectrum when the modulation frequency of the patterned illumination changes from zero to the incoherent cut-off frequency of the imaging system. Our results show that increasing the amount of imposed information by using a higher modulation frequency of the illumination pattern does not always yield better restoration performance, which was found to depend on the underlying object. Results from model-based restoration show a performance improvement, quantified by an up to 62% drop in the mean square error compared to standard reconstruction, with increasing modulation frequency. However, we found cases for which results obtained with standard reconstruction methods do not follow the same trend.

  12. The Close-Up Imager Onboard the ESA ExoMars Rover: Objectives, Description, Operations, and Science Validation Activities.

    PubMed

    Josset, Jean-Luc; Westall, Frances; Hofmann, Beda A; Spray, John; Cockell, Charles; Kempe, Stephan; Griffiths, Andrew D; De Sanctis, Maria Cristina; Colangeli, Luigi; Koschny, Detlef; Föllmi, Karl; Verrecchia, Eric; Diamond, Larryn; Josset, Marie; Javaux, Emmanuelle J; Esposito, Francesca; Gunn, Matthew; Souchon-Leitner, Audrey L; Bontognali, Tomaso R R; Korablev, Oleg; Erkman, Suren; Paar, Gerhard; Ulamec, Stephan; Foucher, Frédéric; Martin, Philippe; Verhaeghe, Antoine; Tanevski, Mitko; Vago, Jorge L

    The Close-Up Imager (CLUPI) onboard the ESA ExoMars Rover is a powerful high-resolution color camera specifically designed for close-up observations. Its accommodation on the movable drill allows multiple positioning. The science objectives of the instrument are geological characterization of rocks in terms of texture, structure, and color and the search for potential morphological biosignatures. We present the CLUPI science objectives, performance, and technical description, followed by a description of the instrument's planned operations strategy during the mission on Mars. CLUPI will contribute to the rover mission by surveying the geological environment, acquiring close-up images of outcrops, observing the drilling area, inspecting the top portion of the drill borehole (and deposited fines), monitoring drilling operations, and imaging samples collected by the drill. A status of the current development and planned science validation activities is also given. Key Words: Mars-Biosignatures-Planetary Instrumentation. Astrobiology 17, 595-611.

  13. The use of neural networks and texture analysis for rapid objective selection of regions of interest in cytoskeletal images.

    PubMed

    Derkacs, Amanda D Felder; Ward, Samuel R; Lieber, Richard L

    2012-02-01

    Understanding cytoskeletal dynamics in living tissue is prerequisite to understanding mechanisms of injury, mechanotransduction, and mechanical signaling. Real-time visualization is now possible using transfection with plasmids that encode fluorescent cytoskeletal proteins. Using this approach with the muscle-specific intermediate filament protein desmin, we found that a green fluorescent protein-desmin chimeric protein was unevenly distributed throughout the muscle fiber, resulting in some image areas that were saturated as well as others that lacked any signal. Our goal was to analyze the muscle fiber cytoskeletal network quantitatively in an unbiased fashion. To objectively select areas of the muscle fiber that are suitable for analysis, we devised a method that provides objective classification of regions of images of striated cytoskeletal structures into "usable" and "unusable" categories. This method consists of a combination of spatial analysis of the image using Fourier methods along with a boosted neural network that "decides" on the quality of the image based on previous training. We trained the neural network using the expert opinion of three scientists familiar with these types of images. We found that this method was over 300 times faster than manual classification and that it permitted objective and accurate classification of image regions.

  14. Perceived crosstalk assessment on patterned retarder 3D display

    NASA Astrophysics Data System (ADS)

    Zou, Bochao; Liu, Yue; Huang, Yi; Wang, Yongtian

    2014-03-01

    CONTEXT: Nowadays, almost all stereoscopic displays suffer from crosstalk, which is one of the most dominant degradation factors of image quality and visual comfort for 3D display devices. To deal with such problems, it is worthwhile to quantify the amount of perceived crosstalk. OBJECTIVE: Crosstalk measurements are usually based on certain test patterns, but scene content effects are ignored. To evaluate the perceived crosstalk level for various scenes, a subjective test may provide a more accurate evaluation. However, it is a time-consuming approach and is unsuitable for real-time applications. Therefore, an objective metric that can reliably predict the perceived crosstalk is needed. A correct objective assessment of crosstalk for different scene contents would benefit the development of crosstalk minimization and cancellation algorithms, which could be used to bring a good quality of experience to viewers. METHOD: A patterned retarder 3D display is used to present 3D images in our experiment. By considering the mechanism of this kind of device, an appropriate simulation of crosstalk is realized by image processing techniques to assign different levels of crosstalk between image pairs. It can be seen from the literature that the structures of scenes have a significant impact on the perceived crosstalk, so we first extract the differences in structural information between original and distorted image pairs through the Structural SIMilarity (SSIM) algorithm, which can directly evaluate the structural changes between two complex-structured signals. The structural changes of the left view and right view are then computed respectively and combined into an overall distortion map. Under 3D viewing conditions, because of the added value of depth, the crosstalk of pop-out objects may be more perceptible. To model this effect, the depth map of a stereo pair is generated and the depth information is filtered by the distortion map. Moreover, human attention is an important factor for crosstalk assessment because, when viewing 3D content, perceptually salient regions are highly likely to be a major contributor to the quality of experience. To take this into account, perceptually significant regions are extracted, and a spatial pooling technique is used to combine the structural distortion map, depth map and visual salience map together to predict the perceived crosstalk more precisely. To verify the performance of the proposed crosstalk assessment metric, subjective experiments are conducted with 24 participants viewing and rating 60 stimuli (5 scenes * 4 crosstalk levels * 3 camera distances). After outlier removal and statistical processing, the correlation with the subjective test is examined using the Pearson and Spearman rank-order correlation coefficients. Furthermore, the proposed method is also compared with two traditional 2D metrics, PSNR and SSIM. The objective score is mapped to the subjective scale using a nonlinear fitting function to directly evaluate the performance of the metric. RESULTS: The evaluation results demonstrate that the proposed metric is highly correlated with the subjective scores when compared with the existing approaches. Because the Pearson coefficient of the proposed metric is 90.3%, it is promising for objective evaluation of the perceived crosstalk. NOVELTY: The main goal of our paper is to introduce an objective metric for stereo crosstalk assessment.
The novelty contributions are twofold. First, an appropriate simulation of crosstalk considering the characteristics of a patterned retarder 3D display is developed. Second, an objective crosstalk metric based on a visual attention model is introduced.

  15. Galaxy evolution in the densest environments: HST imaging

    NASA Astrophysics Data System (ADS)

    Jorgensen, Inger

    2013-10-01

    We propose to process in a consistent fashion all available HST/ACS and WFC3 imaging of seven rich clusters of galaxies at z=1.2-1.6. The clusters are part of our larger project aimed at constraining models for galaxy evolution in dense environments from observations of stellar populations in rich z=1.2-2 galaxy clusters. The main objective is to establish the star formation (SF) history and structural evolution over this epoch during which large changes in SF rates and galaxy structure are expected to take place in cluster galaxies. The observational data required to meet our main objective are deep HST imaging and high S/N spectroscopy of individual cluster members. The HST imaging already exists for the seven rich clusters at z=1.2-1.6 included in this archive proposal. However, the data have not been consistently processed to derive colors, magnitudes, sizes and morphological parameters for all potential cluster members bright enough to be suitable for spectroscopic observations with 8-m class telescopes. We propose to carry out this processing and make all derived parameters publicly available. We will use the parameters derived from the HST imaging to (1) study the structural evolution of the galaxies, (2) select clusters and galaxies for spectroscopic observations, and (3) use the photometry and spectroscopy together for a unified analysis aimed at the SF history and structural changes. The analysis will also utilize data from the Gemini/HST Cluster Galaxy Project, which covers rich clusters at z=0.2-1.0 and for which we have similar HST imaging and high S/N spectroscopy available.

  16. Subcellular object quantification with Squassh3C and SquasshAnalyst.

    PubMed

    Rizk, Aurélien; Mansouri, Maysam; Ballmer-Hofer, Kurt; Berger, Philipp

    2015-11-01

    Quantitative image analysis plays an important role in contemporary biomedical research. Squassh is a method for automatic detection, segmentation, and quantification of subcellular structures and analysis of their colocalization. Here we present the applications Squassh3C and SquasshAnalyst. Squassh3C extends the functionality of Squassh to three fluorescence channels and live-cell movie analysis. SquasshAnalyst is an interactive web interface for the analysis of Squassh3C object data. It provides segmentation image overview and data exploration, figure generation, object and image filtering, and a statistical significance test in an easy-to-use interface. The overall procedure combines the Squassh3C plug-in for the free biological image processing program ImageJ and a web application working in conjunction with the free statistical environment R, and it is compatible with Linux, MacOS X, or Microsoft Windows. Squassh3C and SquasshAnalyst are available for download at www.psi.ch/lbr/SquasshAnalystEN/SquasshAnalyst.zip.

  17. Interferometric phase-contrast X-ray CT imaging of VX2 rabbit cancer at 35keV X-ray energy

    NASA Astrophysics Data System (ADS)

    Takeda, Tohoru; Wu, Jin; Tsuchiya, Yoshinori; Yoneyama, Akio; Lwin, Thet-Thet; Hyodo, Kazuyuki; Itai, Yuji

    2004-05-01

    Imaging of large objects at a low x-ray energy of 17.7 keV causes a huge x-ray dose to the objects, even using interferometric phase-contrast x-ray CT (PCCT). Thus, we tried to obtain PCCT images at a higher x-ray energy of 35 keV and examined the image quality using a formalin-fixed VX2 rabbit cancer specimen 15 mm in diameter. The PCCT system consisted of an asymmetrically cut silicon (220) crystal, a monolithic x-ray interferometer, a phase-shifter, an object cell and an x-ray CCD camera. The PCCT at 35 keV clearly visualized various inner structures of VX2 rabbit cancer such as necrosis, cancer, the surrounding tumor vessels, and normal liver tissue. Moreover, image contrast was not significantly degraded. These results suggest that PCCT at 35 keV is sufficient to clearly depict the histopathological morphology of a VX2 rabbit cancer specimen.

  18. Integrating DICOM structure reporting (SR) into the medical imaging informatics data grid

    NASA Astrophysics Data System (ADS)

    Lee, Jasper; Le, Anh; Liu, Brent

    2008-03-01

    The Medical Imaging Informatics (MI2) Data Grid developed at the USC Image Processing and Informatics Laboratory enables medical images to be shared securely between multiple imaging centers. Current applications include an imaging-based clinical trial setting where multiple field sites perform image acquisition and a centralized radiology core performs image analysis, often using computer-aided diagnosis (CAD) tools that generate a DICOM-SR to report their findings and measurements. As more and more CAD tools are being developed in the radiology field, the generated DICOM Structured Reports (SR) holding key radiological findings and measurements that are not part of the DICOM image need to be integrated into the existing Medical Imaging Informatics Data Grid with the corresponding imaging studies. We will discuss the significance and method involved in adapting DICOM-SR into the Medical Imaging Informatics Data Grid. The result is an MI2 Data Grid repository from which users can send and receive DICOM-SR objects based on the imaging-based clinical trial application. The services required to extract and categorize information from the structured reports will be discussed, and the workflow for storing and retrieving a DICOM-SR file in the existing MI2 Data Grid will be shown.

  19. 3D Surface Reconstruction for Lower Limb Prosthetic Model using Radon Transform

    NASA Astrophysics Data System (ADS)

    Sobani, S. S. Mohd; Mahmood, N. H.; Zakaria, N. A.; Razak, M. A. Abdul

    2018-03-01

    This paper describes an approach to realizing three-dimensional surfaces of objects with cylinder-based shapes, covering the techniques adopted and the strategy developed for non-rigid three-dimensional surface reconstruction of an object from uncalibrated two-dimensional image sequences using a multiple-view digital camera and turntable setup. The surface of an object is reconstructed based on the concept of tomography, with the aid of several digital image processing algorithms applied to the two-dimensional images captured by a digital camera in thirty-six different projections, and the three-dimensional structure of the surface is analysed. Four different objects are used as experimental models in the reconstructions, and each object is placed on a manually rotated turntable. The results show that the proposed method successfully reconstructs the three-dimensional surfaces of the objects and is practicable. The shape and size of the reconstructed three-dimensional objects are recognizable and distinguishable. The reconstructions are supported by an error analysis: the maximum percent error obtained from the computation is approximately 1.4% for the height, and 4.0%, 4.79% and 4.7% for the diameters at three specific heights of the objects.
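
    A hedged sketch of the tomography-style reconstruction idea: project a 2-D slice at many turntable angles (Radon transform) and invert with filtered back-projection, using scikit-image. The 36 angles mirror the 36 camera projections mentioned above, but the phantom and code are illustrative, not the authors' pipeline:

```python
import numpy as np
from skimage.transform import radon, iradon

slice_2d = np.zeros((128, 128))
slice_2d[40:90, 50:80] = 1.0                            # simple cylinder-like cross-section

angles = np.linspace(0.0, 180.0, 36, endpoint=False)    # thirty-six projection angles
sinogram = radon(slice_2d, theta=angles)                # forward projections
reconstruction = iradon(sinogram, theta=angles)         # filtered back-projection
print(reconstruction.shape)                             # (128, 128)
```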

  20. Combining Landform Thematic Layer and Object-Oriented Image Analysis to Map the Surface Features of Mountainous Flood Plain Areas

    NASA Astrophysics Data System (ADS)

    Chuang, H.-K.; Lin, M.-L.; Huang, W.-C.

    2012-04-01

    Typhoon Morakot in August 2009 brought more than 2,000 mm of cumulative rainfall to southern Taiwan; this extreme rainfall event caused serious damage to the Kaoping River basin. The losses were mostly blamed on landslides along the sides of the river, and shifting of the watercourse even led to the failure of roads and bridges, as well as flooding and levee damage around the villages on flood banks and terraces. Alluvial fans resulting from debris flows of tributary streams blocked the main watercourse, and a debris dam even formed and collapsed. These disasters highlighted the importance of identifying and mapping watercourse alterations, the surface features of flood plain areas and artificial structures soon after a catastrophic typhoon event for natural hazard mitigation. Interpretation of remote sensing images is an efficient approach to acquiring spatial information over vast areas, making it suitable for differentiating terrain and objects near vast flood plain areas in a short time. An object-oriented image analysis program (Definiens Developer 7.0) and multi-band high resolution satellite images (QuickBird, DigitalGlobe) were utilized to interpret the flood plain features from Liouguei to Baolai in the Kaoping River basin after Typhoon Morakot. Object-oriented image interpretation uses homogenized image blocks as elements instead of pixels, exploiting the shapes, textures and mutual relationships of adjacent elements, together with categorized conditions and rules, for semi-automated interpretation of surface features. Digital terrain models (DTM) are also employed in this process to produce layers with specific "landform thematic layers". These layers are especially helpful in differentiating categories that are easily confused in the spectral analysis, such as landslides and riverbeds, as well as terraces and riverbanks, which are of significant engineering importance in disaster mitigation. In this study, an automatic and fast image interpretation process for eight surface features, including main channel, secondary channel, sandbar, flood plain, river terrace, alluvial fan, landslide, and nearby artificial structures in the mountainous flood plain, is proposed. Images along timelines can even be compared in order to differentiate historical events such as village inundations, failures of roads, bridges and levees, and alterations of the watercourse, and can therefore be used as references for the safety evaluation of engineering structures near rivers, disaster prevention and mitigation, and even future land-use planning. Keywords: Flood plain area, Remote sensing, Object-oriented, Surface feature interpretation, Terrain analysis, Thematic layer, Typhoon Morakot

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Kalpagam; Liu, Jeff; Kohli, Kirpal

    Purpose: Fusion of electrical impedance tomography (EIT) with computed tomography (CT) can be useful as a clinical tool for providing additional physiological information about tissues, but requires suitable fusion algorithms and validation procedures. This work explores the feasibility of fusing EIT and CT images using an algorithm for coregistration. The imaging performance is validated through feature space assessment on phantom contrast targets. Methods: EIT data were acquired by scanning a phantom using a circuit, configured for injecting current through 16 electrodes, placed around the phantom. A conductivity image of the phantom was obtained from the data using electrical impedance and diffuse optical tomography reconstruction software (EIDORS). A CT image of the phantom was also acquired. The EIT and CT images were fused using a region of interest (ROI) coregistration fusion algorithm. Phantom imaging experiments were carried out on objects of different contrasts, sizes, and positions. The conductive medium of the phantoms was made of a tissue-mimicking bolus material that is routinely used in clinical radiation therapy settings. To validate the imaging performance in detecting different contrasts, the ROI of the phantom was filled with distilled water and normal saline. Spatially separated cylindrical objects of different sizes were used for validating the imaging performance in multiple target detection. Analyses of the CT, EIT and the EIT/CT phantom images were carried out based on the variations of contrast, correlation, energy, and homogeneity, using a gray level co-occurrence matrix (GLCM). A reference image of the phantom was simulated using EIDORS, and the performances of the CT and EIT imaging systems were evaluated and compared against the performance of the EIT/CT system using various feature metrics, detectability, and structural similarity index measures. Results: In detecting distilled and normal saline water in bolus medium, EIT as a stand-alone imaging system showed contrast discrimination of 47%, while the CT imaging system showed a discrimination of only 1.5%. The structural similarity index measure showed a drop of 24% with EIT imaging compared to CT imaging. The average detectability measure for CT imaging was found to be 2.375 ± 0.19 before fusion. After complementing with EIT information, the detectability measure increased to 11.06 ± 2.04. Based on the feature metrics, the functional imaging quality of CT and EIT were found to be 2.29% and 86%, respectively, before fusion. Structural imaging quality was found to be 66% for CT and 16% for EIT. After fusion, functional imaging quality improved in CT imaging from 2.29% to 42% and the structural imaging quality of EIT imaging changed from 16% to 66%. The improvement in image quality was also observed in detecting objects of different sizes. Conclusions: The authors found a significant improvement in the contrast detectability performance of CT imaging when complemented with functional imaging information from EIT. Along with the feature assessment metrics, the concept of complementing CT with EIT imaging can lead to an EIT/CT imaging modality which might fully utilize the functional imaging abilities of EIT imaging, thereby enhancing the quality of care in the areas of cancer diagnosis and radiotherapy treatment planning.
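
    As a sketch of the texture-feature assessment described above, the contrast, correlation, energy and homogeneity statistics can be computed from a gray-level co-occurrence matrix; the snippet below uses scikit-image on a synthetic 8-bit image (the actual phantom images and GLCM parameters used by the authors are not given here).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_features(image_u8, distances=(1,), angles=(0, np.pi / 2)):
    """Return the four GLCM statistics named above for an 8-bit grayscale image."""
    glcm = graycomatrix(image_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "correlation", "energy", "homogeneity")}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical stand-in for a fused EIT/CT slice: smooth background plus a bright ROI.
    img = rng.normal(100, 5, (128, 128))
    img[40:80, 40:80] += 60
    print(glcm_features(np.clip(img, 0, 255).astype(np.uint8)))
```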

  2. An Integrative Object-Based Image Analysis Workflow for Uav Images

    NASA Astrophysics Data System (ADS)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of the geometric and radiometric corrections, subsequent panoramic mosaicking and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm after the geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing the post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of our proposed method.
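
    A minimal sketch of the over-segmentation step that initializes such a hierarchy, assuming scikit-image is available; the BPT construction and the filtering criteria themselves are not reproduced here, only SLIC superpixels and the kind of per-region spectral statistics a hierarchy could be built on.

```python
import numpy as np
from skimage.data import astronaut          # stand-in for a mosaicked UAV panorama
from skimage.segmentation import slic
from skimage.measure import regionprops

# Simple linear iterative clustering (SLIC) over-segmentation.
image = astronaut()
labels = slic(image, n_segments=400, compactness=10, start_label=1)

# Per-superpixel mean color: the kind of spectral attribute a Binary Partition
# Tree would aggregate when merging regions bottom-up.
means = {r.label: image[labels == r.label].mean(axis=0) for r in regionprops(labels)}
print("number of superpixels:", len(means))
```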

  3. Marginal space learning for efficient detection of 2D/3D anatomical structures in medical images.

    PubMed

    Zheng, Yefeng; Georgescu, Bogdan; Comaniciu, Dorin

    2009-01-01

    Recently, marginal space learning (MSL) was proposed as a generic approach for automatic detection of 3D anatomical structures in many medical imaging modalities [1]. To accurately localize a 3D object, we need to estimate nine pose parameters (three for position, three for orientation, and three for anisotropic scaling). Instead of exhaustively searching the original nine-dimensional pose parameter space, only low-dimensional marginal spaces are searched in MSL to improve the detection speed. In this paper, we apply MSL to 2D object detection and perform a thorough comparison between MSL and the alternative full space learning (FSL) approach. Experiments on left ventricle detection in 2D MRI images show MSL outperforms FSL in both speed and accuracy. In addition, we propose two novel techniques, constrained MSL and nonrigid MSL, to further improve the efficiency and accuracy. In many real applications, a strong correlation may exist among pose parameters in the same marginal spaces. For example, a large object may have large scaling values along all directions. Constrained MSL exploits this correlation for further speed-up. The original MSL only estimates the rigid transformation of an object in the image, therefore cannot accurately localize a nonrigid object under a large deformation. The proposed nonrigid MSL directly estimates the nonrigid deformation parameters to improve the localization accuracy. The comparison experiments on liver detection in 226 abdominal CT volumes demonstrate the effectiveness of the proposed methods. Our system takes less than a second to accurately detect the liver in a volume.
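
    The speed-up in MSL comes from searching marginal parameter spaces sequentially instead of the full pose space; the sketch below illustrates that pruning logic for a 2D case (position, then position plus orientation, then scale), with a placeholder scoring function standing in for the learned classifiers.

```python
import numpy as np
from itertools import product

def score(image, pose):
    """Placeholder for a trained detector (e.g. a boosted classifier); returns a
    pseudo-probability that `pose` matches the target object."""
    x, y, theta, s = pose
    return np.exp(-((x - 60) ** 2 + (y - 40) ** 2) / 200.0
                  - (theta - 0.3) ** 2 - (s - 1.2) ** 2)

def marginal_space_search(image, top_k=50):
    # Stage 1: position only.
    candidates = [(x, y, 0.0, 1.0) for x, y in product(range(0, 128, 2), repeat=2)]
    candidates.sort(key=lambda p: score(image, p), reverse=True)
    candidates = candidates[:top_k]
    # Stage 2: augment surviving candidates with orientation hypotheses.
    candidates = [(x, y, th, s) for (x, y, _, s) in candidates
                  for th in np.linspace(-np.pi / 4, np.pi / 4, 9)]
    candidates.sort(key=lambda p: score(image, p), reverse=True)
    candidates = candidates[:top_k]
    # Stage 3: augment with scale hypotheses and keep the best pose.
    candidates = [(x, y, th, s) for (x, y, th, _) in candidates
                  for s in np.linspace(0.5, 2.0, 7)]
    return max(candidates, key=lambda p: score(image, p))

print(marginal_space_search(image=None))
```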

  4. Imaging of four planetary nebulae in the Magellanic Clouds using the Hubble Space Telescope Faint Object Camera

    NASA Technical Reports Server (NTRS)

    Blades, J. C.; Barlow, M. J.; Albrecht, R.; Barbieri, C.; Boksenberg, A.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.; Kamperman, T. M.

    1992-01-01

    Using the Faint Object Camera on-board the Hubble Space Telescope, we have obtained images of four planetary nebulae (PNe) in the Magellanic Clouds, namely N2 and N5 in the SMC and N66 and N201 in the LMC. Each nebula was imaged through two narrow-band filters isolating forbidden O III 5007 and H-beta, for a nominal exposure time of 1000 s in each filter. In forbidden O III, SMC N5 shows a circular ring structure, with a peak-to-peak diameter of 0.26 arcsec and a FWHM of 0.35 arcsec, while SMC N2 shows an elliptical ring structure with peak-to-peak diameters of 0.26 x 0.21 arcsec. The expansion ages corresponding to the observed structures in SMC N2 and N5 are of the order of 3000 yr. LMC N201 is very compact, with a FWHM of 0.2 arcsec in H-beta. The Type I PN LMC N66 is a multipolar nebula, with the brightest part having an extent of about 2 arcsec and with fainter structures extending over 4 arcsec.

  5. Transforming Clinical Imaging Data for Virtual Reality Learning Objects

    ERIC Educational Resources Information Center

    Trelease, Robert B.; Rosset, Antoine

    2008-01-01

    Advances in anatomical informatics, three-dimensional (3D) modeling, and virtual reality (VR) methods have made computer-based structural visualization a practical tool for education. In this article, the authors describe streamlined methods for producing VR "learning objects," standardized interactive software modules for anatomical sciences…

  6. Use of Image Based Modelling for Documentation of Intricately Shaped Objects

    NASA Astrophysics Data System (ADS)

    Marčiš, M.; Barták, P.; Valaška, D.; Fraštia, M.; Trhan, O.

    2016-06-01

    In the documentation of cultural heritage, we can encounter three-dimensional shapes and structures which are complicated to measure. Such objects include, for example, spiral staircases, timber roof trusses, historical furniture or folk costumes, where it is nearly impossible to effectively use traditional surveying or terrestrial laser scanning due to the shape of the object, its dimensions and the crowded environment. Current methods of digital photogrammetry can be very helpful in such cases, with the emphasis on automated processing of the extensive image data. The created high resolution 3D models and 2D orthophotos are very important for the documentation of architectural elements and can serve as an ideal base for vectorization and 2D drawing documentation. This contribution describes the various uses of image-based modelling for specific interior spaces and specific objects. The advantages and disadvantages of the photogrammetric measurement of such objects in comparison to other surveying methods are reviewed.

  7. Serial grouping of 2D-image regions with object-based attention in humans.

    PubMed

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-06-13

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas.
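
    A rough computational analogue of the scale-dependent spread described above, under the assumption that attention propagates faster where the local region is wide (large distance to the nearest boundary) and slower near fine structure; the traversal-cost choice below is illustrative, not the authors' growth-cone model.

```python
import heapq
import numpy as np
from scipy.ndimage import distance_transform_edt

def attention_arrival_times(region_mask, seed):
    """Dijkstra-style spread over a binary image region: the cost of entering a
    pixel is inversely proportional to the local scale (distance to the region
    boundary), so wide homogeneous areas are covered quickly."""
    scale = distance_transform_edt(region_mask)            # local width in pixels
    times = np.full(region_mask.shape, np.inf)
    times[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (y, x) = heapq.heappop(heap)
        if t > times[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < region_mask.shape[0] and 0 <= nx < region_mask.shape[1] \
                    and region_mask[ny, nx]:
                nt = t + 1.0 / max(scale[ny, nx], 1e-6)
                if nt < times[ny, nx]:
                    times[ny, nx] = nt
                    heapq.heappush(heap, (nt, (ny, nx)))
    return times

# Toy example: a wide area joined to a narrow corridor; spread is slower in the corridor.
mask = np.zeros((40, 80), bool)
mask[5:35, 5:40] = True      # wide area
mask[18:22, 40:75] = True    # narrow corridor
t = attention_arrival_times(mask, seed=(20, 10))
print("arrival, far end of wide area:", round(t[20, 38], 2))
print("arrival, far end of corridor :", round(t[20, 73], 2))
```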

  8. Technique of semiautomatic surface reconstruction of the visible Korean human data using commercial software.

    PubMed

    Park, Jin Seo; Shin, Dong Sun; Chung, Min Suk; Hwang, Sung Bae; Chung, Jinoh

    2007-11-01

    This article describes the technique of semiautomatic surface reconstruction of anatomic structures using widely available commercial software. This technique would enable researchers to promptly and objectively perform surface reconstruction, creating three-dimensional anatomic images without any assistance from computer engineers. To develop the technique, we used data from the Visible Korean Human project, which produced digitalized photographic serial images of an entire cadaver. We selected 114 anatomic structures (skin [1], bones [32], knee joint structures [7], muscles [60], arteries [7], and nerves [7]) from the 976 anatomic images which were generated from the left lower limb of the cadaver. Using Adobe Photoshop, the selected anatomic structures in each serial image were outlined, creating a segmented image. The Photoshop files were then converted into Adobe Illustrator files to prepare isolated segmented images, so that the contours of the structure could be viewed independent of the surrounding anatomy. Using Alias Maya, these isolated segmented images were then stacked to construct a contour image. Gaps between the contour lines were filled with surfaces, and three-dimensional surface reconstruction could be visualized with Rhinoceros. Surface imperfections were then corrected to complete the three-dimensional images in Alias Maya. We believe that the three-dimensional anatomic images created by these methods will have widespread application in both medical education and research. 2007 Wiley-Liss, Inc

  9. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is a rapid development in the techniques of the automated image based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and camera technology. One possibility is to use video imaging to create 3D reality based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply when compared to still image shooting in IBM techniques because the latter needs a thorough planning and proficiency. However, one is faced with mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short baseline video images and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient to decrease the processing time and to create a reliable textured 3D model compared with models produced by still imaging. Two experiments for modelling a building and a monument are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find out the final predicted accuracy and the model level of details. Related to the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 - 5 cm when using video imaging, which is suitable for visualization, virtual museums and low detailed documentation.
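
    One common way to discard blurred and redundant video frames before structure-from-motion, in the spirit of the reduction step described above, is to score sharpness with the variance of the Laplacian and keep only sufficiently sharp frames at a minimum spacing; the threshold, spacing and file name below are arbitrary examples (OpenCV assumed).

```python
import cv2

def select_keyframes(video_path, sharpness_thresh=100.0, min_gap=15):
    """Return indices of frames that are sharp (variance of Laplacian above a
    threshold) and at least `min_gap` frames apart; a crude stand-in for a
    coverage- and blur-based frame selection."""
    cap = cv2.VideoCapture(video_path)
    keep, idx, last_kept = [], 0, -10**9
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        if sharpness >= sharpness_thresh and idx - last_kept >= min_gap:
            keep.append(idx)
            last_kept = idx
        idx += 1
    cap.release()
    return keep

# Example (hypothetical file): frames = select_keyframes("monument_1080p.mp4")
```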

  10. Combined X-ray CT and mass spectrometry for biomedical imaging applications

    NASA Astrophysics Data System (ADS)

    Schioppa, E., Jr.; Ellis, S.; Bruinen, A. L.; Visser, J.; Heeren, R. M. A.; Uher, J.; Koffeman, E.

    2014-04-01

    Imaging technologies play a key role in many branches of science, especially in biology and medicine. They provide an invaluable insight into both internal structure and processes within a broad range of samples. There are many techniques that allow one to obtain images of an object. Different techniques are based on the analysis of a particular sample property by means of a dedicated imaging system, and as such, each imaging modality provides the researcher with different information. The use of multimodal imaging (imaging with several different techniques) can provide additional and complementary information that is not possible when employing a single imaging technique alone. In this study, we present for the first time a multi-modal imaging technique where X-ray computerized tomography (CT) is combined with mass spectrometry imaging (MSI). While X-ray CT provides 3-dimensional information regarding the internal structure of the sample based on X-ray absorption coefficients, MSI of thin sections acquired from the same sample allows the spatial distribution of many elements/molecules, each distinguished by its unique mass-to-charge ratio (m/z), to be determined within a single measurement and with a spatial resolution as low as 1 μm or even less. The aim of the work is to demonstrate how molecular information from MSI can be spatially correlated with 3D structural information acquired from X-ray CT. In these experiments, frozen samples are imaged in an X-ray CT setup using Medipix based detectors equipped with a CO2 cooled sample holder. Single projections are pre-processed before tomographic reconstruction using a signal-to-thickness calibration. In the second step, the object is sliced into thin sections (circa 20 μm) that are then imaged using both matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) and secondary ion (SIMS) mass spectrometry, where the spatial distribution of specific molecules within the sample is determined. The combination of two vastly different imaging approaches provides complementary information (i.e., anatomical and molecular distributions) that allows the correlation of distinct structural features with specific molecules distributions leading to unique insights in disease development.

  11. Structure-adaptive CBCT reconstruction using weighted total variation and Hessian penalties

    PubMed Central

    Shi, Qi; Sun, Nanbo; Sun, Tao; Wang, Jing; Tan, Shan

    2016-01-01

    The exposure of normal tissues to high radiation during cone-beam CT (CBCT) imaging increases the risk of cancer and genetic defects. Statistical iterative algorithms with the total variation (TV) penalty have been widely used for low dose CBCT reconstruction, with state-of-the-art performance in suppressing noise and preserving edges. However, TV is a first-order penalty and sometimes leads to the so-called staircase effect, particularly over regions with smooth intensity transition in the reconstruction images. A second-order penalty known as the Hessian penalty was recently used to replace TV to suppress the staircase effect in CBCT reconstruction at the cost of slightly blurring object edges. In this study, we proposed a new penalty, the TV-H, which combines TV and Hessian penalties for CBCT reconstruction in a structure-adaptive way. The TV-H penalty automatically differentiates the edges, gradual transition and uniform local regions within an image using the voxel gradient, and adaptively weights TV and Hessian according to the local image structures in the reconstruction process. Our proposed penalty retains the benefits of TV, including noise suppression and edge preservation. It also maintains the structures in regions with gradual intensity transition more successfully. A majorization-minimization (MM) approach was designed to optimize the objective energy function constructed with the TV-H penalty. The MM approach employed a quadratic upper bound of the original objective function, and the original optimization problem was changed to a series of quadratic optimization problems, which could be efficiently solved using the Gauss-Seidel update strategy. We tested the reconstruction algorithm on two simulated digital phantoms and two physical phantoms. Our experiments indicated that the TV-H penalty visually and quantitatively outperformed both TV and Hessian penalties. PMID:27699100
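
    To make the structure-adaptive idea concrete, the sketch below evaluates a 2D analogue of such a combined penalty: a voxel-wise weight derived from the gradient magnitude mixes a total-variation term with a Hessian (second-difference) term. The specific weighting function is an illustrative choice, not the weighting defined in the paper.

```python
import numpy as np

def tv_h_penalty(u, sigma=0.05):
    """Structure-adaptive mix of TV and Hessian penalties for a 2D image u.
    Large gradients (edges) -> weight favours TV; smooth transitions -> Hessian."""
    ux = np.diff(u, axis=1, append=u[:, -1:])        # first differences
    uy = np.diff(u, axis=0, append=u[-1:, :])
    grad_mag = np.sqrt(ux**2 + uy**2)
    tv = grad_mag                                     # isotropic TV integrand
    uxx = np.diff(ux, axis=1, append=ux[:, -1:])      # second differences
    uyy = np.diff(uy, axis=0, append=uy[-1:, :])
    uxy = np.diff(ux, axis=0, append=ux[-1:, :])
    hessian = np.sqrt(uxx**2 + uyy**2 + 2 * uxy**2)   # Frobenius-norm integrand
    w = 1.0 - np.exp(-(grad_mag / sigma) ** 2)        # ~1 at edges, ~0 in smooth areas
    return np.sum(w * tv + (1.0 - w) * hessian)

# Toy check: a ramp (smooth transition) is penalised mostly by the Hessian term,
# a step edge mostly by the TV term.
ramp = np.tile(np.linspace(0, 1, 64), (64, 1))
step = np.tile((np.arange(64) > 32).astype(float), (64, 1))
print(tv_h_penalty(ramp), tv_h_penalty(step))
```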

  12. Structure-adaptive CBCT reconstruction using weighted total variation and Hessian penalties.

    PubMed

    Shi, Qi; Sun, Nanbo; Sun, Tao; Wang, Jing; Tan, Shan

    2016-09-01

    The exposure of normal tissues to high radiation during cone-beam CT (CBCT) imaging increases the risk of cancer and genetic defects. Statistical iterative algorithms with the total variation (TV) penalty have been widely used for low dose CBCT reconstruction, with state-of-the-art performance in suppressing noise and preserving edges. However, TV is a first-order penalty and sometimes leads to the so-called staircase effect, particularly over regions with smooth intensity transition in the reconstruction images. A second-order penalty known as the Hessian penalty was recently used to replace TV to suppress the staircase effect in CBCT reconstruction at the cost of slightly blurring object edges. In this study, we proposed a new penalty, the TV-H, which combines TV and Hessian penalties for CBCT reconstruction in a structure-adaptive way. The TV-H penalty automatically differentiates the edges, gradual transition and uniform local regions within an image using the voxel gradient, and adaptively weights TV and Hessian according to the local image structures in the reconstruction process. Our proposed penalty retains the benefits of TV, including noise suppression and edge preservation. It also maintains the structures in regions with gradual intensity transition more successfully. A majorization-minimization (MM) approach was designed to optimize the objective energy function constructed with the TV-H penalty. The MM approach employed a quadratic upper bound of the original objective function, and the original optimization problem was changed to a series of quadratic optimization problems, which could be efficiently solved using the Gauss-Seidel update strategy. We tested the reconstruction algorithm on two simulated digital phantoms and two physical phantoms. Our experiments indicated that the TV-H penalty visually and quantitatively outperformed both TV and Hessian penalties.

  13. A MegaCam Survey of Outer Halo Satellites. III. Photometric and Structural Parameters

    NASA Astrophysics Data System (ADS)

    Muñoz, Ricardo R.; Côté, Patrick; Santana, Felipe A.; Geha, Marla; Simon, Joshua D.; Oyarzún, Grecco A.; Stetson, Peter B.; Djorgovski, S. G.

    2018-06-01

    We present structural parameters from a wide-field homogeneous imaging survey of Milky Way satellites carried out with the MegaCam imagers on the 3.6 m Canada–France–Hawaii Telescope and 6.5 m Magellan-Clay telescope. Our survey targets an unbiased sample of “outer halo” satellites (i.e., substructures having galactocentric distances greater than 25 kpc) and includes classical dSph galaxies, ultra-faint dwarfs, and remote globular clusters. We combine deep, panoramic gr imaging for 44 satellites and archival gr imaging for 14 additional objects (primarily obtained with the DECam instrument as part of the Dark Energy Survey) to measure photometric and structural parameters for 58 outer halo satellites. This is the largest and most uniform analysis of Milky Way satellites undertaken to date and represents roughly three-quarters (58/81 ≃ 72%) of all known outer halo satellites. We use a maximum-likelihood method to fit four density laws to each object in our survey: exponential, Plummer, King, and Sérsic models. We systematically examine the isodensity contour maps and color–magnitude diagrams for each of our program objects, present a comparison with previous results, and tabulate our best-fit photometric and structural parameters, including ellipticities, position angles, effective radii, Sérsic indices, absolute magnitudes, and surface brightness measurements. We investigate the distribution of outer halo satellites in the size–magnitude diagram and show that the current sample of outer halo substructures spans a wide range in effective radius, luminosity, and surface brightness, with little evidence for a clean separation into star cluster and galaxy populations at the faintest luminosities and surface brightnesses.
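
    As an illustration of the maximum-likelihood structural fits mentioned above, the snippet fits a circular Plummer profile (centre and scale radius) to a set of star positions by minimizing the negative log-likelihood; the ellipticity, position angle and background terms used in the actual survey analysis are omitted for brevity, and the mock data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, x, y):
    """Negative log-likelihood of star positions under a circular Plummer profile.
    Normalised surface density: Sigma(r) = (1 / (pi a^2)) * (1 + r^2/a^2)^(-2)."""
    x0, y0, log_a = params
    a = np.exp(log_a)                      # keep the scale radius positive
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return -np.sum(np.log(1.0 / (np.pi * a**2)) - 2.0 * np.log1p(r2 / a**2))

rng = np.random.default_rng(1)
# Mock satellite: Plummer radii drawn by inverse-transform sampling, a_true = 3 arcmin.
a_true, n_stars = 3.0, 2000
u = rng.uniform(size=n_stars)
r = a_true * np.sqrt(u / (1.0 - u))        # inverse of the Plummer enclosed-fraction profile
phi = rng.uniform(0, 2 * np.pi, n_stars)
x, y = 10.0 + r * np.cos(phi), -5.0 + r * np.sin(phi)

start = [x.mean(), y.mean(), np.log(x.std())]
fit = minimize(neg_log_like, start, args=(x, y), method="Nelder-Mead")
x0, y0, log_a = fit.x
print("centre:", round(x0, 2), round(y0, 2), " scale radius:", round(np.exp(log_a), 2))
```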

  14. Modeling global scene factors in attention

    NASA Astrophysics Data System (ADS)

    Torralba, Antonio

    2003-07-01

    Models of visual attention have focused predominantly on bottom-up approaches that ignored structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. 2003 Optical Society of America

  15. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Due to its convenience and non-invasive nature, ultrasound has become an essential tool for the diagnosis of fetal abnormalities during pregnancy in obstetrics. However, the 'noisy and blurry' nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. In addition to the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of ultrasound. In addition, in order to accelerate rendering, a thin shell is defined, based on the detected contours, to separate the observed organ from unrelated structures. In this way, we can support quick 3D display of ultrasound, and the efficient visualization of 3D fetal ultrasound thus becomes possible.

  16. A Saliency Guided Semi-Supervised Building Change Detection Method for High Resolution Remote Sensing Images

    PubMed Central

    Hou, Bin; Wang, Yunhong; Liu, Qingjie

    2016-01-01

    Characterizations of up to date information of the Earth’s surface are an important application providing insights to urban planning, resources monitoring and environmental studies. A large number of change detection (CD) methods have been developed to solve them by utilizing remote sensing (RS) images. The advent of high resolution (HR) remote sensing images further provides challenges to traditional CD methods and opportunities to object-based CD methods. While several kinds of geospatial objects are recognized, this manuscript mainly focuses on buildings. Specifically, we propose a novel automatic approach combining pixel-based strategies with object-based ones for detecting building changes with HR remote sensing images. A multiresolution contextual morphological transformation called extended morphological attribute profiles (EMAPs) allows the extraction of geometrical features related to the structures within the scene at different scales. Pixel-based post-classification is executed on EMAPs using hierarchical fuzzy clustering. Subsequently, the hierarchical fuzzy frequency vector histograms are formed based on the image-objects acquired by simple linear iterative clustering (SLIC) segmentation. Then, saliency and morphological building index (MBI) extracted on difference images are used to generate a pseudo training set. Ultimately, object-based semi-supervised classification is implemented on this training set by applying random forest (RF). Most of the important changes are detected by the proposed method in our experiments. This study was checked for effectiveness using visual evaluation and numerical evaluation. PMID:27618903
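
    The final semi-supervised step amounts to training a classifier on the automatically generated pseudo training set and applying it to all image objects; a minimal scikit-learn sketch with hypothetical feature vectors and pseudo labels is shown below.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-object feature vectors (e.g. fuzzy frequency histograms per
# SLIC segment) and pseudo labels from the saliency/MBI step: 1 = changed building.
features = rng.normal(size=(500, 16))
pseudo_labels = (features[:, 0] + 0.5 * features[:, 1] > 0.8).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(features, pseudo_labels)

# Classify every object in the scene, not just those in the pseudo training set.
all_objects = rng.normal(size=(2000, 16))
change_labels = clf.predict(all_objects)
print("objects flagged as changed buildings:", int(change_labels.sum()))
```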

  17. A Saliency Guided Semi-Supervised Building Change Detection Method for High Resolution Remote Sensing Images.

    PubMed

    Hou, Bin; Wang, Yunhong; Liu, Qingjie

    2016-08-27

    Characterizations of up to date information of the Earth's surface are an important application providing insights to urban planning, resources monitoring and environmental studies. A large number of change detection (CD) methods have been developed to solve them by utilizing remote sensing (RS) images. The advent of high resolution (HR) remote sensing images further provides challenges to traditional CD methods and opportunities to object-based CD methods. While several kinds of geospatial objects are recognized, this manuscript mainly focuses on buildings. Specifically, we propose a novel automatic approach combining pixel-based strategies with object-based ones for detecting building changes with HR remote sensing images. A multiresolution contextual morphological transformation called extended morphological attribute profiles (EMAPs) allows the extraction of geometrical features related to the structures within the scene at different scales. Pixel-based post-classification is executed on EMAPs using hierarchical fuzzy clustering. Subsequently, the hierarchical fuzzy frequency vector histograms are formed based on the image-objects acquired by simple linear iterative clustering (SLIC) segmentation. Then, saliency and morphological building index (MBI) extracted on difference images are used to generate a pseudo training set. Ultimately, object-based semi-supervised classification is implemented on this training set by applying random forest (RF). Most of the important changes are detected by the proposed method in our experiments. This study was checked for effectiveness using visual evaluation and numerical evaluation.

  18. Nanosensitive optical coherence tomography for the study of changes in static and dynamic structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandrov, S; Subhash, H; Leahy, M

    2014-07-31

    We briefly discuss the principle of image formation in Fourier domain optical coherence tomography (OCT). The theory of a new approach to improve dramatically the sensitivity of conventional OCT is described. The approach is based on spectral encoding of spatial frequency. Information about the spatial structure is directly translated from the Fourier domain to the image domain as different wavelengths, without compromising the accuracy. Axial spatial period profiles of the structure are reconstructed for any volume of interest within the 3D OCT image with nanoscale sensitivity. An example of application of the nanoscale OCT to probe the internal structure of medico-biological objects, the anterior chamber of an ex vivo rat eye, is demonstrated. (laser biophotonics)

  19. Remote Sensing Image Analysis Without Expert Knowledge - A Web-Based Classification Tool On Top of Taverna Workflow Management System

    NASA Astrophysics Data System (ADS)

    Selsam, Peter; Schwartze, Christian

    2016-10-01

    Providing software solutions via the internet has been known for quite some time and is now an increasing trend marketed as "software as a service". A lot of business units accept the new methods and streamlined IT strategies by offering web-based infrastructures for external software usage - but geospatial applications featuring very specialized services or functionalities on demand are still rare. Originally applied in desktop environments, the ILMSimage tool for remote sensing image analysis and classification was modified in its communicating structures and enabled to run on a high-power server, benefiting from Taverna software. On top of this, a GIS-like, web-based user interface guides the user through the different steps in ILMSimage. ILMSimage combines object oriented image segmentation with pattern recognition features. Basic image elements form a construction set to model large image objects with diverse and complex appearance. There is no need for the user to set up detailed object definitions. Training is done by delineating one or more typical examples (templates) of the desired object using a simple vector polygon. The template can be large and does not need to be homogeneous. The template is completely independent from the segmentation. The object definition is done completely by the software.

  20. GRAVITATIONAL LENS CAPTURES IMAGE OF PRIMEVAL GALAXY

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This Hubble Space Telescope image shows several blue, loop-shaped objects that actually are multiple images of the same galaxy. They have been duplicated by the gravitational lens of the cluster of yellow, elliptical and spiral galaxies - called 0024+1654 - near the photograph's center. The gravitational lens is produced by the cluster's tremendous gravitational field that bends light to magnify, brighten and distort the image of a more distant object. How distorted the image becomes and how many copies are made depends on the alignment between the foreground cluster and the more distant galaxy, which is behind the cluster. In this photograph, light from the distant galaxy bends as it passes through the cluster, dividing the galaxy into five separate images. One image is near the center of the photograph; the others are at 6, 7, 8, and 2 o'clock. The light also has distorted the galaxy's image from a normal spiral shape into a more arc-shaped object. Astronomers are certain the blue-shaped objects are copies of the same galaxy because the shapes are similar. The cluster is 5 billion light-years away in the constellation Pisces, and the blue-shaped galaxy is about 2 times farther away. Though the gravitational light-bending process is not new, Hubble's high resolution image reveals structures within the blue-shaped galaxy that astronomers have never seen before. Some of the structures are as small as 300 light-years across. The bits of white imbedded in the blue galaxy represent young stars; the dark core inside the ring is dust, the material used to make stars. This information, together with the blue color and unusual 'lumpy' appearance, suggests a young, star-making galaxy. The picture was taken October 14, 1994 with the Wide Field Planetary Camera-2. Separate exposures in blue and red wavelengths were taken to construct this color picture. CREDIT: W.N. Colley and E. Turner (Princeton University), J.A. Tyson (Bell Labs, Lucent Technologies) and NASA Image files in GIF and JPEG format and captions may be accessed on Internet via anonymous ftp from oposite.stsci.edu in /pubinfo.

  1. Humanoid monocular stereo measuring system with two degrees of freedom using bionic optical imaging system

    NASA Astrophysics Data System (ADS)

    Du, Jia-Wei; Wang, Xuan-Yin; Zhu, Shi-Qiang

    2017-10-01

    Based on the process by which the spatial depth cue is obtained by a single eye, a monocular stereo vision method to measure the depth information of spatial objects is proposed in this paper, and a humanoid monocular stereo measuring system with two degrees of freedom is demonstrated. The proposed system can effectively obtain the three-dimensional (3-D) structure of spatial objects at different distances without changing the position of the system and has the advantages of being exquisite, smart, and flexible. The bionic optical imaging system we proposed in a previous paper, named ZJU SY-I, was employed; its vision characteristic resembles the resolution decay of the eye's vision from center to periphery. We simplified the eye's rotation in the eye socket and the coordinated rotation of other organs of the body into two rotations in orthogonal directions and employed a rotating platform with two rotational degrees of freedom to drive ZJU SY-I. The structure of the proposed system is described in detail. The depth of a single feature point on the spatial object is deduced, as well as its spatial coordinates. With the focal length adjustment of ZJU SY-I and the rotation control of the rotation platform, the spatial coordinates of all feature points on the spatial object can be obtained and the 3-D structure of the spatial object can then be reconstructed. The 3-D structure measurement experiments of two spatial objects with different distances and sizes were conducted. Some main factors affecting the measurement accuracy of the proposed system were analyzed and discussed.

  2. Classification of high resolution remote sensing image based on geo-ontology and conditional random fields

    NASA Astrophysics Data System (ADS)

    Hong, Liang

    2013-10-01

    The availability of high spatial resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric details can be observed in high resolution remote sensing images, and ground objects in such images display rich texture, structure, shape and hierarchical semantic characteristics, with more landscape elements represented by small groups of pixels. In recent years, the object-based remote sensing analysis methodology has become widely accepted and applied in high resolution remote sensing image processing. A classification method based on geo-ontology and conditional random fields is presented in this paper. The proposed method is made up of four blocks: (1) the hierarchical ground-object semantic framework is constructed based on geo-ontology; (2) image objects are generated by mean-shift segmentation, which yields boundary-preserving and spectrally homogeneous over-segmentation regions; (3) the relations between the hierarchical ground-object semantics and the over-segmentation regions are defined within a conditional random fields framework; (4) the hierarchical classification results are obtained based on geo-ontology and conditional random fields. Finally, high-resolution remotely sensed image data (GeoEye) is used to verify the performance of the presented method. The experimental results show the superiority of this method over the eCognition method in both effectiveness and accuracy, which implies it is suitable for the classification of high resolution remote sensing images.

  3. Electron Tomography: A Three-Dimensional Analytic Tool for Hard and Soft Materials Research

    DOE PAGES

    Ercius, Peter; Alaidi, Osama; Rames, Matthew J.; ...

    2015-06-18

    Three-dimensional (3D) structural analysis is essential to understand the relationship between the structure and function of an object. Many analytical techniques, such as X-ray diffraction, neutron spectroscopy, and electron microscopy imaging, are used to provide structural information. Transmission electron microscopy (TEM), one of the most popular analytic tools, has been widely used for structural analysis in both physical and biological sciences for many decades, in which 3D objects are projected into two-dimensional (2D) images. In many cases, 2D-projection images are insufficient to understand the relationship between the 3D structure and the function of nanoscale objects. Electron tomography (ET) is a technique that retrieves 3D structural information from a tilt series of 2D projections, and is gradually becoming a mature technology with sub-nanometer resolution. Distinct methods to overcome sample-based limitations have been separately developed in both physical and biological science, although they share some basic concepts of ET. Here, this review discusses the common basis for 3D characterization, and specifies difficulties and solutions regarding both hard and soft materials research. It is hoped that novel solutions based on current state-of-the-art techniques for advanced applications in hybrid matter systems can be motivated. Electron tomography produces quantitative 3D reconstructions for biological and physical sciences from sets of 2D projections acquired at different tilting angles in a transmission electron microscope. Finally, state-of-the-art techniques capable of producing 3D representations such as Pt-Pd core-shell nanoparticles and IgG1 antibody molecules are reviewed.

  4. Depth information in natural environments derived from optic flow by insect motion detection system: a model analysis

    PubMed Central

    Schwegmann, Alexander; Lindemann, Jens P.; Egelhaaf, Martin

    2014-01-01

    Knowing the depth structure of the environment is crucial for moving animals in many behavioral contexts, such as collision avoidance, targeting objects, or spatial navigation. An important source of depth information is motion parallax. This powerful cue is generated on the eyes during translatory self-motion with the retinal images of nearby objects moving faster than those of distant ones. To investigate how the visual motion pathway represents motion-based depth information we analyzed its responses to image sequences recorded in natural cluttered environments with a wide range of depth structures. The analysis was done on the basis of an experimentally validated model of the visual motion pathway of insects, with its core elements being correlation-type elementary motion detectors (EMDs). It is the key result of our analysis that the absolute EMD responses, i.e., the motion energy profile, represent the contrast-weighted nearness of environmental structures during translatory self-motion at a roughly constant velocity. In other words, the output of the EMD array highlights contours of nearby objects. This conclusion is largely independent of the scale over which EMDs are spatially pooled and was corroborated by scrutinizing the motion energy profile after eliminating the depth structure from the natural image sequences. Hence, the well-established dependence of correlation-type EMDs on both velocity and textural properties of motion stimuli appears to be advantageous for representing behaviorally relevant information about the environment in a computationally parsimonious way. PMID:25136314
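
    A correlation-type elementary motion detector of the kind this analysis is built on can be sketched as two mirror-symmetric subunits, each multiplying a low-pass-filtered (delayed) signal from one photoreceptor with the undelayed signal of its neighbour; the time constant and the test stimulus below are arbitrary.

```python
import numpy as np

def emd_responses(stimulus, tau=5.0):
    """Hassenstein-Reichardt detector array.
    stimulus : (T, N) luminance of N photoreceptors over T time steps.
    Returns (T, N-1) opponent EMD outputs between neighbouring receptors."""
    T, N = stimulus.shape
    delayed = np.zeros_like(stimulus)
    alpha = 1.0 / tau                      # first-order low-pass filter as the delay
    for t in range(1, T):
        delayed[t] = delayed[t - 1] + alpha * (stimulus[t - 1] - delayed[t - 1])
    # Opponent output: D(x)*S(x+1) - S(x)*D(x+1); positive for rightward motion.
    return delayed[:, :-1] * stimulus[:, 1:] - stimulus[:, :-1] * delayed[:, 1:]

# Moving sine grating: positive mean response for rightward motion,
# negative for leftward motion.
x = np.arange(64)
t = np.arange(400)[:, None]
rightward = np.sin(2 * np.pi * (x - 0.5 * t) / 16.0)
leftward = np.sin(2 * np.pi * (x + 0.5 * t) / 16.0)
print(emd_responses(rightward).mean(), emd_responses(leftward).mean())
```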

  5. Measurement of the noise components in the medical x-ray intensity pattern due to overlaying nonrecognizable structures

    NASA Astrophysics Data System (ADS)

    Tischenko, Oleg; Hoeschen, Christoph; Effenberger, Olaf; Reissberg, Steffen; Buhr, Egbert; Doehring, Wilfried

    2003-06-01

    There are many aspects that influence and deteriorate the detection of pathologies in X-ray images. Some of those are due to effects taking place in the stage of forming the X-ray intensity pattern in front of the X-ray detector. These can be described as motion blurring, depth blurring, anatomical background, scatter noise and structural noise. Structural noise results from an overlapping of fine irrelevant anatomical structures. A method for measuring the combined effect of structural noise and scatter noise was developed and is presented in this paper. This method is based on the consideration that, within a pair of projections created after rotating the object by a small angle (which is within the typical uncertainty in positioning the patient), both images show the same relevant structures, whereas the projections of the fine overlapping structures appear quite differently in the two images. To demonstrate the method, two X-ray radiographs of a lung phantom were produced. The second radiograph was acquired after rotating the lung by an angle of about 3 degrees. Dyadic wavelet representations of both images were computed. For each value of the wavelet scale parameter, the corresponding pair of approximations was matched using the cross-correlation matching technique. The homologous regions of the approximations were extracted. The image containing only those structures that appear in both images simultaneously was then reconstructed from the wavelet coefficients corresponding to the homologous regions. The difference between one of the original images and the noise-reduced image contains the structural noise and the scatter noise.

  6. Multi-object segmentation using coupled nonparametric shape and relative pose priors

    NASA Astrophysics Data System (ADS)

    Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep

    2009-02-01

    We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.

  7. Photo-multiplier Tube Based Hybrid MRI and Frequency Domain Fluorescence Tomography System for Small Animal Imaging

    PubMed Central

    Lin, Y; Ghijsen, M T; Gao, H; Liu, N; Nalcioglu, O; Gulsen, G

    2014-01-01

    Fluorescence tomography (FT) is a promising molecular imaging technique that can spatially resolve both fluorophore concentration and lifetime parameters. However, recovered fluorophore parameters highly depend on the size and depth of the object due to the ill-posedness of the FT inverse problem. Structural a priori information from another high spatial resolution imaging modality has been demonstrated to significantly improve FT reconstruction accuracy. In this study, we have constructed a combined magnetic resonance imaging (MRI) and FT system for small animal imaging. A photo-multiplier tube (PMT) is used as the detector to acquire frequency domain FT measurements. This is the first MR-compatible time-resolved FT system that can reconstruct both fluorescence concentration and lifetime maps simultaneously. The performance of the hybrid system is evaluated with phantom studies. Two different fluorophores, Indocyanine Green (ICG) and 3-3′ Diethylthiatricarbocyanine Iodide (DTTCI), which have similar excitation and emission spectra but different lifetimes, are utilized. The fluorescence concentration and lifetime maps are both reconstructed with and without the structural a priori information obtained from MRI for comparison. We show that the hybrid system can accurately recover both fluorescence intensity and lifetime within 10% error for two 4.2 mm-diameter cylindrical objects embedded in a 38 mm-diameter cylindrical phantom when MRI structural a priori information is utilized. PMID:21753235

  8. The perception of geometrical structure from congruence

    NASA Technical Reports Server (NTRS)

    Lappin, Joseph S.; Wason, Thomas D.

    1989-01-01

    The principal function of vision is to measure the environment. As demonstrated by the coordination of motor actions with the positions and trajectories of moving objects in cluttered environments and by rapid recognition of solid objects in varying contexts from changing perspectives, vision provides real-time information about the geometrical structure and location of environmental objects and events. The geometric information provided by 2-D spatial displays is examined. It is proposed that the geometry of this information is best understood not within the traditional framework of perspective trigonometry, but in terms of the structure of qualitative relations defined by congruences among intrinsic geometric relations in images of surfaces. The basic concepts of this geometrical theory are outlined.

  9. Compton imaging tomography for nondestructive evaluation of large multilayer aircraft components and structures

    NASA Astrophysics Data System (ADS)

    Romanov, Volodymyr; Grubsky, Victor; Zahiri, Feraidoon

    2017-02-01

    We present a novel NDT/NDE tool for non-contact, single-sided 3D inspection of aerospace components, based on the Compton Imaging Tomography (CIT) technique, which is applicable to large, non-uniform, and/or multilayer structures made of composites or lightweight metals. CIT is based on the registration of Compton-scattered X-rays, and permits the reconstruction of the full 3D (tomographic) image of the inspected objects. Unlike conventional computerized tomography (CT), CIT requires only single-sided access to objects, and therefore can be applied to large structures without their disassembly. The developed tool provides accurate detection, identification, and precise 3D localization and measurement of any possible internal and surface defects (corrosion, cracks, voids, delaminations, porosity, and inclusions), and also disbonds, core and skin defects, and intrusion of foreign fluids (e.g., fresh and salt water, oil) inside honeycomb sandwich structures. The NDE capabilities of the system were successfully demonstrated on various aerospace structure samples provided by several major aerospace companies. Such a CIT-based tool can detect and localize individual internal defects with dimensions of about 1-2 mm³, and honeycomb disbond defects smaller than 6 mm by 6 mm in area with variations in the thickness of the adhesive of 100 μm. The current maximum scanning speed for aircraft/spacecraft structures is about 5-8 min/ft² (50-80 min/m²).

  10. The optical lens coupled X-ray in-line phase contrast imaging system for the characterization of low Z materials

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Lin, Wei; Dai, Fei; Li, Jun; Qi, Xiaobo; Lei, Haile; Liu, Yuanqiong

    2018-05-01

    Due to the high spatial resolution and contrast, the optical lens coupled X-ray in-line phase contrast imaging system with the secondary optical magnification is more suitable for the characterization of the low Z materials. The influence of the source to object distance and the object to scintillator distance on the image resolution and contrast is studied experimentally. A phase correlation algorithm is used for the image mosaic of a serial of X-ray phase contrast images acquired with high resolution, the resulting resolution is less than 1.0 μm, and the whole field of view is larger than 1.4 mm. Finally, the geometric morphology and the inner structure of various weakly absorbing samples and the evaporation of water in the plastic micro-shell are in situ characterized by the optical lens coupled X-ray in-line phase contrast imaging system.
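
    Phase correlation estimates the translation between two overlapping tiles from the peak of the inverse FFT of their normalised cross-power spectrum; a minimal sketch of that registration step, which the mosaicking described above relies on, is given below with a synthetic tile pair.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) shift that maps image b onto image a."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross_power = A * np.conj(B)
    cross_power /= np.abs(cross_power) + 1e-12       # keep only phase information
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half-range to negative shifts.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
tile = rng.normal(size=(256, 256))
shifted = np.roll(tile, shift=(17, -23), axis=(0, 1))  # simulated overlapping tile
print(phase_correlation_shift(shifted, tile))          # expected (17, -23)
```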

  11. General imaging of advanced 3D mask objects based on the fully-vectorial extended Nijboer-Zernike (ENZ) theory

    NASA Astrophysics Data System (ADS)

    van Haver, Sven; Janssen, Olaf T. A.; Braat, Joseph J. M.; Janssen, Augustus J. E. M.; Urbach, H. Paul; Pereira, Silvania F.

    2008-03-01

    In this paper we introduce a new mask imaging algorithm that is based on the source point integration method (or Abbe method). The method presented here distinguishes itself from existing methods by exploiting the through-focus imaging feature of the Extended Nijboer-Zernike (ENZ) theory of diffraction. An introduction to ENZ-theory and its application in general imaging is provided after which we describe the mask imaging scheme that can be derived from it. The remainder of the paper is devoted to illustrating the advantages of the new method over existing methods (Hopkins-based). To this extent several simulation results are included that illustrate advantages arising from: the accurate incorporation of isolated structures, the rigorous treatment of the object (mask topography) and the fully vectorial through-focus image formation of the ENZ-based algorithm.

  12. Salient object detection: manifold-based similarity adaptation approach

    NASA Astrophysics Data System (ADS)

    Zhou, Jingbo; Ren, Yongfeng; Yan, Yunyang; Gao, Shangbing

    2014-11-01

    A saliency detection algorithm based on manifold-based similarity adaptation is proposed. The proposed algorithm is divided into three steps. First, we segment an input image into superpixels, which are represented as the nodes in a graph. Second, a new similarity measurement is used in the proposed algorithm. The weight matrix of the graph, which indicates the similarities between the nodes, uses a similarity-based method. It also captures the manifold structure of the image patches, in which the graph edges are determined in a data adaptive manner in terms of both similarity and manifold structure. Then, we use local reconstruction method as a diffusion method to obtain the saliency maps. The objective function in the proposed method is based on local reconstruction, with which estimated weights capture the manifold structure. Experiments on four bench-mark databases demonstrate the accuracy and robustness of the proposed method.

  13. Basic research planning in mathematical pattern recognition and image analysis

    NASA Technical Reports Server (NTRS)

    Bryant, J.; Guseman, L. F., Jr.

    1981-01-01

    Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.

  14. A knowledge-guided active model method of cortical structure segmentation on pediatric MR images.

    PubMed

    Shan, Zuyao Y; Parra, Carlos; Ji, Qing; Jain, Jinesh; Reddick, Wilburn E

    2006-10-01

    To develop an automated method for quantification of cortical structures on pediatric MR images. A knowledge-guided active model (KAM) approach was proposed with a novel object function similar to the Gibbs free energy function. Triangular mesh models were transformed to images of a given subject by maximizing entropy, and then actively slithered to boundaries of structures by minimizing enthalpy. Volumetric results and image similarities of 10 different cortical structures segmented by KAM were compared with those traced manually. Furthermore, the segmentation performances of KAM and SPM2, (statistical parametric mapping, a MATLAB software package) were compared. The averaged volumetric agreements between KAM- and manually-defined structures (both 0.95 for structures in healthy children and children with medulloblastoma) were higher than the volumetric agreement for SPM2 (0.90 and 0.80, respectively). The similarity measurements (kappa) between KAM- and manually-defined structures (0.95 and 0.93, respectively) were higher than those for SPM2 (both 0.86). We have developed a novel automatic algorithm, KAM, for segmentation of cortical structures on MR images of pediatric patients. Our preliminary results indicated that when segmenting cortical structures, KAM was in better agreement with manually-delineated structures than SPM2. KAM can potentially be used to segment cortical structures for conformal radiation therapy planning and for quantitative evaluation of changes in disease or abnormality. Copyright (c) 2006 Wiley-Liss, Inc.

  15. Three-dimensional imaging of nanoscale materials by using coherent x-rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, Jianwei

    X-ray crystallography is currently the primary methodology used to determine the 3D structure of materials and macromolecules. However, many nanostructures, disordered materials, biomaterials, hybrid materials and biological specimens are noncrystalline and, hence, their structures are not accessible by X-ray crystallography. Probing these structures therefore requires the employment of different approaches. A very promising technique currently under rapid development is X-ray diffraction microscopy (or lensless imaging), in which the coherent X-ray diffraction pattern of a noncrystalline specimen is measured and then directly phased to obtain a high-resolution image. Through the DOE support over the past three years, we have applied X-ray diffraction microscopy to quantitative imaging of GaN quantum dot particles, and revealed the internal GaN-Ga2O3 core shell structure in three dimensions. By exploiting the abrupt change in the scattering cross-section near electronic resonances, we carried out the first experimental demonstration of resonant X-ray diffraction microscopy for element specific imaging. We performed nondestructive and quantitative imaging of buried Bi structures inside a Si crystal by directly phasing coherent X-ray diffraction patterns acquired below and above the Bi M5 edge. We have also applied X-ray diffraction microscopy to nondestructive imaging of mineral crystals inside biological composite materials - intramuscular fish bone - at nanometer scale resolution. We identified mineral crystals in collagen fibrils at different stages of mineralization and proposed a dynamic mechanism to account for the nucleation and growth of mineral crystals in the collagen matrix. In addition, we have also discovered a novel 3D imaging modality, denoted ankylography, which allows for complete 3D structure determination without the necessity of sample tilting or scanning. We showed that when the diffraction pattern of a finite object is sampled at a sufficiently fine scale on the Ewald sphere, the 3D structure of the object is determined by the 2D spherical pattern. We confirmed the theoretical analysis by performing 3D numerical reconstructions of a sodium silicate glass structure at 2 Å resolution from a 2D spherical diffraction pattern alone. As X-ray free electron lasers are under rapid development worldwide, ankylography may open up a new horizon to obtain the 3D structure of a non-crystalline specimen from a single pulse and allow time-resolved 3D structure determination of disordered materials.
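
    Coherent diffraction patterns are phased by iterating between the measured Fourier magnitudes and a real-space support constraint; the error-reduction sketch below illustrates that loop in 2D (a simplified relative of the hybrid input-output algorithms used in practice), with a synthetic object and a known support.

```python
import numpy as np

def error_reduction(measured_magnitude, support, n_iter=500, seed=0):
    """Recover a real, non-negative object from its Fourier magnitudes."""
    rng = np.random.default_rng(seed)
    obj = rng.uniform(size=measured_magnitude.shape) * support
    for _ in range(n_iter):
        F = np.fft.fft2(obj)
        F = measured_magnitude * np.exp(1j * np.angle(F))   # enforce measured modulus
        obj = np.fft.ifft2(F).real
        obj = np.where(support & (obj > 0), obj, 0.0)        # enforce support & positivity
    return obj

# Synthetic test: a small blob inside a known support region.
true = np.zeros((64, 64))
true[28:36, 30:38] = 1.0
support = np.zeros((64, 64), dtype=bool)
support[24:40, 26:42] = True
magnitude = np.abs(np.fft.fft2(true))
recovered = error_reduction(magnitude, support)
print("relative reconstruction error:",
      np.linalg.norm(recovered - true) / np.linalg.norm(true))
```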

  16. Local structure preserving sparse coding for infrared target recognition

    PubMed Central

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lianfa

    2017-01-01

    Sparse coding performs well in image classification. However, robust target recognition requires a lot of comprehensive template images and the sparse learning process is complex. We incorporate sparsity into a template matching concept to construct a local sparse structure matching (LSSM) model for general infrared target recognition. A local structure preserving sparse coding (LSPSc) formulation is proposed to simultaneously preserve the local sparse and structural information of objects. By adding a spatial local structure constraint into the classical sparse coding algorithm, LSPSc can improve the stability of sparse representation for targets and inhibit background interference in infrared images. Furthermore, a kernel LSPSc (K-LSPSc) formulation is proposed, which extends LSPSc to the kernel space to weaken the influence of the linear structure constraint in nonlinear natural data. Because of the anti-interference and fault-tolerant capabilities, both LSPSc- and K-LSPSc-based LSSM can implement target identification based on a simple template set, which just needs several images containing enough local sparse structures to learn a sufficient sparse structure dictionary of a target class. Specifically, this LSSM approach has stable performance in the target detection with scene, shape and occlusions variations. High performance is demonstrated on several datasets, indicating robust infrared target recognition in diverse environments and imaging conditions. PMID:28323824
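    As a rough illustration of how a spatial local-structure constraint can be added to classical sparse coding (the exact LSPSc formulation in the paper may differ), consider:

        \min_{a_i} \sum_i \| x_i - D a_i \|_2^2 + \lambda \sum_i \| a_i \|_1 + \beta \sum_{(i,j) \in \mathcal{N}} w_{ij} \| a_i - a_j \|_2^2

    where x_i are local infrared patches, D is the dictionary learned from the small template set, a_i are the sparse codes, and the third term (over spatial neighbourhood N with similarity weights w_ij) encourages neighbouring patches of a target to share similar codes, which is the role the structure constraint plays in stabilizing the representation against background interference.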

  17. High-resolution electron microscopy and its applications.

    PubMed

    Li, F H

    1987-12-01

    A review of research on high-resolution electron microscopy (HREM) carried out at the Institute of Physics, the Chinese Academy of Sciences, is presented. Apart from the direct observation of crystal and quasicrystal defects for some alloys, oxides, minerals, etc., and the structure determination for some minute crystals, an approximate image-contrast theory named pseudo-weak-phase object approximation (PWPOA), which shows the image contrast change with crystal thickness, is described. Within the framework of PWPOA, the image contrast of lithium ions in the crystal of R-Li2Ti3O7 has been observed. The usefulness of diffraction analysis techniques such as the direct method and Patterson method in HREM is discussed. Image deconvolution and resolution enhancement for weak-phase objects by use of the direct method are illustrated. In addition, preliminary results of image restoration for thick crystals are given.

  18. Astronomical image data compression by morphological skeleton transformation

    NASA Astrophysics Data System (ADS)

    Huang, L.; Bijaoui, A.

    A compression method adapted for exact restoration of the detected objects and based on the morphological skeleton transformation is presented. The morphological skeleton provides a complete and compact description of an object and gives an efficient compression rate. The flexibility of choosing a structuring element adapted to different images and the simplicity of the implementation are considered advantages of the method. The experiment was carried out on three typical astronomical images. The first two images were obtained by digitizing a Palomar Schmidt photographic plate of the Coma field with the PDS microdensitometer at Nice Observatory. The third image was obtained with a CCD camera at the Pic du Midi Observatory. Each pixel was coded with 16 bits and stored on a computer system (VAX 785) in STII format. Each image contains 256 x 256 pixels. The first image represents a stellar field, the second a set of galaxies in Coma, and the third contains an elliptical galaxy.
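    A minimal 2-D sketch of the underlying idea, assuming a binary object mask and SciPy (Lantuejoul's skeleton decomposition; function names and parameters here are illustrative, not the authors' implementation):

    import numpy as np
    from scipy import ndimage as ndi

    def skeleton_subsets(mask, structure=None):
        # Lantuejoul decomposition: S_n = erode^n(X) minus its opening.
        # Storing the (n, S_n) pairs gives a compact, exactly invertible description.
        if structure is None:
            structure = ndi.generate_binary_structure(2, 1)
        subsets = []
        eroded = mask.astype(bool)
        n = 0
        while eroded.any():
            opened = ndi.binary_opening(eroded, structure=structure)
            subsets.append((n, eroded & ~opened))
            eroded = ndi.binary_erosion(eroded, structure=structure)
            n += 1
        return subsets

    def reconstruct(subsets, shape, structure=None):
        # Exact restoration: dilate each subset n times and take the union.
        if structure is None:
            structure = ndi.generate_binary_structure(2, 1)
        out = np.zeros(shape, dtype=bool)
        for n, s_n in subsets:
            out |= ndi.binary_dilation(s_n, structure=structure, iterations=n) if n else s_n
        return out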

  19. A Review of Digital Image Correlation Applied to Structural Dynamics

    NASA Astrophysics Data System (ADS)

    Niezrecki, Christopher; Avitabile, Peter; Warren, Christopher; Pingle, Pawan; Helfrick, Mark

    2010-05-01

    A significant amount of interest exists in performing non-contacting, full-field surface velocity measurement. For many years, traditional non-contacting surface velocity measurements have been made by using scanning laser Doppler vibrometry, shearography, pulsed laser interferometry, pulsed holography, or electronic speckle pattern interferometry (ESPI). Three-dimensional (3D) digital image correlation (DIC) methods utilize the alignment of a stereo pair of images to obtain full-field geometry data in three dimensions. Information about the change in geometry of an object over time can be found by comparing a sequence of images, and virtual strain gages (or position sensors) can be created over the entire visible surface of the object of interest. Digital imaging techniques were first developed in the 1980s, but the technology has only recently been exploited in industry and research due to the advances of digital cameras and personal computers. The use of DIC for structural dynamic measurement has only very recently been investigated. Within this paper, the advantages and limits of using DIC for dynamic measurement are reviewed. Examples of using DIC for dynamic measurement are presented for several vibrating and rotating structures.

  20. Syntactic methods of shape feature description and its application in analysis of medical images

    NASA Astrophysics Data System (ADS)

    Ogiela, Marek R.; Tadeusiewicz, Ryszard

    2000-02-01

    The paper presents specialist algorithms for morphological analysis of the shapes of selected organs of the abdominal cavity, proposed in order to diagnose disease symptoms occurring in the main pancreatic ducts and upper segments of the ureters. Analysis of the correct morphology of these structures has been conducted with the use of syntactic methods of pattern recognition. Its main objective is computer-aided support of early diagnosis of neoplastic lesions and pancreatitis based on images taken in the course of examination with the endoscopic retrograde cholangiopancreatography (ERCP) method, and diagnosis of morphological lesions in the ureter based on kidney radiogram analysis. In the analysis of ERCP images, the main objective is to recognize morphological lesions in the pancreatic ducts characteristic of carcinoma and chronic pancreatitis. In the case of kidney radiogram analysis, the aim is to diagnose local irregularity of the ureter lumen. Diagnosing the above-mentioned lesions has been conducted with the use of syntactic methods of pattern recognition, in particular languages of shape feature description and context-free attributed grammars. These methods allow the aforementioned lesions to be recognized and described very efficiently on images obtained as a result of initial image processing into diagrams of widths of the examined structures.

  1. Motion and Structure Estimation of Manoeuvring Objects in Multiple- Camera Image Sequences

    DTIC Science & Technology

    1992-11-01


  2. Seeing Spots and Developing Multiplicative Sense Making

    ERIC Educational Resources Information Center

    Matney, Gabriel T.; Daugherty, Brooke N.

    2013-01-01

    Dot arrays provide opportunities for students to notice structures like commutativity and distributivity, giving these properties an image that can be manipulated and explored. These images also connect to ways that we organize discrete objects in everyday life. This article describes how the authors developed an array of dot tasks that have been…

  3. Viewing Artworks: Contributions of Cognitive Control and Perceptual Facilitation to Aesthetic Experience

    ERIC Educational Resources Information Center

    Cupchik, Gerald C.; Vartanian, Oshin; Crawley, Adrian; Mikulis, David J.

    2009-01-01

    When we view visual images in everyday life, our perception is oriented toward object identification. In contrast, when viewing visual images "as artworks", we also tend to experience subjective reactions to their stylistic and structural properties. This experiment sought to determine how cognitive control and perceptual facilitation contribute…

  4. Efficient threshold for volumetric segmentation

    NASA Astrophysics Data System (ADS)

    Burdescu, Dumitru D.; Brezovan, Marius; Stanescu, Liana; Stoica Spahiu, Cosmin; Ebanca, Daniel

    2015-07-01

    Image segmentation plays a crucial role in the effective understanding of digital images. However, research on a general-purpose segmentation algorithm that suits a variety of applications is still very much active. Among the many approaches to image segmentation, the graph-based approach is gaining popularity primarily due to its ability to reflect global image properties. Volumetric image segmentation can simply result in an image partition composed of relevant regions, but the most fundamental challenge for a segmentation algorithm is to precisely define the volumetric extent of an object, which may be represented by the union of multiple regions. The aim of this paper is to present a new method, with an efficient threshold, to detect visual objects in color volumetric images. We present a unified framework for volumetric image segmentation and contour extraction that uses a virtual tree-hexagonal structure defined on the set of image voxels. The advantage of using a virtual tree-hexagonal network superposed over the initial image voxels is that it reduces the execution time and the memory space used, without losing the initial resolution of the image.

  5. Mapping accuracy via spectrally and structurally based filtering techniques: comparisons through visual observations

    NASA Astrophysics Data System (ADS)

    Chockalingam, Letchumanan

    2005-01-01

    LANDSAT data for the Gunung Ledang region of Malaysia are used to map certain hydrogeological features. To map these significant features, image-processing tools such as contrast enhancement and edge detection are employed. The advantages of these techniques over other methods are evaluated in terms of their validity in properly isolating features of hydrogeological interest. Because these techniques exploit the spectral aspects of the images, they have several limitations in meeting the objectives. To address these limitations, a morphological transformation, which considers structural rather than spectral aspects of the image, is applied, providing comparisons between the results derived from spectrally based and structurally based filtering techniques.

  6. Mapping gray-scale image to 3D surface scanning data by ray tracing

    NASA Astrophysics Data System (ADS)

    Li, Peng; Jones, Peter R. M.

    1997-03-01

    The extraction and location of feature points from range imaging is an important but difficult task in machine-vision-based measurement systems. Some feature points cannot be detected from purely geometric characteristics, particularly in measurement tasks related to the human body. The Loughborough Anthropometric Shadow Scanner (LASS) is a whole-body surface scanner based on the structured light technique. Certain applications of LASS require accurate location of anthropometric landmarks from the scanned data. This is sometimes impossible from the existing raw data because some landmarks do not appear in the scanned data. Identification of these landmarks has to resort to the surface texture of the scanned object. Modifications to LASS were made to allow gray-scale images to be captured before or after the object was scanned. The two-dimensional gray-scale image must be mapped to the scanned data to acquire the 3D coordinates of a landmark. The method to map 2D images to the scanned data is based on the collinearity conditions and a ray-tracing method. If the camera center and image coordinates are known, the corresponding object point must lie on a ray starting from the camera center and passing through the image point. By intersecting the ray with the scanned surface of the object, the 3D coordinates of the point can be determined. Experimentation has demonstrated the feasibility of the method.
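    A minimal sketch of the ray-tracing step described above, assuming a pinhole camera with known intrinsics and a scanned surface given as a point cloud (the function names, the tolerance value and the nearest-point approximation are illustrative, not the LASS implementation):

    import numpy as np

    def pixel_ray(K, pixel):
        # Back-project an image coordinate (u, v) into a unit viewing ray.
        d = np.linalg.solve(K, np.array([pixel[0], pixel[1], 1.0]))
        return d / np.linalg.norm(d)

    def landmark_3d(camera_center, ray, scan_points, tol=2.0):
        # Approximate the ray/surface intersection by the scanned point that lies
        # closest to the ray; returns None if the ray misses the scanned surface.
        rel = scan_points - camera_center
        t = np.clip(rel @ ray, 0.0, None)             # distance along the ray, in front of the camera
        foot = camera_center + t[:, None] * ray        # foot of the perpendicular on the ray
        dist = np.linalg.norm(scan_points - foot, axis=1)
        i = int(np.argmin(dist))
        return scan_points[i] if dist[i] < tol else None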

  7. A simple prescription for simulating and characterizing gravitational arcs

    NASA Astrophysics Data System (ADS)

    Furlanetto, C.; Santiago, B. X.; Makler, M.; de Bom, C.; Brandt, C. H.; Neto, A. F.; Ferreira, P. C.; da Costa, L. N.; Maia, M. A. G.

    2013-01-01

    Simple models of gravitational arcs are crucial for simulating large samples of these objects with full control of the input parameters. These models also provide approximate and automated estimates of the shape and structure of the arcs, which are necessary for detecting and characterizing these objects on massive wide-area imaging surveys. We here present and explore the ArcEllipse, a simple prescription for creating objects with a shape similar to gravitational arcs. We also present PaintArcs, which is a code that couples this geometrical form with a brightness distribution and adds the resulting object to images. Finally, we introduce ArcFitting, which is a tool that fits ArcEllipses to images of real gravitational arcs. We validate this fitting technique using simulated arcs and apply it to CFHTLS and HST images of tangential arcs around clusters of galaxies. Our simple ArcEllipse model for the arc, associated to a Sérsic profile for the source, recovers the total signal in real images typically within 10%-30%. The ArcEllipse+Sérsic models also automatically recover visual estimates of length-to-width ratios of real arcs. Residual maps between data and model images reveal the incidence of arc substructure. They may thus be used as a diagnostic for arcs formed by the merging of multiple images. The incidence of these substructures is the main factor that prevents ArcEllipse models from accurately describing real lensed systems.

  8. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning

    NASA Astrophysics Data System (ADS)

    Fernandez Galarreta, J.; Kerle, N.; Gerke, M.

    2015-06-01

    Structural damage assessment is critical after disasters but remains a challenge. Many studies have explored the potential of remote sensing data, but limitations of vertical data persist. Oblique imagery has been identified as more useful, though the multi-angle imagery also adds a new dimension of complexity. This paper addresses damage assessment based on multi-perspective, overlapping, very high resolution oblique images obtained with unmanned aerial vehicles (UAVs). 3-D point-cloud assessment for the entire building is combined with detailed object-based image analysis (OBIA) of façades and roofs. This research focuses not on automatic damage assessment, but on creating a methodology that supports the often ambiguous classification of intermediate damage levels, aiming at producing comprehensive per-building damage scores. We identify completely damaged structures in the 3-D point cloud, and for all other cases provide the OBIA-based damage indicators to be used as auxiliary information by damage analysts. The results demonstrate the usability of the 3-D point-cloud data to identify major damage features. Also the UAV-derived and OBIA-processed oblique images are shown to be a suitable basis for the identification of detailed damage features on façades and roofs. Finally, we also demonstrate the possibility of aggregating the multi-perspective damage information at building level.

  9. 4D Hyperspherical Harmonic (HyperSPHARM) Representation of Surface Anatomy: A Holistic Treatment of Multiple Disconnected Anatomical Structures

    PubMed Central

    Hosseinbor, A. Pasha; Chung, Moo K.; Koay, Cheng Guan; Schaefer, Stacey M.; van Reekum, Carien M.; Schmitz, Lara Peschke; Sutterer, Matt; Alexander, Andrew L.; Davidson, Richard J.

    2015-01-01

    Image-based parcellation of the brain often leads to multiple disconnected anatomical structures, which pose significant challenges for analyses of morphological shapes. Existing shape models, such as the widely used spherical harmonic (SPHARM) representation, assume topological invariance, so they are unable to simultaneously parameterize multiple disjoint structures. In such a situation, SPHARM has to be applied separately to each individual structure. We present a novel surface parameterization technique using 4D hyperspherical harmonics to represent multiple disjoint objects as a single analytic function, terming it HyperSPHARM. The underlying idea behind HyperSPHARM is to stereographically project an entire collection of disjoint 3D objects onto the 4D hypersphere and subsequently parameterize them simultaneously with the 4D hyperspherical harmonics. Hence, HyperSPHARM allows for a holistic treatment of multiple disjoint objects, unlike SPHARM. In an imaging dataset of healthy adult human brains, we apply HyperSPHARM to the hippocampi and amygdalae. The HyperSPHARM representations are employed as a data smoothing technique, while the HyperSPHARM coefficients are utilized in a support vector machine setting for object classification. HyperSPHARM yields nearly identical results as SPHARM, as shown in the paper. Its key advantage over SPHARM is computational: HyperSPHARM is more efficient because it can parameterize multiple disjoint structures using far fewer basis functions, and the stereographic projection obviates SPHARM's burdensome surface flattening. In addition, HyperSPHARM can handle any type of topology, unlike SPHARM, whose analysis is confined to topologically invariant structures. PMID:25828650
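    The stereographic projection underlying HyperSPHARM is simple to write down; a hedged NumPy sketch (the centering and scaling conventions of the paper are not reproduced here) is:

    import numpy as np

    def to_hypersphere(points, scale=1.0):
        # Inverse stereographic projection of 3-D coordinates onto the unit 4-D
        # hypersphere S^3; every row of the result has unit norm.
        x = np.asarray(points, dtype=float) * scale
        r2 = np.sum(x * x, axis=1)
        u = np.empty((x.shape[0], 4))
        u[:, :3] = 2.0 * x / (r2 + 1.0)[:, None]
        u[:, 3] = (r2 - 1.0) / (r2 + 1.0)
        return u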

  10. An Analysis and Classification of Dying AGB Stars Transitioning to Pre-Planetary Nebulae

    NASA Technical Reports Server (NTRS)

    Blake, Adam C.

    2011-01-01

    The principal objective of the project is to understand part of the life and death process of a star. During the end of a star's life, it expels its mass at a very rapid rate. We want to understand how these Asymptotic Giant Branch (AGB) stars begin forming asymmetric structures as they start evolving towards the planetary nebula phase and why planetary nebulae show a very large variety of non-round geometrical shapes. To do this, we analyzed images of just-forming pre-planetary nebula from Hubble surveys. These images were run through various image correction processes like saturation correction and cosmic ray removal using in-house software to bring out the circumstellar structure. We classified the visible structure based on qualitative data such as lobe, waist, halo, and other structures. Radial and azimuthal intensity cuts were extracted from the images to quantitatively examine the circumstellar structure and measure departures from the smooth spherical outflow expected during most of the AGB mass-loss phase. By understanding the asymmetrical structure, we hope to understand the mechanisms that drive this stellar evolution.

  11. Cone beam computed tomography in the diagnosis of dental disease.

    PubMed

    Tetradis, Sotirios; Anstey, Paul; Graff-Radford, Steven

    2011-07-01

    Conventional radiographs provide important information for dental disease diagnosis. However, they represent 2-D images of 3-D objects with significant structure superimposition and unpredictable magnification. Cone beam computed tomography, however, allows true 3-D visualization of the dentoalveolar structures, avoiding major limitations of conventional radiographs. Cone beam computed tomography images offer great advantages in disease detection for selected patients. The authors discuss cone beam computed tomography applications in dental disease diagnosis, reviewing the pertinent literature when available.

  12. Cytology 3D structure formation based on optical microscopy images

    NASA Astrophysics Data System (ADS)

    Pronichev, A. N.; Polyakov, E. V.; Shabalova, I. P.; Djangirova, T. V.; Zaitsev, S. M.

    2017-01-01

    The article is devoted to optimizing the imaging parameters of biological preparations in optical microscopy using a multispectral camera in the visible range of electromagnetic radiation. A model for forming the images of virtual preparations is proposed. The optimum number of layers for scanning the object in depth, and for a holistic perception of it, was determined from the results of the experiment.

  13. Synthesis of Systemic Functional Theory & Dynamical Systems Theory for Socio-Cultural Modeling

    DTIC Science & Technology

    2011-01-26

    is, language and other resources (e.g. images and sound resources) are conceptualised as inter-locking systems of meaning which realise four...hierarchical ranks and strata (e.g. sounds, word groups, clauses, and complex discourse structures in language, and elements, figures and episodes in images ...integrating platform for describing how language and other resources (e.g. images and sound) work together to fulfil particular objectives. While

  14. Microenvironments and Signaling Pathways Regulating Early Dissemination, Dormancy, and Metastasis

    DTIC Science & Technology

    2016-09-01

    all these cell types in all tissues and we have used intravital imaging to document intravasation in early cancer lesions (see also partnering PI...report we showed how we optimized a mammary gland imaging window to perform intravital imaging and detect P-TMEM function during early stages of...MECs) assemble primary Tumor Microenvironment of Metastasis structures (P-TMEM) during early dissemination. SA1.1. Objective: Use intravital

  15. Method for radiometric calibration of an endoscope's camera and light source

    NASA Astrophysics Data System (ADS)

    Rai, Lav; Higgins, William E.

    2008-03-01

    An endoscope is a commonly used instrument for performing minimally invasive visual examination of the tissues inside the body. A physician uses the endoscopic video images to identify tissue abnormalities. The images, however, are highly dependent on the optical properties of the endoscope and its orientation and location with respect to the tissue structure. The analysis of endoscopic video images is, therefore, purely subjective. Studies suggest that the fusion of endoscopic video images (providing color and texture information) with virtual endoscopic views (providing structural information) can be useful for assessing various pathologies for several applications: (1) surgical simulation, training, and pedagogy; (2) the creation of a database for pathologies; and (3) the building of patient-specific models. Such fusion requires both geometric and radiometric alignment of endoscopic video images in the texture space. Inconsistent estimates of texture/color of the tissue surface result in seams when multiple endoscopic video images are combined together. This paper (1) identifies the endoscope-dependent variables to be calibrated for objective and consistent estimation of surface texture/color and (2) presents an integrated set of methods to measure them. Results show that the calibration method can be successfully used to estimate objective color/texture values for simple planar scenes, whereas uncalibrated endoscopes performed very poorly for the same tests.

  16. Indexing and retrieving point and region objects

    NASA Astrophysics Data System (ADS)

    Ibrahim, Azzam T.; Fotouhi, Farshad A.

    1996-03-01

    R-tree and its variants are examples of spatial data structures for paged-secondary memory. To process a query, these structures require multiple path traversals. In this paper, we present a new image access method, SB+-tree which requires a single path traversal to process a query. Also, SB+-tree will allow commercial databases an access method for spatial objects without a major change, since most commercial databases already support B+-tree as an access method for text data. The SB+-tree can be used for zero and non-zero size data objects. Non-zero size objects are approximated by their minimum bounding rectangles (MBRs). The number of SB+-trees generated is dependent upon the number of dimensions of the approximation of the object. The structure supports efficient spatial operations such as regions-overlap, distance and direction. In this paper, we experimentally and analytically demonstrate the superiority of SB+-tree over R-tree.

  17. Plenoptic Imager for Automated Surface Navigation

    NASA Technical Reports Server (NTRS)

    Zollar, Byron; Milder, Andrew; Mayo, Michael

    2010-01-01

    An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved the feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprised of a main aperture lens, a mechanical structure that holds an array of micro lenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor at the correct distance from the micro lenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.

  18. Image quality affected by diffraction of aperture structure arrangement in transparent active-matrix organic light-emitting diode displays.

    PubMed

    Tsai, Yu-Hsiang; Huang, Mao-Hsiu; Jeng, Wei-de; Huang, Ting-Wei; Lo, Kuo-Lung; Ou-Yang, Mang

    2015-10-01

    Transparent displays are one of the main technologies in next-generation displays, especially for augmented reality applications. An aperture structure is attached to each display pixel to partition it into transparent and black regions. However, diffraction blur caused by the aperture structure typically degrades the transparent image when the light from a background object passes through the finite aperture window. In this paper, the diffraction effect of an active-matrix organic light-emitting diode (AMOLED) display is studied. Several aperture structures have been proposed and implemented. Based on theoretical analysis and simulation, an appropriate aperture structure will effectively reduce the blur. The analysis data are also consistent with the experimental results. Compared with the various transparent aperture structures on the AMOLED, the diffraction width (zero-energy position of the diffraction pattern) of the optimized aperture structure is reduced by 63% and 31% in the x and y directions in CASE 3. With a lenticular lens on the aperture structure, the improvement reaches 77% and 54% of the diffraction width in the x and y directions. Modulation transfer function results and practical images are provided to evaluate the improvement in image blur.

  19. Quantitative pathology in virtual microscopy: history, applications, perspectives.

    PubMed

    Kayser, Gian; Kayser, Klaus

    2013-07-01

    With the emerging success of commercially available personal computers and the rapid progress in the development of information technologies, morphometric analyses of static histological images have been introduced to improve our understanding of the biology of diseases such as cancer. The first applications were quantifications of immunohistochemical expression patterns. In addition to object counting and feature extraction, laws of thermodynamics have been applied in morphometric calculations termed syntactic structure analysis. Here, one has to consider that the information of an image can be calculated for separate hierarchical layers such as single pixels, clusters of pixels, segmented small objects, clusters of small objects, and objects of higher order composed of several small objects. Using syntactic structure analysis in histological images, functional states can be extracted and the efficiency of labor in tissues can be quantified. Image standardization procedures, such as shading correction and color normalization, can overcome artifacts blurring clear thresholds. Morphometric techniques are not only useful for learning more about biological features of growth patterns, they can also be helpful in routine diagnostic pathology. In such cases, entropy calculations are applied in analogy to theoretical considerations concerning information content. Thus, regions with high information content can automatically be highlighted. Analysis of the "regions of high diagnostic value" can deliver support in histopathological differential diagnoses in the context of clinical information, site of involvement, and patient data (e.g., age, sex). It can be expected that quantitative virtual microscopy will open new possibilities for automated histological support. Automated integrated quantification of histological slides also serves quality assurance. The development and theoretical background of morphometric analyses in histopathology are reviewed, as well as their application and potential future implementation in virtual microscopy. Copyright © 2012 Elsevier GmbH. All rights reserved.
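    As an illustration of the entropy-based highlighting mentioned above, a hedged sketch that scores image tiles by the Shannon entropy of their gray-level histogram (the tile size and bin count are arbitrary choices, not taken from the review):

    import numpy as np

    def tile_entropy(gray, tile=64, bins=64):
        # Shannon entropy (bits) of the gray-level histogram in non-overlapping
        # tiles; tiles with high entropy are candidate regions of high
        # diagnostic value for closer inspection.
        h, w = gray.shape
        out = np.zeros((h // tile, w // tile))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                block = gray[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
                hist, _ = np.histogram(block, bins=bins, range=(0, 256))
                p = hist[hist > 0] / hist.sum()
                out[i, j] = -np.sum(p * np.log2(p))
        return out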

  20. Investigation of Local Ordering in Amorphous Materials.

    NASA Astrophysics Data System (ADS)

    Fan, Gary Guoyou

    The intent of the research described in this dissertation, as indicated by the title, is to provide a better understanding of the structure of amorphous materials. The possibility of using electron microscopy to study amorphous structure is investigated. Chapter 1 gives a brief introduction to the understanding and modeling of amorphous structure, electron microscopy and image analysis in general. The difficulty of using 2-D images to infer 3-D structural information is illustrated in Chapter 2, where it is shown that some high-resolution images are not qualitatively different from images of white-noise weak-phase objects or those of random atomic arrangements. The means of obtaining statistical information from these images is given in Chapters 3 and 5, where the quantitative differences between experimental images and simulated white-noise images or simulated images corresponding to random arrangements are revealed. The use of image processing techniques in electron microscopy and the possible artifacts are presented in Chapter 4. The pattern recognition technique outlined in Chapter 6 demonstrates a feasible mode of scanning transmission electron microscope operation. Statistical analysis can be effectively performed on a large number of nano-diffraction patterns from, for example, locally ordered samples. Some recent developments in physics as well as in electron microscopy are briefly reviewed, and their possible applications in the study of amorphous structures are discussed in Chapter 7.

  1. The cognitive structural approach for image restoration

    NASA Astrophysics Data System (ADS)

    Mardare, Igor; Perju, Veacheslav; Casasent, David

    2008-03-01

    The important and timely problem of restoring defective images of scenes is analyzed. The proposed approach provides restoration of scenes by a system based on reproducing phenomena of human intelligence used for the restoration and recognition of images. Cognitive models of the restoration process are elaborated. The models are realized by intellectual processors constructed on the basis of neural networks and associative memory, using the neural network simulator NNToolbox from MATLAB 7.0. The models provide restoration and semantic construction of images of scenes from defective images of the separate objects.

  2. Design of microcamera for field curvature and distortion correction in monocentric multiscale foveated imaging system

    NASA Astrophysics Data System (ADS)

    Wu, Xiongxiong; Wang, Xiaorui; Zhang, Jianlei; Yuan, Ying; Chen, Xiaoxiang

    2017-04-01

    To realize a large field of view (FOV) and high-resolution dynamic gaze at a moving target, this paper proposes the monocentric multiscale foveated (MMF) imaging system based on monocentric multiscale design and foveated imaging. First, we present the MMF imaging system concept. Then we analyze the large field curvature and distortion of the secondary image when the spherical intermediate image produced by the primary monocentric objective lens is relayed by the microcameras. Further, a type of zoom endoscope objective lens is selected as the initial structure and optimized to minimize the field curvature and distortion with the ZEMAX optical design software. The simulation results show that the maximum field curvature over the full field of view is below 0.25 mm and the maximum distortion over the full field of view is below 0.6%, which meets the requirements of the microcamera in the proposed MMF imaging system. In addition, a simple doublet is used to design the foveated imaging system. The results for the microcamera, together with those for the foveated imager, compose the results for the whole MMF imaging system.

  3. Image registration of naval IR images

    NASA Astrophysics Data System (ADS)

    Rodland, Arne J.

    1996-06-01

    In a real-world application, an image from a stabilized sensor on a moving platform will not be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformations in the structure between the sensor and the reference inertial navigation unit, servo inaccuracies, etc. For a high-resolution imaging sensor, this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects from such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high-contrast points distributed over the whole image. As long as moving objects in the scene cover only a small area of the scene, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is then searched for points whose movement diverges from the estimated stabilization error. These points are assumed to be located on moving objects. Points assumed to be located on moving objects are gradually exchanged with new points located in the same area. Most of the processing is performed on the list of points and not on the complete image. The algorithm is therefore very fast and well suited for real-time implementation. The algorithm has been tested on images from an experimental IR scanner. Stabilization errors were added artificially to the images so that the output from the algorithm could be compared with the artificially added stabilization errors.
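    The frame-to-frame matching step can be illustrated with a hedged sketch: given the lists of matched high-contrast points in two frames, a robust global shift is estimated and points that deviate from it are flagged as candidate moving objects (the median estimator and the threshold are illustrative assumptions, not the evaluated algorithm):

    import numpy as np

    def stabilization_shift(pts_prev, pts_curr, outlier_thresh=2.0):
        # pts_prev, pts_curr: (N, 2) matched point positions in consecutive frames.
        motion = pts_curr - pts_prev
        shift = np.median(motion, axis=0)                  # robust to a few moving points
        residual = np.linalg.norm(motion - shift, axis=1)
        moving = residual > outlier_thresh                 # candidate moving-object points
        return shift, moving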

  4. Photogrammetric Analysis of Historical Image Repositories for Virtual Reconstruction in the Field of Digital Humanities

    NASA Astrophysics Data System (ADS)

    Maiwald, F.; Vietze, T.; Schneider, D.; Henze, F.; Münster, S.; Niebling, F.

    2017-02-01

    Historical photographs contain high density of information and are of great importance as sources in humanities research. In addition to the semantic indexing of historical images based on metadata, it is also possible to reconstruct geometric information about the depicted objects or the camera position at the time of the recording by employing photogrammetric methods. The approach presented here is intended to investigate (semi-) automated photogrammetric reconstruction methods for heterogeneous collections of historical (city) photographs and photographic documentation for the use in the humanities, urban research and history sciences. From a photogrammetric point of view, these images are mostly digitized photographs. For a photogrammetric evaluation, therefore, the characteristics of scanned analog images with mostly unknown camera geometry, missing or minimal object information and low radiometric and geometric resolution have to be considered. In addition, these photographs have not been created specifically for documentation purposes and so the focus of these images is often not on the object to be evaluated. The image repositories must therefore be subjected to a preprocessing analysis of their photogrammetric usability. Investigations are carried out on the basis of a repository containing historical images of the Kronentor ("crown gate") of the Dresden Zwinger. The initial step was to assess the quality and condition of available images determining their appropriateness for generating three-dimensional point clouds from historical photos using a structure-from-motion evaluation (SfM). Then, the generated point clouds were assessed by comparing them with current measurement data of the same object.

  5. A scale-based connected coherence tree algorithm for image segmentation.

    PubMed

    Ding, Jundi; Ma, Runing; Chen, Songcan

    2008-02-01

    This paper presents a connected coherence tree algorithm (CCTA) for image segmentation with no prior knowledge. It aims to find regions of semantic coherence based on the proposed epsilon-neighbor coherence segmentation criterion. More specifically, with an adaptive spatial scale and an appropriate intensity-difference scale, CCTA often achieves several sets of coherent neighboring pixels which maximize the probability of being a single image content (including kinds of complex backgrounds). In practice, each set of coherent neighboring pixels corresponds to a coherence class (CC). The fact that each CC just contains a single equivalence class (EC) ensures the separability of an arbitrary image theoretically. In addition, the resultant CCs are represented by tree-based data structures, named connected coherence tree (CCT)s. In this sense, CCTA is a graph-based image analysis algorithm, which expresses three advantages: 1) its fundamental idea, epsilon-neighbor coherence segmentation criterion, is easy to interpret and comprehend; 2) it is efficient due to a linear computational complexity in the number of image pixels; 3) both subjective comparisons and objective evaluation have shown that it is effective for the tasks of semantic object segmentation and figure-ground separation in a wide variety of images. Those images either contain tiny, long and thin objects or are severely degraded by noise, uneven lighting, occlusion, poor illumination, and shadow.

  6. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special-subject information depends on this extraction. On the basis of WorldView-2 high-resolution data, and using the optimal segmentation parameters method of object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor parameter and compact factor parameters were computed with the use of control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  7. Noninvasive, near-field terahertz imaging of hidden objects using a single-pixel detector.

    PubMed

    Stantchev, Rayko Ivanov; Sun, Baoqing; Hornett, Sam M; Hobson, Peter A; Gibson, Graham M; Padgett, Miles J; Hendry, Euan

    2016-06-01

    Terahertz (THz) imaging can see through otherwise opaque materials. However, because of the long wavelengths of THz radiation (λ = 400 μm at 0.75 THz), far-field THz imaging techniques suffer from low resolution compared to visible wavelengths. We demonstrate noninvasive, near-field THz imaging with subwavelength resolution. We project a time-varying, intense (>100 μJ/cm(2)) optical pattern onto a silicon wafer, which spatially modulates the transmission of a synchronous pulse of THz radiation. An unknown object is placed on the hidden side of the silicon, and the far-field THz transmission corresponding to each mask is recorded by a single-element detector. Knowledge of the patterns and of the corresponding detector signals is combined to give an image of the object. Using this technique, we image a printed circuit board on the underside of a 115-μm-thick silicon wafer with ~100-μm (λ/4) resolution. With subwavelength resolution and the inherent sensitivity to local conductivity, it is possible to detect fissures in the circuitry wiring a few micrometers in size. THz imaging systems of this type will have other uses too, where noninvasive measurement or imaging of concealed structures is necessary, such as in semiconductor manufacturing or in ex vivo bioimaging.
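    The single-pixel reconstruction amounts to inverting a linear measurement model; a hedged least-squares sketch (the published work uses specific optical mask patterns and calibration that are not modelled here):

    import numpy as np

    def reconstruct_image(masks, signals):
        # masks: (K, H, W) illumination patterns projected onto the silicon wafer;
        # signals: (K,) far-field THz transmission measured for each mask.
        # Solves signals ~= masks @ image in the least-squares sense.
        K, H, W = masks.shape
        A = masks.reshape(K, H * W).astype(float)
        x, *_ = np.linalg.lstsq(A, np.asarray(signals, dtype=float), rcond=None)
        return x.reshape(H, W)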

  8. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    PubMed

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special-subject information depends on this extraction. On the basis of WorldView-2 high-resolution data, and using the optimal segmentation parameters method of object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor parameter and compact factor parameters were computed with the use of control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.

  9. Accuracy of Shack-Hartmann wavefront sensor using a coherent wound fibre image bundle

    NASA Astrophysics Data System (ADS)

    Zheng, Jessica R.; Goodwin, Michael; Lawrence, Jon

    2018-03-01

    Shack-Hartmann wavefront sensors using wound fibre image bundles are desired for multi-object adaptive optical systems to provide a large multiplex, positioned by Starbugs. The use of a large-sized wound fibre image bundle provides the flexibility to use more sub-apertures per wavefront sensor for ELTs. These compact wavefront sensors take advantage of large focal surfaces such as that of the Giant Magellan Telescope. The focus of this paper is to study the effect of wound fibre image bundle structure defects on the centroid measurement accuracy of a Shack-Hartmann wavefront sensor. We use the first-moment centroid method to estimate the centroid of a focused Gaussian beam sampled by a simulated bundle. Spot estimation accuracy with a wound fibre image bundle and the impact of its structure on wavefront measurement accuracy statistics are addressed. Our results show that when the measurement signal-to-noise ratio is high, the centroid measurement accuracy is dominated by the wound fibre image bundle structure, e.g. tile angle and gap spacing. For measurements with low signal-to-noise ratio, the accuracy is influenced by the read noise of the detector instead of the wound fibre image bundle structure defects. We demonstrate this both with simulation and experimentally. We provide a statistical model of the centroid and wavefront error of a wound fibre image bundle found through experiment.
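    The first-moment centroid used in the simulations is straightforward; a hedged sketch for one sub-aperture spot (the background threshold is an assumption, not part of the paper's estimator):

    import numpy as np

    def first_moment_centroid(spot, background=0.0):
        # Centre-of-mass of the spot sampled through the (simulated) fibre bundle.
        img = np.clip(np.asarray(spot, dtype=float) - background, 0.0, None)
        total = img.sum()
        if total == 0.0:
            return np.nan, np.nan
        ys, xs = np.indices(img.shape)
        return (xs * img).sum() / total, (ys * img).sum() / total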

  10. Multimedia explorer: image database, image proxy-server and search-engine.

    PubMed Central

    Frankewitsch, T.; Prokosch, U.

    1999-01-01

    Multimedia plays a major role in medicine. Databases containing images, movies or other types of multimedia objects are increasing in number, especially on the WWW. However, no good retrieval mechanism or search engine currently exists to efficiently track down such multimedia sources in the vast amount of information provided by the WWW. Secondly, the tools for searching databases are usually not adapted to the properties of images. HTML pages do not allow complex searches. Therefore, establishing a more comfortable retrieval involves the use of a higher-level programming language such as JAVA. With this platform-independent language it is possible to create extensions to commonly used web browsers. These applets offer a graphical user interface for high-level navigation. We implemented a database using JAVA objects as the primary storage container, which are then stored by a JAVA-controlled ORACLE8 database. Navigation depends on a structured vocabulary enhanced by a semantic network. With this approach multimedia objects can be encapsulated within a logical module for quick data retrieval. PMID:10566463

  11. Multimedia explorer: image database, image proxy-server and search-engine.

    PubMed

    Frankewitsch, T; Prokosch, U

    1999-01-01

    Multimedia plays a major role in medicine. Databases containing images, movies or other types of multimedia objects are increasing in number, especially on the WWW. However, no good retrieval mechanism or search engine currently exists to efficiently track down such multimedia sources in the vast amount of information provided by the WWW. Secondly, the tools for searching databases are usually not adapted to the properties of images. HTML pages do not allow complex searches. Therefore, establishing a more comfortable retrieval involves the use of a higher-level programming language such as JAVA. With this platform-independent language it is possible to create extensions to commonly used web browsers. These applets offer a graphical user interface for high-level navigation. We implemented a database using JAVA objects as the primary storage container, which are then stored by a JAVA-controlled ORACLE8 database. Navigation depends on a structured vocabulary enhanced by a semantic network. With this approach multimedia objects can be encapsulated within a logical module for quick data retrieval.

  12. 3D Power Line Extraction from Multiple Aerial Images.

    PubMed

    Oh, Jaehong; Lee, Changno

    2017-09-29

    Power lines are cables that carry electrical power from a power plant to an electrical substation. They must be connected between the tower structures in such a way that ensures minimum tension and sufficient clearance from the ground. Power lines can stretch and sag with the changing weather, eventually exceeding the planned tolerances. The excessive sags can then cause serious accidents, while hindering the durability of the power lines. We used photogrammetric techniques with a low-cost drone to achieve efficient 3D mapping of power lines that are often difficult to approach. Unlike the conventional image-to-object space approach, we used the object-to-image space approach using cubic grid points. We processed four strips of aerial images to automatically extract the power line points in the object space. Experimental results showed that the approach could successfully extract the positions of the power line points for power line generation and sag measurement with the elevation accuracy of a few centimeters.
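    The object-to-image idea can be sketched as projecting candidate cubic grid points into each image and keeping those whose projections land on detected power-line pixels in enough views; everything below (the pinhole model, the voting rule, min_views) is an illustrative assumption rather than the authors' code:

    import numpy as np

    def project(K, R, t, points):
        # Collinearity (pinhole) projection of (N, 3) object points into pixel coordinates.
        cam = R @ points.T + t.reshape(3, 1)
        uvw = K @ cam
        return (uvw[:2] / uvw[2]).T

    def select_power_line_points(grid, cameras, line_masks, min_views=3):
        # grid: (M, 3) candidate cubic grid points; cameras: list of (K, R, t);
        # line_masks: binary images of detected power-line pixels, one per view.
        votes = np.zeros(len(grid), dtype=int)
        for (K, R, t), mask in zip(cameras, line_masks):
            uv = np.round(project(K, R, t, grid)).astype(int)
            h, w = mask.shape
            ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
            hit = np.zeros(len(grid), dtype=bool)
            hit[ok] = mask[uv[ok, 1], uv[ok, 0]]
            votes += hit
        return grid[votes >= min_views]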

  13. 3D Power Line Extraction from Multiple Aerial Images

    PubMed Central

    Lee, Changno

    2017-01-01

    Power lines are cables that carry electrical power from a power plant to an electrical substation. They must be connected between the tower structures in such a way that ensures minimum tension and sufficient clearance from the ground. Power lines can stretch and sag with the changing weather, eventually exceeding the planned tolerances. The excessive sags can then cause serious accidents, while hindering the durability of the power lines. We used photogrammetric techniques with a low-cost drone to achieve efficient 3D mapping of power lines that are often difficult to approach. Unlike the conventional image-to-object space approach, we used the object-to-image space approach using cubic grid points. We processed four strips of aerial images to automatically extract the power line points in the object space. Experimental results showed that the approach could successfully extract the positions of the power line points for power line generation and sag measurement with the elevation accuracy of a few centimeters. PMID:28961204

  14. Thermal Images of Seeds Obtained at Different Depths by Photoacoustic Microscopy (PAM)

    NASA Astrophysics Data System (ADS)

    Domínguez-Pacheco, A.; Hernández-Aguilar, C.; Cruz-Orea, A.

    2015-06-01

    The objective of the present study was to obtain thermal images of a broccoli seed ( Brassica oleracea) by photoacoustic microscopy, at different modulation frequencies of the incident light beam ((0.5, 1, 5, and 20) Hz). The thermal images obtained in the amplitude of the photoacoustic signal vary with each applied frequency. In the lowest light frequency modulation, there is greater thermal wave penetration in the sample. Likewise, the photoacoustic signal is modified according to the structural characteristics of the sample and the modulation frequency of the incident light. Different structural components could be seen by photothermal techniques, as shown in the present study.
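    The frequency dependence of the probed depth is commonly summarized by the thermal diffusion length (a standard photothermal relation, not a result stated in this abstract):

        \mu = \sqrt{\alpha / (\pi f)}

    where \alpha is the thermal diffusivity of the seed tissue and f is the modulation frequency of the incident light; lowering f from 20 Hz to 0.5 Hz therefore increases \mu, consistent with the deeper thermal-wave penetration reported above.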

  15. Robust Feature Matching in Terrestrial Image Sequences

    NASA Astrophysics Data System (ADS)

    Abbas, A.; Ghuffar, S.

    2018-04-01

    Over the last decade, feature detection, description and matching techniques have been most commonly exploited in various photogrammetric and computer vision applications, which include 3D reconstruction of scenes, image stitching for panorama creation, image classification, and object recognition. However, terrestrial imagery of urban scenes contains various issues, including duplicate and identical structures (i.e. repeated windows and doors) that cause problems in the feature matching phase and ultimately lead to failures, especially in camera pose and scene structure estimation. In this paper, we address the issue of ambiguous feature matching in urban environments due to repeating patterns.
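    One common pre-filter for the ambiguity caused by repeated facade elements is Lowe's ratio test, which discards a match unless its best candidate is clearly better than the second best; the OpenCV sketch below illustrates that generic idea and is not necessarily the method developed in this paper:

    import cv2

    def match_with_ratio_test(img1, img2, ratio=0.7):
        # SIFT features + brute-force matching; keep a match only if it is
        # unambiguous, i.e. clearly better than its second-nearest candidate.
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < ratio * n.distance]
        return kp1, kp2, good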

  16. Design of a high-numerical-aperture digital micromirror device camera with high dynamic range.

    PubMed

    Qiao, Yang; Xu, Xiping; Liu, Tao; Pan, Yue

    2015-01-01

    A high-NA imaging system with high dynamic range is presented based on a digital micromirror device (DMD). The DMD camera consists of an objective imaging system and a relay imaging system, connected by a DMD chip. With the introduction of a total internal reflection prism system, the objective imaging system is designed with a working F/# of 1.97, breaking through the F/2.45 limitation of conventional DMD projection lenses. As for the relay imaging system, an off-axis design that corrects the off-axis aberrations of the tilted relay imaging system is developed. This structure has the advantage of increasing the NA of the imaging system while maintaining a compact size. Investigation revealed that the dynamic range of a DMD camera could be greatly increased, by 2.41 times. We built one prototype DMD camera with a working F/# of 1.23, and field experiments proved the validity and reliability of our work.

  17. Design and implementation of a biomedical image database (BDIM).

    PubMed

    Aubry, F; Badaoui, S; Kaplan, H; Di Paola, R

    1988-01-01

    We developed a biomedical image database (BDIM) which proposes a standardized representation of value arrays such as images and curves, and of their associated parameters, independently of their acquisition mode, to make their transmission and processing easier. It includes three kinds of interactions, oriented to the users. The network concept was kept as a constraint to incorporate the BDIM into a distributed structure, and we maintained compatibility with the ACR/NEMA communication protocol. The management of arrays and their associated parameters involves two distinct bases of objects, linked together via a gateway. The first one manages arrays according to their storage mode: long-term storage on optionally on-line mass storage devices, and, for consultations, partial copies of long-term stored arrays on hard disk. The second one manages the associated parameters and the gateway by means of the relational DBMS ORACLE. Parameters are grouped into relations. Some of them are in agreement with groups defined by the ACR/NEMA. The other relations describe objects resulting from processing the initial objects. These new objects are not described by the ACR/NEMA, but they can be inserted as shadow groups of the ACR/NEMA description. The relations describing the storage and their pathnames constitute the gateway. ORACLE distributed tools and the two-level storage technique will allow the integration of the BDIM into a distributed structure. The query and array (alone or in sequences) retrieval module accesses the relations via a layer that includes a dictionary managed by ORACLE. This dictionary translates ACR/NEMA objects into objects that can be handled by the DBMS.(ABSTRACT TRUNCATED AT 250 WORDS)

  18. Adsorption of O_{2} on Ag(111): Evidence of Local Oxide Formation.

    PubMed

    Andryushechkin, B V; Shevlyuga, V M; Pavlova, T V; Zhidomirov, G M; Eltsov, K N

    2016-07-29

    The atomic structure of the disordered phase formed by oxygen on Ag(111) at low coverage is determined by a combination of low-temperature scanning tunneling microscopy and density functional theory. We demonstrate that the previous assignment of the dark objects in STM to chemisorbed oxygen atoms is incorrect and incompatible with trefoil-like structures observed in atomic-resolution images in current work. In our model, each object is an oxidelike ring formed by six oxygen atoms around the vacancy in Ag(111).

  19. [An object-oriented remote sensing image segmentation approach based on edge detection].

    PubMed

    Tan, Yu-Min; Huai, Jian-Zhu; Tang, Zhong-Shi

    2010-06-01

    Satellite sensor technology has enabled better discrimination of various landscape objects. Image segmentation approaches to extracting conceptual objects and patterns have therefore been explored, and a wide variety of such algorithms abound. To this end, in order to effectively utilize edge and topological information in high-resolution remote sensing imagery, an object-oriented algorithm combining edge detection and region merging is proposed. The SUSAN edge filter is first applied to the panchromatic band of Quickbird imagery with a spatial resolution of 0.61 m to obtain the edge map. Using the resulting edge map, a two-phase region-based segmentation method operates on the fusion image from the panchromatic and multispectral Quickbird images to get the final partition result. In the first phase, a quad-tree grid consisting of squares with sides parallel to the image left and top borders recursively agglomerates the square subsets where the uniformity measure is satisfied, to derive image object primitives. Before the merging of the second phase, the contextual and spatial information (e.g., neighbor relationships, boundary coding) of the resulting squares is retrieved efficiently by means of the quad-tree structure. Then a region merging operation is performed on those primitives, during which the criterion for region merging integrates the edge map and region-based features. This approach has been tested on QuickBird images of a site in the Sanxia area and the result is compared with those of ENVI Zoom Definiens. In addition, a quantitative evaluation of the quality of the segmentation results is presented. Experimental results demonstrate stable convergence and efficiency.
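    The first-phase quad-tree agglomeration can be illustrated with a hedged sketch that recursively splits a square block until it satisfies a uniformity measure (here simply a variance threshold, which is an assumption; the paper's criterion and its coupling to the edge map are not reproduced):

    import numpy as np

    def quadtree_primitives(img, x=0, y=0, size=None, var_thresh=25.0, min_size=4, out=None):
        # Returns a list of (x, y, size) squares whose pixels satisfy the
        # uniformity measure; these squares act as the object primitives that
        # the second phase merges using edge and region features.
        if size is None:
            size = min(img.shape)          # assumes a square, power-of-two working window
        if out is None:
            out = []
        block = img[y:y + size, x:x + size]
        if size <= min_size or block.var() <= var_thresh:
            out.append((x, y, size))
            return out
        half = size // 2
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            quadtree_primitives(img, x + dx, y + dy, half, var_thresh, min_size, out)
        return out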

  20. IPET and FETR: Experimental Approach for Studying Molecular Structure Dynamics by Cryo-Electron Tomography of a Single-Molecule Structure

    PubMed Central

    Zhang, Lei; Ren, Gang

    2012-01-01

    The dynamic personalities and structural heterogeneity of proteins are essential for proper functioning. Structural determination of dynamic/heterogeneous proteins is limited by conventional approaches of X-ray crystallography and electron microscopy (EM) single-particle reconstruction, which require an average over thousands to millions of different molecules. Cryo-electron tomography (cryoET) is an approach to determine the three-dimensional (3D) reconstruction of a single, unique biological object, such as a bacterium or cell, by imaging the object from a series of tilt angles. However, conventional reconstruction methods use large whole micrographs and are limited in reconstruction resolution (worse than 20 Å), especially for small, low-symmetry molecules (<400 kDa). In this study, we demonstrated that image distortion and measured tilt errors (including tilt-axis and tilt-angle errors) both play a major role in limiting the reconstruction resolution. Therefore, we developed a “focused electron tomography reconstruction” (FETR) algorithm to improve the resolution by decreasing the reconstructed image size so that it contains only a single-instance protein. FETR can tolerate certain levels of image distortion and tilt-measurement errors, and can also precisely determine the translational parameters via an iterative refinement process that contains a series of automatically generated dynamic filters and masks. To describe this method, a set of simulated cryoET images was employed; to validate this approach, real experimental images from negative-staining and cryoET were used. Since this approach can obtain the structure of a single-instance molecule/particle, we named it individual-particle electron tomography (IPET); it is a new robust strategy that does not require a pre-given initial model, class averaging of multiple molecules or an extended ordered lattice, but can tolerate small tilt errors for high-resolution single “snapshot” molecule structure determination. Thus, FETR/IPET provides a completely new opportunity for single-molecule structure determination, and could be used to study the dynamic character and equilibrium fluctuation of macromolecules. PMID:22291925

  1. Virus-resembling nano-structures for near infrared fluorescence imaging of ovarian cancer HER2 receptors

    NASA Astrophysics Data System (ADS)

    Guerrero, Yadir A.; Bahmani, Baharak; Singh, Sheela P.; Vullev, Valentine I.; Kundra, Vikas; Anvari, Bahman

    2015-10-01

    Ovarian cancer remains the dominant cause of death due to malignancies of the female reproductive system. The capability to identify and remove all tumors during intraoperative procedures may ultimately reduce cancer recurrence, and lead to increased patient survival. The objective of this study is to investigate the effectiveness of an optical nano-structured system for targeted near infrared (NIR) imaging of ovarian cancer cells that over-express the human epidermal growth factor receptor 2 (HER2), an important biomarker associated with ovarian cancer. The nano-structured system is comprised of genome-depleted plant-infecting brome mosaic virus doped with NIR chromophore, indocyanine green, and functionalized at the surface by covalent attachment of monoclonal antibodies against the HER2 receptor. We use absorption and fluorescence spectroscopy, and dynamic light scattering to characterize the physical properties of the constructs. Using fluorescence imaging and flow cytometry, we demonstrate the effectiveness of these nano-structures for targeted NIR imaging of HER2 receptors in vitro. These functionalized nano-materials may provide a platform for NIR imaging of ovarian cancer.

  2. Remote defect imaging for plate-like structures based on the scanning laser source technique

    NASA Astrophysics Data System (ADS)

    Hayashi, Takahiro; Maeda, Atsuya; Nakao, Shogo

    2018-04-01

    In defect imaging with the scanning laser source technique, the use of a fixed receiver enables stable measurement of the flexural waves generated by the laser at multiple raster points. This study discusses defect imaging by remote measurement using a laser Doppler vibrometer as the receiver. Narrow-band burst waves were generated by modulating the pulse trains of a fiber laser to enhance the signal-to-noise ratio in the frequency domain. Averaging three images obtained at three different frequencies suppressed spurious distributions due to resonance. The experimental system equipped with these newly devised means enabled us to visualize defects and adhesive objects in plate-like structures such as a plate with complex geometry and a branch pipe.

  3. Object detection and imaging with acoustic time reversal mirrors

    NASA Astrophysics Data System (ADS)

    Fink, Mathias

    1993-11-01

    Focusing an acoustic wave on an object of unknown shape through an inhomogeneous medium of any geometrical shape is a challenge in underground detection. Optimal detection and imaging of objects requires the development of such focusing techniques. The use of a time reversal mirror (TRM) represents an original solution to this problem. It realizes in real time a focusing process matched to the object shape, to the geometries of the acoustic interfaces and to the geometry of the mirror. It is a self-adaptive technique which compensates for any geometrical distortions of the mirror structure as well as for diffraction and refraction effects through the interfaces. Two real-time 64- and 128-channel prototypes have been built in our laboratory, and TRM experiments demonstrating the TRM performance through inhomogeneous solid and liquid media are presented. Applications to medical therapy (kidney stone detection and destruction) and to nondestructive testing of metallurgical samples of different geometries are described. Extension of this study to underground detection and imaging will be discussed.

  4. Phase object imaging inside the airy disc

    NASA Astrophysics Data System (ADS)

    Tychinsky, Vladimir P.

    1991-03-01

    The possibility of superresolution imaging of phase objects is theoretically justified. Measurements with the computer phase microscope "AIRYSCAN" demonstrated that structures can indeed be observed when the Airy disc diameter is 0.86 μm. SUMMARY: It is known that the amount of information contained in the image of any object is largely determined by the number of independently measured points, i.e. by the spatial resolution of the system. From the classical theory of optical systems it follows that for incoherent sources the spatial resolution is limited by the aperture, d ≈ 0.61 λ/NA (the Rayleigh criterion, where λ is the wavelength and NA the numerical aperture). Using this criterion is equivalent to stating that any object inside the Airy disc of radius d, i.e. the diffraction image of a point, is practically unresolved. However, under coherent illumination the intensity distribution in the image plane also depends on the phase φ(r) of the wave scattered by the object, and this is the basis of the Zernike phase-contrast method, differential interference contrast (DIC) and computer phase microscopy (CPM). In the theoretical foundation of these methods there was no doubt about the correctness of the Rayleigh criterion, since the phase information is derived from the intensity distribution, and as far as we know there were no experiments that disproved this.

  5. Iterative metal artefact reduction in CT: can dedicated algorithms improve image quality after spinal instrumentation?

    PubMed

    Aissa, J; Thomas, C; Sawicki, L M; Caspers, J; Kröpil, P; Antoch, G; Boos, J

    2017-05-01

    To investigate the value of dedicated computed tomography (CT) iterative metal artefact reduction (iMAR) algorithms in patients after spinal instrumentation. Post-surgical spinal CT images of 24 patients performed between March 2015 and July 2016 were retrospectively included. Images were reconstructed with standard weighted filtered back projection (WFBP) and with two dedicated iMAR algorithms (iMAR-Algo1, adjusted to spinal instrumentations and iMAR-Algo2, adjusted to large metallic hip implants) using a medium smooth kernel (B30f) and a sharp kernel (B70f). Frequencies of density changes were quantified to assess objective image quality. Image quality was rated subjectively by evaluating the visibility of critical anatomical structures including the central canal, the spinal cord, neural foramina, and vertebral bone. Both iMAR algorithms significantly reduced artefacts from metal compared with WFBP (p<0.0001). Results of subjective image analysis showed that both iMAR algorithms led to an improvement in visualisation of soft-tissue structures (median iMAR-Algo1=3; interquartile range [IQR]:1.5-3; iMAR-Algo2=4; IQR: 3.5-4) and bone structures (iMAR-Algo1=3; IQR:3-4; iMAR-Algo2=4; IQR:4-5) compared to WFBP (soft tissue: median 2; IQR: 0.5-2 and bone structures: median 2; IQR: 1-3; p<0.0001). Compared with iMAR-Algo1, objective artefact reduction and subjective visualisation of soft-tissue and bone structures were improved with iMAR-Algo2 (p<0.0001). Both iMAR algorithms reduced artefacts compared with WFBP, however, the iMAR algorithm with dedicated settings for large metallic implants was superior to the algorithm specifically adjusted to spinal implants. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  6. HST imaging of quasi-stellar objects with WFPC2

    NASA Technical Reports Server (NTRS)

    Hutchings, J. B.; Holtzman, Jon; Sparks, W. B.; Morris, S. C.; Hanisch, R. J.; Mo, J.

    1994-01-01

    Early images of the low-redshift quasars (QSOs) 1229+204 and 2141+175, which are radio-quiet and radio-loud respectively, were taken with the optically corrected WFPC2 camera of the Hubble Space Telescope. We discuss image restoration applied to the data. The objects were chosen on the basis of structure seen at 0.5 sec resolution with the Canada-France-Hawaii Telescope (CFHT) high-resolution camera (HRCAM). 1229+204 was known to be a barred spiral with an asymmetrical extra blue feature; this is now resolved into a ring of knots which are probably young stellar populations in the tidal debris of a small gas-rich companion. There are also shell-like structures along the bar. 2141+175 has a faint, smooth, curved tidal arm without knots which extends on both sides of a compact elliptical-shaped central galaxy. There is also a short jetlike feature emerging from the nucleus. We discuss the properties and implications of these morphological details.

  7. Image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding as an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain appears able to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns; spatial logic and topology are naturally present in such structures. Mid-level vision processes such as perceptual grouping and figure-ground separation are special kinds of network transformations. They convert the primary image structure into a set of more abstract structures representing objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy with higher-level model-based reasoning in a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into a model-based knowledge representation. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows the creation of intelligent computer vision systems for design and manufacturing.

  8. ACS Imaging of beta Pic: Searching for the origin of rings and asymmetry in planetesimal disks

    NASA Astrophysics Data System (ADS)

    Kalas, Paul

    2003-07-01

    The emerging picture for planetesimal disks around main sequence stars is that their radial and azimuthal symmetries are significantly deformed by the dynamical effects of either planets interior to the disk, or stellar objects exterior to the disk. The cause of these structures, such as the 50 AU cutoff of our Kuiper Belt, remains mysterious. Structure in the beta Pic planetesimal disk could be due to dynamics controlled by an extrasolar planet, or by the tidal influence of a more massive object exterior to the disk. The hypothesis of an extrasolar planet causing the vertical deformation in the disk predicts a blue color to the disk perpendicular to the disk midplane. The hypothesis that a stellar perturber deforms the disk predicts a globally uniform color and the existence of ring-like structure beyond 800 AU radius. We propose to obtain deep, multi-color images of the beta Pic disk ansae in the region 15"-220" {200-4000 AU} radius with the ACS WFC. The unparalleled stability of the HST PSF means that these data are uniquely capable of delivering the color sensitivity that can distinguish between the two theories of beta Pic's disk structure. Ascertaining the cause of such structure provides a meaningful context for understanding the dynamical history of our early solar system, as well as other planetesimal systems imaged around main sequence stars.

  9. Terrestrial scanning or digital images in inventory of monumental objects? - case study

    NASA Astrophysics Data System (ADS)

    Markiewicz, J. S.; Zawieska, D.

    2014-06-01

    Cultural heritage is the evidence of the past; monumental objects create the important part of the cultural heritage. Selection of a method to be applied depends on many factors, which include: the objectives of inventory, the object's volume, sumptuousness of architectural design, accessibility to the object, required terms and accuracy of works. The paper presents research and experimental works, which have been performed in the course of development of architectural documentation of elements of the external facades and interiors of the Wilanów Palace Museum in Warszawa. Point clouds, acquired from terrestrial laser scanning (Z+F 5003h) and digital images taken with Nikon D3X and Hasselblad H4D cameras were used. Advantages and disadvantages of utilisation of these technologies of measurements have been analysed with consideration of the influence of the structure and reflectance of investigated monumental surfaces on the quality of generation of photogrammetric products. The geometric quality of surfaces obtained from terrestrial laser scanning data and from point clouds resulting from digital images, have been compared.

  10. Serial grouping of 2D-image regions with object-based attention in humans

    PubMed Central

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-01-01

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Rongyu; Zhao, Changyin; Zhang, Xiaoxiang, E-mail: cyzhao@pmo.ac.cn

    The data reduction method for optical space debris observations has many similarities with the one adopted for surveying near-Earth objects; however, due to several specific issues, the image degradation is particularly critical, which makes it difficult to obtain precise astrometry. An automatic image reconstruction method was developed to improve the astrometry precision for space debris, based on mathematical morphology operators. Variable structuring elements along multiple directions are adopted for image transformation, and then all the resultant images are stacked to obtain a final result. To investigate its efficiency, trial observations are made with Global Positioning System satellites and the astrometry accuracy improvement is obtained by comparison with the reference positions. The results of our experiments indicate that the influence of degradation in astrometric CCD images is reduced, and the position accuracy of both objects and stars is distinctly improved. Our technique will contribute significantly to optical data reduction and high-precision astrometry for space debris.
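
    A hypothetical illustration of the reconstruction step described above (not the authors' code): grey-scale morphological opening with line-shaped structuring elements oriented along several directions, with the per-direction results stacked into a single restored frame. The element length and the stacking rule (pixel-wise maximum) are assumptions.

      import numpy as np
      from scipy import ndimage

      def line_element(length, angle_deg):
          """Binary line-shaped structuring element at a given orientation."""
          se = np.zeros((length, length), dtype=bool)
          c = length // 2
          t = np.deg2rad(angle_deg)
          for r in np.linspace(-c, c, 4 * length):
              y, x = int(round(c + r * np.sin(t))), int(round(c + r * np.cos(t)))
              if 0 <= y < length and 0 <= x < length:
                  se[y, x] = True
          return se

      def directional_reconstruction(frame, length=7, angles=(0, 45, 90, 135)):
          """Open the frame along each direction, then stack (pixel-wise maximum)."""
          opened = [ndimage.grey_opening(frame, footprint=line_element(length, a))
                    for a in angles]
          return np.max(opened, axis=0)

      frame = np.random.poisson(50, (128, 128)).astype(float)   # mock CCD frame
      restored = directional_reconstruction(frame)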

  12. Optical sectioning in wide-field microscopy obtained by dynamic structured light illumination and detection based on a smart pixel detector array.

    PubMed

    Mitić, Jelena; Anhut, Tiemo; Meier, Matthias; Ducros, Mathieu; Serov, Alexander; Lasser, Theo

    2003-05-01

    Optical sectioning in wide-field microscopy is achieved by illumination of the object with a continuously moving single-spatial-frequency pattern and detecting the image with a smart pixel detector array. This detector performs an on-chip electronic signal processing that extracts the optically sectioned image. The optically sectioned image is directly observed in real time without any additional postprocessing.
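
    The on-chip demodulation is specific to the smart pixel detector array, but the underlying sectioning principle can be sketched offline. A minimal example, assuming three wide-field frames acquired with the single-spatial-frequency pattern shifted by one third of a period between frames: the optically sectioned image is, up to a constant scale, the amplitude of the pattern modulation.

      import numpy as np

      def sectioned_image(i1, i2, i3):
          """Square-law demodulation of three phase-shifted structured-illumination frames."""
          return np.sqrt((i1 - i2) ** 2 + (i1 - i3) ** 2 + (i2 - i3) ** 2)

      # usage with mock frames
      i1, i2, i3 = (np.random.rand(256, 256) for _ in range(3))
      section = sectioned_image(i1, i2, i3)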

  13. Imaging Modalities Relevant to Intracranial Pressure Assessment in Astronauts

    NASA Technical Reports Server (NTRS)

    Sargsyan, Ashot E.; Kramer, Larry A.; Hamilton, Douglas R.; Fogarty, Jennifer; Polk, J. D.

    2011-01-01

    Learning Objectives of this slide presentation are: 1: To review the morphological changes in orbit structures caused by elevated Intracranial Pressure (ICP), and their imaging representation. 2: To learn about the similarities and differences between MRI and sonographic imaging of the eye and orbit. 3: To learn about the role of MRI and sonography in the noninvasive assessment of intracranial pressure in aerospace medicine, and the added benefits from their combined interpretation.

  14. Study of imaging fiber bundle coupling technique in IR system

    NASA Astrophysics Data System (ADS)

    Chen, Guoqing; Yang, Jianfeng; Yan, Xingtao; Song, Yansong

    2017-02-01

    Due to its advantageous imaging characteristics and bending flexibility, an imaging fiber bundle can be used for line-to-plane-switching push-broom infrared imaging. Precisely coupling the fiber bundle into the optical system is the key to obtaining an excellent image for transmission. After introducing the basic system composition and structural characteristics of infrared systems coupled with an imaging fiber bundle, this article analyzes the coupling efficiency and the design requirements of the relay lenses from the viewpoint of numerical aperture selection and cold-stop matching of the cooled infrared detector. For an actual need, one relay coupling system has been designed with a magnification of -0.6, an object field height of 4 mm, and an object-space numerical aperture of 0.15, which has excellent image quality and sufficient coupling efficiency. Finally, a push-broom imaging experiment was carried out. The results show that the design meets the requirements of light-energy efficiency and image quality. This design provides a useful reference for the design of infrared fiber optical systems.

  15. Objective quality assessment of tone-mapped images.

    PubMed

    Yeganeh, Hojatollah; Wang, Zhou

    2013-02-01

    Tone-mapping operators (TMOs) that convert high dynamic range (HDR) to low dynamic range (LDR) images provide practically useful tools for the visualization of HDR images on standard LDR displays. Different TMOs create different tone-mapped images, and a natural question is which one has the best quality. Without an appropriate quality measure, different TMOs cannot be compared, and further improvement is directionless. Subjective rating may be a reliable evaluation method, but it is expensive and time-consuming, and more importantly, is difficult to embed in optimization frameworks. Here we propose an objective quality assessment algorithm for tone-mapped images by combining: 1) a multiscale signal fidelity measure on the basis of a modified structural similarity index and 2) a naturalness measure on the basis of intensity statistics of natural images. Validations using independent subject-rated image databases show good correlations between subjective ranking scores and the proposed tone-mapped image quality index (TMQI). Furthermore, we demonstrate the extended applications of TMQI using two examples: parameter tuning for TMOs and adaptive fusion of multiple tone-mapped images.
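
    An illustrative toy version of such a two-term index (not the published TMQI implementation): a structural-fidelity term computed with an off-the-shelf SSIM against the rescaled HDR luminance, combined with a crude naturalness term that favours a mid-grey mean and moderate contrast. The weights and the naturalness priors are assumptions.

      import numpy as np
      from skimage.metrics import structural_similarity

      def toy_tmqi(hdr_luma, ldr_luma, a=0.8):
          """Toy quality score for a tone-mapped (8-bit) image against its HDR source."""
          hdr_n = (hdr_luma - hdr_luma.min()) / (np.ptp(hdr_luma) + 1e-12)
          ldr_n = ldr_luma.astype(float) / 255.0
          fidelity = structural_similarity(hdr_n, ldr_n, data_range=1.0)
          mean, std = ldr_n.mean(), ldr_n.std()
          naturalness = np.exp(-(mean - 0.5) ** 2 / 0.08) * np.exp(-(std - 0.2) ** 2 / 0.02)
          return a * fidelity + (1.0 - a) * naturalness

      hdr = np.random.rand(256, 256) * 1e4             # mock HDR luminance
      ldr = (np.clip(hdr / hdr.max(), 0, 1) * 255).astype(np.uint8)
      print(toy_tmqi(hdr, ldr))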

  16. Boundary segmentation for fluorescence microscopy using steerable filters

    NASA Astrophysics Data System (ADS)

    Ho, David Joon; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2017-02-01

    Fluorescence microscopy is used to image multiple subcellular structures in living cells which are not readily observed using conventional optical microscopy. Moreover, two-photon microscopy is widely used to image structures deeper in tissue. Recent advancements in fluorescence microscopy have enabled the generation of large data sets of images at different depths, times, and spectral channels. Thus, automatic object segmentation is necessary since manual segmentation would be inefficient and biased. However, automatic segmentation is still a challenging problem, as regions of interest may lack well-defined boundaries and may have non-uniform pixel intensities. This paper describes a method for segmenting tubular structures in fluorescence microscopy images of rat kidney and liver samples using adaptive histogram equalization, foreground/background segmentation, steerable filters to capture directional tendencies, and connected-component analysis. The results from several data sets demonstrate that our method can segment tubular boundaries successfully. Moreover, our method performs better than other popular image segmentation methods when evaluated against ground truth data obtained via manual segmentation.
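
    A rough sketch of this kind of pipeline (assumed, not the authors' code): contrast-limited adaptive histogram equalization, a small steerable-filter bank built from second derivatives of a Gaussian to pick up directional tendencies, Otsu thresholding for the foreground/background split, and connected-component labelling. The filter scale and the use of the maximum absolute response are illustrative choices.

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from skimage import exposure, filters, measure

      def steerable_energy(img, sigma=2.0, n_angles=8):
          """Maximum second-derivative-of-Gaussian response over a set of orientations."""
          fxx = gaussian_filter(img, sigma, order=(0, 2))
          fyy = gaussian_filter(img, sigma, order=(2, 0))
          fxy = gaussian_filter(img, sigma, order=(1, 1))
          responses = []
          for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
              c, s = np.cos(theta), np.sin(theta)
              responses.append(c * c * fxx + 2 * c * s * fxy + s * s * fyy)
          return np.max(np.abs(responses), axis=0)

      def segment_tubules(img):
          eq = exposure.equalize_adapthist(img)            # adaptive histogram equalization
          energy = steerable_energy(eq)
          mask = energy > filters.threshold_otsu(energy)   # foreground/background split
          return measure.label(mask)                       # connected-component analysis

      labels = segment_tubules(np.random.rand(256, 256))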

  17. Quantum imaging with incoherently scattered light from a free-electron laser

    NASA Astrophysics Data System (ADS)

    Schneider, Raimund; Mehringer, Thomas; Mercurio, Giuseppe; Wenthaus, Lukas; Classen, Anton; Brenner, Günter; Gorobtsov, Oleg; Benz, Adrian; Bhatti, Daniel; Bocklage, Lars; Fischer, Birgit; Lazarev, Sergey; Obukhov, Yuri; Schlage, Kai; Skopintsev, Petr; Wagner, Jochen; Waldmann, Felix; Willing, Svenja; Zaluzhnyy, Ivan; Wurth, Wilfried; Vartanyants, Ivan A.; Röhlsberger, Ralf; von Zanthier, Joachim

    2018-02-01

    The advent of accelerator-driven free-electron lasers (FEL) has opened new avenues for high-resolution structure determination via diffraction methods that go far beyond conventional X-ray crystallography methods. These techniques rely on coherent scattering processes that require the maintenance of first-order coherence of the radiation field throughout the imaging procedure. Here we show that higher-order degrees of coherence, displayed in the intensity correlations of incoherently scattered X-rays from an FEL, can be used to image two-dimensional objects with a spatial resolution close to or even below the Abbe limit. This constitutes a new approach towards structure determination based on incoherent processes, including fluorescence emission or wavefront distortions, generally considered detrimental for imaging applications. Our method is an extension of the landmark intensity correlation measurements of Hanbury Brown and Twiss to higher than second order, paving the way towards determination of structure and dynamics of matter in regimes where coherent imaging methods have intrinsic limitations.
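
    As a toy illustration of the intensity-correlation idea (not the experiment's analysis code), the normalized second-order correlation g2 between detector pixels separated by a given distance can be estimated by averaging over many frames; the mock data and the 1-D detector geometry below are assumptions.

      import numpy as np

      def g2_vs_separation(frames, max_sep):
          """frames: (n_frames, n_pixels) intensities from a 1-D detector array."""
          mean = frames.mean(axis=0)
          g2 = []
          for d in range(1, max_sep + 1):
              num = (frames[:, :-d] * frames[:, d:]).mean(axis=0)
              g2.append((num / (mean[:-d] * mean[d:])).mean())
          return np.array(g2)

      frames = np.random.exponential(1.0, size=(5000, 64))   # mock speckle intensities
      print(g2_vs_separation(frames, 10))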

  18. Detecting Multi-scale Structures in Chandra Images of Centaurus A

    NASA Astrophysics Data System (ADS)

    Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.

    1999-12-01

    Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al 1999) and a multi-directional gradient detection algorithm (Karovska et al 1994). The Ebeling et al adaptive smoothing algorithm, which is incorporated in the CXC analysis s/w package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A simultaneously show the high-angular-resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arcminutes. The large-scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data, Dobereiner et al 1996), and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the X-ray ridges are very clearly visible. The adaptively smoothed images and the edge-enhanced images also suggest several filamentary features, including a large filament-like structure extending as far as about 5 arcminutes to the northwest.

  19. Development of a methodology for structured reporting of information in echocardiography.

    PubMed

    Homorodean, Călin; Olinic, Maria; Olinic, Dan

    2012-03-01

    In order to conduct research relying on ultrasound images, it is necessary to access a large number of relevant cases represented by images and their interpretation. The DICOM standard defines the structured reporting information object. Templates are tree-like structures which offer structural guidance in report construction. Our aim was to lay the foundations of a structured reporting methodology in echocardiography through the generation of a consistent set of DICOM templates. We developed an information system able to manage echocardiographic images and structured reports. To provide a complete description of the cardiac structures, we used 1900 coded concepts organized into 344 contexts by their semantic meaning in a variety of cardiac diseases. We developed 30 templates, with up to 10 nesting levels. The list of templates has a pyramid-like architecture. Two templates are used for reporting every measurement and description: "EchoMeasurement" and "EchoDescription". Intermediate-level templates specify how to report the features of echo-Doppler findings: "Spectral Curve", "Color Jet", "Intracardiac mass". Templates for every cardiovascular structure include the previous ones. "Echocardiography Procedure Report" includes all other templates. The templates were tested in reporting echo features of 100 patients by analyzing 500 DICOM images. The benefits of these templates were demonstrated during the testing process through the quality of the echocardiography report, the ability to justify and link every diagnostic feature to a defining image, and the opportunities opened for education and research. In the future, our template-based reporting methodology might be extended to other imaging modalities.
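
    As a loose, hypothetical illustration of what one leaf of such a template can look like when encoded as DICOM Structured Reporting content with pydicom, the snippet below builds a single numeric (NUM) content item; the concept codes and coding scheme are placeholders, not the authors' actual template definitions.

      from pydicom.dataset import Dataset
      from pydicom.sequence import Sequence

      def code_item(value, scheme, meaning):
          """A single code sequence item (concept name, units, etc.)."""
          item = Dataset()
          item.CodeValue = value
          item.CodingSchemeDesignator = scheme
          item.CodeMeaning = meaning
          return item

      def numeric_measurement(name_code, value, unit_code):
          """One NUM content item, e.g. a left-ventricular dimension in millimetres."""
          item = Dataset()
          item.ValueType = "NUM"
          item.ConceptNameCodeSequence = Sequence([name_code])
          mv = Dataset()
          mv.NumericValue = str(value)
          mv.MeasurementUnitsCodeSequence = Sequence([unit_code])
          item.MeasuredValueSequence = Sequence([mv])
          return item

      lvedd = numeric_measurement(
          code_item("00001", "99EXAMPLE", "LV end-diastolic dimension"),  # placeholder code
          48.0,
          code_item("mm", "UCUM", "millimeter"),
      )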

  20. A comparative study of multi-sensor data fusion methods for highly accurate assessment of manufactured parts

    NASA Astrophysics Data System (ADS)

    Hannachi, Ammar; Kohler, Sophie; Lallement, Alex; Hirsch, Ernest

    2015-04-01

    3D modeling of scene contents is of increasing importance for many computer-vision-based applications. In particular, industrial applications of computer vision require efficient tools for the computation of this 3D information. Stereo vision is a powerful technique for obtaining the 3D outline of imaged objects from the corresponding 2D images; as a consequence, however, it provides only a poor and partial description of the scene contents. On the other hand, with structured-light-based reconstruction techniques, 3D surfaces of imaged objects can often be computed with high accuracy, but the resulting active range data fail to characterize the object edges. Thus, in order to benefit from the strengths of both acquisition techniques, we introduce in this paper promising approaches that compute a complete 3D reconstruction through the cooperation of two complementary acquisition and processing techniques, in our case stereoscopic and structured-light-based methods, providing two 3D data sets that describe respectively the outlines and the surfaces of the imaged objects. We present, accordingly, the principles of three fusion techniques and compare them using evaluation criteria related to the nature of the workpiece and to the type of application tackled. The proposed fusion methods rely on geometric characteristics of the workpiece, which favour the quality of the registration. Further, the results obtained demonstrate that the developed approaches are well suited for 3D modeling of manufactured parts including free-form surfaces and, consequently, for quality control applications using these 3D reconstructions.

  1. X-ray coherent scattering tomography of textured material (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhu, Zheyuan; Pang, Shuo

    2017-05-01

    Small-angle X-ray scattering (SAXS) measures the signature of angle-dependent coherently scattered X-rays, which contains richer information on material composition and structure than conventional absorption-based computed tomography. The SAXS image reconstruction method for a 2- or 3-dimensional object based on computed tomography, termed coherent scattering computed tomography (CSCT), enables the detection of spatially resolved, material-specific isotropic scattering signatures inside an extended object, and provides improved contrast for medical diagnosis, security screening, and material characterization applications. However, traditional CSCT methods assume that materials are fine powders or amorphous and possess isotropic scattering profiles, which is not generally true. Anisotropic scatter cannot be captured using the conventional CSCT method and results in reconstruction errors. To obtain correct information from the sample, we designed a new imaging strategy that incorporates an extra degree of detector motion into X-ray scattering tomography for the detection of anisotropically scattered photons from a series of two-dimensional intensity measurements. Using a table-top, narrow-band X-ray source and a panel detector, we demonstrate the anisotropic scattering profile captured from an extended object and the reconstruction of a three-dimensional object. For materials possessing a well-organized crystalline structure with certain symmetry, the scatter texture is more predictable. We will also discuss compressive schemes and data-acquisition implementations to improve the collection efficiency and accelerate the imaging process.

  2. Optical design of free-form surface two-mirror telescopic objective with ultrawide field of view

    NASA Astrophysics Data System (ADS)

    Liu, Qinghan; Zhou, Zhengping; Jin, Yangming; Shen, Weimin

    2016-10-01

    A compact off-axis two-mirror fore-objective with ultrawide ground coverage for spaceborne pushbroom imaging spectrometers is studied and designed. Based on Gaussian optics and Young's formulas, an approach to determine its initial structural parameters is presented. In order to meet the required performance, freeform surfaces are used to increase the degrees of freedom of the optimization, and the impact of various X-Y polynomials on the pupil aberration is analyzed so that an excessive smile effect can be eliminated. As an example, an off-axis two-mirror fore telescopic objective with a field of view of 108° in the across-pushbroom direction, an F-number of 10, a focal length of 34 mm, and a working wavelength range from 0.27 to 2.4 μm is optimally designed, in which both the primary and the secondary mirrors have freeform surfaces. The design offers a simple and compact structure, image-space telecentricity, near diffraction-limited imaging quality, and a small smile effect.

  3. EISCAT Aperture Synthesis Imaging (EASI _3D) for the EISCAT_3D Project

    NASA Astrophysics Data System (ADS)

    La Hoz, Cesar; Belyey, Vasyl

    2012-07-01

    Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3-dimensions that includes sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Natural Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as the technique employed in radioastronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.

  4. The Research on Dryland Crop Classification Based on the Fusion of SENTINEL-1A SAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Liu, F.; Chen, T.; He, J.; Wen, Q.; Yu, F.; Gu, X.; Wang, Z.

    2018-04-01

    In recent years, the rapid upgrading and improvement of SAR sensors have provided beneficial complements to traditional optical remote sensing in terms of theory, technology and data. In this paper, Sentinel-1A SAR data and GF-1 optical data were selected for image fusion, and emphasis was placed on dryland crop classification under a complex crop planting structure, with corn and cotton as the research objects. Considering the differences among the various data fusion methods, the principal component analysis (PCA), Gram-Schmidt (GS), Brovey and wavelet transform (WT) methods were compared, and the GS and Brovey methods proved to be more applicable in the study area. The classification was then conducted using an object-oriented processing chain. For the GS and Brovey fusion images and the GF-1 optical image, the nearest-neighbour algorithm was adopted for supervised classification with the same training samples. Based on the sample plots in the study area, an accuracy assessment was subsequently conducted. The overall accuracy and kappa coefficient of the fusion images were both higher than those of the GF-1 optical image, and the GS method performed better than the Brovey method. In particular, the overall accuracy of the GS fusion image was 79.8 % and the kappa coefficient was 0.644. Thus, the results showed that GS and Brovey fusion images were superior to optical images for dryland crop classification. This study suggests that the fusion of SAR and optical images is reliable for dryland crop classification under a complex crop planting structure.
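
    A minimal numpy sketch of Brovey-style fusion, one of the schemes compared above (assumed, not the authors' processing chain): each resampled multispectral band is scaled by the ratio of the high-resolution band to the total multispectral intensity. The band count and scaling are illustrative.

      import numpy as np

      def brovey_fusion(ms, pan, eps=1e-6):
          """ms: (bands, H, W) multispectral resampled to the pan grid; pan: (H, W)."""
          intensity = ms.sum(axis=0) + eps
          return ms * (pan / intensity)        # each band scaled by pan / total intensity

      ms = np.random.rand(4, 512, 512)         # mock resampled multispectral bands
      pan = np.random.rand(512, 512)           # mock high-resolution band
      fused = brovey_fusion(ms, pan)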

  5. Long-range and depth-selective imaging of macroscopic targets using low-coherence and wide-field interferometry (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Woo, Sungsoo; Kang, Sungsam; Yoon, Changhyeong; Choi, Wonshik

    2016-03-01

    With the advancement of 3D display technology, 3D imaging of macroscopic objects has drawn much attention, as it provides the contents to display. The most widely used imaging methods include depth cameras, which measure time of flight for depth discrimination, and various structured-illumination techniques. However, these existing methods have poor depth resolution, which makes imaging complicated structures a difficult task. In order to resolve this issue, we propose an imaging system based upon low-coherence interferometry and off-axis digital holographic imaging. By using a light source with a coherence length of 200 μm, we achieved a depth resolution of 100 μm. In order to map macroscopic objects with this high axial resolution, we installed a pair of prisms in the reference beam path for long-range scanning of the optical path length. Specifically, one prism was fixed in position, and the other prism was mounted on a translation stage and translated parallel to the first prism. Due to the multiple internal reflections between the two prisms, the overall path length was elongated by a factor of 50. In this way, we could cover a depth range of more than 1 meter. In addition, we employed multiple speckle illuminations and incoherent averaging of the acquired holographic images to reduce specular reflections from the target surface. Using this newly developed system, we imaged targets with multiple layers and demonstrated imaging of targets hidden behind scattering layers. The method was also applied to imaging targets located around a corner.

  6. Effects of pure and hybrid iterative reconstruction algorithms on high-resolution computed tomography in the evaluation of interstitial lung disease.

    PubMed

    Katsura, Masaki; Sato, Jiro; Akahane, Masaaki; Mise, Yoko; Sumida, Kaoru; Abe, Osamu

    2017-08-01

    To compare image quality characteristics of high-resolution computed tomography (HRCT) in the evaluation of interstitial lung disease using three different reconstruction methods: model-based iterative reconstruction (MBIR), adaptive statistical iterative reconstruction (ASIR), and filtered back projection (FBP). Eighty-nine consecutive patients with interstitial lung disease underwent standard-of-care chest CT with 64-row multi-detector CT. HRCT images were reconstructed in 0.625-mm contiguous axial slices using FBP, ASIR, and MBIR. Two radiologists independently assessed the images in a blinded manner for subjective image noise, streak artifacts, and visualization of normal and pathologic structures. Objective image noise was measured in the lung parenchyma. Spatial resolution was assessed by measuring the modulation transfer function (MTF). MBIR offered significantly lower objective image noise (22.24±4.53, P<0.01 among all pairs, Student's t-test) compared with ASIR (39.76±7.41) and FBP (51.91±9.71). MTF (spatial resolution) was increased using MBIR compared with ASIR and FBP. MBIR showed improvements in visualization of normal and pathologic structures over ASIR and FBP, while ASIR was rated quite similarly to FBP. MBIR significantly improved subjective image noise (P<0.01 among all pairs, the sign test), and streak artifacts (P<0.01 each for MBIR vs. the other 2 image data sets). MBIR provides high-quality HRCT images for interstitial lung disease by reducing image noise and streak artifacts and improving spatial resolution compared with ASIR and FBP. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Early Changes in Facial Profile Following Structured Filler Rhinoplasty: An Anthropometric Analysis Using a 3-Dimensional Imaging System.

    PubMed

    Rho, Nark Kyoung; Park, Je Young; Youn, Choon Shik; Lee, Soo-Keun; Kim, Hei Sung

    2017-02-01

    Quantitative measurements are important for objective evaluation of postprocedural outcomes. Three-dimensional (3D) imaging is known as an objective, accurate, and reliable system for quantifying the soft tissue dimensions of the face. To compare the preprocedural and acute postprocedural nasofrontal, nasofacial, nasolabial, and nasomental angles, early changes in the height and length of the nose, and nasal volume using a 3D surface imaging with a light-emitting diode. The 3D imaging analysis of 40 Korean women who underwent structured nonsurgical rhinoplasty was conducted. The 3D assessment was performed before, immediately after, 1 day, and 2 weeks after filler rhinoplasty with a Morpheus 3D scanner (Morpheus Co., Seoul, Korea). There were significant early changes in facial profile following nonsurgical rhinoplasty with a hyaluronic acid filler. An average increase of 6.03° in the nasofrontal angle, an increase of 3.79° in the nasolabial angle, increase of 0.88° in the nasomental angle, and a reduction of 0.83° in the nasofacial angle was observed at 2 weeks of follow-up. Increment in nasal volume and nose height was also found after 2 weeks. Side effects, such as hematoma, nodules, and skin necrosis, were not observed. The 3D surface imaging quantitatively demonstrated the early changes in facial profile after structured filler rhinoplasty. The study results describe significant acute spatial changes in nose shape following treatment.

  8. Solving the inverse scattering problem in reflection-mode dynamic speckle-field phase microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhou, Renjie; So, Peter T. C.; Yaqoob, Zahid; Jin, Di; Hosseini, Poorya; Kuang, Cuifang; Singh, Vijay Raj; Kim, Yang-Hyo; Dasari, Ramachandra R.

    2017-02-01

    Most quantitative phase microscopy systems are unable to provide depth-resolved information for measuring complex biological structures. Optical diffraction tomography provides a non-trivial solution by reconstructing the object in 3D from multiple measurements realized in different ways. Previously, our lab developed a reflection-mode dynamic speckle-field phase microscopy (DSPM) technique, which can be used to perform depth-resolved measurements in a single shot. This system is therefore suitable for measuring dynamics in a layer of interest in the sample. DSPM can also be used for tomographic imaging, which promises to solve the long-standing "missing cone" problem in 3D imaging. However, the 3D imaging theory for this type of system has not been developed in the literature. Recently, we have developed an inverse scattering model to rigorously describe the imaging physics in DSPM. Our model is based on diffraction tomography theory and speckle statistics. Using our model, we first precisely calculated the defocus response and the depth resolution of our system. We then calculated the 3D coherence transfer function to link the 3D object structural information with the axially scanned imaging data. From this transfer function, we found that in reflection mode an excellent sectioning effect exists in the low lateral spatial frequency region, thus allowing us to solve the "missing cone" problem. Currently, we are working on using this coherence transfer function to reconstruct layered structures and complex cells.

  9. A Butterfly in the Making: Revealing the Near-Infrared Structure of Hubble 12

    NASA Technical Reports Server (NTRS)

    Hora, Joseph L.; Latter, William B.

    1996-01-01

    We present deep narrowband near-IR images and moderate resolution spectra of the young planetary nebula Hubble 12. These data are the first to show clearly the complex structure for this important planetary nebula. Images were obtained at λ = 2.12, 2.16, and 2.26 μm. The λ = 2.12 μm image reveals the bipolar nature of the nebula, as well as complex structure near the central star in the equatorial region. The images show an elliptical region of emission, which may indicate a ring or a cylindrical source structure. This structure is possibly related to the mechanism that is producing the bipolar flow. The spectra show the nature of several distinct components. The central object is dominated by recombination lines of H I and He I. The core is not a significant source of molecular hydrogen emission. The east position in the equatorial region is rich in lines of ultraviolet-excited fluorescent H2. A spectrum of part of the central region shows strong [Fe II] emission, which might indicate the presence of shocks.

  10. Sparse Reconstruction for Temperature Distribution Using DTS Fiber Optic Sensors with Applications in Electrical Generator Stator Monitoring.

    PubMed

    Bazzo, João Paulo; Pipa, Daniel Rodrigues; da Silva, Erlon Vagner; Martelli, Cicero; Cardozo da Silva, Jean Carlos

    2016-09-07

    This paper presents an image reconstruction method to monitor the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings of fiber optic distributed sensors (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, which was constructed by finite element simulation with a multi-physical model. Due to difficulties for reproducing insulation faults in real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times when compared to the DTS spatial resolution. In addition, satisfactory results were also obtained to detect hotspots with only 5 cm. The application of the proposed algorithm for thermal imaging of generator stators can contribute to the identification of insulation faults in early stages, thereby avoiding catastrophic damage to the structure.
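
    The sparse reconstruction step can be sketched with an off-the-shelf solver (an assumed illustration, not the authors' implementation): the measured DTS temperature profile is approximated as a sparse combination of dictionary atoms, each atom being the simulated response of one candidate hotspot, here via orthogonal matching pursuit.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)
      n_readings, n_atoms = 200, 1000
      D = rng.standard_normal((n_readings, n_atoms))   # simulated hotspot responses (dictionary)
      D /= np.linalg.norm(D, axis=0)

      x_true = np.zeros(n_atoms)
      x_true[[37, 512]] = [3.0, 1.5]                   # two active hotspots
      y = D @ x_true + 0.01 * rng.standard_normal(n_readings)   # noisy DTS temperature profile

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(D, y)
      estimated_hotspots = np.flatnonzero(omp.coef_)   # atom indices map back to hotspot locations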

  11. A 3D camera for improved facial recognition

    NASA Astrophysics Data System (ADS)

    Lewin, Andrew; Orchard, David A.; Scott, Andrew M.; Walton, Nicholas A.; Austin, Jim

    2004-12-01

    We describe a camera capable of recording 3D images of objects. It does this by projecting thousands of spots onto an object and then measuring the range to each spot by determining the parallax from a single frame. A second frame can be captured to record a conventional image, which can then be projected onto the surface mesh to form a rendered skin. The camera is able to locate the images of the spots to a precision of better than one tenth of a pixel, and from this it can determine range to an accuracy of less than 1 mm at 1 meter. The data can be recorded as a set of two images, and is reconstructed by forming a 'wire mesh' of range points and morphing the 2D image over this structure. The camera can be used to record the images of faces and reconstruct the shape of the face, which allows viewing of the face from various angles. This allows images to be more critically inspected for the purpose of identifying individuals. Multiple images can be stitched together to create full panoramic images of head-sized objects that can be viewed from any direction. The system is being tested with a graph-matching system capable of fast and accurate shape comparisons for facial recognition. It can also be used with "models" of heads and faces to provide a means of obtaining biometric data.

  12. Electron holography—basics and applications

    NASA Astrophysics Data System (ADS)

    Lichte, Hannes; Lehmann, Michael

    2008-01-01

    Despite the huge progress achieved recently by means of the corrector for aberrations, allowing now a true atomic resolution of 0.1 nm, hence making it an unrivalled tool for nanoscience, transmission electron microscopy (TEM) suffers from a severe drawback: in a conventional electron micrograph only a poor phase contrast can be achieved, i.e. phase structures are virtually invisible. Therefore, conventional TEM is nearly blind for electric and magnetic fields, which are pure phase objects. Since such fields provoked by the atomic structure, e.g. of semiconductors and ferroelectrics, largely determine the solid state properties, hence the importance for high technology applications, substantial object information is missing. Electron holography in TEM offers the solution: by superposition with a coherent reference wave, a hologram is recorded, from which the image wave can be completely reconstructed by amplitude and phase. Now the object is displayed quantitatively in two separate images: one representing the amplitude, the other the phase. From the phase image, electric and magnetic fields can be determined quantitatively in the range from micrometre down to atomic dimensions by all wave optical methods that one can think of, both in real space and in Fourier space. Electron holography is pure wave optics. Therefore, we discuss the basics of coherence and interference, the implementation into a TEM, the path of rays for recording holograms as well as the limits in lateral and signal resolution. We outline the methods of reconstructing the wave by numerical image processing and procedures for extracting the object properties of interest. Furthermore, we present a broad spectrum of applications both at mesoscopic and atomic dimensions. This paper gives an overview of the state of the art pointing at the needs for further development. It is also meant as encouragement for those who refrain from holography, thinking that it can only be performed by specialists in highly specialized laboratories. In fact, a modern TEM built for atomic resolution and equipped with a field emitter or a Schottky emitter, well aligned by a skilled operator, can deliver good holograms. Running commercially available image processing software and mathematics programs on a laptop-computer is sufficient for reconstruction of the amplitude and phase images and extracting desirable object information.

  13. Virtual landmarks

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Bai, Peirui; Torigian, Drew A.

    2017-03-01

    Much has been published on finding landmarks on object surfaces in the context of shape modeling. While this is still an open problem, many of the challenges of past approaches can be overcome by removing the restriction that landmarks must be on the object surface. The virtual landmarks we propose may reside inside, on the boundary of, or outside the object and are tethered to the object. Our solution is straightforward, simple, and recursive in nature, proceeding from global features initially to local features in later levels to detect landmarks. Principal component analysis (PCA) is used as an engine to recursively subdivide the object region. The object itself may be represented in binary or fuzzy form or with gray values. The method is illustrated in 3D space (although it generalizes readily to spaces of any dimensionality) on four objects (liver, trachea and bronchi, and outer boundaries of left and right lungs along pleura) derived from 5 patient computed tomography (CT) image data sets of the thorax and abdomen. The virtual landmark identification approach seems to work well on different structures in different subjects and seems to detect landmarks that are homologously located in different samples of the same object. The approach guarantees that virtual landmarks are invariant to translation, scaling, and rotation of the object/image. Landmarking techniques are fundamental for many computer vision and image processing applications, and we are currently exploring the use of virtual landmarks in automatic anatomy recognition and object analytics.
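
    A rough sketch of the recursive idea (assumed, not the authors' code): at each level the object voxels are split along the first principal axis of the region, and the centroid of every region is kept as a virtual landmark. The recursion depth and the split rule are illustrative choices.

      import numpy as np

      def virtual_landmarks(points, depth=3):
          """points: (N, 3) coordinates of object voxels; returns an array of landmarks."""
          landmarks = []

          def recurse(pts, level):
              if len(pts) == 0:
                  return
              centroid = pts.mean(axis=0)
              landmarks.append(centroid)               # one landmark per region
              if level == 0 or len(pts) < 10:
                  return
              centered = pts - centroid
              _, _, vt = np.linalg.svd(centered, full_matrices=False)   # PCA of the region
              proj = centered @ vt[0]                  # projection on first principal axis
              recurse(pts[proj < 0], level - 1)        # split the region into two halves
              recurse(pts[proj >= 0], level - 1)

          recurse(points, depth)
          return np.array(landmarks)

      pts = np.random.randn(5000, 3)                   # stand-in for segmented-object voxels
      marks = virtual_landmarks(pts)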

  14. Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images.

    PubMed

    Pang, Shiyan; Hu, Xiangyun; Cai, Zhongliang; Gong, Jinqi; Zhang, Mi

    2018-03-24

    In this work, a novel building change detection method from bi-temporal dense-matching point clouds and aerial images is proposed to address two major problems, namely, the robust acquisition of the changed objects above ground and the automatic classification of changed objects into buildings or non-buildings. For the acquisition of changed objects above ground, the change detection problem is converted into a binary classification, in which the changed area above ground is regarded as the foreground and the other area as the background. For the gridded points of each period, the graph cuts algorithm is adopted to classify the points into foreground and background, followed by the region-growing algorithm to form candidate changed building objects. A novel structural feature extracted from the aerial images is then used to classify the candidate changed building objects into buildings and non-buildings. The changed building objects are further classified as "newly built", "taller", "demolished", and "lower" by combining the classification and the digital surface models of two periods. Finally, three typical areas from a large dataset are used to validate the proposed method. Numerous experiments demonstrate the effectiveness of the proposed algorithm.

  15. A Tentative Application Of Morphological Filters To Time-Varying Images

    NASA Astrophysics Data System (ADS)

    Billard, D.; Poquillon, B.

    1989-03-01

    In this paper, morphological filters, which are commonly used to process either 2D or multidimensional static images, are generalized to the analysis of time-varying image sequences. The introduction of the time dimension then induces interesting properties when designing such spatio-temporal morphological filters. In particular, the specification of spatio-temporal structuring elements (equivalent to time-varying spatial structuring elements) can be adjusted according to the temporal variations of the image sequences to be processed: this allows specific morphological transforms to be derived for noise filtering or moving-object discrimination on dynamic images viewed by a non-stationary sensor. First, a brief introduction to the basic principles underlying morphological filters is given. Then, a straightforward generalization of these principles to time-varying images is proposed. This leads us to define spatio-temporal opening and closing and to introduce some of their possible applications to the processing of dynamic images. Finally, preliminary results obtained using a natural forward-looking infrared (FLIR) image sequence are presented.
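
    A hypothetical sketch of a spatio-temporal morphological filter of the kind discussed above: a grey-scale opening applied to the image sequence stacked as a (t, y, x) volume, with a structuring element extended along the time axis. The box-shaped element is an illustrative choice.

      import numpy as np
      from scipy import ndimage

      def spatiotemporal_opening(sequence, t_extent=3, s_extent=3):
          """sequence: (n_frames, H, W); grey opening with a small box in (t, y, x)."""
          return ndimage.grey_opening(sequence, size=(t_extent, s_extent, s_extent))

      seq = np.random.rand(16, 128, 128)               # mock infrared image sequence
      filtered = spatiotemporal_opening(seq)           # suppresses bright short-lived blobs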

  16. 3D Filament Network Segmentation with Multiple Active Contours

    NASA Astrophysics Data System (ADS)

    Xu, Ting; Vavylonis, Dimitrios; Huang, Xiaolei

    2014-03-01

    Fluorescence microscopy is frequently used to study two and three dimensional network structures formed by cytoskeletal polymer fibers such as actin filaments and microtubules. While these cytoskeletal structures are often dilute enough to allow imaging of individual filaments or bundles of them, quantitative analysis of these images is challenging. To facilitate quantitative, reproducible and objective analysis of the image data, we developed a semi-automated method to extract actin networks and retrieve their topology in 3D. Our method uses multiple Stretching Open Active Contours (SOACs) that are automatically initialized at image intensity ridges and then evolve along the centerlines of filaments in the network. SOACs can merge, stop at junctions, and reconfigure with others to allow smooth crossing at junctions of filaments. The proposed approach is generally applicable to images of curvilinear networks with low SNR. We demonstrate its potential by extracting the centerlines of synthetic meshwork images, actin networks in 2D TIRF Microscopy images, and 3D actin cable meshworks of live fission yeast cells imaged by spinning disk confocal microscopy.

  17. Feature hashing for fast image retrieval

    NASA Astrophysics Data System (ADS)

    Yan, Lingyu; Fu, Jiarun; Zhang, Hongxin; Yuan, Lu; Xu, Hui

    2018-03-01

    Current research on content-based image retrieval mainly focuses on robust feature extraction. However, due to the exponential growth of online images, it is necessary to consider searching among large-scale image collections, which is very time-consuming and poorly scalable. Hence, close attention must be paid to the efficiency of image retrieval. In this paper, we propose a feature hashing method for image retrieval which not only generates a compact fingerprint for image representation, but also prevents large semantic loss during the hashing process. To generate the fingerprint, an objective function of semantic loss is constructed and minimized, which combines the influence of both the neighborhood structure of the feature data and the mapping error. Since the machine-learning-based hashing effectively preserves the neighborhood structure of the data, it yields visual words with strong discriminability. Furthermore, the generated binary codes lead to a low-complexity image representation, making the method efficient and scalable to large databases. Experimental results show the good performance of our approach.
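    The learned hashing itself is not reproduced here, but the following minimal Python sketch (numpy assumed) shows the general idea of compact binary fingerprints ranked by Hamming distance, using plain random projections as a stand-in for the paper's learned, semantics-preserving mapping. All names and the code length are illustrative.

      import numpy as np

      def hash_features(features, n_bits=64, rng=np.random.default_rng(0)):
          """Map (n, d) real-valued features to n compact binary codes via random projections."""
          projections = rng.normal(size=(features.shape[1], n_bits))
          return ((features @ projections) > 0).astype(np.uint8), projections

      def hamming_rank(query_code, db_codes):
          """Return database indices sorted by Hamming distance to the query code."""
          dists = np.count_nonzero(db_codes != query_code, axis=1)
          return np.argsort(dists)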

  18. Imaging through ground-level turbulence by Fourier telescopy: Simulations and preliminary experiments

    NASA Astrophysics Data System (ADS)

    Randunu Pathirannehelage, Nishantha

    Fourier telescopy imaging is a recently-developed imaging method that relies on active structured-light illumination of the object. Reflected/scattered light is measured by a large "light bucket" detector; processing of the detected signal yields the magnitude and phase of spatial frequency components of the object reflectance or transmittance function. An inverse Fourier transform results in the image. In 2012 a novel method, known as time-average Fourier telescopy (TAFT), was introduced by William T. Rhodes as a means for diffraction-limited imaging through ground-level atmospheric turbulence. This method, which can be applied to long horizontal-path terrestrial imaging, addresses a need that is not solved by the adaptive optics methods being used in astronomical imaging. Field-experiment verification of the TAFT concept requires instrumentation that is not available at Florida Atlantic University. The objective of this doctoral research program is thus to demonstrate, in the absence of full-scale experimentation, the feasibility of time-average Fourier telescopy through (a) the design, construction, and testing of small-scale laboratory instrumentation capable of exploring basic Fourier telescopy data-gathering operations, and (b) the development of MATLAB-based software capable of demonstrating the effect of kilometer-scale passage of laser beams through ground-level turbulence in a numerical simulation of TAFT.

  19. Microscopy using source and detector arrays

    NASA Astrophysics Data System (ADS)

    Sheppard, Colin J. R.; Castello, Marco; Vicidomini, Giuseppe; Duocastella, Martí; Diaspro, Alberto

    2016-03-01

    There are basically two types of microscope, which we call conventional and scanning. The former type is a full-field imaging system. In the latter type, the object is illuminated with a probe beam, and a signal detected. We can generalize the probe to a patterned illumination. Similarly we can generalize the detection to a patterned detection. Combining these we get a range of different modalities: confocal microscopy, structured illumination (with full-field imaging), spinning disk (with multiple illumination points), and so on. The combination allows the spatial frequency bandwidth of the system to be doubled. In general we can record a four dimensional (4D) image of a 2D object (or a 6D image from a 3D object, using an acoustic tuneable lens). The optimum way to directly reconstruct the resulting image is by image scanning microscopy (ISM). But the 4D image is highly redundant, so deconvolution-based approaches are also relevant. ISM can be performed in fluorescence, bright field or interference microscopy. Several different implementations have been described, with associated advantages and disadvantages. In two-photon microscopy, the illumination and detection point spread functions are very different. This is also the case when using pupil filters or when there is a large Stokes shift.

  20. Change Detection in Uav Video Mosaics Combining a Feature Based Approach and Extended Image Differencing

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2016-06-01

    Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes of short time scale using observations in time distances of a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-Nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples for non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points, where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing where only one "undirected" change mask is extracted which combines both label types to the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
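    A toy version of the directed labelling can be written in a few lines of Python with OpenCV (assumed available), comparing a corner-strength map between the two co-registered observations; pixels with a strong corner response only in the current image are labelled "new object" and the reverse case "vanished object". The Harris parameters and threshold are assumptions, and real use would of course require the registration, temporal filtering and mask merging described above.

      import cv2
      import numpy as np

      def directed_change_masks(prev_gray, curr_gray, frac=0.01):
          """Return boolean masks (new_object, vanished_object) for two registered grey images."""
          r_prev = cv2.cornerHarris(np.float32(prev_gray), blockSize=3, ksize=3, k=0.04)
          r_curr = cv2.cornerHarris(np.float32(curr_gray), blockSize=3, ksize=3, k=0.04)
          t_prev, t_curr = frac * r_prev.max(), frac * r_curr.max()
          new_object = (r_curr > t_curr) & (r_prev <= t_prev)
          vanished_object = (r_prev > t_prev) & (r_curr <= t_curr)
          return new_object, vanished_object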

  1. The artist's advantage: Better integration of object information across eye movements

    PubMed Central

    Perdreau, Florian; Cavanagh, Patrick

    2013-01-01

    Over their careers, figurative artists spend thousands of hours analyzing objects and scene layout. We examined what impact this extensive training has on the ability to encode complex scenes, comparing participants with a wide range of training and drawing skills on a possible versus impossible objects task. We used a gaze-contingent display to control the amount of information the participants could sample on each fixation either from central or peripheral visual field. Test objects were displayed and participants reported, as quickly as possible, whether the object was structurally possible or not. Our results show that when viewing the image through a small central window, performance improved with the years of training, and to a lesser extent with the level of skill. This suggests that the extensive training itself confers an advantage for integrating object structure into more robust object descriptions. PMID:24349697

  2. Tactile Imaging of an Imbedded Palpable Structure for Breast Cancer Screening

    PubMed Central

    2015-01-01

    Apart from texture, the human finger can sense palpation. The detection of an imbedded structure is a fine balance between the relative stiffness of the matrix, the object, and the device. If the device is too soft, its high responsiveness will limit the depth to which the imbedded structure can be detected. The sensation of palpation is an effective procedure for a physician to examine irregularities. In a clinical breast examination (CBE), by pressing over 1 cm2 area, at a contact pressure in the 70–90 kPa range, the physician feels cancerous lumps that are 8- to 18-fold stiffer than surrounding tissue. Early detection of a lump in the 5–10 mm range leads to an excellent prognosis. We describe a thin-film tactile device that emulates human touch to quantify CBE by imaging the size and shape of 5–10 mm objects at 20 mm depth in a breast model using ∼80 kPa pressure. The linear response of the device allows quantification where the greyscale corresponds to the relative local stiffness. The (background) signal from <2.5-fold stiffer objects at a size below 2 mm is minimal. PMID:25148477

  3. Gland segmentation in prostate histopathological images

    PubMed Central

    Singh, Malay; Kalaw, Emarene Mationg; Giron, Danilo Medina; Chong, Kian-Tai; Tan, Chew Lim; Lee, Hwee Kuan

    2017-01-01

    Abstract. Glandular structural features are important for the tumor pathologist in the assessment of cancer malignancy of prostate tissue slides. The varying shapes and sizes of glands combined with the tedious manual observation task can result in inaccurate assessment. There are also discrepancies and low-level agreement among pathologists, especially in cases of Gleason pattern 3 and pattern 4 prostate adenocarcinoma. An automated gland segmentation system can highlight various glandular shapes and structures for further analysis by the pathologist. These objective highlighted patterns can help reduce the assessment variability. We propose an automated gland segmentation system. Forty-three hematoxylin and eosin-stained images were acquired from prostate cancer tissue slides and were manually annotated for gland, lumen, periacinar retraction clefting, and stroma regions. Our automated gland segmentation system was trained using these manual annotations. It identifies these regions using a combination of pixel and object-level classifiers by incorporating local and spatial information for consolidating pixel-level classification results into object-level segmentation. Experimental results show that our method outperforms various texture and gland structure-based gland segmentation algorithms in the literature. Our method has good performance and can be a promising tool to help decrease interobserver variability among pathologists. PMID:28653016

  4. Fast detection of the main anatomical structures in digital retinal images based on intra- and inter-structure relational knowledge.

    PubMed

    Molina-Casado, José M; Carmona, Enrique J; García-Feijoó, Julián

    2017-10-01

    The anatomical structure detection in retinal images is an open problem. However, most of the works in the related literature are oriented to the detection of each structure individually or assume the previous detection of a structure which is used as a reference. The objective of this paper is to obtain simultaneous detection of the main retinal structures (optic disc, macula, network of vessels and vascular bundle) in a fast and robust way. We propose a new methodology oriented to accomplish the mentioned objective. It consists of two stages. In an initial stage, a set of operators is applied to the retinal image. Each operator uses intra-structure relational knowledge in order to produce a set of candidate blobs that belongs to the desired structure. In a second stage, a set of tuples is created, each of which contains a different combination of the candidate blobs. Next, filtering operators, using inter-structure relational knowledge, are used in order to find the winner tuple. A method using template matching and mathematical morphology is implemented following the proposed methodology. A success is achieved if the distance between the automatically detected blob center and the actual structure center is less than or equal to one optic disc radius. The success rates obtained in the different public databases analyzed were: MESSIDOR (99.33%, 98.58%, 97.92%), DIARETDB1 (96.63%, 100%, 97.75%), DRIONS (100%, n/a, 100%) and ONHSD (100%, 98.85%, 97.70%) for optic disc (OD), macula (M) and vascular bundle (VB), respectively. Finally, the overall success rate obtained in this study for each structure was: 99.26% (OD), 98.69% (M) and 98.95% (VB). The average time of processing per image was 4.16 ± 0.72 s. The main advantage of the use of inter-structure relational knowledge was the reduction of the number of false positives in the detection process. The implemented method is able to simultaneously detect four structures. It is fast, robust and its detection results are competitive in relation to other methods of the recent literature. Copyright © 2017 Elsevier B.V. All rights reserved.
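    As a small, hedged illustration of the template-matching component of the first stage, the Python/OpenCV sketch below locates a single candidate blob centre by normalized cross-correlation; the choice of template, the correlation measure and the function name are assumptions, and the inter-structure filtering of candidate tuples is not shown.

      import cv2

      def locate_structure(image_gray, template_gray):
          """Return the best-matching candidate centre (x, y) and its correlation score."""
          result = cv2.matchTemplate(image_gray, template_gray, cv2.TM_CCOEFF_NORMED)
          _, max_val, _, max_loc = cv2.minMaxLoc(result)
          th, tw = template_gray.shape[:2]
          return (max_loc[0] + tw // 2, max_loc[1] + th // 2), max_val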

  5. Generation of anatomically realistic numerical phantoms for photoacoustic and ultrasonic breast imaging

    NASA Astrophysics Data System (ADS)

    Lou, Yang; Zhou, Weimin; Matthews, Thomas P.; Appleton, Catherine M.; Anastasio, Mark A.

    2017-04-01

    Photoacoustic computed tomography (PACT) and ultrasound computed tomography (USCT) are emerging modalities for breast imaging. As in all emerging imaging technologies, computer-simulation studies play a critically important role in developing and optimizing the designs of hardware and image reconstruction methods for PACT and USCT. Using computer-simulations, the parameters of an imaging system can be systematically and comprehensively explored in a way that is generally not possible through experimentation. When conducting such studies, numerical phantoms are employed to represent the physical properties of the patient or object to-be-imaged that influence the measured image data. It is highly desirable to utilize numerical phantoms that are realistic, especially when task-based measures of image quality are to be utilized to guide system design. However, most reported computer-simulation studies of PACT and USCT breast imaging employ simple numerical phantoms that oversimplify the complex anatomical structures in the human female breast. We develop and implement a methodology for generating anatomically realistic numerical breast phantoms from clinical contrast-enhanced magnetic resonance imaging data. The phantoms will depict vascular structures and the volumetric distribution of different tissue types in the breast. By assigning optical and acoustic parameters to different tissue structures, both optical and acoustic breast phantoms will be established for use in PACT and USCT studies.

  6. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  7. Real-time inspection by submarine images

    NASA Astrophysics Data System (ADS)

    Tascini, Guido; Zingaretti, Primo; Conte, Giuseppe

    1996-10-01

    A real-time application of computer vision concerning the tracking and inspection of a submarine pipeline is described. The objective is to develop automatic procedures for supporting human operators in the real-time analysis of images acquired by cameras mounted on underwater remotely operated vehicles (ROVs). Implementation of such procedures gives rise to a human-machine system for underwater pipeline inspection that can automatically detect and signal the presence of the pipe, of its structural or accessory elements, and of dangerous or alien objects in its neighborhood. The possibility of modifying the image acquisition rate in simulations performed on video-recorded images is used to prove that the system performs all necessary processing with acceptable robustness, working in real time up to a speed of about 2.5 kn, well above what current ROVs and safety requirements allow.

  8. Multi-clues image retrieval based on improved color invariants

    NASA Astrophysics Data System (ADS)

    Liu, Liu; Li, Jian-Xun

    2012-05-01

    At present, image retrieval has made great progress in indexing efficiency and memory usage, which mainly benefits from the use of text retrieval technology, such as the bag-of-features (BOF) model and the inverted-file structure. Meanwhile, because robust local feature invariants are selected to establish the BOF, its retrieval precision is enhanced, especially when it is applied to a large-scale database. However, these local feature invariants mainly account for the geometric variance of the objects in the images, so the color information of the objects fails to be exploited. With the development of information technology and the Internet, the majority of retrieval targets are color images. Therefore, retrieval performance can be further improved through proper use of the color information. We propose an improved method based on an analysis of the flaw of the shadow-shading quasi-invariant. The response and performance of the shadow-shading quasi-invariant at object edges under varying lighting are enhanced. The color descriptors of the invariant regions are extracted and integrated into the BOF based on the local feature. The robustness of the algorithm and the improvement in performance are verified in the final experiments.

  9. Compact hybrid optoelectrical unit for image processing and recognition

    NASA Astrophysics Data System (ADS)

    Cheng, Gang; Jin, Guofan; Wu, Minxian; Liu, Haisong; He, Qingsheng; Yuan, ShiFu

    1998-07-01

    In this paper a compact hybrid opto-electrical unit (CHOEU) for digital image processing and recognition is proposed. The central part of CHOEU is an incoherent optical correlator, realized with a SHARP QA-1200 8.4-inch active-matrix TFT liquid crystal display panel used as two real-time spatial light modulators, one for the input image and one for the reference template. CHOEU performs two main processing tasks: digital filtering and object matching. Using CHOEU, an edge-detection operator is realized to extract the edges from the input images. The preprocessed images are then sent to the object recognition unit for identifying the important targets. A novel template-matching method is proposed for gray-tone image recognition. A positive and negative cycle-encoding method is introduced to realize absolute-difference pixel matching simply on a correlator structure. The system has good fault tolerance for rotation distortion, Gaussian noise disturbance and information loss. Experiments are given at the end of this paper.

  10. Near-infrared images of MG 1131+0456 with the W. M. Keck telescope: Another dusty gravitational lens?

    NASA Technical Reports Server (NTRS)

    Larkin, J. E.; Matthews, K.; Lawrence, C. R.; Graham, J. R.; Harrison, W.; Jernigan, G.; Lin, S.; Nelson, J.; Neugebauer, G.; Smith, G.

    1994-01-01

    Images of the gravitational lens system MG 1131+0456 taken with the near-infrared camera on the W. M. Keck telescope in the J and K(sub s) bands show that the infrared counterparts of the compact radio structure are exceedingly red, with J - K greater than 4.2 mag. The J image reveals only the lensing galaxy, while the K(sub s) image shows both the lens and the infrared counterparts of the compact radio components. After subtracting the lensing galaxy from the K(sub s) image, the position and orientation of the compact components agree with their radio counterparts. The broad-band spectrum and observed brightness of the lens suggest a giant galaxy at a redshift of approximately 0.75, while the color of the quasar images suggests significant extinction by dust in the lens. There is a significant excess of faint objects within 20 sec of MG 1131+0456. Depending on their mass and redshifts, these objects could complicate the lensing potential considerably.

  11. Multiphoton imaging microscopy at deeper layers with adaptive optics control of spherical aberration.

    PubMed

    Bueno, Juan M; Skorsetz, Martin; Palacios, Raquel; Gualda, Emilio J; Artal, Pablo

    2014-01-01

    Despite the inherent confocality and optical sectioning capabilities of multiphoton microscopy, three-dimensional (3-D) imaging of thick samples is limited by the specimen-induced aberrations. The combination of immersion objectives and sensorless adaptive optics (AO) techniques has been suggested to overcome this difficulty. However, a complex plane-by-plane correction of aberrations is required, and its performance depends on a set of image-based merit functions. We propose here an alternative approach to increase penetration depth in 3-D multiphoton microscopy imaging. It is based on the manipulation of the spherical aberration (SA) of the incident beam with an AO device while performing fast tomographic multiphoton imaging. When inducing SA, the image quality at best focus is reduced; however, better quality images are obtained from deeper planes within the sample. This is a compromise that enables registration of improved 3-D multiphoton images using nonimmersion objectives. Examples on ocular tissues and nonbiological samples providing different types of nonlinear signal are presented. The implementation of this technique in a future clinical instrument might provide a better visualization of corneal structures in living eyes.

  12. Fast ray-tracing of human eye optics on Graphics Processing Units.

    PubMed

    Wei, Qi; Patkar, Saket; Pai, Dinesh K

    2014-05-01

    We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can handle ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth of field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique to patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  13. An edge-directed interpolation method for fetal spine MR images.

    PubMed

    Yu, Shaode; Zhang, Rui; Wu, Shibin; Hu, Jiani; Xie, Yaoqin

    2013-10-10

    Fetal spinal magnetic resonance imaging (MRI) is a prenatal routine for the proper assessment of fetal development, especially when spinal malformations are suspected and ultrasound fails to provide details. Limited by hardware, fetal spine MR images suffer from low resolution. High-resolution MR images can directly enhance readability and improve diagnostic accuracy. Image interpolation to higher resolution is required in clinical situations, yet many methods fail to preserve edge structures. Edges carry important structural information about objects in visual scenes, which doctors rely on to detect suspicious regions, classify malformations and make correct diagnoses. Effective interpolation with well-preserved edge structures is still challenging. In this paper, we propose an edge-directed interpolation (EDI) method and apply it to a group of fetal spine MR images to evaluate its feasibility and performance. The method takes edge information from a Canny edge detector to guide further pixel modification. First, low-resolution (LR) images of the fetal spine are interpolated into high-resolution (HR) images at the target factor by the bilinear method. Then edge information from the LR and HR images enters a twofold strategy to sharpen or soften edge structures. Finally an HR image with well-preserved edge structures is generated. The HR images obtained from the proposed method are validated and compared with those from four other EDI methods. Performance is evaluated with six metrics, and subjective analysis of visual quality is based on regions of interest (ROI). All five EDI methods are able to generate HR images with enriched details. In the quantitative analysis, the proposed method outperforms the other four in signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), feature similarity index (FSIM) and mutual information (MI), with seconds-level time consumption (TC). Visual analysis of the ROIs shows that the proposed method maintains better consistency of edge structures with the original images. The proposed method classifies edge orientations into four categories and preserves structures well. It generates convincing HR images with fine details and is suitable for real-time situations. The iterative curvature-based interpolation (ICBI) method may result in crisper edges, while the other three methods are sensitive to noise and artifacts.
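    The twofold sharpen/soften strategy is specific to the paper, but its overall shape can be suggested with a short Python/OpenCV sketch (libraries assumed): bilinear upscaling followed by an unsharp-mask correction applied only where the Canny detector marks edges. The factor, Canny thresholds and sharpening amount are illustrative assumptions.

      import cv2

      def edge_guided_upscale(lr_img, factor=2, canny_lo=50, canny_hi=150, amount=1.0):
          """Bilinear upscaling of an 8-bit image with sharpening restricted to Canny edges."""
          hr = cv2.resize(lr_img, None, fx=factor, fy=factor, interpolation=cv2.INTER_LINEAR)
          edges = cv2.Canny(hr, canny_lo, canny_hi) > 0
          blurred = cv2.GaussianBlur(hr, (5, 5), 0)
          sharpened = cv2.addWeighted(hr, 1.0 + amount, blurred, -amount, 0)
          out = hr.copy()
          out[edges] = sharpened[edges]   # modify pixels only along detected edge structures
          return out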

  14. Multi-image mosaic with SIFT and vision measurement for microscale structures processed by femtosecond laser

    NASA Astrophysics Data System (ADS)

    Wang, Fu-Bin; Tu, Paul; Wu, Chen; Chen, Lei; Feng, Ding

    2018-01-01

    In femtosecond laser processing, the field of view of each image frame of the microscale structure is extremely small. In order to obtain the morphology of the whole microstructure, a multi-image mosaic with partially overlapped regions is required. In the present work, the SIFT algorithm for image mosaicking was analyzed theoretically and, using multiple images of a microgroove structure processed by femtosecond laser, a stitched image of the whole groove structure was studied experimentally and realized. The object of our research was a silicon wafer with a microgroove structure ablated by femtosecond laser. First, we obtained microgrooves with a width of 380 μm at different depths. Second, based on the gray image of the microgroove, a multi-image mosaic of slot width and slot depth was realized. In order to improve the image contrast between the target and the background, and taking the slot-depth image as an example, a multi-image mosaic was then realized using pseudo-color enhancement. Third, in order to measure the structural size of the microgroove from the image, a streak of known width ablated by femtosecond laser at 20 mW was used as a calibration sample. Through edge detection, corner extraction, and image correction of the streak images, we calculated the pixel width of the streak image, found the measurement ratio constant Kw in the width direction, and thus obtained the proportional relationship between pixels and micrometers. Finally, circular spot marks ablated by femtosecond laser at 2 mW and 15 mW were used as test images; after verifying that the value of Kw was correct, the measurement ratio constant Kh in the height direction was obtained, and image measurements of a 380 × 117 μm microgroove were realized based on the measurement ratio constants Kw and Kh. The research and experimental results show that image mosaicking, image calibration, and geometric parameter measurements for microstructural images ablated by femtosecond laser were realized effectively.
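    For readers unfamiliar with SIFT-based stitching, a minimal pairwise sketch in Python/OpenCV is given below (it requires an OpenCV build that exposes cv2.SIFT_create; the ratio threshold and canvas size are assumptions). It matches keypoints between two overlapping frames, estimates a homography with RANSAC, and warps the second frame into the first frame's coordinates; chaining such pairwise stitches yields a whole-groove mosaic. The calibration described above then amounts to dividing the known streak width in micrometres by its measured width in pixels to obtain Kw.

      import cv2
      import numpy as np

      def stitch_pair(img_left, img_right, ratio=0.75):
          """Stitch two overlapping grey-scale frames with SIFT matches and a RANSAC homography."""
          sift = cv2.SIFT_create()
          k1, d1 = sift.detectAndCompute(img_left, None)
          k2, d2 = sift.detectAndCompute(img_right, None)
          matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d2, d1, k=2)   # right -> left
          good = [m for m, n in matches if m.distance < ratio * n.distance]
          src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
          dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
          h, w = img_left.shape[:2]
          canvas = cv2.warpPerspective(img_right, H, (2 * w, h))
          canvas[0:h, 0:w] = img_left                                  # keep the left frame in the overlap
          return canvas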

  15. Reply to “Comment on ‘Near-surface location, geometry, and velocities of the Santa Monica fault zone, Los Angeles, California’ by R. D. Catchings, G. Gandhok, M. R. Goldman, D. Okaya, M. J. Rymer, and G. W. Bawden” by T. L. Pratt and J. F. Dolan

    USGS Publications Warehouse

    Catchings, Rufus D.; Rymer, Michael J.; Goldman, Mark R.; Bawden, Gerald W.

    2010-01-01

    In a comment on our 2008 paper (Catchings, Gandhok, et al., 2008) on the Santa Monica fault in Los Angeles, California, Pratt and Dolan (2010) (herein referred to as P&D) cite numerous objections to our work, inferring that our study is flawed. However, as shown in our reply, their objections contradict their own published works, published works of others, and proven seismic methodologies. Rather than responding to each repeated invalid objection, we address their objections by topic in the subsequent sections.In Catchings, Gandhok, et al. (2008), we presented high-resolution seismic-reflection images that showed two near-surface faults in the upper 50 m beneath the grounds of the Wadsworth Veterans Administration Hospital (WVAH). Although P&D suggest we effectively duplicated their seismic acquisition, our survey was not a duplication of their efforts. Rather, we conducted a seismic-imaging survey over a similar profile as Pratt et al. (1998) but used a different data acquisition system and different data processing methods to evaluate methods of seismically imaging blind faults in the wake of the 17 January 1994 M 6.7 Northridge earthquake. We used an acquisition method that provides both tomographic seismic velocities and reflection images. Our combined-data approach allowed for shallower imaging (∼2.5 m minimum) than the ∼20-m minimum of Pratt et al. (1998), clearer images of the fault zone, and more accurate depth determinations (rather than time images). In processing the reflection images, we used prestack depth migration, which is generally accepted as the only proper imaging method for imaging subsurface structures with strong lateral velocity variations (Versteeg, 1993), a condition shown to exist at the WVAH site. We correlated our reflection images with refraction tomography images, borehole lithology, and velocity data, Interferometric Synthetic Aperture Radar images, and changes in groundwater depths. Except for some minor differences, our seismic-reflection images coincide with previously published seismic-reflection images by Dolan and Pratt (1997) and Pratt et al. (1998), and a paleoseismic study by Dolan et al. (2000). Principal differences among our interpretations and those of Pratt et al. (1998) relate to the upper 20 m and the south side of the fault, which Pratt et al. (1998) did not clearly image. In contrast, our seismic images included structures on both sides of the fault zone from about 2.5 m depth to about 100 m depth at WVAH, allowing us to interpret more details.

  16. Cortical Gray Matter in Attention-Deficit/Hyperactivity Disorder: A Structural Magnetic Resonance Imaging Study

    ERIC Educational Resources Information Center

    Batty, Martin J.; Liddle, Elizabeth B.; Pitiot, Alain; Toro, Roberto; Groom, Madeleine J.; Scerif, Gaia; Liotti, Mario; Liddle, Peter F.; Paus, Tomas; Hollis, Chris

    2010-01-01

    Objective: Previous studies have shown smaller brain volume and less gray matter in children with attention-deficit/hyperactivity disorder (ADHD). Relatively few morphological studies have examined structures thought to subserve inhibitory control, one of the diagnostic features of ADHD. We examined one such region, the pars opercularis,…

  17. Augmented-reality visualization of brain structures with stereo and kinetic depth cues: system description and initial evaluation with head phantom

    NASA Astrophysics Data System (ADS)

    Maurer, Calvin R., Jr.; Sauer, Frank; Hu, Bo; Bascle, Benedicte; Geiger, Bernhard; Wenzel, Fabian; Recchi, Filippo; Rohlfing, Torsten; Brown, Christopher R.; Bakos, Robert J.; Maciunas, Robert J.; Bani-Hashemi, Ali R.

    2001-05-01

    We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears a HMD that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. We are concentrating specifically at this point on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames, and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter. The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.

  18. Dermoscopy-guided reflectance confocal microscopy of skin using high-NA objective lens with integrated wide-field color camera

    NASA Astrophysics Data System (ADS)

    Dickensheets, David L.; Kreitinger, Seth; Peterson, Gary; Heger, Michael; Rajadhyaksha, Milind

    2016-02-01

    Reflectance Confocal Microscopy, or RCM, is being increasingly used to guide diagnosis of skin lesions. The combination of widefield dermoscopy (WFD) with RCM is highly sensitive (~90%) and specific (~ 90%) for noninvasively detecting melanocytic and non-melanocytic skin lesions. The combined WFD and RCM approach is being implemented on patients to triage lesions into benign (with no biopsy) versus suspicious (followed by biopsy and pathology). Currently, however, WFD and RCM imaging are performed with separate instruments, while using an adhesive ring attached to the skin to sequentially image the same region and co-register the images. The latest small handheld RCM instruments offer no provision yet for a co-registered wide-field image. This paper describes an innovative solution that integrates an ultra-miniature dermoscopy camera into the RCM objective lens, providing simultaneous wide-field color images of the skin surface and RCM images of the subsurface cellular structure. The objective lens (0.9 NA) includes a hyperhemisphere lens and an ultra-miniature CMOS color camera, commanding a 4 mm wide dermoscopy view of the skin surface. The camera obscures the central portion of the aperture of the objective lens, but the resulting annular aperture provides excellent RCM optical sectioning and resolution. Preliminary testing on healthy volunteers showed the feasibility of combined WFD and RCM imaging to concurrently show the skin surface in wide-field and the underlying microscopic cellular-level detail. The paper describes this unique integrated dermoscopic WFD/RCM lens, and shows representative images. The potential for dermoscopy-guided RCM for skin cancer diagnosis is discussed.

  19. Dermoscopy-guided reflectance confocal microscopy of skin using high-NA objective lens with integrated wide-field color camera.

    PubMed

    Dickensheets, David L; Kreitinger, Seth; Peterson, Gary; Heger, Michael; Rajadhyaksha, Milind

    2016-02-01

    Reflectance Confocal Microscopy, or RCM, is being increasingly used to guide diagnosis of skin lesions. The combination of widefield dermoscopy (WFD) with RCM is highly sensitive (~90%) and specific (~ 90%) for noninvasively detecting melanocytic and non-melanocytic skin lesions. The combined WFD and RCM approach is being implemented on patients to triage lesions into benign (with no biopsy) versus suspicious (followed by biopsy and pathology). Currently, however, WFD and RCM imaging are performed with separate instruments, while using an adhesive ring attached to the skin to sequentially image the same region and co-register the images. The latest small handheld RCM instruments offer no provision yet for a co-registered wide-field image. This paper describes an innovative solution that integrates an ultra-miniature dermoscopy camera into the RCM objective lens, providing simultaneous wide-field color images of the skin surface and RCM images of the subsurface cellular structure. The objective lens (0.9 NA) includes a hyperhemisphere lens and an ultra-miniature CMOS color camera, commanding a 4 mm wide dermoscopy view of the skin surface. The camera obscures the central portion of the aperture of the objective lens, but the resulting annular aperture provides excellent RCM optical sectioning and resolution. Preliminary testing on healthy volunteers showed the feasibility of combined WFD and RCM imaging to concurrently show the skin surface in wide-field and the underlying microscopic cellular-level detail. The paper describes this unique integrated dermoscopic WFD/RCM lens, and shows representative images. The potential for dermoscopy-guided RCM for skin cancer diagnosis is discussed.

  20. NASA's Hubble Sees Asteroid Spout Six Comet-like Tails

    NASA Image and Video Library

    2013-11-13

    This NASA Hubble Space Telescope set of images reveals a never-before-seen set of six comet-like tails radiating from a body in the asteroid belt, designated P/2013 P5. The asteroid was discovered as an unusually fuzzy-looking object with the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) survey telescope in Hawaii. The multiple tails were discovered in Hubble images taken on Sept. 10, 2013. When Hubble returned to the asteroid on Sept. 23, the asteroid's appearance had totally changed. It looked as if the entire structure had swung around. One interpretation is that the asteroid's rotation rate has been increased to the point where dust is falling off the surface and escaping into space where the pressure of sunlight sweeps out fingerlike tails. According to this theory, the asteroid's spin has been accelerated by the gentle push of sunlight. The object, estimated to be no more than 1,400 feet across, has ejected dust for at least five months, based on analysis of the tail structure. These visible-light, false-color images were taken with Hubble's Wide Field Camera 3. Object Name: P/2013 P5 Image Type: Astronomical/Annotated Credit: NASA, ESA, and D. Jewitt (UCLA)

  1. NASA's Hubble Sees Asteroid Spout Six Comet-like Tails

    NASA Image and Video Library

    2013-11-13

    P/2013 P5 on September 23, 2013. --- This NASA Hubble Space Telescope set of images reveals a never-before-seen set of six comet-like tails radiating from a body in the asteroid belt, designated P/2013 P5. The asteroid was discovered as an unusually fuzzy-looking object with the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) survey telescope in Hawaii. The multiple tails were discovered in Hubble images taken on Sept. 10, 2013. When Hubble returned to the asteroid on Sept. 23, the asteroid's appearance had totally changed. It looked as if the entire structure had swung around. One interpretation is that the asteroid's rotation rate has been increased to the point where dust is falling off the surface and escaping into space where the pressure of sunlight sweeps out fingerlike tails. According to this theory, the asteroid's spin has been accelerated by the gentle push of sunlight. The object, estimated to be no more than 1,400 feet across, has ejected dust for at least five months, based on analysis of the tail structure. These visible-light, false-color images were taken with Hubble's Wide Field Camera 3. Object Name: P/2013 P5 Image Type: Astronomical/Annotated Credit: NASA, ESA, and D. Jewitt (UCLA)

  2. NASA's Hubble Sees Asteroid Spout Six Comet-like Tails

    NASA Image and Video Library

    2013-11-13

    P/2013 P5 on September 10, 2013. --- This NASA Hubble Space Telescope set of images reveals a never-before-seen set of six comet-like tails radiating from a body in the asteroid belt, designated P/2013 P5. The asteroid was discovered as an unusually fuzzy-looking object with the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) survey telescope in Hawaii. The multiple tails were discovered in Hubble images taken on Sept. 10, 2013. When Hubble returned to the asteroid on Sept. 23, the asteroid's appearance had totally changed. It looked as if the entire structure had swung around. One interpretation is that the asteroid's rotation rate has been increased to the point where dust is falling off the surface and escaping into space where the pressure of sunlight sweeps out fingerlike tails. According to this theory, the asteroid's spin has been accelerated by the gentle push of sunlight. The object, estimated to be no more than 1,400 feet across, has ejected dust for at least five months, based on analysis of the tail structure. These visible-light, false-color images were taken with Hubble's Wide Field Camera 3. Object Name: P/2013 P5 Image Type: Astronomical/Annotated Credit: NASA, ESA, and D. Jewitt (UCLA)

  3. A Manual Segmentation Tool for Three-Dimensional Neuron Datasets.

    PubMed

    Magliaro, Chiara; Callara, Alejandro L; Vanello, Nicola; Ahluwalia, Arti

    2017-01-01

    To date, automated or semi-automated software and algorithms for the segmentation of neurons from three-dimensional imaging datasets have had limited success. The gold standard for neural segmentation is considered to be the manual isolation performed by an expert. To facilitate the manual isolation of complex objects from image stacks, such as neurons in their native arrangement within the brain, a new Manual Segmentation Tool (ManSegTool) has been developed. ManSegTool allows the user to load an image stack, scroll through the images and manually draw the structures of interest stack by stack. Users can eliminate unwanted regions or split structures (i.e., branches from different neurons that are too close to each other but, to the experienced eye, clearly belong to different cells), view the object in 3D and save the results obtained. The tool can be used for testing the performance of a single-neuron segmentation algorithm or for extracting complex objects where the available automated methods still fail. Here we describe the software's main features and then show an example of how ManSegTool can be used to segment neuron images acquired with a confocal microscope. In particular, expert neuroscientists were asked to segment different neurons, from which morphometric variables were subsequently extracted as a benchmark for precision. In addition, a literature-defined index for evaluating the goodness of segmentation was used as a benchmark for accuracy. Neocortical layer axons from a DIADEM challenge dataset were also segmented with ManSegTool and compared with the manual gold standard generated for the competition.

  4. Resolution and throughput optimized intraoperative spectrally encoded coherence tomography and reflectometry (iSECTR) for multimodal imaging during ophthalmic microsurgery

    NASA Astrophysics Data System (ADS)

    Malone, Joseph D.; El-Haddad, Mohamed T.; Leeburg, Kelsey C.; Terrones, Benjamin D.; Tao, Yuankai K.

    2018-02-01

    Limited visualization of semi-transparent structures in the eye remains a critical barrier to improving clinical outcomes and developing novel surgical techniques. While increases in imaging speed have enabled intraoperative optical coherence tomography (iOCT) imaging of surgical dynamics, several critical barriers to clinical adoption remain. Specifically, these include (1) static fields-of-view (FOVs) that require manual instrument tracking; (2) high frame rates that require sparse sampling, which limits the FOV; and (3) a small iOCT FOV that limits the ability to co-register data with surgical microscopy. We previously addressed these limitations in image-guided ophthalmic microsurgery by developing microscope-integrated multimodal intraoperative swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography. Complementary en face images enabled orientation and co-registration with the widefield surgical microscope view, while OCT imaging enabled depth-resolved visualization of surgical instrument positions relative to anatomic structures of interest. In addition, we demonstrated novel integrated segmentation overlays for augmented-reality surgical guidance. Unfortunately, our previous system lacked the resolution and optical throughput for in vivo retinal imaging and necessitated removal of the cornea and lens. These limitations were predominantly a result of optical aberrations from imaging through a shared surgical microscope objective lens, which was modeled as a paraxial surface. Here, we present an optimized intraoperative spectrally encoded coherence tomography and reflectometry (iSECTR) system. We use a novel lens characterization method to develop an accurate model of surgical microscope objective performance and balance the inherent aberrations with the iSECTR relay optics. Using this system, we demonstrate in vivo multimodal ophthalmic imaging through a surgical microscope.

  5. VA's Integrated Imaging System on three platforms.

    PubMed

    Dayhoff, R E; Maloney, D L; Majurski, W J

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability.

  6. VA's Integrated Imaging System on three platforms.

    PubMed Central

    Dayhoff, R. E.; Maloney, D. L.; Majurski, W. J.

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability. PMID:1482983

  7. Modified-BRISQUE as no reference image quality assessment for structural MR images.

    PubMed

    Chow, Li Sze; Rajagopal, Heshalini

    2017-11-01

    An effective and practical Image Quality Assessment (IQA) model is needed to assess the image quality produced by any new hardware or software in MRI. A highly competitive No-Reference IQA (NR-IQA) model called the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), initially designed for natural images, was modified to evaluate structural MR images. The BRISQUE model measures image quality using locally normalized luminance coefficients, from which the image features are calculated. The modified-BRISQUE model trained a new regression model using MR image features and the Difference Mean Opinion Score (DMOS) from 775 MR images. Two types of benchmark, objective and subjective assessments, were used as performance evaluators for both the original and modified BRISQUE models. There was a high correlation between the modified-BRISQUE and both benchmarks, higher than that of the original BRISQUE, with a significant percentage improvement in the correlation values. The modified-BRISQUE was statistically better than the original BRISQUE. The modified-BRISQUE model can accurately measure the image quality of MR images. It is a practical NR-IQA model for MR images that does not require reference images. Copyright © 2017 Elsevier Inc. All rights reserved.
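    The first step of any BRISQUE-style model, the locally normalized luminance (MSCN) coefficients mentioned above, can be sketched in a few lines of Python (numpy and scipy assumed). The Gaussian window width and the stabilizing constant are common defaults, not necessarily the values used in the modified-BRISQUE paper, and the subsequent feature fitting and regression are omitted.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def mscn_coefficients(img, sigma=7.0 / 6.0, c=1.0):
          """Locally normalized luminance coefficients of a 2-D grey-level image."""
          img = img.astype(np.float64)
          mu = gaussian_filter(img, sigma)                      # local mean
          var = gaussian_filter(img * img, sigma) - mu * mu     # local variance
          return (img - mu) / (np.sqrt(np.abs(var)) + c)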

  8. Steganography Detection Using Entropy Measures

    DTIC Science & Technology

    2012-11-16

    [Fragmented text extracted from the report:] ... the latter leads to the level of compression of the image. Section 3.3, Least Significant Bit (LSB): the object of steganography is to prevent suspicion upon the ... structured user interface developer tools. Title page: Steganography Detection Using Entropy Measures, Technical Report, by Eduardo Meléndez, Universidad Politécnica de ... Table-of-contents fragment: 2.3. Different kinds of steganography; II. Steganography; 3. Images and Significance of ...

  9. Near-ultraviolet imaging of Jupiter's satellite Io with the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Paresce, F.; Sartoretti, P.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Boksenberg, A.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.

    1992-01-01

    The surface of Jupiter's Galilean satellite Io has been resolved for the first time in the near ultraviolet at 2850 A by the Faint Object Camera (FOC) on the Hubble Space Telescope (HST). The restored images reveal significant surface structure down to the resolution limit of the optical system corresponding to approximately 250 km at the sub-earth point.

  10. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors

    PubMed Central

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255
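    The final fitting step can be illustrated with a textbook point-to-point ICP loop in Python (numpy and scipy assumed); this is a generic sketch, not the implementation used in the paper, and it presumes the point clouds are already roughly assigned to each other as described above.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_align(src, dst, iters=30):
          """Rigidly align (N, 3) source points to (M, 3) destination points; returns points, R, t."""
          src = src.copy()
          tree = cKDTree(dst)
          R_total, t_total = np.eye(3), np.zeros(3)
          for _ in range(iters):
              _, idx = tree.query(src)                   # closest destination point per source point
              matched = dst[idx]
              src_c, dst_c = src.mean(axis=0), matched.mean(axis=0)
              U, _, Vt = np.linalg.svd((src - src_c).T @ (matched - dst_c))
              R = Vt.T @ U.T
              if np.linalg.det(R) < 0:                   # guard against a reflection solution
                  Vt[-1] *= -1
                  R = Vt.T @ U.T
              t = dst_c - R @ src_c
              src = src @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
          return src, R_total, t_total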

  11. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    PubMed

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-04-24

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.

  12. Plant Identification Based on Leaf Midrib Cross-Section Images Using Fractal Descriptors.

    PubMed

    da Silva, Núbia Rosa; Florindo, João Batista; Gómez, María Cecilia; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez

    2015-01-01

    The correct identification of plants is a common necessity not only for researchers but also for the lay public. Recently, computational methods have been employed to facilitate this task; however, there are few studies addressing the wide diversity of plants occurring in the world. This study proposes to analyse images obtained from cross-sections of the leaf midrib using fractal descriptors. These descriptors are obtained from the fractal dimension of the object computed over a range of scales. In this way, they provide rich information regarding the spatial distribution of the analysed structure and, as a consequence, they measure the multiscale morphology of the object of interest. In Biology, such morphology is of great importance because it is related to evolutionary aspects and is successfully employed to characterize and discriminate among different biological structures. Here, the fractal descriptors are used to identify plant species based on images of their leaves. A large number of samples are examined: 606 leaf samples of 50 species from the Brazilian flora. The results are compared to other imaging methods in the literature and demonstrate that fractal descriptors are precise and reliable in the taxonomic process of plant species identification.
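    A hedged sketch of the underlying measurement, a box-counting estimate of the fractal dimension across scales, is given below in Python (numpy assumed); the vector of log box counts can itself serve as a simple multiscale descriptor. The scale set is an assumption, and the authors' exact descriptor construction is not reproduced.

      import numpy as np

      def box_counting(binary_img, scales=(2, 4, 8, 16, 32)):
          """Log box counts over several scales and the fitted fractal dimension of a binary image."""
          h, w = binary_img.shape
          counts = []
          for s in scales:
              hh, ww = h - h % s, w - w % s                       # crop to a multiple of the box size
              blocks = binary_img[:hh, :ww].reshape(hh // s, s, ww // s, s)
              counts.append(max(int(blocks.any(axis=3).any(axis=1).sum()), 1))
          log_counts = np.log(np.asarray(counts, dtype=float))
          dimension = np.polyfit(np.log(1.0 / np.asarray(scales, dtype=float)), log_counts, 1)[0]
          return log_counts, dimension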

  13. Terahertz imaging through self-mixing in a quantum cascade laser.

    PubMed

    Dean, Paul; Lim, Yah Leng; Valavanis, Alex; Kliese, Russell; Nikolić, Milan; Khanna, Suraj P; Lachab, Mohammad; Indjin, Dragan; Ikonić, Zoran; Harrison, Paul; Rakić, Aleksandar D; Linfield, Edmund H; Davies, A Giles

    2011-07-01

    We demonstrate terahertz (THz) frequency imaging using a single quantum cascade laser (QCL) device for both generation and sensing of THz radiation. Detection is achieved by utilizing the effect of self-mixing in the THz QCL, and, specifically, by monitoring perturbations to the voltage across the QCL, induced by light reflected from an external object back into the laser cavity. Self-mixing imaging offers high sensitivity, a potentially fast response, and a simple, compact optical design, and we show that it can be used to obtain high-resolution reflection images of exemplar structures.

  14. Microradiography with Semiconductor Pixel Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakubek, Jan; Cejnarova, Andrea; Dammer, Jiri

    High-resolution radiography (with X-rays, neutrons, heavy charged particles, ...), often also exploited in tomographic mode to provide 3D images, is a powerful imaging technique for instant and nondestructive visualization of the fine internal structure of objects. Novel types of semiconductor single-particle counting pixel detectors offer many advantages for radiation imaging: high detection efficiency, energy discrimination or direct energy measurement, noiseless digital integration (counting), high frame rate and virtually unlimited dynamic range. This article shows the application and potential of pixel detectors (such as Medipix2 or TimePix) in different fields of radiation imaging.

  15. Northeast Artificial Intelligence Consortium Annual Report for 1987. Volume 7. Parallel, Structural, and Optimal Techniques in Vision

    DTIC Science & Technology

    1989-03-01

    [Figure excerpt: the test case is a Tinker Toy model of the dinosaur Tyrannosaurus Rex, characterized by sharply discontinuous depths varying over a wide range; Figures 7-10 show the left and right images of the T. Rex scene and the connected contours extracted from each image.]

  16. Imaging of the optic nerve and retinal nerve fiber layer: an essential part of glaucoma diagnosis and monitoring.

    PubMed

    Kotowski, Jacek; Wollstein, Gadi; Ishikawa, Hiroshi; Schuman, Joel S

    2014-01-01

    Because glaucomatous damage is irreversible, early detection of structural changes in the optic nerve head and retinal nerve fiber layer is imperative for timely diagnosis of glaucoma and monitoring of its progression. Significant improvements in ocular imaging have been made in recent years. Imaging techniques such as optical coherence tomography, scanning laser polarimetry and confocal scanning laser ophthalmoscopy rely on different properties of light to provide objective structural assessment of the optic nerve head, retinal nerve fiber layer and macula. In this review, we discuss the capabilities of these imaging modalities pertinent for diagnosis of glaucoma and detection of progressive glaucomatous damage and provide a review of the current knowledge on the clinical performance of these technologies. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Quantitative evaluation method of the bubble structure of sponge cake by using morphology image processing

    NASA Astrophysics Data System (ADS)

    Tatebe, Hironobu; Kato, Kunihito; Yamamoto, Kazuhiko; Katsuta, Yukio; Nonaka, Masahiko

    2005-12-01

    Many evaluation methods for the food industry that use image processing have now been proposed. These methods are becoming a new means of evaluation alongside the sensory test and the solid-state measurements used for quality evaluation. An advantage of image processing is that it allows objective evaluation. The goal of our research is structure evaluation of sponge cake by using image processing. In this paper, we propose a feature extraction method for the bubble structure of sponge cake. Analysis of the bubble structure is one of the important properties for understanding the characteristics of the cake from the image. In order to capture the cake image, we first cut the cakes and measured their surfaces using a CIS scanner. Because the depth of field of this type of scanner is very shallow, the bubble regions of the surface have low grey-scale values and appear blurred. We extracted bubble regions from the surface images based on these features. First, the input image is binarized, and the bubble features are extracted by morphology analysis. In order to evaluate the result of the feature extraction, we compared the correlation with the "Size of the bubble" scores from the sensory test. The results show that bubble extraction using morphology analysis gives good correlation, indicating that our method performs as well as the subjective evaluation.
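
    A rough Python sketch of the bubble-extraction idea using scipy.ndimage: binarize the scanned surface, clean it with morphological opening, and report simple size statistics. The threshold, structuring element and minimum area are illustrative assumptions rather than the authors' parameters, and the input is a synthetic stand-in for a scanned image.

      import numpy as np
      from scipy import ndimage as ndi

      def extract_bubbles(gray, threshold=100, min_area=5):
          bubbles = gray < threshold                             # dark, blurred regions
          bubbles = ndi.binary_opening(bubbles, structure=np.ones((3, 3)))
          labels, n = ndi.label(bubbles)
          areas = ndi.sum(bubbles, labels, index=np.arange(1, n + 1))
          return labels, areas[areas >= min_area]

      rng = np.random.default_rng(1)
      surface = rng.integers(80, 200, size=(200, 200)).astype(np.uint8)
      labels, areas = extract_bubbles(surface)
      print("bubbles found:", len(areas),
            "mean area (px):", float(areas.mean()) if len(areas) else 0.0)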

  18. Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)

  19. Object-Based Dense Matching Method for Maintaining Structure Characteristics of Linear Buildings

    PubMed Central

    Yan, Yiming; Qiu, Mingjie; Zhao, Chunhui; Wang, Liguo

    2018-01-01

    In this paper, we propose a novel object-based dense matching method specifically for high-precision disparity maps of building objects in urban areas, which can maintain accurate object structure characteristics. The proposed framework mainly includes three stages. Firstly, an improved edge line extraction method is proposed so that the edge segments fit closely to building outlines. Secondly, a fusion method is proposed for the outlines under the constraint of straight lines, which maintains the structural attribute of buildings with parallel or vertical edges and is very useful for the dense matching method. Finally, we propose an edge constraint and outline compensation (ECAOC) dense matching method to maintain building object structural characteristics in the disparity map. In the proposed method, the improved edge lines are used to optimize the matching search scope and matching template window, and the high-precision building outlines are used to compensate the shape features of building objects. Our method can greatly increase the matching accuracy of building objects in urban areas, especially at building edges. For the outline extraction experiments, our fusion method verifies its superiority and robustness on panchromatic images of different satellites and different resolutions. For the dense matching experiments, our ECAOC method shows great advantages in matching accuracy for building objects in urban areas compared with three other methods. PMID:29596393

  20. Quantitative evaluation of in vivo vital-dye fluorescence endoscopic imaging for the detection of Barrett’s-associated neoplasia

    PubMed Central

    Thekkek, Nadhi; Lee, Michelle H.; Polydorides, Alexandros D.; Rosen, Daniel G.; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca

    2015-01-01

    Current imaging tools are associated with inconsistent sensitivity and specificity for detection of Barrett’s-associated neoplasia. Optical imaging has shown promise in improving the classification of neoplasia in vivo. The goal of this pilot study was to evaluate whether in vivo vital dye fluorescence imaging (VFI) has the potential to improve the accuracy of early-detection of Barrett’s-associated neoplasia. In vivo endoscopic VFI images were collected from 65 sites in 14 patients with confirmed Barrett’s esophagus (BE), dysplasia, or esophageal adenocarcinoma using a modular video endoscope and a high-resolution microendoscope (HRME). Qualitative image features were compared to histology; VFI and HRME images show changes in glandular structure associated with neoplastic progression. Quantitative image features in VFI images were identified for objective image classification of metaplasia and neoplasia, and a diagnostic algorithm was developed using leave-one-out cross validation. Three image features extracted from VFI images were used to classify tissue as neoplastic or not with a sensitivity of 87.8% and a specificity of 77.6% (AUC=0.878). A multimodal approach incorporating VFI and HRME imaging can delineate epithelial changes present in Barrett’s-associated neoplasia. Quantitative analysis of VFI images may provide a means for objective interpretation of BE during surveillance. PMID:25950645

  1. Quantitative evaluation of in vivo vital-dye fluorescence endoscopic imaging for the detection of Barrett's-associated neoplasia.

    PubMed

    Thekkek, Nadhi; Lee, Michelle H; Polydorides, Alexandros D; Rosen, Daniel G; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca

    2015-05-01

    Current imaging tools are associated with inconsistent sensitivity and specificity for detection of Barrett's-associated neoplasia. Optical imaging has shown promise in improving the classification of neoplasia in vivo. The goal of this pilot study was to evaluate whether in vivo vital dye fluorescence imaging (VFI) has the potential to improve the accuracy of early-detection of Barrett's-associated neoplasia. In vivo endoscopic VFI images were collected from 65 sites in 14 patients with confirmed Barrett's esophagus (BE), dysplasia, or esophageal adenocarcinoma using a modular video endoscope and a high-resolution microendoscope (HRME). Qualitative image features were compared to histology; VFI and HRME images show changes in glandular structure associated with neoplastic progression. Quantitative image features in VFI images were identified for objective image classification of metaplasia and neoplasia, and a diagnostic algorithm was developed using leave-one-out cross validation. Three image features extracted from VFI images were used to classify tissue as neoplastic or not with a sensitivity of 87.8% and a specificity of 77.6% (AUC = 0.878). A multimodal approach incorporating VFI and HRME imaging can delineate epithelial changes present in Barrett's-associated neoplasia. Quantitative analysis of VFI images may provide a means for objective interpretation of BE during surveillance.
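
    As an illustration of the study design only (not the VFI data or the authors' classifier), the Python sketch below runs a leave-one-out cross-validated classifier on a few synthetic image features with scikit-learn; logistic regression is used as a stand-in decision rule.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(2)
      X = rng.normal(size=(65, 3))        # 65 imaged sites x 3 quantitative features
      y = rng.integers(0, 2, size=65)     # 1 = neoplastic, 0 = non-neoplastic

      scores = cross_val_predict(LogisticRegression(), X, y,
                                 cv=LeaveOneOut(), method="predict_proba")[:, 1]
      print("LOOCV AUC on synthetic data:", round(roc_auc_score(y, scores), 3))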

  2. Time-gated ballistic imaging using a large aperture switching beam.

    PubMed

    Mathieu, Florian; Reddemann, Manuel A; Palmer, Johannes; Kneer, Reinhold

    2014-03-24

    Ballistic imaging commonly denotes the formation of line-of-sight shadowgraphs through turbid media by suppression of multiply scattered photons. The technique relies on a femtosecond laser acting as the light source for the images and as the switch for an optical Kerr gate that separates ballistic photons from multiply scattered ones. The achievable image resolution is one major limitation for the investigation of small objects. In this study, practical influences on the optical Kerr gate and image quality are discussed theoretically and experimentally, applying a switching beam with a large aperture (D = 19 mm). It is shown how the switching pulse energy and the synchronization of switching and imaging pulses in the Kerr cell influence the gate's transmission. The image quality of ballistic imaging and standard shadowgraphy is evaluated and compared, showing that the present ballistic imaging setup is advantageous for optical densities in the range of 8 < OD < 13. Owing to the spatial transmission characteristics of the optical Kerr gate, a rectangular aperture stop is formed, which leads to different resolution limits for vertical and horizontal structures in the object. Furthermore, it is reported how to convert the ballistic imaging setup into a schlieren-type system with an optical schlieren edge.

  3. Adaptive Intuitionistic Fuzzy Enhancement of Brain Tumor MR Images

    NASA Astrophysics Data System (ADS)

    Deng, He; Deng, Wankai; Sun, Xianping; Ye, Chaohui; Zhou, Xin

    2016-10-01

    Image enhancement techniques are able to improve the contrast and visual quality of magnetic resonance (MR) images. However, conventional methods cannot compensate for the deficiencies of the respective brain tumor MR imaging modes. In this paper, we propose an adaptive intuitionistic fuzzy sets-based scheme, called AIFE, which takes information provided by different MR acquisitions and tries to enhance the normal and abnormal structural regions of the brain while displaying the enhanced results as a single image. The AIFE scheme first separates an input image into several sub-images, then divides each sub-image into object and background areas. After that, different novel fuzzification, hyperbolization and defuzzification operations are applied to each object/background area, and finally an enhanced result is obtained via nonlinear fusion operators. The fuzzy operations can be processed in parallel. Real data experiments demonstrate that the AIFE scheme not only effectively fuses information from images acquired with different MR sequences into a single image, but also has better enhancement performance than conventional baseline algorithms. This indicates that the proposed AIFE scheme has potential for improving the detection and diagnosis of brain tumors.
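
    A minimal sketch of fuzzy contrast enhancement in the spirit of the scheme above: fuzzify intensities to a membership function, apply a hyperbolization-style nonlinearity, and defuzzify back to grey levels. The intuitionistic membership/hesitation functions and fusion operators of the paper are not reproduced here; the parameter beta and the test image are assumptions.

      import numpy as np

      def fuzzy_enhance(img, beta=1.5):
          x = img.astype(float)
          lo, hi = x.min(), x.max()
          mu = (x - lo) / (hi - lo + 1e-12)                          # fuzzification
          mu_h = (np.exp(-beta * mu) - 1.0) / (np.exp(-beta) - 1.0)  # hyperbolization
          return mu_h * (hi - lo) + lo                               # defuzzification

      rng = np.random.default_rng(3)
      mr_slice = rng.integers(0, 255, size=(64, 64)).astype(np.uint8)
      enhanced = fuzzy_enhance(mr_slice)
      print("output range:", round(enhanced.min(), 1), "-", round(enhanced.max(), 1))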

  4. A multi-resolution strategy for a multi-objective deformable image registration framework that accommodates large anatomical differences

    NASA Astrophysics Data System (ADS)

    Alderliesten, Tanja; Bosman, Peter A. N.; Sonke, Jan-Jakob; Bel, Arjan

    2014-03-01

    Currently, two major challenges dominate the field of deformable image registration. The first challenge is related to the tuning of the developed methods to specific problems (i.e. how to best combine different objectives such as similarity measure and transformation effort). This is one of the reasons why, despite significant progress, clinical implementation of such techniques has proven to be difficult. The second challenge is to account for large anatomical differences (e.g. large deformations, (dis)appearing structures) that occurred between image acquisitions. In this paper, we study a framework based on multi-objective optimization to improve registration robustness and to simplify tuning for specific applications. Within this framework we specifically consider the use of an advanced model-based evolutionary algorithm for optimization and a dual-dynamic transformation model (i.e. two "non-fixed" grids: one for the source- and one for the target image) to accommodate for large anatomical differences. The framework computes and presents multiple outcomes that represent efficient trade-offs between the different objectives (a so-called Pareto front). In image processing it is common practice, for reasons of robustness and accuracy, to use a multi-resolution strategy. This is, however, only well-established for single-objective registration methods. Here we describe how such a strategy can be realized for our multi-objective approach and compare its results with a single-resolution strategy. For this study we selected the case of prone-supine breast MRI registration. Results show that the well-known advantages of a multi-resolution strategy are successfully transferred to our multi-objective approach, resulting in superior (i.e. Pareto-dominating) outcomes.
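
    The sketch below (Python/NumPy) shows the core multi-objective idea only: given candidate registrations scored on two objectives to be minimized (e.g. image dissimilarity and deformation effort), keep the non-dominated set that forms the Pareto front. The objective values are random placeholders.

      import numpy as np

      def pareto_front(costs):
          # Boolean mask of non-dominated rows in an (n_candidates, n_objectives) array.
          keep = np.ones(len(costs), dtype=bool)
          for i in range(len(costs)):
              if not keep[i]:
                  continue
              dominated = (np.all(costs >= costs[i], axis=1)
                           & np.any(costs > costs[i], axis=1))
              keep &= ~dominated
          return keep

      rng = np.random.default_rng(4)
      objectives = rng.random((50, 2))     # 50 candidate registrations, 2 objectives
      front = pareto_front(objectives)
      print("Pareto-optimal candidates:", int(front.sum()), "of", len(objectives))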

  5. Blind subjects construct conscious mental images of visual scenes encoded in musical form.

    PubMed Central

    Cronly-Dillon, J; Persaud, K C; Blore, R

    2000-01-01

    Blind (previously sighted) subjects are able to analyse, describe and graphically represent a number of high-contrast visual images translated into musical form de novo. We presented musical transforms of a random assortment of photographic images of objects and urban scenes to such subjects, a few of which depicted architectural and other landmarks that may be useful in navigating a route to a particular destination. Our blind subjects were able to use the sound representation to construct a conscious mental image that was revealed by their ability to depict a visual target by drawing it. We noted the similarity between the way the visual system integrates information from successive fixations to form a representation that is stable across eye movements and the way a succession of image frames (encoded in sound) which depict different portions of the image are integrated to form a seamless mental image. Finally, we discuss the profound resemblance between the way a professional musician carries out a structural analysis of a musical composition in order to relate its structure to the perception of musical form and the strategies used by our blind subjects in isolating structural features that collectively reveal the identity of visual form. PMID:11413637

  6. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  7. Interferometry in the era of time-domain astronomy

    NASA Astrophysics Data System (ADS)

    Schaefer, Gail H.; Cassan, Arnaud; Gallenne, Alexandre; Roettenbacher, Rachael M.; Schneider, Jean

    2018-04-01

    The physical nature of time variable objects is often inferred from photometric light-curves and spectroscopic variations. Long-baseline optical interferometry has the power to resolve the spatial structure of time variable sources directly in order to measure their physical properties and test the physics of the underlying models. Recent interferometric studies of variable objects include measuring the angular expansion and spatial structure during the early stages of novae outbursts, studying the transits and tidal distortions of the components in eclipsing and interacting binaries, measuring the radial pulsations in Cepheid variables, monitoring changes in the circumstellar discs around rapidly rotating massive stars, and imaging starspots. Future applications include measuring the image size and centroid displacements in gravitational microlensing events, and imaging the transits of exoplanets. Ongoing and upcoming photometric surveys will dramatically increase the number of time-variable objects detected each year, providing many potential targets to observe interferometrically. For short-lived transient events, it is critical for interferometric arrays to have the flexibility to respond rapidly to targets of opportunity and optimize the selection of baselines and beam combiners to provide the necessary resolution and sensitivity to resolve the source as its brightness and size change. We discuss the science opportunities made possible by resolving variable sources using long baseline optical interferometry.

  8. Clinical comparative study with a large-area amorphous silicon flat-panel detector: image quality and visibility of anatomic structures on chest radiography.

    PubMed

    Fink, Christian; Hallscheidt, Peter J; Noeldge, Gerd; Kampschulte, Annette; Radeleff, Boris; Hosch, Waldemar P; Kauffmann, Günter W; Hansmann, Jochen

    2002-02-01

    The objective of this study was to compare clinical chest radiographs of a large-area, flat-panel digital radiography system and a conventional film-screen radiography system. The comparison was based on an observer preference study of image quality and visibility of anatomic structures. Routine follow-up chest radiographs were obtained from 100 consecutive oncology patients using a large-area, amorphous silicon flat-panel detector digital radiography system (dose equivalent to a 400-speed film system). Hard-copy images were compared with previous examinations of the same individuals taken on a conventional film-screen system (200-speed). Patients were excluded if changes in the chest anatomy were detected or if the time interval between the examinations exceeded 1 year. Observer preference was evaluated for the image quality and the visibility of 15 anatomic structures using a five-point scale. Dose measurements with a chest phantom showed a dose reduction of approximately 50% with the digital radiography system compared with the film-screen radiography system. The image quality and the visibility of all but one anatomic structure of the images obtained with the digital flat-panel detector system were rated significantly superior (p ≤ 0.0003) to those obtained with the conventional film-screen radiography system. The image quality and visibility of anatomic structures on the images obtained by the flat-panel detector system were perceived as equal or superior to the images from conventional film-screen chest radiography. This was true even though the radiation dose was reduced approximately 50% with the digital flat-panel detector system.

  9. Fundamental quantum noise mapping with tunnelling microscopes tested at surface structures of subatomic lateral size.

    PubMed

    Herz, Markus; Bouvron, Samuel; Ćavar, Elizabeta; Fonin, Mikhail; Belzig, Wolfgang; Scheer, Elke

    2013-10-21

    We present a measurement scheme that enables quantitative detection of the shot noise in a scanning tunnelling microscope while scanning the sample. As test objects we study defect structures produced on an iridium single crystal at low temperatures. The defect structures appear in the constant current images as protrusions with curvature radii well below the atomic diameter. The measured power spectral density of the noise is very near to the quantum limit with Fano factor F = 1. While the constant current images show detailed structures expected for tunnelling involving d-atomic orbitals of Ir, we find the current noise to be without pronounced spatial variation as expected for shot noise arising from statistically independent events.
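
    For reference, the quantum-limited benchmark quoted above follows the standard shot-noise relation: the current power spectral density of full (Poissonian) shot noise is S_I = 2eI, and the Fano factor is the measured density normalized to that value. A back-of-the-envelope Python sketch with an arbitrary example current:

      e = 1.602176634e-19                    # elementary charge [C]

      def shot_noise_psd(current_a):
          return 2.0 * e * current_a         # full shot noise, [A^2 / Hz]

      def fano_factor(measured_psd, current_a):
          return measured_psd / shot_noise_psd(current_a)

      I = 1e-9                               # 1 nA tunnelling current
      S = shot_noise_psd(I)
      print("S_I =", S, "A^2/Hz,  F =", fano_factor(S, I))   # F = 1 for Poissonian noise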

  10. Real-world visual statistics and infants' first-learned object names

    PubMed Central

    Clerkin, Elizabeth M.; Hart, Elizabeth; Rehg, James M.; Yu, Chen

    2017-01-01

    We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2 to 10 1/2 month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present—a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences’. PMID:27872373

  11. Imaging Variable Stars with HST

    NASA Astrophysics Data System (ADS)

    Karovska, Margarita

    2011-05-01

    The Hubble Space Telescope (HST) observations of astronomical sources, ranging from objects in our solar system to objects in the early Universe, have revolutionized our knowledge of the Universe, its origins and contents. I will highlight results from HST observations of variable stars obtained during the past twenty or so years. Multiwavelength observations of numerous variable stars and stellar systems were obtained using the superb HST imaging capabilities and its unprecedented angular resolution, especially in the UV and optical. The HST provided the first detailed images probing the structure of variable stars including their atmospheres and circumstellar environments. AAVSO observations and light curves have been critical for scheduling of many of these observations and provided important information and context for understanding of the imaging results of many variable sources. I will describe the scientific results from the imaging observations of variable stars including AGBs, Miras, Cepheids, semi-regular variables (including supergiants and giants), YSOs and interacting stellar systems with variable stellar components. These results have led to an unprecedented understanding of the spatial and temporal characteristics of these objects and their place in the stellar evolutionary chains, and in the larger context of the dynamic evolving Universe.

  12. Imaging Variable Stars with HST

    NASA Astrophysics Data System (ADS)

    Karovska, M.

    2012-06-01

    (Abstract only) The Hubble Space Telescope (HST) observations of astronomical sources, ranging from objects in our solar system to objects in the early Universe, have revolutionized our knowledge of the Universe, its origins and contents. I highlight results from HST observations of variable stars obtained during the past twenty or so years. Multiwavelength observations of numerous variable stars and stellar systems were obtained using the superb HST imaging capabilities and its unprecedented angular resolution, especially in the UV and optical. The HST provided the first detailed images probing the structure of variable stars including their atmospheres and circumstellar environments. AAVSO observations and light curves have been critical for scheduling of many of these observations and provided important information and context for understanding of the imaging results of many variable sources. I describe the scientific results from the imaging observations of variable stars including AGBs, Miras, Cepheids, semiregular variables (including supergiants and giants), YSOs and interacting stellar systems with variable stellar components. These results have led to an unprecedented understanding of the spatial and temporal characteristics of these objects and their place in the stellar evolutionary chains, and in the larger context of the dynamic evolving Universe.

  13. Camera system for multispectral imaging of documents

    NASA Astrophysics Data System (ADS)

    Christens-Barry, William A.; Boydston, Kenneth; France, Fenella G.; Knox, Keith T.; Easton, Roger L., Jr.; Toth, Michael B.

    2009-02-01

    A spectral imaging system comprising a 39-Mpixel monochrome camera, LED-based narrowband illumination, and acquisition/control software has been designed for investigations of cultural heritage objects. Notable attributes of this system, referred to as EurekaVision, include: streamlined workflow, flexibility, provision of well-structured data and metadata for downstream processing, and illumination that is safer for the artifacts. The system design builds upon experience gained while imaging the Archimedes Palimpsest and has been used in studies of a number of important objects in the LOC collection. This paper describes practical issues that were considered by EurekaVision to address key research questions for the study of fragile and unique cultural objects over a range of spectral bands. The system is intended to capture important digital records for access by researchers, professionals, and the public. The system was first used for spectral imaging of the 1507 world map by Martin Waldseemueller, the first printed map to reference "America." It was also used to image sections of the Carta Marina 1516 map by the same cartographer for comparative purposes. An updated version of the system is now being utilized by the Preservation Research and Testing Division of the Library of Congress.

  14. Use of micro computed-tomography and 3D printing for reverse engineering of mouse embryo nasal capsule

    NASA Astrophysics Data System (ADS)

    Tesařová, M.; Zikmund, T.; Kaucká, M.; Adameyko, I.; Jaroš, J.; Paloušek, D.; Škaroupka, D.; Kaiser, J.

    2016-03-01

    Imaging of increasingly complex cartilage in vertebrate embryos is one of the key tasks of developmental biology. This is especially important for studying shape-organizing processes during initial skeletal formation and growth. Advanced imaging techniques that reflect biological needs give a powerful impulse to push the boundaries of biological visualization. Recently, techniques for contrasting tissues and organs have improved considerably, extending traditional 2D imaging approaches to 3D. X-ray micro computed tomography (μCT), which allows 3D imaging of biological objects including their internal structures with a resolution in the micrometer range, in combination with contrasting techniques seems to be the most suitable approach for non-destructive imaging of embryonic developing cartilage. Although there are many software-based ways to visualize 3D data sets, having a real solid model of the studied object might give novel opportunities to fully understand the shape-organizing processes in the developing body. In this feasibility study we demonstrate the full procedure of creating a real 3D object of a mouse embryo nasal capsule, i.e., staining, μCT scanning combined with advanced data processing, and 3D printing.

  15. Metal artifact reduction for CT-based luggage screening.

    PubMed

    Karimi, Seemeen; Martz, Harry; Cosman, Pamela

    2015-01-01

    In aviation security, checked luggage is screened by computed tomography scanning. Metal objects in the bags create artifacts that degrade image quality. Metal artifact reduction (MAR) methods exist, mainly in the medical imaging literature, but they either require knowledge of the materials in the scan or are outlier-rejection methods. Our objective is to improve and evaluate a MAR method we previously introduced that does not require knowledge of the materials in the scan and gives good results on data with large quantities and different kinds of metal. We describe in detail an optimization which de-emphasizes metal projections and has a constraint for beam hardening and scatter. This method isolates and reduces artifacts in an intermediate image, which is then fed to a previously published sinogram replacement method. We evaluate the algorithm for luggage data containing multiple and large metal objects. We define measures of artifact reduction, and compare this method against others in the MAR literature. Metal artifacts were reduced in our test images, even for multiple and large metal objects, without much loss of structure or resolution. Our MAR method outperforms the methods with which we compared it. Our approach does not make assumptions about image content, nor does it discard metal projections.

  16. Blind source separation based on time-frequency morphological characteristics for rigid acoustic scattering by underwater objects

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Li, Xiukun

    2016-06-01

    Separation of the components of rigid acoustic scattering by underwater objects is essential for obtaining the structural characteristics of such objects. To overcome the problem of rigid structures appearing to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects with a time-frequency distribution is deduced. Using a morphological filter, the different characteristics observed in a Wigner-Ville Distribution (WVD) for single auto terms and cross terms can be exploited to remove cross-term interference. By selecting the time and frequency points of the auto-term signal, the accuracy of BSS can be improved. An experimental simulation has been used, with changes in the pulse width of the transmitted signal, the relative amplitude and the time delay parameter, in order to analyze the feasibility of this new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic scattering and rigid scattering exist at the same time. Experimental results confirm that the new method can be used to separate the rigid scattering structure of underwater objects.
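
    A hedged Python sketch of the two time-frequency ingredients named above: a discrete Wigner-Ville distribution and a grey-scale morphological opening that suppresses the oscillatory cross terms so auto-term time-frequency points can be selected. The frequency scaling, structuring-element size and test signal are simplifying assumptions, and the actual source-separation step is not included.

      import numpy as np
      from scipy import ndimage as ndi

      def wigner_ville(x):
          # Discrete WVD: rows = frequency bins, columns = time samples.
          x = np.asarray(x, dtype=complex)
          n = len(x)
          wvd = np.zeros((n, n))
          for m in range(n):
              tau_max = min(m, n - 1 - m)
              tau = np.arange(-tau_max, tau_max + 1)
              kernel = np.zeros(n, dtype=complex)
              kernel[tau % n] = x[m + tau] * np.conj(x[m - tau])   # instantaneous autocorrelation
              wvd[:, m] = np.real(np.fft.fft(kernel))
          return wvd

      t = np.arange(256)
      signal = np.exp(2j * np.pi * 0.1 * t) + np.exp(2j * np.pi * 0.3 * t)
      tfd = wigner_ville(signal)

      # Opening along time keeps the broad auto-term ridges and removes the
      # rapidly oscillating cross term between the two tones.
      cleaned = ndi.grey_opening(np.abs(tfd), size=(1, 9))
      print("energy kept after opening:", round(cleaned.sum() / np.abs(tfd).sum(), 2))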

  17. Labeling Actors and Uncovering Causal Accounts of Their States in Social Networks and Social Media

    ERIC Educational Resources Information Center

    Bui, Ngot P.

    2016-01-01

    The emergence of social networks and social media has resulted in exponential increase in the amount of data that link diverse types of richly structured digital objects e.g., individuals, articles, images, videos, music, etc. Such data are naturally represented as heterogeneous networks with multiple types of objects e.g., actors, video,…

  18. Method of center localization for objects containing concentric arcs

    NASA Astrophysics Data System (ADS)

    Kuznetsova, Elena G.; Shvets, Evgeny A.; Nikolaev, Dmitry P.

    2015-02-01

    This paper proposes a method for automatic center location of objects containing concentric arcs. The method utilizes structure tensor analysis and voting scheme optimized with Fast Hough Transform. Two applications of the proposed method are considered: (i) wheel tracking in video-based system for automatic vehicle classification and (ii) tree growth rings analysis on a tree cross cut image.
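
    A simplified Python sketch of the voting idea: at each edge pixel the local gradient (which approximates the dominant structure-tensor orientation at an edge) points through the centre of concentric arcs, so votes cast along that direction accumulate at the centre. Plain accumulation over a range of radii stands in for the Fast Hough Transform used by the authors; all parameters and the test image are assumptions.

      import numpy as np
      from scipy import ndimage as ndi

      def vote_center(img, sigma=2.0, n_radii=60):
          gx = ndi.gaussian_filter(img.astype(float), sigma, order=(0, 1))
          gy = ndi.gaussian_filter(img.astype(float), sigma, order=(1, 0))
          mag = np.hypot(gx, gy)
          acc = np.zeros_like(mag)
          ys, xs = np.nonzero(mag > 0.2 * mag.max())
          for y, x in zip(ys, xs):
              dy, dx = gy[y, x] / mag[y, x], gx[y, x] / mag[y, x]
              for r in range(1, n_radii):
                  for s in (1, -1):                 # centre may lie on either side
                      cy = int(round(y + s * r * dy))
                      cx = int(round(x + s * r * dx))
                      if 0 <= cy < acc.shape[0] and 0 <= cx < acc.shape[1]:
                          acc[cy, cx] += mag[y, x]
          return np.unravel_index(np.argmax(acc), acc.shape)

      yy, xx = np.mgrid[:150, :180]
      rr = np.hypot(yy - 70, xx - 90)
      rings = np.isin(np.round(rr).astype(int), [20, 35, 50]).astype(float)
      print("estimated centre (row, col):", vote_center(rings))   # expected near (70, 90)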

  19. Real-time object recognition in multidimensional images based on joined extended structural tensor and higher-order tensor decomposition methods

    NASA Astrophysics Data System (ADS)

    Cyganek, Boguslaw; Smolka, Bogdan

    2015-02-01

    In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by pattern projection into the tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal the novelty of this paper is first to build the Extended Structural Tensor representation from the intensity signal that conveys information on signal intensities, as well as on higher-order statistics of the input signals. This way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measurements of the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to high-dimensionality of the input data, tensor based methods require high memory and computational resources. However, recent achievements in the technology of the multi-core microprocessors and graphic cards allows real-time operation of the multidimensional methods as is shown and analyzed in this paper based on real examples of object detection in digital images.
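
    A compact Python sketch of the Higher-Order SVD used to build tensor subspaces: unfold the pattern tensor along each mode, take the leading left singular vectors as the mode factors, and project the tensor into the truncated core. Tensor sizes and ranks are illustrative assumptions, and the recognition (projection-and-distance) stage is omitted.

      import numpy as np

      def unfold(tensor, mode):
          return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

      def hosvd(tensor, ranks):
          factors = [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0][:, :r]
                     for m, r in enumerate(ranks)]
          core = tensor
          for mode, u in enumerate(factors):        # mode-n products with the factor transposes
              core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1),
                                 0, mode)
          return core, factors

      rng = np.random.default_rng(5)
      patterns = rng.normal(size=(16, 16, 3, 20))    # patch x patch x channel x sample
      core, factors = hosvd(patterns, ranks=(8, 8, 3, 10))
      print("core shape:", core.shape)               # (8, 8, 3, 10)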

  20. One Shot Detection with Laplacian Object and Fast Matrix Cosine Similarity.

    PubMed

    Biswas, Sujoy Kumar; Milanfar, Peyman

    2016-03-01

    One shot, generic object detection involves searching for a single query object in a larger target image. Relevant approaches have benefited from features that typically model the local similarity patterns. In this paper, we combine local similarity (encoded by local descriptors) with a global context (i.e., a graph structure) of pairwise affinities among the local descriptors, embedding the query descriptors into a low dimensional but discriminatory subspace. Unlike principal components that preserve global structure of feature space, we actually seek a linear approximation to the Laplacian eigenmap that permits us a locality preserving embedding of high dimensional region descriptors. Our second contribution is an accelerated but exact computation of matrix cosine similarity as the decision rule for detection, obviating the computationally expensive sliding window search. We leverage the power of Fourier transform combined with integral image to achieve superior runtime efficiency that allows us to test multiple hypotheses (for pose estimation) within a reasonably short time. Our approach to one shot detection is training-free, and experiments on the standard data sets confirm the efficacy of our model. Besides, low computation cost of the proposed (codebook-free) object detector facilitates rather straightforward query detection in large data sets including movie videos.
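
    The decision rule itself is simple to state: matrix cosine similarity is the Frobenius inner product of the query and candidate feature matrices normalized by their Frobenius norms. The Python sketch below shows that rule only; the accelerated Fourier/integral-image evaluation and the Laplacian embedding are not reproduced, and the feature matrices are synthetic placeholders.

      import numpy as np

      def matrix_cosine_similarity(a, b):
          return np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

      rng = np.random.default_rng(6)
      query = rng.normal(size=(64, 8))                 # 64 descriptors x 8 dimensions
      candidate = query + 0.1 * rng.normal(size=query.shape)
      unrelated = rng.normal(size=query.shape)
      print("matched:", round(matrix_cosine_similarity(query, candidate), 3),
            " unrelated:", round(matrix_cosine_similarity(query, unrelated), 3))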

  1. Change of spatial information under rescaling: A case study using multi-resolution image series

    NASA Astrophysics Data System (ADS)

    Chen, Weirong; Henebry, Geoffrey M.

    Spatial structure in imagery depends on a complicated interaction between the observational regime and the types and arrangements of entities within the scene that the image portrays. Although block averaging of pixels has commonly been used to simulate coarser resolution imagery, relatively little attention has been paid to the effects of simple rescaling on spatial structure, to explaining those effects, or to possible remedies. Yet, if there are significant differences in spatial variance between rescaled and observed images, it may affect the reliability of retrieved biogeophysical quantities. To investigate these issues, a nested series of high spatial resolution digital imagery was collected at a research site in eastern Nebraska in 2001. An airborne Kodak DCS420IR camera acquired imagery at three altitudes, yielding nominal spatial resolutions ranging from 0.187 m to 1 m. The red and near infrared (NIR) bands of the co-registered image series were normalized using pseudo-invariant features, and the normalized difference vegetation index (NDVI) was calculated. Plots of grain sorghum planted in orthogonal crop row orientations were extracted from the image series. The finest spatial resolution data were then rescaled by averaging blocks of pixels to produce a rescaled image series that closely matched the spatial resolution of the observed image series. Spatial structures of the observed and rescaled image series were characterized using semivariogram analysis. Results for NDVI and its component bands show, as expected, that decreasing spatial resolution leads to decreasing spatial variability and increasing spatial dependence. However, compared to the observed data, the rescaled images contain more persistent spatial structure that exhibits limited variation in both spatial dependence and spatial heterogeneity. Rescaling via simple block averaging fails to consider the effect of scene object shape and extent on spatial information. As the features portrayed by pixels are equally weighted regardless of the shape and extent of the underlying scene objects, the rescaled image retains more of the original spatial information than would occur through direct observation at a coarser sensor spatial resolution. In contrast, for the observed images, due to the effect of the modulation transfer function (MTF) of the imaging system, high frequency features like edges are blurred or lost as the pixel size increases, resulting in greater variation in spatial structure. Successive applications of a low-pass spatial convolution filter are shown to mimic an MTF. Accordingly, it is recommended that such a procedure be applied prior to rescaling by simple block averaging, if insufficient image metadata exist to replicate the net MTF of the imaging system, as might be expected in land cover change analysis studies using historical imagery.
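
    A Python sketch of the recommended procedure, under illustrative assumptions (a uniform 3x3 kernel applied twice as a stand-in for the net MTF, a factor-of-five block average, and a smoothed random field as a stand-in for the NDVI data): low-pass filter first, block-average second, then compare spatial structure with a simple along-row semivariogram.

      import numpy as np
      from scipy import ndimage as ndi

      def rescale_with_mtf(img, factor, smoothing_passes=2):
          low = img.astype(float)
          for _ in range(smoothing_passes):            # repeated low-pass approximates an MTF
              low = ndi.uniform_filter(low, size=3)
          h = (low.shape[0] // factor) * factor
          w = (low.shape[1] // factor) * factor
          blocks = low[:h, :w].reshape(h // factor, factor, w // factor, factor)
          return blocks.mean(axis=(1, 3))              # block averaging

      def semivariogram(img, max_lag=20):
          return np.array([0.5 * np.mean((img[:, lag:] - img[:, :-lag]) ** 2)
                           for lag in range(1, max_lag + 1)])

      rng = np.random.default_rng(7)
      fine = ndi.gaussian_filter(rng.normal(size=(512, 512)), 4)   # stand-in NDVI field
      coarse = rescale_with_mtf(fine, factor=5)
      print("semivariance at max lag, fine vs rescaled:",
            round(semivariogram(fine)[-1], 4), round(semivariogram(coarse)[-1], 4))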

  2. A Multiscale Vision Model applied to analyze EIT images of the solar corona

    NASA Astrophysics Data System (ADS)

    Portier-Fozzani, F.; Vandame, B.; Bijaoui, A.; Maucherat, A. J.; EIT Team

    2001-07-01

    The large dynamic range provided by the SOHO/EIT CCD (1 : 5000) is needed to observe the large EUV zoom of coronal structures from coronal holes up to flares. Histograms show that often a wide dynamic range is present in each image. Extracting hidden structures in the background level requires specific techniques such as the use of the Multiscale Vision Model (MVM, Bijaoui et al., 1998). This method, based on wavelet transformations, optimizes detection of various size objects, however complex they may be. Bijaoui et al. built the Multiscale Vision Model to extract small dynamical structures from noise, mainly for studying galaxies. In this paper, we describe requirements for the use of this method with SOHO/EIT images (calibration, size of the image, dynamics of the subimage, etc.). Two different areas were studied revealing hidden structures: (1) classical coronal mass ejection (CME) formation and (2) a complex group of active regions with its evolution. The aim of this paper is to define carefully the constraints for this new method of imaging the solar corona with SOHO/EIT. Physical analysis derived from multi-wavelength observations will later complete these first results.

  3. Fractal analysis of the susceptibility weighted imaging patterns in malignant brain tumors during antiangiogenic treatment: technical report on four cases serially imaged by 7 T magnetic resonance during a period of four weeks.

    PubMed

    Di Ieva, Antonio; Matula, Christian; Grizzi, Fabio; Grabner, Günther; Trattnig, Siegfried; Tschabitscher, Manfred

    2012-01-01

    The need for new and objective indexes for the neuroradiologic follow-up of brain tumors and for monitoring the effects of antiangiogenic strategies in vivo led us to perform a technical study on four patients who received computerized analysis of tumor-associated vasculature with ultra-high-field (7 T) magnetic resonance imaging (MRI). The image analysis involved the application of susceptibility weighted imaging (SWI) to evaluate vascular structures. Four patients affected by recurrent malignant brain tumors were enrolled in the present study. After the first 7-T SWI MRI procedure, the patients underwent antiangiogenic treatment with bevacizumab. The imaging was repeated every 2 weeks for a period of 4 weeks. The SWI patterns visualized in the three MRI temporal sequences were analyzed by means of a computer-aided fractal-based method to objectively quantify their geometric complexity. In two clinically deteriorating patients we found an increase of the geometric complexity of the space-filling properties of the SWI patterns over time despite the antiangiogenic treatment. In one patient, who showed improvement with the therapy, the fractal dimension of the intratumoral structure decreased, whereas in the fourth patient, no differences were found. The qualitative changes of the intratumoral SWI patterns during a period of 4 weeks were quantified with the fractal dimension. Because SWI patterns are also related to the presence of vascular structures, the quantification of their space-filling properties with fractal dimension seemed to be a valid tool for the in vivo neuroradiologic follow-up of brain tumors. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Buried object remote detection technology for law enforcement

    NASA Astrophysics Data System (ADS)

    del Grande, Nancy K.; Clark, Gregory A.; Durbin, Philip F.; Fields, David J.; Hernandez, Jose E.; Sherwood, Robert J.

    1991-08-01

    A precise airborne temperature-sensing technology to detect buried objects for use by law enforcement is developed. Demonstrations have imaged the sites of buried foundations, walls and trenches; mapped underground waterways and aquifers; and been used to locate underground military objects. The methodology is incorporated in a commercially available, high signal-to-noise, dual-band infrared scanner with real-time, 12-bit digital image processing software and display. The method creates color-coded images based on surface temperature variations of 0.2 °C. Unlike other less-sensitive methods, it maps true (corrected) temperatures by removing the (decoupled) surface emissivity mask equivalent to 1 °C or 2 °C; this mask hinders interpretation of apparent (blackbody) temperatures. Once removed, it is possible to identify surface temperature patterns from small diffusivity changes at buried object sites which heat and cool differently from their surroundings. Objects made of different materials and buried at different depths are identified by their unique spectral, spatial, thermal, temporal, emissivity and diffusivity signatures. The authors have successfully located the sites of buried (inert) simulated land mines 0.1 to 0.2 m deep; sod-covered rock pathways alongside dry ditches, deeper than 0.2 m; pavement covered burial trenches and cemetery structures as deep as 0.8 m; and aquifers more than 6 m and less than 60 m deep. The technology could be adapted for drug interdiction and pollution control. For the former, buried tunnels, underground structures built beneath typical surface structures, roof-tops disguised by jungle canopies, and covered containers used for contraband would be located. For the latter, buried waste containers, sludge migration pathways from faulty containers, and the juxtaposition of groundwater channels, if present, nearby, would be depicted. The precise airborne temperature-sensing technology has a promising potential to detect underground epicenters of smuggling and pollution.

  5. Multiscale Medical Image Fusion in Wavelet Domain

    PubMed Central

    Khare, Ashish

    2013-01-01

    Wavelet transforms have emerged as a powerful tool in image fusion. However, the study and analysis of medical image fusion is still a challenging area of research. Therefore, in this paper, we propose a multiscale fusion of multimodal medical images in wavelet domain. Fusion of medical images has been performed at multiple scales varying from minimum to maximum level using maximum selection rule which provides more flexibility and choice to select the relevant fused images. The experimental analysis of the proposed method has been performed with several sets of medical images. Fusion results have been evaluated subjectively and objectively with existing state-of-the-art fusion methods which include several pyramid- and wavelet-transform-based fusion methods and principal component analysis (PCA) fusion method. The comparative analysis of the fusion results has been performed with edge strength (Q), mutual information (MI), entropy (E), standard deviation (SD), blind structural similarity index metric (BSSIM), spatial frequency (SF), and average gradient (AG) metrics. The combined subjective and objective evaluations of the proposed fusion method at multiple scales showed the effectiveness and goodness of the proposed approach. PMID:24453868
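
    A minimal Python sketch of wavelet-domain fusion with the maximum-selection rule using PyWavelets: decompose both inputs, keep the detail coefficient of larger magnitude at each position, average the approximation band, and reconstruct. The wavelet, decomposition level and input images are illustrative choices, not the paper's exact configuration.

      import numpy as np
      import pywt

      def fuse_wavelet(img_a, img_b, wavelet="db2", level=3):
          ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
          cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
          fused = [0.5 * (ca[0] + cb[0])]                    # approximation band: average
          for da, db in zip(ca[1:], cb[1:]):                 # detail bands: max-abs selection
              fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                 for a, b in zip(da, db)))
          return pywt.waverec2(fused, wavelet)

      rng = np.random.default_rng(8)
      modality_a = rng.random((128, 128))                    # stand-ins for two modalities
      modality_b = rng.random((128, 128))
      print("fused image shape:", fuse_wavelet(modality_a, modality_b).shape)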

  6. Extraction and Analysis of Major Autumn Crops in Jingxian County Based on Multi - Temporal gf - 1 Remote Sensing Image and Object-Oriented

    NASA Astrophysics Data System (ADS)

    Ren, B.; Wen, Q.; Zhou, H.; Guan, F.; Li, L.; Yu, H.; Wang, Z.

    2018-04-01

    The purpose of this paper is to provide decision support for the adjustment and optimization of the crop planting structure in Jingxian County. An object-oriented information extraction method is used to extract corn and cotton in Jingxian County of Hengshui City, Hebei Province, based on multi-period GF-1 16-meter images. The best time for data extraction was determined by analyzing the spectral characteristics of corn and cotton at different growth stages, based on multi-period GF-1 16-meter images, phenological data, and field survey data. The results showed that the total classification accuracy of corn and cotton was up to 95.7 %, the producer accuracy was 96 % and 94 % respectively, and the user precision was 95.05 % and 95.9 % respectively, which satisfied the demand of crop monitoring applications. Therefore, combining multi-period high-resolution images with object-oriented classification can effectively extract the large-scale distribution of crops, providing a convenient and effective technical means for crop monitoring.

  7. Simulation of image formation in x-ray coded aperture microscopy with polycapillary optics.

    PubMed

    Korecki, P; Roszczynialski, T P; Sowa, K M

    2015-04-06

    In x-ray coded aperture microscopy with polycapillary optics (XCAMPO), the microstructure of focusing polycapillary optics is used as a coded aperture and enables depth-resolved x-ray imaging at a resolution better than the focal spot dimensions. Improvements in the resolution and development of 3D encoding procedures require a simulation model that can predict the outcome of XCAMPO experiments. In this work we introduce a model of image formation in XCAMPO which enables calculation of XCAMPO datasets for arbitrary positions of the object relative to the focal plane as well as to incorporate optics imperfections. In the model, the exit surface of the optics is treated as a micro-structured x-ray source that illuminates a periodic object. This makes it possible to express the intensity of XCAMPO images as a convolution series and to perform simulations by means of fast Fourier transforms. For non-periodic objects, the model can be applied by enforcing artificial periodicity and setting the spatial period larger than the field of view. Simulations are verified by comparison with experimental data.
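
    The computational point can be made concrete with a short Python sketch: if the exit surface of the optic is treated as a structured source and the object is periodic, each term of the image-formation series is a convolution that can be evaluated with FFTs. A single representative term is computed below with synthetic source and object patterns (both are placeholders, not measured data).

      import numpy as np
      from scipy.signal import fftconvolve

      rng = np.random.default_rng(9)
      source = rng.random((64, 64))                 # hypothetical polycapillary microstructure
      xx = np.tile(np.arange(64), (64, 1))
      object_period = 0.5 + 0.5 * np.cos(2 * np.pi * xx / 16)   # simple periodic object

      term = fftconvolve(source, object_period, mode="same")    # one convolution term via FFT
      print("term shape:", term.shape, " mean intensity:", round(float(term.mean()), 2))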

  8. Unsupervised Detection of Planetary Craters by a Marked Point Process

    NASA Technical Reports Server (NTRS)

    Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.

    2011-01-01

    With the launch of several planetary missions in the last decade, a large number of planetary images is being acquired. Preferably, automatic and robust processing techniques need to be used for data analysis because of the huge amount of acquired data. Here, the aim is to achieve a robust and general methodology for crater detection. A novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered as a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated by using reversible-jump Markov chain Monte Carlo dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: Marked point processes represent a very promising current approach in stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with an excellent robustness to noise. The proposed method for crater detection has several feasible applications. One such application area is image registration by matching the extracted features.

  9. Millimeter wave scattering characteristics and radar cross section measurements of common roadway objects

    NASA Astrophysics Data System (ADS)

    Zoratti, Paul K.; Gilbert, R. Kent; Majewski, Ronald; Ference, Jack

    1995-12-01

    Development of automotive collision warning systems has progressed rapidly over the past several years. A key enabling technology for these systems is millimeter-wave radar. This paper addresses a very critical millimeter-wave radar sensing issue for automotive radar, namely the scattering characteristics of common roadway objects such as vehicles, roadsigns, and bridge overpass structures. The data presented in this paper were collected on ERIM's Fine Resolution Radar Imaging Rotary Platform Facility and processed with ERIM's image processing tools. The value of this approach is that it provides system developers with a 2D radar image from which information about individual point scatterers `within a single target' can be extracted. This information on scattering characteristics will be utilized to refine threat assessment processing algorithms and automotive radar hardware configurations. (1) By evaluating the scattering characteristics identified in the radar image, radar signatures as a function of aspect angle for common roadway objects can be established. These signatures will aid in the refinement of threat assessment processing algorithms. (2) Utilizing ERIM's image manipulation tools, total RCS and RCS as a function of range and azimuth can be extracted from the radar image data. This RCS information will be essential in defining the operational envelope (e.g. dynamic range) within which any radar sensor hardware must be designed.

  10. Recent advances in standards for collaborative Digital Anatomic Pathology

    PubMed Central

    2011-01-01

    Context: Collaborative Digital Anatomic Pathology refers to the use of information technology that supports the creation and sharing or exchange of information, including data and images, during the complex workflow performed in an Anatomic Pathology department from specimen reception to report transmission and exploitation. Collaborative Digital Anatomic Pathology can only be fully achieved using medical informatics standards. The goal of the international Integrating the Healthcare Enterprise (IHE) initiative is precisely specifying how medical informatics standards should be implemented to meet specific health care needs and making systems integration more efficient and less expensive. Objective: To define the best use of medical informatics standards in order to share and exchange machine-readable structured reports and their evidence (including whole slide images) within hospitals and across healthcare facilities. Methods: Specific working groups dedicated to Anatomic Pathology within multiple standards organizations defined standard-based data structures for Anatomic Pathology reports and images as well as informatic transactions in order to integrate Anatomic Pathology information into the electronic healthcare enterprise. Results: The DICOM supplements 122 and 145 provide flexible object information definitions dedicated respectively to specimen description and Whole Slide Image acquisition, storage and display. The content profile “Anatomic Pathology Structured Report” (APSR) provides standard templates for structured reports in which textual observations may be bound to digital images or regions of interest. Anatomic Pathology observations are encoded using an international controlled vocabulary defined by the IHE Anatomic Pathology domain that is currently being mapped to SNOMED CT concepts. Conclusion: Recent advances in standards for Collaborative Digital Anatomic Pathology are a unique opportunity to share or exchange Anatomic Pathology structured reports that are interoperable at an international level. The use of the machine-readable APSR format supports the development of decision support as well as secondary use of Anatomic Pathology information for epidemiology or clinical research. PMID:21489187

  11. Database Technology Activities and Assessment for Defense Modeling and Simulation Office (DMSO) (August 1991-November 1992). A Documented Briefing

    DTIC Science & Technology

    1994-01-01

    databases and identifying new data entities, data elements, and relationships. ... Standard data naming conventions, schema, and definition processes ... management system. The use of such a tool could offer: (1) structured support for representation of objects and their relationships to each other (and ... their relationships to related multimedia objects such as an engineering drawing of the tank object or a satellite image that contains the installation

  12. Achilles tendons from decorin- and biglycan-null mouse models have inferior mechanical and structural properties predicted by an image-based empirical damage model

    PubMed Central

    Gordon, J.A.; Freedman, B.R.; Zuskov, A.; Iozzo, R.V.; Birk, D.E.; Soslowsky, L.J.

    2015-01-01

    Achilles tendons are a common source of pain and injury, and their pathology may originate from aberrant structure-function relationships. Small leucine-rich proteoglycans (SLRPs) influence mechanical and structural properties in a tendon-specific manner. However, their roles in the Achilles tendon have not been defined. The objective of this study was to evaluate the mechanical and structural differences observed in mouse Achilles tendons lacking class I SLRPs: either decorin or biglycan. In addition, empirical modeling techniques based on mechanical and image-based measures were employed. Achilles tendons from decorin-null (Dcn−/−) and biglycan-null (Bgn−/−) C57BL/6 female mice (N=102) were used. Each tendon underwent a dynamic mechanical testing protocol including simultaneous polarized light image capture to evaluate both structural and mechanical properties of each Achilles tendon. An empirical damage model was adapted for application to genetic variation and for use with image-based structural properties to predict tendon dynamic mechanical properties. We found that Achilles tendons lacking decorin and biglycan had inferior mechanical and structural properties that were age dependent, and that simple empirical models, based on previously described damage models, were predictive of Achilles tendon dynamic modulus in both decorin- and biglycan-null mice. PMID:25888014

  13. Achilles tendons from decorin- and biglycan-null mouse models have inferior mechanical and structural properties predicted by an image-based empirical damage model.

    PubMed

    Gordon, J A; Freedman, B R; Zuskov, A; Iozzo, R V; Birk, D E; Soslowsky, L J

    2015-07-16

    Achilles tendons are a common source of pain and injury, and their pathology may originate from aberrant structure-function relationships. Small leucine-rich proteoglycans (SLRPs) influence mechanical and structural properties in a tendon-specific manner. However, their roles in the Achilles tendon have not been defined. The objective of this study was to evaluate the mechanical and structural differences observed in mouse Achilles tendons lacking class I SLRPs: either decorin or biglycan. In addition, empirical modeling techniques based on mechanical and image-based measures were employed. Achilles tendons from decorin-null (Dcn(-/-)) and biglycan-null (Bgn(-/-)) C57BL/6 female mice (N=102) were used. Each tendon underwent a dynamic mechanical testing protocol including simultaneous polarized light image capture to evaluate both structural and mechanical properties of each Achilles tendon. An empirical damage model was adapted for application to genetic variation and for use with image-based structural properties to predict tendon dynamic mechanical properties. We found that Achilles tendons lacking decorin and biglycan had inferior mechanical and structural properties that were age dependent, and that simple empirical models, based on previously described damage models, were predictive of Achilles tendon dynamic modulus in both decorin- and biglycan-null mice. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Disocclusion: a variational approach using level lines.

    PubMed

    Masnou, Simon

    2002-01-01

    Object recognition, robot vision, image and film restoration may require the ability to perform disocclusion. We call disocclusion the recovery of occluded areas in a digital image by interpolation from their vicinity. It is shown in this paper how disocclusion can be performed by means of the level-lines structure, which offers a reliable, complete and contrast-invariant representation of images. Level-lines based disocclusion yields a solution that may have strong discontinuities. The proposed method is compatible with Kanizsa's amodal completion theory.

  15. Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding

    2018-02-01

    Photoacoustic (PA) signal of an ideal optical absorb particle is a single N-shape wave. PA signals of a complicated biological tissue can be considered as the combination of individual N-shape waves. However, the N-shape wave basis not only complicates the subsequent work, but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing applied directly to the raw signals, including deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) which is measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals with two constraints, positive polarity and spectrum consistence. With our proposed method, the reconstructed PA images yield more detailed structural information. Micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood vessel model. In the future, our study might hold the potential for clinical PA imaging as it can help to distinguish micro-structures from the optimized images and even measure the size of objects from deconvolved signals.
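
    The deconvolution step described above can be illustrated with a short, hedged sketch: a frequency-domain Wiener deconvolution of one raw PA trace by a previously measured point spread function. The function name, the noise-regularization constant and the synthetic signals below are illustrative assumptions rather than the authors' implementation, and the EMD re-shaping stage is omitted.

      import numpy as np

      def wiener_deconvolve(raw_signal, psf, noise_level=1e-2):
          """Frequency-domain Wiener deconvolution of a raw PA trace by a measured PSF.
          noise_level is an assumed regularization constant, not a value from the paper."""
          n = len(raw_signal)
          H = np.fft.rfft(np.fft.ifftshift(psf), n)          # transfer function of the centered PSF
          Y = np.fft.rfft(raw_signal, n)                     # spectrum of the recorded trace
          W = np.conj(H) / (np.abs(H) ** 2 + noise_level)    # Wiener filter
          return np.fft.irfft(Y * W, n)

      # Illustrative use with synthetic data: an N-shape source blurred by a Gaussian PSF.
      t = np.linspace(-1.0, 1.0, 512)
      n_shape = -t * np.exp(-(t / 0.05) ** 2)                # idealized N-shape PA wavelet
      psf = np.exp(-(t / 0.02) ** 2)                         # stand-in for the measured system PSF
      raw = np.convolve(n_shape, psf, mode="same") + 0.01 * np.random.randn(t.size)
      restored = wiener_deconvolve(raw, psf)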

  16. [Design of a Component and Transmission Imaging Spectrometer].

    PubMed

    Sun, Bao-peng; Zhang, Yi; Yue, Jiang; Han, Jing; Bai, Lian-fa

    2015-05-01

    In reflection-based imaging spectrometers, multiple reflection (diffraction) produces stray light and assembly is difficult. To address that, a high performance transmission spectral imaging system based on general optical components was developed. Owing to its simple structure, the system is easy to assemble. It has wide applicability and low cost compared to traditional imaging spectrometers. All components in the design can be replaced according to the application, giving a high degree of freedom. In order to reduce the influence of stray light, a method based on transmission was introduced. Two sets of optical systems with different objective lenses were simulated; parameters such as distortion, MTF and aberration were analyzed and optimized in the ZEMAX software. By comparing the performance of the system with 25 mm and 50 mm objective lenses, it can be concluded that replacing the telescope lens has little effect on the imaging quality of the whole system. An imaging spectrometer was successfully developed according to the design parameters. The telescope lens uses a double Gauss structure, which is beneficial for reducing field curvature and distortion. As the fabrication of transmission-type plane diffraction gratings is mature, the grating can be used without modification and is easy to assemble, so it is used as the beam-splitting component of the imaging spectrometer. In addition, the assembled imaging spectrometer was tested for spectral resolution and distortion. The results demonstrate that the system has good distortion control and a spectral resolution of 2 nm. These data satisfy the design requirements, and the spectrum of a deuterium lamp obtained through the calibrated system is close to ideal.

  17. Structure Constraints in a Constraint-Based Planner

    NASA Technical Reports Server (NTRS)

    Pang, Wan-Lin; Golden, Keith

    2004-01-01

    In this paper we report our work on a new constraint domain, where variables can take structured values. Earth-science data processing (ESDP) is a planning domain that requires the ability to represent and reason about complex constraints over structured data, such as satellite images. This paper reports on a constraint-based planner for ESDP and similar domains. We discuss our approach for translating a planning problem into a constraint satisfaction problem (CSP) and for representing and reasoning about structured objects and constraints over structures.

  18. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach

    PubMed Central

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in the hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, beside its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653

  19. A software package to improve image quality and isolation of objects of interest for quantitative stereology studies of rat hepatocarcinogenesis.

    PubMed

    Xu, Yihua; Pitot, Henry C

    2006-03-01

    In the studies of quantitative stereology of rat hepatocarcinogenesis, we have used image analysis technology (automatic particle analysis) to obtain data such as liver tissue area, size and location of altered hepatic focal lesions (AHF), and nuclei counts. These data are then used for three-dimensional estimation of AHF occurrence and nuclear labeling index analysis. These are important parameters for quantitative studies of carcinogenesis, for screening and classifying carcinogens, and for risk estimation. To take such measurements, structures or cells of interest should be separated from the other components based on the difference of color and density. Common background problems seen on the captured sample image such as uneven light illumination or color shading can cause severe problems in the measurement. Two application programs (BK_Correction and Pixel_Separator) have been developed to solve these problems. With BK_Correction, common background problems such as incorrect color temperature setting, color shading, and uneven light illumination background, can be corrected. With Pixel_Separator different types of objects can be separated from each other in relation to their color, such as seen with different colors in immunohistochemically stained slides. The resultant images of such objects separated from other components are then ready for particle analysis. Objects that have the same darkness but different colors can be accurately differentiated in a grayscale image analysis system after application of these programs.
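
    The two corrections described above can be sketched in a hedged way: a flat-field style removal of uneven illumination, and a simple color-distance rule for isolating pixels of one stain color. The function names, the Gaussian smoothing scale and the color tolerance are illustrative assumptions and do not reproduce the BK_Correction or Pixel_Separator implementations.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def correct_background(image, sigma=50):
          """Estimate a slowly varying illumination field of a 2D grayscale image with a
          large Gaussian blur and divide it out (flat-field style correction).
          sigma is an assumed smoothing scale, not a value from the software package."""
          background = gaussian_filter(image.astype(float), sigma)
          return image / np.maximum(background, 1e-6) * background.mean()

      def separate_by_color(rgb, target_rgb, tolerance=40.0):
          """Keep pixels whose RGB value lies within a Euclidean distance of a target
          stain color (hypothetical parameters); everything else is set to black."""
          distance = np.linalg.norm(rgb.astype(float) - np.asarray(target_rgb, float), axis=-1)
          mask = distance < tolerance
          separated = np.zeros_like(rgb)
          separated[mask] = rgb[mask]
          return separated, mask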

  20. Approach for scene reconstruction from the analysis of a triplet of still images

    NASA Astrophysics Data System (ADS)

    Lechat, Patrick; Le Mestre, Gwenaelle; Pele, Danielle

    1997-03-01

    Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a big challenge for future interactive audiovisual services based on 3D content manipulation such as virtual vests, 3D teleconferencing and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referring to a given view is built by using a fusion criterion taking into account depth coherency, visibility constraints and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve the unreliable depth values, a two-step segmentation algorithm using both the depth map and the graylevel image is applied to extract the object masks. First, an edge detection segments the luminance image into regions and a multimodal thresholding method selects depth classes from the depth map. Then the regions are merged and labelled with the different depth class numbers by using a coherence test on depth values, according to the rate of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML 1.0 tools.

  1. High resolution microscopy of the lipid layer of the tear film.

    PubMed

    King-Smith, P Ewen; Nichols, Jason J; Braun, Richard J; Nichols, Kelly K

    2011-10-01

    Tear film evaporation is controlled by the lipid layer and is an important factor in dry eye conditions. Because the barrier to evaporation depends on the structure of the lipid layer, a high resolution microscope has been constructed to study the lipid layer in dry and in normal eyes. The microscope incorporates the following features. First, a long working distance microscope objective is used with a high numerical aperture and resolution. Second, because such a high resolution objective has limited depth of focus, 2000 images are recorded with a video camera over a 20-sec period, with the expectation that some images will be in focus. Third, illumination is from a stroboscopic light source having a brief flash duration, to avoid blurring from movement of the lipid layer. Fourth, the image is in focus when the edge of the image is sharp - this feature is used to select images in good focus. Fifth, an aid is included to help align the cornea at normal incidence to the axis of the objective so that the whole lipid image can be in focus. High resolution microscopy has the potential to elucidate several characteristics of the normal and abnormal lipid layer, including different objects and backgrounds, changes in the blink cycle, stability and fluidity, dewetting, gel-like properties and possible relation to lipid domains. It is expected that high resolution microscopy of the lipid layer will provide information about the mechanisms of dry eye disorders. Illustrative results are presented, derived from over 10,000 images from 375 subjects.
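
    The focus-selection idea in this record (keeping only the sharpest of the roughly 2000 recorded frames) is commonly implemented with a simple sharpness score; the sketch below uses the variance of the Laplacian as an assumed stand-in for the authors' edge-sharpness criterion.

      import numpy as np
      from scipy.ndimage import laplace

      def focus_score(frame):
          """Variance of the Laplacian: a standard image sharpness measure (an assumed
          stand-in; the paper judges focus from edge sharpness at the image border)."""
          return laplace(frame.astype(float)).var()

      def best_focused(frames, keep=10):
          """Return indices of the 'keep' sharpest 2D frames in a recorded sequence."""
          scores = np.array([focus_score(f) for f in frames])
          return np.argsort(scores)[::-1][:keep]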

  2. Fabrication of an X-Ray Imaging Detector

    NASA Technical Reports Server (NTRS)

    Alcorn, G. E.; Burgess, A. S.

    1986-01-01

    X-ray detector array, fabricated from n-doped silicon wafer, yields mosaic image of object emitting in the 1- to 30-keV range. In proposed fabrication technique, thin walls of diffused n+ dopant divide wafer into pixels of rectangular cross section, each containing central electrode of thermally migrated p-type metal. This pnn+ arrangement reduces leakage current by preventing transistor action caused by pnp structure of earlier version.

  3. Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene.

    PubMed

    Li, Jun; Mei, Xue; Prokhorov, Danil; Tao, Dacheng

    2017-03-01

    Hierarchical neural networks have been shown to be effective in learning representative image features and recognizing object classes. However, most existing networks combine the low/middle level cues for classification without accounting for any spatial structures. For applications such as understanding a scene, how the visual cues are spatially distributed in an image becomes essential for successful analysis. This paper extends the framework of deep neural networks by accounting for the structural cues in the visual signals. In particular, two kinds of neural networks have been proposed. First, we develop a multitask deep convolutional network, which simultaneously detects the presence of the target and the geometric attributes (location and orientation) of the target with respect to the region of interest. Second, a recurrent neuron layer is adopted for structured visual detection. The recurrent neurons can deal with the spatial distribution of visible cues belonging to an object whose shape or structure is difficult to explicitly define. Both the networks are demonstrated by the practical task of detecting lane boundaries in traffic scenes. The multitask convolutional neural network provides auxiliary geometric information to help the subsequent modeling of the given lane structures. The recurrent neural network automatically detects lane boundaries, including those areas containing no marks, without any explicit prior knowledge or secondary modeling.
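
    A minimal sketch of the multitask idea described above is shown below in PyTorch: a shared convolutional trunk feeding one head that predicts target presence and another that regresses geometric attributes. The layer sizes, names and output dimensions are illustrative assumptions, not the authors' architecture.

      import torch
      import torch.nn as nn

      class MultiTaskNet(nn.Module):
          """Toy multitask network: shared trunk, a presence-classification head and a
          geometry-regression head (location and orientation). Sizes are assumptions."""
          def __init__(self):
              super().__init__()
              self.trunk = nn.Sequential(
                  nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),
              )
              self.presence_head = nn.Linear(32, 1)   # logit: is the target present?
              self.geometry_head = nn.Linear(32, 3)   # e.g. x, y offset and orientation

          def forward(self, x):
              features = self.trunk(x).flatten(1)
              return self.presence_head(features), self.geometry_head(features)

      # Illustrative forward pass on a dummy region of interest.
      logit, geometry = MultiTaskNet()(torch.randn(1, 3, 64, 64))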

  4. Advanced Image Enhancement Method for Distant Vessels and Structures in Capsule Endoscopy

    PubMed Central

    Pedersen, Marius

    2017-01-01

    This paper proposes an advanced method for contrast enhancement of capsule endoscopic images, with the main objective of obtaining sufficient information about the vessels and structures in more distant (or darker) parts of capsule endoscopic images. The proposed method (PM) combines two algorithms for the enhancement of darker and brighter areas of capsule endoscopic images, respectively. The half-unit weighted-bilinear algorithm (HWB) proposed in our previous work is used to enhance darker areas according to the darker map content of its HSV's component V. Enhancement of brighter areas is achieved with the novel threshold weighted-bilinear algorithm (TWB), developed to avoid overexposure and enlargement of specular highlight spots in such areas while preserving the hue. The TWB performs enhancement operations following a gradual increment of the brightness of the brighter map content of its HSV's component V. In other words, the TWB decreases its averaged weights as the intensity content of the component V increases. Extensive experimental demonstrations were conducted, and, based on evaluation of the reference and PM-enhanced images, a gastroenterologist (Ø.H.) concluded that the PM-enhanced images were the best ones in terms of the information about the vessels, the contrast in the images, and the visibility of the structures in more distant parts of the capsule endoscopy images. PMID:29225668

  5. Characterization of steel rebar spacing using synthetic aperture radar imaging

    NASA Astrophysics Data System (ADS)

    Hu, Jie; Tang, Qixiang; Twumasi, Jones Owusu; Yu, Tzuyang

    2018-03-01

    Steel rebars are a vital component in reinforced concrete (RC) and prestressed concrete structures since they provide mechanical functions to those structures. Damage to steel rebars can lead to the premature failure of concrete structures. Characterization of steel rebars using nondestructive evaluation (NDE) offers engineers and decision makers important information for effective repair of aging concrete structures. Among existing NDE techniques, microwave/radar NDE has been proven to be a promising technique for surface and subsurface sensing of concrete structures. The objective of this paper is to use microwave/radar NDE to characterize steel rebar grids in free space, as a basis for the subsurface sensing of steel rebars inside RC structures. A portable 10-GHz radar system based on synthetic aperture radar (SAR) imaging was used in this paper. The effect of rebar grid spacing was considered and used to define the subsurface steel rebar grids. Five rebar grid spacings were used: 12.7 cm (5 in.), 17.78 cm (7 in.), 22.86 cm (9 in.), 27.94 cm (11 in.), and 33.02 cm (13 in.). #3 rebars were used in all grid specimens. All SAR images were collected inside an anechoic chamber. It was found that SAR images can successfully capture the change of rebar grid spacing and can be used to quantify the spacing of rebar grids. Empirical models were proposed to estimate actual rebar spacing and contour area using SAR images.

  6. Forward scattering effects on muon imaging

    NASA Astrophysics Data System (ADS)

    Gómez, H.; Gibert, D.; Goy, C.; Jourde, K.; Karyotakis, Y.; Katsanevas, S.; Marteau, J.; Rosas-Carbajal, M.; Tonazzo, A.

    2017-12-01

    Muon imaging is one of the most promising non-invasive techniques for density structure scanning, especially for large objects reaching the kilometre scale. It already has interesting applications in different fields like geophysics or nuclear safety, and has been proposed for others like engineering or archaeology. One of the approaches of this technique is based on the well-known radiography principle, reconstructing the incident direction of the detected muons after crossing the studied objects. In this case, muons detected after a previous forward scattering on the object surface represent an irreducible background noise, leading to a bias in the measurement and consequently in the reconstruction of the object's mean density. Therefore, a prior characterization of this effect provides valuable information for correcting the obtained results. Although the muon scattering process has already been described theoretically, a general study of this process has been carried out based on Monte Carlo simulations, resulting in a versatile tool to evaluate this effect for different object geometries and compositions. As an example, these simulations have been used to evaluate the impact of forward scattered muons on two different applications of muon imaging: archaeology and volcanology, revealing a significant impact in the latter case. The general way in which all of these tools have been developed will allow equivalent studies to be carried out in the future for other muon imaging applications following the same procedure.

  7. New technologies lead to a new frontier: cognitive multiple data representation

    NASA Astrophysics Data System (ADS)

    Buffat, S.; Liege, F.; Plantier, J.; Roumes, C.

    2005-05-01

    The increasing number and complexity of operational sensors (radar, infrared, hyperspectral, etc.) and the availability of huge amounts of data lead to more and more sophisticated information presentations. But one key element of the IMINT line cannot be improved beyond the initial system specification: the operator. In order to overcome this issue, we have to better understand human visual object representation. Object recognition theories in human vision balance between the matching of 2D template representations with viewpoint-dependent information and a viewpoint-invariant system based on structural descriptions. Spatial frequency content is relevant due to early vision filtering. Orientation in depth is an important variable to challenge object constancy. Three objects, seen from three different points of view in a natural environment, made up the original images in this study. Test images were a combination of spatial-frequency-filtered original images and an additive contrast level of white noise. In the first experiment, the observer's task was a same versus different forced choice with spatial alternative. Test images had the same noise level in a presentation row. The discrimination threshold was determined by modifying the white noise contrast level by means of an adaptive method. In the second experiment, a repetition blindness paradigm was used to further investigate the viewpoint effect on object recognition. The results shed some light on the human visual system processing of objects displayed under different physical descriptions. This is an important achievement because targets which do not always match the physical properties of usual visual stimuli can increase operational workload.

  8. Sparse Reconstruction for Temperature Distribution Using DTS Fiber Optic Sensors with Applications in Electrical Generator Stator Monitoring

    PubMed Central

    Bazzo, João Paulo; Pipa, Daniel Rodrigues; da Silva, Erlon Vagner; Martelli, Cicero; Cardozo da Silva, Jean Carlos

    2016-01-01

    This paper presents an image reconstruction method to monitor the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings of fiber optic distributed sensors (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, which was constructed by finite element simulation with a multi-physical model. Due to the difficulty of reproducing insulation faults in a real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times when compared to the DTS spatial resolution. In addition, satisfactory results were also obtained in detecting hotspots of only 5 cm. The application of the proposed algorithm for thermal imaging of generator stators can contribute to the identification of insulation faults in early stages, thereby avoiding catastrophic damage to the structure. PMID:27618040
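
    The reconstruction idea in this record (combining atoms of a simulated hotspot dictionary to explain the DTS readings) can be sketched as a sparse non-negative regression; the dictionary, the reading and the solver settings below are synthetic placeholders, not the authors' finite element dictionary or algorithm.

      import numpy as np
      from sklearn.linear_model import Lasso

      # Hypothetical dictionary: each column is the temperature profile along the fiber
      # produced by one candidate hotspot (in the paper, precomputed by FE simulation).
      n_samples, n_atoms = 200, 500
      rng = np.random.default_rng(0)
      positions = np.linspace(0.0, 1.0, n_samples)
      centers = rng.uniform(0.0, 1.0, n_atoms)
      D = np.exp(-((positions[:, None] - centers[None, :]) / 0.02) ** 2)

      # Synthetic DTS reading: two hotspots plus sensor noise (placeholder data).
      y = D[:, 10] + 0.7 * D[:, 200] + 0.01 * rng.standard_normal(n_samples)

      # Sparse, non-negative reconstruction: only a few dictionary atoms stay active.
      coder = Lasso(alpha=0.005, positive=True, max_iter=10000).fit(D, y)
      active_atoms = np.flatnonzero(coder.coef_ > 1e-3)
      thermal_profile = D @ coder.coef_   # reconstructed temperature distribution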

  9. Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens.

    PubMed

    Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N; Zawadzki, Robert J; Sarunic, Marinko V

    2015-08-24

    Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an image-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images.

  10. Image interpolation via regularized local linear regression.

    PubMed

    Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang

    2011-12-01

    The linear regression model is a very attractive tool to design effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, it is shown that interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, replacing the OLS error norm with the moving least squares (MLS) error norm leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l2-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness preserving constraint. The optimal model parameters can be obtained with a closed-form solution by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with the state-of-the-art interpolation algorithms, especially in image edge structure preservation. © 2011 IEEE
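
    The closed-form estimator can be illustrated with a simplified, locally weighted ridge regression: fit an affine model to the known neighbors of a missing pixel with distance-based weights and an l2 penalty, then evaluate it at the missing location. This hedged sketch omits the MLS error norm and the manifold-regularization term of the actual RLLR formulation; the parameter values are assumptions.

      import numpy as np

      def local_ridge_interpolate(coords, values, query, sigma=1.0, lam=0.1):
          """Locally weighted ridge regression (simplified stand-in for RLLR).
          coords: (N, 2) positions of known pixels, values: (N,) intensities,
          query: (2,) location to interpolate; sigma and lam are assumed parameters."""
          X = np.hstack([np.ones((len(coords), 1)), coords - query])  # local affine model
          d2 = np.sum((coords - query) ** 2, axis=1)
          W = np.diag(np.exp(-d2 / (2.0 * sigma ** 2)))               # distance-based weights
          A = X.T @ W @ X + lam * np.eye(X.shape[1])                  # regularized normal equations
          beta = np.linalg.solve(A, X.T @ W @ values)
          return beta[0]                                              # model value at the query

      # Example: interpolate the center of a 2x2 neighborhood.
      coords = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
      values = np.array([10.0, 12.0, 14.0, 16.0])
      print(local_ridge_interpolate(coords, values, np.array([0.5, 0.5])))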

  11. Cascaded plasmonic superlens for far-field imaging with magnification at visible wavelength.

    PubMed

    Li, Huiyu; Fu, Liwei; Frenner, Karsten; Osten, Wolfgang

    2018-04-16

    We experimentally demonstrate a novel design of a cascaded plasmonic superlens, which can directly image subwavelength objects with magnification in the far field at visible wavelengths. The lens consists of two cascaded plasmonic slabs. One is a plasmonic metasurface used for near field coupling, and the other one is a planar plasmonic lens used for phase compensation and thus image magnification. First, we show numerical calculations about the performance of the lens. Based on these results we then describe the fabrication of both sub-structures and their combination. Finally, we demonstrate imaging performance of the lens for a subwavelength double-slit object as an example. The fabricated superlens exhibits a lateral resolution down to 180 nm at a wavelength of 640 nm, as predicted by numerical calculations. This might be the first experimental demonstration in which a planar plasmonic lens is employed for near-field image magnification. Our results could open a way for designing and fabricating novel miniaturized plasmonic superlenses in the future.

  12. Target-locking acquisition with real-time confocal (TARC) microscopy.

    PubMed

    Lu, Peter J; Sims, Peter A; Oki, Hidekazu; Macarthur, James B; Weitz, David A

    2007-07-09

    We present a real-time target-locking confocal microscope that follows an object moving along an arbitrary path, even as it simultaneously changes its shape, size and orientation. This Target-locking Acquisition with Realtime Confocal (TARC) microscopy system integrates fast image processing and rapid image acquisition using a Nipkow spinning-disk confocal microscope. The system acquires a 3D stack of images, performs a full structural analysis to locate a feature of interest, moves the sample in response, and then collects the next 3D image stack. In this way, data collection is dynamically adjusted to keep a moving object centered in the field of view. We demonstrate the system's capabilities by target-locking freely-diffusing clusters of attractive colloidal particles, and actively transported quantum dots (QDs) endocytosed into live cells free to move in three dimensions, for several hours. During this time, both the colloidal clusters and live cells move distances several times the length of the imaging volume.
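
    The target-locking loop summarized above (image, locate, re-center, repeat) can be sketched as follows; the intensity centroid stands in for the full structural analysis, and move_stage is a hypothetical stage interface, so this is illustrative rather than the TARC implementation.

      import numpy as np

      def locate_feature(stack):
          """Intensity-weighted centroid (z, y, x) of a 3D stack; a stand-in for the
          full structural analysis performed by the real system."""
          weights = stack.astype(float)
          grids = np.indices(stack.shape)
          return np.array([(g * weights).sum() / weights.sum() for g in grids])

      def target_lock_step(acquire_stack, move_stage, voxel_size):
          """One locking iteration. acquire_stack() returns a 3D numpy array;
          move_stage(dz, dy, dx) is a hypothetical stage interface in physical units;
          voxel_size converts voxel offsets to those units."""
          stack = acquire_stack()
          center = (np.array(stack.shape) - 1) / 2.0
          offset_voxels = locate_feature(stack) - center
          move_stage(*(offset_voxels * np.asarray(voxel_size)))
          return stack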

  13. Changes in quantitative 3D shape features of the optic nerve head associated with age

    NASA Astrophysics Data System (ADS)

    Christopher, Mark; Tang, Li; Fingert, John H.; Scheetz, Todd E.; Abramoff, Michael D.

    2013-02-01

    Optic nerve head (ONH) structure is an important biological feature of the eye used by clinicians to diagnose and monitor progression of diseases such as glaucoma. ONH structure is commonly examined using stereo fundus imaging or optical coherence tomography. Stereo fundus imaging provides stereo views of the ONH that retain 3D information useful for characterizing structure. In order to quantify 3D ONH structure, we applied a stereo correspondence algorithm to a set of stereo fundus images. Using these quantitative 3D ONH structure measurements, eigen structures were derived using principal component analysis from stereo images of 565 subjects from the Ocular Hypertension Treatment Study (OHTS). To evaluate the usefulness of the eigen structures, we explored associations with the demographic variables age, gender, and race. Using regression analysis, the eigen structures were found to have significant (p < 0.05) associations with both age and race after Bonferroni correction. In addition, classifiers were constructed to predict the demographic variables based solely on the eigen structures. These classifiers achieved an area under receiver operating characteristic curve of 0.62 in predicting a binary age variable, 0.52 in predicting gender, and 0.67 in predicting race. The use of objective, quantitative features or eigen structures can reveal hidden relationships between ONH structure and demographics. The use of these features could similarly allow specific aspects of ONH structure to be isolated and associated with the diagnosis of glaucoma, disease progression and outcomes, and genetic factors.
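
    The analysis pipeline in this record (principal components of quantitative structure maps, then prediction of demographics evaluated with ROC curves) can be sketched as below; the random arrays are placeholders for the stereo-derived ONH measurements, and the component count and classifier choice are assumptions.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      # Placeholder data: each row is a flattened quantitative ONH structure map,
      # labels are a binary demographic variable (e.g. dichotomized age).
      rng = np.random.default_rng(0)
      maps = rng.standard_normal((565, 40 * 40))
      labels = rng.integers(0, 2, 565)

      # "Eigen structures": principal components of the structure measurements.
      eigen_coords = PCA(n_components=20).fit_transform(maps)

      X_train, X_test, y_train, y_test = train_test_split(eigen_coords, labels, random_state=0)
      clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
      auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])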

  14. Total variation iterative constraint algorithm for limited-angle tomographic reconstruction of non-piecewise-constant structures

    NASA Astrophysics Data System (ADS)

    Krauze, W.; Makowski, P.; Kujawińska, M.

    2015-06-01

    Standard tomographic algorithms applied to optical limited-angle tomography result in reconstructions that have highly anisotropic resolution, and thus special algorithms have been developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography, the Total Variation Iterative Constraint (TVIC) method, which enhances the applicability of TV regularization to non-piecewise-constant samples, like biological cells. This approach consists of two parts. First, TV minimization is used as a strong regularizer to create a sharp-edged image, which is converted to a 3D binary mask that is then iteratively applied in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any algorithm for tomographic reconstruction that supports arbitrary geometries of plane-wave projection acquisition. This includes optical diffraction tomography solvers. The obtained reconstructions exhibit the resolution uniformity and general shape accuracy expected from TV-regularization-based solvers, while at the same time keeping the smooth internal structures of the object. A comparison between three different patterns of object illumination arrangement shows a very small impact of the projection acquisition geometry on the image quality.

  15. Fan in the F Ring

    NASA Image and Video Library

    2010-07-20

    This mosaic of images from NASA's Cassini spacecraft depicts fan-like structures in Saturn's tenuous F ring. Bright features are also visible near the core of the ring. Such features suggest the existence of additional objects in the F ring.

  16. F Ring Bright Core Clumps

    NASA Image and Video Library

    2010-07-20

    Bright clumps of ring material and a fan-like structure appear near the core of Saturn's tenuous F ring in this mosaic of images from NASA's Cassini spacecraft. Such features suggest the existence of additional objects in the F ring.

  17. Updating the OMERACT Filter: Implications for imaging and soluble biomarkers

    PubMed Central

    D’Agostino, Maria-Antonietta; Boers, Maarten; Kirwan, John; van der Heijde, Desirée; Østergaard, Mikkel; Schett, Georg; Landewé, Robert B.M.; Maksymowych, Walter P.; Naredo, Esperanza; Dougados, Maxime; Iagnocco, Annamaria; Bingham, Clifton O.; Brooks, Peter; Beaton, Dorcas; Gandjbakhch, Frederique; Gossec, Laure; Guillemin, Francis; Hewlett, Sarah; Kloppenburg, Margreet; March, Lyn; Mease, Philip J; Moller, Ingrid; Simon, Lee S; Singh, Jasvinder A; Strand, Vibeke; Wakefield, Richard J; Wells, George; Tugwell, Peter; Conaghan, Philip G

    2014-01-01

    Objective The OMERACT Filter provides a framework for the validation of outcome measures for use in rheumatology clinical research. However, imaging and biochemical measures may face additional validation challenges due to their technical nature. The Imaging and Soluble Biomarker Session at OMERACT 11 aimed to provide a guide for the iterative development of an imaging or biochemical measurement instrument so it can be used in therapeutic assessment. Methods A hierarchical structure was proposed, reflecting 3 dimensions needed for validating an imaging or biochemical measurement instrument: outcome domain(s), study setting and performance of the instrument. Movement along the axes in any dimension reflects increasing validation. For a given test instrument, the 3-axis structure assesses the extent to which the instrument is a validated measure for the chosen domain, whether it assesses a patient- or disease-centred variable, and whether its technical performance is adequate in the context of its application. Some currently used imaging and soluble biomarkers for rheumatoid arthritis, spondyloarthritis and knee osteoarthritis were then evaluated using the original OMERACT filter and the newly proposed structure. Break-out groups critically reviewed the extent to which the candidate biomarkers complied with the proposed step-wise approach, as a way of examining the utility of the proposed 3-dimensional structure. Results Although there was a broad acceptance of the value of the proposed structure in general, some areas for improvement were suggested, including clarification of criteria for achieving a certain level of validation and how to deal with extension of the structure to areas beyond clinical trials. Conclusion General support was obtained for a proposed tri-axis structure to assess validation of imaging and soluble biomarkers; nevertheless, additional work is required to better evaluate its place within the OMERACT Filter 2.0. PMID:24584916

  18. A variational image-based approach to the correction of susceptibility artifacts in the alignment of diffusion weighted and structural MRI.

    PubMed

    Tao, Ran; Fletcher, P Thomas; Gerber, Samuel; Whitaker, Ross T

    2009-01-01

    This paper presents a method for correcting the geometric and greyscale distortions in diffusion-weighted MRI that result from inhomogeneities in the static magnetic field. These inhomogeneities may be due to imperfections in the magnet or to spatial variations in the magnetic susceptibility of the object being imaged, the so-called susceptibility artifacts. Echo-planar imaging (EPI), used in virtually all diffusion weighted acquisition protocols, assumes a homogeneous static field, which generally does not hold for head MRI. The resulting distortions are significant, sometimes more than ten millimeters. These artifacts impede accurate alignment of diffusion images with structural MRI, and are generally considered an obstacle to the joint analysis of connectivity and structure in head MRI. In principle, susceptibility artifacts can be corrected by acquiring (and applying) a field map. However, as shown in the literature and demonstrated in this paper, field map corrections of susceptibility artifacts are not entirely accurate and reliable, and thus field maps do not produce reliable alignment of EPIs with corresponding structural images. This paper presents a new, image-based method for correcting susceptibility artifacts. The method relies on a variational formulation of the match between an EPI baseline image and a corresponding T2-weighted structural image but also specifically accounts for the physics of susceptibility artifacts. We derive a set of partial differential equations associated with the optimization, describe the numerical methods for solving these equations, and present results that demonstrate the effectiveness of the proposed method compared with field-map correction.

  19. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, which is an interpretation of visual information in terms of such knowledge models. The ability of the human brain to emulate knowledge structures in the form of network-symbolic models has been identified. This implies an important paradigm shift in our knowledge about the brain, from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes like clustering, perceptual grouping and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models works similarly to frames and agents, combining learning, classification and analogy together with higher-level model-based reasoning into a single framework. Such models do not require supercomputers. Based on such principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.

  20. When holography meets coherent diffraction imaging.

    PubMed

    Latychevskaia, Tatiana; Longchamp, Jean-Nicolas; Fink, Hans-Werner

    2012-12-17

    The phase problem is inherent to crystallographic, astronomical and optical imaging where only the intensity of the scattered signal is detected and the phase information is lost and must somehow be recovered to reconstruct the object's structure. Modern imaging techniques at the molecular scale rely on utilizing novel coherent light sources like X-ray free electron lasers for the ultimate goal of visualizing such objects as individual biomolecules rather than crystals. Here, unlike in the case of crystals where structures can be solved by model building and phase refinement, the phase distribution of the wave scattered by an individual molecule must directly be recovered. There are two well-known solutions to the phase problem: holography and coherent diffraction imaging (CDI). Both techniques have their pros and cons. In holography, the reconstruction of the scattered complex-valued object wave is directly provided by a well-defined reference wave that must cover the entire detector area which often is an experimental challenge. CDI provides the highest possible, only wavelength limited, resolution, but the phase recovery is an iterative process which requires some pre-defined information about the object and whose outcome is not always uniquely-defined. Moreover, the diffraction patterns must be recorded under oversampling conditions, a pre-requisite to be able to solve the phase problem. Here, we report how holography and CDI can be merged into one superior technique: holographic coherent diffraction imaging (HCDI). An inline hologram can be recorded by employing a modified CDI experimental scheme. We demonstrate that the amplitude of the Fourier transform of an inline hologram is related to the complex-valued visibility, thus providing information on both, the amplitude and the phase of the scattered wave in the plane of the diffraction pattern. With the phase information available, the condition of oversampling the diffraction patterns can be relaxed, and the phase problem can be solved in a fast and unambiguous manner. We demonstrate the reconstruction of various diffraction patterns of objects recorded with visible light as well as with low-energy electrons. Although we have demonstrated our HCDI method using laser light and low-energy electrons, it can also be applied to any other coherent radiation such as X-rays or high-energy electrons.

  1. X-ray phase contrast imaging of objects with subpixel-size inhomogeneities: a geometrical optics model.

    PubMed

    Gasilov, Sergei V; Coan, Paola

    2012-09-01

    Several x-ray phase contrast extraction algorithms use a set of images acquired along the rocking curve of a perfect flat analyzer crystal to study the internal structure of objects. By measuring the angular shift of the rocking curve peak, one can determine the local deflections of the x-ray beam propagated through a sample. Additionally, some objects determine a broadening of the crystal rocking curve, which can be explained in terms of multiple refraction of x rays by many subpixel-size inhomogeneities contained in the sample. This fact may allow us to differentiate between materials and features characterized by different refraction properties. In the present work we derive an expression for the beam broadening in the form of a linear integral of the quantity related to statistical properties of the dielectric susceptibility distribution function of the object.

  2. Information system to manage anatomical knowledge and image data about brain

    NASA Astrophysics Data System (ADS)

    Barillot, Christian; Gibaud, Bernard; Montabord, E.; Garlatti, S.; Gauthier, N.; Kanellos, I.

    1994-09-01

    This paper reports on first results obtained in a project aimed at developing a computerized system to manage knowledge about brain anatomy. The emphasis is put on the design of a knowledge base which includes a symbolic model of cerebral anatomical structures (grey nuclei, cortical structures such as gyri and sulci, ventricles, vessels, etc.) and of hypermedia facilities allowing the retrieval and display of information associated with the objects (texts, drawings, images). Atlas plates digitized from a stereotactic atlas are also used to provide a natural and effective means of communication between the user and the system.

  3. Infrared images of distant 3C radio galaxies

    NASA Technical Reports Server (NTRS)

    Eisenhardt, Peter; Chokshi, Arati

    1990-01-01

    J (1.2-micron) and K (2.2 micron) images have been obtained for eight 3CR radio galaxies with redshifts from 0.7 to 1.8. Most of the objects were known to have extended asymmetric optical continuum or line emission aligned with the radio lobe axis. In general, the IR morphologies of these galaxies are just as peculiar as their optical morphologies. For all the galaxies, when asymmetric structure is present in the optical, structure with the same orientation is seen in the IR and must be accounted for in any model to explain the alignment of optical and radio emission.

  4. Differences in body image between anorexics and in-vitro-fertilization patients - a study with Body Grid

    PubMed Central

    Borkenhagen, Ada; Klapp, Burghard F.; Schoeneich, Frank; Brähler, Elmar

    2005-01-01

    Objectives: The purpose of the investigation was to explore the body image disturbance of anorexics and in-vitro-fertilization patients (IvF-patients) with Body Grid and Body Identity Plot. Methods: The paper reports on an empirical study conducted with 32 anorexic patients and 30 IvF-patients. The structure of the body image was derived from the Body Grid, an idiographic approach following the Role Repertory Grid developed by George A. Kelly [17]. The representation of the body image and the degree of body-acceptance is represented graphically. Results: By the Body Grid and Body Identity Plot measures we were able to identify important differences in body image between anorexics and IvF-patients. Conclusion: The tendencies of dissociation in the body image of anorexics which we found must be seen in the sense of a specific body image disturbance which differs significantly from the body-experience profile of IvF-patients. With the grid approach it was possible to elicit the inner structure of body image and determine the acceptance of the body and integration of single body parts. PMID:19742059

  5. Long working distance incoherent interference microscope

    DOEpatents

    Sinclair, Michael B [Albuquerque, NM; De Boer, Maarten P [Albuquerque, NM

    2006-04-25

    A full-field imaging, long working distance, incoherent interference microscope suitable for three-dimensional imaging and metrology of MEMS devices and test structures on a standard microelectronics probe station. A long working distance greater than 10 mm allows standard probes or probe cards to be used. This enables nanometer-scale 3-dimensional height profiles of MEMS test structures to be acquired across an entire wafer while being actively probed, and, optionally, through a transparent window. An optically identical pair of sample and reference arm objectives is not required, which reduces the overall system cost, and also the cost and time required to change sample magnifications. Using a LED source, high magnification (e.g., 50×) can be obtained having excellent image quality, straight fringes, and high fringe contrast.

  6. On the Progress of Scanning Transmission Electron Microscopy (STEM) Imaging in a Scanning Electron Microscope.

    PubMed

    Sun, Cheng; Müller, Erich; Meffert, Matthias; Gerthsen, Dagmar

    2018-04-01

    Transmission electron microscopy (TEM) with low-energy electrons has been recognized as an important addition to the family of electron microscopies as it may avoid knock-on damage and increase the contrast of weakly scattering objects. Scanning electron microscopes (SEMs) are well suited for low-energy electron microscopy with maximum electron energies of 30 keV, but they are mainly used for topography imaging of bulk samples. Implementation of a scanning transmission electron microscopy (STEM) detector and a charge-coupled-device camera for the acquisition of on-axis transmission electron diffraction (TED) patterns, in combination with recent resolution improvements, make SEMs highly interesting for structure analysis of some electron-transparent specimens which are traditionally investigated by TEM. A new aspect is correlative SEM, STEM, and TED imaging from the same specimen region in a SEM which leads to a wealth of information. Simultaneous image acquisition gives information on surface topography, inner structure including crystal defects and qualitative material contrast. Lattice-fringe resolution is obtained in bright-field STEM imaging. The benefits of correlative SEM/STEM/TED imaging in a SEM are exemplified by structure analyses from representative sample classes such as nanoparticulates and bulk materials.

  7. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
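
    The object-linking step described above (features in successive 2D slices linked when they lie within a threshold radius of a feature in the adjacent slice, forming a directed-graph data structure) can be sketched as follows; the data layout and radius value are illustrative assumptions.

      import numpy as np

      def link_features(slices, radius=3.0):
          """Link 2D feature detections across successive slices into directed edges.
          slices: list over depth of (N_i, 2) arrays of feature centroids (row, col).
          Returns edges ((slice, index) -> (slice + 1, index)); radius is an assumed
          linking threshold in pixels."""
          edges = []
          for z in range(len(slices) - 1):
              current, following = np.asarray(slices[z]), np.asarray(slices[z + 1])
              if len(current) == 0 or len(following) == 0:
                  continue
              # Pairwise distances between features in adjacent slices.
              dists = np.linalg.norm(current[:, None, :] - following[None, :, :], axis=-1)
              for i, j in zip(*np.nonzero(dists < radius)):
                  edges.append(((z, int(i)), (z + 1, int(j))))
          return edges

      # Example: a feature drifting slowly across three slices is linked into one chain.
      slices = [np.array([[10.0, 10.0]]), np.array([[11.0, 10.5]]), np.array([[12.0, 11.0]])]
      print(link_features(slices))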

  8. Hubble's View of a Dying Star

    NASA Technical Reports Server (NTRS)

    2003-01-01

    A recent image of a dying star containing strange, complex structures may help explain the death throes of stars and defy our current understanding of physics. The image of protoplanetary nebula IRAS22036+5306 (in the Infrared Astronomical Satellite Point Source Catalog) was taken on Dec. 15, 2001, by the Wide Field and Planetary Camera 2, designed and built by NASA's Jet Propulsion Laboratory, onboard NASA's Hubble Space Telescope. It is one of the best images yet to capture a fleeting period at the end of a Sun-like star's life, called the protoplanetary nebula phase.

    This phase, which looks like a beautiful cloud of glowing gas lit up by ultraviolet light from the star's core, results when a star evolves into a bloated red giant and sheds its outer layers. 'Protoplanetary nebulas are rare objects with short lifetimes,' said JPL astrophysicist Dr. Raghvendra Sahai. 'It has generally been very difficult to obtain images of such objects in which their structure can be resolved in detail.'

    This image is particularly important because it contains a series of what Sahai and his colleagues call 'knotty jets,' blob-like objects emerging along roughly straight lines from the center of the cigar-shaped, bipolar nebula (See insets). There are various theories about what may produce such jets, though it is hard to prove their existence due to their short-lived, episodic nature. Detailed multi-wavelength studies of these nebulas with NASA's Great Observatories are being carried out to understand the nature and origin of these enigmatic jets, and how they may be sculpting shrouds of dying stars into exotic shapes. The Hubble Space Telescope is one of NASA's Great Observatories.

  9. Overcoming Dynamic Disturbances in Imaging Systems

    NASA Technical Reports Server (NTRS)

    Young, Eric W.; Dente, Gregory C.; Lyon, Richard G.; Chesters, Dennis; Gong, Qian

    2000-01-01

    We develop and discuss a methodology with the potential to yield a significant reduction in complexity, cost, and risk of space-borne optical systems in the presence of dynamic disturbances. More robust systems almost certainly will result as well. Many future space-based and ground-based optical systems will employ optical control systems to enhance imaging performance. The goal of the optical control subsystem is to determine the wavefront aberrations and remove them, ideally reducing an aberrated image of the object under investigation to a sufficiently clear (usually diffraction-limited) image. Control will likely be distributed over several elements. These elements may include telescope primary segments, the telescope secondary, the telescope tertiary, deformable mirror(s), fine steering mirror(s), etc. The last two elements, in particular, may have to provide dynamic control. These control subsystems may become elaborate indeed, but robust system performance will require evaluation of the image quality over a substantial range and in a dynamic environment. Candidate systems for improvement in the Earth Sciences Enterprise could include next-generation Landsat systems or atmospheric sensors for dynamic imaging of individual, severe storms. The technology developed here could have a substantial impact on the development of new systems in the Space Science Enterprise, such as the Next Generation Space Telescope (NGST) and its follow-on, the Next NGST. Large interferometric systems of non-zero field, such as Planet Finder and the Submillimeter Probe of the Evolution of Cosmic Structure, could benefit. These systems most likely will contain large, flexible optomechanical structures subject to dynamic disturbance. Furthermore, large systems for high-resolution imaging of planets or the Sun from space may also benefit. Tactical and Strategic Defense systems will need to image very small targets as well and could benefit from the technology developed here. We discuss a novel speckle imaging technique with the potential to separate dynamic aberrations from static aberrations. Post-processing of a set of image data, using an algorithm based on this technique, should work for all but the lowest light levels and highest-frequency dynamic environments. This technique may serve to reduce the complexity of the control system and provide for robust, fault-tolerant, reduced-risk operation. For a given object, a short-exposure image is "frozen" on the focal plane in the presence of the environmental disturbance (turbulence, jitter, etc.). A key factor is that these imaging data exhibit frame-to-frame linear shift invariance. Therefore, although the Point Spread Function varies from frame to frame, the source is fixed, and each short exposure contains object-spectrum data out to the diffraction limit of the imaging system. This novel speckle imaging technique uses the Knox-Thompson method. The magnitude of the complex object spectrum is straightforward to determine by well-established approaches. The phase of the complex object spectrum is decomposed into two parts. One is a single-valued function determined by the divergence of the optical phase gradient. The other is a multi-valued function determined by the circulation of the optical phase gradient, the "hidden phase." Finite-difference equations are developed for the phase. The novelty of this approach is captured in the inclusion of this "hidden phase."
This technique allows the diffraction-limited reconstruction of the object from the ensemble of short exposure frames while simultaneously estimating the phase as a function of time from a set of exposures.
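
    A schematic way to write the phase decomposition sketched above (our notation, not necessarily the authors' exact equations): the gradient of the object-spectrum phase is split into a single-valued part recovered from its divergence and a circulation part that carries the "hidden phase",

```latex
\nabla\phi \;=\; \nabla\phi_{s} + \nabla\phi_{h},
\qquad
\nabla^{2}\phi_{s} \;=\; \nabla\cdot\bigl(\nabla\phi\bigr),
\qquad
\oint_{C} \nabla\phi_{h}\cdot d\boldsymbol{\ell} \;=\; 2\pi m,\quad m \in \mathbb{Z},
```

    where the divergence equation determines the single-valued term and the nonzero circulation around closed contours C accounts for the multi-valued "hidden" term; finite-difference versions of these relations on the spectral grid are what the reconstruction solves.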

  11. Object-constrained meshless deformable algorithm for high speed 3D nonrigid registration between CT and CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Ting; Kim, Sung; Goyal, Sharad

    2010-01-15

    Purpose: High-speed nonrigid registration between the planning CT and the treatment CBCT data is critical for real-time image guided radiotherapy (IGRT) to improve the dose distribution and to reduce the toxicity to adjacent organs. The authors propose a new fully automatic 3D registration framework that integrates object-based global and seed constraints with the grayscale-based "demons" algorithm. Methods: Clinical objects were segmented on the planning CT images and were utilized as meshless deformable models during the nonrigid registration process. The meshless models reinforced a global constraint in addition to the grayscale difference between CT and CBCT in order to maintain the shape and the volume of geometrically complex 3D objects during the registration. To expedite the registration process, the framework was stratified into hierarchies, and the authors used a frequency domain formulation to diffuse the displacement between the reference and the target in each hierarchy. Also, during the registration of pelvis images, they replaced the air region inside the rectum with estimated pixel values from the surrounding rectal wall and introduced an additional seed constraint to robustly track and match the seeds implanted into the prostate. The proposed registration framework and algorithm were evaluated on 15 real prostate cancer patients. For each patient, the prostate gland, seminal vesicle, bladder, and rectum were first segmented by a radiation oncologist on planning CT images for radiotherapy planning purposes. The same radiation oncologist also manually delineated the tumor volumes and critical anatomical structures in the corresponding CBCT images acquired at treatment. These delineated structures on the CBCT were only used as the ground truth for the quantitative validation, while structures on the planning CT were used both as the input to the registration method and as the ground truth in validation. By registering the planning CT to the CBCT, a displacement map was generated. Segmented volumes in the CT images deformed using the displacement field were compared against the manual segmentations in the CBCT images to quantitatively measure the convergence of the shape and the volume. Other image features were also used to evaluate the overall performance of the registration. Results: The algorithm was able to complete the segmentation and registration process within 1 min, and the superimposed clinical objects achieved a volumetric similarity measure of over 90% between the reference and the registered data. Validation results also showed that the proposed registration could accurately trace the deformation inside the target volume with average errors of less than 1 mm. The method had a solid performance in registering the simulated images with up to 20 Hounsfield units of white noise added. Also, the side-by-side comparison with the original demons algorithm demonstrated its improved registration performance over local pixel-based registration approaches. Conclusions: Given the strength and efficiency of the algorithm, the proposed method has significant clinical potential to accelerate and improve CBCT delineation and target tracking in online IGRT applications.
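
    The object-constrained method builds on the grayscale "demons" update. The sketch below shows a basic, unconstrained 2D demons iteration with Gaussian regularisation, purely to illustrate the kind of displacement diffusion involved; it omits the meshless object models, seed constraints, and frequency-domain formulation of the paper, and all names and parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, disp, sigma=2.0):
    """One basic 2D demons iteration: a force along the fixed-image gradient,
    followed by Gaussian regularisation of the displacement field.
    `disp` has shape (2, H, W) and stores (row, col) displacements."""
    rows, cols = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    warped = map_coordinates(moving, [rows + disp[0], cols + disp[1]], order=1)
    diff = fixed - warped
    gy, gx = np.gradient(fixed)
    denom = gx ** 2 + gy ** 2 + diff ** 2
    denom[denom == 0] = 1.0                      # avoid division by zero
    disp[0] += diff * gy / denom                 # demons force, row direction
    disp[1] += diff * gx / denom                 # demons force, column direction
    disp[0] = gaussian_filter(disp[0], sigma)    # diffusion-like smoothing
    disp[1] = gaussian_filter(disp[1], sigma)
    return disp

# Toy example: recover a 2-pixel horizontal shift between two smooth images.
fixed = gaussian_filter(np.random.rand(64, 64), 2.0)
moving = np.roll(fixed, 2, axis=1)
disp = np.zeros((2, 64, 64))
for _ in range(30):
    disp = demons_step(fixed, moving, disp)
print("mean |displacement|:", float(np.abs(disp).mean()))
```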

  12. fMRI evidence for areas that process surface gloss in the human visual cortex

    PubMed Central

    Sun, Hua-Chun; Ban, Hiroshi; Di Luca, Massimiliano; Welchman, Andrew E.

    2015-01-01

    Surface gloss is an important cue to the material properties of objects. Recent progress in the study of the macaque brain has increased our understanding of the areas involved in processing information about gloss; however, the homologies with the human brain are not yet fully understood. Here we used human functional magnetic resonance imaging (fMRI) measurements to localize brain areas preferentially responding to glossy objects. We measured cortical activity for thirty-two rendered three-dimensional objects that had either Lambertian or specular surface properties. To control for differences in image structure, we overlaid a grid on the images and scrambled its cells. We found activations related to gloss in the posterior fusiform sulcus (pFs) and in area V3B/KO. Subsequent analysis with Granger causality mapping indicated that V3B/KO processes gloss information differently than pFs. Our results identify a small network of mid-level visual areas whose activity may be important in supporting the perception of surface gloss. PMID:25490434

  13. Non-destructive evaluation of impact damage on carbon fiber laminates: Comparison between ESPI and Shearography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pagliarulo, V., E-mail: v.pagliarulo@isasi.cnr.it; Ferraro, P.; Lopresto, V.

    2016-06-28

    The aim of this paper is to investigate the ability of two different interferometric NDT techniques to detect and evaluate barely visible impact damage on composite laminates. The interferometric techniques make it possible to investigate large and complex structures. Electronic Speckle Pattern Interferometry (ESPI) works through real-time surface illumination by a visible laser (e.g., 532 nm), and its range and accuracy are related to the wavelength. While ESPI works with the "classic" holographic configuration, that is, a reference beam and an object beam, shearography uses the object image itself as the reference: two object images are overlapped, creating a shear image. This makes the method much less sensitive to external vibrations and noise, but with one difference: it measures the first derivative of the displacement. In this work, different specimens impacted at different energies have been investigated by means of both methods. The delaminated areas have been estimated and compared.

  14. Edge detection

    NASA Astrophysics Data System (ADS)

    Hildreth, E. C.

    1985-09-01

    For both biological systems and machines, vision begins with a large and unwieldy array of measurements of the amount of light reflected from surfaces in the environment. The goal of vision is to recover physical properties of objects in the scene, such as the location of object boundaries and the structure, color, and texture of object surfaces, from the two-dimensional image that is projected onto the eye or camera. This goal is not achieved in a single step: vision proceeds in stages, with each stage producing increasingly useful descriptions of the image and then the scene. The first clues about the physical properties of the scene are provided by the changes of intensity in the image. The importance of intensity changes and edges in early visual processing has led to extensive research on their detection, description, and use, both in computer and biological vision systems. This article reviews some of the theory that underlies the detection of edges, and the methods used to carry out this analysis.
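
    One classic way to detect the intensity changes discussed above is to mark zero-crossings of a Laplacian-of-Gaussian filtered image, in the spirit of the Marr-Hildreth operator. The sketch below is a minimal illustration of that idea, with SciPy assumed and the zero-crossing test kept deliberately simple.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_edges(image, sigma=2.0):
    """Mark sign changes (zero-crossings) of the Laplacian-of-Gaussian response
    between horizontally and vertically adjacent pixels as candidate edges."""
    response = gaussian_laplace(image.astype(float), sigma)
    edges = np.zeros(response.shape, dtype=bool)
    edges[:, :-1] |= np.sign(response[:, :-1]) != np.sign(response[:, 1:])
    edges[:-1, :] |= np.sign(response[:-1, :]) != np.sign(response[1:, :])
    return edges

# Toy image: a bright square on a dark background yields a closed edge contour.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
print("candidate edge pixels:", int(log_edges(img).sum()))
```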

  15. Shape-from-silhouette for three-dimensional reconstruction from x-ray radiography

    NASA Astrophysics Data System (ADS)

    Simioni, E.; Ratti, F.; Poletto, L.

    2011-06-01

    We present the application of the shape-from-silhouette algorithm to reconstruct the 3D profile of handworks from a set of X-ray absorption images taken at different angles around the object. The acquisition technique is similar to tomography, but the number of images required to reconstruct the 3D appearance is very low compared to tomography, so the acquisition time is substantially reduced. Some reference points are placed on a structure corotating with the object and are acquired in the images for calibration and registration. The shape-from-silhouette algorithm finally gives the 3D appearance of the object. We present the analysis of a tin pendant from the Venetic area, VI century BC, that was completely hidden by corrosion products and solid ground at the moment of its retrieval. The 3D reconstruction shows that the pendant is a very elaborate piece, with two embracing figures that were completely invisible before restoration.
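
    The core of shape-from-silhouette is intersecting the back-projections of the silhouettes: a voxel is kept only if it projects inside every silhouette. The sketch below carves a voxel grid under an idealized orthographic-camera assumption; the geometry, the `carve` helper, and the toy disc silhouette are illustrative simplifications and do not reproduce the paper's calibration or registration procedure.

```python
import numpy as np

def carve(silhouettes, angles, grid=64):
    """Keep only the voxels whose projection falls inside every silhouette.

    `silhouettes` are boolean masks indexed as [v, u] and taken at the given
    rotation `angles` about the vertical axis; an orthographic camera and an
    object centred in the volume are assumed, purely for illustration."""
    half = grid // 2
    xs, ys, zs = np.meshgrid(np.arange(grid) - half,
                             np.arange(grid) - half,
                             np.arange(grid) - half, indexing="ij")
    occupied = np.ones((grid, grid, grid), dtype=bool)
    for mask, theta in zip(silhouettes, angles):
        u = (np.cos(theta) * xs + np.sin(theta) * ys + half).astype(int)
        v = (zs + half).astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        occupied &= inside & mask[np.clip(v, 0, mask.shape[0] - 1),
                                  np.clip(u, 0, mask.shape[1] - 1)]
    return occupied

# Toy example: a centred disc seen from eight angles carves a roughly spherical volume.
uu, vv = np.meshgrid(np.arange(64) - 32, np.arange(64) - 32, indexing="xy")
disc = (uu ** 2 + vv ** 2) < 20 ** 2
volume = carve([disc] * 8, np.linspace(0.0, np.pi, 8, endpoint=False))
print("occupied voxels:", int(volume.sum()))
```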

  16. Robust image matching via ORB feature and VFC for mismatch removal

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Fu, Wenxing; Fang, Bin; Hu, Fangyu; Quan, Siwen; Ma, Jie

    2018-03-01

    Image matching underlies many image processing and computer vision problems, such as object recognition or structure from motion. Current methods rely on good feature descriptors and mismatch removal strategies for detection and matching. In this paper, we propose a robust image matching approach based on the ORB feature and VFC for mismatch removal. ORB (Oriented FAST and Rotated BRIEF) is an effective feature descriptor that offers performance comparable to SIFT at lower computational cost. VFC (Vector Field Consensus) is a state-of-the-art mismatch removal method. The experimental results demonstrate that our method is efficient and robust.
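
    A minimal version of the detection-and-matching stage can be put together with OpenCV's ORB implementation; in the sketch below a brute-force Hamming matcher with cross-checking stands in for the VFC mismatch-removal step, which is not part of OpenCV, and the synthetic test images are purely illustrative.

```python
import cv2
import numpy as np

def orb_match(img1, img2, n_features=1000):
    """Detect ORB keypoints and match descriptors with Hamming-distance
    cross-checking; the cross-check is a simple stand-in for the VFC
    mismatch-removal step described in the paper."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return kp1, kp2, []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return kp1, kp2, sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Toy usage: a synthetic scene matched against a horizontally shifted copy of itself.
img = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(img, (100, 120), 40, 255, -1)
cv2.rectangle(img, (160, 60), (220, 140), 180, -1)
shifted = np.roll(img, 7, axis=1)
_, _, matches = orb_match(img, shifted)
print("matches kept:", len(matches))
```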

  17. Visual object agnosia is associated with a breakdown of object-selective responses in the lateral occipital cortex.

    PubMed

    Ptak, Radek; Lazeyras, François; Di Pietro, Marie; Schnider, Armin; Simon, Stéphane R

    2014-07-01

    Patients with visual object agnosia fail to recognize the identity of visually presented objects despite preserved semantic knowledge. Object agnosia may result from damage to visual cortex lying close to or overlapping with the lateral occipital complex (LOC), a brain region that exhibits selectivity to the shape of visually presented objects. Despite this anatomical overlap, the relationship between shape processing in the LOC and shape representations in object agnosia is unknown. We studied a patient with object agnosia following isolated damage to the left occipito-temporal cortex overlapping with the LOC. The patient showed intact processing of object structure, yet often made identification errors that were mainly based on the global visual similarity between objects. Using functional Magnetic Resonance Imaging (fMRI) we found that the damaged as well as the contralateral, structurally intact right LOC failed to show any object-selective fMRI activity, though the latter retained selectivity for faces. Thus, unilateral damage to the left LOC led to a bilateral breakdown of neural responses to a specific stimulus class (objects and artefacts) while preserving the response to a different stimulus class (faces). These findings indicate that representations of structure necessary for the identification of objects crucially rely on bilateral, distributed coding of shape features. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Non-Invasive Imaging of Reactor Cores Using Cosmic Ray Muons

    NASA Astrophysics Data System (ADS)

    Milner, Edward

    2011-10-01

    Cosmic ray muons penetrate deeply into material, with some passing completely through very thick objects. This penetrating quality is the basis of two distinct but related imaging techniques. The first measures the number of cosmic ray muons transmitted through parts of an object. Relatively fewer muons are absorbed along paths in which they encounter less material, compared to higher-density paths, so the relative density of material is measured. This technique is called muon transmission imaging and has been used to infer the density and structure of a variety of large masses, including mine overburden, volcanoes, pyramids, and buildings. In a second, more recently developed technique, the angular deflection of muons is measured by trajectory-tracking detectors placed on two opposing sides of an object. Muons are deflected more strongly by heavy nuclei, since the multiple Coulomb scattering angle is approximately proportional to the nuclear charge. Therefore, a map showing regions of large deflection will identify the location of uranium in contrast to lighter nuclei. This technique is termed muon scattering tomography (MST) and has been developed to screen shipping containers for the presence of concealed nuclear material. Both techniques are a good way of non-invasively inspecting objects. A previously unexplored topic was applying MST to imaging large objects. Here we demonstrate extending the MST technique to the task of identifying relatively thick objects inside very thick shielding. We measured cosmic ray muons passing through a physical arrangement of material similar to a nuclear reactor, with thick concrete shielding and a heavy metal core. Newly developed algorithms were used to reconstruct an image of the "mock reactor core," with a resolution of approximately 30 cm.

  19. Design of a dynamic biofilm imaging cell for white-light interferometric microscopy

    DOE PAGES

    Larimer, Curtis; Brann, Michelle; Suter, Jonathan D.; ...

    2017-05-10

    In microbiology research there is a strong need for next-generation imaging and sensing instrumentation that will enable minimally invasive and label-free investigation of soft, hydrated structures such as bacterial biofilms. White light interferometry (WLI) can provide high resolution images of surface topology without the use of fluorescent labels but is not typically used to image biofilms because there is insufficient refractive index contrast to induce reflection from the biofilm’s interface. The soft structure and water-like bulk properties of hydrated biofilms make them difficult to characterize in situ, especially in a non-destructive manner. In this report, we build on our prior description of static biofilm imaging and describe the design of a dynamic imaging flow cell that enables monitoring the thickness and topology of live biofilms over time using a WLI microscope. The microfluidic system is specifically designed to create a reflective interface on the surface of biofilms while minimizing disruption of fragile structures. The imaging cell was also designed to accommodate limitations imposed by the depth of focus of the microscope’s objective lens. Example images of live biofilm samples are shown in order to illustrate the ability of the flow cell and WLI instrument to 1) support bacterial growth and biofilm development, 2) image biofilm structure that reflects growth in flow conditions, and 3) monitor biofilm development over time non-destructively. In future work, the apparatus described here will enable surface metrology measurements (roughness, surface area, etc.) of biofilms and may be used to observe changes in biofilm structure in response to changes in environmental conditions (e.g., flow velocity, availability of nutrients, and presence of biocides). Furthermore, this development will open new opportunities for the use of WLI in bioimaging.

  20. Design of a dynamic biofilm imaging cell for white-light interferometric microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larimer, Curtis; Brann, Michelle; Suter, Jonathan D.

    In microbiology research there is a strong need for next-generation imaging and sensing instrumentation that will enable minimally invasive and label-free investigation of soft, hydrated structures such as bacterial biofilms. White light interferometry (WLI) can provide high resolution images of surface topology without the use of fluorescent labels but is not typically used to image biofilms because there is insufficient refractive index contrast to induce reflection from the biofilm’s interface. The soft structure and water-like bulk properties of hydrated biofilms make them difficult to characterize in situ, especially in a non-destructive manner. In this report, we build on our prior description of static biofilm imaging and describe the design of a dynamic imaging flow cell that enables monitoring the thickness and topology of live biofilms over time using a WLI microscope. The microfluidic system is specifically designed to create a reflective interface on the surface of biofilms while minimizing disruption of fragile structures. The imaging cell was also designed to accommodate limitations imposed by the depth of focus of the microscope’s objective lens. Example images of live biofilm samples are shown in order to illustrate the ability of the flow cell and WLI instrument to 1) support bacterial growth and biofilm development, 2) image biofilm structure that reflects growth in flow conditions, and 3) monitor biofilm development over time non-destructively. In future work, the apparatus described here will enable surface metrology measurements (roughness, surface area, etc.) of biofilms and may be used to observe changes in biofilm structure in response to changes in environmental conditions (e.g., flow velocity, availability of nutrients, and presence of biocides). Furthermore, this development will open new opportunities for the use of WLI in bioimaging.

  1. Object Classification in Semi Structured Enviroment Using Forward-Looking Sonar

    PubMed Central

    dos Santos, Matheus; Ribeiro, Pedro Otávio; Núñez, Pedro; Botelho, Silvia

    2017-01-01

    The use of robots for underwater exploration has been increasing in recent years. The automation of tasks such as monitoring, inspection, and underwater maintenance requires the understanding of the robot’s environment. Object recognition in the scene is becoming a critical issue for these systems. In this work, an underwater object classification pipeline applied to acoustic images acquired by Forward-Looking Sonar (FLS) is studied. The object segmentation combines thresholding, connected-pixel search, and intensity-peak analysis. The object descriptor extracts intensity and geometric features of the detected objects. A comparison between the Support Vector Machine, K-Nearest Neighbors, and Random Trees classifiers is presented. An open-source tool was developed to annotate and classify the objects and evaluate their classification performance. The proposed method efficiently segments and classifies the structures in the scene using a real dataset acquired by an underwater vehicle in a harbor area. Experimental results demonstrate the robustness and accuracy of the method described in this paper. PMID:28961163
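
    A stripped-down version of such a pipeline, thresholding a normalised intensity image, extracting per-blob intensity and geometry features, and feeding them to a Random Trees-style classifier, might look like the sketch below; the feature set, threshold, and synthetic training data are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def segment_and_describe(sonar_img, thresh=0.6):
    """Threshold a normalised FLS intensity image, label connected components,
    and describe each blob with simple intensity/geometry features:
    [area, mean intensity, peak intensity, bounding-box aspect ratio]."""
    labels, n = ndimage.label(sonar_img > thresh)
    feats = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        vals = sonar_img[ys, xs]
        feats.append([ys.size, vals.mean(), vals.max(),
                      (np.ptp(xs) + 1) / (np.ptp(ys) + 1)])
    return np.array(feats)

# Illustrative training set: hand-made feature vectors for two object classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([90.0, 0.8, 0.9, 8.0], [10, 0.05, 0.05, 1.0], (20, 4)),   # elongated
               rng.normal([60.0, 0.7, 0.8, 1.0], [10, 0.05, 0.05, 0.2], (20, 4))])  # compact
y = np.array([0] * 20 + [1] * 20)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Toy "sonar image" with one compact and one elongated bright return.
img = np.zeros((64, 64))
img[10:18, 10:18] = 0.7
img[30:33, 10:40] = 0.8
print(clf.predict(segment_and_describe(img)))
```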

  2. Performance evaluation of objective quality metrics for HDR image compression

    NASA Astrophysics Data System (ADS)

    Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic

    2014-09-01

    Due to the much wider luminance and contrast range of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists in computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values, but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim at providing a better understanding of the limits and the potential of this approach by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
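
    The idea of computing a standard metric on perceptually encoded luminance can be illustrated with a simple logarithmic encoding standing in for the perceptually uniform encodings used in the HDR literature; the encoding, value ranges, and toy data below are illustrative assumptions only.

```python
import numpy as np

def encode_log(luminance, lmin=1e-3, lmax=1e4):
    """Map absolute luminance (cd/m^2) onto a perceptually more uniform 0-255
    scale with a crude log encoding, a stand-in for PU-style encodings."""
    l = np.clip(luminance, lmin, lmax)
    return 255.0 * (np.log10(l) - np.log10(lmin)) / (np.log10(lmax) - np.log10(lmin))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two images on the encoded scale."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy HDR frame and a noisy "compressed" version, compared in the encoded domain.
rng = np.random.default_rng(1)
hdr = rng.uniform(0.01, 5000.0, (128, 128))
compressed = hdr * rng.normal(1.0, 0.02, hdr.shape)
print("PSNR (encoded):", round(psnr(encode_log(hdr), encode_log(compressed)), 2), "dB")
```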

  3. Near-infrared fluorescence image quality test methods for standardized performance evaluation

    NASA Astrophysics Data System (ADS)

    Kanniyappan, Udayakumar; Wang, Bohan; Yang, Charles; Ghassemi, Pejhman; Wang, Quanzeng; Chen, Yu; Pfefer, Joshua

    2017-03-01

    Near-infrared fluorescence (NIRF) imaging has gained much attention as a clinical method for enhancing visualization of cancers, perfusion, and biological structures in surgical applications where a fluorescent dye is monitored by an imaging system. In order to address the emerging need for standardization of this innovative technology, it is necessary to develop and validate test methods suitable for objective, quantitative assessment of device performance. Towards this goal, we develop target-based test methods and investigate best practices for key NIRF imaging system performance characteristics including spatial resolution, depth of field, and sensitivity. Fluorescence properties were characterized by generating excitation-emission matrices of indocyanine green and quantum dots in biological solutions and matrix materials. A turbid, fluorophore-doped target was used, along with a resolution target, for assessing image sharpness. Multi-well plates filled with either liquid or solid targets were generated to explore best practices for evaluating detection sensitivity. Overall, our results demonstrate the utility of objective, quantitative, target-based testing approaches as well as the need to consider a wide range of factors in establishing standardized approaches for NIRF imaging system performance.

  4. A Java application for tissue section image analysis.

    PubMed

    Kamalov, R; Guillaud, M; Haskins, D; Harrison, A; Kemp, R; Chiu, D; Follen, M; MacAulay, C

    2005-02-01

    The medical industry has taken advantage of Java and Java technologies over the past few years, in large part due to the language's platform independence and object-oriented structure. As such, Java provides powerful and effective tools for developing tissue section analysis software. The background and execution of this development are discussed in this publication. The object-oriented structure allows for the creation of "Slide", "Unit", and "Cell" objects to simulate the corresponding real-world objects. Different functions may then be created to perform various tasks on these objects, thus facilitating the development of the software package as a whole. At the current time, substantial parts of the initially planned functionality have been implemented. Getafics 1.0 is fully operational and supports a variety of research projects; however, certain features of the software currently introduce unnecessary complexity and inefficiency. In the future, we hope to include features that obviate these problems.

  5. Faint Object Camera imaging and spectroscopy of NGC 4151

    NASA Technical Reports Server (NTRS)

    Boksenberg, A.; Catchpole, R. M.; Macchetto, F.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.

    1995-01-01

    We describe ultraviolet and optical imaging and spectroscopy within the central few arcseconds of the Seyfert galaxy NGC 4151, obtained with the Faint Object Camera on the Hubble Space Telescope. A narrowband image including (O III) lambda(5007) shows a bright nucleus centered on a complex biconical structure having apparent opening angle approximately 65 deg and axis at a position angle along 65 deg-245 deg; images in bands including Lyman-alpha and C IV lambda(1550) and in the optical continuum near 5500 A, show only the bright nucleus. In an off-nuclear optical long-slit spectrum we find a high and a low radial velocity component within the narrow emission lines. We identify the low-velocity component with the bright, extended, knotty structure within the cones, and the high-velocity component with more confined diffuse emission. Also present are strong continuum emission and broad Balmer emission line components, which we attribute to the extended point spread function arising from the intense nuclear emission. Adopting the geometry pointed out by Pedlar et al. (1993) to explain the observed misalignment of the radio jets and the main optical structure we model an ionizing radiation bicone, originating within a galactic disk, with apex at the active nucleus and axis centered on the extended radio jets. We confirm that through density bounding the gross spatial structure of the emission line region can be reproduced with a wide opening angle that includes the line of sight, consistent with the presence of a simple opaque torus allowing direct view of the nucleus. In particular, our modelling reproduces the observed decrease in position angle with distance from the nucleus, progressing initially from the direction of the extended radio jet, through our optical structure, and on to the extended narrow-line region. We explore the kinematics of the narrow-line low- and high-velocity components on the basis of our spectroscopy and adopted model structure.

  6. Large image microscope array for the compilation of multimodality whole organ image databases.

    PubMed

    Namati, Eman; De Ryk, Jessica; Thiesse, Jacqueline; Towfic, Zaid; Hoffman, Eric; Mclennan, Geoffrey

    2007-11-01

    Three-dimensional, structural and functional digital image databases have many applications in education, research, and clinical medicine. However, to date, apart from cryosectioning, there have been no reliable means to obtain whole-organ, spatially conserving histology. Our aim was to generate a system capable of acquiring high-resolution images, featuring microscopic detail that could still be spatially correlated to the whole organ. Fulfilling these objectives required the construction of a system physically capable of creating very fine whole-organ sections and collecting high-magnification, high-resolution digital images. We therefore designed a large image microscope array (LIMA) to serially section and image entire unembedded organs while maintaining the structural integrity of the tissue. The LIMA consists of several integrated components: a novel large-blade vibrating microtome, a 1.3 megapixel Peltier-cooled charge-coupled device camera, a high-magnification microscope, and a three-axis gantry above the microtome. A custom control program was developed to automate the entire sectioning and raster-scan imaging sequence. The system is capable of sectioning unembedded soft tissue down to a thickness of 40 µm at specimen dimensions of 200 x 300 mm to a total depth of 350 mm. The LIMA system has been tested on fixed lung from sheep and mice, resulting in large high-quality image data sets, with minimal distinguishable disturbance in the delicate alveolar structures. Copyright 2007 Wiley-Liss, Inc.

  7. Browsing software of the Visible Korean data used for teaching sectional anatomy.

    PubMed

    Shin, Dong Sun; Chung, Min Suk; Park, Hyo Seok; Park, Jin Seo; Hwang, Sung Bae

    2011-01-01

    The interpretation of computed tomographs (CTs) and magnetic resonance images (MRIs) to diagnose clinical conditions requires basic knowledge of sectional anatomy. Sectional anatomy has traditionally been taught using sectioned cadavers, atlases, and/or computer software. The computer software commonly used for this subject is practical and efficient for students but could be more advanced. The objective of this research was to present browsing software developed from the Visible Korean images that can be used for teaching sectional anatomy. One thousand seven hundred and two sets of MRIs, CTs, and sectioned images (at one-millimeter intervals) of a whole male cadaver were prepared. Over 900 structures in the sectioned images were outlined and then filled with different colors to delineate each structure. Software was developed in which four corresponding images could be displayed simultaneously; in addition, the structures in the image data could be readily recognized with the aid of the color-filled outlines. The software, distributed free of charge, could be a valuable tool for teaching medical students. For example, sectional anatomy could be taught by showing the sectioned images with real color and high resolution. Students could then review the lecture by using the sectioned and color-filled images on their own computers. Students could also be evaluated using the same software. Furthermore, other investigators would be able to replace the images for more comprehensive sectional anatomy. Copyright © 2011 Wiley-Liss, Inc.

  8. Fluorescence imaging of tryptophan and collagen cross-links to evaluate wound closure ex vivo

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Ortega-Martinez, Antonio; Farinelli, Bill; Anderson, R. R.; Franco, Walfre

    2016-02-01

    Wound size is a key parameter in monitoring healing. Current methods to measure wound size are often subjective, time-consuming, and marginally invasive. Recently, we developed a non-invasive, non-contact, fast, and simple but robust fluorescence imaging (u-FEI) method to monitor the healing of skin wounds. This method exploits the fluorescence of molecules native to tissue as functional and structural markers. The objective of the present study is to demonstrate the feasibility of using variations in the fluorescence intensity of tryptophan and collagen cross-links to evaluate the proliferation of keratinocytes and to quantify wound size during healing, respectively. Circular dermal wounds were created in ex vivo human skin and cultured in different media. Two serial fluorescence images of tryptophan and collagen cross-links were acquired every two days. Histology and immunohistology were used to validate the correlation between fluorescence and epithelialization. Images of collagen cross-links show fluorescence of the exposed dermis and, hence, are a measure of wound area. Images of tryptophan show higher fluorescence intensity of proliferating keratinocytes forming new epithelium, as compared to surrounding keratinocytes not involved in epithelialization. These images are complementary, since collagen cross-links report on structure while tryptophan reports on function. H&E staining and immunohistology show that tryptophan fluorescence correlates with newly formed epidermis. We have established a fluorescence imaging method for studying epithelialization during wound healing in a skin organ culture model. Our approach has the potential to provide a non-invasive, non-contact, quick, objective, and direct method for quantitative measurements of wound healing in vivo.

  9. Mars Environmental Survey (MESUR): Science objectives and mission description

    NASA Technical Reports Server (NTRS)

    Hubbard, G. Scott; Wercinski, Paul F.; Sarver, George L.; Hanel, Robert P.; Ramos, Ruben

    1992-01-01

    In-situ observations and measurements of Mars are objectives of a feasibility study beginning at the Ames Research Center for a mission called the Mars Environmental SURvey (MESUR). The purpose of the MESUR mission is to emplace a pole-to-pole global distribution of landers on the Martian surface to make both short- and long-term observations of the atmosphere and surface. The basic concept is to deploy probes which would directly enter the Mars atmosphere, provide measurements of the upper atmospheric structure, image the local terrain before landing, and survive landing to perform meteorology, seismology, surface imaging, and soil chemistry measurements. MESUR is intended to be a relatively low-cost mission to advance both Mars science and human presence objectives. Mission philosophy is to: (1) 'grow' a network over a period of years using a series of launch opportunities, thereby minimizing the peak annual costs; (2) develop a level-of-effort which is flexible and responsive to a broad set of objectives; (3) focus on science while providing a solid basis for human exploration; and (4) minimize project cost and complexity wherever possible. In order to meet the diverse scientific objectives, each MESUR lander will carry the following strawman instrument payload consisting of: (1) Atmospheric structure experiment, (2) Descent and surface imagers, (3) Meteorology package, (4) Elemental composition instrument, (5) 3-axis seismometer, and (6) Thermal analyzer/evolved gas analyzer. The feasibility study is primarily to show a practical way to design an early capability for characterizing Mars' surface and atmospheric environment on a global scale. The goals are to answer some of the most urgent questions to advance significantly our scientific knowledge about Mars, and for planning eventual exploration of the planet by robots and humans.

  10. Integrated one- and two-photon scanned oblique plane illumination (SOPi) microscopy for rapid volumetric imaging

    NASA Astrophysics Data System (ADS)

    Kumar, Manish; Kishore, Sandeep; Nasenbeny, Jordan; McLean, David L.; Kozorovitskiy, Yevgenia

    2018-05-01

    The lack of versatile, sterically accessible imaging systems capable of rapid in vivo volumetric functional and structural imaging deep in the brain continues to be a limiting factor in neuroscience research. Towards overcoming this obstacle, we present integrated one- and two-photon scanned oblique plane illumination (SOPi) microscopy, which uses a single front-facing microscope objective to provide light-sheet-scanning-based rapid volumetric imaging capability at subcellular resolution. Our planar scan-mirror based optimized light-sheet architecture allows for non-distorted scanning of volume samples, simplifying accurate reconstruction of the imaged volume. Integration of both one-photon (1P) and two-photon (2P) light-sheet microscopy in the same system allows for easy selection between rapid volumetric imaging and higher resolution imaging in scattering media. Using SOPi, we demonstrate deep, large-volume imaging capability inside scattering mouse brain sections and rapid imaging speeds of up to 10 volumes per second in zebrafish larvae expressing the genetically encoded fluorescent proteins GFP or GCaMP6s. SOPi's flexibility and steric access make it adaptable for numerous imaging applications and broadly compatible with orthogonal techniques for actuating or interrogating neuronal structure and activity.

  11. Integrated one- and two-photon scanned oblique plane illumination (SOPi) microscopy for rapid volumetric imaging.

    PubMed

    Kumar, Manish; Kishore, Sandeep; Nasenbeny, Jordan; McLean, David L; Kozorovitskiy, Yevgenia

    2018-05-14

    The lack of versatile, sterically accessible imaging systems capable of rapid in vivo volumetric functional and structural imaging deep in the brain continues to be a limiting factor in neuroscience research. Towards overcoming this obstacle, we present integrated one- and two-photon scanned oblique plane illumination (SOPi, /sōpī/) microscopy, which uses a single front-facing microscope objective to provide light-sheet-scanning-based rapid volumetric imaging capability at subcellular resolution. Our planar scan-mirror based optimized light-sheet architecture allows for non-distorted scanning of volume samples, simplifying accurate reconstruction of the imaged volume. Integration of both one-photon (1P) and two-photon (2P) light-sheet microscopy in the same system allows for easy selection between rapid volumetric imaging and higher resolution imaging in scattering media. Using SOPi, we demonstrate deep, large-volume imaging capability inside scattering mouse brain sections and rapid imaging speeds of up to 10 volumes per second in zebrafish larvae expressing the genetically encoded fluorescent proteins GFP or GCaMP6s. SOPi's flexibility and steric access make it adaptable for numerous imaging applications and broadly compatible with orthogonal techniques for actuating or interrogating neuronal structure and activity.

  12. Defeating camouflage and finding explosives through spectral matched filtering of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Dombrowski, Mark S.; Willson, Paul D.; LaBaw, Clayton C.

    1997-01-01

    In order to achieve their goal of surreptitious operation within a country, terrorist organizations attempt to hide themselves from public view. In many instances such masking takes the form of simply appearing like the surrounding populace. In others, such as training facilities, standard military camouflaging techniques are used to conceal the group's equipment and activities. To effectively monitor and suppress activities of terrorist organizations, defeating the groups' attempt to hide is essential. Although finding individuals hiding within a society is extremely problematic, discovering camouflaged equipment, facilities, and personnel is readily accomplished by proper exploitation of hyperspectral imagery. Camouflage techniques attempt to make an object appear similar to its background, thereby making it difficult to find. Although making an object have similar color to its background is fairly easy, making it have the same spectral appearance is nearly impossible, unless the object is covered in the same material as the background. Even attempting to hide an object by covering it in background material will not work against a spectral imager since the act of moving the background material, e.g., foliage cuttings, changes the material's spectral characteristics. Hence, by collecting and properly exploiting spectral imagery, camouflaged objects can be readily differentiated from their background. This paper presents development of this technique, and of the MIDIS (multi-band identification and discrimination imaging spectroradiometer) instrument capable of real-time discrimination of camouflaged objects throughout a scene. Spectral matched-filtering of hyperspectral imagery also has the potential to find vehicles or structures which may be laden with explosives. Many explosives contain volatile materials, the release of which can be imaged by viewing appropriate spectral regions. Volatiles from the fuel oil in readily-produced ANFO are an example. If such volatiles were seen emanating from a vehicle or structure where they would not normally be expected, closer inspection would be warranted. Additionally, packing a vehicle with explosives often leaves trace residues on the outside of the vehicle. Spectral imaging and matched filtering can be used to identify these residues. Incorporation of spectral imaging surveillance equipment at probable terrorist targets could avert disasters such as the tragic bombing of the Murrah Federal Building in Oklahoma City. Application of MIDIS technology to explosive identification is also detailed.

  13. Analysis of image heterogeneity using 2D Minkowski functionals detects tumor responses to treatment.

    PubMed

    Larkin, Timothy J; Canuto, Holly C; Kettunen, Mikko I; Booth, Thomas C; Hu, De-En; Krishnan, Anant S; Bohndiek, Sarah E; Neves, André A; McLachlan, Charles; Hobson, Michael P; Brindle, Kevin M

    2014-01-01

    The acquisition of ever-increasing volumes of high resolution magnetic resonance imaging (MRI) data has created an urgent need to develop automated and objective image analysis algorithms that can assist in determining tumor margins, diagnosing tumor stage, and detecting treatment response. We have shown previously that Minkowski functionals, which are precise morphological and structural descriptors of image heterogeneity, can be used to enhance the detection, in T1-weighted images, of a targeted Gd(3+)-chelate-based contrast agent for detecting tumor cell death. We have used Minkowski functionals here to characterize heterogeneity in T2-weighted images acquired before and after drug treatment, and obtained without contrast agent administration. We show that Minkowski functionals can be used to characterize the changes in image heterogeneity that accompany treatment of tumors with a vascular disrupting agent, combretastatin A4-phosphate, and with a cytotoxic drug, etoposide. Parameterizing changes in the heterogeneity of T2-weighted images can be used to detect early responses of tumors to drug treatment, even when there is no change in tumor size. The approach provides a quantitative and therefore objective assessment of treatment response that could be used with other types of MR image and also with other imaging modalities. Copyright © 2013 Wiley Periodicals, Inc.
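
    For a binary (thresholded) 2D image, the three Minkowski functionals are the area, the perimeter, and the Euler characteristic. The sketch below computes simple discrete approximations of them, counting boundary edges for the perimeter and connected components minus holes for the Euler characteristic; it is an illustration of the descriptors themselves, not of the heterogeneity analysis pipeline in the paper.

```python
import numpy as np
from scipy import ndimage

def minkowski_2d(mask):
    """Discrete approximations of the three 2D Minkowski functionals of a
    binary image: area, perimeter (count of foreground/background pixel edges),
    and Euler characteristic (connected components minus holes)."""
    mask = mask.astype(bool)
    area = int(mask.sum())
    perimeter = int(np.sum(mask[:, 1:] != mask[:, :-1])
                    + np.sum(mask[1:, :] != mask[:-1, :])
                    + mask[0, :].sum() + mask[-1, :].sum()
                    + mask[:, 0].sum() + mask[:, -1].sum())
    _, n_components = ndimage.label(mask)
    bg_labels, _ = ndimage.label(~mask)
    border = set(np.unique(np.concatenate([bg_labels[0, :], bg_labels[-1, :],
                                           bg_labels[:, 0], bg_labels[:, -1]])))
    n_holes = len(set(np.unique(bg_labels)) - border - {0})
    return area, perimeter, n_components - n_holes

# Toy image: a square with a hole has Euler characteristic 0.
img = np.zeros((32, 32), dtype=bool)
img[8:24, 8:24] = True
img[14:18, 14:18] = False
print(minkowski_2d(img))
```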

  14. The influence of respiratory motion on CT image volume definition.

    PubMed

    Rodríguez-Romero, Ruth; Castro-Tejero, Pablo

    2014-04-01

    Radiotherapy treatments are based on geometric and density information acquired from patient CT scans. It is well established that breathing motion during scan acquisition induces motion artifacts in CT images, which can alter the size, shape, and density of a patient's anatomy. The aim of this work is to examine and evaluate the impact of breathing motion on multislice CT imaging with respiratory synchronization (4DCT) and without it (3DCT). A specific phantom with a movable insert was used. Static and dynamic phantom acquisitions were obtained with a multislice CT. Four sinusoidal breath patterns were simulated to move known geometric structures longitudinally. Respiratory-synchronized acquisitions (4DCT) were performed to generate images during inhale, intermediate, and exhale phases using prospective and retrospective techniques. Static phantom data were acquired in helical and sequential mode to define a baseline for each type of respiratory 4DCT technique. Taking into account the fact that respiratory 4DCT is not always available, 3DCT helical image studies were also acquired for several CT rotation periods. To study breath and acquisition coupling when respiratory 4DCT was not performed, the beginning of the CT image acquisition was matched with the inhale, intermediate, or exhale respiratory phase for each breath pattern. Other coupling scenarios were evaluated by simulating different phantom and CT acquisition parameters. Motion-induced variations in shape and density were quantified by automatic threshold volume generation and Dice similarity coefficient calculation. The structure mass center positions were also determined for comparison with their theoretically expected positions. 4DCT acquisitions provided volume and position accuracies within ± 3% and ± 2 mm for structure dimensions >2 cm, breath amplitude ≤ 15 mm, and breath period ≥ 3 s. The smallest object (1 cm diameter) exceeded 5% volume variation for the breath patterns of higher frequency and amplitude motion. Larger volume differences (>10%) and inconsistencies between the relative positions of objects were detected in image studies acquired without respiratory control. Increasing the 3DCT rotation period caused a higher distortion in structures without obtaining their envelope. Simulated data showed that the slice acquisition time should be at least twice the breath period to average object movement. Respiratory 4DCT images provide accurate volume and position of organs affected by breathing motion, with higher volume discrepancies detected as motion amplitude or breathing frequency increases. For 3DCT acquisitions, a CT can be considered slow enough to capture the lesion envelope as long as the slice acquisition time exceeds twice the breathing period. If this requirement cannot be satisfied, a fast CT (along with breath-hold inhale and exhale CTs to roughly estimate the ITV) is recommended in order to minimize structure distortion. Even with an awareness of a patient's respiratory cycle, its coupling with 3DCT acquisition cannot be predicted since patient anatomy is not accurately known. © 2014 American Association of Physicists in Medicine.
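
    The Dice similarity coefficient used for the volume comparison is simply twice the overlap divided by the sum of the two volumes. The sketch below computes it for two binary masks, with a toy shifted-sphere example standing in for a motion-distorted structure; all names and sizes are illustrative.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks or volumes."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: a static sphere versus the same sphere shifted along the motion axis.
z, y, x = np.mgrid[:48, :48, :48] - 24
static = (x ** 2 + y ** 2 + z ** 2) < 12 ** 2
shifted = ((x - 3) ** 2 + y ** 2 + z ** 2) < 12 ** 2
print("Dice:", round(float(dice(static, shifted)), 3))
```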

  15. The influence of respiratory motion on CT image volume definition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodríguez-Romero, Ruth, E-mail: rrromero@salud.madrid.org; Castro-Tejero, Pablo, E-mail: pablo.castro@salud.madrid.org

    Purpose: Radiotherapy treatments are based on geometric and density information acquired from patient CT scans. It is well established that breathing motion during scan acquisition induces motion artifacts in CT images, which can alter the size, shape, and density of a patient's anatomy. The aim of this work is to examine and evaluate the impact of breathing motion on multislice CT imaging with respiratory synchronization (4DCT) and without it (3DCT). Methods: A specific phantom with a movable insert was used. Static and dynamic phantom acquisitions were obtained with a multislice CT. Four sinusoidal breath patterns were simulated to move known geometric structures longitudinally. Respiratory-synchronized acquisitions (4DCT) were performed to generate images during inhale, intermediate, and exhale phases using prospective and retrospective techniques. Static phantom data were acquired in helical and sequential mode to define a baseline for each type of respiratory 4DCT technique. Taking into account the fact that respiratory 4DCT is not always available, 3DCT helical image studies were also acquired for several CT rotation periods. To study breath and acquisition coupling when respiratory 4DCT was not performed, the beginning of the CT image acquisition was matched with the inhale, intermediate, or exhale respiratory phase for each breath pattern. Other coupling scenarios were evaluated by simulating different phantom and CT acquisition parameters. Motion-induced variations in shape and density were quantified by automatic threshold volume generation and Dice similarity coefficient calculation. The structure mass center positions were also determined for comparison with their theoretically expected positions. Results: 4DCT acquisitions provided volume and position accuracies within ±3% and ±2 mm for structure dimensions >2 cm, breath amplitude ≤15 mm, and breath period ≥3 s. The smallest object (1 cm diameter) exceeded 5% volume variation for the breath patterns of higher frequency and amplitude motion. Larger volume differences (>10%) and inconsistencies between the relative positions of objects were detected in image studies acquired without respiratory control. Increasing the 3DCT rotation period caused a higher distortion in structures without obtaining their envelope. Simulated data showed that the slice acquisition time should be at least twice the breath period to average object movement. Conclusions: Respiratory 4DCT images provide accurate volume and position of organs affected by breathing motion, with higher volume discrepancies detected as motion amplitude or breathing frequency increases. For 3DCT acquisitions, a CT can be considered slow enough to capture the lesion envelope as long as the slice acquisition time exceeds twice the breathing period. If this requirement cannot be satisfied, a fast CT (along with breath-hold inhale and exhale CTs to roughly estimate the ITV) is recommended in order to minimize structure distortion. Even with an awareness of a patient's respiratory cycle, its coupling with 3DCT acquisition cannot be predicted since patient anatomy is not accurately known.

  16. Building and degradation of secondary cell walls: are there common patterns of lamellar assembly of cellulose microfibrils and cell wall delamination?

    PubMed

    De Micco, Veronica; Ruel, Katia; Joseleau, Jean-Paul; Aronne, Giovanna

    2010-08-01

    During cell wall formation and degradation, cellulose microfibrils can be observed assembling into thicker lamellar structures and disassembling into thinner ones, respectively, following inverse parallel patterns. The aim of this study was to analyse such patterns of microfibril aggregation and cell wall delamination. The thickness of microfibrils and lamellae was measured on digital images of both growing and degrading cell walls viewed by means of transmission electron microscopy. To objectively detect, measure, and classify microfibrils and lamellae into thickness classes, a method based on computerized image analysis combined with graphical and statistical methods was developed. The method allowed common classes of microfibrils and lamellae to be identified in cell walls from different origins. During both the formation and degradation of cell walls, a preferential formation of structures with specific thicknesses was evidenced. The results obtained with the developed method allowed objective analysis of patterns of microfibril aggregation and evidenced a trend of doubling/halving of lamellar structures during cell wall formation/degradation in materials of different origins that had undergone different treatments.

  17. Method of optical coherence tomography with parallel depth-resolved signal reception and fibre-optic phase modulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, A N; Turchin, I V

    2013-12-31

    A method of optical coherence tomography with parallel reception of the interference signal (P-OCT) is developed, based on spatial parallelisation of the reference wave by means of a phase diffraction grating that produces the appropriate time delays in a Mach–Zehnder interferometer. The absence of mechanical variation of the optical path difference in the interferometer essentially reduces the time required for 2D imaging of the internal structure of an object, as compared to classical OCT using the time-domain method of image construction, while the sensitivity and the dynamic range are comparable in both approaches. For the resulting field of the interfering object and reference waves, an analytical expression is derived that allows the calculation of the autocorrelation function in the plane of the photodetectors. For the first time, a method of linear phase modulation by 2π is proposed for P-OCT systems, which allows the use of compact high-frequency (a few hundred kHz) piezoelectric-cell-based modulators. To demonstrate the P-OCT method, an experimental setup was created with which images of the inner structure of biological objects were obtained at depths of up to 1 mm with an axial spatial resolution of 12 μm. (optical coherence tomography)

  18. Recognition of lesion correspondence on two mammographic views: a new method of false-positive reduction for computerized mass detection

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Petrick, Nicholas; Chan, Heang-Ping; Paquerault, Sophie; Helvie, Mark A.; Hadjiiski, Lubomir M.

    2001-07-01

    We used the correspondence of detected structures on two views of the same breast for false-positive (FP) reduction in computerized detection of mammographic masses. For each initially detected object on one view, we considered all possible pairings with objects on the other view that fell within a radial band defined by the nipple-to-object distances. We designed a 'correspondence classifier' to classify these pairs as either the same mass (a TP-TP pair) or a mismatch (a TP-FP, FP-TP or FP-FP pair). For each pair, similarity measures of morphological and texture features were derived and used as input features in the correspondence classifier. Two-view mammograms from 94 cases were used as a preliminary data set. Initial detection provided 6.3 FPs/image at 96% sensitivity. Further FP reduction in single view resulted in 1.9 FPs/image at 80% sensitivity and 1.1 FPs/image at 70% sensitivity. By combining single-view detection with the correspondence classifier, detection accuracy improved to 1.5 FPs/image at 80% sensitivity and 0.7 FPs/image at 70% sensitivity. Our preliminary results indicate that the correspondence of geometric, morphological, and textural features of a mass on two different views provides valuable additional information for reducing FPs.
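
    The pairing step described above can be illustrated with a small sketch; the radial-band gate and the feature-similarity score below are simplified stand-ins for the authors' correspondence classifier:

```python
import numpy as np

def candidate_pairs(objects_view1, objects_view2, band_mm=15.0):
    """Pair detections whose nipple-to-object distances agree within a radial band.

    Each detection is a dict with 'dist' (nipple-to-object distance, mm) and
    'features' (a 1-D feature vector); both structures are illustrative only.
    """
    pairs = []
    for i, a in enumerate(objects_view1):
        for j, b in enumerate(objects_view2):
            if abs(a["dist"] - b["dist"]) <= band_mm:
                # Simple similarity stand-in: negative Euclidean distance between features.
                score = -np.linalg.norm(a["features"] - b["features"])
                pairs.append((i, j, score))
    return sorted(pairs, key=lambda p: p[2], reverse=True)

view_cc = [{"dist": 42.0, "features": np.array([0.8, 0.3])},
           {"dist": 70.0, "features": np.array([0.2, 0.9])}]
view_mlo = [{"dist": 45.0, "features": np.array([0.7, 0.35])}]
print(candidate_pairs(view_cc, view_mlo))   # only the first CC object falls within the band
```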

  19. Deep Learning for Low-Textured Image Matching

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Fedorenko, V. V.; Fomin, N. A.

    2018-05-01

    Low-textured objects pose challenges for automatic 3D model reconstruction. Such objects are common in archeological applications of photogrammetry. Most common feature point descriptors fail to match local patches in featureless regions of an object. Hence, automatic documentation of the archeological process using Structure from Motion (SfM) methods is challenging, although such documentation is possible with the aid of a human operator. Recently, deep learning-based descriptors have outperformed most common feature point descriptors. This paper focuses on the development of a new Wide Image Zone Adaptive Robust feature Descriptor (WIZARD) based on deep learning. We use a convolutional auto-encoder to compress discriminative features of a local patch into a descriptor code. We build a codebook to perform point matching on multiple images. The matching is performed using nearest neighbor search and a modified voting algorithm. We present a new "Multi-view Amphora" (Amphora) dataset for the evaluation of point matching algorithms. The dataset includes images of an Ancient Greek vase found on the Taman Peninsula in Southern Russia. The dataset provides color images, a ground truth 3D model, and a ground truth optical flow. We evaluated the WIZARD descriptor on the "Amphora" dataset to show that it outperforms the SIFT and SURF descriptors on complex patch pairs.
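
    The descriptor-matching stage can be sketched without the learned auto-encoder; below, random vectors stand in for descriptor codes and matching is a plain nearest-neighbor search with a ratio test (the modified voting step is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for auto-encoder descriptor codes: one 64-D code per local patch.
codes_image_a = rng.normal(size=(200, 64))
codes_image_b = codes_image_a[rng.permutation(200)] + 0.01 * rng.normal(size=(200, 64))

def nearest_neighbor_matches(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in A to its nearest neighbor in B, keeping unambiguous matches."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if best < ratio * second:          # Lowe-style ratio test
            matches.append((i, int(order[0])))
    return matches

print(len(nearest_neighbor_matches(codes_image_a, codes_image_b)), "matches")
```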

  20. Computer Vision-Based Structural Displacement Measurement Robust to Light-Induced Image Degradation for In-Service Bridges

    PubMed Central

    Lee, Junhwa; Lee, Kyoung-Chan; Cho, Soojin

    2017-01-01

    The displacement responses of a civil engineering structure can provide important information regarding structural behaviors that help in assessing safety and serviceability. A displacement measurement using conventional devices, such as the linear variable differential transformer (LVDT), is challenging owing to inconvenient sensor installation, which often requires additional temporary structures. A promising alternative is offered by computer vision, which typically provides a low-cost and non-contact displacement measurement that converts the movement of an object, typically an attached marker, in the captured images into structural displacement. However, there is limited research on addressing light-induced measurement error caused by the inevitable sunlight in field-testing conditions. This study presents a computer vision-based displacement measurement approach tailored to a field-testing environment with enhanced robustness to strong sunlight. An image-processing algorithm with an adaptive region-of-interest (ROI) is proposed to reliably determine a marker’s location even when the marker is indistinct due to unfavorable light. The performance of the proposed system is experimentally validated in both laboratory-scale and field experiments. PMID:29019950
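
    The adaptive region-of-interest idea can be sketched with ordinary template matching; OpenCV's matchTemplate is used here as a generic stand-in for the authors' algorithm, and the millimeter-per-pixel scale factor is assumed to come from a separate calibration:

```python
import cv2
import numpy as np

def track_marker(frame, template, prev_xy, roi_half=60, mm_per_pixel=0.5):
    """Locate the marker near its previous position and return its displacement in mm.

    frame, template: grayscale images (np.uint8); prev_xy: top-left corner of the
    previous match. The search window (ROI) is re-centered on that point each frame.
    """
    h, w = frame.shape
    x0 = int(np.clip(prev_xy[0] - roi_half, 0, w - 2 * roi_half))
    y0 = int(np.clip(prev_xy[1] - roi_half, 0, h - 2 * roi_half))
    roi = frame[y0:y0 + 2 * roi_half, x0:x0 + 2 * roi_half]

    # Normalized cross-correlation of the marker template over the adaptive ROI.
    result = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    new_xy = (x0 + max_loc[0], y0 + max_loc[1])

    displacement_mm = ((new_xy[0] - prev_xy[0]) * mm_per_pixel,
                       (new_xy[1] - prev_xy[1]) * mm_per_pixel)
    return new_xy, displacement_mm
```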
