Sample records for imagers captured multiple

  1. Systems and Methods for Imaging of Falling Objects

    NASA Technical Reports Server (NTRS)

    Fallgatter, Cale (Inventor); Garrett, Tim (Inventor)

    2014-01-01

    Imaging of falling objects is described. Multiple images of a falling object can be captured substantially simultaneously using multiple cameras located at multiple angles around the falling object. An epipolar geometry of the captured images can be determined. The images can be rectified to parallelize epipolar lines of the epipolar geometry. Correspondence points between the images can be identified. At least a portion of the falling object can be digitally reconstructed using the identified correspondence points to create a digital reconstruction.
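
    A minimal sketch of this pipeline on an ordinary two-view pair, using OpenCV. The file names, feature detector, and placeholder projection matrices are illustrative assumptions, not the patent's calibrated multi-camera rig:

    ```python
    import cv2
    import numpy as np

    img1 = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical captures
    img2 = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

    # 1. Detect and match candidate correspondence points.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # 2. Estimate the epipolar geometry (fundamental matrix) with RANSAC.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    pts1, pts2 = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]

    # 3. Rectify the images so the epipolar lines become parallel.
    h, w = img1.shape
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
    rect1 = cv2.warpPerspective(img1, H1, (w, h))
    rect2 = cv2.warpPerspective(img2, H2, (w, h))

    # 4. With known projection matrices (placeholders here), triangulate the
    #    inlier correspondences into a 3D point cloud.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    points_3d = (X[:3] / X[3]).T  # Euclidean points of the digital reconstruction
    ```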

  2. Simultaneous acquisition of differing image types

    DOEpatents

    Demos, Stavros G

    2012-10-09

    A system in one embodiment includes an image forming device for forming an image from an area of interest containing different image components; an illumination device for illuminating the area of interest with light containing multiple components; at least one light source coupled to the illumination device, the at least one light source providing light to the illumination device containing different components, each component having distinct spectral characteristics and relative intensity; an image analyzer coupled to the image forming device, the image analyzer decomposing the image formed by the image forming device into multiple component parts based on type of imaging; and multiple image capture devices, each image capture device receiving one of the component parts of the image. A method in one embodiment includes receiving an image from an image forming device; decomposing the image formed by the image forming device into multiple component parts based on type of imaging; receiving the component parts of the image; and outputting image information based on the component parts of the image. Additional systems and methods are presented.

  3. Using focused plenoptic cameras for rich image capture.

    PubMed

    Georgiev, T; Lumsdaine, A; Chunev, G

    2011-01-01

    This approach uses a focused plenoptic camera to capture the plenoptic function's rich "non 3D" structure. It employs two techniques. The first simultaneously captures multiple exposures (or other aspects) based on a microlens array having an interleaved set of different filters. The second places multiple filters at the main lens aperture.

  4. Three-dimensional fluorescent microscopy via simultaneous illumination and detection at multiple planes.

    PubMed

    Ma, Qian; Khademhosseinieh, Bahar; Huang, Eric; Qian, Haoliang; Bakowski, Malina A; Troemel, Emily R; Liu, Zhaowei

    2016-08-16

    The conventional optical microscope is an inherently two-dimensional (2D) imaging tool. The objective lens, eyepiece and image sensor are all designed to capture light emitted from a 2D 'object plane'. Existing technologies, such as confocal or light sheet fluorescence microscopy, have to utilize mechanical scanning, a time-multiplexing process, to capture a 3D image. In this paper, we present a 3D optical microscopy method based upon simultaneously illuminating and detecting multiple focal planes. This is implemented by adding two diffractive optical elements to modify the illumination and detection optics. We demonstrate that the image quality of this technique is comparable to conventional light sheet fluorescence microscopy, with the advantages of simultaneous imaging of multiple axial planes and a reduced number of scans required to image the whole sample volume.

  5. Light ray field capture using focal plane sweeping and its optical reconstruction using 3D displays.

    PubMed

    Park, Jae-Hyeung; Lee, Sung-Keun; Jo, Na-Young; Kim, Hee-Jae; Kim, Yong-Soo; Lim, Hong-Gi

    2014-10-20

    We propose a method to capture the light ray field of a three-dimensional scene using focal plane sweeping. Multiple images are captured with a conventional camera at different focal distances spanning the three-dimensional scene. The captured images are then back-projected into four-dimensional spatio-angular space to obtain the light ray field. The obtained light ray field can be visualized either by digital processing or by optical reconstruction using various three-dimensional display techniques, including integral imaging, layered displays, and holography.
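
    The back-projection step lends itself to a compact sketch. Below is a toy NumPy version assuming a simple two-plane (shift-and-add) light field model; the array shapes, per-depth shear slopes, and angular resolution are illustrative assumptions, not the paper's calibrated parameters:

    ```python
    import numpy as np

    def backproject_focal_stack(stack, slopes, n_angles=9):
        """stack: (K, H, W) images focused at K depths; slopes[k]: pixel shear
        per unit angle for depth k. Returns an approximate 4D light ray field
        L[v, u, y, x] by smearing each focal slice back along the ray
        directions it integrated over."""
        K, H, W = stack.shape
        us = np.arange(n_angles) - n_angles // 2
        L = np.zeros((n_angles, n_angles, H, W))
        for k in range(K):
            for iu, u in enumerate(us):
                for iv, v in enumerate(us):
                    # undo the shear this (depth, angle) pair implies
                    dy = int(round(slopes[k] * v))
                    dx = int(round(slopes[k] * u))
                    L[iv, iu] += np.roll(stack[k], (dy, dx), axis=(0, 1))
        return L / K  # average the contributions over the sweep
    ```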

  6. 4D multiple-cathode ultrafast electron microscopy

    PubMed Central

    Baskin, John Spencer; Liu, Haihua; Zewail, Ahmed H.

    2014-01-01

    Four-dimensional multiple-cathode ultrafast electron microscopy is developed to enable the capture of multiple images at ultrashort time intervals for a single microscopic dynamic process. The dynamic process is initiated in the specimen by one femtosecond light pulse and probed by multiple packets of electrons generated by one UV laser pulse impinging on multiple, spatially distinct, cathode surfaces. Each packet is distinctly recorded, with timing and detector location controlled by the cathode configuration. In the first demonstration, two packets of electrons on each image frame (of the CCD) probe different times, separated by 19 picoseconds, in the evolution of the diffraction of a gold film following femtosecond heating. Future elaborations of this concept to extend its capabilities and expand the range of applications of 4D ultrafast electron microscopy are discussed. The proof-of-principle demonstration reported here provides a path toward the imaging of irreversible ultrafast phenomena of materials, and opens the door to studies involving the single-frame capture of ultrafast dynamics using single-pump/multiple-probe, embedded stroboscopic imaging. PMID:25006261

  7. 4D multiple-cathode ultrafast electron microscopy.

    PubMed

    Baskin, John Spencer; Liu, Haihua; Zewail, Ahmed H

    2014-07-22

    Four-dimensional multiple-cathode ultrafast electron microscopy is developed to enable the capture of multiple images at ultrashort time intervals for a single microscopic dynamic process. The dynamic process is initiated in the specimen by one femtosecond light pulse and probed by multiple packets of electrons generated by one UV laser pulse impinging on multiple, spatially distinct, cathode surfaces. Each packet is distinctly recorded, with timing and detector location controlled by the cathode configuration. In the first demonstration, two packets of electrons on each image frame (of the CCD) probe different times, separated by 19 picoseconds, in the evolution of the diffraction of a gold film following femtosecond heating. Future elaborations of this concept to extend its capabilities and expand the range of applications of 4D ultrafast electron microscopy are discussed. The proof-of-principle demonstration reported here provides a path toward the imaging of irreversible ultrafast phenomena of materials, and opens the door to studies involving the single-frame capture of ultrafast dynamics using single-pump/multiple-probe, embedded stroboscopic imaging.

  8. Design and implementation of a contactless multiple hand feature acquisition system

    NASA Astrophysics Data System (ADS)

    Zhao, Qiushi; Bu, Wei; Wu, Xiangqian; Zhang, David

    2012-06-01

    In this work, an integrated contactless multiple hand feature acquisition system is designed. The system can capture palmprint, palm vein, and palm dorsal vein images simultaneously. Moreover, the images are captured in a contactless manner; that is, users do not need to touch any part of the device during capture. Palmprint is imaged under visible illumination, while palm vein and palm dorsal vein are imaged under near infrared (NIR) illumination. The capture is computer-controlled and the whole process takes less than 1 second, which is sufficient for online biometric systems. Based on this device, this paper also implements a contactless hand-based multimodal biometric system. Palmprint, palm vein, palm dorsal vein, finger vein, and hand geometry features are extracted from the captured images. After similarity measurement, the matching scores are fused using a weighted sum fusion rule. Experimental results show that although the verification accuracy of each single modality is not as high as that of state-of-the-art systems, the fusion result is superior to most existing hand-based biometric systems. This result indicates that the proposed device is well suited to contactless multimodal hand-based biometrics.
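
    The weighted sum rule used at the fusion stage is simple to state in code. A minimal sketch, assuming min-max normalized similarity scores; the modality weights and decision threshold below are hypothetical, not values from the paper:

    ```python
    def fuse_scores(scores, weights):
        """Weighted-sum score-level fusion of normalized matching scores."""
        return sum(weights[m] * scores[m] for m in scores)

    # hypothetical per-modality similarity scores in [0, 1]
    scores = {"palmprint": 0.82, "palm_vein": 0.74, "dorsal_vein": 0.69,
              "finger_vein": 0.77, "hand_geometry": 0.55}
    weights = {"palmprint": 0.3, "palm_vein": 0.2, "dorsal_vein": 0.2,
               "finger_vein": 0.2, "hand_geometry": 0.1}  # sum to 1

    fused = fuse_scores(scores, weights)
    accepted = fused >= 0.7  # hypothetical verification threshold
    ```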

  9. Enhanced image capture through fusion

    NASA Technical Reports Server (NTRS)

    Burt, Peter J.; Hanna, Keith; Kolczynski, Raymond J.

    1993-01-01

    Image fusion may be used to combine images from different sensors, such as IR and visible cameras, to obtain a single composite with extended information content. Fusion may also be used to combine multiple images from a given sensor to form a composite image in which information of interest is enhanced. We present a general method for performing image fusion and show that this method is effective for diverse fusion applications. We suggest that fusion may provide a powerful tool for enhanced image capture with broad utility in image processing and computer vision.
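
    As a rough illustration of pyramid-based fusion in the spirit of this method, the sketch below builds Laplacian pyramids of two source images, keeps the larger-magnitude coefficient in each band, and averages the low-pass residual. The level count and the max-magnitude selection rule are simplifying assumptions, not the authors' exact salience and match measures:

    ```python
    import cv2
    import numpy as np

    def laplacian_pyramid(img, levels=4):
        g = [img.astype(np.float32)]
        for _ in range(levels):
            g.append(cv2.pyrDown(g[-1]))
        lap = [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1])
               for i in range(levels)]
        return lap + [g[-1]]  # band-pass levels plus the low-pass residual

    def fuse(img_a, img_b, levels=4):
        pa = laplacian_pyramid(img_a, levels)
        pb = laplacian_pyramid(img_b, levels)
        # keep the stronger coefficient in each band-pass level...
        fused = [np.where(np.abs(a) >= np.abs(b), a, b)
                 for a, b in zip(pa[:-1], pb[:-1])]
        fused.append(0.5 * (pa[-1] + pb[-1]))  # ...and average the residual
        out = fused[-1]
        for lvl in reversed(fused[:-1]):
            out = cv2.pyrUp(out, dstsize=lvl.shape[1::-1]) + lvl
        return np.clip(out, 0, 255).astype(np.uint8)

    # e.g. composite = fuse(cv2.imread("ir.png", 0), cv2.imread("visible.png", 0))
    ```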

  10. Free Surface Downgoing VSP Multiple Imaging

    NASA Astrophysics Data System (ADS)

    Maula, Fahdi; Dac, Nguyen

    2018-03-01

    A vertical seismic profile (VSP) is commonly used to capture the reflection (upgoing) wavefield for well ties and other interpretations. Borehole seismic (VSP) receivers capture reflections from below the well trajectory, so traditionally no seismic image information is obtained above the trajectory. Non-traditional processing of VSP multiples can extend the imaging above the well trajectory. This paper presents a case study of using VSP downgoing multiples for such non-traditional imaging applications. In conventional VSP processing, upgoing and downgoing arrivals are separated: the upgoing wavefield is used for subsurface illumination, whereas the downgoing wavefield and multiples are normally excluded from the processing. In a situation where the downgoing wavefield passes the reflectors several times (as a multiple), it carries reflection information, which can be used for a seismic tie up to the seabed and, potentially, for shallow-hazard identification. One widely known concept for downgoing imaging is the mirror-imaging technique. A case study from deep water offshore Vietnam is presented to demonstrate the robustness of the technique and the limitations encountered during its processing.

  11. Portable LED-induced autofluorescence imager with a probe of L shape for oral cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Huang, Ting-Wei; Lee, Yu-Cheng; Cheng, Nai-Lun; Yan, Yung-Jhe; Chiang, Hou-Chi; Chiou, Jin-Chern; Mang, Ou-Yang

    2015-08-01

    The difference in spectral distribution between excited fluorescence from epithelial lesions and from normal cells is one method for cancer diagnosis. In our previous work, we developed a portable LED-induced autofluorescence (LIAF) imager containing multiple wavelengths of LED excitation light and multiple filters to capture ex-vivo oral tissue autofluorescence images. Our portable system for the detection of oral cancer has a probe in front of the lens to fix the object distance. The probe is cone-shaped, which makes it inconvenient for doctors to capture oral images at an appropriate viewing angle in front of the probe. Therefore, a probe of L shape containing a mirror is proposed so that doctors can capture images at the right angles and subjects do not need to open their mouths uncomfortably wide. Besides, a glass plate is placed in the probe to prevent liquid from entering the device, but light reflected from the glass plate directly causes light spots inside the images. We set the glass plate in front of the LED to avoid the light spots: when the distance between the glass plate and the LED module plane is less than a critical value, the light spots caused by the glass plate can be prevented. The experiments show that images captured with the new probe, in which the glass plate is placed at the back end of the probe, contain no light spots.

  12. Comparison of mosaicking techniques for airborne images from consumer-grade cameras

    USDA-ARS?s Scientific Manuscript database

    Images captured from airborne imaging systems have the advantages of relatively low cost, high spatial resolution, and real/near-real-time availability. Multiple images taken from one or more flight lines could be used to generate a high-resolution mosaic image, which could be useful for diverse rem...

  13. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor.

    PubMed

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-Ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-03-05

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons generated by a single pixel's photodiode among the taps of different exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
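
    The classical estimation step referenced here reduces to a per-pixel least-squares solve. A minimal NumPy sketch, assuming a Lambertian surface and K >= 3 known unit lighting directions (array names are illustrative):

    ```python
    import numpy as np

    def photometric_stereo(images, light_dirs):
        """images: (K, H, W) grayscale captures; light_dirs: (K, 3) unit
        vectors. Solves I = L @ g per pixel, where g = albedo * normal."""
        K, H, W = images.shape
        I = images.reshape(K, -1)                           # (K, H*W)
        G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
        albedo = np.linalg.norm(G, axis=0)
        normals = G / np.maximum(albedo, 1e-8)              # unit normals
        return normals.T.reshape(H, W, 3), albedo.reshape(H, W)
    ```

    With a multi-tap sensor, the K inputs are the per-tap images of one exposure cycle rather than K sequential frames, which is what makes the dynamic case tractable.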

  14. Simultaneous multiview capture and fusion improves spatial resolution in wide-field and light-sheet microscopy

    PubMed Central

    Wu, Yicong; Chandris, Panagiotis; Winter, Peter W.; Kim, Edward Y.; Jaumouillé, Valentin; Kumar, Abhishek; Guo, Min; Leung, Jacqueline M.; Smith, Corey; Rey-Suarez, Ivan; Liu, Huafeng; Waterman, Clare M.; Ramamurthi, Kumaran S.; La Riviere, Patrick J.; Shroff, Hari

    2016-01-01

    Most fluorescence microscopes are inefficient, collecting only a small fraction of the emitted light at any instant. Besides wasting valuable signal, this inefficiency also reduces spatial resolution and causes imaging volumes to exhibit significant resolution anisotropy. We describe microscopic and computational techniques that address these problems by simultaneously capturing and subsequently fusing and deconvolving multiple specimen views. Unlike previous methods that serially capture multiple views, our approach improves spatial resolution without introducing any additional illumination dose or compromising temporal resolution relative to conventional imaging. When applying our methods to single-view wide-field or dual-view light-sheet microscopy, we achieve a twofold improvement in volumetric resolution (~235 nm × 235 nm × 340 nm) as demonstrated on a variety of samples including microtubules in Toxoplasma gondii, SpoVM in sporulating Bacillus subtilis, and multiple protein distributions and organelles in eukaryotic cells. In every case, spatial resolution is improved with no drawback by harnessing previously unused fluorescence. PMID:27761486

  15. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms.

    PubMed

    Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro

    2012-09-10

    We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using the fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The FFT is performed using the cuFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
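
    As a toy illustration of FFT-based hologram generation (the paper's actual pipeline runs on a GPU cluster via CUDA and cuFFT), the sketch below propagates a stand-in object field with the angular spectrum method and records its interference with a tilted plane reference wave. The wavelength, pixel pitch, distance, and random object are illustrative assumptions:

    ```python
    import numpy as np

    def angular_spectrum(field, wavelength, pitch, z):
        """Propagate a complex field by distance z using two FFTs."""
        H, W = field.shape
        fy = np.fft.fftfreq(H, d=pitch)[:, None]
        fx = np.fft.fftfreq(W, d=pitch)[None, :]
        arg = 1.0 - (wavelength * fx) ** 2 - (wavelength * fy) ** 2
        kernel = np.exp(2j * np.pi * (z / wavelength)
                        * np.sqrt(np.maximum(arg, 0.0)))
        return np.fft.ifft2(np.fft.fft2(field) * kernel)

    # stand-in for an object plane derived from a captured IP image
    amp = np.random.rand(512, 512)
    obj = amp * np.exp(2j * np.pi * np.random.rand(512, 512))
    at_display = angular_spectrum(obj, wavelength=532e-9, pitch=4.8e-6, z=0.05)

    # interfere with a tilted plane reference wave; the intensity pattern is
    # the fringe image sent to the display panel
    x = np.arange(512) * 4.8e-6
    ref = np.exp(2j * np.pi * np.sin(np.deg2rad(0.5)) / 532e-9 * x)[None, :]
    hologram = np.abs(at_display + ref) ** 2
    ```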

  16. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene, and before processing the next image, generate information indicative of the position and orientation of an image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three dimensional (3D) model of at least a portion of the scene that appears likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
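
    A minimal sketch of the per-frame pose step this patent describes, using OpenCV's RANSAC PnP to recover the capture device's position and orientation from 2D image features matched to 3D model points. The correspondences and intrinsics below are placeholders, not the patent's matching procedure:

    ```python
    import cv2
    import numpy as np

    # placeholder 2D-3D correspondences (in practice: matched feature points)
    object_pts = np.random.rand(50, 3).astype(np.float32)       # 3D model points
    image_pts = np.random.rand(50, 2).astype(np.float32) * 640  # 2D image points
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_pts, image_pts, K, distCoeffs=None,
        reprojectionError=3.0, confidence=0.99)
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix; together with tvec this
                                # gives the consistent projection of the model
    ```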

  17. Spacecraft Images Comet Target Jets

    NASA Image and Video Library

    2010-11-04

    NASA's Deep Impact spacecraft's High- and Medium-Resolution Imagers (HRI and MRI) captured multiple jets emanating from comet Hartley 2, turning on and off, while the spacecraft was 8 million kilometers (5 million miles) from the comet.

  18. Coprates Chasma - False Color

    NASA Image and Video Library

    2014-12-10

    The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image captured by NASA's 2001 Mars Odyssey spacecraft shows part of Coprates Chasma.

  19. Hebes Chasma - False Color

    NASA Image and Video Library

    2014-12-08

    The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image captured by NASA's 2001 Mars Odyssey spacecraft shows part of Hebes Chasma.

  20. Terra Sabaea - False Color

    NASA Image and Video Library

    2016-02-01

    The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image captured by NASA's 2001 Mars Odyssey spacecraft shows part of the plains of Terra Sabaea.

  21. Melas Chasma - False Color

    NASA Image and Video Library

    2014-12-09

    The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image captured by NASA's 2001 Mars Odyssey spacecraft shows part of Melas Chasma.

  22. Coprates Chasma - False Color

    NASA Image and Video Library

    2014-12-11

    The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image captured by NASA's 2001 Mars Odyssey spacecraft shows part of Coprates Chasma.

  23. Craters - False Color

    NASA Image and Video Library

    2016-02-04

    The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image captured by NASA's 2001 Mars Odyssey spacecraft shows a group of unnamed craters north of Fournier Crater.

  24. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor †

    PubMed Central

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-01-01

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons generated by a single pixel's photodiode among the taps of different exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes. PMID:29510599

  25. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    PubMed

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
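
    The alignment strategy the authors favor can be sketched briefly: map each exposure to a radiant power image, then match feature descriptors between those images. The inverse camera response below is a crude gamma assumption standing in for a real calibration, and the file names and exposure times are hypothetical:

    ```python
    import cv2
    import numpy as np

    def to_radiance(img8, exposure_s, gamma=2.2):
        linear = (img8.astype(np.float32) / 255.0) ** gamma  # assumed response
        return linear / exposure_s                           # undo exposure

    a = to_radiance(cv2.imread("cam_short.png", 0), exposure_s=1 / 1000)
    b = to_radiance(cv2.imread("cam_long.png", 0), exposure_s=1 / 30)

    # descriptors need 8-bit input, so log-compress the radiance maps
    to8 = lambda r: cv2.normalize(np.log1p(r), None, 0, 255,
                                  cv2.NORM_MINMAX).astype(np.uint8)
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(to8(a), None)
    k2, d2 = sift.detectAndCompute(to8(b), None)
    good = [m for m, n in cv2.BFMatcher().knnMatch(d1, d2, k=2)
            if m.distance < 0.75 * n.distance]
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # maps camera A to B
    ```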

  26. Image Alignment for Multiple Camera High Dynamic Range Microscopy

    PubMed Central

    Eastwood, Brian S.; Childs, Elisabeth C.

    2012-01-01

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera. PMID:22545028

  27. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    PubMed

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
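
    A CPU-side sketch of the shift-and-sum reconstruction and the variance test (the paper implements both in parallel on a GPU with a precomputed lookup table of shifts; NumPy stands in here). The linear shift model and the threshold are simplified assumptions about the pickup geometry:

    ```python
    import numpy as np

    def reconstruct_depth(elemental, sy, sx):
        """elemental: (Ny, Nx, H, W) elemental images; (sy, sx): per-index
        pixel shift at the depth being reconstructed. Returns the depth
        slice and the per-pixel variance across contributing samples."""
        Ny, Nx, H, W = elemental.shape
        samples = np.empty((Ny * Nx, H, W), dtype=np.float32)
        for iy in range(Ny):
            for ix in range(Nx):
                shift = (int(round(sy * iy)), int(round(sx * ix)))
                samples[iy * Nx + ix] = np.roll(elemental[iy, ix], shift,
                                                axis=(0, 1))
        return samples.mean(axis=0), samples.var(axis=0)

    # slice_, variance = reconstruct_depth(elemental, sy, sx)
    # focus_mask = variance < tau  # low variance -> focus point;
    #                              # tau is a hypothetical threshold
    ```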

  28. Wide-field Fourier ptychographic microscopy using laser illumination source

    PubMed Central

    Chung, Jaebum; Lu, Hangwen; Ou, Xiaoze; Zhou, Haojiang; Yang, Changhuei

    2016-01-01

    Fourier ptychographic (FP) microscopy is a coherent imaging method that can synthesize an image with a higher bandwidth using multiple low-bandwidth images captured at different spatial frequency regions. The method’s demand for multiple images drives the need for a brighter illumination scheme and a high-frame-rate camera for a faster acquisition. We report the use of a guided laser beam as an illumination source for an FP microscope. It uses a mirror array and a 2-dimensional scanning Galvo mirror system to provide a sample with plane-wave illuminations at diverse incidence angles. The use of a laser presents speckles in the image capturing process due to reflections between glass surfaces in the system. They appear as slowly varying background fluctuations in the final reconstructed image. We are able to mitigate these artifacts by including a phase image obtained by differential phase contrast (DPC) deconvolution in the FP algorithm. We use a 1-Watt laser configured to provide a collimated beam with 150 mW of power and beam diameter of 1 cm to allow for the total capturing time of 0.96 seconds for 96 raw FPM input images in our system, with the camera sensor’s frame rate being the bottleneck for speed. We demonstrate a factor of 4 resolution improvement using a 0.1 NA objective lens over the full camera field-of-view of 2.7 mm by 1.5 mm. PMID:27896016

  29. Recreation of three-dimensional objects in a real-time simulated environment by means of a panoramic single lens stereoscopic image-capturing device

    NASA Astrophysics Data System (ADS)

    Wong, Erwin

    2000-03-01

    Traditional linear-based imaging methods limit the viewer to a single fixed-point perspective. By means of a single-lens multiple-perspective mirror system, a 360-degree representation of the area around the camera is reconstructed. This reconstruction is used to overcome the limitations of a traditional camera by providing the viewer with many different perspectives. By constructing the mirror as a hemispherical surface with multiple focal lengths at various diameters on the mirror, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction are performed by computer processing of the image obtained by the camera; the image presents up to five distinguishable viewpoints from which a computer can extrapolate pseudo-perspective data. Geometric and depth-of-field data can be extrapolated via comparison and isolation of objects within a virtual scene post-processed by the computer. Combining the data with scene-rendering software provides the viewer with the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination of the workings of the mirror relay system is provided, including possible image extrapolation and correction methods. Generation of virtual interpolated and constructed data is also discussed.

  30. Penrose high-dynamic-range imaging

    NASA Astrophysics Data System (ADS)

    Li, Jia; Bai, Chenyan; Lin, Zhouchen; Yu, Jian

    2016-05-01

    High-dynamic-range (HDR) imaging is becoming increasingly popular and widespread. The most common multishot HDR approach, based on multiple low-dynamic-range images captured with different exposures, has difficulties in handling camera and object movements. The spatially varying exposures (SVE) technology provides a solution to overcome this limitation by obtaining multiple exposures of the scene in only one shot but suffers from a loss in spatial resolution of the captured image. While aperiodic assignment of exposures has been shown to be advantageous during reconstruction in alleviating resolution loss, almost all the existing imaging sensors use the square pixel layout, which is a periodic tiling of square pixels. We propose the Penrose pixel layout, using pixels in aperiodic rhombus Penrose tiling, for HDR imaging. With the SVE technology, Penrose pixel layout has both exposure and pixel aperiodicities. To investigate its performance, we have to reconstruct HDR images in square pixel layout from Penrose raw images with SVE. Since the two pixel layouts are different, the traditional HDR reconstruction methods are not applicable. We develop a reconstruction method for Penrose pixel layout using a Gaussian mixture model for regularization. Both quantitative and qualitative results show the superiority of Penrose pixel layout over square pixel layout.

  31. Tile-Image Merging and Delivering for Virtual Camera Services on Tiled-Display for Real-Time Remote Collaboration

    NASA Astrophysics Data System (ADS)

    Choe, Giseok; Nang, Jongho

    The tiled-display system has been used as a Computer Supported Cooperative Work (CSCW) environment, in which multiple local (and/or remote) participants cooperate using shared applications whose outputs are displayed on a large-scale, high-resolution tiled display controlled by a cluster of PCs, one PC per display. In order to make the collaboration effective, each remote participant should be aware of all CSCW activities on the tiled-display system in real time. This paper presents a mechanism for capturing all activities on the tiled-display system and delivering them to remote participants in real time. In the proposed mechanism, the screen images of all PCs are periodically captured and delivered to the Merging Server, which maintains separate buffers to store the captured images from the PCs. The mechanism selects one tile image from each buffer, merges the images to make a screen shot of the whole tiled display, clips a Region of Interest (ROI), compresses it, and streams it to remote participants in real time. A technical challenge in the proposed mechanism is how to select a set of tile images, one from each buffer, for merging so that the tile images displayed at the same time on the tiled display can be properly merged together. This paper presents three selection algorithms: a sequential selection algorithm, a capturing-time-based algorithm, and a capturing-time and visual-consistency-based algorithm. It also proposes a mechanism for providing several virtual cameras on the tiled-display system to remote participants by concurrently clipping several different ROIs from the same merged tiled-display images and delivering them after compression with the video encoders requested by the remote participants. By interactively changing and resizing his/her own ROI, a remote participant can check the activities on the tiled display effectively. Experiments on a 3 × 2 tiled-display system show that the proposed merging algorithm can build a tiled-display image stream synchronously, and the ROI-based clipping and delivering mechanism can provide individual views on the tiled-display system to multiple remote participants in real time.
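
    The capturing-time-based selection is easy to sketch: for a common target timestamp, pick from each tile's buffer the frame captured closest to it, so that tiles shown together are merged together. The buffer layout below is an assumption:

    ```python
    import bisect

    def select_tiles(buffers, target_time):
        """buffers: one time-sorted list of (timestamp, image) pairs per tile
        PC. Returns one image per tile, captured closest to target_time."""
        chosen = []
        for buf in buffers:
            times = [t for t, _ in buf]
            i = bisect.bisect_left(times, target_time)
            # compare the neighbors around the insertion point
            candidates = [j for j in (i - 1, i) if 0 <= j < len(buf)]
            best = min(candidates, key=lambda j: abs(times[j] - target_time))
            chosen.append(buf[best][1])
        return chosen
    ```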

  32. Multiple roles of filopodial dynamics in particle capture and phagocytosis and phenotypes of Cdc42 and Myo10 deletion

    PubMed Central

    Horsthemke, Markus; Bachg, Anne C.; Groll, Katharina; Moyzio, Sven; Müther, Barbara; Hemkemeyer, Sandra A.; Wedlich-Söldner, Roland; Sixt, Michael; Tacke, Sebastian; Bähler, Martin; Hanley, Peter J.

    2017-01-01

    Macrophage filopodia, finger-like membrane protrusions, were first implicated in phagocytosis more than 100 years ago, but little is still known about the involvement of these actin-dependent structures in particle clearance. Using spinning disk confocal microscopy to image filopodial dynamics in mouse resident Lifeact-EGFP macrophages, we show that filopodia, or filopodia-like structures, support pathogen clearance by multiple means. Filopodia supported the phagocytic uptake of bacterial (Escherichia coli) particles by (i) capturing along the filopodial shaft and surfing toward the cell body, the most common mode of capture; (ii) capturing via the tip followed by retraction; (iii) combinations of surfing and retraction; or (iv) sweeping actions. In addition, filopodia supported the uptake of zymosan (Saccharomyces cerevisiae) particles by (i) providing fixation, (ii) capturing at the tip and filopodia-guided actin anterograde flow with phagocytic cup formation, and (iii) the rapid growth of new protrusions. To explore the role of filopodia-inducing Cdc42, we generated myeloid-restricted Cdc42 knock-out mice. Cdc42-deficient macrophages exhibited rapid phagocytic cup kinetics, but reduced particle clearance, which could be explained by the marked rounded-up morphology of these cells. Macrophages lacking Myo10, thought to act downstream of Cdc42, had normal morphology, motility, and phagocytic cup formation, but displayed markedly reduced filopodia formation. In conclusion, live-cell imaging revealed multiple mechanisms involving macrophage filopodia in particle capture and engulfment. Cdc42 is not critical for filopodia or phagocytic cup formation, but plays a key role in driving macrophage lamellipodial spreading. PMID:28289096

  33. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    PubMed Central

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-01-01

    Background: Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection per pixel. This limitation has been overcome by the introduction of parallel-beam illumination techniques in combination with cold CCD camera based image capture. Methods: Using the combination of microlens-enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large-scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum-resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results: We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs, and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion: The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes. PMID:18627634

  34. Extended Field Laser Confocal Microscopy (EFLCM): combining automated Gigapixel image capture with in silico virtual microscopy.

    PubMed

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-07-16

    Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection per pixel. This limitation has been overcome by the introduction of parallel-beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens-enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large-scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum-resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs, and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.

  35. Multi-channel medical imaging system

    DOEpatents

    Frangioni, John V

    2013-12-31

    A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remain in the subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may provide an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide used to capture images. The system may be configured for use in open surgical procedures by providing an operating area that is closed to ambient light. The systems described herein provide two or more diagnostic imaging channels for capture of multiple, concurrent diagnostic images and may be used where a visible light image may be usefully supplemented by two or more images that are independently marked for functional interest.

  36. Multi-channel medical imaging system

    DOEpatents

    Frangioni, John V.

    2016-05-03

    A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remain in a subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may provide an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide used to capture images. The system may be configured for use in open surgical procedures by providing an operating area that is closed to ambient light. The systems described herein provide two or more diagnostic imaging channels for capture of multiple, concurrent diagnostic images and may be used where a visible light image may be usefully supplemented by two or more images that are independently marked for functional interest.

  37. Computerized image analysis for acetic acid induced intraepithelial lesions

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Ferris, Daron G.; Lieberman, Rich W.

    2008-03-01

    Cervical Intraepithelial Neoplasia (CIN) exhibits certain morphologic features that can be identified during a visual inspection exam. Immature and dysplastic cervical squamous epithelium turns white after application of acetic acid during the exam. The whitening process occurs visually over several minutes and subjectively discriminates between dysplastic and normal tissue. Digital imaging technologies allow us to assist the physician in analyzing the acetic acid induced lesions (acetowhite regions) in a fully automatic way. This paper reports a study designed to measure multiple parameters of the acetowhitening process from two images captured with a digital colposcope. One image is captured before the acetic acid application, and the other is captured after the acetic acid application. The spatial change of the acetowhitening is extracted using color and texture information in the post-acetic-acid image; the temporal change is extracted from the intensity and color changes between the post-acetic-acid and pre-acetic-acid images with an automatic alignment. The imaging and data analysis system has been evaluated with a total of 99 human subjects and demonstrates its potential for screening underserved women where access to skilled colposcopists is limited.
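
    A rough sketch of the temporal-change measurement: align the pre- and post-application captures, then threshold the per-pixel intensity increase. The ECC alignment and the fixed threshold below are generic stand-ins for the paper's automatic alignment and trained parameters; the file names are hypothetical:

    ```python
    import cv2
    import numpy as np

    pre = cv2.imread("pre_acetic.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    post = cv2.imread("post_acetic.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    # automatic alignment of the two captures (Euclidean motion model)
    warp = np.eye(2, 3, dtype=np.float32)
    _, warp = cv2.findTransformECC(pre, post, warp, cv2.MOTION_EUCLIDEAN)
    post_aligned = cv2.warpAffine(post, warp, pre.shape[::-1],
                                  flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

    change = post_aligned - pre        # acetowhitening = intensity increase
    acetowhite_mask = change > 25      # hypothetical threshold
    ```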

  38. Nonlinear filtering for character recognition in low quality document images

    NASA Astrophysics Data System (ADS)

    Diaz-Escobar, Julia; Kober, Vitaly

    2014-09-01

    Optical character recognition in scanned printed documents is a well-studied task, where capture conditions such as sheet position, illumination, contrast, and resolution are controlled. Nowadays, it is often more practical to use mobile devices than a scanner for document capture. As a consequence, the quality of document images is often poor owing to the presence of geometric distortions, nonhomogeneous illumination, low resolution, etc. In this work we propose to use multiple adaptive nonlinear composite filters for the detection and classification of characters. Computer simulation results obtained with the proposed system are presented and discussed.

  39. 2013 R&D 100 Award: Movie-mode electron microscope captures nanoscale

    ScienceCinema

    Lagrange, Thomas; Reed, Bryan

    2018-01-26

    A new instrument developed by LLNL scientists and engineers, the Movie Mode Dynamic Transmission Electron Microscope (MM-DTEM), captures billionth-of-a-meter-scale images with frame rates more than 100,000 times faster than those of conventional techniques. The work was done in collaboration with a Pleasanton-based company, Integrated Dynamic Electron Solutions (IDES) Inc. Using this revolutionary imaging technique, a range of fundamental and technologically important material and biological processes can be captured in action, in complete billionth-of-a-meter detail, for the first time. The primary application of MM-DTEM is the direct observation of fast processes, including microstructural changes, phase transformations and chemical reactions, that shape real-world performance of nanostructured materials and potentially biological entities. The instrument could prove especially valuable in the direct observation of macromolecular interactions, such as protein-protein binding and host-pathogen interactions. While an earlier version of the technology, Single Shot-DTEM, could capture a single snapshot of a rapid process, MM-DTEM captures a multiframe movie that reveals complex sequences of events in detail. It is the only existing technology that can capture multiple electron microscopy images in the span of a single microsecond.

  40. 2013 R&D 100 Award: Movie-mode electron microscope captures nanoscale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagrange, Thomas; Reed, Bryan

    2014-04-03

    A new instrument developed by LLNL scientists and engineers, the Movie Mode Dynamic Transmission Electron Microscope (MM-DTEM), captures billionth-of-a-meter-scale images with frame rates more than 100,000 times faster than those of conventional techniques. The work was done in collaboration with a Pleasanton-based company, Integrated Dynamic Electron Solutions (IDES) Inc. Using this revolutionary imaging technique, a range of fundamental and technologically important material and biological processes can be captured in action, in complete billionth-of-a-meter detail, for the first time. The primary application of MM-DTEM is the direct observation of fast processes, including microstructural changes, phase transformations and chemical reactions, that shape real-world performance of nanostructured materials and potentially biological entities. The instrument could prove especially valuable in the direct observation of macromolecular interactions, such as protein-protein binding and host-pathogen interactions. While an earlier version of the technology, Single Shot-DTEM, could capture a single snapshot of a rapid process, MM-DTEM captures a multiframe movie that reveals complex sequences of events in detail. It is the only existing technology that can capture multiple electron microscopy images in the span of a single microsecond.

  41. A novel method for the photographic recovery of fingermark impressions from ammunition cases using digital imaging.

    PubMed

    Porter, Glenn; Ebeyan, Robert; Crumlish, Charles; Renshaw, Adrian

    2015-03-01

    The photographic preservation of fingermark impression evidence found on ammunition cases remains problematic due to the cylindrical shape of the deposition substrate preventing complete capture of the impression in a single image. A novel method was developed for the photographic recovery of fingermarks from curved surfaces using digital imaging. The process involves the digital construction of a complete impression image made from several different images captured from multiple camera perspectives. Fingermark impressions deposited onto 9-mm and 0.22-caliber brass cartridge cases and a plastic 12-gauge shotgun shell were tested using various image parameters, including digital stitching method, number of images per 360° rotation of shell, image cropping, and overlap. The results suggest that this method may be successfully used to recover fingermark impression evidence from the surfaces of ammunition cases or other similar cylindrical surfaces. © 2014 American Academy of Forensic Sciences.

  42. Multispectral high-resolution hologram generation using orthographic projection images

    NASA Astrophysics Data System (ADS)

    Muniraj, I.; Guo, C.; Sheridan, J. T.

    2016-08-01

    We present a new method of synthesizing a digital hologram of three-dimensional (3D) real-world objects from multiple orthographic projection images (OPIs). High-resolution multiple perspectives of the 3D objects (i.e., a two-dimensional elemental image array) are captured under incoherent white light using the synthetic aperture integral imaging (SAII) technique, and the corresponding OPIs are obtained. The reference beam is then multiplied with each OPI and integrated to form a Fourier hologram. Finally, a modified phase retrieval algorithm (GS/HIO) is applied to reconstruct the hologram. The principle is validated experimentally and the results support the feasibility of the proposed method.

  43. Multiple Hypotheses Image Segmentation and Classification With Application to Dietary Assessment

    PubMed Central

    Zhu, Fengqing; Bosch, Marc; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2016-01-01

    We propose a method for dietary assessment to automatically identify and locate food in a variety of images captured during controlled and natural eating events. Two concepts are combined to achieve this: a set of segmented objects can be partitioned into perceptually similar object classes based on global and local features; and perceptually similar object classes can be used to assess the accuracy of image segmentation. These ideas are implemented by generating multiple segmentations of an image to select stable segmentations based on the classifier’s confidence score assigned to each segmented image region. Automatic segmented regions are classified using a multichannel feature classification system. For each segmented region, multiple feature spaces are formed. Feature vectors in each of the feature spaces are individually classified. The final decision is obtained by combining class decisions from individual feature spaces using decision rules. We show improved accuracy of segmenting food images with classifier feedback. PMID:25561457

  44. Multiple hypotheses image segmentation and classification with application to dietary assessment.

    PubMed

    Zhu, Fengqing; Bosch, Marc; Khanna, Nitin; Boushey, Carol J; Delp, Edward J

    2015-01-01

    We propose a method for dietary assessment to automatically identify and locate food in a variety of images captured during controlled and natural eating events. Two concepts are combined to achieve this: a set of segmented objects can be partitioned into perceptually similar object classes based on global and local features; and perceptually similar object classes can be used to assess the accuracy of image segmentation. These ideas are implemented by generating multiple segmentations of an image to select stable segmentations based on the classifier's confidence score assigned to each segmented image region. Automatic segmented regions are classified using a multichannel feature classification system. For each segmented region, multiple feature spaces are formed. Feature vectors in each of the feature spaces are individually classified. The final decision is obtained by combining class decisions from individual feature spaces using decision rules. We show improved accuracy of segmenting food images with classifier feedback.

  45. Development of a balloon-borne device for analysis of high-altitude ice and aerosol particulates: Ice Cryo Encapsulator by Balloon (ICE-Ball)

    NASA Astrophysics Data System (ADS)

    Boaggio, K.; Bandamede, M.; Bancroft, L.; Hurler, K.; Magee, N. B.

    2016-12-01

    We report on details of continuing instrument development and deployment of a novel balloon-borne device for capturing and characterizing atmospheric ice and aerosol particles, the Ice Cryo Encapsulator by Balloon (ICE-Ball). The device is designed to capture and preserve cirrus ice particles, maintaining them at cold equilibrium temperatures, so that high-altitude particles can be recovered, transferred intact, and then imaged under SEM at an unprecedented resolution (approximately 3 nm maximum resolution). In addition to cirrus ice particles, high-altitude aerosol particles are also captured, imaged, and analyzed for geometry, chemical composition, and activity as ice nucleating particles. Prototype versions of ICE-Ball have successfully captured and preserved high-altitude ice particles and aerosols, then returned them for recovery, SEM imaging, and analysis. New improvements include 1) the ability to capture particles from multiple narrowly-defined altitudes on a single payload, 2) high-quality measurements of coincident temperature, humidity, and high-resolution video at capture altitude, 3) the ability to capture particles during both ascent and descent, 4) better characterization of particle collection volume and collection efficiency, and 5) improved isolation and characterization of the capture-cell cryo environment. This presentation provides detailed capability specifications for anyone interested in using measurements, collaborating on continued instrument development, or including this instrument in ongoing or future field campaigns.

  46. A new hue capturing technique for the quantitative interpretation of liquid crystal images used in convective heat transfer studies

    NASA Technical Reports Server (NTRS)

    Camci, C.; Kim, K.; Hippensteele, S. A.

    1992-01-01

    A new image processing based color capturing technique for the quantitative interpretation of liquid crystal images used in convective heat transfer studies is presented. This method is highly applicable to the surfaces exposed to convective heating in gas turbine engines. It is shown that, in the single-crystal mode, many of the colors appearing on the heat transfer surface correlate strongly with the local temperature. A very accurate quantitative approach using an experimentally determined linear hue vs temperature relation is found to be possible. The new hue-capturing process is discussed in terms of the strength of the light source illuminating the heat transfer surface, the effect of the orientation of the illuminating source with respect to the surface, crystal layer uniformity, and the repeatability of the process. The present method is more advantageous than the multiple filter method because of its ability to generate many isotherms simultaneously from a single-crystal image at a high resolution in a very time-efficient manner.
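
    The linear hue-versus-temperature mapping at the heart of the method is simple to express in code. The calibration pairs, image file, and isotherm band below are made-up illustrations; in practice the fit comes from reference measurements under the same illumination:

    ```python
    import cv2
    import numpy as np

    # hypothetical calibration points: (hue, temperature in degC)
    cal_hue = np.array([30.0, 60.0, 90.0, 120.0])
    cal_temp = np.array([35.2, 37.1, 39.0, 40.9])
    slope, intercept = np.polyfit(cal_hue, cal_temp, 1)  # linear hue-vs-T fit

    img = cv2.imread("liquid_crystal.png")               # BGR capture
    hue = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.float32)
    temperature = slope * hue + intercept                # per-pixel map (degC)

    # many isotherms can be drawn at once from the single-crystal image
    isotherm_39 = np.abs(temperature - 39.0) < 0.1
    ```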

  47. A multimodal 3D framework for fire characteristics estimation

    NASA Astrophysics Data System (ADS)

    Toulouse, T.; Rossi, L.; Akhloufi, M. A.; Pieri, A.; Maldague, X.

    2018-02-01

    In the last decade we have witnessed an increasing interest in using computer vision and image processing in forest fire research. Image processing techniques have been successfully used in different fire analysis areas such as early detection, monitoring, modeling, and fire front characteristics estimation. While the majority of the work deals with the use of 2D visible spectrum images, recent work has introduced the use of 3D vision in this field. This work proposes a new multimodal vision framework permitting the extraction of the three-dimensional geometrical characteristics of fires captured by multiple 3D vision systems. The 3D system is a multispectral stereo system operating in both the visible and near-infrared (NIR) spectral bands. The framework supports the use of multiple stereo pairs positioned so as to capture complementary views of the fire front during its propagation. Multimodal registration is conducted using the captured views in order to build a complete 3D model of the fire front. The registration process is achieved using multisensory fusion based on visual data (2D and NIR images), GPS positions, and IMU inertial data. Experiments were conducted outdoors to show the performance of the proposed framework. The obtained results are promising and show the potential of the proposed framework in operational scenarios for wildland fire research and as a decision-support tool in fire fighting.

  8. Rectification of curved document images based on single view three-dimensional reconstruction.

    PubMed

    Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang

    2016-10-01

    Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role in document digitization systems that use a camera for image capturing. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption about the corresponding 3D shape, our framework is more flexible and practical since it only requires a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text lines, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. To examine the performance of the proposed method, both qualitative and quantitative evaluation and comparison with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements in both visual distortion removal and OCR accuracy.
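
    The iterative baseline-refinement idea can be conveyed as a robust line fit over connected-component centroids: fit, discard outliers, refit. The sketch below is assumption-laden (the thresholds and iteration count are not the paper's) and is meant only to show the shape of such a loop.

        # Minimal sketch: iterative robust baseline fit (Python, NumPy assumed).
        import numpy as np

        def fit_baseline(centroids, n_iter=5, k=2.0):
            """centroids: (N, 2) array of (x, y) text-component centers."""
            pts = np.asarray(centroids, dtype=float)
            a = b = 0.0
            for _ in range(n_iter):
                a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)   # y ~ a*x + b
                resid = pts[:, 1] - (a * pts[:, 0] + b)
                keep = np.abs(resid) <= k * resid.std() + 1e-9
                if keep.all() or keep.sum() < 3:             # converged or too few points
                    break
                pts = pts[keep]                              # drop outliers, refit
            return a, b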

  9. Melas Chasma - False Color

    NASA Image and Video Library

    2017-07-13

    The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Melas Chasma. Orbit Number: 59750 Latitude: -10.5452 Longitude: 290.307 Instrument: VIS Captured: 2015-06-03 12:33 https://photojournal.jpl.nasa.gov/catalog/PIA21705

  10. Melas Chasma - False Color

    NASA Image and Video Library

    2015-08-21

    The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Melas Chasma. Orbit Number: 10289 Latitude: -9.9472 Longitude: 285.933 Instrument: VIS Captured: 2004-04-09 12:43 http://photojournal.jpl.nasa.gov/catalog/PIA19756

  11. A fast and mobile system for registration of low-altitude visual and thermal aerial images using multiple small-scale UAVs

    NASA Astrophysics Data System (ADS)

    Yahyanejad, Saeed; Rinner, Bernhard

    2015-06-01

    The use of multiple small-scale UAVs to support first responders in disaster management has become popular because of their speed and low deployment costs. We exploit such UAVs to perform real-time monitoring of target areas by fusing individual images captured from heterogeneous aerial sensors. Many approaches have already been presented to register images from homogeneous sensors. These methods have demonstrated robustness against scale, rotation and illumination variations and can also cope with limited overlap among individual images. In this paper we focus on thermal and visual image registration and propose different methods to improve the quality of interspectral registration for the purpose of real-time monitoring and mobile mapping. Images captured by low-altitude UAVs represent a very challenging scenario for interspectral registration due to the strong variations in overlap, scale, rotation, point of view and structure of such scenes. Furthermore, these small-scale UAVs have limited processing and communication power. The contributions of this paper include (i) the introduction of a feature descriptor for robustly identifying corresponding regions of images in different spectra, (ii) the registration of image mosaics, and (iii) the registration of depth maps. We evaluated the first method using a test data set consisting of 84 image pairs. In all instances our approach combined with SIFT or SURF feature-based registration was superior to the standard versions. Although we focus mainly on aerial imagery, our evaluation shows that the presented approach would also be beneficial in other scenarios such as surveillance and human detection. Furthermore, we demonstrated the advantages of the other two methods in the case of multiple image pairs.
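
    For reference, the standard SIFT-based registration that the proposed descriptor is combined with and compared against can be sketched with OpenCV; the Lowe ratio threshold and RANSAC tolerance below are common defaults, not values taken from the paper.

        # Sketch of feature-based registration (requires opencv-python >= 4.4).
        import cv2
        import numpy as np

        def register(img_a, img_b):
            sift = cv2.SIFT_create()
            kp_a, des_a = sift.detectAndCompute(img_a, None)
            kp_b, des_b = sift.detectAndCompute(img_b, None)
            matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]
            src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            return H  # warp img_a onto img_b with cv2.warpPerspective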

  12. Specialized Color Targets for Spectral Reflectance Reconstruction of Magnified Images

    NASA Astrophysics Data System (ADS)

    Kruschwitz, Jennifer D. T.

    Digital images are used almost exclusively instead of film to capture visual information across many scientific fields. The colorimetric color representation within these digital images can be derived from the digital counts produced by the camera with the use of a known color target. For the capture of magnified images, there is currently no reliable color target that can be used at multiple magnifications and give the user a solid understanding of the color ground truth within those images. The first part of this dissertation included the design, fabrication, and testing of a color target produced with optical-interference-coated microlenses for use in an off-axis illumination, compound microscope. An ideal target was designed to increase the color gamut for colorimetric imaging and provide the necessary "Block Dye" spectral reflectance profiles across the visible spectrum to reduce the number of color patches necessary for multiple-filter imaging systems that rely on statistical models for spectral reflectance reconstruction. Other scientific disciplines can also benefit from a specialized color target to determine the color ground truth in their magnified images and perform spectral estimation, and not every discipline has the luxury of a multi-filter imaging system. The second part of this dissertation developed two unique ways of using an interference-coated color mirror target: one that relies on multiple light-source angles, and one that leverages a dynamic color change with time. The multi-angle source technique would be used in the microelectronics discipline, where the reconstructed spectral reflectance would determine a dielectric film thickness on a silicon substrate, and the time-varying technique would be used in a biomedical example to determine the thickness of the human tear film.

  13. Efficient Smart CMOS Camera Based on FPGAs Oriented to Embedded Image Processing

    PubMed Central

    Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L.; Espinosa, Felipe; García, Jorge

    2011-01-01

    This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high-speed 1.2-megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families make it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal PowerPC (PPC) microprocessor. This in turn hosts a standalone system which manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely. PMID:22163739

  14. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television because it avoids the psychological side effects associated with stereoscopic viewing. To create truly engaging three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The depth calculation method from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and a further improvement in its precision are proposed and verified.

  15. Cryo-Scanning Electron Microscopy of Captured Cirrus Ice Particles

    NASA Astrophysics Data System (ADS)

    Magee, N. B.; Boaggio, K.; Bandamede, M.; Bancroft, L.; Hurler, K.

    2016-12-01

    We present the latest collection of high-resolution cryo-scanning electron microscopy images and microanalysis of cirrus ice particles captured by high-altitude balloon (ICE-Ball, see abstracts by K. Boaggio and M. Bandamede). Ice particle images and sublimation residues are derived from particles captured during approximately 15 balloon flights conducted in Pennsylvania and New Jersey over the past 12 months. Measurements include 3D digital elevation model reconstructions of ice particles, and associated statistical analyses of entire particles and of particle sub-facets and surfaces. This 3D analysis reveals that the morphologies of most captured ice particles deviate significantly from ideal habits, displaying geometric complexity and surface roughness at multiple measurable scales, ranging from hundreds of nanometers to hundreds of microns. The presentation suggests a potential path forward for representing scattering from a realistically complex array of ice particle shapes and surfaces.

  16. Instrumentation for simultaneous kinetic imaging of multiple fluorophores in single living cells

    NASA Astrophysics Data System (ADS)

    Morris, Stephen J.; Beatty, Diane M.; Welling, Larry W.; Wiegmann, Thomas B.

    1991-05-01

    Low-light fluorescence video microscopy has established itself as an excellent method for investigations of cell dynamics. There is a growing interest in resolving multiple images of 'ratio' fluorophores like indo or BCECF, or the emission from multiple dyes placed in the same cell system. For rapid kinetic studies, the problems of photodynamic damage and photobleaching on one hand, and the need for good spatial and temporal resolution on the other, press the limits of the instrumentation. Rapid resolution of multiple probes at multiple wavelengths presents a third set of problems: exciting the probes and appropriately imaging the emitted light. The authors have designed a new real-time low-light fluorescence video microscope for capturing intensified images of up to four dyes contained in the same cell system. These can be two dual-emission-wavelength 'ratio' dyes or multiple dyes. The optics allow simultaneous excitation of up to four fluorophores and the real-time (30 frames/second) capture of four separate fluorescence emission images. Each emission wavelength is imaged simultaneously by one of four cameras, then digitized and appropriately combined at standard video frame rates to be stored at high resolution on tape or video disk for further off-line correction and analysis. The design has no moving parts in its optical train, which overcomes a number of technical difficulties encountered in filter wheel or mechanical shutter designs for multiple imaging. The instrument can be assembled from off-the-shelf components. Coupled to compatible image processing software utilizing PC-AT computers, it can be realized at relatively low cost. Two examples of simultaneous multi-parameter imaging are presented. Synchronous observations of calcium and pH distribution in kidney epithelial cells, loaded with both indo-1 and SNARF-1TM, show that both are altered in response to ionomycin treatment; however, the kinetics of the two changes are quite different. Intracellular calcium increases rapidly when the bath Ca2+ is raised; the pH remains stable for several seconds, then suddenly collapses. The second example concerns fusion of human red blood cells (RBC) to fibroblasts expressing influenza hemagglutinin. Movement of soluble and membrane-bound dyes follows different kinetics, depending upon the molecular weight of the soluble dye. Furthermore, the swelling of the RBC occurs after the onset of fusion, and therefore cannot provide the driving force.

  17. A Unified Framework for Street-View Panorama Stitching

    PubMed Central

    Li, Li; Yao, Jian; Xie, Renping; Xia, Menghan; Zhang, Wei

    2016-01-01

    In this paper, we propose a unified framework to generate a pleasant and high-quality street-view panorama by stitching multiple panoramic images captured from cameras mounted on a mobile platform. Our proposed framework comprises four major steps: image warping, color correction, optimal seam line detection and image blending. Because the input images are captured without a precisely common projection center, and scene depths vary with respect to the cameras, such images cannot be precisely aligned geometrically. Therefore, an efficient image warping method based on the dense optical flow field is first proposed to greatly suppress the influence of large geometric misalignment. Then, to lessen the influence of photometric inconsistencies caused by illumination variations and different exposure settings, we propose an efficient color correction algorithm that matches extreme points of histograms to greatly decrease color differences between warped images. After that, the optimal seam lines between adjacent input images are detected via the graph cut energy minimization framework. At last, the Laplacian pyramid blending algorithm is applied to further eliminate stitching artifacts along the optimal seam lines. Experimental results on a large set of challenging street-view panoramic images captured from the real world illustrate that the proposed system is capable of creating high-quality panoramas. PMID:28025481
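
    The final blending stage of such a pipeline is commonly realized as a Laplacian pyramid blend along the seam mask; a minimal single-channel sketch follows (the level count and float handling are assumptions, not the authors' implementation).

        # Sketch of Laplacian pyramid blending (OpenCV); a, b: float32 images,
        # mask: float32 seam mask in [0, 1], all the same size.
        import cv2

        def laplacian_blend(a, b, mask, levels=5):
            ga, gb, gm = [a], [b], [mask]
            for _ in range(levels):                      # Gaussian pyramids
                ga.append(cv2.pyrDown(ga[-1]))
                gb.append(cv2.pyrDown(gb[-1]))
                gm.append(cv2.pyrDown(gm[-1]))
            size = lambda img: (img.shape[1], img.shape[0])
            la = [ga[i] - cv2.pyrUp(ga[i + 1], dstsize=size(ga[i]))
                  for i in range(levels)] + [ga[levels]]  # Laplacian pyramid of a
            lb = [gb[i] - cv2.pyrUp(gb[i + 1], dstsize=size(gb[i]))
                  for i in range(levels)] + [gb[levels]]  # Laplacian pyramid of b
            blended = [gm[i] * la[i] + (1 - gm[i]) * lb[i] for i in range(levels + 1)]
            out = blended[levels]                        # collapse coarse-to-fine
            for i in range(levels - 1, -1, -1):
                out = cv2.pyrUp(out, dstsize=size(blended[i])) + blended[i]
            return out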

  18. Development of a piecewise linear omnidirectional 3D image registration method

    NASA Astrophysics Data System (ADS)

    Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo

    2016-12-01

    This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in 3D space. Depending on the intended use of image registration, the proposed method can be used to improve image registration accuracy or to reduce the computation time, because the trade-off between computation time and registration accuracy can be controlled. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration to reduce image distortion caused by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration; it can therefore enhance the effectiveness of the geometry recognition process, increase registration accuracy by increasing the number of cameras or feature points per image, increase registration speed by reducing the number of cameras or feature points per image, and provide simultaneous information on the shapes and colors of captured objects.

  19. Optic probe for multiple angle image capture and optional stereo imaging

    DOEpatents

    Malone, Robert M.; Kaufman, Morris I.

    2016-11-29

    A probe including a multiple lens array is disclosed to measure velocity distribution of a moving surface along many lines of sight. Laser light, directed to the moving surface is reflected back from the surface and is Doppler shifted, collected into the array, and then directed to detection equipment through optic fibers. The received light is mixed with reference laser light and using photonic Doppler velocimetry, a continuous time record of the surface movement is obtained. An array of single-mode optical fibers provides an optic signal to the multiple lens array. Numerous fibers in a fiber array project numerous rays to establish many measurement points at numerous different locations. One or more lens groups may be replaced with imaging lenses so a stereo image of the moving surface can be recorded. Imaging a portion of the surface during initial travel can determine whether the surface is breaking up.

  20. Using the Logarithm of Odds to Define a Vector Space on Probabilistic Atlases

    PubMed Central

    Pohl, Kilian M.; Fisher, John; Bouix, Sylvain; Shenton, Martha; McCarley, Robert W.; Grimson, W. Eric L.; Kikinis, Ron; Wells, William M.

    2007-01-01

    The Logarithm of the Odds ratio (LogOdds) is frequently used in areas such as artificial neural networks, economics, and biology, as an alternative representation of probabilities. Here, we use LogOdds to place probabilistic atlases in a linear vector space. This representation has several useful properties for medical imaging. For example, it not only encodes the shape of multiple anatomical structures but also captures some information concerning uncertainty. We demonstrate that the resulting vector space operations of addition and scalar multiplication have natural probabilistic interpretations. We discuss several examples for placing label maps into the space of LogOdds. First, we relate signed distance maps, a widely used implicit shape representation, to LogOdds and compare it to an alternative that is based on smoothing by spatial Gaussians. We find that the LogOdds approach better preserves shapes in a complex multiple object setting. In the second example, we capture the uncertainty of boundary locations by mapping multiple label maps of the same object into the LogOdds space. Third, we define a framework for non-convex interpolations among atlases that capture different time points in the aging process of a population. We evaluate the accuracy of our representation by generating a deformable shape atlas that captures the variations of anatomical shapes across a population. The deformable atlas is the result of a principal component analysis within the LogOdds space. This atlas is integrated into an existing segmentation approach for MR images. We compare the performance of the resulting implementation in segmenting 20 test cases to a similar approach that uses a more standard shape model that is based on signed distance maps. On this data set, the Bayesian classification model with our new representation outperformed the other approaches in segmenting subcortical structures. PMID:17698403
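
    In the binary case the LogOdds map, its inverse, and the induced vector-space operations on atlases P and Q with scalar alpha can be written as follows; this is the standard formulation consistent with the abstract, sketched here in LaTeX rather than quoted from the paper.

        \[
          \operatorname{logit}(p) = \log\frac{p}{1-p}, \qquad
          \operatorname{logit}^{-1}(t) = \frac{e^{t}}{1+e^{t}},
          \qquad p \in (0,1),\; t \in \mathbb{R}
        \]
        \[
          P \oplus Q = \operatorname{logit}^{-1}\!\bigl(\operatorname{logit}(P) + \operatorname{logit}(Q)\bigr),
          \qquad
          \alpha \odot P = \operatorname{logit}^{-1}\!\bigl(\alpha\,\operatorname{logit}(P)\bigr)
        \]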

  1. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback

    PubMed Central

    Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie

    2017-01-01

    An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as photosensitive-resistance-sensor-based or infrared-sensor-based systems, the imaging sensor can realize a finer perception of the environmental light and thus can guide more precise lighting control. Before the system is used, a large set of typical lighting images for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets, from which cluster benchmarks of the objective LEEMs can be obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system is in operation, it first captures the lighting image using a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or the multiple-LEEMs-based control can be applied to obtain an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically in response to environmental luminance changes. PMID:28208781

  2. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback.

    PubMed

    Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie

    2017-02-09

    An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as photosensitive-resistance-sensor-based or infrared-sensor-based systems, the imaging sensor can realize a finer perception of the environmental light and thus can guide more precise lighting control. Before the system is used, a large set of typical lighting images for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets, from which cluster benchmarks of the objective LEEMs can be obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system is in operation, it first captures the lighting image using a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or the multiple-LEEMs-based control can be applied to obtain an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically in response to environmental luminance changes.

  3. Terra Cimmeria - False Color

    NASA Image and Video Library

    2016-10-11

    The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows dust devil tracks (dark blue linear feature) in Terra Cimmeria. Orbit Number: 43463 Latitude: -53.1551 Longitude: 125.069 Instrument: VIS Captured: 2011-10-01 23:55 http://photojournal.jpl.nasa.gov/catalog/PIA21009

  4. Russell Crater - False Color

    NASA Image and Video Library

    2017-06-01

    The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Russell Crater in Noachis Terra. Orbit Number: 59591 Latitude: -54.471 Longitude: 13.1288 Instrument: VIS Captured: 2015-05-21 10:57 https://photojournal.jpl.nasa.gov/catalog/PIA21674

  5. Jovian 'Twilight Zone'

    NASA Image and Video Library

    2018-03-01

    This image captures the swirling cloud formations around the south pole of Jupiter, looking up toward the equatorial region. NASA's Juno spacecraft took the color-enhanced image during its eleventh close flyby of the gas giant planet on Feb. 7 at 7:11 a.m. PST (10:11 a.m. EST). At the time, the spacecraft was 74,896 miles (120,533 kilometers) from the tops of Jupiter's clouds at 84.9 degrees south latitude. Citizen scientist Gerald Eichstädt processed this image using data from the JunoCam imager. This image was created by reprocessing raw JunoCam data using trajectory and pointing data from the spacecraft. This image is one in a series of images taken in an experiment to capture the best results for illuminated parts of Jupiter's polar region. To make features more visible in Jupiter's terminator -- the region where day meets night -- the Juno team adjusted JunoCam so that it would perform like a portrait photographer taking multiple photos at different exposures, hoping to capture one image with the intended light balance. For JunoCam to collect enough light to reveal features in Jupiter's dark twilight zone, the much brighter illuminated day-side of Jupiter becomes overexposed with the higher exposure. https://photojournal.jpl.nasa.gov/catalog/PIA21980

  6. Information recovery through image sequence fusion under wavelet transformation

    NASA Astrophysics Data System (ADS)

    He, Qiang

    2010-04-01

    Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and enhance image quality becomes very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through meaningful combination of images captured from different sensors or under different conditions through information fusion. Here we particularly address information fusion for remote sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured from the same scene. Through image fusion, a new image that is higher-resolution or more informative for human and machine perception is created from a time series of low-quality images, based on image registration between the different video frames.
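
    A common wavelet-domain fusion rule consistent with this description averages the approximation coefficients and keeps the larger-magnitude detail coefficients. The sketch below uses PyWavelets and is a generic illustration of that rule, not necessarily the author's exact scheme.

        # Sketch of max-abs detail / mean approximation wavelet fusion.
        import numpy as np
        import pywt

        def wavelet_fuse(img1, img2, wavelet="db2", level=2):
            """img1, img2: equal-shape grayscale float arrays."""
            c1 = pywt.wavedec2(img1, wavelet, level=level)
            c2 = pywt.wavedec2(img2, wavelet, level=level)
            fused = [(c1[0] + c2[0]) / 2.0]              # approximation: mean
            for d1, d2 in zip(c1[1:], c2[1:]):           # details: keep max-abs
                fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                   for a, b in zip(d1, d2)))
            return pywt.waverec2(fused, wavelet)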

  7. The Cooking and Pneumonia Study (CAPS) in Malawi: Implementation of Remote Source Data Verification

    PubMed Central

    Weston, William; Smedley, James; Bennett, Andrew; Mortimer, Kevin

    2016-01-01

    Background Source data verification (SDV) is a data monitoring procedure which compares the original records with the Case Report Form (CRF). Traditionally, on-site SDV relies on monitors making multiple visits to study sites, requiring extensive resources. The Cooking And Pneumonia Study (CAPS) is a 24-month village-level cluster randomized controlled trial assessing the effectiveness of an advanced cook-stove intervention in preventing pneumonia in children under five in rural Malawi (www.capstudy.org). CAPS used smartphones to capture digital images of the original records on an electronic CRF (eCRF). In the present study, descriptive statistics are used to report the experience of electronic data capture with remote SDV in a challenging research setting in rural Malawi. Methods At three-monthly intervals, fieldworkers employed by CAPS captured pneumonia data from the original records onto the eCRF. Fieldworkers also captured digital images of the original records. Once Internet connectivity was available, the data captured on the eCRF and the digital images of the original records were uploaded to a web-based SDV application. This enabled SDV to be conducted remotely from the UK. We conducted SDV of the pneumonia data (occurrence, severity, and clinical indicators) recorded in the eCRF against the data in the digital images of the original records. Results 664 episodes of pneumonia were recorded after 6 months of follow-up. Of these 664 episodes, 611 (92%) had a finding of pneumonia in the original records. All digital images of the original records were clear and legible. Conclusion Electronic data capture using eCRFs on mobile technology is feasible in rural Malawi. Capturing digital images of the original records in the field allows remote SDV to be conducted efficiently and securely without requiring additional field visits. We recommend these approaches in similar settings, especially those with health endpoints. PMID:27355447

  8. The Cooking and Pneumonia Study (CAPS) in Malawi: Implementation of Remote Source Data Verification.

    PubMed

    Weston, William; Smedley, James; Bennett, Andrew; Mortimer, Kevin

    2016-01-01

    Source data verification (SDV) is a data monitoring procedure which compares the original records with the Case Report Form (CRF). Traditionally, on-site SDV relies on monitors making multiple visits to study sites, requiring extensive resources. The Cooking And Pneumonia Study (CAPS) is a 24-month village-level cluster randomized controlled trial assessing the effectiveness of an advanced cook-stove intervention in preventing pneumonia in children under five in rural Malawi (www.capstudy.org). CAPS used smartphones to capture digital images of the original records on an electronic CRF (eCRF). In the present study, descriptive statistics are used to report the experience of electronic data capture with remote SDV in a challenging research setting in rural Malawi. At three-monthly intervals, fieldworkers employed by CAPS captured pneumonia data from the original records onto the eCRF. Fieldworkers also captured digital images of the original records. Once Internet connectivity was available, the data captured on the eCRF and the digital images of the original records were uploaded to a web-based SDV application. This enabled SDV to be conducted remotely from the UK. We conducted SDV of the pneumonia data (occurrence, severity, and clinical indicators) recorded in the eCRF against the data in the digital images of the original records. 664 episodes of pneumonia were recorded after 6 months of follow-up. Of these 664 episodes, 611 (92%) had a finding of pneumonia in the original records. All digital images of the original records were clear and legible. Electronic data capture using eCRFs on mobile technology is feasible in rural Malawi. Capturing digital images of the original records in the field allows remote SDV to be conducted efficiently and securely without requiring additional field visits. We recommend these approaches in similar settings, especially those with health endpoints.

  9. Toward noncooperative iris recognition: a classification approach using multiple signatures.

    PubMed

    Proença, Hugo; Alexandre, Luís A

    2007-04-01

    This paper focuses on noncooperative iris recognition, i.e., the capture of iris images at large distances, under less controlled lighting conditions, and without active participation of the subjects. This increases the probability of capturing very heterogeneous images (regarding focus, contrast, or brightness) and with several noise factors (iris obstructions and reflections). Current iris recognition systems are unable to deal with noisy data and substantially increase their error rates, especially the false rejections, in these conditions. We propose an iris classification method that divides the segmented and normalized iris image into six regions, makes an independent feature extraction and comparison for each region, and combines each of the dissimilarity values through a classification rule. Experiments show a substantial decrease, higher than 40 percent, of the false rejection rates in the recognition of noisy iris images.
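
    The region-wise comparison idea can be sketched as below; the Hamming-distance scoring and the best-k averaging rule are illustrative assumptions standing in for the paper's trained classification rule.

        # Sketch of six-region iris code comparison (NumPy).
        import numpy as np

        def region_scores(code_a, code_b, n_regions=6):
            """code_a, code_b: equal-shape binary iris codes (normalized strips)."""
            parts_a = np.array_split(code_a, n_regions, axis=1)
            parts_b = np.array_split(code_b, n_regions, axis=1)
            return [float(np.mean(a != b)) for a, b in zip(parts_a, parts_b)]

        def combined_dissimilarity(scores, k=3):
            return float(np.mean(sorted(scores)[:k]))  # favor least-occluded regions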

  10. Russell Crater Dunes - False Color

    NASA Image and Video Library

    2017-07-07

    The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of the large dune form on the floor of Russell Crater. Orbit Number: 59672 Latitude: -54.337 Longitude: 13.1087 Instrument: VIS Captured: 2015-05-28 02:39 https://photojournal.jpl.nasa.gov/catalog/PIA21701

  11. Nrf2: A Novel Biomarker of Disease Severity and Target for Therapeutic Intervention in Multiple Sclerosis

    DTIC Science & Technology

    2014-10-01

    imaging technique used to capture T cell/APC interaction and infiltration in CNS during the disease course of EAE; and finally 3) characterize the...period, we aim to understand the mechanism of APC/T cell interaction by standardizing the available mouse model and imaging techniques in our lab...resulted in the development of new triterpenoids, mouse imaging techniques and biochemistry and chemical library construction. For example, work

  12. Guatemala Volcanic Eruption Captured in NASA Spacecraft Image

    NASA Image and Video Library

    2015-02-19

    Guatemala's Fuego volcano continued its frequent moderate eruptions in early February 2015. Pyroclastic flows from the eruptions descended multiple drainages, and the eruptions sent ash plumes spewing over Guatemala City 22 miles (35 kilometers) away, and forced closure of the international airport. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard NASA's Terra spacecraft captured a new image of the region on February 17. Fuego is on the left side of the image. The thermal infrared inset image shows the summit crater activity (white equals hot), and remnant heat in the flows on the flank. Other active volcanoes shown in the image are Acatenango close by to the north, Volcano de Agua in the middle of the image, and Pacaya volcano to the east. The image covers an area of 19 by 31 miles (30 by 49.5 kilometers), and is located at 14.5 degrees north, 90.9 degrees west. http://photojournal.jpl.nasa.gov/catalog/PIA19297

  13. Parallel-multiplexed excitation light-sheet microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xu, Dongli; Zhou, Weibin; Peng, Leilei

    2017-02-01

    Laser scanning light-sheet imaging allows fast 3D imaging of live samples with minimal bleaching and photo-toxicity. Existing light-sheet techniques have very limited capability for multi-label imaging. Hyper-spectral imaging is needed to unmix commonly used fluorescent proteins with large spectral overlaps. The challenge, however, is how to perform hyper-spectral imaging without sacrificing imaging speed, so that dynamic and complex events can be captured live. We report wavelength-encoded structured illumination light-sheet imaging (λ-SIM light-sheet), a novel light-sheet technique that is capable of parallel multiplexing in multiple excitation-emission spectral channels. λ-SIM light-sheet captures images of all possible excitation-emission channels in true parallel. It does not require compromising the imaging speed and is capable of distinguishing labels by both excitation and emission spectral properties, which facilitates unmixing fluorescent labels with overlapping spectral peaks and will allow more labels to be used together. We built a hyper-spectral light-sheet microscope that combines λ-SIM with an extended field of view through Bessel beam illumination. The system has a 250-micron-wide field of view and confocal-level resolution. The microscope, equipped with multiple laser lines and an unlimited number of spectral channels, can potentially image up to 6 commonly used fluorescent proteins from blue to red. Results from in vivo imaging of live zebrafish embryos expressing various genetic markers and sensors will be shown. Hyper-spectral images from λ-SIM light-sheet will allow multiplexed and dynamic functional imaging in live tissue and animals.

  14. Melas Chasma - False Color

    NASA Image and Video Library

    2015-02-27

    The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Melas Chasma. Orbit Number: 4622 Latitude: -12.797 Longitude: 288.629 Instrument: VIS Captured: 2002-12-30 00:28 http://photojournal.jpl.nasa.gov/catalog/PIA19218

  15. Comparison of photo-matching algorithms commonly used for photographic capture-recapture studies.

    PubMed

    Matthé, Maximilian; Sannolo, Marco; Winiarski, Kristopher; Spitzen-van der Sluijs, Annemarieke; Goedbloed, Daniel; Steinfartz, Sebastian; Stachow, Ulrich

    2017-08-01

    Photographic capture-recapture is a valuable tool for obtaining demographic information on wildlife populations due to its noninvasive nature and cost-effectiveness. Recently, several computer-aided photo-matching algorithms have been developed to more efficiently match images of unique individuals in databases with thousands of images. However, the identification accuracy of these algorithms can severely bias estimates of vital rates and population size. Therefore, it is important to understand the performance and limitations of state-of-the-art photo-matching algorithms prior to implementation in capture-recapture studies involving possibly thousands of images. Here, we compared the performance of four photo-matching algorithms; Wild-ID, I3S Pattern+, APHIS, and AmphIdent using multiple amphibian databases of varying image quality. We measured the performance of each algorithm and evaluated the performance in relation to database size and the number of matching images in the database. We found that algorithm performance differed greatly by algorithm and image database, with recognition rates ranging from 100% to 22.6% when limiting the review to the 10 highest ranking images. We found that recognition rate degraded marginally with increased database size and could be improved considerably with a higher number of matching images in the database. In our study, the pixel-based algorithm of AmphIdent exhibited superior recognition rates compared to the other approaches. We recommend carefully evaluating algorithm performance prior to using it to match a complete database. By choosing a suitable matching algorithm, databases of sizes that are unfeasible to match "by eye" can be easily translated to accurate individual capture histories necessary for robust demographic estimates.

  16. Automated Long-Term Monitoring of Parallel Microfluidic Operations Applying a Machine Vision-Assisted Positioning Method

    PubMed Central

    Yip, Hon Ming; Li, John C. S.; Cui, Xin; Gao, Qiannan; Leung, Chi Chiu

    2014-01-01

    As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device which contains 32 microchambers to perform automated parallel microfluidic operations and monitoring on the automated stage of a microscope. Images are captured at multiple spots on the device during operations for monitoring samples in the microchambers in parallel; yet the device position may vary at different time points throughout operations as the device moves back and forth on the motorized microscope stage. Here, we report an image-based positioning strategy to realign the chamber position before every recording of a microscopic image. We fabricate alignment marks at defined locations next to the chambers in the microfluidic device as reference positions. We also develop image processing algorithms to recognize the chamber positions in real-time, followed by realigning the chambers to their preset positions in the captured images. We perform experiments to validate and characterize the device functionality and the automated realignment operation. Together, this microfluidic realignment strategy can be a platform technology to achieve precise positioning of multiple chambers for general microfluidic applications requiring long-term parallel monitoring of cell and biochemical activities. PMID:25133248
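
    The mark-based realignment step can be sketched with normalized cross-correlation template matching: locate the fiducial mark, then compute the offset that restores the preset chamber position. The confidence threshold and names below are illustrative assumptions.

        # Sketch of alignment-mark localization and offset computation (OpenCV).
        import cv2

        def realignment_offset(frame, mark_template, preset_xy):
            res = cv2.matchTemplate(frame, mark_template, cv2.TM_CCOEFF_NORMED)
            _, score, _, found_xy = cv2.minMaxLoc(res)   # best-match top-left corner
            if score < 0.6:                              # confidence gate (assumed)
                raise RuntimeError("alignment mark not found")
            dx = preset_xy[0] - found_xy[0]
            dy = preset_xy[1] - found_xy[1]
            return dx, dy  # apply to the crop window (or stage) before recording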

  17. Identifying and Overcoming Obstacles to Point-of-Care Data Collection for Eye Care Professionals

    PubMed Central

    Lobach, David F.; Silvey, Garry M.; Macri, Jennifer M.; Hunt, Megan; Kacmaz, Roje O.; Lee, Paul P.

    2005-01-01

    Supporting data entry by clinicians is considered one of the greatest challenges in implementing electronic health records. In this paper we describe a formative evaluation study using three different methodologies through which we identified obstacles to point-of-care data entry for eye care, and then used the formative process to develop and test solutions to overcome these obstacles. The greatest obstacles were supporting free-text annotation of clinical observations and accommodating the creation of detailed diagrams in multiple colors. To support free-text entry, we arrived at an approach that captures an image of a free-text note and associates this image with related data elements in an encounter note. The detailed diagrams included a color palette that allowed changing pen color with a single stroke, and the diagrams were likewise captured as images associated with related data elements. During observed sessions with simulated patients, these approaches satisfied the clinicians' documentation needs by capturing the full range of clinical complexity that arises in practice. PMID:16779083

  18. Melas Chasma - False Color

    NASA Image and Video Library

    2015-10-08

    The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of the floor of Melas Chasma. The dark blue region in this false color image is sand dunes. Orbit Number: 12061 Latitude: -12.2215 Longitude: 289.105 Instrument: VIS Captured: 2004-09-02 10:11 http://photojournal.jpl.nasa.gov/catalog/PIA19793

  19. A 100 Mfps image sensor for biological applications

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Shimonomura, Kazuhiro; Nguyen, Anh Quang; Takehara, Kosei; Kamakura, Yoshinari; Goetschalckx, Paul; Haspeslagh, Luc; De Moor, Piet; Dao, Vu Truong Son; Nguyen, Hoang Dung; Hayashi, Naoki; Mitsui, Yo; Inumaru, Hideo

    2018-02-01

    Two ultrahigh-speed CCD image sensors with different characteristics were fabricated for applications in advanced scientific measurement apparatuses. The sensors are BSI MCG (Backside-illuminated Multi-Collection-Gate) image sensors with multiple collection gates around the center of the front side of each pixel, placed like petals of a flower. One has five collection gates and one drain gate at the center, which can capture five consecutive frames at 100 Mfps with a pixel count of about 600 kpixels (512 x 576 x 2 pixels). In-pixel signal accumulation is possible for repetitive image capture of reproducible events. The target application is FLIM. The other is equipped with four collection gates, each connected to an in-situ CCD memory with 305 elements, which enables capture of 1,220 (4 x 305) consecutive images at 50 Mfps. The CCD memory is folded and looped with the first element connected to the last element, which also makes the in-pixel signal accumulation possible. The sensor is a small test sensor with 32 x 32 pixels. The target applications are imaging TOF MS, pulse neutron tomography and dynamic PSP. The paper also briefly explains an expression for the temporal resolution of silicon image sensors theoretically derived by the authors in 2017. It is shown that an image sensor designed based on the theoretical analysis achieves imaging of consecutive frames at a frame interval of 50 ps.

  20. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple captures, HDR processing, data display and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.

  1. Can You See Me Now Visualizing Battlefield Facial Recognition Technology in 2035

    DTIC Science & Technology

    2010-04-01

    County Sheriff’s Department, use certain measurements such as the distance between eyes, the length of the nose, or the shape of the ears. 8 However...captures multiple frames of video and composites them into an appropriately high-resolution image that can be processed by the facial recognition software...stream of data. High resolution video systems, such as those described below will be able to capture orders of magnitude more data in one video frame

  2. Calcium neuroimaging in behaving zebrafish larvae using a turn-key light field camera

    NASA Astrophysics Data System (ADS)

    Cruz Perez, Carlos; Lauri, Antonella; Symvoulidis, Panagiotis; Cappetta, Michele; Erdmann, Arne; Westmeyer, Gil Gregor

    2015-09-01

    Reconstructing a three-dimensional scene from multiple simultaneously acquired perspectives (the light field) is an elegant scanless imaging concept that can exceed the temporal resolution of currently available scanning-based imaging methods for capturing fast cellular processes. We tested the performance of commercially available light field cameras on a fluorescent microscopy setup for monitoring calcium activity in the brain of awake and behaving reporter zebrafish larvae. The plenoptic imaging system could volumetrically resolve diverse neuronal response profiles throughout the zebrafish brain upon stimulation with an aversive odorant. Behavioral responses of the reporter fish could be captured simultaneously together with depth-resolved neuronal activity. Overall, our assessment showed that with some optimizations for fluorescence microscopy applications, commercial light field cameras have the potential of becoming an attractive alternative to custom-built systems to accelerate molecular imaging research on cellular dynamics.

  3. Calcium neuroimaging in behaving zebrafish larvae using a turn-key light field camera.

    PubMed

    Perez, Carlos Cruz; Lauri, Antonella; Symvoulidis, Panagiotis; Cappetta, Michele; Erdmann, Arne; Westmeyer, Gil Gregor

    2015-09-01

    Reconstructing a three-dimensional scene from multiple simultaneously acquired perspectives (the light field) is an elegant scanless imaging concept that can exceed the temporal resolution of currently available scanning-based imaging methods for capturing fast cellular processes. We tested the performance of commercially available light field cameras on a fluorescent microscopy setup for monitoring calcium activity in the brain of awake and behaving reporter zebrafish larvae. The plenoptic imaging system could volumetrically resolve diverse neuronal response profiles throughout the zebrafish brain upon stimulation with an aversive odorant. Behavioral responses of the reporter fish could be captured simultaneously together with depth-resolved neuronal activity. Overall, our assessment showed that with some optimizations for fluorescence microscopy applications, commercial light field cameras have the potential of becoming an attractive alternative to custom-built systems to accelerate molecular imaging research on cellular dynamics.

  4. N-Way FRET Microscopy of Multiple Protein-Protein Interactions in Live Cells

    PubMed Central

    Hoppe, Adam D.; Scott, Brandon L.; Welliver, Timothy P.; Straight, Samuel W.; Swanson, Joel A.

    2013-01-01

    Fluorescence Resonance Energy Transfer (FRET) microscopy has emerged as a powerful tool to visualize nanoscale protein-protein interactions while capturing their microscale organization and millisecond dynamics. Recently, FRET microscopy was extended to imaging of multiple donor-acceptor pairs, thereby enabling visualization of multiple biochemical events within a single living cell. These methods require numerous equations that must be defined on a case-by-case basis. Here, we present a universal multispectral microscopy method (N-Way FRET) to enable quantitative imaging for any number of interacting and non-interacting FRET pairs. This approach redefines linear unmixing to incorporate the excitation and emission couplings created by FRET, which cannot be accounted for in conventional linear unmixing. Experiments on a three-fluorophore system using blue, yellow and red fluorescent proteins validate the method in living cells. In addition, we propose a simple linear algebra scheme for error propagation from input data to estimate the uncertainty in the computed FRET images. We demonstrate the strength of this approach by monitoring the oligomerization of three FP-tagged HIV Gag proteins whose tight association in the viral capsid is readily observed. Replacement of one FP-Gag molecule with a lipid raft-targeted FP allowed direct observation of Gag oligomerization with no association between FP-Gag and raft-targeted FP. The N-Way FRET method provides a new toolbox for capturing multiple molecular processes with high spatial and temporal resolution in living cells. PMID:23762252
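
    At its core, unmixing of this kind reduces to a per-pixel linear least-squares solve once the excitation-emission couplings created by FRET are folded into the mixing matrix. The sketch below shows that solve with placeholder matrix values; it is not the N-Way FRET equation set itself.

        # Sketch of spectral unmixing as least squares (NumPy).
        import numpy as np

        def unmix(measured, M):
            """measured: (channels, pixels); M: (channels, species) signatures."""
            x, *_ = np.linalg.lstsq(M, measured, rcond=None)
            return np.clip(x, 0.0, None)   # abundances, negativity clipped

        M = np.array([[0.9, 0.2, 0.1],
                      [0.1, 0.7, 0.3],
                      [0.0, 0.1, 0.6]])    # hypothetical 3-channel, 3-species mixing
        pixels = np.random.rand(3, 100)    # stand-in for image data
        print(unmix(pixels, M).shape)      # -> (3, 100)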

  5. High Dynamic Range Imaging Using Multiple Exposures

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Zhou, Peipei; Zhou, Wei

    2017-06-01

    It is challenging to capture a high-dynamic-range (HDR) scene using a low-dynamic-range (LDR) camera. This paper presents an approach for improving the dynamic range of cameras by using multiple exposure images of the same scene taken under different exposure times. First, the camera response function (CRF) is recovered by solving a high-order polynomial in which only the ratios of the exposures are used. Then, the HDR radiance image is reconstructed by weighted summation of the individual radiance maps. After that, a novel local tone mapping (TM) operator is proposed for the display of the HDR radiance image. By solving the high-order polynomial, the CRF can be recovered quickly and easily. Taking local image features and the characteristics of histogram statistics into consideration, the proposed TM operator preserves local details efficiently. Experimental results demonstrate the effectiveness of our method; by comparison, it outperforms other methods in terms of imaging quality.
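
    Once the CRF is in hand, the weighted radiance merge can be sketched as below; the hat-shaped weighting toward mid-range pixel values is a common choice assumed here, not necessarily the paper's exact weights.

        # Sketch of weighted HDR radiance merging (NumPy).
        import numpy as np

        def merge_radiance(images, times, inv_crf):
            """images: list of uint8 arrays; times: exposure times;
            inv_crf: 256-entry lookup table for the inverse response f^-1."""
            num = np.zeros(images[0].shape, dtype=np.float64)
            den = np.zeros_like(num)
            for img, t in zip(images, times):
                z = img.astype(np.int32)
                w = 1.0 - np.abs(z - 127.5) / 127.5   # hat weight, 0 at extremes
                num += w * inv_crf[z] / t             # per-pixel radiance estimate
                den += w
            return num / np.maximum(den, 1e-6)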

  6. Method for determining and displaying the spacial distribution of a spectral pattern of received light

    DOEpatents

    Bennett, C.L.

    1996-07-23

    An imaging Fourier transform spectrometer is described having a Fourier transform infrared spectrometer providing a series of images to a focal plane array camera. The focal plane array camera is clocked to a multiple of zero crossing occurrences as caused by a moving mirror of the Fourier transform infrared spectrometer and as detected by a laser detector, such that the frame capture rate of the focal plane array camera corresponds to a multiple of the zero crossing rate of the Fourier transform infrared spectrometer. The images are transmitted to a computer for processing such that representations of the images as viewed in the light of an arbitrary spectral "fingerprint" pattern can be displayed on a monitor or otherwise stored and manipulated by the computer. 2 figs.

  7. Medical photography: current technology, evolving issues and legal perspectives.

    PubMed

    Harting, M T; DeWees, J M; Vela, K M; Khirallah, R T

    2015-04-01

    Medical photographic image capture and data management has undergone a rapid and compelling change in complexity over the last 20 years. This is because of multiple factors, including significant advances in ease of photograph capture, alongside an evolution of mechanisms of data portability/dissemination, combined with governmental focus on health information privacy. Literature to guide medical, legal, governmental and business professionals when dealing with issues related to medical photography is virtually nonexistent. Herein, we will address the breadth of uses of medical photography, device properties/specific devices utilised for image capture, methods of data transfer and dissemination and patient perceptions and attitudes regarding photography in a medical setting. In addition, we will address the legal implications, including legal precedent, copyright and privacy law, informed consent, protected health information and the Health Insurance Portability and Accountability Act (HIPAA), as they pertain to medical photography. © 2015 John Wiley & Sons Ltd.

  8. An efficient multiple exposure image fusion in JPEG domain

    NASA Astrophysics Data System (ADS)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds its application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, digital cameras, etc. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings like ISO sensitivity, exposure time and aperture for low-light image capture results in noise amplification, motion blur and reduction of depth-of-field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposed images, image fusion, artifact removal and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image-processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience available for JPEG.

  9. Applicability of common measures in multifocus image fusion comparison

    NASA Astrophysics Data System (ADS)

    Vajgl, Marek

    2017-11-01

    Image fusion is an image processing area aimed at fusing multiple input images to achieve an output image that is in some respect better than each of the input ones. In the case of "multifocus fusion", the input images capture the same scene but differ in focus distance. The aim is to obtain an image that is sharp in all its areas. There are several different approaches and methods used to solve this problem; however, it is a common question which one is best. This work describes research covering the field of common measures, asking whether some of them can be used as a quality measure for evaluating fusion results.
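
    One widely used no-reference sharpness measure for comparing multifocus fusion outputs is the variance of the Laplacian; whether it is among the measures the paper evaluates is not stated here, so treat this as a generic sketch:

      import numpy as np
      from scipy.ndimage import laplace

      def sharpness(img):
          # Variance of the Laplacian: higher values indicate more
          # in-focus high-frequency detail.
          return laplace(img.astype(np.float32)).var()

      # Rank hypothetical fusion results by the measure:
      # best = max(results, key=lambda name: sharpness(results[name]))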

  10. A smartphone-based chip-scale microscope using ambient illumination.

    PubMed

    Lee, Seung Ah; Yang, Changhuei

    2014-08-21

    Portable chip-scale microscopy devices can potentially address various imaging needs in mobile healthcare and environmental monitoring. Here, we demonstrate the adaptation of a smartphone's camera to function as a compact lensless microscope. Unlike other chip-scale microscopy schemes, this method uses ambient illumination as its light source and does not require the incorporation of a dedicated light source. The method is based on the shadow imaging technique where the sample is placed on the surface of the image sensor, which captures direct shadow images under illumination. To improve the image resolution beyond the pixel size, we perform pixel super-resolution reconstruction with multiple images at different angles of illumination, which are captured while the user is manually tilting the device around any ambient light source, such as the sun or a lamp. The lensless imaging scheme allows for sub-micron resolution imaging over an ultra-wide field-of-view (FOV). Image acquisition and reconstruction are performed on the device using a custom-built Android application, constructing a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system.
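
    The pixel super-resolution step can be approximated by shift-and-add: project each sub-pixel-shifted shadow image onto a finer grid and average. A minimal sketch assuming the per-frame shifts are already known (the actual on-phone reconstruction is more elaborate):

      import numpy as np
      from scipy.ndimage import shift as subpixel_shift

      def shift_and_add(lowres_stack, shifts, factor=4):
          # lowres_stack: (N, H, W) shadow images; shifts: (N, 2) sub-pixel
          # offsets in low-res pixels (assumed estimated from the tilting).
          n, h, w = lowres_stack.shape
          acc = np.zeros((h * factor, w * factor), dtype=np.float32)
          for frame, (dy, dx) in zip(lowres_stack, shifts):
              # Replicate each pixel onto the fine grid, then undo the shift.
              up = np.kron(frame, np.ones((factor, factor), np.float32))
              acc += subpixel_shift(up, (-dy * factor, -dx * factor), order=1)
          return acc / n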

  11. A smartphone-based chip-scale microscope using ambient illumination

    PubMed Central

    Lee, Seung Ah; Yang, Changhuei

    2014-01-01

    Portable chip-scale microscopy devices can potentially address various imaging needs in mobile healthcare and environmental monitoring. Here, we demonstrate the adaptation of a smartphone’s camera to function as a compact lensless microscope. Unlike other chip-scale microscopy schemes, this method uses ambient illumination as its light source and does not require the incorporation of a dedicated light source. The method is based on the shadow imaging technique where the sample is placed on the surface of the image sensor, which captures direct shadow images under illumination. To improve the imaging resolution beyond the pixel size, we perform pixel super-resolution reconstruction with multiple images at different angles of illumination, which are captured while the user is manually tilting the device around any ambient light source, such as the sun or a lamp. The lensless imaging scheme allows for sub-micron resolution imaging over an ultra-wide field-of-view (FOV). Image acquisition and reconstruction are performed on the device using a custom-built Android application, constructing a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system. PMID:24964209

  12. Gale Crater - False Color

    NASA Image and Video Library

    2017-02-15

    The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Gale Crater. Basaltic sands are dark blue in this type of false color combination. The Curiosity Rover is located in another portion of Gale Crater, far southwest of this image. Orbit Number: 51803 Latitude: -4.39948 Longitude: 138.116 Instrument: VIS Captured: 2013-08-18 09:04 http://photojournal.jpl.nasa.gov/catalog/PIA21312

  13. Remote sensing image segmentation using local sparse structure constrained latent low rank representation

    NASA Astrophysics Data System (ADS)

    Tian, Shu; Zhang, Ye; Yan, Yimin; Su, Nan; Zhang, Junping

    2016-09-01

    Latent low-rank representation (LatLRR) has attracted considerable attention in the field of remote sensing image segmentation, due to its effectiveness in exploring the multiple subspace structures of data. However, the increasingly heterogeneous texture information in high-spatial-resolution remote sensing images leads to more severe interference between pixels in a local neighborhood, and LatLRR fails to capture this complex local structure information. Therefore, we present a local sparse structure constrained latent low-rank representation (LSSLatLRR) segmentation method, which explicitly imposes a local sparse structure constraint on LatLRR to capture the intrinsic local structure in manifold-structure feature subspaces. The whole segmentation framework can be viewed as two stages in cascade. In the first stage, we use the local histogram transform to extract texture local histogram features (LHOG) at each pixel, which can efficiently capture complex and micro-texture patterns. In the second stage, a local sparse structure (LSS) formulation is established on LHOG, which aims to preserve the local intrinsic structure and enhance the relationship between pixels having similar local characteristics. Meanwhile, by integrating the LSS and the LatLRR, we can efficiently capture the local sparse and low-rank structure in the mixture of feature subspaces, and we adopt a subspace segmentation method to improve the segmentation accuracy. Experimental results on remote sensing images with different spatial resolutions show that, compared with three state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.

  14. Integration of image capture and processing: beyond single-chip digital camera

    NASA Astrophysics Data System (ADS)

    Lim, SukHwan; El Gamal, Abbas

    2001-05-01

    An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to perform applications such as real-time optical flow estimation.
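
    The multiple-capture idea can be sketched as follows: read the sensor several times at increasing exposure times within one frame and, per pixel, keep the longest unsaturated sample normalized by its exposure (a generic illustration, not the authors' on-chip implementation):

      import numpy as np

      def multiple_capture_hdr(captures, exposure_times, sat_level=4095):
          # captures: list of (H, W) reads ordered shortest to longest
          # exposure; exposure_times: matching list in seconds.
          out = np.zeros_like(captures[0], dtype=np.float32)
          chosen_t = np.zeros_like(out)
          for img, t in zip(captures, exposure_times):
              ok = img < sat_level          # pixel not yet saturated
              take = ok & (t > chosen_t)    # prefer the longer exposure
              out[take] = img[take] / t     # normalize to radiance units
              chosen_t[take] = t
          return out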

  15. Wind Etching

    NASA Image and Video Library

    2016-08-09

    Today's VIS image is located in a region that has been heavily modified by wind action. The narrow ridge/valley systems seen in this image are features called yardangs. Yardangs form when unidirectional winds blow across poorly cemented materials. Multiple yardang directions can indicate changes in regional wind regimes. Orbit Number: 64188 Latitude: -0.629314 Longitude: 206.572 Instrument: VIS Captured: 2016-06-03 01:20 http://photojournal.jpl.nasa.gov/catalog/PIA20799

  16. High-Speed Monitoring of Multiple Grid-Connected Photovoltaic Array Configurations and Supplementary Weather Station.

    PubMed

    Boyd, Matthew T

    2017-06-01

    Three grid-connected monocrystalline silicon photovoltaic arrays have been instrumented with research-grade sensors on the Gaithersburg, MD campus of the National Institute of Standards and Technology (NIST). These arrays range from 73 kW to 271 kW and have different tilts, orientations, and configurations. Irradiance, temperature, wind, and electrical measurements at the arrays are recorded, and images are taken of the arrays to monitor shading and capture any anomalies. A weather station has also been constructed that includes research-grade instrumentation to measure all standard meteorological quantities plus additional solar irradiance spectral bands, full spectrum curves, and directional components using multiple irradiance sensor technologies. Reference photovoltaic (PV) modules are also monitored to provide comprehensive baseline measurements for the PV arrays. Images of the whole sky are captured, along with images of the instrumentation and reference modules, to document any obstructions or anomalies. Nearly all measurements at the arrays and weather station are sampled and saved every 1 s, with monitoring having started on Aug. 1, 2014. This report describes the instrumentation approach used to monitor the performance of these photovoltaic systems, measure the meteorological quantities, and acquire the images for use in PV performance and weather monitoring and computer model validation.

  17. High-Speed Monitoring of Multiple Grid-Connected Photovoltaic Array Configurations and Supplementary Weather Station

    PubMed Central

    Boyd, Matthew T.

    2017-01-01

    Three grid-connected monocrystalline silicon photovoltaic arrays have been instrumented with research-grade sensors on the Gaithersburg, MD campus of the National Institute of Standards and Technology (NIST). These arrays range from 73 kW to 271 kW and have different tilts, orientations, and configurations. Irradiance, temperature, wind, and electrical measurements at the arrays are recorded, and images are taken of the arrays to monitor shading and capture any anomalies. A weather station has also been constructed that includes research-grade instrumentation to measure all standard meteorological quantities plus additional solar irradiance spectral bands, full spectrum curves, and directional components using multiple irradiance sensor technologies. Reference photovoltaic (PV) modules are also monitored to provide comprehensive baseline measurements for the PV arrays. Images of the whole sky are captured, along with images of the instrumentation and reference modules, to document any obstructions or anomalies. Nearly all measurements at the arrays and weather station are sampled and saved every 1 s, with monitoring having started on Aug. 1, 2014. This report describes the instrumentation approach used to monitor the performance of these photovoltaic systems, measure the meteorological quantities, and acquire the images for use in PV performance and weather monitoring and computer model validation. PMID:28670044

  18. Schedule Optimization of Imaging Missions for Multiple Satellites and Ground Stations Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Junghyun; Kim, Heewon; Chung, Hyun; Kim, Haedong; Choi, Sujin; Jung, Okchul; Chung, Daewon; Ko, Kwanghee

    2018-04-01

    In this paper, we propose a method that uses a genetic algorithm for the dynamic schedule optimization of imaging missions for multiple satellites and ground systems. In particular, the visibility conflicts of communication and mission operation using satellite resources (electric power and onboard memory) are integrated in sequence. Resource consumption and restoration are considered in the optimization process. Image acquisition is an essential part of satellite missions and is performed via a series of subtasks such as command uplink, image capturing, image storing, and image downlink. An objective function for optimization is designed to maximize the usability by considering the following components: user-assigned priority, resource consumption, and image-acquisition time. For the simulation, a series of hypothetical imaging missions are allocated to a multi-satellite control system comprising five satellites and three ground stations having S- and X-band antennas. To demonstrate the performance of the proposed method, simulations are performed via three operation modes: general, commercial, and tactical.
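
    A hedged sketch of the kind of objective function described, with weights and field names that are assumptions rather than the paper's values:

      def fitness(schedule, w_priority=0.5, w_resource=0.3, w_time=0.2):
          # schedule: list of dicts with assumed keys 'priority' (higher is
          # more important), 'resource_cost' (normalized power + memory use)
          # and 'acquisition_time' (normalized; lower is better).
          return sum(w_priority * task["priority"]
                     - w_resource * task["resource_cost"]
                     - w_time * task["acquisition_time"]
                     for task in schedule)

      # A genetic algorithm evolves assignments of uplink/capture/store/
      # downlink subtasks to satellites and ground-station passes,
      # selecting candidate schedules on this fitness.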

  19. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    NASA Astrophysics Data System (ADS)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general-purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges). The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or exported to a spreadsheet, where they can be processed or passed along to a more sophisticated, stand-alone data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end-user in capturing and treating image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file-size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The latter is particularly useful for high-speed applications where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front-ends," designed for digital cameras, are anticipated.

  20. SMART USE OF COMPUTER-AIDED SPERM ANALYSIS (CASA) TO CHARACTERIZE SPERM MOTION

    EPA Science Inventory

    Computer-aided sperm analysis (CASA) has evolved over the past fifteen years to provide an objective, practical means of measuring and characterizing the velocity and pattern of sperm motion. CASA instruments use video frame-grabber boards to capture multiple images of spermato...

  1. Workflow Challenges of Enterprise Imaging: HIMSS-SIIM Collaborative White Paper.

    PubMed

    Towbin, Alexander J; Roth, Christopher J; Bronkalla, Mark; Cram, Dawn

    2016-10-01

    With the advent of digital cameras, there has been an explosion in the number of medical specialties using images to diagnose or document disease and guide interventions. In many specialties, these images are not added to the patient's electronic medical record and are not distributed so that other providers caring for the patient can view them. As hospitals begin to develop enterprise imaging strategies, they have found that there are multiple challenges preventing the implementation of systems to manage image capture, image upload, and image management. This HIMSS-SIIM white paper will describe the key workflow challenges related to enterprise imaging and offer suggestions for potential solutions to these challenges.

  2. Temporal Characteristics of Radiologists' and Novices' Lesion Detection in Viewing Medical Images Presented Rapidly and Sequentially.

    PubMed

    Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko

    2016-01-01

    Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about it. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Thus, it is effective for target detection when observers' attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images presented sequentially, simulating a dynamic, smoothly transforming progression of organ images. However, it is unclear whether observers can detect a target when it appears at the beginning of a sequential presentation, where the global apparent-motion onset signal (i.e., the signal of the initiation of apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performance of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal locations of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, do not affect the performance of radiologists, whereas they do affect the performance of novices. Results indicate that novices have greater difficulty detecting a lesion appearing early rather than late in the image sequence. We suggest that radiologists have mechanisms, which novices lack, for detecting lesions in medical images with little attention. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as in stack viewing tasks.

  3. Temporal Characteristics of Radiologists' and Novices' Lesion Detection in Viewing Medical Images Presented Rapidly and Sequentially

    PubMed Central

    Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko

    2016-01-01

    Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about it. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Thus, it is effective for target detection when observers' attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images presented sequentially, simulating a dynamic, smoothly transforming progression of organ images. However, it is unclear whether observers can detect a target when it appears at the beginning of a sequential presentation, where the global apparent-motion onset signal (i.e., the signal of the initiation of apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performance of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal locations of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, do not affect the performance of radiologists, whereas they do affect the performance of novices. Results indicate that novices have greater difficulty detecting a lesion appearing early rather than late in the image sequence. We suggest that radiologists have mechanisms, which novices lack, for detecting lesions in medical images with little attention. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as in stack viewing tasks. PMID:27774080

  4. Electronic data capture and DICOM data management in multi-center clinical trials

    NASA Astrophysics Data System (ADS)

    Haak, Daniel; Page, Charles-E.; Deserno, Thomas M.

    2016-03-01

    Providing eligibility, efficacy and safety evaluation via quantitative and qualitative disease findings, medical imaging has become increasingly important in clinical trials. Subject data is today captured in electronic case report forms (eCRFs), which are offered by electronic data capture (EDC) systems. However, integration of subjects' medical image data into eCRFs is insufficiently supported. Neither integration of subjects' digital imaging and communications in medicine (DICOM) data, nor communication with picture archiving and communication systems (PACS), is possible. This aggravates the workflow of the study personnel, especially regarding studies with distributed data capture at multiple sites. Hence, in this work, a system architecture is presented which connects an EDC system, a PACS and a DICOM viewer via the web access to DICOM objects (WADO) protocol. The architecture is implemented using the open source tools OpenClinica, DCM4CHEE and Weasis. The eCRF forms the primary endpoint for the study personnel, where subject image data is stored and retrieved. Background communication with the PACS is completely hidden from the users. Data privacy and consistency are ensured by automatic de-identification and re-labelling of DICOM data with context information (e.g. study and subject identifiers), respectively. The system is exemplarily demonstrated in a clinical trial in which computed tomography (CT) data is captured de-centrally from the subjects and read centrally by a chief radiologist to decide on inclusion of the subjects in the trial. Errors, latency and costs in the EDC workflow are reduced, while a research database is implicitly built up in the background.
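
    For background, retrieval through the WADO protocol (its URI form) is a plain HTTP GET against the PACS; a minimal sketch with a hypothetical endpoint and placeholder UIDs:

      import requests

      WADO_URL = "http://pacs.example.org/wado"   # hypothetical DCM4CHEE endpoint
      params = {
          "requestType": "WADO",
          "studyUID": "1.2.840.113619.2.55.3.1234",      # placeholder UIDs
          "seriesUID": "1.2.840.113619.2.55.3.1234.1",
          "objectUID": "1.2.840.113619.2.55.3.1234.1.1",
          "contentType": "application/dicom",
      }
      resp = requests.get(WADO_URL, params=params, timeout=30)
      resp.raise_for_status()
      with open("object.dcm", "wb") as f:
          f.write(resp.content)   # one retrieved DICOM object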

  5. Radius of curvature measurement of spherical smooth surfaces by multiple-beam interferometry in reflection

    NASA Astrophysics Data System (ADS)

    Abdelsalam, D. G.; Shaalan, M. S.; Eloker, M. M.; Kim, Daesuk

    2010-06-01

    In this paper a method is presented to accurately measure the radius of curvature of different types of curved surfaces, with radii of curvature of 38 000, 18 000 and 8000 mm, using multiple-beam interference fringes in reflection. The images captured by the digital detector were corrected by the flat-fielding method. The corrected images were analyzed and the form of the surfaces was obtained. A 3D profile for the three types of surfaces was obtained using Zernike polynomial fitting. Some sources of uncertainty in the measurement were calculated by means of ray-tracing simulations, and the uncertainty budget was estimated to within λ/40.

  6. Particle image velocimetry based on wavelength division multiplexing

    NASA Astrophysics Data System (ADS)

    Tang, Chunxiao; Li, Enbang; Li, Hongqiang

    2018-01-01

    This paper introduces a technical approach to wavelength division multiplexing (WDM) based particle image velocimetry (PIV). It is designed to measure transient flows with different scales of velocity by capturing multiple particle images in one exposure. These images are separated by different wavelengths, and thus the pulse separation time is not limited by the frame rate of the camera. A triple-pulsed PIV system has been created to prove the feasibility of WDM-PIV. This is demonstrated in a sieve plate extraction column model by simultaneously measuring the fast flow in the downcomer and the slow vortices inside the plates. A simple displacement/velocity field combination method has also been developed. The constraints imposed by WDM-PIV are the limited wavelength choices of available light sources and cameras. The use of the WDM technique represents a feasible way to realize multiple-pulsed PIV.
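
    Independent of the WDM front end, the displacement within a PIV interrogation window is classically estimated by cross-correlating the two particle images; a minimal integer-pixel sketch:

      import numpy as np
      from scipy.signal import fftconvolve

      def piv_displacement(win_a, win_b):
          # FFT-based cross-correlation of two interrogation windows;
          # the correlation peak offset gives the mean particle shift.
          a = win_a - win_a.mean()
          b = win_b - win_b.mean()
          corr = fftconvolve(b, a[::-1, ::-1], mode="full")
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          return np.array(peak) - (np.array(win_a.shape) - 1)  # (dy, dx)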

  7. Gold patterned biochips for on-chip immuno-MALDI-TOF MS: SPR imaging coupled multi-protein MS analysis.

    PubMed

    Kim, Young Eun; Yi, So Yeon; Lee, Chang-Soo; Jung, Yongwon; Chung, Bong Hyun

    2012-01-21

    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) analysis of immuno-captured target protein efficiently complements conventional immunoassays by offering rich molecular information such as protein isoforms or modifications. Direct immobilization of antibodies on a MALDI solid support enables both target enrichment and MS analysis on the same plate, allowing simplified and potentially multiplexed protein MS analysis. Reliable on-chip immuno-MALDI-TOF MS for multiple biomarkers requires successful adaptation of antibody array biochips, which must also accommodate consistent reaction conditions on antibody arrays during immuno-capture and MS analysis. Here we developed a facile fabrication process of versatile antibody array biochips for reliable on-chip MALDI-TOF-MS analysis of multiple immuno-captured proteins. Hydrophilic gold arrays surrounded by super-hydrophobic surfaces were formed on a gold-patterned biochip via spontaneous chemical or protein layer deposition. From antibody immobilization to MALDI matrix treatment, this hydrophilic/hydrophobic pattern allowed highly consistent surface reactions on each gold spot. Various antibodies were immobilized on these gold spots both by covalent coupling and by protein G binding. Four different protein markers were successfully analyzed on the present immuno-MALDI biochip from complex protein mixtures including serum samples. Tryptic digests of captured PSA protein were also effectively detected by on-chip MALDI-TOF-MS. Moreover, the present MALDI biochip can be directly applied to the SPR imaging system, by which antibody and subsequent antigen immobilization were successfully monitored.

  8. Dual tracer imaging of SPECT and PET probes in living mice using a sequential protocol

    PubMed Central

    Chapman, Sarah E; Diener, Justin M; Sasser, Todd A; Correcher, Carlos; González, Antonio J; Avermaete, Tony Van; Leevy, W Matthew

    2012-01-01

    Over the past 20 years, multimodal imaging strategies have motivated the fusion of Positron Emission Tomography (PET) or Single Photon Emission Computed Tomography (SPECT) scans with an X-ray computed tomography (CT) image to provide anatomical information, as well as a framework with which molecular and functional images may be co-registered. Recently, pre-clinical nuclear imaging technology has evolved to capture multiple SPECT or multiple PET tracers to further enhance the information content gathered within an imaging experiment. However, the use of SPECT and PET probes together, in the same animal, has remained a challenge. Here we describe a straightforward method using an integrated trimodal imaging system and a sequential dosing/acquisition protocol to achieve dual tracer imaging with 99mTc and 18F isotopes, along with anatomical CT, on an individual specimen. Dosing and imaging is completed so that minimal animal manipulations are required, full trimodal fusion is conserved, and tracer crosstalk including down-scatter of the PET tracer in SPECT mode is avoided. This technique will enhance the ability of preclinical researchers to detect multiple disease targets and perform functional, molecular, and anatomical imaging on individual specimens to increase the information content gathered within longitudinal in vivo studies. PMID:23145357

  9. A portable array biosensor for food safety

    NASA Astrophysics Data System (ADS)

    Golden, Joel P.; Ngundi, Miriam M.; Shriver-Lake, Lisa C.; Taitt, Chris R.; Ligler, Frances S.

    2004-11-01

    An array biosensor developed for simultaneous analysis of multiple samples has been utilized to develop assays for toxins and pathogens in a variety of foods. The biochemical component of the multi-analyte biosensor consists of a patterned array of biological recognition elements immobilized on the surface of a planar waveguide. A fluorescence assay is performed on the patterned surface, yielding an array of fluorescent spots, the locations of which are used to identify what analyte is present. Signal transduction is accomplished by means of a diode laser for fluorescence excitation, optical filters and a CCD camera for image capture. A laptop computer controls the miniaturized fluidics system and image capture. Results for four mycotoxin competition assays in buffer and food samples are presented.

  10. Characterizing Articulation in Apraxic Speech Using Real-Time Magnetic Resonance Imaging.

    PubMed

    Hagedorn, Christina; Proctor, Michael; Goldstein, Louis; Wilson, Stephen M; Miller, Bruce; Gorno-Tempini, Maria Luisa; Narayanan, Shrikanth S

    2017-04-14

    Real-time magnetic resonance imaging (MRI) and accompanying analytical methods are shown to capture and quantify salient aspects of apraxic speech, substantiating and expanding upon evidence provided by clinical observation and acoustic and kinematic data. Analysis of apraxic speech errors within a dynamic systems framework is provided and the nature of pathomechanisms of apraxic speech discussed. One adult male speaker with apraxia of speech was imaged using real-time MRI while producing spontaneous speech, repeated naming tasks, and self-paced repetition of word pairs designed to elicit speech errors. Articulatory data were analyzed, and speech errors were detected using time series reflecting articulatory activity in regions of interest. Real-time MRI captured two types of apraxic gestural intrusion errors in a word pair repetition task. Gestural intrusion errors in nonrepetitive speech, multiple silent initiation gestures at the onset of speech, and covert (unphonated) articulation of entire monosyllabic words were also captured. Real-time MRI and accompanying analytical methods capture and quantify many features of apraxic speech that have been previously observed using other modalities while offering high spatial resolution. This patient's apraxia of speech affected the ability to select only the appropriate vocal tract gestures for a target utterance, suppressing others, and to coordinate them in time.

  11. Metalloporphyrins and their uses as imageable tumor-targeting agents for radiation therapy

    DOEpatents

    Miura, Michiko; Slatkin, Daniel N.

    2003-05-20

    The present invention covers halogenated derivatives of boronated porphyrins containing multiple carborane cages having the formula ##STR1## which selectively accumulate in neoplastic tissue within the irradiation volume and thus can be used in cancer therapies including, but not limited to, boron neutron-capture therapy and photodynamic therapy. The present invention also covers methods for using these halogenated derivatives of boronated porphyrins in tumor imaging and cancer treatment.

  12. Use of novel metalloporphyrins as imageable tumor-targeting agents for radiation therapy

    DOEpatents

    Miura, Michiko; Slatkin, Daniel N.

    2005-10-04

    The present invention covers halogenated derivatives of boronated porphyrins containing multiple carborane cages having the formula ##STR1## which selectively accumulate in neoplastic tissue within the irradiation volume and thus can be used in cancer therapies including, but not limited to, boron neutron-capture therapy and photodynamic therapy. The present invention also covers methods for using these halogenated derivatives of boronated porphyrins in tumor imaging and cancer treatment.

  13. Sediment Profile Imagery as a Tool to Assist Benthic Assessment and Benthic Habitat Mapping

    EPA Science Inventory

    The U.S. EPA Atlantic Ecology Division and the Southern California Coastal Water Research Project (SCCWRP) collaborated in 2008 to explore the use of sediment profile imagery as a tool to assist environmental management, capturing multiple images at each of over 100 stations at a...

  14. Quantitative Multispectral Analysis Of Discrete Subcellular Particles By Digital Imaging Fluorescence Microscopy (DIFM)

    NASA Astrophysics Data System (ADS)

    Dorey, C. K.; Ebenstein, David B.

    1988-10-01

    Subcellular localization of multiple biochemical markers is readily achieved through their characteristic autofluorescence or through use of appropriately labelled antibodies. Recent development of specific probes has permitted elegant studies of calcium and pH in living cells. However, each of these methods measured fluorescence at one wavelength; precise quantitation of multiple fluorophores at individual sites within a cell has not been possible. Using DIFM, we have achieved spectral analysis of discrete subcellular particles 1-2 µm in diameter. The fluorescence emission is broken into narrow bands by an interference monochromator and visualized through the combined use of a silicon intensified target (SIT) camera, a microcomputer-based framegrabber with 8-bit resolution, and a color video monitor. Image acquisition, processing, analysis and display are under software control. The digitized image can be corrected for the spectral distortions induced by the wavelength-dependent sensitivity of the camera, and the displayed image can be enhanced or presented in pseudocolor to facilitate discrimination of variations in the pixel intensity of individual particles. For rapid comparison of the fluorophore composition of granules, a ratio image is produced by dividing the image captured at one wavelength by that captured at another. In the resultant ratio image, a granule whose fluorophore composition differs from the majority is selectively colored. This powerful system has been utilized to obtain spectra of endogenous autofluorescent compounds in discrete cellular organelles of human retinal pigment epithelium, and to measure immunohistochemically labelled components of the extracellular matrix associated with the human optic nerve.
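
    The ratio-image step amounts to an element-wise division of the captures at two wavelengths; a minimal numpy sketch:

      import numpy as np

      def ratio_image(img_w1, img_w2, eps=1e-6):
          # Divide the image captured at one wavelength by the image
          # captured at another; granules whose fluorophore composition
          # differs from the majority stand out as outlier ratio values.
          a = img_w1.astype(np.float32)
          b = img_w2.astype(np.float32)
          return a / (b + eps)   # eps guards against division by zero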

  15. Advances in three-dimensional integral imaging: sensing, display, and applications [Invited].

    PubMed

    Xiao, Xiao; Javidi, Bahram; Martinez-Corral, Manuel; Stern, Adrian

    2013-02-01

    Three-dimensional (3D) sensing and imaging technologies have been extensively researched for many applications in the fields of entertainment, medicine, robotics, manufacturing, industrial inspection, security, surveillance, and defense due to their diverse and significant benefits. Integral imaging is a passive multiperspective imaging technique, which records multiple two-dimensional images of a scene from different perspectives. Unlike holography, it can capture a scene such as outdoor events with incoherent or ambient light. Integral imaging can display a true 3D color image with full parallax and continuous viewing angles by incoherent light; thus it does not suffer from speckle degradation. Because of its unique properties, integral imaging has been revived over the past decade or so as a promising approach for massive 3D commercialization. A series of key articles on this topic have appeared in the OSA journals, including Applied Optics. Thus, it is fitting that this Commemorative Review presents an overview of literature on physical principles and applications of integral imaging. Several data capture configurations, reconstruction, and display methods are overviewed. In addition, applications including 3D underwater imaging, 3D imaging in photon-starved environments, 3D tracking of occluded objects, 3D optical microscopy, and 3D polarimetric imaging are reviewed.

  16. Active confocal imaging for visual prostheses

    PubMed Central

    Jung, Jae-Hyun; Aloni, Doron; Yitzhaky, Yitzhak; Peli, Eli

    2014-01-01

    There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants, and other “sensory substitution devices” that use tactile or electrical stimulation. However, they all have low resolution, limited visual field, and can display only a few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system could emphasize objects of interest and suppress background clutter. We propose an active confocal imaging system based on light-field technology that will enable a blind user of any visual prosthesis to efficiently scan, focus on, and “see” only an object of interest while suppressing interference from background clutter. The system captures three-dimensional scene information using a light-field sensor and displays only the in-focus plane and the objects in it. After capturing a confocal image, a de-cluttering process removes the clutter based on blur difference. In preliminary experiments we verified the positive impact of confocal-based background clutter removal on recognition of objects in low-resolution, limited-dynamic-range simulated phosphene images. Using a custom-made multiple-camera system, we confirmed that the concept of a confocal de-cluttered image can be realized effectively using light-field imaging. PMID:25448710

  17. The Edge of Jupiter

    NASA Image and Video Library

    2017-04-19

    This enhanced color Jupiter image, taken by the JunoCam imager on NASA's Juno spacecraft, showcases several interesting features on the apparent edge (limb) of the planet. Prior to Juno's fifth flyby over Jupiter's mysterious cloud tops, members of the public voted on which targets JunoCam should image. This picture captures not only a fascinating variety of textures in Jupiter's atmosphere, it also features three specific points of interest: "String of Pearls," "Between the Pearls," and "An Interesting Band Point." Also visible is what's known as the STB Spectre, a feature in Jupiter's South Temperate Belt where multiple atmospheric conditions appear to collide. JunoCam images of Jupiter sometimes appear to have an odd shape. This is because the Juno spacecraft is so close to Jupiter that it cannot capture the entire illuminated area in one image -- the sides get cut off. Juno acquired this image on March 27, 2017, at 2:12 a.m. PDT (5:12 a.m. EDT), as the spacecraft performed a close flyby of Jupiter. When the image was taken, the spacecraft was about 12,400 miles (20,000 kilometers) from the planet. This enhanced color image was created by citizen scientist Bjorn Jonsson. https://photojournal.jpl.nasa.gov/catalog/PIA21389

  18. Virtual view image synthesis for eye-contact in TV conversation system

    NASA Astrophysics Data System (ADS)

    Murayama, Daisuke; Kimura, Keiichi; Hosaka, Tadaaki; Hamamoto, Takayuki; Shibuhisa, Nao; Tanaka, Seiichi; Sato, Shunichi; Saito, Sakae

    2010-02-01

    Eye-contact plays an important role in human communication, in the sense that it can convey unspoken information. However, it is highly difficult to realize eye-contact in teleconferencing systems because of camera configurations. Conventional methods to overcome this difficulty have mainly resorted to space-consuming optical devices such as half mirrors. In this paper, we propose an alternative approach to achieving eye-contact based on techniques of arbitrary-view image synthesis. In our method, multiple images captured by real cameras are converted to the virtual viewpoint (the center of the display) by homography, and evaluation of matching errors among these projected images provides the depth map and the virtual image. Furthermore, we also propose a simpler version of this method that uses a single camera to save computational cost, in which the single real image is transformed to the virtual viewpoint based on the hypothesis that the subject is located at a predetermined distance. In this simple implementation, eye regions are generated separately by comparison with pre-captured frontal face images. Experimental results of both methods show that the synthesized virtual images enable eye-contact favorably.
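
    The projection of a captured frame to the virtual viewpoint is a homography warp; a sketch using OpenCV, where the 3x3 matrix is a placeholder that would come from calibration in practice:

      import cv2
      import numpy as np

      # Placeholder homography mapping the real camera view to the
      # virtual viewpoint at the display center (obtained by calibration).
      H = np.array([[1.0,  0.02, -15.0],
                    [0.01, 1.0,    5.0],
                    [1e-5, 0.0,    1.0]])

      frame = cv2.imread("camera_frame.png")            # hypothetical input
      h, w = frame.shape[:2]
      virtual = cv2.warpPerspective(frame, H, (w, h))   # virtual-view image
      cv2.imwrite("virtual_view.png", virtual)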

  19. Monitoring the distribution of prompt gamma rays in boron neutron capture therapy using a multiple-scattering Compton camera: A Monte Carlo simulation study

    NASA Astrophysics Data System (ADS)

    Lee, Taewoong; Lee, Hyounggun; Lee, Wonho

    2015-10-01

    This study evaluated the use of Compton imaging technology to monitor prompt gamma rays emitted by 10B in boron neutron capture therapy (BNCT) applied to a computerized human phantom. The Monte Carlo method, including particle-tracking techniques, was used for the simulation. The distribution of prompt gamma rays emitted by the phantom during irradiation with neutron beams is closely associated with the distribution of boron in the phantom. The maximum-likelihood expectation-maximization (MLEM) method was applied to the information obtained from the detected prompt gamma rays to reconstruct the distribution of the tumor, including the boron uptake regions (BURs). The reconstructed Compton images of the prompt gamma rays were combined with the cross-sectional images of the human phantom. Quantitative analysis of the intensity curves showed that all combined images matched the predetermined conditions of the simulation. The tumors, including the BURs, were distinguishable if they were more than 2 cm apart.
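
    The MLEM update multiplies the current image estimate by the back-projected ratio of measured to predicted counts; a generic sketch with an assumed (dense) system matrix, not the authors' simulation code:

      import numpy as np

      def mlem(system_matrix, counts, n_iter=50):
          # system_matrix: (n_detector_bins, n_voxels) forward model A
          # counts: (n_detector_bins,) measured prompt-gamma events y
          a = system_matrix
          sensitivity = a.sum(axis=0)        # A^T 1
          x = np.ones(a.shape[1])            # flat initial image
          for _ in range(n_iter):
              expected = a @ x               # forward projection
              ratio = counts / np.maximum(expected, 1e-12)
              x *= (a.T @ ratio) / np.maximum(sensitivity, 1e-12)
          return x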

  20. A pathologist-designed imaging system for anatomic pathology signout, teaching, and research.

    PubMed

    Schubert, E; Gross, W; Siderits, R H; Deckenbaugh, L; He, F; Becich, M J

    1994-11-01

    Pathology images are derived from gross surgical specimens, light microscopy, immunofluorescence, electron microscopy, molecular diagnostic gels, flow cytometry, image analysis data, and clinical laboratory data in graphic form. We have implemented a network of desktop personal computers (PCs) that allows us to easily capture, store, and retrieve gross and microscopic, anatomic, and research pathology images. The system architecture involves multiple image acquisition and retrieval sites and a central file server for storage. The digitized images are conveyed via a local area network to and from image capture or display stations. Acquisition sites consist of a high-resolution camera connected to a frame-grabber card in a 486-type personal computer equipped with 16 MB RAM, a 1.05-gigabyte hard drive, and a 32-bit ethernet card for access to our anatomic pathology reporting system. We have designed a push-button workstation for acquiring and indexing images that does not significantly interfere with surgical pathology sign-out. Advantages of the system include the following: (1) improving patient care: the availability of gross images at the time of microscopic sign-out, verification of recurrence of malignancy from archived images, monitoring of bone marrow engraftment and immunosuppressive intervention after bone marrow/solid organ transplantation on repeat biopsies, and the ability to seek instantaneous consultation with any pathologist on the network; (2) enhancing the teaching environment: building a digital surgical pathology atlas, improving the availability of images for conference support, and sharing cases across the network; (3) enhancing research: case study compilation, metastudy analysis, and availability of digitized images for quantitative analysis and permanent/reusable image records for archival study; and (4) other practical and economic considerations: storing case requisition images and hand-drawn diagrams deters the spread of gross-room contaminants and results in considerable cost savings in photographic media for conferences, improved quality assurance by porting control stains across the network, and a multiplicity of other advantages that enhance image and information management in pathology.

  1. Adapting Local Features for Face Detection in Thermal Image.

    PubMed

    Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro

    2017-11-27

    A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar, because facial temperature distribution is generally constant and not affected by lighting conditions. This similarity in face appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used; however, there are few studies exploring local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features. These feature types have different advantages, and in this way we enhance the descriptive power of the local features. We conducted a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (with/without glasses). We compared the performance of cascade classifiers trained with different sets of features. The experimental results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images, and discuss the results.

  2. Fibered fluorescence microscopy (FFM) of intra epidermal nerve fibers--translational marker for peripheral neuropathies in preclinical research: processing and analysis of the data

    NASA Astrophysics Data System (ADS)

    Cornelissen, Frans; De Backer, Steve; Lemeire, Jan; Torfs, Berf; Nuydens, Rony; Meert, Theo; Schelkens, Peter; Scheunders, Paul

    2008-08-01

    Peripheral neuropathy can be caused by diabetes or AIDS or be a side-effect of chemotherapy. Fibered Fluorescence Microscopy (FFM) is a recently developed imaging modality using a fiber-optic probe connected to a laser scanning unit. It allows for in-vivo scanning of small animal subjects by moving the probe along the tissue surface. In preclinical research, FFM enables non-invasive, longitudinal in-vivo assessment of intra-epidermal nerve fibre density in various models of peripheral neuropathies. By moving the probe, FFM allows visualization of larger surfaces: during the movement, images are continuously captured, making it possible to acquire an area larger than the field of view of the probe. For analysis purposes, we need to obtain a single static image from the multiple overlapping frames. We introduce a mosaicing procedure for this kind of video sequence. Construction of mosaic images with sub-pixel alignment is indispensable and must be integrated into a globally consistent image alignment. An additional motivation for the mosaicing is the use of overlapping redundant information to improve the signal-to-noise ratio of the acquisition, because the individual frames tend to have both high noise levels and intensity inhomogeneities. For longitudinal analysis, mosaics captured at different times must be aligned as well. For alignment, global correlation-based matching is compared with interest-point matching. The use of algorithms running on multiple CPUs (parallel processor/cluster/grid) is imperative for use in a screening model.
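
    Sub-pixel frame-to-frame registration of this kind is commonly done with phase correlation; a minimal sketch using scikit-image (an assumed choice, not the authors' implementation):

      from skimage.registration import phase_cross_correlation

      def align_offset(frame_a, frame_b):
          # Sub-pixel translation of frame_b relative to frame_a,
          # estimated to 1/10-pixel precision via upsampled cross power.
          shift, error, _ = phase_cross_correlation(
              frame_a, frame_b, upsample_factor=10)
          return shift  # (dy, dx); accumulate to place frames in the mosaic

      # Overlapping frames can then be averaged in mosaic space, which
      # also raises the signal-to-noise ratio as described above.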

  3. Method for determining and displaying the spacial distribution of a spectral pattern of received light

    DOEpatents

    Bennett, Charles L.

    1996-01-01

    An imaging Fourier transform spectrometer (10, 210) having a Fourier transform infrared spectrometer (12) providing a series of images (40) to a focal plane array camera (38). The focal plane array camera (38) is clocked to a multiple of zero crossing occurrences as caused by a moving mirror (18) of the Fourier transform infrared spectrometer (12) and as detected by a laser detector (50) such that the frame capture rate of the focal plane array camera (38) corresponds to a multiple of the zero crossing rate of the Fourier transform infrared spectrometer (12). The images (40) are transmitted to a computer (45) for processing such that representations of the images (40) as viewed in the light of an arbitrary spectral "fingerprint" pattern can be displayed on a monitor (60) or otherwise stored and manipulated by the computer (45).

  4. SIFT optimization and automation for matching images from multiple temporal sources

    NASA Astrophysics Data System (ADS)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    The Scale-Invariant Feature Transform (SIFT) was applied to extract tie-points from multiple-source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and not prone to scene changes over time; this constitutes a first step toward automating processes in mapping applications such as geometric correction, creation of orthophotos and generation of 3D models. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored over different images and parameter values, yielding optimized values that are corroborated using separate validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
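
    The five key parameters correspond to the knobs exposed by common SIFT implementations; an OpenCV sketch with illustrative values (not the paper's optimized ones), plus ratio-test matching of tie-points:

      import cv2

      # Illustrative values for the five tunable SIFT parameters.
      sift = cv2.SIFT_create(
          nfeatures=0,             # no cap on the number of keypoints
          nOctaveLayers=3,
          contrastThreshold=0.02,  # lower keeps weaker extrema
          edgeThreshold=10,
          sigma=1.6,
      )

      img1 = cv2.imread("epoch_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
      img2 = cv2.imread("epoch_b.png", cv2.IMREAD_GRAYSCALE)  # inputs
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      # Lowe's ratio test keeps only distinctive tie-point matches.
      matcher = cv2.BFMatcher(cv2.NORM_L2)
      good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
              if m.distance < 0.75 * n.distance]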

  5. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels, and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gains, compared to several shallow kernels, for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
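
    The recursive construction can be caricatured as nonlinear activations applied to weighted sums of Gram matrices; a toy two-layer sketch in which the weights and the tanh activation are assumptions (the paper learns the weights and constrains the result to remain positive semi-definite):

      import numpy as np

      def rbf(x, gamma=0.5):
          sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * sq)

      def deep_kernel(x, w=(0.7, 0.3)):
          # Layer 1: activation of a combination of elementary kernels;
          # layer 2: recombine the result with an elementary kernel.
          k_lin, k_rbf = x @ x.T, rbf(x)
          layer1 = np.tanh(w[0] * k_lin + w[1] * k_rbf)
          return np.tanh(0.5 * layer1 + 0.5 * k_rbf)

      gram = deep_kernel(np.random.randn(10, 5))
      # 'gram' can be passed to an SVM as a precomputed kernel matrix.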

  6. The Bagnold Dunes in Southern Summer: Active Sediment Transport on Mars Observed by the Curiosity rover

    NASA Astrophysics Data System (ADS)

    Baker, M. M.; Lapotre, M. G. A.; Bridges, N. T.; Minitti, M. E.; Newman, C. E.; Ehlmann, B. L.; Vasavada, A. R.; Edgett, K. S.; Lewis, K. W.

    2017-12-01

    Since its landing at Gale crater five years ago, the Curiosity rover has provided us with unparalleled data to study active surface processes on Mars. Repeat imaging campaigns (i.e. "change-detection campaigns") conducted with the rover's cameras have allowed us to study Martian atmosphere-surface interactions and characterize wind-driven sediment transport from ground-truth observations. Utilizing the rover's periodic stops to image identical patches of ground over multiple sols, these change-detection campaigns have revealed sediment motion over a wide range of grain sizes. These results have been corroborated in images taken by the rover's hand lens imager (MAHLI), which have captured sand transport occurring on the scale of minutes. Of particular interest are images collected during Curiosity's traverse across the Bagnold Dune Field, the first dune field observed to be active in situ on another planet. Curiosity carried out the first phase of the Bagnold Dunes campaign (between Ls 72º and 109º) along the northern edge of the dune field at the base of Aeolis Mons, where change-detection images showed very limited sediment motion. More recently, a second phase of the campaign was conducted along the southern edge of the dune field between Ls 312º to 345º; here, images captured extensive wind-driven sand motion. Observations from multiple cameras show ripples migrating to the southwest, in agreement with predicted net transport within the dune field. Together with change-detection observations conducted outside of the dune field, the data show that ubiquitous Martian landscapes are seasonally active within Gale crater, with the bulk of the sediment flux occurring during southern summer.

  7. Optical flow estimation on image sequences with differently exposed frames

    NASA Astrophysics Data System (ADS)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-09-01

    Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate performance. An approach to avoid this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames is combined into a total cost functional such that the lack of an active data term for saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance, in terms of lower endpoint errors of the flow vectors, compared to a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.

  8. LED induced autofluorescence (LIAF) imager with eight multi-filters for oral cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Huang, Ting-Wei; Cheng, Nai-Lun; Tsai, Ming-Hsui; Chiou, Jin-Chern; Mang, Ou-Yang

    2016-03-01

    Oral cancer is a serious and growing problem in many developing and developed countries. Simple oral visual screening by a clinician can prevent 37,000 oral cancer deaths annually worldwide. However, the conventional oral examination, with visual inspection and palpation of oral lesions, is not an objective and reliable approach for oral cancer diagnosis, and it may delay hospital treatment for patients with oral cancer or let the cancer progress uncontrolled to a late stage. Therefore, a device for oral cancer detection was developed for early diagnosis and treatment. A portable LED-induced autofluorescence (LIAF) imager was developed by our group. It contains multiple wavelengths of LED excitation light and a rotary filter ring with eight channels to capture ex-vivo oral tissue autofluorescence images. The advantages of the LIAF imager compared to other devices for oral cancer diagnosis are an L-shaped probe that fixes the object distance, shields against ambient light, and allows observation of blind spots in the deep areas between the gums (gingiva) and the lining of the mouth. In addition, the multiple LED excitation wavelengths can induce multiple autofluorescence signals, and the LIAF imager with the eight-channel rotary filter ring can detect spectral images in multiple narrow bands. The prototype of the portable LIAF imager has been applied in clinical trials for several cases in Taiwan, and the clinical-trial images under specific excitation show significant differences between normal and cancerous oral tissue in these cases.

  9. Compressive Coded-Aperture Multimodal Imaging Systems

    NASA Astrophysics Data System (ADS)

    Rueda-Chacon, Hoover F.

    Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, polarization, and others. For instance, spectral images are modeled as 3D cubes with two spatial coordinates and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectrum, or depth are referred to as 4D images. Nature itself spans different physical domains, so imaging our real world demands capturing information in at least 6 different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to 3 spectral channels in real time through the use of color sensors. Capturing more spectral channels requires scanning methodologies, which demand long acquisition times. In general, multimodal imaging to date requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene; different fusion techniques are then employed to merge all the individual information into a single image. Therefore, new ways to efficiently capture more than 3 spectral channels of 3D time-varying spatial information, with a single sensor or a few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures that follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization problem to estimate the 3D spatio-spectral data cube, exploiting the theory of compressive sensing (CS). To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures block or transmit the entire spectrum of the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In its first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass, and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward models of the proposed CSI architectures are presented along with their designs and proof-of-concept implementations, which achieve noticeable improvements in reconstruction quality compared with conventional block-unblock coded aperture-based CSI architectures.
On another front, due to the rich information contained in the infrared spectrum as well as the depth domain, this thesis aims to explore multimodal imaging by extending the spectral sensitivity of current CSI systems to a dual-band visible+near-infrared domain, and it also proposes, for the first time, a new imaging device that simultaneously captures 4D data cubes (2D spatial + 1D spectral + depth) in as few as a single snapshot. Owing to the snapshot advantage of this camera, video sequences are possible, enabling the joint capture of 5D imagery. The aim is to create super-human sensing that will enable the perception of our world in new and exciting ways. With this, we intend to advance the state of the art in compressive sensing systems that extract depth while accurately capturing the spatial and spectral properties of materials. The applications of such a sensor are self-evident in fields such as computer and robotic vision, because it would allow an artificial intelligence to make informed decisions about not only the location of objects within a scene but also their material properties.
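
    The recovery step common to all such CSI systems, estimating a sparse scene from a small number of coded projections, can be sketched with a plain iterative shrinkage-thresholding (ISTA) loop; the sensing matrix H, the measurements y, and all parameter values below are illustrative placeholders rather than the dissertation's actual operators:

        import numpy as np

        def ista(H, y, lam=0.1, step=None, iters=200):
            """Minimize ||y - Hx||^2 + lam*||x||_1 by iterative shrinkage-thresholding."""
            if step is None:
                step = 1.0 / np.linalg.norm(H, 2) ** 2   # 1 / Lipschitz constant of the gradient
            x = np.zeros(H.shape[1])
            for _ in range(iters):
                z = x - step * (H.T @ (H @ x - y))       # gradient step on the data term
                x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
            return x

        # Toy example: a random "coded aperture" sensing matrix and a sparse scene.
        rng = np.random.default_rng(0)
        H = rng.standard_normal((128, 512)) / np.sqrt(128)
        x_true = np.zeros(512)
        x_true[rng.choice(512, 10, replace=False)] = 1.0
        x_hat = ista(H, H @ x_true)

    Real CSI reconstructions replace the dense random H with the structured forward model of the coded-aperture optics and add a sparsifying transform, but the optimization skeleton is the same.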

  10. Automatic creation of three-dimensional avatars

    NASA Astrophysics Data System (ADS)

    Villa-Uriol, Maria-Cruz; Sainz, Miguel; Kuester, Falko; Bagherzadeh, Nader

    2003-01-01

    Highly accurate avatars of humans promise a new level of realism in engineering and entertainment applications, including areas such as computer-animated movies, computer game development, interactive virtual environments, and telepresence. In order to provide high-quality avatars, new techniques for their automatic acquisition and creation are required. A framework for the capture and construction of arbitrary avatars from image data is presented in this paper. Avatars are automatically reconstructed from multiple static images of a human subject by utilizing image information to reshape a synthetic three-dimensional articulated reference model. A pipeline is presented that combines a set of hardware-accelerated stages into one seamless system. Primary stages in this pipeline include pose estimation, skeleton fitting, body part segmentation, geometry construction, and coloring, leading to avatars that can be animated and included in interactive environments. The presented system removes traditional constraints on the initial pose of the captured subject by using silhouette-based modification techniques in combination with a reference model. Results can be obtained in near-real time with very limited user intervention.

  11. An automatic markerless registration method for neurosurgical robotics based on an optical camera.

    PubMed

    Meng, Fanle; Zhai, Fangwen; Zeng, Bowei; Ding, Hui; Wang, Guangzhi

    2018-02-01

    Current markerless registration methods for neurosurgical robotics use the facial surface to match the robot space with the image space, and acquisition of the facial surface usually requires manual interaction and constrains the patient to a supine position. To overcome these drawbacks, we propose a registration method that is automatic and does not constrain patient position. An optical camera attached to the robot end effector captures images around the patient's head from multiple views. Then, high coverage of the head surface is reconstructed from the images through multi-view stereo vision. Since the acquired head surface point cloud contains color information, a specific mark manually drawn on the patient's head prior to the capture procedure can be extracted to accomplish coarse registration automatically, rather than relying on facial anatomic landmarks. Fine registration is then achieved by registering the high-coverage head surface without relying solely on the facial region, thus eliminating patient position constraints. The head surface was acquired by the camera with good repeatability. The average target registration error over 8 different patient positions, measured with targets inside a head phantom, was [Formula: see text], while the mean surface registration error was [Formula: see text]. The method proposed in this paper achieves automatic markerless registration in multiple patient positions and guarantees registration accuracy inside the head. This method provides a new approach for establishing the spatial relationship between the image space and the robot space.
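
    For intuition, the rigid transform underlying such a coarse registration, given corresponding 3D points in the two spaces (here assumed to come from the extracted colored mark), reduces to the classic SVD-based least-squares fit; a minimal sketch:

        import numpy as np

        def rigid_align(P, Q):
            """Least-squares rotation R and translation t with R @ P + t ~= Q (Kabsch).

            P, Q: (3, N) arrays of corresponding points in robot and image space.
            """
            cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
            U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)   # cross-covariance SVD
            D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])    # guard against reflections
            R = U @ D @ Vt
            return R, cq - R @ cp

    Fine registration would then refine this estimate iteratively over the full head-surface point cloud, in the spirit of iterative closest point schemes.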

  12. The Ansel Adams zone system: HDR capture and range compression by chemical processing

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2010-02-01

    We tend to think of digital imaging and the tools of Photoshop™ as a new phenomenon in imaging. We are also familiar with multiple-exposure HDR techniques intended to capture a wider range of scene information than conventional film photography. We know about tone-scale adjustments to make better pictures. We tend to think of everyday, consumer, silver-halide photography as a fixed window of scene capture with a limited, standard range of response. This description of photography is certainly true, between 1950 and 2000, for instant films and negatives processed at the drugstore. These systems had a fixed dynamic range and a fixed tone-scale response to light. All pixels in the film have the same response to light, so the same light exposure at different pixels was rendered as the same film density. Ansel Adams, along with Fred Archer, formulated the Zone System starting in 1940. It predates the trillions of consumer photos of the second half of the 20th century, yet it was much more sophisticated than today's digital techniques. This talk describes the chemical mechanisms of the Zone System in the parlance of digital image processing. It describes the Zone System's chemical techniques for image synthesis, and it also discusses dodging and burning techniques to fit the HDR scene into the LDR print. Although current HDR imaging shares some of the Zone System's achievements, it usually does not achieve all of them.

  13. Efficient multi-site two-photon functional imaging of neuronal circuits.

    PubMed

    Castanares, Michael Lawrence; Gautam, Vini; Drury, Jack; Bachor, Hans; Daria, Vincent R

    2016-12-01

    Two-photon imaging using high-speed multi-channel detectors is a promising approach for optical recording of cellular membrane dynamics at multiple sites. A main bottleneck of this technique is the limited number of photons captured within a short exposure time (~1 ms). Here, we implement temporal gating to improve the two-photon fluorescence yield from holographically projected multiple foci whilst maintaining a biologically safe incident average power. We observed up to a 6x improvement in the signal-to-noise ratio (SNR) in fluorescein and in cultured hippocampal neurons showing evoked calcium transients. With improved SNR, we pave the way toward multi-site optical recording of fluorogenic probes with response times on the order of ~1 ms.

  15. DeepSkeleton: Learning Multi-Task Scale-Associated Deep Side Outputs for Object Skeleton Extraction in Natural Images

    NASA Astrophysics Data System (ADS)

    Shen, Wei; Zhao, Kai; Jiang, Yuan; Wang, Yan; Bai, Xiang; Yuille, Alan

    2017-11-01

    Object skeletons are useful for object representation and object detection. They are complementary to the object contour and provide extra information, such as how object scale (thickness) varies among object parts. But object skeleton extraction from natural images is very challenging, because it requires the extractor to capture both local and non-local image context in order to determine the scale of each skeleton pixel. In this paper, we present a novel fully convolutional network with multiple scale-associated side outputs to address this problem. By observing the relationship between the receptive field sizes of the different layers in the network and the skeleton scales they can capture, we introduce two scale-associated side outputs to each stage of the network. The network is trained by multi-task learning, where one task is skeleton localization, classifying whether a pixel is a skeleton pixel or not, and the other is skeleton scale prediction, regressing the scale of each skeleton pixel. Supervision is imposed at different stages by guiding the scale-associated side outputs toward the ground-truth skeletons at the appropriate scales. The responses of the multiple scale-associated side outputs are then fused in a scale-specific way to detect skeleton pixels at multiple scales effectively. Our method achieves promising results on two skeleton extraction datasets and significantly outperforms other competitors. Additionally, the usefulness of the obtained skeletons and scales (thickness) is verified in two object detection applications: foreground object segmentation and object proposal detection.

  16. Fiber-optic fringe projection with crosstalk reduction by adaptive pattern masking

    NASA Astrophysics Data System (ADS)

    Matthias, Steffen; Kästner, Markus; Reithmeier, Eduard

    2017-02-01

    To enable in-process inspection of industrial manufacturing processes, measuring devices need to satisfy time and space constraints while also being robust to environmental conditions such as high temperatures and electromagnetic fields. A new fringe projection profilometry system is being developed that is capable of inspecting filigree tool geometries, e.g., gearing elements with tip radii of 0.2 mm, inside forming machines of the sheet-bulk metal forming process. Compact gradient-index rod lenses with a diameter of 2 mm allow for a compact design of the sensor head, which is connected to a base unit via flexible high-resolution image fibers with a diameter of 1.7 mm. The base unit houses a flexible DMD-based LED projector optimized for fiber coupling and a CMOS camera sensor. The system is capable of capturing up to 150 gray-scale patterns per second as well as high dynamic range images from multiple exposures. Owing to fiber crosstalk and light leakage in the image fiber, signal quality suffers, especially when capturing 3-D data of technical surfaces with highly varying reflectance or surface angles. An algorithm is presented that adaptively masks parts of the pattern to reduce these effects via multiple exposures. The masks for valid surface areas are automatically defined according to different parameters of an initial capture, such as intensity and surface gradient. In a second step, the masks are re-projected into projector coordinates using the mathematical model of the system. This approach is capable of reducing both inter-pixel crosstalk and inter-object reflections on concave objects while maintaining measurement durations of less than 5 s.
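
    The mask-definition step can be pictured as a per-pixel validity test on the initial capture; the thresholds below are hypothetical stand-ins, not the calibrated values of the system described above:

        import numpy as np

        def valid_surface_mask(intensity, grad_mag, i_lo=0.05, i_hi=0.95, g_max=0.3):
            """Flag pixels of an initial capture that are usable for fringe evaluation.

            intensity: normalized image in [0, 1]; grad_mag: surface gradient magnitude.
            Nearly dark, nearly saturated, or steeply sloped pixels are masked out and
            addressed by a complementary mask in a further exposure.
            """
            well_exposed = (intensity > i_lo) & (intensity < i_hi)
            gentle_slope = grad_mag < g_max
            return well_exposed & gentle_slope

    The resulting camera-space mask would then be re-projected into projector coordinates through the system model, as the abstract describes, before the next pattern is shown.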

  17. Robust image registration for multiple exposure high dynamic range image synthesis

    NASA Astrophysics Data System (ADS)

    Yao, Susu

    2011-03-01

    Image registration is an important preprocessing technique in high dynamic range (HDR) image synthesis. This paper proposes a robust image registration method for aligning a group of low dynamic range (LDR) images that are captured with different exposure times. Illumination change and photometric distortion between two images can result in inaccurate registration. We propose to transform intensity image data into phase congruency to eliminate the effect of changes in image brightness, and to use phase cross-correlation in the Fourier transform domain to perform image registration. Considering the presence of non-overlapped regions due to photometric distortion, evolutionary programming is applied to search for accurate translation parameters, so that registration accuracy at the level of a hundredth of a pixel can be achieved. The proposed algorithm works well for under- and over-exposed image registration. It has been applied to align LDR images for synthesizing high-quality HDR images.
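
    The Fourier-domain building block, phase cross-correlation, is compact enough to sketch; the paper's contribution lies in feeding it phase-congruency maps instead of raw intensities and refining the result with an evolutionary search, both omitted in this minimal sketch:

        import numpy as np

        def phase_correlation(a, b):
            """Estimate the integer translation between two same-size images."""
            A, B = np.fft.fft2(a), np.fft.fft2(b)
            cross = A * np.conj(B)
            cross /= np.maximum(np.abs(cross), 1e-12)       # keep phase only
            corr = np.real(np.fft.ifft2(cross))
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # Map peak coordinates to signed shifts (convention: a relative to b).
            if dy > a.shape[0] // 2:
                dy -= a.shape[0]
            if dx > a.shape[1] // 2:
                dx -= a.shape[1]
            return dy, dx

    Because only the phase of the cross-power spectrum is kept, a global change in brightness or contrast leaves the correlation peak essentially unchanged, which is what makes this approach attractive for differently exposed frames.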

  18. Concept of dual-resolution light field imaging using an organic photoelectric conversion film for high-resolution light field photography.

    PubMed

    Sugimura, Daisuke; Kobayashi, Suguru; Hamamoto, Takayuki

    2017-11-01

    Light field imaging is an emerging technique that is employed to realize various applications such as multi-viewpoint imaging, focal-point changing, and depth estimation. In this paper, we propose the concept of a dual-resolution light field imaging system for synthesizing super-resolved multi-viewpoint images. The key novelty of this study is the use, for light field imaging, of an organic photoelectric conversion film (OPCF), a device that converts the spectral information of incoming light within a certain wavelength range into an electrical signal (pixel value). In our imaging system, we place an OPCF with green spectral sensitivity onto the micro-lens array of a conventional light field camera. The OPCF allows us to acquire the green spectral information only at the center viewpoint, but at the full resolution of the image sensor. In contrast, the optical system of the light field camera captures the remaining spectral information (red and blue) at multiple viewpoints (sub-aperture images), but at low resolution. Thus, our dual-resolution light field imaging system enables us to simultaneously capture information about the target scene at high spatial resolution as well as the directional information of the incoming light. By exploiting these advantages of our imaging system, our proposed method enables the synthesis of full-resolution multi-viewpoint images. We perform experiments using synthetic images, and the results demonstrate that our method outperforms previous methods.

  19. Development of a low cost high precision three-layer 3D artificial compound eye.

    PubMed

    Zhang, Hao; Li, Lei; McCray, David L; Scheiding, Sebastian; Naples, Neil J; Gebhardt, Andreas; Risse, Stefan; Eberhardt, Ramona; Tünnermann, Andreas; Yi, Allen Y

    2013-09-23

    Artificial compound eyes are typically designed on planar substrates due to the limits of current imaging devices and available manufacturing processes. In this study, a high-precision, low-cost, three-layer 3D artificial compound eye consisting of a 3D microlens array, a freeform lens array, and a field lens array was constructed to mimic an apposition compound eye on a curved substrate. The freeform microlens array was manufactured on a curved substrate to alter incident light beams and steer their respective images onto a flat image plane. The optical design was performed using ZEMAX. The optical simulation shows that the artificial compound eye can form multiple images with aberrations below 11 μm, adequate for many imaging applications. Both the freeform lens array and the field lens array were manufactured using a microinjection molding process to reduce cost. Aluminum mold inserts were diamond machined by the slow tool servo method. The performance of the compound eye was tested using a home-built optical setup. The captured images demonstrate that the proposed structures can successfully steer images from a curved surface onto a planar photoreceptor. Experimental results show that the compound eye in this research has a field of view of 87°. In addition, images formed by multiple channels were found to be evenly distributed on the flat photoreceptor. Additionally, overlapping views of adjacent channels allow higher-resolution images to be reconstructed from multiple 3D images taken simultaneously.

  20. Augmented reality based real-time subcutaneous vein imaging system

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Zhao, Yitian; Song, Xianzheng; Shen, Jianbing; Shao, Ling; Wang, Yongtian

    2016-01-01

    A novel 3D reconstruction and fast imaging system for subcutaneous veins based on augmented reality is presented. The study was performed to reduce the failure rate and the time required for intravenous injection by providing augmented vein structures that back-project superimposed veins onto the skin surface of the hand. Images of the subcutaneous veins are captured by two industrial cameras with additional near-infrared illumination. The veins are then segmented by a multiple-feature clustering method. Vein structures captured by the two cameras are matched and reconstructed based on the epipolar constraint and homographic property. The skin surface is reconstructed by active structured light with spatial encoding values and fused for display with the reconstructed veins. The veins and skin surface are both reconstructed in 3D space. Results show that the structures can be precisely back-projected onto the back of the hand for further augmented display and visualization. The overall system performance is evaluated in terms of vein segmentation, accuracy of vein matching, feature point distance error, duration times, accuracy of skin reconstruction, and augmented display. All experiments are validated with sets of real vein data. The system produces good imaging and augmented reality results at high speed. PMID:27446690

  2. Single and multiple object tracking using log-euclidean Riemannian subspace and block-division appearance model.

    PubMed

    Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei

    2012-12-01

    Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-Euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-Euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-Euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-Euclidean block-division appearance model that captures both the global and local spatial layout information of object appearances. Single-object tracking and multi-object tracking with occlusion reasoning are then achieved by particle-filtering-based Bayesian state inference. During tracking, incremental updating of the log-Euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
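
    The core mapping, which takes a covariance descriptor from the manifold of symmetric positive definite matrices into a flat vector space via the matrix logarithm, can be sketched in a few lines (a common simplification; the sqrt(2) weighting of off-diagonal entries sometimes used to preserve norms is omitted here):

        import numpy as np

        def log_euclidean_embedding(C):
            """Map an SPD covariance matrix to a log-Euclidean feature vector."""
            w, V = np.linalg.eigh(C)                             # C = V diag(w) V^T
            L = V @ np.diag(np.log(np.maximum(w, 1e-12))) @ V.T  # matrix logarithm
            return L[np.triu_indices_from(L)]                    # upper triangle as a flat vector

    Once descriptors live in this vector space, ordinary linear subspace learning (and its incremental variants) applies directly, which is what makes the metric attractive for appearance modeling.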

  3. Scanning electron microscope automatic defect classification of process induced defects

    NASA Astrophysics Data System (ADS)

    Wolfe, Scott; McGarvey, Steve

    2017-03-01

    With the integration of high-speed Scanning Electron Microscope (SEM) based Automated Defect Redetection (ADR) in both high-volume semiconductor manufacturing and Research and Development (R&D), the need for reliable SEM Automated Defect Classification (ADC) has grown tremendously in the past few years. In many high-volume manufacturing facilities and R&D operations, defect inspection is performed on E-Beam (EB), Bright Field (BF), or Dark Field (DF) defect inspection equipment. A comma-separated value (CSV) file is created by both the patterned and non-patterned defect inspection tools. The defect inspection result file contains a list of the anomalies detected during the inspection tool's examination of each structure, or of an entire wafer's surface for non-patterned applications. This file is imported into the Defect Review Scanning Electron Microscope (DRSEM). Following the import, the DRSEM automatically moves the wafer to each defect coordinate and performs ADR. During ADR the DRSEM operates in a reference mode, capturing an SEM image at the exact position of each anomaly's coordinates and an SEM image of a reference location in the center of the wafer. A defect reference image is created by subtracting the defect image from the reference image. The exact coordinates of each defect are calculated from the detected defect position and the stage coordinates recorded when the high-magnification SEM defect image is captured. The captured SEM image is processed through DRSEM ADC binning, exported to a Yield Analysis System (YAS), or a combination of both. Process engineers, yield analysis engineers, or failure analysis engineers then manually review the captured images to ensure that either the YAS defect binning or the DRSEM defect binning is classifying the defects accurately. This paper explores the feasibility of using a Hitachi RS4000 Defect Review SEM to perform automatic defect classification, with the objective of achieving total automated classification accuracy greater than human defect classification binning in cases where accurate classification does not require knowledge of multiple process steps. The implementation of DRSEM ADC has the potential to improve the response time between defect detection and defect classification. Faster defect classification will allow rapid response to yield anomalies, ultimately reducing wafer and/or die yield loss.

  4. Adaptive DOF for plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Oberdörster, Alexander; Lensch, Hendrik P. A.

    2013-03-01

    Plenoptic cameras promise to provide arbitrary refocusing through a scene after capture. In practice, however, the refocusing range is limited by the depth of field (DOF) of the plenoptic camera. For the focused plenoptic camera, this range is given by the range of object distances for which the microimages are in focus. We propose a technique for recording light fields with an adaptive depth of focus. Between multiple exposures (multiple recordings of the light field), the distance between the microlens array (MLA) and the image sensor is adjusted. The depth and quality of focus are chosen by changing the number of exposures and the spacing of the MLA movements. In contrast to traditional cameras, extending the DOF does not necessarily lead to an all-in-focus image; instead, the refocus range is extended. There is full creative control over the focus depth; images with shallow or selective focus can be generated.

  5. Single-Image Distance Measurement by a Smart Mobile Device.

    PubMed

    Chen, Shangwen; Fang, Xianyong; Shen, Jianbing; Wang, Linbo; Shao, Ling

    2017-12-01

    Existing distance measurement methods either require multiple images and special photographing poses or measure only height with a special view configuration. We propose a novel image-based method that can measure various types of distance from a single image captured by a smart mobile device. The embedded accelerometer is used to determine the view orientation of the device. Consequently, pixels can be back-projected to the ground, thanks to an efficient calibration method using two known distances. The distance in pixels is then transformed to a real distance in centimeters with a linear model parameterized by the magnification ratio. Various types of distance specified in the image can be computed accordingly. Experimental results demonstrate the effectiveness of the proposed method.
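
    A hedged sketch of the final step only: fitting the linear pixel-to-centimeter model from two known reference distances. All numbers are hypothetical, and the accelerometer-based back-projection that precedes this step is omitted:

        import numpy as np

        def calibrate_linear(d_px, d_cm):
            """Fit d_cm ~= a * d_px + b from (at least) two known reference distances."""
            A = np.column_stack([d_px, np.ones_like(d_px)])
            (a, b), *_ = np.linalg.lstsq(A, d_cm, rcond=None)
            return a, b

        # Two reference measurements: 120 px <-> 25 cm, 480 px <-> 100 cm (made up).
        a, b = calibrate_linear(np.array([120.0, 480.0]), np.array([25.0, 100.0]))
        distance_cm = lambda px: a * px + b

    With exactly two references the fit passes through both points; additional references would turn it into an overdetermined least-squares problem and average out measurement noise.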

  6. Combining multiple features for color texture classification

    NASA Astrophysics Data System (ADS)

    Cusano, Claudio; Napoletano, Paolo; Schettini, Raimondo

    2016-11-01

    The analysis of color and texture has a long history in image analysis and computer vision. These two properties are often treated as independent, even though they are strongly related in images of natural objects and materials. The correlation between color and texture information is especially relevant in the case of variable illumination, a condition that has a crucial impact on the effectiveness of most visual descriptors. We propose an ensemble of hand-crafted image descriptors designed to capture different aspects of color textures. We show that using these descriptors in a multiple-classifier framework makes it possible to achieve very high accuracy in classifying texture images acquired under different lighting conditions. A powerful alternative to hand-crafted descriptors is represented by features obtained with deep learning methods. We also show how, with the proposed combining strategy, hand-crafted and convolutional neural network features can be used together to further improve classification accuracy. Experimental results on a food database (raw food texture) demonstrate the effectiveness of the proposed strategy.

  7. Image registration for multi-exposed HDRI and motion deblurring

    NASA Astrophysics Data System (ADS)

    Lee, Seok; Wey, Ho-Cheon; Lee, Seong-Deok

    2009-02-01

    In multi-exposure image fusion tasks, alignment is an essential prerequisite to prevent ghost artifacts after blending. Compared with the usual matching problem, registration is more difficult when each image is captured under different photographing conditions. In HDR imaging, we use long- and short-exposure images, which differ in brightness and contain over- or under-saturated regions. In the motion deblurring problem, we use a blurred and noisy image pair, and the amount of motion blur varies from one image to another due to the different exposure times. The main difficulty is that the luminance levels of the two images are not linearly related, and we cannot perfectly equalize or normalize the brightness of each image, which leads to unstable and inaccurate alignment. To solve this problem, we apply a probabilistic measure, mutual information, to represent the similarity between images after alignment. In this paper, we describe the characteristics of multi-exposed input images from the registration point of view and also analyze the magnitude of camera hand shake. By exploiting mutual information's independence from absolute luminance, we propose a fast and practically useful image registration technique for multiple captures. Our algorithm can be applied to extreme HDR scenes and motion-blurred scenes with a success rate of over 90%, and its simplicity enables it to be embedded in digital cameras and mobile camera phones. The effectiveness of our registration algorithm is examined by various experiments on real HDR and motion deblurring cases using a hand-held camera.
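
    The similarity measure at the heart of this approach is simple to sketch: mutual information depends only on the joint gray-level statistics of the two images, not on any particular brightness relationship between them. A minimal version from a joint histogram:

        import numpy as np

        def mutual_information(a, b, bins=64):
            """Mutual information between two images of equal size."""
            h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p = h / h.sum()                       # joint probability
            px = p.sum(axis=1, keepdims=True)     # marginals
            py = p.sum(axis=0, keepdims=True)
            nz = p > 0                            # avoid log(0)
            return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

    Registration then amounts to searching for the translation (or warp) that maximizes this score, which stays meaningful even when one frame is far brighter or blurrier than the other.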

  8. Multichannel microfluidic chip for rapid and reliable trapping and imaging plant-parasitic nematodes

    NASA Astrophysics Data System (ADS)

    Amrit, Ratthasart; Sripumkhai, Witsaroot; Porntheeraphat, Supanit; Jeamsaksiri, Wutthinan; Tangchitsomkid, Nuchanart; Sutapun, Boonsong

    2013-05-01

    A fast and reliable testing technique to count and identify nematode species residing in plant roots is essential for export control and certification. This work proposes a multichannel microfluidic chip with an integrated flow-through microfilter that retains the nematodes in a trapping chamber. Once the nematodes are trapped, it is simple and convenient to capture images of them so that a trained technician can later identify their species. Multiple samples can be tested in parallel using the proposed microfluidic chip, thereby increasing the number of samples tested per day.

  9. Multi-object model-based multi-atlas segmentation for rodent brains using dense discrete correspondences

    NASA Astrophysics Data System (ADS)

    Lee, Joohwi; Kim, Sun Hyung; Styner, Martin

    2016-03-01

    The delineation of rodent brain structures is challenging due to multiple low-contrast cortical and subcortical organs that closely interface with each other. Atlas-based segmentation has been widely employed because of its ability to delineate multiple organs simultaneously via image registration. The use of multiple atlases and subsequent label fusion techniques has further improved the robustness and accuracy of atlas-based segmentation. However, the accuracy of atlas-based segmentation is still prone to registration errors; for example, the segmentation of in vivo MR images can be less accurate and less robust against image artifacts than the segmentation of post mortem images. In order to improve the accuracy and robustness of atlas-based segmentation, we propose a multi-object, model-based, multi-atlas segmentation method. We first establish spatial correspondences across atlases using a set of dense pseudo-landmark particles. We build a multi-object point distribution model from these particles in order to capture inter- and intra-subject variation among brain structures. The segmentation is obtained by fitting the model to a subject image, followed by a label fusion process. Our results show that the proposed method achieves greater accuracy than comparable segmentation methods, including a widely used ANTs registration tool.
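
    As background for the label fusion step, the simplest baseline, per-voxel majority voting over the warped atlas labels, can be sketched as follows; the paper's model-based fusion is more elaborate, so this only illustrates the generic idea:

        import numpy as np

        def majority_vote(labels):
            """Fuse candidate segmentations (one per atlas) by per-voxel majority vote.

            labels: (n_atlases, ...) integer array of atlas labels warped to the subject.
            """
            labels = np.asarray(labels)
            n_classes = int(labels.max()) + 1
            votes = np.stack([(labels == c).sum(axis=0) for c in range(n_classes)])
            return votes.argmax(axis=0)       # most-voted class per voxel

    Weighted variants replace the raw vote count with per-atlas similarity weights, which is where model-based approaches such as the one above can inject shape knowledge.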

  10. Point Cloud and Digital Surface Model Generation from High Resolution Multiple View Stereo Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Gong, K.; Fritsch, D.

    2018-05-01

    Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model (DSM) generation and 3D reconstruction. In 2016, a well-organized public multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivates us to explore methods that can generate accurate digital surface models from a large number of high-resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a preprocessing step, we filter all possible image pairs according to incidence angle and capture date. For the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching, and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply a median filter to generate the fused point cloud and DSM. Accuracy, completeness, and robustness are evaluated by comparison with a reference LiDAR DSM. The results show that the point cloud reconstructs the surface including small structures, and that the fused DSM generated by our pipeline is accurate and robust.
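
    The fusion step lends itself to a one-line sketch: once all per-pair DSMs are resampled to a common grid, a per-cell median suppresses the blunders of individual stereo pairs that a mean would smear across the surface. (Marking empty cells with np.nan is an assumption of this sketch, not necessarily the pipeline's convention.)

        import numpy as np

        def fuse_dsms(dsm_stack):
            """Fuse per-pair DSMs on a common grid with a per-cell median.

            dsm_stack: (n_pairs, H, W) array of heights; np.nan marks no-data cells.
            """
            return np.nanmedian(np.asarray(dsm_stack), axis=0)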

  11. The suitability of lightfield camera depth maps for coordinate measurement applications

    NASA Astrophysics Data System (ADS)

    Rangappa, Shreedhar; Tailor, Mitul; Petzing, Jon; Kinnell, Peter; Jackson, Michael

    2015-12-01

    Plenoptic cameras can capture 3D information in one exposure without the need for structured illumination, allowing grey-scale depth maps of the captured image to be created. The Lytro, a consumer-grade plenoptic camera, provides a cost-effective method of measuring the depth of multiple objects under controlled lighting conditions. In this research, the camera control variables, environmental sensitivity, image distortion characteristics, and effective working range of two first-generation Lytro cameras were evaluated. In addition, a calibration process has been created for the Lytro cameras to deliver three-dimensional depth maps expressed in SI units (metres). The novel results show depth accuracy and repeatability of +10.0 mm to -20.0 mm, and 0.5 mm, respectively. For the lateral X and Y coordinates, the accuracy was +1.56 μm to -2.59 μm and the repeatability was 0.25 μm.

  12. Noise-free accurate count of microbial colonies by time-lapse shadow image analysis.

    PubMed

    Ogawa, Hiroyuki; Nasu, Senshi; Takeshige, Motomu; Funabashi, Hisakage; Saito, Mikako; Matsuoka, Hideaki

    2012-12-01

    Microbial colonies in food matrices can be counted accurately by a novel noise-free method based on time-lapse shadow image analysis. An agar plate containing many clusters of microbial colonies and/or meat fragments is trans-illuminated to project their 2-dimensional (2D) shadow images onto a color CCD camera. The 2D shadow images of every cluster distributed within a 3-mm-thick agar layer were captured in focus simultaneously by means of a multiple focusing system, and were then converted to 3-dimensional (3D) shadow images. By time-lapse analysis of the 3D shadow images, it was determined whether each cluster comprised a single colony, multiple colonies, or a meat fragment. The analytical precision was high enough to distinguish a microbial colony from a meat fragment, to recognize an oval image as two colonies in contact with each other, and to detect microbial colonies hidden under a food fragment. The detection of hidden colonies is an outstanding capability in comparison with other systems. The present system attained accuracy for counting fewer than 5 colonies and is therefore of practical importance. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Context-dependent logo matching and recognition.

    PubMed

    Sahbi, Hichem; Ballan, Lamberto; Serra, Giuseppe; Del Bimbo, Alberto

    2013-03-01

    Through this paper, we contribute to the design of a novel variational framework able to match and recognize multiple instances of multiple reference logos in image archives. Reference logos and test images are seen as constellations of local features (interest points, regions, etc.) and are matched by minimizing an energy function mixing: 1) a fidelity term that measures the quality of feature matching, 2) a neighborhood criterion that captures feature co-occurrence/geometry, and 3) a regularization term that controls the smoothness of the matching solution. We also introduce a detection/recognition procedure and study its theoretical consistency. Finally, we show the validity of our method through extensive experiments on the challenging MICC-Logos dataset. Our method outperforms baseline as well as state-of-the-art matching/recognition procedures by 20%.

  14. Multiple source associated particle imaging for simultaneous capture of multiple projections

    DOEpatents

    Bingham, Philip R; Hausladen, Paul A; McConchi, Seth M; Mihalczo, John T; Mullens, James A

    2013-11-19

    Disclosed herein are representative embodiments of methods, apparatus, and systems for performing neutron radiography. For example, in one exemplary method, an object is interrogated with a plurality of neutrons. The plurality of neutrons includes a first portion of neutrons generated from a first neutron source and a second portion of neutrons generated from a second neutron source. Further, at least some of the first portion and the second portion are generated during a same time period. In the exemplary method, one or more neutrons from the first portion and one or more neutrons from the second portion are detected, and an image of the object is generated based at least in part on the detected neutrons from the first portion and the detected neutrons from the second portion.

  15. Geometric correction and digital elevation extraction using multiple MTI datasets

    USGS Publications Warehouse

    Mercier, Jeffrey A.; Schowengerdt, Robert A.; Storey, James C.; Smith, Jody L.

    2007-01-01

    Digital Elevation Models (DEMs) are traditionally acquired from a stereo pair of aerial photographs sequentially captured by an airborne metric camera. Standard DEM extraction techniques can be naturally extended to satellite imagery, but the particular characteristics of satellite imaging can cause difficulties. The spacecraft ephemeris with respect to the ground site during image collection is the most important factor in the elevation extraction process. When the angle of separation between the stereo images is small, the extraction process typically produces measurements with low accuracy, while a large angle of separation can cause an excessive number of erroneous points in the DEM due to occlusion of ground areas. The use of three or more images registered to the same ground area can potentially reduce these problems and improve the accuracy of the extracted DEM. The pointing capability of some sensors, such as the Multispectral Thermal Imager (MTI), allows for multiple collects of the same area from different perspectives. This functionality makes MTI a good candidate for the implementation of a DEM extraction algorithm using multiple images for improved accuracy. The evaluation of this capability and the development of algorithms to geometrically model the MTI sensor and extract DEMs from multi-look MTI imagery are described in this paper. An RMS elevation error of 6.3 meters is achieved using 11 ground test points, while the MTI band has a 5-meter ground sample distance.

  16. Wind Erosion

    NASA Image and Video Library

    2015-07-02

    Long-term winds have etched the surface in Memnonia Sulci. Partially cemented surface materials are easily eroded by the wind, forming linear ridges called yardangs. The multiple directions of the yardangs in this VIS image indicate that there were at least two different wind directions in this area. Orbit Number: 59217 Latitude: -8.33112 Longitude: 186.506 Instrument: VIS Captured: 2015-04-20 15:12 http://photojournal.jpl.nasa.gov/catalog/PIA19502

  17. Kernel-aligned multi-view canonical correlation analysis for image recognition

    NASA Astrophysics Data System (ADS)

    Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao

    2016-09-01

    Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of these feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have good discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets demonstrate the effectiveness of our proposed method.

  18. Subpixel based defocused points removal in photon-limited volumetric dataset

    NASA Astrophysics Data System (ADS)

    Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Maraka, Harsha Vardhan R.; Ryle, James P.; Sheridan, John T.

    2017-03-01

    The asymptotic property of the maximum likelihood estimator (MLE) has been utilized to reconstruct three-dimensional (3D) sectional images in the photon counting imaging (PCI) regime. First, multiple 2D intensity images, known as elemental images (EIs), are captured. Then the geometric ray-tracing method is employed to reconstruct the 3D sectional images at various depth cues. We note that a 3D sectional image consists of both focused and defocused regions, depending on the reconstructed depth position. The defocused portion is redundant and should be removed in order to facilitate image analysis, e.g., 3D object tracking, recognition, classification, and navigation. In this paper, we present a subpixel-level three-step technique (involving adaptive thresholding, boundary detection, and entropy-based segmentation) to discard the defocused sparse samples from the reconstructed photon-limited 3D sectional images. Simulation results are presented demonstrating the feasibility and efficiency of the proposed method.

  19. Visible camera imaging of plasmas in Proto-MPEX

    NASA Astrophysics Data System (ADS)

    Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.

    2015-11-01

    The prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine is intended to study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full-frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps). However, the frame rate is strongly dependent on the size of the "region of interest" that is sampled. The maximum ROI corresponds to the full detector area of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter for "true-color" imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.

  20. Plenoptic background oriented schlieren imaging

    NASA Astrophysics Data System (ADS)

    Klemkowsky, Jenna N.; Fahringer, Timothy W.; Clifford, Christopher J.; Bathel, Brett F.; Thurow, Brian S.

    2017-09-01

    The combination of the background oriented schlieren (BOS) technique with the unique imaging capabilities of a plenoptic camera, termed plenoptic BOS, is introduced as a new addition to the family of schlieren techniques. Compared to conventional single camera BOS, plenoptic BOS is capable of sampling multiple lines-of-sight simultaneously. Displacements from each line-of-sight are collectively used to build a four-dimensional displacement field, which is a vector function structured similarly to the original light field captured in a raw plenoptic image. The displacement field is used to render focused BOS images, which qualitatively are narrow depth of field slices of the density gradient field. Unlike focused schlieren methods that require manually changing the focal plane during data collection, plenoptic BOS synthetically changes the focal plane position during post-processing, such that all focal planes are captured in a single snapshot. Through two different experiments, this work demonstrates that plenoptic BOS is capable of isolating narrow depth of field features, qualitatively inferring depth, and quantitatively estimating the location of disturbances in 3D space. Such results motivate future work to transition this single-camera technique towards quantitative reconstructions of 3D density fields.

  1. DeitY-TU face database: its design, multiple camera capturing, characteristics, and evaluation

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Saha, Kankan; Saha, Priya; Bhattacharjee, Debotosh

    2014-10-01

    The development of the latest face databases provides researchers with different and realistic problems that play an important role in the development of efficient algorithms for solving the difficulties of automatic recognition of human faces. This paper presents the creation of a new visual face database, named the Department of Electronics and Information Technology-Tripura University (DeitY-TU) face database. It contains face images of 524 persons belonging to different non-tribes and Mongolian tribes of north-east India, with their anthropometric measurements for identification. Database images are captured within a room with controlled variations in illumination, expression, and pose, along with variability in age, gender, accessories, make-up, and partial occlusion. Each image contains the combined primary challenges of face recognition, i.e., illumination, expression, and pose. The database also represents some new features: soft biometric traits such as moles, freckles, scars, etc., and facial anthropometric variations that may be helpful for biometric recognition. It also gives a comparative study of existing two-dimensional face image databases. The database has been tested using two baseline algorithms, linear discriminant analysis and principal component analysis, whose scores may be used by other researchers as control algorithm performance baselines.

  2. A method to perform a fast fourier transform with primitive image transformations.

    PubMed

    Sheridan, Phil

    2007-05-01

    The Fourier transform is one of the most important transformations in image processing. A major component of this influence comes from the ability to implement it efficiently on a digital computer. This paper describes a new methodology to perform a fast Fourier transform (FFT). This methodology emerges from considerations of the natural physical constraints imposed by image capture devices (camera/eye). The novel aspects of the specific FFT method described include: 1) the bit-wise reversal re-grouping operation of the conventional FFT is replaced by the use of lossless image rotation and scaling, and 2) the usual arithmetic operations of complex multiplication are replaced with integer addition. The significance of the FFT presented in this paper is introduced by extending a discrete and finite image algebra, named Spiral Honeycomb Image Algebra (SHIA), to a continuous version, named SHIAC.

  3. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into "unsaturated" and "saturated" categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation on our light field dataset together with the Stanford light field archive, in comparison with existing methods, verifies the effectiveness of our proposed algorithm. PMID:27253083

  4. The use of XFEM to assess the influence of intra-cortical porosity on crack propagation.

    PubMed

    Rodriguez-Florez, Naiara; Carriero, Alessandra; Shefelbine, Sandra J

    2017-03-01

    This study aimed to use the eXtended finite element method (XFEM) to characterize crack growth through bone's intra-cortical pores. Two techniques were compared using Abaqus: (1) void material properties were assigned to the pores; (2) multiple enrichment regions with independent crack-growth possibilities were employed. Both were applied to 2D models of transverse images of mouse bone with differing porous structures. The results revealed that assigning multiple enrichment regions allows multiple cracks to be initiated progressively, which cannot be captured when the voids are filled. Therefore, filling the pores and using one enrichment region in the model will not create realistic fracture patterns in Abaqus-XFEM.

  5. Automated camera-phone experience with the frequency of imaging necessary to capture diet.

    PubMed

    Arab, Lenore; Winter, Ashley

    2010-08-01

    Camera-enabled cell phones provide an opportunity to strengthen dietary recall through automated imaging of foods eaten during a specified period. To explore the frequency of imaging needed to capture all foods eaten, we examined the number of images of individual foods consumed in a pilot study of automated imaging using camera phones set to an image-capture frequency of one snapshot every 10 seconds. Food images were tallied from 10 young adult subjects who wore the phone continuously during the work day and consented to share their images. Based on the number of images received for each eating experience, the pilot data suggest that automated capturing of images at a frequency of once every 10 seconds is adequate for recording foods consumed during regular meals, whereas a greater frequency of imaging is necessary to capture snacks and beverages eaten quickly. 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.

  6. Three-dimensional reconstruction of rat knee joint using episcopic fluorescence image capture.

    PubMed

    Takaishi, R; Aoyama, T; Zhang, X; Higuchi, S; Yamada, S; Takakuwa, T

    2014-10-01

    The development of the knee joint was morphologically investigated, and the process of cavitation was analyzed using episcopic fluorescence image capture (EFIC) to create spatial and temporal three-dimensional (3D) reconstructions. Knee joints of Wistar rat embryos between embryonic day (E)14 and E20 were investigated. Samples were sectioned and visualized using EFIC. Two-dimensional image stacks were then reconstructed using OsiriX software, and 3D reconstructions were generated using Amira software. Cavitations of the knee joint were constructed from five divided portions. Cavity formation initiated at multiple sites at E17; among them, the femoropatellar cavity (FPC) was the first. Cavitations on the medial side preceded those on the lateral side. Each cavity connected at E20, when cavitations around the anterior cruciate ligament (ACL) and posterior cruciate ligament (PCL) were completed. Cavity formation initiated from six portions. In each portion, development proceeded asymmetrically. These results concerning the anatomical development of the knee joint using EFIC contribute to a better understanding of its structural features. Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  7. Evaluation of ultrasonic array imaging algorithms for inspection of a coarse grained material

    NASA Astrophysics Data System (ADS)

    Van Pamel, A.; Lowe, M. J. S.; Brett, C. R.

    2014-02-01

    Improving the ultrasound inspection capability for coarse-grained metals remains of longstanding interest to industry and the NDE research community, and is expected to become increasingly important for next-generation power plants. A test sample of coarse-grained Inconel 625, representative of future power plant components, has been manufactured to test the detectability of different inspection techniques. Conventional ultrasonic A-, B-, and C-scans showed the sample to be extraordinarily difficult to inspect due to its scattering behaviour. In recent years, however, array probes and Full Matrix Capture (FMC) imaging algorithms, which extract the maximum amount of information possible, have unlocked exciting possibilities for improvement. This article proposes a robust methodology to evaluate the detection performance of imaging algorithms and applies it to three FMC imaging algorithms: the Total Focusing Method (TFM), Phase Coherent Imaging (PCI), and Decomposition of the Time Reversal Operator with Multiple Scattering (DORT MSF). The methodology considers the statistics of detection, presenting the detection performance as Probability of Detection (POD) and Probability of False Alarm (PFA). The data are captured in pulse-echo mode using 64-element array probes at centre frequencies of 1 MHz and 5 MHz. All three algorithms are shown to perform very similarly when comparing their flaw detection capabilities on this particular case.
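
    For orientation, the simplest of the three algorithms, TFM, is a delay-and-sum of the full matrix of transmit-receive A-scans at every image pixel; a minimal (unvectorized, envelope-free) sketch:

        import numpy as np

        def tfm(fmc, el_x, grid_x, grid_z, c, fs):
            """Total Focusing Method: coherently sum FMC data onto an image grid.

            fmc:  (n_el, n_el, n_t) array; fmc[t, r] is the A-scan for transmitter t
                  and receiver r.  el_x: element x-positions (m), elements at z = 0.
            c: wave speed (m/s); fs: sampling rate (Hz).
            """
            n_el = len(el_x)
            t_idx, r_idx = np.meshgrid(np.arange(n_el), np.arange(n_el), indexing="ij")
            img = np.zeros((len(grid_z), len(grid_x)))
            for iz, z in enumerate(grid_z):
                for ix, x in enumerate(grid_x):
                    d = np.hypot(el_x - x, z)                # element-to-pixel distances
                    tof = (d[:, None] + d[None, :]) / c      # transmit + receive delay
                    s = np.clip(np.round(tof * fs).astype(int), 0, fmc.shape[2] - 1)
                    img[iz, ix] = abs(fmc[t_idx, r_idx, s].sum())
            return img

    Production implementations vectorize the pixel loop, interpolate between samples, and work on the analytic (envelope) signal; PCI additionally weights the sum by the phase coherence of the delayed samples, while DORT MSF takes an eigenstructure route that this sketch does not cover.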

  8. Gross feature recognition of Anatomical Images based on Atlas grid (GAIA): Incorporating the local discrepancy between an atlas and a target image to capture the features of anatomic brain MRI.

    PubMed

    Qin, Yuan-Yuan; Hsu, Johnny T; Yoshida, Shoko; Faria, Andreia V; Oishi, Kumiko; Unschuld, Paul G; Redgrave, Graham W; Ying, Sarah H; Ross, Christopher A; van Zijl, Peter C M; Hillis, Argye E; Albert, Marilyn S; Lyketsos, Constantine G; Miller, Michael I; Mori, Susumu; Oishi, Kenichi

    2013-01-01

    We aimed to develop a new method to convert T1-weighted brain MRIs to feature vectors, which could be used for content-based image retrieval (CBIR). To overcome the wide range of anatomical variability in clinical cases and the inconsistency of imaging protocols, we introduced the Gross feature recognition of Anatomical Images based on Atlas grid (GAIA), in which local intensity alteration, caused by pathological (e.g., ischemia) or physiological (development and aging) intensity changes, as well as by atlas-image misregistration, is used to capture the anatomical features of target images. As a proof of concept, GAIA was applied to pattern recognition of the neuroanatomical features of multiple stages of Alzheimer's disease, Huntington's disease, spinocerebellar ataxia type 6, and four subtypes of primary progressive aphasia. For each of these diseases, feature vectors based on a training dataset were applied to a test dataset to evaluate the accuracy of pattern recognition. The feature vectors extracted from the training dataset agreed well with the known pathological hallmarks of the selected neurodegenerative diseases. Overall, discriminant scores of the test images accurately categorized them into the correct disease categories. Images without typical disease-related anatomical features were misclassified. The proposed method is promising for image feature extraction based on disease-related anatomical features, and should enable users to submit a patient image and search past clinical cases with similar anatomical phenotypes.
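
    A toy sketch of the atlas-grid idea as described: summarize the atlas-to-image intensity discrepancy within each atlas parcel into a feature vector, then assign the disease category with the nearest training-set centroid. All names, shapes, and the centroid classifier are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def gaia_features(image, atlas_template, atlas_labels):
        """Feature vector = mean intensity discrepancy between the target
        image and the atlas template within each atlas parcel, computed
        after (assumed) atlas-to-image registration."""
        diff = image - atlas_template
        parcels = np.unique(atlas_labels)
        return np.array([diff[atlas_labels == p].mean() for p in parcels])

    def classify(feature, class_centroids):
        """Assign the disease category whose training-set centroid of
        feature vectors is closest (a stand-in for discriminant scoring)."""
        names = list(class_centroids)
        dists = [np.linalg.norm(feature - class_centroids[n]) for n in names]
        return names[int(np.argmin(dists))]
    ```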

  9. In vivo imaging of the neurovascular unit in CNS disease

    PubMed Central

    Merlini, Mario; Davalos, Dimitrios; Akassoglou, Katerina

    2014-01-01

    The neurovascular unit—composed of glia, pericytes, neurons and cerebrovasculature—is a dynamic interface that ensures physiological central nervous system (CNS) functioning. In disease, dynamic remodeling of the neurovascular interface triggers a cascade of responses that determine the extent of CNS degeneration and repair. The dynamics of these processes can be adequately captured by imaging in vivo, which allows the study of cellular responses to environmental stimuli and cell-cell interactions in the living brain in real time. This perspective focuses on intravital imaging studies of the neurovascular unit in stroke, multiple sclerosis (MS) and Alzheimer disease (AD) models and discusses their potential for identifying novel therapeutic targets. PMID:25197615

  10. Time-of-Flight Microwave Camera.

    PubMed

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-05

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable "stealth" regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with an angular resolution of 1.5 degrees, multispectral images across the X frequency band (8-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
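
    The quoted depth figures follow from standard FMCW and time-of-flight relations; a quick numerical check (standard formulas, not the paper's code):

    ```python
    c = 3e8  # speed of light, m/s

    # FMCW range resolution from the swept bandwidth (X band: 8-12 GHz).
    bandwidth = 12e9 - 8e9
    range_resolution = c / (2 * bandwidth)   # ~3.75 cm

    # The quoted 200 ps time resolution corresponds to ~6 cm of optical path.
    time_resolution = 200e-12
    optical_path = c * time_resolution       # 0.06 m in free space

    print(f"FMCW range resolution: {range_resolution * 100:.2f} cm")
    print(f"Path per 200 ps: {optical_path * 100:.1f} cm")
    ```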

  11. Magnetic induction tomography of objects for security applications

    NASA Astrophysics Data System (ADS)

    Ward, Rob; Joseph, Max; Langley, Abbi; Taylor, Stuart; Watson, Joe C.

    2017-10-01

    A coil array imaging system has been further developed from previous investigations, focusing on its application to fast screening of small bags or parcels, with a view to the production of a compact instrument for security applications. In addition to reducing image acquisition times, work was directed toward exploring potential cost-effective manufacturing routes. Based on magnetic induction tomography and eddy-current principles, the instrument captured images of conductive targets using a lock-in amplifier, individually multiplexing signals between a primary driver coil and a 20 by 21 imaging array of secondary passive coils constructed using a reproducible multiple-tile design. The design was based on additive manufacturing techniques and provided two orthogonal imaging planes with the ability to reconstruct images in less than 10 seconds. An assessment of one of the imaging planes is presented. This technique potentially provides a cost-effective threat evaluation method that may complement conventional radiographic approaches.

  12. Performance assessment of multi-frequency processing of ICU chest images for enhanced visualization of tubes and catheters

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohui; Couwenhoven, Mary E.; Foos, David H.; Doran, James; Yankelevitz, David F.; Henschke, Claudia I.

    2008-03-01

    An image-processing method has been developed to improve the visibility of tube and catheter features in portable chest x-ray (CXR) images captured in the intensive care unit (ICU). The image-processing method is based on a multi-frequency approach, wherein the input image is decomposed into different spatial frequency bands, and those bands that contain the tube and catheter signals are individually enhanced by nonlinear boosting functions. Using a random sampling strategy, 50 cases were retrospectively selected for the study from a large database of portable CXR images that had been collected from multiple institutions over a two-year period. All images used in the study were captured using photo-stimulable, storage-phosphor computed radiography (CR) systems. Each image was processed in two ways: with default image-processing parameters such as those used in clinical settings (control), and separately with the new tube and catheter enhancement algorithm (test). Three board-certified radiologists participated in a reader study to assess differences in both detection-confidence performance and diagnostic efficiency between the control and test images. Images were evaluated on a diagnostic-quality, 3-megapixel monochrome monitor. Two scenarios were studied: the baseline scenario, representative of today's workflow (a single control image presented with window/level adjustments enabled), vs. the test scenario (a control/test image pair presented with a toggle enabled and window/level settings disabled). The radiologists were asked to read the images in each scenario as they normally would for clinical diagnosis. Trend analysis indicates that the test scenario offers improved reading efficiency while providing as good or better detection capability compared to the baseline scenario.
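
    A minimal sketch of the multi-frequency idea described above: decompose the image into difference-of-Gaussian bands, boost selected bands with a soft nonlinearity, and recombine. The band choices, gains, and tanh boost are illustrative assumptions, not the authors' algorithm.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def enhance_tubes(image, sigmas=(1, 2, 4, 8), boost=2.0):
        """Decompose into difference-of-Gaussian bands, boost selected
        bands with a soft nonlinearity, and recombine."""
        image = image.astype(float)
        blurred = [image] + [gaussian_filter(image, s) for s in sigmas]
        bands = [blurred[i] - blurred[i + 1] for i in range(len(sigmas))]
        residual = blurred[-1]
        out = residual.copy()
        for i, band in enumerate(bands):
            # Nonlinear boosting: tanh amplifies small/medium band
            # amplitudes relatively more than large ones.
            gain = boost if i in (1, 2) else 1.0   # mid bands: assumed choice
            scale = band.std() + 1e-6
            out += gain * scale * np.tanh(band / scale)
        return out
    ```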

  13. A broken heart: right ventricular rupture after blunt cardiac injury.

    PubMed

    Nabeel, Muhammad; Williams, Kim Allan

    2013-01-01

    A 68-year-old woman who was a restrained driver was brought to the hospital after a severe motor vehicle accident. She underwent CT of the chest, which demonstrated pulmonary infiltrates, multiple rib fractures, and bilateral hemo- and pneumothoraces. Subsequent review of the images noted contrast extravasating from the apical portion of the right ventricle into the pericardial space, demonstrating a confined rupture of the right ventricle. Cardiac rupture is a common complication of a rare event, and there are few examples in the imaging literature capturing such an event. Copyright © 2013 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.

  14. Diffuse optical microscopy for quantification of depth-dependent epithelial backscattering in the cervix

    NASA Astrophysics Data System (ADS)

    Bodenschatz, Nico; Lam, Sylvia; Carraro, Anita; Korbelik, Jagoda; Miller, Dianne M.; McAlpine, Jessica N.; Lee, Marette; Kienle, Alwin; MacAulay, Calum

    2016-06-01

    A fiber optic imaging approach using structured illumination is presented for quantification of almost pure epithelial backscattering. We employ multiple spatially modulated projection patterns and camera-based reflectance capture to image depth-dependent epithelial scattering. The potential diagnostic value of our approach is investigated on cervical ex vivo tissue specimens. Our study indicates a strong backscattering increase in the upper part of the cervical epithelium caused by dysplastic microstructural changes. Quantification of relative depth-dependent backscattering is confirmed as a potentially useful diagnostic feature for detection of precancerous lesions in cervical squamous epithelium.

  15. Cameras and settings for optimal image capture from UAVs

    NASA Astrophysics Data System (ADS)

    Smith, Mike; O'Connor, James; James, Mike R.

    2017-04-01

    Aerial image capture has become very common within the geosciences due to the increasing affordability of low payload (<20 kg) Unmanned Aerial Vehicles (UAVs) for consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion which can lead to experiments being difficult to accurately reproduce. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well exposed and suitable imagery are derived. This then leads to discussion of how to optimise the platform, camera, lens and imaging settings relevant to image quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and have these comparable with future studies. We recommend providing open access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.
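
    One of the underpinning concepts revisited here is the trade-off between shutter speed and motion blur. A back-of-envelope helper, with assumed example values (not from the paper):

    ```python
    def max_shutter_time(ground_speed_mps, gsd_m, allowed_blur_px=0.5):
        """Longest exposure such that forward motion smears the image by
        no more than `allowed_blur_px` pixels, for a given ground sample
        distance (GSD, metres per pixel)."""
        return allowed_blur_px * gsd_m / ground_speed_mps

    # Example: 12 m/s flight speed, 2 cm/pixel GSD, half-pixel blur budget.
    t = max_shutter_time(12.0, 0.02)
    print(f"Max shutter: 1/{round(1 / t)} s")   # ~1/1200 s
    ```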

  16. Joint Labeling Of Multiple Regions of Interest (Rois) By Enhanced Auto Context Models.

    PubMed

    Kim, Minjeong; Wu, Guorong; Guo, Yanrong; Shen, Dinggang

    2015-04-01

    Accurate segmentation of a set of regions of interest (ROIs) in brain images is a key step in many neuroscience studies. Due to the complexity of image patterns, many learning-based segmentation methods have been proposed, including the auto context model (ACM), which can capture high-level contextual information for guiding segmentation. However, since the current ACM can only handle one ROI at a time, neighboring ROIs have to be labeled separately with different ACMs that are trained independently, without communicating with each other. To address this, we enhance the current single-ROI learning ACM to a multi-ROI learning ACM for joint labeling of multiple neighboring ROIs (called eACM). First, we extend the current independently-trained single-ROI ACMs to a set of jointly-trained cross-ROI ACMs, by simultaneously training ACMs for all spatially-connected ROIs and letting them share their respective intermediate outputs for coordinated labeling of each image point. The context features in each ACM can then capture cross-ROI dependence information from the outputs of the other ACMs designed for neighboring ROIs. Second, we upgrade the output labeling map of each ACM with a multi-scale representation, so that both local and global context information can be used effectively to increase robustness in characterizing the geometric relationships among neighboring ROIs. Third, we integrate the ACM into a multi-atlas segmentation paradigm to accommodate high variation among subjects. Experiments on the LONI LPBA40 dataset show much better performance by our eACM, compared to the conventional ACM.

  17. AMUC: Associated Motion capture User Categories.

    PubMed

    Norman, Sally Jane; Lawson, Sian E M; Olivier, Patrick; Watson, Paul; Chan, Anita M-A; Dade-Robertson, Martyn; Dunphy, Paul; Green, Dave; Hiden, Hugo; Hook, Jonathan; Jackson, Daniel G

    2009-07-13

    The AMUC (Associated Motion capture User Categories) project consisted of building a prototype sketch retrieval client for exploring motion capture archives. High-dimensional datasets reflect the dynamic process of motion capture and comprise high-rate sampled data of a performer's joint angles; in response to multiple query criteria, these data can potentially yield different kinds of information. The AMUC prototype harnesses graphic input via an electronic tablet as a query mechanism, time and position signals obtained from the sketch being mapped to the properties of data streams stored in the motion capture repository. As well as proposing a pragmatic solution for exploring motion capture datasets, the project demonstrates the conceptual value of iterative prototyping in innovative interdisciplinary design. The AMUC team was composed of live performance practitioners and theorists conversant with a variety of movement techniques, bioengineers who recorded and processed motion data for integration into the retrieval tool, and computer scientists who designed and implemented the retrieval system and server architecture, scoped for Grid-based applications. Creative input on information system design and navigation, and digital image processing, underpinned implementation of the prototype, which has undergone preliminary trials with diverse users, allowing identification of rich potential development areas.

  18. Scheimpflug with computational imaging to extend the depth of field of iris recognition systems

    NASA Astrophysics Data System (ADS)

    Sinharoy, Indranil

    Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances beyond which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits iris image capture to a small volume, the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with modern computational imaging techniques to extend the capture volume? The solution we found is surprisingly simple; yet, it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporate the pupil parameters. The models revealed that we could register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration, making AFS suitable for real-time performance. We have demonstrated up to an order of magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution and signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure, conventional image for the same DOF and brightness level. The net reduction in capture time can significantly relax the constraints on subject movement during iris acquisition, making it less restrictive.
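
    A minimal focus-stacking sketch in the spirit of the AFS blending step, assuming registration has already been done (which AFS makes analytic by pivoting at the entrance pupil): pick, per pixel, the image in the stack with the highest local Laplacian sharpness. Illustrative only.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, laplace

    def blend_focus_stack(stack):
        """stack: (N, H, W) array of registered grayscale images.
        Returns an all-in-focus composite by choosing, per pixel, the
        image with the strongest smoothed Laplacian (sharpness) response."""
        sharpness = np.stack([gaussian_filter(np.abs(laplace(im)), 3)
                              for im in stack])
        best = np.argmax(sharpness, axis=0)        # (H, W) index map
        rows, cols = np.indices(best.shape)
        return stack[best, rows, cols]
    ```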

  19. Metalloporphyrins and their uses as radiosensitizers for radiation therapy

    DOEpatents

    Miura, Michiko; Slatkin, Daniel N.

    2004-07-06

    The present invention covers radiosensitizers containing as an active ingredient halogenated derivatives of boronated porphyrins containing multiple carborane cages (having the structure designated STR1 in the patent), which selectively accumulate in neoplastic tissue within the irradiation volume and thus can be used in cancer therapies including, but not limited to, boron neutron-capture therapy and photodynamic therapy. The present invention also covers methods for using these radiosensitizers in tumor imaging and cancer treatment.

  20. Imaging of enzyme activity using bio-LSI system enables simultaneous immunosensing of different analytes in multiple specimens.

    PubMed

    Hokuto, Toshiki; Yasukawa, Tomoyuki; Kunikata, Ryota; Suda, Atsushi; Inoue, Kumi Y; Ino, Kosuke; Matsue, Tomokazu; Mizutani, Fumio

    2016-06-01

    Electrochemical imaging is an excellent technique for characterizing the activity of biomaterials such as enzymes and cells. A large-scale integration-based amperometric sensor (Bio-LSI) has been developed for the simultaneous and continuous detection of the concentration distribution of redox species generated by reactions of biomolecules. In this study, the Bio-LSI system was demonstrated to be applicable to simultaneous detection of different analytes in multiple specimens. The multiple specimens containing human immunoglobulin G (hIgG) and mouse IgG (mIgG) were introduced into each channel of the upper substrate across the antibody lines for hIgG and mIgG on the lower substrate. Hydrogen peroxide generated by the enzyme reaction of glucose oxidase captured at the intersections was simultaneously detected by the 400 microelectrodes of the Bio-LSI chip. The oxidation current increased with increasing concentrations of hIgG, which could be detected in the range of 0.01-1.0 µg mL⁻¹. Simultaneous detection of hIgG and mIgG in multiple specimens was achieved by using line patterns of both antibodies. Therefore, the presence of different target molecules in multiple samples can be quantitatively and simultaneously visualized as a current image by the Bio-LSI system. Copyright © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Capturing the worlds of multiple sclerosis: Hannah Laycock's photography.

    PubMed

    Bolaki, Stella

    2017-03-01

    This essay explores UK photographer Hannah Laycock's Awakenings and, to a lesser extent, Perceiving Identity that were created in 2015, following her diagnosis with multiple sclerosis (MS) in 2013. It draws on scholarship by people with chronic illness while situating these two MS projects in the context of Laycock's earlier art and portrait photography dealing with fragility, image and desire, and power relations between subject and observer. The analysis illustrates how her evocative photography captures the lived or subjective experience of an invisible and often misunderstood condition by initially focusing on the tension between transparency and opacity in her work. It further shows how her images counter dominant didactic metaphors such as, 'the body as machine', that perpetuate the dehumanising and objectifying aspects of medical care. Subsequent sections trace the influence that Oliver Sacks has had on Laycock's practice, and reflect on other metaphors and tropes in Awakenings that illuminate the relationship between body and self in MS. The essay concludes by acknowledging the therapeutic power of art and calling upon health professionals to make more use of such artistic work in clinical practice. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  2. Joint Video Stitching and Stabilization from Moving Cameras.

    PubMed

    Guo, Heng; Liu, Shuaicheng; He, Tong; Zhu, Shuyuan; Zeng, Bing; Gabbouj, Moncef

    2016-09-08

    In this paper, we extend image stitching to video stitching for videos that are captured of the same scene simultaneously by multiple moving cameras. In practice, videos captured under this circumstance often appear shaky. Directly applying image stitching methods to shaky videos often suffers from strong spatial and temporal artifacts. To solve this problem, we propose a unified framework in which video stitching and stabilization are performed jointly. Specifically, our system takes several overlapping videos as inputs. We estimate both inter motions (between different videos) and intra motions (between neighboring frames within a video). Then, we solve for an optimal virtual 2D camera path from all original paths. An enlarged field of view along the virtual path is finally obtained by a spatio-temporal optimization that takes both inter and intra motions into consideration. Two important components of this optimization are that (1) a grid-based tracking method is designed for improved robustness, producing features that are distributed evenly within and across multiple views, and (2) a mesh-based motion model is adopted for handling scene parallax. Experimental results demonstrate the effectiveness of our approach on various consumer-level videos, and a plugin named "Video Stitcher" was developed for Adobe After Effects CC2015 to display the processed videos.
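
    A toy version of the "optimal virtual 2D camera path" step: given per-frame camera translations, solve a least-squares trade-off between staying close to the original path and keeping the second differences small. The paper's joint inter/intra-motion terms and mesh-based parallax handling are omitted here.

    ```python
    import numpy as np

    def smooth_path(path, smoothness=50.0):
        """path: (T, 2) original camera translations per frame.
        Minimizes ||p - path||^2 + smoothness * ||D2 p||^2, where D2 is
        the second-difference operator, giving a smooth virtual path."""
        T = len(path)
        D2 = np.zeros((T - 2, T))
        for i in range(T - 2):
            D2[i, i:i + 3] = [1.0, -2.0, 1.0]
        A = np.eye(T) + smoothness * (D2.T @ D2)
        return np.linalg.solve(A, path)   # solved per axis (columns)
    ```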

  3. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity which maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The suggested novel optical shutter device enables capture of a full HD depth image with mm-scale depth accuracy, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth imaging and its capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype, and image test results.
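
    The TOF principle with a 20 MHz modulator implies the standard phase-to-depth relation; a quick illustration with assumed numbers (not from the paper):

    ```python
    import numpy as np

    c = 3e8        # speed of light, m/s
    f_mod = 20e6   # optical shutter modulation frequency (20 MHz)

    # Unambiguous range of a continuous-wave TOF system: c / (2 f_mod).
    print(f"Unambiguous range: {c / (2 * f_mod):.2f} m")   # 7.5 m

    def depth_from_phase(phase_rad):
        """Depth encoded in the demodulated phase shift of the reflected IR."""
        return c * phase_rad / (4 * np.pi * f_mod)

    print(f"Depth at 90 deg phase: {depth_from_phase(np.pi / 2):.3f} m")  # 1.875 m
    ```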

  4. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray and do not resolve the lines, or give horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are clinical implications, and some anatomy or pathology may not be visualized if an image capture system is used improperly.

  5. 77 FR 4059 - Certain Electronic Devices for Capturing and Transmitting Images, and Components Thereof; Receipt...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-26

    ... Images, and Components Thereof; Receipt of Complaint; Solicitation of Comments Relating to the Public... Devices for Capturing and Transmitting Images, and Components Thereof, DN 2869; the Commission is... importation of certain electronic devices for capturing and transmitting images, and components thereof. The...

  6. Snapshot hyperspectral retinal imaging using compact spectral resolving detector array.

    PubMed

    Li, Hao; Liu, Wenzhong; Dong, Biqin; Kaluzny, Joel V; Fawzi, Amani A; Zhang, Hao F

    2017-06-01

    Hyperspectral retinal imaging captures the light spectrum from each imaging pixel. It provides spectrally encoded retinal physiological and morphological information, which could potentially benefit diagnosis and therapeutic monitoring of retinal diseases. The key challenges in hyperspectral retinal imaging are how to achieve snapshot imaging to avoid motion between the images from multiple spectral bands, and how to design a compact snapshot imager suitable for clinical use. Here, we developed a compact, snapshot hyperspectral fundus camera for rodents using a novel spectral resolving detector array (SRDA), in which a thin-film Fabry-Perot cavity filter is monolithically fabricated on each imaging pixel. We achieved hyperspectral retinal imaging with 16 wavelength bands (460 to 630 nm) at 20 fps. We also demonstrated false-color vessel contrast enhancement and retinal oxygen saturation (sO2) measurement through spectral analysis. This work could potentially bring hyperspectral retinal imaging from bench to bedside. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Image Capture with Synchronized Multiple-Cameras for Extraction of Accurate Geometries

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Delacourt, T.; Boutry, C.

    2016-06-01

    This paper presents a project of recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from them. Indeed, these models will be used for the sizing of infrastructures in order to simulate exceptional convoy truck routes. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels, the diameter of gyratories, and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain usable point clouds. The presented solution is based on a combination of multiple low-cost cameras designed on an on-board device allowing dynamic captures. The experimental device containing GoPro Hero4 cameras has been set up and used for tests in static or mobile acquisitions. Various configurations using multiple synchronized cameras have been tested and are discussed in order to highlight the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics is a major factor in the process of creating accurate dense point clouds, and in order to reach the best quality available from such cameras, the internal parameters of the cameras' fisheye lenses were estimated. Reference measurements were also made using a 3D TLS (Faro Focus 3D) to allow accuracy assessment.

  8. Wavelength Comparison

    NASA Image and Video Library

    2016-10-27

    The difference in features that are visible in different wavelengths of extreme ultraviolet light can be stunning as we see when we compare very large coronal holes, easily seen in the AIA 171 image (colorized bronze) yet hardly perceptible in the AIA 304 image (colorized red). Both were taken at just about the same time (Oct. 27, 2016). Coronal holes are areas of open magnetic field that carry solar wind out into space. In fact, these holes are currently causing a lot of geomagnetic activity here on Earth. The bronze image wavelength captures material that is much hotter and further up in the corona than the red image. The comparison dramatizes the value of observing the sun in multiple wavelengths of light. Movies are available at http://photojournal.jpl.nasa.gov/catalog/PIA15377

  9. The clinico-radiological paradox of cognitive function and MRI burden of white matter lesions in people with multiple sclerosis: A systematic review and meta-analysis.

    PubMed

    Mollison, Daisy; Sellar, Robin; Bastin, Mark; Mollison, Denis; Chandran, Siddharthan; Wardlaw, Joanna; Connick, Peter

    2017-01-01

    Moderate correlation exists between the imaging quantification of brain white matter lesions and cognitive performance in people with multiple sclerosis (MS). This may reflect the greater importance of other features, including subvisible pathology, or methodological limitations of the primary literature. We aimed to summarise the cognitive clinico-radiological paradox and explore the potential methodological factors that could influence the assessment of this relationship, through a systematic review and meta-analysis of primary research relating cognitive function to white matter lesion burden. Fifty papers met eligibility criteria for review, and meta-analysis of overall results was possible in thirty-two (2050 participants). The aggregate correlation between cognition and T2 lesion burden was r = -0.30 (95% confidence interval: -0.34, -0.26). Wide methodological variability was seen, particularly related to key factors in cognitive data capture and image analysis techniques. Resolving the persistent clinico-radiological paradox will likely require simultaneous evaluation of multiple components of the complex pathology using optimum measurement techniques for both cognitive and MRI feature quantification. We recommend a consensus initiative to support common standards for image analysis in MS, enabling benchmarking while also supporting ongoing innovation.

  10. Remote listening and passive acoustic detection in a 3-D environment

    NASA Astrophysics Data System (ADS)

    Barnhill, Colin

    Teleconferencing environments are a necessity in business, education and personal communication. They allow for the communication of information to remote locations without the need for travel and the necessary time and expense required for that travel. Visual information can be communicated using cameras and monitors. The advantage of visual communication is that an image can capture multiple objects and convey them, using a monitor, to a large group of people regardless of the receiver's location. This is not the case for audio. Currently, most experimental teleconferencing systems' audio is based on stereo recording and reproduction techniques. The problem with this solution is that it is only effective for one or two receivers. To accurately capture a sound environment consisting of multiple sources and to recreate that for a group of people is an unsolved problem. This work will focus on new methods of multiple-source 3-D environment sound capture and applications using these captured environments. Using spherical microphone arrays, it is now possible to capture a true 3-D environment. A spherical harmonic transform on the array's surface allows us to determine the basis functions (spherical harmonics) for all spherical wave solutions (up to a fixed order). This spherical harmonic decomposition (SHD) allows us to examine not only the time and frequency characteristics of an audio signal but also its spatial characteristics. In this way, a spherical harmonic transform is analogous to a Fourier transform in that a Fourier transform transforms a signal into the frequency domain and a spherical harmonic transform transforms a signal into the spatial domain. The SHD also decouples the input signals from the microphone locations. Using the SHD of a soundfield, new algorithms are available for remote listening, acoustic detection, and signal enhancement. The new algorithms presented in this paper show distinct advantages over previous detection and listening algorithms, especially for multiple speech sources and room environments. The algorithms use high-order (spherical harmonic) beamforming and power signal characteristics for source localization and signal enhancement. These methods are applied to remote listening, surveillance, and teleconferencing.
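
    A minimal sketch of the SHD step described above: project the signals from microphones on a sphere onto spherical harmonics by least squares. The array geometry and order are assumptions, and practical systems also need radial (mode-strength) equalization, which is omitted here.

    ```python
    import numpy as np
    from scipy.special import sph_harm

    def shd_matrix(theta, phi, order):
        """Y[i, k]: spherical harmonic k (packed over (n, m)) evaluated at
        microphone i. theta: azimuth in [0, 2pi), phi: polar angle in [0, pi].
        Needs num_mics >= (order + 1)**2 for a well-conditioned inverse."""
        cols = [sph_harm(m, n, theta, phi)
                for n in range(order + 1) for m in range(-n, n + 1)]
        return np.stack(cols, axis=1)

    def decompose(mic_signals, theta, phi, order=3):
        """mic_signals: (num_mics, num_samples) array. Returns spherical
        harmonic coefficients per sample via the pseudo-inverse of Y."""
        Y = shd_matrix(theta, phi, order)
        return np.linalg.pinv(Y) @ mic_signals
    ```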

  11. Terrain detection and classification using single polarization SAR

    DOEpatents

    Chow, James G.; Koch, Mark W.

    2016-01-19

    The various technologies presented herein relate to identifying manmade and/or natural features in a radar image. Two radar images (e.g., single-polarization SAR images) can be captured for a common scene. The first image is captured at a first instance and the second image is captured at a second instance, whereby the duration between the captures is of sufficient time that temporal decorrelation occurs for natural surfaces in the scene, and only manmade surfaces, e.g., a road, produce correlated pixels. An LCCD image comprising the correlated and decorrelated pixels can be generated from the two radar images. A median image can be generated from a plurality of radar images, whereby any features in the median image can be identified. A superpixel operation can be performed on the LCCD image and the median image, thereby enabling a feature(s) in the LCCD image to be classified.

  12. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-11-04

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near-2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  13. Platform control for space-based imaging: the TOPSAT mission

    NASA Astrophysics Data System (ADS)

    Dungate, D.; Morgan, C.; Hardacre, S.; Liddle, D.; Cropp, A.; Levett, W.; Price, M.; Steyn, H.

    2004-11-01

    This paper describes the imaging mode ADCS design for the TOPSAT satellite, an Earth observation demonstration mission targeted at military applications. The baselined orbit for TOPSAT is a 600-700 km sun-synchronous orbit from which images up to 30° off track can be captured. For this baseline, the imaging camera provides a resolution of 2.5 m and a nominal image size of 15x15 km. The ADCS design solution for the imaging mode uses a moving-demand approach to enable a single control algorithm solution for both the preparatory reorientation prior to image capture and the post-capture return to nadir pointing. During image capture proper, control is suspended to minimise the disturbances experienced by the satellite from the wheels. Prior to each imaging sequence, the moving-demand attitude and rate profiles are calculated such that the correct attitude and rate are achieved at the correct orbital position, enabling the correct target area to be captured.

  14. Development of Real-Time Image and In Situ Data Analysis at Sea

    DTIC Science & Technology

    1991-10-16

    ... for continuous capture from multiple satellites. The Blackhole System is the analysis machine used either by researchers to process/analyze their ... Orbital Tracker, and the antenna subsystem was overhauled. THE BLACKHOLE ANALYSIS SYSTEM: A new HP9000/350 workstation was installed at SSOC to perform ... [System diagram: Scripps Satellite Oceanography Center Blackhole System (Analysis Machine): HP 350 workstation, Motorola 68020 CPU, 2 x 512 MB hard disks]

  15. Optical Theory Improvements to Space Domain Awareness

    DTIC Science & Technology

    2016-09-15

    to other portions of system design. These design components include the Field of View (FoV) of the telescope and the physical dimensions of the system...trying to capture physical characteristics of the object being imaged, and the blurring caused by the atmosphere degrades and limits this capability...experimentally verified in multiple physical experiments [53, 54]. The drawback to these methods is that they assume that the noise caused by the atmosphere is

  16. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    DTIC Science & Technology

    2010-03-01

    technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process ... Wiimotes) used in Nintendo Wii games. Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D

  17. Model observer design for multi-signal detection in the presence of anatomical noise

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.; Park, Subok

    2017-02-01

    As psychophysical studies are resource-intensive to conduct, model observers are commonly used to assess and optimize medical imaging quality. Model observers are typically designed to detect at most one signal. However, in clinical practice, there may be multiple abnormalities in a single image set (e.g., multifocal multicentric (MFMC) breast cancer), which can impact treatment planning. The prevalence of signals can differ across anatomical regions, and human observers do not know the number or location of signals a priori. As new imaging techniques have the potential to improve multiple-signal detection (e.g., digital breast tomosynthesis may be more effective for diagnosis of MFMC than mammography), image quality assessment approaches addressing such tasks are needed. In this study, we present a model observer to detect multiple signals in an image dataset. A novel implementation of partial least squares (PLS) was developed to estimate different sets of efficient channels directly from the images. The PLS channels are adaptive to the characteristics of the signals and the background, and they capture the interactions among signal locations. Corresponding linear decision templates are employed to generate both image-level and location-specific scores on the presence of signals. Our results show that: (1) the model observer can achieve high performance with a reasonably small number of channels; (2) the model observer with PLS channels outperforms that with benchmark modified Laguerre-Gauss channels, especially when realistic signal shapes and complex background statistics are involved; and (3) the tasks of clinical interest, and other constraints such as sample size, would alter the optimal design of the model observer.
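
    A sketch of estimating "efficient channels" from images with partial least squares, using scikit-learn's PLSRegression as a stand-in for the paper's custom implementation; the data shapes and the use of PLS weights as channel templates are assumptions.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def learn_pls_channels(images, labels, n_channels=10):
        """images: (N, H*W) flattened training images;
        labels: (N,) signal-present indicators. The fitted PLS x-weights
        serve as channel templates adapted to signal/background structure."""
        pls = PLSRegression(n_components=n_channels)
        pls.fit(images, labels.astype(float))
        return pls.x_weights_          # (H*W, n_channels)

    def channelize(images, channels):
        """Project images into the low-dimensional channel space, on which
        linear decision templates can then be trained."""
        return images @ channels       # (N, n_channels)
    ```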

  18. Vancouver, Canada 2010

    NASA Image and Video Library

    2017-12-08

    The Thematic Mapper on the Landsat 5 satellite captured this image of Vancouver on September 7, 2011. Flowing through braided channels, the Fraser River meanders toward the sea, emptying through multiple outlets. More info: earthobservatory.nasa.gov/IOTD/view.php?id=77368 NASA Earth Observatory image created by Robert Simmon and Jesse Allen, using Landsat data provided by the United States Geological Survey. Instrument: Landsat 5 - TM Credit: NASA Earth Observatory

  19. An Assessment of Stream Confluence Flow Dynamics using Large Scale Particle Image Velocimetry Captured from Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    Lewis, Q. W.; Rhoads, B. L.

    2017-12-01

    The merging of rivers at confluences results in complex three-dimensional flow patterns that influence sediment transport, bed morphology, downstream mixing, and physical habitat conditions. The capacity to comprehensively characterize flow at confluences using traditional sensors, such as acoustic Doppler velocimeters and profilers, is limited by the restricted spatial resolution of these sensors and difficulties in measuring velocities simultaneously at many locations within a confluence. This study assesses two-dimensional surficial patterns of flow structure at a small stream confluence in Illinois, USA, using large scale particle image velocimetry (LSPIV) derived from videos captured by unmanned aerial systems (UAS). The method captures surface velocity patterns at high spatial and temporal resolution over multiple scales, ranging from the entire confluence to details of flow within the confluence mixing interface. Flow patterns at high momentum ratio are compared to flow patterns when the two incoming flows have nearly equal momentum flux. Mean surface flow patterns during the two types of events provide details on mean patterns of surface flow in different hydrodynamic regions of the confluence and on changes in these patterns with changing momentum flux ratio. LSPIV data derived from the highest resolution imagery also reveal general characteristics of large-scale vortices that form along the shear layer between the flows during the high-momentum-ratio event. The results indicate that the use of LSPIV and UAS is well-suited to capturing in detail mean surface patterns of flow at small confluences, but that characterization of evolving turbulent structures is limited by scale considerations related to structure size, image resolution, and camera instability. Complementary methods, including camera platforms mounted at fixed positions close to the water surface, provide opportunities to accurately characterize evolving turbulent flow structures in confluences.
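
    The core LSPIV computation pairs image patches between consecutive frames by cross-correlation and converts the peak displacement into a surface velocity. A minimal sketch with an assumed pixel scale and frame interval (sign conventions depend on the camera setup):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def patch_velocity(frame0, frame1, dt_s, m_per_px):
        """Estimate a single surface-velocity vector from two equally sized
        image patches (same window in consecutive frames) via FFT
        cross-correlation of their mean-removed intensities."""
        a = frame0 - frame0.mean()
        b = frame1 - frame1.mean()
        corr = fftconvolve(b, a[::-1, ::-1], mode="same")
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        dy -= corr.shape[0] // 2        # peak offset from the patch center
        dx -= corr.shape[1] // 2
        return np.array([dx, dy]) * m_per_px / dt_s   # (vx, vy) in m/s
    ```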

  20. A Layered Approach for Robust Spatial Virtual Human Pose Reconstruction Using a Still Image

    PubMed Central

    Guo, Chengyu; Ruan, Songsong; Liang, Xiaohui; Zhao, Qinping

    2016-01-01

    Pedestrian detection and human pose estimation are instructive for reconstructing a three-dimensional scenario and for robot navigation, particularly when large amounts of vision data are captured using various data-recording techniques. With an unrestricted capture scheme, which produces occlusions or blurring, the information describing each part of a human body and the relationships between parts, or even between different pedestrians, must be recovered from a still image. To this end, a multi-layered spatial virtual human pose reconstruction framework is presented in this study to recover any deficient information in planar images. In this framework, a hierarchical parts-based deep model is used to detect body parts by using the available restricted information in a still image and is then combined with spatial Markov random fields to re-estimate the accurate joint positions in the deep network. Then, the planar estimation results are mapped onto a virtual three-dimensional space using multiple constraints to recover any deficient spatial information. The proposed approach can be viewed as a general pre-processing method to guide the generation of continuous, three-dimensional motion data. The experiment results of this study describe the effectiveness and usability of the proposed approach. PMID:26907289

  1. [3D-imaging and analysis for plastic surgery by smartphone and tablet: an alternative to professional systems?].

    PubMed

    Koban, K C; Leitsch, S; Holzbach, T; Volkmer, E; Metz, P M; Giunta, R E

    2014-04-01

    A new approach using photographs from smartphones for three-dimensional (3D) imaging has been introduced as an alternative to standard high-quality 3D camera systems. In this work, we investigated different capture settings and compared the accuracy of this 3D reconstruction method with manual tape measurement and an established commercial 3D camera system. The facial region of a plastic mannequin head was labelled with 21 landmarks. A 3D reference model was captured with the Vectra 3D Imaging System®. In addition, 3D imaging was performed with the Autodesk 123D Catch® application using sets of 16, 12, 9, 6 and 3 pictures from an Apple® iPhone 4s® and an iPad® 3rd generation. The accuracy of 3D reconstruction was measured in two steps. First, 42 distance measurements from manual tape measurement and the two digital systems were compared. Second, the surface-to-surface deviation of different aesthetic units from the Vectra® reference model to Catch®-generated models was analysed. For each 3D system, the capturing and processing time was measured. The measurements showed no significant (p>0.05) difference between manual tape measurement and the digital distances from either the Catch® application or Vectra®. Surface-to-surface deviation from the Vectra® reference model showed sufficient results for the 3D reconstruction of Catch® with the 16-, 12- and 9-picture sets. Use of 6 and 3 pictures resulted in large deviations. Lateral aesthetic units showed higher deviations than central units. Catch® needed five times longer to capture and compute 3D models (on average 10 min vs. 2 min). The models computed by Autodesk 123D Catch® suggest good accuracy of 3D reconstruction for a standard mannequin model, in comparison to manual tape measurement and the surface-to-surface analysis with a 3D reference model. However, the prolonged capture time with multiple pictures is prone to errors. Further studies are needed to investigate its application and quality in capturing volunteer models. Mobile applications may soon offer plastic surgeons an alternative to today's cost-intensive, stationary 3D camera systems. © Georg Thieme Verlag KG Stuttgart · New York.

  2. Chromatic confocal microscopy for multi-depth imaging of epithelial tissue

    PubMed Central

    Olsovsky, Cory; Shelton, Ryan; Carrasco-Zevallos, Oscar; Applegate, Brian E.; Maitland, Kristen C.

    2013-01-01

    We present a novel chromatic confocal microscope capable of volumetric reflectance imaging of microstructure in non-transparent tissue. Our design takes advantage of the chromatic aberration of aspheric lenses that are otherwise well corrected. Strong chromatic aberration, generated by multiple aspheres, longitudinally disperses supercontinuum light onto the sample. The backscattered light detected with a spectrometer is therefore wavelength encoded and each spectrum corresponds to a line image. This approach obviates the need for traditional axial mechanical scanning techniques that are difficult to implement for endoscopy and susceptible to motion artifact. A wavelength range of 590-775 nm yielded a >150 µm imaging depth with ~3 µm axial resolution. The system was further demonstrated by capturing volumetric images of buccal mucosa. We believe these represent the first microstructural images in non-transparent biological tissue using chromatic confocal microscopy that exhibit long imaging depth while maintaining acceptable resolution for resolving cell morphology. Miniaturization of this optical system could bring enhanced speed and accuracy to endomicroscopic in vivo volumetric imaging of epithelial tissue. PMID:23667789

  3. Compressive light field imaging

    NASA Astrophysics Data System (ADS)

    Ashok, Amit; Neifeld, Mark A.

    2010-04-01

    Light field imagers such as the plenoptic and integral imagers inherently measure projections of the four-dimensional (4D) light field scalar function onto a two-dimensional sensor and therefore suffer from a spatial vs. angular resolution trade-off. Programmable light field imagers, proposed recently, overcome this spatio-angular resolution trade-off and allow high-resolution capture of the 4D light field function with multiple measurements, at the cost of a longer exposure time. However, these light field imagers do not exploit the spatio-angular correlations inherent in the light fields of natural scenes and thus result in photon-inefficient measurements. Here, we describe two architectures for compressive light field imaging that require relatively few photon-efficient measurements to obtain a high-resolution estimate of the light field while reducing the overall exposure time. Our simulation study shows that compressive light field imagers using the principal component (PC) measurement basis require four times fewer measurements and three times shorter exposure time than a conventional light field imager in order to achieve an equivalent light field reconstruction quality.
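
    A toy sketch of compressive capture with a principal-component measurement basis as described: measurements are inner products with a few learned PC patterns, and the light field is estimated by least squares in that basis. The training data and dimensions are synthetic stand-ins.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Learn a principal-component basis from vectorized training light fields.
    train = rng.normal(size=(500, 1024))              # stand-in training data
    _, _, Vt = np.linalg.svd(train - train.mean(0), full_matrices=False)
    basis = Vt[:16]                                   # top-16 PC measurement basis

    # Stand-in scene that is well represented by the learned basis.
    scene = basis.T @ rng.normal(size=16) + 0.01 * rng.normal(size=1024)

    # Compressive capture: 16 photon-efficient projections, not 1024 samples.
    measurements = basis @ scene

    # Reconstruction: least-squares estimate within the PC subspace
    # (basis rows are orthonormal, so the pseudo-inverse is the transpose).
    estimate = basis.T @ measurements
    print("relative error:",
          np.linalg.norm(estimate - scene) / np.linalg.norm(scene))
    ```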

  4. Can we match ultraviolet face images against their visible counterparts?

    NASA Astrophysics Data System (ADS)

    Narang, Neeru; Bourlai, Thirimachos; Hornak, Lawrence A.

    2015-05-01

    In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. However, face recognition (FR) with face images captured using different camera sensors, under variable illumination conditions, and with varying expressions is very challenging. In this paper, we investigate the advantages and limitations of the heterogeneous problem of matching ultraviolet (UV, 100-400 nm in wavelength) face images against their visible (VIS) counterparts, when all face images are captured under controlled conditions. The contributions of our work are three-fold: (i) we used a camera sensor designed with the capability to acquire UV images at short range, and generated a dual-band (VIS and UV) database composed of multiple, full-frontal face images of 50 subjects, collected in two sessions spanning a period of 2 months; (ii) for each dataset, we determined which set of face image pre-processing algorithms is more suitable for face matching; and, finally, (iii) we determined which FR algorithm better matches cross-band face images, resulting in high rank-1 identification rates. Experimental results show that our cross-spectral matching algorithms (the heterogeneous problem, where gallery and probe sets consist of face images acquired in different spectral bands) achieve sufficient identification performance. However, we also conclude that the problem under study is very challenging and requires further investigation to address real-world law enforcement or military applications. To the best of our knowledge, this is the first time in the open literature that the problem of cross-spectral matching of UV against VIS band face images has been investigated.

  5. 78 FR 16531 - Certain Electronic Devices for Capturing and Transmitting Images, and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-15

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-831] Certain Electronic Devices for Capturing and Transmitting Images, and Components Thereof; Commission Determination Not To Review an Initial... certain electronic devices for capturing and transmitting images, and components thereof. The complaint...

  6. Video-based convolutional neural networks for activity recognition from robot-centric videos

    NASA Astrophysics Data System (ADS)

    Ryoo, M. S.; Matthies, Larry

    2016-05-01

    In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their application to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs using first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.

  7. [Medical image compression: a review].

    PubMed

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds storage availability in radiology services, generating additional costs for devices with greater storage capacity. Moreover, the current trend of developing applications for cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical activity. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.

  8. Super resolution for astronomical observations

    NASA Astrophysics Data System (ADS)

    Li, Zhan; Peng, Qingyu; Bhanu, Bir; Zhang, Qingfeng; He, Haifeng

    2018-05-01

    In order to obtain detailed information from multiple telescope observations, a general blind super-resolution (SR) reconstruction approach for astronomical images is proposed in this paper. A pixel-reliability-based SR reconstruction algorithm is described and implemented, where the developed process incorporates flat field correction, automatic star searching and centering, iterative star matching, and sub-pixel image registration. Images captured by the 1-m telescope at Yunnan Observatory are used to test the proposed technique. The results of these experiments indicate that, following SR reconstruction, faint stars are more distinct, bright stars have sharper profiles, and the background shows finer detail; these results benefit from the high-precision star centering and image registration provided by the developed method. Application of the proposed approach not only provides more opportunities for new discoveries from astronomical image sequences, but will also contribute to enhancing the capabilities of most spatial or ground-based telescopes.
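
    The star-centering step above requires sub-pixel accuracy; a minimal sketch of one standard estimator, the intensity-weighted centroid (an illustrative choice, not necessarily the exact estimator used in the paper), is:

        import numpy as np

        def subpixel_centroid(patch):
            """Intensity-weighted centroid (x, y) of a star patch after a
            crude median background subtraction."""
            patch = np.clip(patch - np.median(patch), 0, None)
            ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
            total = patch.sum()
            return (float((xs * patch).sum() / total),
                    float((ys * patch).sum() / total))

        # Synthetic Gaussian star centered at (10.3, 9.7) in a 21x21 patch.
        ys, xs = np.mgrid[0:21, 0:21]
        star = np.exp(-((xs - 10.3) ** 2 + (ys - 9.7) ** 2) / (2 * 1.5 ** 2))
        print(subpixel_centroid(star))  # close to (10.3, 9.7)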

  9. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved in response to the demand for high-quality digital images; for example, digital still cameras now offer several megapixels. Although video cameras have higher frame rates, their resolution is lower than that of still cameras. Thus, high resolution and a high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. Common multi-CCD cameras, such as 3CCD color cameras, use identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  10. Towards Kilo-Hertz 6-DoF Visual Tracking Using an Egocentric Cluster of Rolling Shutter Cameras.

    PubMed

    Bapat, Akash; Dunn, Enrique; Frahm, Jan-Michael

    2016-11-01

    To maintain a reliable registration of the virtual world with the real world, augmented reality (AR) applications require highly accurate, low-latency tracking of the device. In this paper, we propose a novel method for performing this fast 6-DOF head pose tracking using a cluster of rolling shutter cameras. The key idea is that a rolling shutter camera works by capturing the rows of an image in rapid succession, essentially acting as a high-frequency 1D image sensor. By integrating multiple rolling shutter cameras on the AR device, our tracker is able to perform 6-DOF markerless tracking in a static indoor environment with minimal latency. Compared to state-of-the-art tracking systems, this tracking approach performs at significantly higher frequency, and it works in generalized environments. To demonstrate the feasibility of our system, we present thorough evaluations on synthetically generated data with tracking frequencies reaching 56.7 kHz. We further validate the method's accuracy on real-world images collected from a prototype of our tracking system against ground truth data using standard commodity GoPro cameras capturing at 120 Hz frame rate.

  11. Simple Smartphone-Based Guiding System for Visually Impaired People

    PubMed Central

    Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying

    2017-01-01

    Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system for solving the navigation problems for visually impaired people and achieving obstacle avoidance to enable visually impaired people to travel smoothly from a beginning point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online mode and offline mode, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the faster region-based convolutional neural network (Faster R-CNN) algorithm or the You Only Look Once (YOLO) algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. The obstacle recognition accuracy in this study reached 60%, which is sufficient for helping visually impaired people perceive the types and locations of obstacles around them. PMID:28608811

  12. Simple Smartphone-Based Guiding System for Visually Impaired People.

    PubMed

    Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying

    2017-06-13

    Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system for solving the navigation problems for visually impaired people and achieving obstacle avoidance to enable visually impaired people to travel smoothly from a beginning point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online mode and offline mode, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the faster region-based convolutional neural network (Faster R-CNN) algorithm or the You Only Look Once (YOLO) algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. The obstacle recognition accuracy in this study reached 60%, which is sufficient for helping visually impaired people perceive the types and locations of obstacles around them.

  13. Tracking change over time

    USGS Publications Warehouse

    2011-01-01

    Landsat satellites capture images of Earth from space-and have since 1972! These images provide a long-term record of natural and human-induced changes on the global landscape. Comparing images from multiple years reveals slow and subtle changes as well as rapid and devastating ones. Landsat images are available over the Internet at no charge. Using the free software MultiSpec, students can track changes to the landscape over time-just like remote sensing scientists do! The objective of the Tracking Change Over Time lesson plan is to get students excited about studying the changing Earth. Intended for students in grades 5-8, the lesson plan is flexible and may be used as a student self-guided tutorial or as a teacher-led class lesson. Enhance students' learning of geography, map reading, earth science, and problem solving by seeing landscape changes from space.

  14. Remote sensing data with the conditional latin hypercube sampling and geostatistical approach to delineate landscape changes induced by large chronological physical disturbances.

    PubMed

    Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh

    2009-01-01

    This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of the NDVI images. The variography of the NDVI images demonstrates that the spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of the multiple NDVI images were captured by 3,000 samples from the 62,500 grid cells in the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced the spatial patterns of the NDVI images. Overall, the proposed approach, which integrates conditional Latin hypercube sampling, variograms, kriging and sequential Gaussian simulation on remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on the spatial characteristics of landscape changes, including spatial variability and heterogeneity.

  15. SkySat-1: very high-resolution imagery from a small satellite

    NASA Astrophysics Data System (ADS)

    Murthy, Kiran; Shearn, Michael; Smiley, Byron D.; Chau, Alexandra H.; Levine, Josh; Robinson, M. Dirk

    2014-10-01

    This paper presents details of the SkySat-1 mission, which is the first microsatellite-class commercial earth-observation system to generate sub-meter resolution panchromatic imagery, in addition to sub-meter resolution 4-band pan-sharpened imagery. SkySat-1 was built and launched for an order of magnitude lower cost than similarly performing missions. The low-cost design enables the deployment of a large imaging constellation that can provide imagery with both high temporal resolution and high spatial resolution. One key enabler of the SkySat-1 mission was simplifying the spacecraft design and instead relying on ground-based image processing to achieve high performance at the system level. The imaging instrument consists of a custom-designed high-quality optical telescope and commercially available high frame rate CMOS image sensors. While each individually captured raw image frame shows moderate quality, ground-based image processing algorithms improve the raw data by combining data from multiple frames to boost image signal-to-noise ratio (SNR) and decrease the ground sample distance (GSD) in a process Skybox calls "digital TDI". Careful quality assessment and tuning of the spacecraft, payload, and algorithms was necessary to generate high-quality panchromatic, multispectral, and pan-sharpened imagery. Furthermore, the framing sensor configuration enabled the first commercial High-Definition full-frame rate panchromatic video to be captured from space, with approximately 1 meter ground sample distance. Details of the SkySat-1 imaging instrument and ground-based image processing system are presented, as well as an overview of the work involved with calibrating and validating the system. Examples of raw and processed imagery are shown, and the raw imagery is compared to pre-launch simulated imagery used to tune the image processing algorithms.

  16. Characterization of cervigram image sharpness using multiple self-referenced measurements and random forest classifiers

    NASA Astrophysics Data System (ADS)

    Jaiswal, Mayoore; Horning, Matt; Hu, Liming; Ben-Or, Yau; Champlin, Cary; Wilson, Benjamin; Levitz, David

    2018-02-01

    Cervical cancer is the fourth most common cancer among women worldwide and is especially prevalent in low-resource settings due to a lack of screening and treatment options. Visual inspection with acetic acid (VIA) is a widespread and cost-effective screening method for cervical pre-cancer lesions, but accuracy depends on the experience level of the health worker. Digital cervicography, capturing images of the cervix, enables review by an off-site expert or potentially a machine learning algorithm. These reviews require images of sufficient quality; however, image quality varies greatly across users. A novel algorithm was developed to evaluate the sharpness of images captured with MobileODT's digital cervicography device (EVA System), in order to eventually provide feedback to the health worker. The key challenges are that the algorithm must evaluate only a single image of each cervix, be robust to the variability in cervix images, and be fast enough to run in real time on a mobile device, while the machine learning model must be small enough to fit in a mobile device's memory and must train on a small, imbalanced dataset. In this paper, the focus scores of a preprocessed image and of a Gaussian-blurred version of the image are calculated using established methods and used as features. A feature selection metric is proposed to select the top features, which are then used in a random forest classifier to produce the final focus score. The resulting model, based on nine calculated focus scores, achieved significantly better accuracy than any single focus measure when tested on a holdout set of images. The area under the receiver operating characteristic curve was 0.9459.
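
    A minimal sketch of this self-referenced idea, computing focus measures on an image and on a Gaussian-blurred copy of itself and classifying the scores with a random forest, is shown below; the two focus measures (variance of Laplacian and Tenengrad energy), the blur parameters, and the synthetic training data are illustrative assumptions rather than the paper's nine measures:

        import cv2
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def focus_features(gray):
            """Focus scores of the image and of a blurred copy (self-reference)."""
            feats = []
            for img in (gray, cv2.GaussianBlur(gray, (9, 9), 2.0)):
                feats.append(cv2.Laplacian(img, cv2.CV_64F).var())  # variance of Laplacian
                gx = cv2.Sobel(img, cv2.CV_64F, 1, 0)
                gy = cv2.Sobel(img, cv2.CV_64F, 0, 1)
                feats.append(float(np.mean(gx ** 2 + gy ** 2)))      # Tenengrad energy
            return feats

        # Toy training set: random "images" labeled sharp (1) or blurry (0).
        rng = np.random.default_rng(0)
        X, y = [], []
        for label in (0, 1):
            for _ in range(20):
                img = rng.random((64, 64)).astype(np.float32)
                if label == 0:
                    img = cv2.GaussianBlur(img, (11, 11), 3.0)  # simulate out-of-focus
                X.append(focus_features(img))
                y.append(label)
        clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
        print(clf.predict([focus_features(rng.random((64, 64)).astype(np.float32))]))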

  17. Improved depth estimation with the light field camera

    NASA Astrophysics Data System (ADS)

    Wang, Huachun; Sang, Xinzhu; Chen, Duo; Guo, Nan; Wang, Peng; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is available in a single capture. Lytro, Inc. also provides depth estimation from a single-shot capture with its light field cameras, such as the Lytro Illum. This Lytro depth estimate contains much correct depth information and can be used for higher-quality estimation. In this paper, we present a novel, simple and principled algorithm that computes dense depth estimation by combining defocus, correspondence and Lytro depth estimates. We analyze 2D epipolar images (EPIs) to get defocus and correspondence depth maps: defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance from the EPIs. Lytro depth can be extracted from the Lytro Illum with software. We then show how to combine the three cues into a high-quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction, as well as light field displays.
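
    The two EPI-based cues can be sketched on a toy 4D light field; the array shape and the single-shear simplification below are assumptions (the paper evaluates the cues over many shear values and then fuses them with the Lytro estimate):

        import numpy as np

        def defocus_and_correspondence_cues(lf):
            """Toy depth cues from a light field lf[u, v, y, x] at one shear.

            Defocus cue: spatial gradient magnitude after angular integration
            (strong where the refocused image is sharp). Correspondence cue:
            variance across the angular dimensions (small where the angular
            samples agree on the scene point)."""
            refocused = lf.mean(axis=(0, 1))   # integrate over the angular dims
            gy, gx = np.gradient(refocused)
            defocus = np.hypot(gx, gy)
            correspondence = lf.var(axis=(0, 1))
            return defocus, correspondence

        lf = np.random.rand(5, 5, 32, 32)  # 5x5 angular views of a 32x32 scene
        defocus, correspondence = defocus_and_correspondence_cues(lf)
        print(defocus.shape, correspondence.shape)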

  18. Time-of-Flight Microwave Camera

    PubMed Central

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-01-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths, however microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz–12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum. PMID:26434598

  19. Time-of-Flight Microwave Camera

    NASA Astrophysics Data System (ADS)

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths, however microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.

  20. Crystal surface analysis using matrix textural features classified by a probabilistic neural network

    NASA Astrophysics Data System (ADS)

    Sawyer, Curry R.; Quach, Viet; Nason, Donald; van den Berg, Lodewijk

    1991-12-01

    A system is under development in which surface quality of a growing bulk mercuric iodide crystal is monitored by video camera at regular intervals for early detection of growth irregularities. Mercuric iodide single crystals are employed in radiation detectors. A microcomputer system is used for image capture and processing. The digitized image is divided into multiple overlapping sub-images and features are extracted from each sub-image based on statistical measures of the gray tone distribution, according to the method of Haralick. Twenty parameters are derived from each sub-image and presented to a probabilistic neural network (PNN) for classification. This number of parameters was found to be optimal for the system. The PNN is a hierarchical, feed-forward network that can be rapidly reconfigured as additional training data become available. Training data is gathered by reviewing digital images of many crystals during their growth cycle and compiling two sets of images, those with and without irregularities.
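
    A minimal sketch of the Haralick-style texture extraction for one sub-image, using scikit-image's gray-level co-occurrence matrix (graycomatrix, named greycomatrix in older releases); the five properties and two angles below are illustrative choices, not the paper's exact set of twenty parameters:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def haralick_features(sub_image):
            """Gray-tone co-occurrence statistics for one 8-bit sub-image."""
            glcm = graycomatrix(sub_image, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            props = ["contrast", "homogeneity", "energy",
                     "correlation", "dissimilarity"]
            return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

        # One 32x32 sub-image of a digitized crystal surface image.
        sub = (np.random.rand(32, 32) * 255).astype(np.uint8)
        print(haralick_features(sub))  # 10 texture parameters (5 props x 2 angles)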

  1. High-Resolution Remote Sensing Image Building Extraction Based on Markov Model

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Yan, L.; Chang, Y.; Gong, L.

    2018-04-01

    As resolution increases, remote sensing images carry a greater information load, more noise, and more complex feature geometry and texture, which makes the extraction of building information more difficult. To solve this problem, this paper designs a high-resolution remote sensing image building extraction method based on a Markov model. The method introduces Contourlet-domain map clustering and a Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction from the building area down to individual buildings is realized. Experiments show that this method can restrain the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadows, vegetation and other pseudo-building information; compared with traditional pixel-level image information extraction, it performs better in building extraction precision, accuracy and completeness.

  2. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    PubMed

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-08

    Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphic processing unit is reported. The integral imaging based method enables exact hologram capture of real three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphic processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  3. Improving the image discontinuous problem by using color temperature mapping method

    NASA Astrophysics Data System (ADS)

    Jeng, Wei-De; Mang, Ou-Yang; Lai, Chien-Cheng; Wu, Hsien-Ming

    2011-09-01

    This article focuses on image processing for the radial imaging capsule endoscope (RICE). First, the RICE was used to capture images of a pig's intestine in an experimental setting; however, the captured images were blurred, because the RICE suffers from aberration problems at the image center, and low illumination uniformity further degrades image quality. Image processing can be used to mitigate these problems: images captured at different times are connected using the Pearson correlation coefficient algorithm, and color temperature mapping is applied to reduce the discontinuity in the connection regions.
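
    A minimal sketch of the correlation-based connection step, choosing the overlap between two frames that maximizes the Pearson correlation of their adjoining strips; the frame sizes and search range are illustrative assumptions:

        import numpy as np

        def best_overlap(prev, curr, max_shift=20):
            """Find the column overlap that maximizes the Pearson correlation
            between the trailing strip of the previous frame and the leading
            strip of the current one."""
            best, best_r = 0, -1.0
            for s in range(1, max_shift + 1):
                a = prev[:, -s:].ravel().astype(float)
                b = curr[:, :s].ravel().astype(float)
                r = np.corrcoef(a, b)[0, 1]
                if r > best_r:
                    best, best_r = s, r
            return best

        # Toy frames sharing a 12-column overlap.
        base = np.random.rand(64, 100)
        prev, curr = base[:, :60], base[:, 48:]
        print(best_overlap(prev, curr))  # 12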

  4. An electronic pan/tilt/zoom camera system

    NASA Technical Reports Server (NTRS)

    Zimmermann, Steve; Martin, H. Lee

    1991-01-01

    A camera system for omnidirectional image viewing applications that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts was developed. The imaging device is based on the principle that the distorted image from a fisheye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different image magnifications and pan-tilt-rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.

  5. Multiple-predators-based capture process on complex networks

    NASA Astrophysics Data System (ADS)

    Ramiz Sharafat, Rajput; Pu, Cunlai; Li, Jie; Chen, Rongbin; Xu, Zhongqi

    2017-03-01

    The predator/prey (capture) problem is a prototype of many network-related applications. We study the capture process on complex networks by considering multiple predators from multiple sources. In our model, some lions start from multiple sources simultaneously to capture the lamb by biased random walks, which are controlled with a free parameter $\alpha$. We derive the distribution of the lamb's lifetime and the expected lifetime $\langle T \rangle$. Through simulation, we find that the expected lifetime drops substantially as the number of lions increases. We also study how the underlying topological structure affects the capture process, and find that the lamb survives longer on small-degree nodes than on large-degree nodes. Moreover, dense or homogeneous network structures work against the survival of the lamb.
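
    A minimal simulation sketch under stated assumptions: lions perform degree-biased random walks (stepping to a neighbor with probability proportional to its degree raised to $\alpha$) until one lands on a stationary lamb; the graph model and all parameters are illustrative:

        import random
        import networkx as nx

        def capture_time(g, alpha, n_lions, seed=0):
            """Lamb's lifetime (in steps) under degree-biased lion walks."""
            rng = random.Random(seed)
            nodes = list(g.nodes)
            lamb = rng.choice(nodes)
            lions = [rng.choice(nodes) for _ in range(n_lions)]
            t = 0
            while lamb not in lions:
                t += 1
                for i, pos in enumerate(lions):
                    nbrs = list(g.neighbors(pos))
                    weights = [g.degree(n) ** alpha for n in nbrs]
                    lions[i] = rng.choices(nbrs, weights=weights)[0]
            return t

        g = nx.barabasi_albert_graph(200, 3, seed=1)
        # More lions should shorten the lamb's lifetime, as in the paper.
        print(capture_time(g, alpha=0.0, n_lions=1),
              capture_time(g, alpha=0.0, n_lions=5))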

  6. Wide-Field-of-View, High-Resolution, Stereoscopic Imager

    NASA Technical Reports Server (NTRS)

    Prechtl, Eric F.; Sedwick, Raymond J.

    2010-01-01

    A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image: the use of head-mounted displays is one likely implementation, and the use of 3D projection technologies is another option under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package for better mobility. Power-saving studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.

  7. An efficient dictionary learning algorithm and its application to 3-D medical image denoising.

    PubMed

    Li, Shutao; Fang, Leyuan; Yin, Haitao

    2012-02-01

    In this paper, we propose an efficient dictionary learning algorithm for sparse representation of given data and suggest a way to apply this algorithm to 3-D medical image denoising. Our learning approach is composed of two main parts: sparse coding and dictionary updating. At the sparse coding stage, an efficient algorithm named multiple clusters pursuit (MCP) is proposed. The MCP first applies a dictionary structuring strategy to cluster atoms with high coherence together, and then employs a multiple-selection strategy to select several competitive atoms at each iteration. These two strategies greatly reduce the computational complexity of the MCP and help it obtain a better sparse solution. At the dictionary updating stage, an alternating optimization that efficiently approximates the singular value decomposition is introduced. Furthermore, in the 3-D medical image denoising application, a joint 3-D operation is proposed that exploits the learning capability of the presented algorithm to simultaneously capture the correlations within each slice and the correlations across nearby slices, thereby obtaining better denoising results. Experiments on both synthetically generated data and real 3-D medical images demonstrate that the proposed approach has superior performance compared to some well-known methods.

  8. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

    A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in a 360-degree viewing zone. In order to do so, multiple depth cameras are utilized to acquire depth information around the object. The 3D point cloud representations of the real object are then reconstructed from the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model from the given anamorphic optic system's angular step. The theory has been verified experimentally, showing that the proposed 360-degree integral-floating display is an excellent way to display a real object in a 360-degree viewing zone.

  9. Optimising the application of multiple-capture traps for invasive species management using spatial simulation.

    PubMed

    Warburton, Bruce; Gormley, Andrew M

    2015-01-01

    Internationally, invasive vertebrate species pose a significant threat to biodiversity, agricultural production and human health. To manage these species a wide range of tools, including traps, is used. In New Zealand, brushtail possums (Trichosurus vulpecula), stoats (Mustela erminea), and ship rats (Rattus rattus) are invasive, and there is an ongoing demand for cost-effective non-toxic methods for controlling these pests. Recently, traps with multiple-capture capability have been developed which, because they do not require regular operator checking, are purported to be more cost-effective than traditional single-capture traps. However, when pest populations are being maintained at low densities (as is typical of orchestrated pest management programmes) it remains uncertain whether it is more cost-effective to use fewer multiple-capture traps or more single-capture traps. To address this uncertainty, we used an individual-based, spatially explicit modelling approach to determine the likely maximum animal captures per trap, given stated pest densities and defined times that traps are left between checks. In the simulation, single- or multiple-capture traps were spaced according to best-practice pest-control guidelines. For possums with maintenance densities set at the lowest level (i.e. 0.5/ha), 98% of all simulated possums were captured with only a single-capture trap set at each site. When possum density was increased to a moderate level of 3/ha, a capacity of three captures per trap caught 97% of all simulated possums. Results were similar for stoats, although only two potential captures per site were sufficient to capture 99% of simulated stoats. For rats, which were simulated at their typically higher densities, even a six-capture capacity per trap site resulted in only an 80% kill. Depending on the target species, prevailing density and extent of immigration, the most cost-effective strategy for pest control in New Zealand might be to deploy several single-capture traps rather than investing in fewer, but more expensive, multiple-capture traps.
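
    A toy sketch of the capacity question, not the authors' spatially explicit model: animals are placed uniformly at random, and each is caught if its nearest trap is within range and still has capture capacity left; all geometry and parameters are illustrative assumptions:

        import numpy as np

        def fraction_caught(density_per_ha, capacity, n_traps=25,
                            area_ha=100, trap_radius=100.0, seed=0):
            """Fraction of simulated animals caught by capacity-limited traps
            laid out on a regular grid over a square area."""
            rng = np.random.default_rng(seed)
            side = np.sqrt(area_ha * 10_000)                 # metres
            animals = rng.uniform(0, side,
                                  size=(int(density_per_ha * area_ha), 2))
            k = int(np.sqrt(n_traps))
            xs = np.linspace(side / (2 * k), side - side / (2 * k), k)
            traps = np.array([(x, y) for x in xs for y in xs])
            remaining = capacity * np.ones(len(traps), dtype=int)
            caught = 0
            for a in animals:
                d = np.linalg.norm(traps - a, axis=1)
                j = int(np.argmin(d))
                if d[j] < trap_radius and remaining[j] > 0:
                    remaining[j] -= 1
                    caught += 1
            return caught / len(animals)

        # Fraction caught at low and moderate densities, for capacities 1 and 3.
        print(fraction_caught(0.5, 1), fraction_caught(3.0, 1),
              fraction_caught(3.0, 3))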

  10. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    PubMed Central

    Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long

    2012-01-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung. PMID:23794749

  11. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  12. Light scattering and transmission measurement using digital imaging for online analysis of constituents in milk

    NASA Astrophysics Data System (ADS)

    Jain, Pranay; Sarma, Sanjay E.

    2015-05-01

    Milk is an emulsion of fat globules and casein micelles dispersed in an aqueous medium with dissolved lactose, whey proteins and minerals. Quantification of constituents in milk is important at various stages of the dairy supply chain for proper process control and quality assurance. In field-level applications, spectrophotometric analysis is an economical option due to the low cost of silicon photodetectors, which are sensitive to UV/Vis radiation with wavelengths between 300 and 1100 nm. Both absorption and scattering occur as incident UV/Vis radiation interacts with the dissolved and dispersed constituents in milk, and these effects can in turn be used to characterize the chemical and physical composition of a milk sample. However, in order to simplify analysis, most existing instruments require dilution of samples to avoid the effects of multiple scattering. The sample preparation steps are usually expensive, prone to human error and unsuitable for field-level and online analysis. This paper introduces a novel digital imaging based method of online spectrophotometric measurement on raw milk without any sample preparation. Multiple LEDs of different emission spectra are used as discrete light sources and a digital CMOS camera is used as an image sensor. The extinction characteristic of samples is derived from the captured images. The dependence of multiple scattering on the power of incident radiation is exploited to quantify scattering. The method has been validated with experiments on samples with varying fat concentrations and fat globule sizes. Despite the presence of multiple scattering, the method is able to unequivocally quantify the extinction of incident radiation and relate it to the fat concentrations and globule sizes of the samples.

  13. Image processing system design for microcantilever-based optical readout infrared arrays

    NASA Astrophysics Data System (ADS)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple production. In addition, theory shows that the technology offers high thermal detection sensitivity, so it has very broad application prospects in the field of high-performance infrared detection. This paper focuses on the image capturing and processing system of this new optical-readout uncooled infrared imaging technology. The image capturing and processing system consists of software and hardware. We build the core hardware platform for image processing on TI's high-performance DSP chip, the TMS320DM642, and design the image capturing board around the MT9P031, Micron's high-frame-rate, low-power CMOS sensor. Finally, we use Intel's network transceiver device, the LXT971A, to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design a video capture driver based on TI's class mini-driver model and a network output program based on the NDK kit, for image capturing, processing and transmission. Experiments show that the system achieves high capture resolution and fast processing speed, with network transmission speeds of up to 100 Mbps.

  14. Intelligent image capture of cartridge cases for firearms examiners

    NASA Astrophysics Data System (ADS)

    Jones, Brett C.; Guerci, Joseph R.

    1997-02-01

    The FBI's DRUGFIRE™ system is a nationwide computerized networked image database of ballistic forensic evidence. This evidence includes images of cartridge cases and bullets obtained from both crime scenes and controlled test firings of seized weapons. Currently, the system is installed in over 80 forensic labs across the country and has enjoyed a high degree of success. In this paper, we discuss some of the issues and methods associated with providing a front-end semi-automated image capture system that simultaneously satisfies the often conflicting criteria of the many human examiners' visual perception and those associated with optimizing autonomous digital image correlation. Specifically, we detail the proposed processing chain of an intelligent image capture system (IICS), involving a real-time capture 'assistant' that assesses the quality of the image under test utilizing a custom-designed neural network.

  15. Scalable Coding of Plenoptic Images by Using a Sparse Set and Disparities.

    PubMed

    Li, Yun; Sjostrom, Marten; Olsson, Roger; Jennehag, Ulf

    2016-01-01

    Focused plenoptic capture is one of the light field capturing techniques. By placing a microlens array in front of the photosensor, focused plenoptic cameras capture both spatial and angular information of a scene, within each microlens image and across microlens images. The capture results in a significant amount of redundant information, and the captured image is usually of large resolution. A coding scheme that removes the redundancy before coding can therefore be advantageous for efficient compression, transmission, and rendering. In this paper, we propose a lossy coding scheme to efficiently represent plenoptic images. The format contains a sparse image set and its associated disparities. Reconstruction is performed by disparity-based interpolation and inpainting, and the reconstructed image is later employed as a prediction reference for the coding of the full plenoptic image. As an outcome of this representation, the proposed scheme inherits a scalable structure with three layers. The results show that plenoptic images are compressed efficiently, with over 60 percent bit rate reduction compared with High Efficiency Video Coding (HEVC) intra coding, and over 20 percent compared with an HEVC block copying mode.

  16. Evaluation of High Dynamic Range Photography as a Luminance Mapping Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inanici, Mehlika; Galvin, Jim

    2004-12-30

    The potential, limitations, and applicability of the High Dynamic Range (HDR) photography technique are evaluated as a luminance mapping tool. Multiple-exposure photographs of static scenes are taken with a Nikon 5400 digital camera to capture the wide luminance variation within the scenes. The camera response function is computationally derived using the Photosphere software, and is used to fuse the multiple photographs into HDR images. The vignetting effect and point spread function of the camera and lens system are determined. Laboratory and field studies have shown that the pixel values in the HDR photographs can correspond to the physical quantity of luminance with reasonable precision and repeatability.
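
    The same pipeline, deriving a camera response function from bracketed exposures and fusing them into an HDR radiance map, can be sketched with OpenCV's Debevec implementation in place of Photosphere; the synthetic scene and exposure times below are placeholders for real photographs:

        import cv2
        import numpy as np

        # Synthetic bracketed sequence of one static scene (stand-in for photos).
        rng = np.random.default_rng(0)
        radiance = rng.uniform(0.02, 1.0, (120, 160, 3)).astype(np.float32)
        times = np.array([1 / 30, 1 / 8, 1 / 2], dtype=np.float32)
        imgs = [np.clip(radiance * t * 400, 0, 255).astype(np.uint8) for t in times]

        # Recover the camera response curve, then fuse the exposures into a
        # linear HDR radiance map (Debevec's method).
        response = cv2.createCalibrateDebevec().process(imgs, times)
        hdr = cv2.createMergeDebevec().process(imgs, times, response)

        # Pixel values of the linear HDR image are proportional to scene
        # luminance; Rec. 709 weights give a per-pixel relative luminance map.
        b, g, r = cv2.split(hdr)
        luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b
        print(luminance.shape, float(luminance.mean()))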

  17. Real-time hyperspectral fluorescence imaging of pancreatic β-cell dynamics with the image mapping spectrometer

    PubMed Central

    Elliott, Amicia D.; Gao, Liang; Ustione, Alessandro; Bedard, Noah; Kester, Robert; Piston, David W.; Tkaczyk, Tomasz S.

    2012-01-01

    The development of multi-colored fluorescent proteins, nanocrystals and organic fluorophores, along with the resulting engineered biosensors, has revolutionized the study of protein localization and dynamics in living cells. Hyperspectral imaging has proven to be a useful approach for such studies, but this technique is often limited by low signal and insufficient temporal resolution. Here, we present an implementation of a snapshot hyperspectral imaging device, the image mapping spectrometer (IMS), which acquires full spectral information simultaneously from each pixel in the field without scanning. The IMS is capable of real-time signal capture from multiple fluorophores with high collection efficiency (∼65%) and image acquisition rate (up to 7.2 fps). To demonstrate the capabilities of the IMS in cellular applications, we have combined fluorescent protein (FP)-FRET and [Ca2+]i biosensors to measure simultaneously intracellular cAMP and [Ca2+]i signaling in pancreatic β-cells. Additionally, we have compared quantitatively the IMS detection efficiency with a laser-scanning confocal microscope. PMID:22854044

  18. Neutron Capture Experiments Using the DANCE Array at Los Alamos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dashdorj, D.; MonAme Scientific Research Center, Ulaanbaatar; Mitchell, G. E.

    2009-03-31

    The Detector for Advanced Neutron Capture Experiments (DANCE) is designed for neutron capture measurements on very small and/or radioactive targets. The DANCE array of 160 BaF2 scintillation detectors is located at the Lujan Center at the Los Alamos Neutron Science Center (LANSCE). Accurate measurements of neutron capture data are important for many current applications as well as for a basic understanding of neutron capture. The gamma rays following neutron capture reactions have been studied by the time-of-flight technique using the DANCE array. The high granularity of the array allows measurements of the gamma-ray multiplicity. The gamma-ray multiplicities and energy spectra for different multiplicities can be measured and analyzed for spin and parity determination of the resolved resonances.

  19. Automated Adaptive Brightness in Wireless Capsule Endoscopy Using Image Segmentation and Sigmoid Function.

    PubMed

    Shrestha, Ravi; Mohammed, Shahed K; Hasan, Md Mehedi; Zhang, Xuechao; Wahid, Khan A

    2016-08-01

    Wireless capsule endoscopy (WCE) plays an important role in the diagnosis of gastrointestinal (GI) diseases by capturing images of the human small intestine. Accurate diagnosis of endoscopic images depends heavily on the quality of the captured images. Along with image resolution and frame rate, the brightness of the image is an important parameter that influences image quality, which motivates the design of an efficient illumination system. Such a design involves the choice and placement of a proper light source and its ability to illuminate the GI surface with proper brightness. Light-emitting diodes (LEDs) are normally used as sources, with modulated pulses used to control the LEDs' brightness. In practice, instances of under- and over-illumination are very common in WCE, where the former produces dark images and the latter produces bright images with high power consumption. In this paper, we propose a low-power and efficient illumination system that is based on an automated brightness algorithm. The scheme is adaptive in nature, i.e., the brightness level is controlled automatically in real time while the images are being captured. Each captured image is segmented into four equal regions and the brightness level of each region is calculated. An adaptive sigmoid function is then used to find the optimized brightness level, and accordingly a new duty cycle of the modulated pulse is generated to capture future images. The algorithm is fully implemented in a capsule prototype and tested with endoscopic images. Commercial capsules like Pillcam and Mirocam were also used in the experiment. The results show that the proposed algorithm works well in controlling the brightness level according to the environmental conditions, and as a result, good-quality images are captured at an average brightness level of 40%, which saves the capsule's power consumption.
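
    A minimal sketch of this control loop, with the four-region segmentation and a sigmoid mapping from brightness error to LED duty cycle; the target level, gain, and quadrant split are illustrative assumptions rather than the paper's tuned values:

        import numpy as np

        def next_duty_cycle(image, target=0.40, gain=8.0, max_duty=1.0):
            """Map a captured frame's brightness to the LED duty cycle for the
            next frame: average the mean brightness of four equal regions,
            then pass the error through a sigmoid for a smooth response."""
            h, w = image.shape[:2]
            regions = [image[:h // 2, :w // 2], image[:h // 2, w // 2:],
                       image[h // 2:, :w // 2], image[h // 2:, w // 2:]]
            brightness = np.mean([r.mean() for r in regions]) / 255.0
            error = target - brightness   # >0: too dark, raise the duty cycle
            return max_duty / (1.0 + np.exp(-gain * error))

        dark = np.full((240, 240), 30, dtype=np.uint8)
        bright = np.full((240, 240), 220, dtype=np.uint8)
        print(next_duty_cycle(dark), next_duty_cycle(bright))  # high vs. low duty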

  20. Multiobject relative fuzzy connectedness and its implications in image segmentation

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Saha, Punam K.

    2001-07-01

    The notion of fuzzy connectedness captures the idea of hanging-togetherness of image elements in an object by assigning a strength of connectedness to every possible path between every possible pair of image elements. This concept leads to powerful image segmentation algorithms based on dynamic programming whose effectiveness has been demonstrated on thousands of images in a variety of applications. In a previous framework, we introduced the notion of relative fuzzy connectedness for separating a foreground object from a background object. In that framework, an image element c is considered to belong to whichever of the two objects with respect to whose reference image element c has the higher strength of connectedness. In fuzzy connectedness, a local fuzzy relation called affinity is used on the image domain; for theoretical reasons, this relation was required to be of fixed form in the previous framework. In the present paper, we generalize relative connectedness to multiple objects, allowing all objects (of importance) to compete among themselves to grab membership of image elements based on their relative strength of connectedness to reference elements. We also allow affinity to be tailored to the individual objects. We present a theoretical and algorithmic framework and demonstrate that the objects defined are independent of the reference elements chosen as long as they are not in the fuzzy boundary between objects. Examples from medical imaging are presented to illustrate visually the effectiveness of multiple-object relative fuzzy connectedness. A quantitative evaluation based on 160 mathematical phantom images demonstrates objectively the effectiveness of relative fuzzy connectedness with an object-tailored affinity relation.

  1. 3D-Holoscopic Imaging: A New Dimension to Enhance Imaging in Minimally Invasive Therapy in Urologic Oncology

    PubMed Central

    Aggoun, Amar; Swash, Mohammad; Grange, Philippe C.R.; Challacombe, Benjamin; Dasgupta, Prokar

    2013-01-01

    Background and Purpose: Existing imaging modalities of urologic pathology are limited by three-dimensional (3D) representation on a two-dimensional screen. We present 3D-holoscopic imaging as a novel method of representing Digital Imaging and Communications in Medicine (DICOM) data images taken from CT and MRI to produce 3D-holographic representations of anatomy without special eyewear in natural light. 3D-holoscopic technology produces images that are true optical models. This technology is based on physical principles with duplication of light fields. The 3D content is captured in real time, and the content is viewed by multiple viewers independently of their position, without 3D eyewear. Methods: We display 3D-holoscopic anatomy relevant to minimally invasive urologic surgery without the need for 3D eyewear. Results: The results demonstrate that medical 3D-holoscopic content can be displayed on a commercially available multiview auto-stereoscopic display. Conclusion: The next step is validation studies comparing 3D-holoscopic imaging with conventional imaging. PMID:23216303

  2. Real-time look-up table-based color correction for still image stabilization of digital cameras without using frame memory

    NASA Astrophysics Data System (ADS)

    Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha

    2012-09-01

    Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation generates an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they generally do not run in real time and need at least one frame memory when implemented in hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images in hardware without using frame memory. The method utilizes histogram matching of two preview images, exposed for a long and a short time, respectively, to construct an improved look-up table (ILUT), and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, this method does not require frame memory to buffer image data, and can therefore greatly reduce the cost of the CIS. The method supports not only single image capture but also bracketing, capturing three images at a time. The proposed method was implemented in a hardware description language and verified on a field-programmable gate array with a 5 M CIS. Simulations show that the system performs in real time at low cost and corrects the color of under-exposed images well.
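
    The look-up table construction can be sketched as classical histogram (CDF) matching between the short- and long-exposure previews; the synthetic previews and the plain 256-entry LUT below are illustrative assumptions, not the authors' exact ILUT:

        import numpy as np

        def build_lut(short_exp, long_exp):
            """256-entry LUT that matches the histogram of the short-exposure
            preview to that of the long-exposure preview (CDF matching)."""
            cdf_s = (np.cumsum(np.bincount(short_exp.ravel(), minlength=256))
                     / short_exp.size)
            cdf_l = (np.cumsum(np.bincount(long_exp.ravel(), minlength=256))
                     / long_exp.size)
            # For each short-exposure level, pick the long-exposure level with
            # the nearest cumulative probability.
            return np.searchsorted(cdf_l, cdf_s).clip(0, 255).astype(np.uint8)

        rng = np.random.default_rng(0)
        scene = rng.integers(0, 256, (120, 160)).astype(np.float32)
        long_exp = scene.astype(np.uint8)
        short_exp = (scene * 0.25).astype(np.uint8)      # under-exposed preview
        lut = build_lut(short_exp, long_exp)
        corrected = lut[short_exp]                       # per-pixel, no frame buffer
        print(abs(corrected.mean() - long_exp.mean()) < 5)  # brightness restored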

  3. A novel tracing method for the segmentation of cell wall networks.

    PubMed

    De Vylder, Jonas; Rooms, Filip; Dhondt, Stijn; Inze, Dirk; Philips, Wilfried

    2013-01-01

    Cell wall networks are a common subject of research in biology and are important for plant growth analysis, organ studies, etc. In order to automate the detection of individual cells in such cell wall networks, we propose a new segmentation algorithm. The proposed method is a network tracing algorithm that exploits prior knowledge of the network structure. The method is applicable to multiple microscopy modalities, such as fluorescence, as well as to images captured using non-invasive microscopes such as differential interference contrast (DIC) microscopes.

  4. Early melanoma diagnosis with mobile imaging.

    PubMed

    Do, Thanh-Toan; Zhou, Yiren; Zheng, Haitian; Cheung, Ngai-Man; Koh, Dawn

    2014-01-01

    We research a mobile imaging system for early diagnosis of melanoma. Different from previous work, we focus on smartphone-captured images, and propose a detection system that runs entirely on the smartphone. Smartphone-captured images taken under loosely-controlled conditions introduce new challenges for melanoma detection, while processing performed on the smartphone is subject to computation and memory constraints. To address these challenges, we propose to localize the skin lesion by combining fast skin detection and fusion of two fast segmentation results. We propose new features to capture color variation and border irregularity which are useful for smartphone-captured images. We also propose a new feature selection criterion to select a small set of good features used in the final lightweight system. Our evaluation confirms the effectiveness of proposed algorithms and features. In addition, we present our system prototype which computes selected visual features from a user-captured skin lesion image, and analyzes them to estimate the likelihood of malignance, all on an off-the-shelf smartphone.

  5. Development of multiple-eye PIV using mirror array

    NASA Astrophysics Data System (ADS)

    Maekawa, Akiyoshi; Sakakibara, Jun

    2018-06-01

    In order to reduce particle image velocimetry measurement error, we manufactured an ellipsoidal polyhedral mirror and placed it between a camera and the flow target to capture n images of identical particles from n (=80 maximum) different directions. The 3D particle positions were determined from the ensemble average of the nC2 intersection points of pairs of lines of sight back-projected from a particle found in any combination of two of the n images. The method was then applied to a rigid-body rotating flow and a turbulent pipe flow. In the former measurement, bias error and random error fell in ranges of ±0.02 pixels and 0.02-0.05 pixels, respectively, and the random error decreased as n increased. In the latter measurement, in which the measured values were compared to direct numerical simulation, the bias error was reduced and the random error again decreased as n increased.
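
    A minimal sketch of the pairwise-intersection estimate: for each pair of back-projected rays, take the midpoint of their shortest connecting segment (the pseudo-intersection), then average over all nC2 pairs; the toy camera geometry and noise level are illustrative:

        import numpy as np

        def pseudo_intersection(p1, d1, p2, d2):
            """Midpoint of the shortest segment between two lines of sight,
            each given by a point p and a direction d."""
            n = np.cross(d1, d2)
            # Solve t0*d1 - t1*d2 + t2*n = p2 - p1 for the closest points.
            t = np.linalg.solve(np.column_stack([d1, -d2, n]), p2 - p1)
            return (p1 + t[0] * d1 + p2 + t[1] * d2) / 2.0

        def particle_position(points, dirs):
            """Ensemble average of the pseudo-intersections of all ray pairs."""
            n = len(points)
            pts = [pseudo_intersection(points[i], dirs[i], points[j], dirs[j])
                   for i in range(n) for j in range(i + 1, n)]
            return np.mean(pts, axis=0)

        # Three noisy rays converging near the true particle position (1, 2, 3).
        target = np.array([1.0, 2.0, 3.0])
        origins = [np.array(o, float) for o in ([0, 0, 0], [5, 0, 0], [0, 5, 0])]
        dirs = [(target - o) / np.linalg.norm(target - o) + 1e-3 for o in origins]
        print(particle_position(origins, dirs))  # close to [1, 2, 3]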

  6. Fast Metabolic Response to Drug Intervention through Analysis on a Miniaturized, Highly Integrated Molecular Imaging System

    PubMed Central

    Wang, Jun; Hwang, Kiwook; Braas, Daniel; Dooraghi, Alex; Nathanson, David; Campbell, Dean O.; Gu, Yuchao; Sandberg, Troy; Mischel, Paul; Radu, Caius; Chatziioannou, Arion F.; Phelps, Michael E.; Christofk, Heather; Heath, James R.

    2014-01-01

    We report on a radiopharmaceutical imaging platform designed to capture the kinetics of cellular responses to drugs. Methods: A portable in vitro molecular imaging system, comprised of a microchip and a beta-particle imaging camera, permits routine cell-based radioassays on small numbers of either suspension or adherent cells. We investigate the response kinetics of model lymphoma and glioblastoma cancer cell lines to [18F]fluorodeoxyglucose ([18F]FDG) uptake following drug exposure. Those responses are correlated with kinetic changes in the cell cycle, or with changes in receptor-tyrosine kinase signaling. Results: The platform enables radioassays directly on multiple cell types, and yields results comparable to conventional approaches, but uses smaller sample sizes, permits a higher level of quantitation, and does not require cell lysis. Conclusion: The kinetic analysis enabled by the platform provides a rapid (~1 hour) drug screening assay. PMID:23978446

  7. Correlations of diffusion tensor imaging values and symptom scores in patients with schizophrenia.

    PubMed

    Michael, Andrew M; Calhoun, Vince D; Pearlson, Godfrey D; Baum, Stefi A; Caprihan, Arvind

    2008-01-01

    Abnormalities in white matter (WM) brain regions are regarded as a possible biomarker for schizophrenia (SZ). Diffusion tensor imaging (DTI) is used to capture WM tracts, and psychometric tests that evaluate symptom severity are used clinically in the diagnostic process. In this study we investigate the correlates of scalar DTI measures, such as fractional anisotropy, mean diffusivity, axial diffusivity, and radial diffusivity, with behavioral test scores. Correlations were computed by two schemes: mean correlation with WM atlas regions and multiple regression of DTI values against test scores. The corpus callosum, the right superior longitudinal fasciculus, and the left inferior longitudinal fasciculus were found to have high correlations with test scores.

  8. Simultaneous multiple view high resolution surface geometry acquisition using structured light and mirrors.

    PubMed

    Basevi, Hector R A; Guggenheim, James A; Dehghani, Hamid; Styles, Iain B

    2013-03-25

    Knowledge of the surface geometry of an imaging subject is important in many applications. This information can be obtained via a number of different techniques, including time-of-flight imaging, photogrammetry, and fringe projection profilometry. Existing systems may restrict instrument geometry, require expensive optics, or require moving parts in order to image the full surface of the subject. An inexpensive, generalised fringe projection profilometry system is proposed that can account for arbitrarily placed components and uses mirrors to expand the field of view. It simultaneously acquires multiple views of an imaging subject, producing a cloud of points that lie on its surface, which can then be processed to form a three-dimensional model. A prototype of this system was integrated into an existing diffuse optical tomography and bioluminescence tomography small-animal imaging system and used to image objects including a mouse-shaped plastic phantom, a mouse cadaver, and a coin. A surface mesh generated from surface-capture data of the mouse-shaped plastic phantom was compared with ideal surface points provided by the phantom manufacturer: 50%, 82%, and 96% of points were found to lie within 0.1 mm, 0.2 mm, and 0.4 mm of the surface mesh, respectively.
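
    The accuracy figures quoted above amount to a point-to-surface coverage statistic. A minimal sketch follows, using the nearest captured point (via SciPy's KD-tree) as a stand-in for the true point-to-mesh distance; distances in mm are assumed.

```python
import numpy as np
from scipy.spatial import cKDTree

def coverage(captured_points, reference_points, thresholds=(0.1, 0.2, 0.4)):
    """Fraction of reference points within each distance (mm) of the capture."""
    dists, _ = cKDTree(captured_points).query(reference_points)
    return {t: float(np.mean(dists <= t)) for t in thresholds}
```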

  9. Digging into the corona: A modeling framework trained with Sun-grazing comet observations

    NASA Astrophysics Data System (ADS)

    Jia, Y. D.; Pesnell, W. D.; Bryans, P.; Downs, C.; Liu, W.; Schwartz, S. J.

    2017-12-01

    Images of comets diving into the low corona have been captured a few times in the past decade. Structures visible at various wavelengths during these encounters indicate strong variation in the ambient conditions of the corona. We combine three numerical models (a global coronal model, a particle transportation model, and a cometary plasma interaction model) into one framework to model the interaction of such Sun-grazing comets with plasma in the low corona. In our framework, cometary vapors are ionized via multiple channels and then captured by the coronal magnetic field. Within seconds, these ions are further ionized to their highest charge state, which is revealed by certain coronal emission lines. Constrained by observations, we apply our framework to infer the local conditions of the ambient corona and their spatial and temporal variation over a broad range of scales. Once trained on multiple stages of the comet's journey through the low corona, we illustrate how this framework can leverage these unique observations to probe the structure of the solar corona and solar wind.

  10. Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.

    PubMed

    Inchang Choi; Seung-Hwan Baek; Kim, Min H

    2017-11-01

    For extending the dynamic range of video, it is common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to fast and complex motion in natural scenes. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range, but the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving the two specific problems that arise in interlaced video imaging with different exposures: deinterlacing and denoising. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial detail information in differently exposed rows is often available via interlacing, we use this information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and we adopt multiscale homography flow over temporal sequences for denoising. We anticipate that the proposed method will allow concurrent capture of higher-dynamic-range video frames without ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with state-of-the-art high-dynamic-range video methods.

  11. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J

    Iris recognition is among the most accurate biometrics. However, its accuracy relies on controlled, high-quality capture conditions and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach uses only the iris and pupil boundary segmentation, allowing it to be applied to all iris capture hardware. We compare the boundaries with a look-up table generated using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on results from real images, the proposed method achieves accurate gaze estimation for our biometric eye model, with an average error of approximately 3.5 degrees over a 50-degree range.
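
    The look-up-table search described above can be sketched as a nearest-neighbor query over precomputed boundary features. The feature choice (iris/pupil ellipse parameters) and uniform scaling are assumptions of this sketch, not details of the ANONYMIZED eye model.

```python
import numpy as np
from scipy.spatial import cKDTree

class GazeLUT:
    """Nearest-neighbor gaze lookup over model-rendered boundary features."""
    def __init__(self, features, gaze_angles):
        # features: (N, k) ellipse descriptors rendered from the eye model
        # gaze_angles: (N, 2) corresponding (azimuth, elevation) in degrees
        self.tree = cKDTree(np.asarray(features))
        self.gaze = np.asarray(gaze_angles)

    def estimate(self, query_features):
        _, idx = self.tree.query(query_features)
        return self.gaze[idx]
```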

  12. The effect of multispectral image fusion enhancement on human efficiency.

    PubMed

    Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M

    2017-01-01

    The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, where imagery captured in varying spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output to strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea of fused imagery always producing a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency data were shown to fluctuate based on sensor combination instead of fusion algorithm, suggesting the need for examining multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, provides not only a standard for testing fusion in direct relation to the visual system but also allows for comparable examination of fusion across its associated problem space of application.
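
    Two of the classical fusion baselines named above (averaging and PCA-weighted fusion) are easy to state for a pair of registered single-band images; the sketch below assumes float arrays of identical shape and is not the study's test harness.

```python
import numpy as np

def fuse_average(a, b):
    """Pixelwise mean of two registered band images."""
    return 0.5 * (a + b)

def fuse_pca(a, b):
    """Weight each band by the leading eigenvector of their covariance."""
    cov = np.cov(np.stack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    w = np.abs(vecs[:, np.argmax(vals)])
    return (w[0] * a + w[1] * b) / w.sum()
```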

  13. A Novel, Real-Time, In Vivo Mouse Retinal Imaging System.

    PubMed

    Butler, Mark C; Sullivan, Jack M

    2015-11-01

    To develop an efficient, low-cost instrument for robust real-time imaging of the mouse retina in vivo, and to assess system capabilities by evaluating various animal models. Following multiple disappointing attempts to visualize the mouse retina during subretinal injection using commercially available systems, we identified the key limitation to be inadequate illumination, due to off-axis illumination and poor optimization of the optical train. We therefore designed a paraxial illumination system for a Greenough-type stereo dissecting microscope, incorporating an optimized optical launch and an efficiently coupled fiber-optic delivery system. Excitation and emission filters control spectral bandwidth. A color charge-coupled device (CCD) camera is coupled to the microscope for image capture. Although the field of view (FOV) is constrained by the small pupil aperture, the high optical power of the mouse eye, and the long working distance needed for surgical manipulations, these limitations can be compensated for by eye positioning, which allows the entire retina to be observed. The retinal imaging system delivers an adjustable narrow beam to the dilated pupil with minimal vignetting. The optic nerve, vasculature, and posterior pole are crisply visualized, and the entire retina can be observed through eye positioning. Normal and degenerative retinal phenotypes can be followed over time, and subretinal or intraocular injection procedures are followed in real time. Real-time intravenous fluorescein angiography in the live mouse has been achieved. A novel device is established for real-time viewing and image capture of the small-animal retina during subretinal injections for preclinical gene therapy studies.

  14. A method for the automated processing and analysis of images of ULVWF-platelet strings.

    PubMed

    Reeve, Scott R; Abbitt, Katherine B; Cruise, Thomas D; Hose, D Rodney; Lawford, Patricia V

    2013-01-01

    We present a method for identifying and analysing unusually large von Willebrand factor (ULVWF)-platelet strings in noisy low-quality images. The method requires relatively inexpensive, non-specialist equipment and allows multiple users to be employed in the capture of images. Images are subsequently enhanced and analysed, using custom-written software to perform the processing tasks. The formation and properties of ULVWF-platelet strings released in in vitro flow-based assays have recently become a popular research area. Endothelial cells are incorporated into a flow chamber, chemically stimulated to induce ULVWF release and perfused with isolated platelets which are able to bind to the ULVWF to form strings. The numbers and lengths of the strings released are related to characteristics of the flow. ULVWF-platelet strings are routinely identified by eye from video recordings captured during experiments and analysed manually using basic NIH image software to determine the number of strings and their lengths. This is a laborious, time-consuming task and a single experiment, often consisting of data from four to six dishes of endothelial cells, can take 2 or more days to analyse. The method described here allows analysis of the strings to provide data such as the number and length of strings, number of platelets per string and the distance between each platelet to be found. The software reduces analysis time, and more importantly removes user subjectivity, producing highly reproducible results with an error of less than 2% when compared with detailed manual analysis.
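
    In the same spirit as the custom software described above, string detection and length measurement can be sketched with scikit-image; the Otsu threshold and skeleton-based length below are illustrative assumptions, not the authors' processing pipeline.

```python
import numpy as np
from skimage import filters, measure, morphology

def analyze_strings(frame, min_area=20):
    """Segment candidate strings and report crude per-string measurements."""
    mask = frame > filters.threshold_otsu(frame)
    mask = morphology.remove_small_objects(mask, min_size=min_area)
    labels = measure.label(mask)
    results = []
    for region in measure.regionprops(labels):
        skeleton = morphology.skeletonize(labels == region.label)
        results.append({"length_px": int(skeleton.sum()),  # crude length
                        "area_px": int(region.area)})
    return results
```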

  15. A hybrid approach for fusing 4D-MRI temporal information with 3D-CT for the study of lung and lung tumor motion.

    PubMed

    Yang, Y X; Teo, S-K; Van Reeth, E; Tan, C H; Tham, I W K; Poh, C L

    2015-08-01

    Accurate visualization of lung motion is important in many clinical applications, such as radiotherapy of lung cancer. Advancement in imaging modalities [e.g., computed tomography (CT) and MRI] has allowed dynamic imaging of lung and lung tumor motion. However, each imaging modality has its advantages and disadvantages. The study presented in this paper aims at generating synthetic 4D-CT dataset for lung cancer patients by combining both continuous three-dimensional (3D) motion captured by 4D-MRI and the high spatial resolution captured by CT using the authors' proposed approach. A novel hybrid approach based on deformable image registration (DIR) and finite element method simulation was developed to fuse a static 3D-CT volume (acquired under breath-hold) and the 3D motion information extracted from 4D-MRI dataset, creating a synthetic 4D-CT dataset. The study focuses on imaging of lung and lung tumor. Comparing the synthetic 4D-CT dataset with the acquired 4D-CT dataset of six lung cancer patients based on 420 landmarks, accurate results (average error <2 mm) were achieved using the authors' proposed approach. Their hybrid approach achieved a 40% error reduction (based on landmarks assessment) over using only DIR techniques. The synthetic 4D-CT dataset generated has high spatial resolution, has excellent lung details, and is able to show movement of lung and lung tumor over multiple breathing cycles.

  16. Application of side-oblique image-motion blur correction to Kuaizhou-1 agile optical images.

    PubMed

    Sun, Tao; Long, Hui; Liu, Bao-Cheng; Li, Ying

    2016-03-21

    Given the recent development of agile optical satellites for rapid-response land observation, side-oblique image-motion (SOIM) detection and blur correction have become increasingly essential for improving the radiometric quality of side-oblique images. The Chinese small-scale agile mapping satellite Kuaizhou-1 (KZ-1) was developed by the Harbin Institute of Technology and launched for multiple emergency applications. Like other agile satellites, KZ-1 suffers from SOIM blur, particularly in captured images with large side-oblique angles. SOIM detection and blur correction are critical for improving the image radiometric accuracy. This study proposes a SOIM restoration method based on segmental point spread function detection. The segment region width is determined by satellite parameters such as speed, height, integration time, and side-oblique angle. The corresponding algorithms and a matrix form are proposed for SOIM blur correction. Radiometric objective evaluation indices are used to assess the restoration quality. Beijing regional images from KZ-1 are used as experimental data. The radiometric quality is found to increase greatly after SOIM correction. Thus, the proposed method effectively corrects image motion for KZ-1 agile optical satellites.

  17. Parallel Wavefront Analysis for a 4D Interferometer

    NASA Technical Reports Server (NTRS)

    Rao, Shanti R.

    2011-01-01

    This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology and for distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to capture several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement to a file on disk. The available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time and faster, by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data into usable form. In characterizing an adaptive optics system, such as the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. This programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing, minimizing idle time of the interferometer. The architecture is implemented in Python and JavaScript and may be altered to fit a customer's needs.
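
    Since the architecture is implemented in Python, the capture/analysis split can be sketched with a local worker pool standing in for the networked 4Sight cluster; process_frame below is a hypothetical placeholder for the per-image phase computation, not 4D Technology's algorithm.

```python
from multiprocessing import Pool
import numpy as np

def process_frame(frame):
    """Hypothetical placeholder for the per-image wavefront computation."""
    return np.abs(np.fft.fft2(frame)).mean()

def analyze_capture(frames, workers=8):
    with Pool(workers) as pool:
        results = pool.map(process_frame, frames)  # one frame per worker
    return float(np.mean(results))                 # collate/combine step

if __name__ == "__main__":
    frames = [np.random.rand(256, 256) for _ in range(100)]
    print(analyze_capture(frames))
```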

  18. Multi-image acquisition-based distance sensor using agile laser spot beam.

    PubMed

    Riza, Nabeel A; Amin, M Junaid

    2014-09-01

    We present a novel laser-based distance measurement technique that uses multiple-image-based spatial processing to enable distance measurements. Compared with the first-generation distance sensor using spatial processing, the modified sensor is no longer hindered by the classic Rayleigh axial resolution limit for the propagating laser beam at its minimum beam-waist location. The proposed high-resolution distance sensor uses an electronically controlled variable-focus lens (ECVFL) in combination with an optical imaging device, such as a charge-coupled device (CCD), to produce and capture laser spot images on a target, with spot sizes that differ from the minimum spot size possible at that target distance. By exploiting the unique relationship between the target-located spot sizes and the varying ECVFL focal length at each target distance, the proposed sensor can compute the target distance with a resolution better than the axial resolution given by the Rayleigh criterion. Using a 30 mW, 633 nm He-Ne laser coupled with an electromagnetically actuated liquid ECVFL, a 20 cm focal length bias lens, and five spot images captured per target position by a CCD-based Nikon camera, a proof-of-concept sensor is successfully implemented in the laboratory over target ranges from 10 to 100 cm with a demonstrated sub-cm axial resolution, better than the axial Rayleigh resolution limit at these target distances. Applications for this potentially cost-effective distance sensor are diverse and include industrial inspection and measurement and 3D object shape mapping and imaging.
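
    The core measurement (spot size versus ECVFL setting) can be sketched as below. The D4-sigma width estimate and the least-squares match against precomputed candidate curves are assumptions of this sketch, not the authors' exact processing.

```python
import numpy as np

def spot_diameter(image):
    """4-sigma beam width from intensity second moments (D4sigma)."""
    y, x = np.indices(image.shape)
    w = image / image.sum()
    cx, cy = (w * x).sum(), (w * y).sum()
    var = (w * ((x - cx) ** 2 + (y - cy) ** 2)).sum()  # x + y variance
    return 4.0 * np.sqrt(var / 2.0)                    # mean per-axis sigma

def best_distance(measured_diameters, candidate_curves):
    """candidate_curves: {distance_cm: predicted diameter per ECVFL setting}."""
    errs = {d: np.sum((np.asarray(pred) - measured_diameters) ** 2)
            for d, pred in candidate_curves.items()}
    return min(errs, key=errs.get)
```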

  19. Comparison and evaluation of datasets for off-angle iris recognition

    NASA Astrophysics Data System (ADS)

    Kurtuncu, Osman M.; Cerme, Gamze N.; Karakaya, Mahmut

    2016-05-01

    In this paper, we investigated publicly available iris recognition datasets and their data capture procedures to determine whether they are suitable for stand-off iris recognition research. The majority of iris recognition datasets include only frontal iris images. Even when a dataset includes off-angle iris images, the frontal and off-angle images are not captured at the same time. Comparison of frontal and off-angle iris images shows not only differences in gaze angle but also changes in pupil dilation and accommodation. To isolate the effect of gaze angle from other challenging issues, including dilation and accommodation, the frontal and off-angle iris images should be captured at the same time by two different cameras. We therefore developed an iris image acquisition platform using two cameras, where one camera captures a frontal iris image and the other captures iris images from off-angle. Based on the comparison of Hamming distances between frontal and off-angle iris images captured with the two-camera setup and the one-camera setup, we observed that the Hamming distance in the two-camera setup is less than in the one-camera setup, by 0.001 to 0.05. These results show that, for accurate results in off-angle iris recognition research, a two-camera setup is necessary to distinguish the challenging issues from each other.
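
    The comparison metric used above is the standard fractional Hamming distance between binary iris codes; a minimal sketch over boolean NumPy arrays, with occlusion masks, follows (the bit layout is assumed).

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits over the jointly unmasked region."""
    valid = mask_a & mask_b
    disagree = (code_a ^ code_b) & valid
    return disagree.sum() / valid.sum()
```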

  20. What Hansel and Gretel’s Trail Teach Us about Knowledge Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wayne Simpson; Troy Hiltbrand

    Background: At Idaho National Laboratory (INL), we are on the cusp of a significant era of change. INL is the lead Department of Energy nuclear research and development laboratory, focused on finding innovative solutions to the nation's energy challenges. Not only has the Laboratory grown at an unprecedented rate over the last five years, but a significant segment of its workforce is also ready for retirement. Over the next 10 years, it is anticipated that upwards of 60% of the current workforce at INL will be eligible for retirement. Since the Laboratory is highly dependent on the intellectual capabilities of its scientists and engineers and their efforts to ensure the future of the nation's energy portfolio, this attrition has the potential to seriously impact the Laboratory's ability to sustain itself and the growth it has achieved in past years. Similar to Germany in the early nineteenth century, we face the challenge of our self-identity and must find a way to solidify our legacy to propel us into the future.

    Approach: As the Brothers Grimm set out to collect their fairy tales, they focused on gathering information from the people most knowledgeable in the subject. For them, it was the peasants, with their rich knowledge of the region's subculture of folklore passed down from generation to generation around the evening fire. As we look to capture this tacit knowledge, it is requisite that we also seek this information from those individuals most versed in it. In our case, it is the scientists and researchers who have dedicated their lives to providing the nation with nuclear energy. This information comes in many forms, both digital and non-digital, and some of it still resides in the minds of scientists and researchers who are close to retirement or have already retired. Once the information has been collected, it has to be sorted through to identify where the "shining stones" can be found. The quantity of this information makes it improbable for an individual or set of individuals to sort through it and pick out the ideas that are most important. To accomplish both information capture and classification, modern advancements in technology give us the tools we need to successfully capture this tacit knowledge. To assist in this process, we have evaluated multiple tools and methods that will help us unlock the power of tacit knowledge.

    Tools: The first challenge that stands in the way of success is the capture of information. More than 50 years of nuclear research is captured in log books, microfiche, and other non-digital formats. Transforming this information from its current form into a format that can "shine" requires a number of different tools, which fall into three major categories: information capture, content retrieval, and information classification. The first step is to capture the information from a myriad of sources. With knowledge existing in multiple formats, this step requires multiple approaches to be successful. Sources that require consideration include handwritten documents, typed documents, microfiche, images, audio and video feeds, and electronic images. Making this step feasible for a large body of knowledge requires automation.

  1. Fusion of multichannel local and global structural cues for photo aesthetics evaluation.

    PubMed

    Luming Zhang; Yue Gao; Zimmermann, Roger; Qi Tian; Xuelong Li

    2014-03-01

    Photo aesthetic quality evaluation is a fundamental yet under-addressed task in the computer vision and image processing fields. Conventional approaches suffer from two drawbacks. First, both the local and global spatial arrangements of image regions play an important role in photo aesthetics, but existing rules, e.g., visual balance, only heuristically define which spatial distribution among the salient regions of a photo is aesthetically pleasing. Second, it is difficult to automatically adjust visual cues from multiple channels in photo aesthetics assessment. To solve these problems, we propose a new photo aesthetics evaluation framework focused on learning image descriptors that characterize local and global structural aesthetics from multiple visual channels. In particular, to describe the spatial structure of local image regions, we construct graphlets (small connected graphs) by connecting spatially adjacent atomic regions. Since spatially adjacent graphlets lie close together in feature space, we project them onto a manifold and propose an embedding algorithm that encodes the photo's global spatial layout into the graphlets. Simultaneously, the importance of graphlets from multiple visual channels is dynamically adjusted. Finally, the post-embedding graphlets are integrated for photo aesthetics evaluation using a probabilistic model. Experimental results show that: 1) the visualized graphlets explicitly capture the aesthetically arranged atomic regions; 2) the proposed approach generalizes and improves four prominent aesthetic rules; and 3) our approach significantly outperforms state-of-the-art algorithms in photo aesthetics prediction.

  2. Multiphoton fluorescence lifetime imaging of chemotherapy distribution in solid tumors

    NASA Astrophysics Data System (ADS)

    Carlson, Marjorie; Watson, Adrienne L.; Anderson, Leah; Largaespada, David A.; Provenzano, Paolo P.

    2017-11-01

    Doxorubicin is a commonly used chemotherapeutic employed to treat multiple human cancers, including numerous sarcomas and carcinomas. Furthermore, doxorubicin possesses strong fluorescent properties that make it an ideal reagent for modeling drug delivery by examining its distribution in cells and tissues. However, while doxorubicin fluorescence and lifetime have been imaged in live tissue, its behavior in archival samples that frequently result from drug and treatment studies in human and animal patients, and murine models of human cancer, has to date been largely unexplored. Here, we demonstrate imaging of doxorubicin intensity and lifetimes in archival formalin-fixed paraffin-embedded sections from mouse models of human cancer with multiphoton excitation and multiphoton fluorescence lifetime imaging microscopy (FLIM). Multiphoton excitation imaging reveals robust doxorubicin emission in tissue sections and captures spatial heterogeneity in cells and tissues. However, quantifying the amount of doxorubicin signal in distinct cell compartments, particularly the nucleus, often remains challenging due to strong signals in multiple compartments. The addition of FLIM analysis to display the spatial distribution of excited state lifetimes clearly distinguishes between signals in distinct compartments such as the cell nuclei versus cytoplasm and allows for quantification of doxorubicin signal in each compartment. Furthermore, we observed a shift in lifetime values in the nuclei of transformed cells versus nontransformed cells, suggesting a possible diagnostic role for doxorubicin lifetime imaging to distinguish normal versus transformed cells. Thus, data here demonstrate that multiphoton FLIM is a highly sensitive platform for imaging doxorubicin distribution in normal and diseased archival tissues.

  3. Visualizing Ebolavirus Particles Using Single-Particle Interferometric Reflectance Imaging Sensor (SP-IRIS).

    PubMed

    Carter, Erik P; Seymour, Elif Ç; Scherr, Steven M; Daaboul, George G; Freedman, David S; Selim Ünlü, M; Connor, John H

    2017-01-01

    This chapter describes an approach for the label-free imaging and quantification of intact Ebola virus (EBOV) and EBOV viruslike particles (VLPs) using a light microscopy technique. In this technique, individual virus particles are captured onto a silicon chip that has been printed with spots of virus-specific capture antibodies. These captured virions are then detected using an optical approach called interference reflectance imaging. This approach allows for the detection of each virus particle that is captured on an antibody spot and can resolve the filamentous structure of EBOV VLPs without the need for electron microscopy. Capture of VLPs and virions can be done from a variety of sample types ranging from tissue culture medium to blood. The technique also allows automated quantitative analysis of the number of virions captured. This can be used to identify the virus concentration in an unknown sample. In addition, this technique offers the opportunity to easily image virions captured from native solutions without the need for additional labeling approaches while offering a means of assessing the range of particle sizes and morphologies in a quantitative manner.

  4. Brain tissues atrophy is not always the best structural biomarker of physiological aging: A multimodal cross-sectional study.

    PubMed

    Cherubini, Andrea; Caligiuri, Maria Eugenia; Péran, Patrice; Sabatini, Umberto; Cosentino, Carlo; Amato, Francesco

    2015-01-01

    This study presents a voxel-based multiple regression analysis of different magnetic resonance image modalities, including anatomical T1-weighted, T2* relaxometry, and diffusion tensor imaging. Quantitative parameters sensitive to complementary brain tissue alterations, including morphometric atrophy, mineralization, microstructural damage, and anisotropy loss, were compared in a linear physiological aging model in 140 healthy subjects (range 20-74 years). The performance of different predictors and the identification of the best biomarker of age-induced structural variation were compared without a priori anatomical knowledge. The best quantitative predictors in several brain regions were iron deposition and microstructural damage, rather than macroscopic tissue atrophy. Age variations were best resolved with a combination of markers, suggesting that multiple predictors better capture age-induced tissue alterations. These findings highlight the importance of a combined evaluation of multimodal biomarkers for the study of aging and point to a number of novel applications for the method described.
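
    The voxel-wise multiple regression underlying this analysis can be sketched with scikit-learn; the data layout and the use of R-squared as the comparison score are assumptions of this sketch, not the study's exact statistical pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def voxelwise_age_fit(predictors, ages):
    """predictors: (n_subjects, n_voxels, n_modalities); ages: (n_subjects,).
    Returns the per-voxel R^2 of the linear age model."""
    n_sub, n_vox, n_mod = predictors.shape
    r2 = np.empty(n_vox)
    for v in range(n_vox):
        model = LinearRegression().fit(predictors[:, v, :], ages)
        r2[v] = model.score(predictors[:, v, :], ages)
    return r2
```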

  5. Single-Pol Synthetic Aperture Radar Terrain Classification using Multiclass Confidence for One-Class Classifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Mark William; Steinbach, Ryan Matthew; Moya, Mary M

    2015-10-01

    Except in the most extreme conditions, synthetic aperture radar (SAR) is a remote sensing technology that can operate day or night. A SAR can provide surveillance over a long time period by making multiple passes over a wide area. For object-based intelligence, it is convenient to segment and classify the SAR images into objects that identify various terrains and man-made structures, which we call "static features." In this paper we introduce a novel SAR image product that captures how different regions decorrelate at different rates. Using superpixels and their first two moments, we develop a series of one-class classification algorithms based on a goodness-of-fit metric. P-value fusion is used to combine the results from different classes. We also show how to combine multiple one-class classifiers to obtain a confidence for a classification, which can be used by downstream algorithms, such as a conditional random field, to enforce spatial constraints.

  6. Importance of Multimodal MRI in Characterizing Brain Tissue and Its Potential Application for Individual Age Prediction.

    PubMed

    Cherubini, Andrea; Caligiuri, Maria Eugenia; Peran, Patrice; Sabatini, Umberto; Cosentino, Carlo; Amato, Francesco

    2016-09-01

    This study presents a voxel-based multiple regression analysis of different magnetic resonance image modalities, including anatomical T1-weighted, T2(*) relaxometry, and diffusion tensor imaging. Quantitative parameters sensitive to complementary brain tissue alterations, including morphometric atrophy, mineralization, microstructural damage, and anisotropy loss, were compared in a linear physiological aging model in 140 healthy subjects (range 20-74 years). The performance of different predictors and the identification of the best biomarker of age-induced structural variation were compared without a priori anatomical knowledge. The best quantitative predictors in several brain regions were iron deposition and microstructural damage, rather than macroscopic tissue atrophy. Age variations were best resolved with a combination of markers, suggesting that multiple predictors better capture age-induced tissue alterations. The results of the linear model were used to predict apparent age in different regions of individual brain. This approach pointed to a number of novel applications that could potentially help highlighting areas particularly vulnerable to disease.

  7. Multi-National Banknote Classification Based on Visible-light Line Sensor and Convolutional Neural Network.

    PubMed

    Pham, Tuyen Danh; Lee, Dong Eun; Park, Kang Ryoung

    2017-07-08

    Automatic recognition of banknotes is applied in payment facilities, such as automated teller machines (ATMs) and banknote counters. Besides the popular approaches that focus on studying the methods applied to various individual types of currencies, there have been studies conducted on simultaneous classification of banknotes from multiple countries. However, their methods were conducted with limited numbers of banknote images, national currencies, and denominations. To address this issue, we propose a multi-national banknote classification method based on visible-light banknote images captured by a one-dimensional line sensor and classified by a convolutional neural network (CNN) considering the size information of each denomination. Experiments conducted on the combined banknote image database of six countries with 62 denominations gave a classification accuracy of 100%, and results show that our proposed algorithm outperforms previous methods.
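
    A CNN of the general kind described, with the per-denomination size information appended before the classifier head, can be sketched in Keras; the layer sizes, input shape, and fusion point are assumptions, not the authors' published architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(n_classes, img_shape=(64, 128, 1)):
    img = layers.Input(shape=img_shape)          # line-sensor banknote image
    size = layers.Input(shape=(2,))              # banknote (width, height)
    x = layers.Conv2D(16, 3, activation="relu")(img)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Concatenate()([x, size])          # fuse size information
    out = layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model([img, size], out)
```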

  8. Multi-National Banknote Classification Based on Visible-light Line Sensor and Convolutional Neural Network

    PubMed Central

    Pham, Tuyen Danh; Lee, Dong Eun; Park, Kang Ryoung

    2017-01-01

    Automatic recognition of banknotes is applied in payment facilities, such as automated teller machines (ATMs) and banknote counters. Besides the popular approaches that focus on studying the methods applied to various individual types of currencies, there have been studies conducted on simultaneous classification of banknotes from multiple countries. However, their methods were conducted with limited numbers of banknote images, national currencies, and denominations. To address this issue, we propose a multi-national banknote classification method based on visible-light banknote images captured by a one-dimensional line sensor and classified by a convolutional neural network (CNN) considering the size information of each denomination. Experiments conducted on the combined banknote image database of six countries with 62 denominations gave a classification accuracy of 100%, and results show that our proposed algorithm outperforms previous methods. PMID:28698466

  9. Fluorescence endoscopy using fiber speckle illumination

    NASA Astrophysics Data System (ADS)

    Nakano, Shuhei; Katagiri, Takashi; Matsuura, Yuji

    2018-02-01

    An endoscopic fluorescence imaging system based on fiber speckle illumination is proposed. In this system, a multimode fiber for transmission of the excitation laser light and collection of fluorescence is inserted into a conventional flexible endoscope. Since the excitation laser light has a random speckle structure, a fluorescence signal corresponding to the irradiation pattern can be detected if the sample contains fluorophores. The irradiation pattern can be captured by the endoscope camera when the excitation wavelength is within the sensitivity range of the camera. By performing multiple measurements while changing the irradiation pattern, a fluorescence image is reconstructed by solving a norm-minimization problem. The principle of the method was demonstrated experimentally: a 2048-pixel image of quantum dots coated on frosted glass was successfully reconstructed from 32 measurements. We also confirmed that the method can be applied to biological tissues.
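
    The reconstruction step amounts to compressive sensing: each measurement pairs a known speckle pattern (a row of the sensing matrix) with one fluorescence reading. A minimal sketch using a sparsity-regularized solver (Lasso, as a stand-in for the paper's norm-minimization) follows.

```python
import numpy as np
from sklearn.linear_model import Lasso

def reconstruct(speckle_patterns, readings, alpha=1e-3):
    """speckle_patterns: (m, n_pixels) flattened irradiation patterns;
    readings: (m,) fluorescence signal per pattern."""
    A = np.asarray(speckle_patterns, dtype=float)
    y = np.asarray(readings, dtype=float)
    solver = Lasso(alpha=alpha, positive=True, max_iter=5000)
    solver.fit(A, y)
    return solver.coef_        # reconstructed image, flattened
```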

  10. Multidepth imaging by chromatic dispersion confocal microscopy

    NASA Astrophysics Data System (ADS)

    Olsovsky, Cory A.; Shelton, Ryan L.; Saldua, Meagan A.; Carrasco-Zevallos, Oscar; Applegate, Brian E.; Maitland, Kristen C.

    2012-03-01

    Confocal microscopy has shown potential as an imaging technique to detect precancer. Imaging cellular features throughout the depth of epithelial tissue may provide useful information for diagnosis. However, current in vivo axial scanning techniques for confocal microscopy are cumbersome, time-consuming, and restrictive when attempting to reconstruct volumetric images acquired in breathing patients. Chromatic dispersion confocal microscopy (CDCM) exploits severe longitudinal chromatic aberration in the system to axially disperse light from a broadband source and, ultimately, spectrally encode high-resolution images along the depth of the object. Hyperchromat lenses are designed to have severe, linear longitudinal chromatic aberration but had not previously been used in confocal microscopy. We use a hyperchromat lens in a stage-scanning confocal microscope to demonstrate the capability to simultaneously capture information at multiple depths without mechanical scanning. A photonic crystal fiber pumped with an 830 nm Ti:Sapphire laser was used as a supercontinuum source, and a spectrometer was used as the detector. The chromatic aberration and magnification of the system give a focal shift of 140 μm after the objective lens and an axial resolution of 5.2-7.6 μm over the wavelength range from 585 nm to 830 nm. A 400 × 400 × 140 μm³ volume of pig cheek epithelium was imaged in a single X-Y scan. Nuclei can be seen at several depths within the epithelium. The capability of this technique to achieve simultaneous high-resolution confocal imaging at multiple depths may reduce imaging time and motion artifacts and enable volumetric reconstruction of in vivo confocal images of the epithelium.

  11. Instant replay.

    PubMed

    Rosenthal, David I

    2013-06-01

    With widespread adoption of electronic health records (EHRs) and electronic clinical documentation, health care organizations now have greater facility to review clinical data and evaluate the efficacy of quality improvement efforts. Unfortunately, I believe there is a fundamental gap between actual health care delivery and what we document in current EHR systems. This process of capturing the patient encounter, which I'll refer to as transcription, is prone to significant data loss due to inadequate methods of data capture, multiple points of view, and bias and subjectivity in the transcriptional process. Our current EHR text-based clinical documentation systems are lossy abstractions: one-sided accounts of what takes place between patients and providers. Our clinical notes contain the breadcrumbs of relationships, conversations, physical exams, and procedures, but often lack the ability to capture the form, the emotions, the images, the nonverbal communication, and the actual narrative of interactions between human beings. I believe that a video record, in conjunction with objective transcription services and other forms of data capture, may provide a closer approximation to the truth of health care delivery and may be a valuable tool for health care improvement.

  12. A large-scale solar dynamics observatory image dataset for computer vision applications.

    PubMed

    Kucuk, Ahmet; Banda, Juan M; Angryk, Rafal A

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Solar Dynamics Observatory (SDO) mission has given us unprecedented insight into the Sun's activity. By capturing approximately 70,000 images a day, this mission has created one of the richest and largest repositories of solar image data available to mankind. With such massive amounts of information, researchers have been able to produce great advances in detecting solar events. In this resource, we compile SDO solar data into a single repository to provide the computer vision community with a standardized and curated large-scale dataset of several hundred thousand solar events found in high-resolution solar images. This publicly available resource, along with the generation source code, will accelerate computer vision research on NASA's solar image data by reducing the time spent on data acquisition and curation from the multiple sources we have compiled. By improving the quality of the data through thorough curation, we anticipate wider adoption and interest from both the computer vision and solar physics communities.

  13. A system architecture for sharing de-identified, research-ready brain scans and health information across clinical imaging centers.

    PubMed

    Chervenak, Ann L; van Erp, Theo G M; Kesselman, Carl; D'Arcy, Mike; Sobell, Janet; Keator, David; Dahm, Lisa; Murry, Jim; Law, Meng; Hasso, Anton; Ames, Joseph; Macciardi, Fabio; Potkin, Steven G

    2012-01-01

    Progress in our understanding of brain disorders increasingly relies on the costly collection of large standardized brain magnetic resonance imaging (MRI) data sets. Moreover, the clinical interpretation of brain scans benefits from compare and contrast analyses of scans from patients with similar, and sometimes rare, demographic, diagnostic, and treatment status. A solution to both needs is to acquire standardized, research-ready clinical brain scans and to build the information technology infrastructure to share such scans, along with other pertinent information, across hospitals. This paper describes the design, deployment, and operation of a federated imaging system that captures and shares standardized, de-identified clinical brain images in a federation across multiple institutions. In addition to describing innovative aspects of the system architecture and our initial testing of the deployed infrastructure, we also describe the Standardized Imaging Protocol (SIP) developed for the project and our interactions with the Institutional Review Board (IRB) regarding handling patient data in the federated environment.

  14. A System Architecture for Sharing De-Identified, Research-Ready Brain Scans and Health Information Across Clinical Imaging Centers

    PubMed Central

    Chervenak, Ann L.; van Erp, Theo G.M.; Kesselman, Carl; D’Arcy, Mike; Sobell, Janet; Keator, David; Dahm, Lisa; Murry, Jim; Law, Meng; Hasso, Anton; Ames, Joseph; Macciardi, Fabio; Potkin, Steven G.

    2015-01-01

    Progress in our understanding of brain disorders increasingly relies on the costly collection of large standardized brain magnetic resonance imaging (MRI) data sets. Moreover, the clinical interpretation of brain scans benefits from compare and contrast analyses of scans from patients with similar, and sometimes rare, demographic, diagnostic, and treatment status. A solution to both needs is to acquire standardized, research-ready clinical brain scans and to build the information technology infrastructure to share such scans, along with other pertinent information, across hospitals. This paper describes the design, deployment, and operation of a federated imaging system that captures and shares standardized, de-identified clinical brain images in a federation across multiple institutions. In addition to describing innovative aspects of the system architecture and our initial testing of the deployed infrastructure, we also describe the Standardized Imaging Protocol (SIP) developed for the project and our interactions with the Institutional Review Board (IRB) regarding handling patient data in the federated environment. PMID:22941984

  15. Boundary segmentation for fluorescence microscopy using steerable filters

    NASA Astrophysics Data System (ADS)

    Ho, David Joon; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2017-02-01

    Fluorescence microscopy is used to image multiple subcellular structures in living cells that are not readily observed using conventional optical microscopy, and two-photon microscopy is widely used to image structures deeper in tissue. Recent advancements in fluorescence microscopy have enabled the generation of large data sets of images at different depths, times, and spectral channels. Automatic object segmentation is therefore necessary, since manual segmentation would be inefficient and biased. However, automatic segmentation remains challenging, as regions of interest may lack well-defined boundaries and may have non-uniform pixel intensities. This paper describes a method for segmenting tubular structures in fluorescence microscopy images of rat kidney and liver samples using adaptive histogram equalization, foreground/background segmentation, steerable filters to capture directional tendencies, and connected-component analysis. Results from several data sets demonstrate that our method can segment tubular boundaries successfully and that it outperforms other popular image segmentation methods when evaluated against ground truth obtained via manual segmentation.

  16. Quantitative Imaging with a Mobile Phone Microscope

    PubMed Central

    Skandarajah, Arunan; Reber, Clay D.; Switz, Neil A.; Fletcher, Daniel A.

    2014-01-01

    Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone–based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072

  17. High-resolution, high-throughput imaging with a multibeam scanning electron microscope

    PubMed Central

    EBERLE, AL; MIKULA, S; SCHALEK, R; LICHTMAN, J; TATE, ML KNOTHE; ZEIDLER, D

    2015-01-01

    Electron–electron interactions and detector bandwidth limit the maximal imaging speed of single-beam scanning electron microscopes. We use multiple electron beams in a single column and detect secondary electrons in parallel to increase the imaging speed by close to two orders of magnitude, and we demonstrate imaging for a variety of samples ranging from biological brain tissue to semiconductor wafers.

    Lay Description: The composition of our world and our bodies on the very small scale has always fascinated people, making them search for ways to make it visible to the human eye. Where light microscopes reach their resolution limit at a certain magnification, electron microscopes can go beyond. But their capability of visualizing extremely small features comes at the cost of a very small field of view. Some of the questions researchers seek to answer today deal with the ultrafine structure of brains, bones, or computer chips. Capturing these objects with electron microscopes takes either a very long time, perhaps exceeding a human life span, or new tools that do the job much faster. A new type of scanning electron microscope scans with 61 electron beams in parallel, acquiring 61 adjacent images of the sample in the time a conventional scanning electron microscope captures one such image. In principle, the multibeam scanning electron microscope's field of view is 61 times larger, so coverage of the sample surface can be accomplished in less time. This enables researchers to consider large-scale projects, for example in the rather new field of connectomics. A very good introduction to imaging a brain at nanometre resolution can be found in course material from Harvard University at http://www.mcb80x.org/#, as featured media entitled 'connectomics'. PMID:25627873

  18. NASA SOFIA Captures Images of the Planetary Nebula M2-9

    NASA Image and Video Library

    2012-03-29

    Researchers using NASA's Stratospheric Observatory for Infrared Astronomy (SOFIA) have captured infrared images of the last exhalations of a dying Sun-like star. This image is of the planetary nebula M2-9.

  19. Transform- and multi-domain deep learning for single-frame rapid autofocusing in whole slide imaging.

    PubMed

    Jiang, Shaowei; Liao, Jun; Bian, Zichao; Guo, Kaikai; Zhang, Yongbing; Zheng, Guoan

    2018-04-01

    A whole slide imaging (WSI) system has recently been approved for primary diagnostic use in the US. The image quality and system throughput of WSI are largely determined by the autofocusing process. Traditional approaches acquire multiple images along the optical axis and maximize a figure of merit for autofocusing. Here we explore the use of deep convolutional neural networks (CNNs) to predict the focal position of the acquired image without axial scanning. We investigate autofocusing performance with three illumination settings: incoherent Köhler illumination, partially coherent illumination with two plane waves, and one-plane-wave illumination. We acquire ~130,000 images with different defocus distances as the training data set. Different defocus distances lead to different spatial features in the captured images; however, relying solely on spatial information yields relatively poor autofocusing performance. It is better to extract defocus features from transform domains of the acquired image. For incoherent illumination, the Fourier cutoff frequency is directly related to the defocus distance; similarly, autocorrelation peaks are directly related to the defocus distance for two-plane-wave illumination. In our implementation, we use the spatial image, the Fourier spectrum, the autocorrelation of the spatial image, and combinations thereof as inputs for the CNNs. We show that information from the transform domains improves the performance and robustness of the autofocusing process. The resulting focusing error is ~0.5 µm, which is within the 0.8-µm depth-of-field range. The reported approach requires little hardware modification for conventional WSI systems, and images can be captured on the fly without focus-map surveying. It may find applications in WSI and time-lapse microscopy. The transform- and multi-domain approaches may also provide new insights for developing microscopy-related deep-learning networks. We have made our training and testing data set (~12 GB) open source for the broad research community.
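
    Building the transform-domain inputs described above is straightforward; the sketch below stacks the spatial patch, its log Fourier magnitude, and its autocorrelation (via the Wiener-Khinchin relation) into a three-channel CNN input. The normalization scheme is an assumption of this sketch.

```python
import numpy as np

def autofocus_channels(patch):
    """Return an H x W x 3 stack: spatial, Fourier magnitude, autocorrelation."""
    f = np.fft.fftshift(np.fft.fft2(patch))
    spectrum = np.log1p(np.abs(f))
    autocorr = np.fft.fftshift(np.abs(np.fft.ifft2(np.abs(f) ** 2)))
    chans = [np.asarray(patch, dtype=float), spectrum, autocorr]
    chans = [(c - c.mean()) / (c.std() + 1e-8) for c in chans]
    return np.stack(chans, axis=-1)
```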

  20. High-resolution ophthalmic imaging system

    DOEpatents

    Olivier, Scot S.; Carrano, Carmen J.

    2007-12-04

    A system for providing an improved resolution retina image comprising an imaging camera for capturing a retina image and a computer system operatively connected to the imaging camera, the computer producing short exposures of the retina image and providing speckle processing of the short exposures to provide the improved resolution retina image. The system comprises the steps of capturing a retina image, producing short exposures of the retina image, and speckle processing the short exposures of the retina image to provide the improved resolution retina image.

  1. ASTER Captures New Image of Pakistan Flooding

    NASA Image and Video Library

    2010-08-20

    NASA's Terra spacecraft captured this cloud-free image of the city of Sukkur, Pakistan, on Aug. 18, 2010. Sukkur, located in Pakistan's southeastern Sindh Province, is visible as the grey, urbanized area in the lower left center of the image.

  2. A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks

    NASA Astrophysics Data System (ADS)

    Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei

    2018-01-01

    Image signals acquired by a wireless visual sensor network can be used to capture specific events. Event capture is realized by image processing at the sink node, and a distributed compressive sensing scheme is used to transmit the image signals from the camera nodes to the sink node. A measurement scheme and a joint reconstruction algorithm for these image signals are proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster-head node, acting as the image decoder, can accurately co-reconstruct the image signals. Subjective visual quality and the reconstruction error rate are used to evaluate the quality of the reconstructed images. Simulation results show that the joint reconstruction algorithm achieves higher image quality than the independent reconstruction algorithm at the same compression rate.

  3. D3: A Collaborative Infrastructure for Aerospace Design

    NASA Technical Reports Server (NTRS)

    Walton, Joan; Filman, Robert E.; Knight, Chris; Korsmeyer, David J.; Lee, Diana D.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    DARWIN is a NASA-developed, Internet-based system for enabling aerospace researchers to securely and remotely access and collaborate on the analysis of aerospace vehicle design data, primarily the results of wind-tunnel testing and numeric (e.g., computational fluid dynamics) model executions. DARWIN captures, stores, and indexes data, manages derived knowledge (such as visualizations across multiple data sets), and provides an environment for designers to collaborate in the analysis of test results. DARWIN is an interesting application because it supports high volumes of data, integrates multiple modalities of data display (e.g., images and data visualizations), and provides non-trivial access-control mechanisms. It enables collaboration by allowing users to share not only visualizations of data but also commentary about, and views of, the data.

  4. Video repairing under variable illumination using cyclic motions.

    PubMed

    Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung

    2006-05-01

    This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or to cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We tested our system on difficult examples with variable illumination, where the capturing camera can be stationary or in motion.

  5. Expansion of the visual angle of a car rear-view image via an image mosaic algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Zhuangwen; Zhu, Liangrong; Sun, Xincheng

    2015-05-01

    The rear-view image system is one of the active safety devices in cars and is widely applied in all types of vehicles and traffic safety areas. However, previous studies by both domestic and foreign researchers were based on a single image-capture device used while reversing, so a blind area still remained for drivers. Even when multiple cameras were used to expand the visual angle of the car's rear-view image, the blind area remained because the different source images were not mosaicked together. To acquire an expanded visual angle of a car rear-view image, two charge-coupled device cameras with optical axes angled at 30 deg were mounted below the left and right fenders of a car in three light conditions (sunny outdoors, cloudy outdoors, and an underground garage) to capture rear-view heterologous images of the car. These rear-view heterologous images were rapidly registered with the scale invariant feature transform (SIFT) algorithm; combined with the random sample consensus (RANSAC) algorithm, the two heterologous images were finally mosaicked using the linear weighted gradated in-and-out fusion algorithm, yielding a seamless, visual-angle-expanded rear-view image. The four-index test results showed that the algorithms mosaic rear-view images well even in the underground garage condition, where the average rate of correct matching was the lowest of the three conditions. Compared to the mean value method (MVM) and the segmental fusion method (SFM), the presented mosaic algorithm had the shortest computation time, performed better in real time, and preserved the image detail of the source images most completely. The method introduced in this paper provides the basis for research on expanding the visual angle of a car rear-view image in all-weather conditions.
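
    A minimal sketch of the registration-and-blending pipeline named above (SIFT matching, RANSAC homography estimation, and linear weighted in-and-out blending) is shown below, using OpenCV. It assumes two overlapping 3-channel images and illustrates the general technique, not the authors' implementation.

    ```python
    import cv2
    import numpy as np

    def mosaic(left, right):
        """Register two rear-view images with SIFT + RANSAC and blend the
        overlap with linear (gradated in-and-out) weights. Illustrative sketch;
        assumes 3-channel BGR inputs with sufficient overlap."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(left, None)
        k2, d2 = sift.detectAndCompute(right, None)
        # Ratio-test matching of SIFT descriptors
        matches = cv2.BFMatcher().knnMatch(d2, d1, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust estimation
        h, w = left.shape[:2]
        canvas = cv2.warpPerspective(right, H, (w * 2, h))
        # Linear weights ramp from 1 (left image) to 0 across the overlap band
        overlap = canvas[:, :w] > 0
        alpha = np.tile(np.linspace(1, 0, w, dtype=np.float32), (h, 1))[..., None]
        blend = left * alpha + canvas[:, :w] * (1 - alpha)
        canvas[:, :w] = np.where(overlap, blend.astype(canvas.dtype), left)
        return canvas
    ```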

  6. Transformation optics with windows

    NASA Astrophysics Data System (ADS)

    Oxburgh, Stephen; White, Chris D.; Antoniou, Georgios; Orife, Ejovbokoghene; Courtial, Johannes

    2014-09-01

    Identity certification in the cyberworld has always been troublesome when critical information and financial transactions must be processed. Biometric identification is the most effective measure to circumvent identity issues on mobile devices. Due to their bulky and pricey optical designs, conventional optical fingerprint readers have been discarded for mobile applications. In this paper, a digital variable-focus liquid lens was adopted to capture a floating finger via fast focus-plane scanning. Simply placing a finger in front of a camera fulfills the fingerprint ID process. This prototyped fingerprint reader scans multiple focal planes from 30 mm to 15 mm in 0.2 seconds. From the multiple images at various focuses, one image is chosen for extraction of the fingerprint minutiae used for identity certification. In the optical design, a digital liquid lens atop a webcam with a fixed-focus lens module fast-scans a floating finger at preset focus planes. The distance, rolling angle, and pitching angle of the finger are stored as crucial parameters for the matching of fingerprint minutiae. This innovative compact touchless fingerprint reader could be packed into a minute 9.8 × 9.8 × 5 mm volume after the optical design and the multiple focus-plane scan function are optimized.

  7. Ensemble Clustering using Semidefinite Programming with Applications

    PubMed Central

    Singh, Vikas; Mukherjee, Lopamudra; Peng, Jiming; Xu, Jinhui

    2011-01-01

    In this paper, we study the ensemble clustering problem, where the input is in the form of multiple clustering solutions. The goal of ensemble clustering algorithms is to aggregate the solutions into one solution that maximizes the agreement in the input ensemble. We obtain several new results for this problem. Specifically, we show that the notion of agreement under such circumstances can be better captured using a 2D string encoding rather than a voting strategy, which is common among existing approaches. Our optimization proceeds by first constructing a non-linear objective function which is then transformed into a 0–1 Semidefinite program (SDP) using novel convexification techniques. This model can be subsequently relaxed to a polynomial time solvable SDP. In addition to the theoretical contributions, our experimental results on standard machine learning and synthetic datasets show that this approach leads to improvements not only in terms of the proposed agreement measure but also the existing agreement measures based on voting strategies. In addition, we identify several new application scenarios for this problem. These include combining multiple image segmentations and generating tissue maps from multiple-channel Diffusion Tensor brain images to identify the underlying structure of the brain. PMID:21927539

  8. Ensemble Clustering using Semidefinite Programming with Applications.

    PubMed

    Singh, Vikas; Mukherjee, Lopamudra; Peng, Jiming; Xu, Jinhui

    2010-05-01

    In this paper, we study the ensemble clustering problem, where the input is in the form of multiple clustering solutions. The goal of ensemble clustering algorithms is to aggregate the solutions into one solution that maximizes the agreement in the input ensemble. We obtain several new results for this problem. Specifically, we show that the notion of agreement under such circumstances can be better captured using a 2D string encoding rather than a voting strategy, which is common among existing approaches. Our optimization proceeds by first constructing a non-linear objective function which is then transformed into a 0-1 Semidefinite program (SDP) using novel convexification techniques. This model can be subsequently relaxed to a polynomial time solvable SDP. In addition to the theoretical contributions, our experimental results on standard machine learning and synthetic datasets show that this approach leads to improvements not only in terms of the proposed agreement measure but also the existing agreement measures based on voting strategies. In addition, we identify several new application scenarios for this problem. These include combining multiple image segmentations and generating tissue maps from multiple-channel Diffusion Tensor brain images to identify the underlying structure of the brain.
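
    For contrast with the paper's SDP formulation, the sketch below shows a common, simpler baseline for aggregating an ensemble: build the co-association (agreement) matrix and cluster it hierarchically. This is plainly not the 0-1 SDP method of the paper; it only illustrates the input/output shape of the ensemble clustering problem.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def coassociation_ensemble(labelings, k):
        """Aggregate multiple clusterings via their co-association matrix
        (a common baseline; the paper instead solves an SDP relaxation)."""
        labelings = np.asarray(labelings)      # shape: (n_solutions, n_points)
        n = labelings.shape[1]
        C = np.zeros((n, n))
        for lab in labelings:                  # fraction of solutions that agree
            C += (lab[:, None] == lab[None, :])
        C /= len(labelings)
        D = 1.0 - C                            # disagreement as a distance
        np.fill_diagonal(D, 0.0)
        Z = linkage(squareform(D, checks=False), method="average")
        return fcluster(Z, t=k, criterion="maxclust")

    ens = [[0, 0, 1, 1, 2, 2], [0, 0, 0, 1, 1, 1], [1, 1, 2, 2, 0, 0]]
    print(coassociation_ensemble(ens, k=3))
    ```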

  9. Geometric rectification of camera-captured document images.

    PubMed

    Liang, Jian; DeMenthon, Daniel; Doermann, David

    2008-04-01

    Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.
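
    For the planar special case, the frontal-flat restoration reduces to a single homography; the sketch below shows this reduction with OpenCV, assuming the four page corners are known. The paper's contribution (estimating 3D shape from texture flow, handling curved pages, no prior calibration) goes well beyond this illustration.

    ```python
    import cv2
    import numpy as np

    def rectify_planar_document(image, corners, out_w=850, out_h=1100):
        """Warp a camera-captured planar page to a frontal-flat view.
        `corners`: the four page corners (TL, TR, BR, BL) in image coordinates.
        Illustrative planar special case; output size is an assumed page shape."""
        src = np.float32(corners)
        dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
        H = cv2.getPerspectiveTransform(src, dst)  # 4-point homography
        return cv2.warpPerspective(image, H, (out_w, out_h))
    ```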

  10. A 3D photographic capsule endoscope system with full field of view

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng

    2013-09-01

    Current capsule endoscope uses one camera to capture the surface image in the intestine. It can only observe the abnormal point, but cannot know the exact information of this abnormal point. Using two cameras can generate 3D images, but the visual plane changes while capsule endoscope rotates. It causes that two cameras can't capture the images information completely. To solve this question, this research provides a new kind of capsule endoscope to capture 3D images, which is 'A 3D photographic capsule endoscope system'. The system uses three cameras to capture images in real time. The advantage is increasing the viewing range up to 2.99 times respect to the two camera system. The system can accompany 3D monitor provides the exact information of symptom points, helping doctors diagnose the disease.

  11. Near-Infrared Coloring via a Contrast-Preserving Mapping Model.

    PubMed

    Chang-Hwan Son; Xiao-Ping Zhang

    2017-11-01

    Near-infrared gray images captured along with corresponding visible color images have recently proven useful for image restoration and classification. This paper introduces a new coloring method to add colors to near-infrared gray images based on a contrast-preserving mapping model. A naive coloring method directly adds the colors from the visible color image to the near-infrared gray image. However, this method results in an unrealistic image because of the discrepancies in the brightness and image structure between the captured near-infrared gray image and the visible color image. To solve the discrepancy problem, first, we present a new contrast-preserving mapping model to create a new near-infrared gray image with a similar appearance in the luminance plane to the visible color image, while preserving the contrast and details of the captured near-infrared gray image. Then, we develop a method to derive realistic colors that can be added to the newly created near-infrared gray image based on the proposed contrast-preserving mapping model. Experimental results show that the proposed new method not only preserves the local contrast and details of the captured near-infrared gray image, but also transfers the realistic colors from the visible color image to the newly created near-infrared gray image. It is also shown that the proposed near-infrared coloring can be used effectively for noise and haze removal, as well as local contrast enhancement.

  12. Generation of binary holograms for deep scenes captured with a camera and a depth sensor

    NASA Astrophysics Data System (ADS)

    Leportier, Thibault; Park, Min-Chul

    2017-01-01

    This work presents binary hologram generation from images of a real object acquired from a Kinect sensor. Since hologram calculation from a point-cloud or polygon model presents a heavy computational burden, we adopted a depth-layer approach to generate the holograms. This method enables us to obtain holographic data of large scenes quickly. Our investigations focus on the performance of different methods, iterative and noniterative, to convert complex holograms into binary format. Comparisons were performed to examine the reconstruction of the binary holograms at different depths. We also propose to modify the direct binary search algorithm to take into account several reference image planes. Then, deep scenes featuring multiple planes of interest can be reconstructed with better efficiency.
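
    A depth-layer hologram can be sketched by propagating each layer to the hologram plane and summing, then binarizing; the snippet below uses angular-spectrum propagation and a simple sign threshold. The propagation model and all parameters are our own illustrative assumptions, and the paper's direct-binary-search refinement is not included.

    ```python
    import numpy as np

    def angular_spectrum(field, wavelength, dx, z):
        """Propagate a complex field by distance z (angular spectrum method)."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # clamp evanescent waves
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

    def depth_layer_hologram(layers, depths, wavelength=633e-9, dx=8e-6):
        """Sum propagated depth layers, then binarize the real part by sign
        (simple thresholding; the paper compares this against iterative
        methods such as direct binary search)."""
        holo = sum(angular_spectrum(layer.astype(complex), wavelength, dx, z)
                   for layer, z in zip(layers, depths))
        return (holo.real > 0).astype(np.uint8)   # binary hologram

    layers = [np.random.rand(256, 256) > 0.99 for _ in range(3)]
    print(depth_layer_hologram(layers, [0.05, 0.10, 0.15]).mean())
    ```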

  13. Getting the Bigger Picture With Digital Surveillance

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Through a Space Act Agreement, Diebold, Inc., acquired the exclusive rights to Glenn Research Center's patented video observation technology, originally designed to accelerate video image analysis for various ongoing and future space applications. Diebold implemented the technology into its AccuTrack digital, color video recorder, a state-of- the-art surveillance product that uses motion detection for around-the- clock monitoring. AccuTrack captures digitally signed images and transaction data in real-time. This process replaces the onerous tasks involved in operating a VCR-based surveillance system, and subsequently eliminates the need for central viewing and tape archiving locations altogether. AccuTrack can monitor an entire bank facility, including four automated teller machines, multiple teller lines, and new account areas, all from one central location.

  14. NASA Spacecraft Captures Image of Brazil Flooding

    NASA Image and Video Library

    2011-01-19

    On Jan. 18, 2011, NASA Terra spacecraft captured this 3-D perspective image of the city of Nova Friburgo, Brazil. A week of torrential rains triggered a series of deadly mudslides and floods. More details about this image at the Photojournal.

  15. Video Image Tracking Engine

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Bryan, ThomasC. (Inventor); Book, Michael L. (Inventor)

    2004-01-01

    A method and system for processing an image, including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data, and linear spot segments are identified from the selected threshold pixel data. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of the first and last pixels of a linear segment present in the captured image with the respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear segment are saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value multiplied by that pixel's x-location).
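
    The compact segment representation claimed above can be sketched as run-length extraction over thresholded rows, keeping only each run's endpoints plus its pixel sum and x-weighted sum. The snippet below is an illustrative reading of that representation, not the patented implementation.

    ```python
    import numpy as np

    def linear_segments(frame, threshold):
        """Per row, keep only the first and last above-threshold pixel of each
        run, plus the run's pixel sum and x-weighted sum (illustrative)."""
        segments = []
        for y, row in enumerate(frame):
            above = row > threshold
            # +1 marks a run start, -1 marks one past a run end
            padded = np.diff(np.concatenate(([0], above.astype(int), [0])))
            starts = np.where(padded == 1)[0]
            ends = np.where(padded == -1)[0] - 1
            for x0, x1 in zip(starts, ends):
                vals = row[x0:x1 + 1].astype(np.int64)
                segments.append((y, x0, x1, vals.sum(),
                                 (vals * np.arange(x0, x1 + 1)).sum()))
        return segments  # (row, first_x, last_x, sum, weighted_sum)

    frame = (np.random.rand(8, 32) * 255).astype(np.uint8)
    print(len(linear_segments(frame, 200)))
    ```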

  16. A hybrid approach for fusing 4D-MRI temporal information with 3D-CT for the study of lung and lung tumor motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y. X.; Van Reeth, E.; Poh, C. L., E-mail: clpoh@ntu.edu.sg

    2015-08-15

    Purpose: Accurate visualization of lung motion is important in many clinical applications, such as radiotherapy of lung cancer. Advancement in imaging modalities [e.g., computed tomography (CT) and MRI] has allowed dynamic imaging of lung and lung tumor motion. However, each imaging modality has its advantages and disadvantages. The study presented in this paper aims at generating a synthetic 4D-CT dataset for lung cancer patients by combining both the continuous three-dimensional (3D) motion captured by 4D-MRI and the high spatial resolution captured by CT, using the authors' proposed approach. Methods: A novel hybrid approach based on deformable image registration (DIR) and finite element method simulation was developed to fuse a static 3D-CT volume (acquired under breath-hold) and the 3D motion information extracted from the 4D-MRI dataset, creating a synthetic 4D-CT dataset. Results: The study focuses on imaging of lung and lung tumor. Comparing the synthetic 4D-CT dataset with the acquired 4D-CT dataset of six lung cancer patients based on 420 landmarks, accurate results (average error <2 mm) were achieved using the authors' proposed approach. The hybrid approach achieved a 40% error reduction (based on landmark assessment) over using only DIR techniques. Conclusions: The synthetic 4D-CT dataset generated has high spatial resolution, has excellent lung details, and is able to show the movement of lung and lung tumor over multiple breathing cycles.

  17. A Novel, Real-Time, In Vivo Mouse Retinal Imaging System

    PubMed Central

    Butler, Mark C.; Sullivan, Jack M.

    2015-01-01

    Purpose To develop an efficient, low-cost instrument for robust real-time imaging of the mouse retina in vivo, and to assess system capabilities by evaluating various animal models. Methods Following multiple disappointing attempts to visualize the mouse retina during a subretinal injection using commercially available systems, we identified the key limitation to be inadequate illumination due to off-axis illumination and poor optical train optimization. Therefore, we designed a paraxial illumination system for a Greenough-type stereo dissecting microscope, incorporating an optimized optical launch and an efficiently coupled fiber-optic delivery system. Excitation and emission filters control spectral bandwidth. A color charge-coupled device (CCD) camera is coupled to the microscope for image capture. Although the field of view (FOV) is constrained by the small pupil aperture, the high optical power of the mouse eye, and the long working distance (needed for surgical manipulations), these limitations can be compensated by eye positioning in order to observe the entire retina. Results The retinal imaging system delivers an adjustable narrow beam to the dilated pupil with minimal vignetting. The optic nerve, vasculature, and posterior pole are crisply visualized, and the entire retina can be observed through eye positioning. Normal and degenerative retinal phenotypes can be followed over time. Subretinal or intraocular injection procedures are followed in real time. Real-time intravenous fluorescein angiography in the live mouse has been achieved. Conclusions A novel device is established for real-time viewing and image capture of the small-animal retina during subretinal injections for preclinical gene therapy studies. PMID:26551329

  18. Multiocular image sensor with on-chip beam-splitter and inner meta-micro-lens for single-main-lens stereo camera.

    PubMed

    Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa

    2016-08-08

    We developed multiocular 1/3-inch 2.75-μm-pixel-size 2.1M-pixel image sensors by co-design of both an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field, with the on-chip beam-splitter horizontally dividing rays according to incident angle and the inner meta-micro-lens collecting the divided rays into pixels with small optical loss. Cross-talk between adjacent light field images of a fabricated binocular image sensor and of a quad-ocular image sensor is as low as 6% and 7%, respectively. By selecting two images from the one-dimensional light field images, a selective baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce the 3D fatigue of viewers.

  19. A Low Power, Parallel Wearable Multi-Sensor System for Human Activity Evaluation.

    PubMed

    Li, Yuecheng; Jia, Wenyan; Yu, Tianjian; Luan, Bo; Mao, Zhi-Hong; Zhang, Hong; Sun, Mingui

    2015-04-01

    In this paper, the design of a low power heterogeneous wearable multi-sensor system, built with Zynq System-on-Chip (SoC), for human activity evaluation is presented. The powerful data processing capability and flexibility of this SoC represent significant improvements over our previous ARM based system designs. The new system captures and compresses multiple color images and sensor data simultaneously. Several strategies are adopted to minimize power consumption. Our wearable system provides a new tool for the evaluation of human activity, including diet, physical activity and lifestyle.

  20. Water surface capturing by image processing

    USDA-ARS?s Scientific Manuscript database

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  1. Ultrasound image texture processing for evaluating fatty liver in peripartal dairy cows

    NASA Astrophysics Data System (ADS)

    Amin, Viren R.; Bobe, Gerd; Young, Jerry; Ametaj, Burim; Beitz, Donald

    2001-07-01

    The objective of this work is to characterize liver ultrasound texture as it changes with the diffuse disease of fatty liver. This technology could allow non-invasive diagnosis of fatty liver, a major metabolic disorder in early-lactation dairy cows. More than 100 liver biopsies were taken from fourteen dairy cows as part of a USDA-funded study of the effects of glucagon on the prevention and treatment of fatty liver. Up to nine liver biopsies were taken from each cow during the seven-week peripartal period, and total lipid content was determined chemically. Just before each liver biopsy was taken, ultrasonic B-mode images were digitally captured using a 3.5 or 5 MHz transducer. Effort was made to capture images that were non-blurred, free of large blood vessels and multiple echoes, and of consistent texture. From each image, a region of interest of 100-by-100 pixels was processed. Texture parameters were calculated using algorithms such as first- and second-order statistics, 2D Fourier transformation, the co-occurrence matrix, and gradient analysis. Many cows had normal livers (3% to 6% total lipid) and a few developed fatty liver with total lipid up to 15%. The selected texture parameters showed consistent change with changing lipid content and could potentially be used to diagnose early fatty liver non-invasively. The texture analysis approach and initial results on its potential for evaluating total lipid percentage are presented here.
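
    One family of the second-order texture parameters mentioned above can be computed from the gray-level co-occurrence matrix; the sketch below uses scikit-image (graycomatrix/graycoprops, spelled greycomatrix in older releases). The distances, angles, and chosen properties are illustrative, not the study's exact settings.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def liver_texture_features(roi):
        """Second-order (co-occurrence) texture statistics for a 100x100
        ultrasound ROI; one family of parameters used in such studies."""
        glcm = graycomatrix(roi, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return {p: graycoprops(glcm, p).mean()
                for p in ("contrast", "homogeneity", "energy", "correlation")}

    roi = (np.random.rand(100, 100) * 255).astype(np.uint8)
    print(liver_texture_features(roi))
    ```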

  2. SIMULTANEOUS MULTISLICE MAGNETIC RESONANCE FINGERPRINTING WITH LOW-RANK AND SUBSPACE MODELING

    PubMed Central

    Zhao, Bo; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A.; Wald, Lawrence L.; Setsompop, Kawin

    2018-01-01

    Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has a potential to allow for a 3x speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice. PMID:29060594
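
    The subspace-modeling ingredient can be illustrated in isolation: simulate a low-rank dictionary of signal evolutions, extract a temporal basis by SVD, and project a noisy time course onto it. The sketch below shows only this step, with invented sizes and a synthetic stand-in for Bloch-simulated fingerprints; the paper embeds the subspace in a full ADMM reconstruction with a low-rank spatiotemporal model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T, n_atoms, rank = 500, 1000, 8
    # Synthetic low-rank dictionary (stand-in for Bloch-simulated evolutions)
    latent = rng.standard_normal((T, rank))
    dictionary = latent @ rng.standard_normal((rank, n_atoms))

    U, _, _ = np.linalg.svd(dictionary, full_matrices=False)
    Phi = U[:, :rank]                         # temporal subspace basis

    signal = dictionary[:, 0] + 0.1 * rng.standard_normal(T)  # noisy time course
    coeffs = Phi.T @ signal                   # low-dimensional representation
    denoised = Phi @ coeffs                   # rank-constrained estimate
    print(coeffs.shape,
          np.linalg.norm(signal - denoised) / np.linalg.norm(signal))
    ```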

  3. An improved level set method for brain MR images segmentation and bias correction.

    PubMed

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term of the level set framework. Our method is able to capture bias fields of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.

  4. Localizing text in scene images by boundary clustering, stroke segmentation, and string fragment classification.

    PubMed

    Yi, Chucai; Tian, Yingli

    2012-09-01

    In this paper, we propose a novel framework to extract text regions from scene images with complex backgrounds and multiple text appearances. This framework consists of three main steps: boundary clustering (BC), stroke segmentation, and string fragment classification. In BC, we propose a new bigram-color-uniformity-based method to model both text and attachment surface, and cluster edge pixels based on color pairs and spatial positions into boundary layers. Then, stroke segmentation is performed at each boundary layer by color assignment to extract character candidates. We propose two algorithms to combine the structural analysis of text stroke with color assignment and filter out background interferences. Further, we design a robust string fragment classification based on Gabor-based text features. The features are obtained from feature maps of gradient, stroke distribution, and stroke width. The proposed framework of text localization is evaluated on scene images, born-digital images, broadcast video images, and images of handheld objects captured by blind persons. Experimental results on respective datasets demonstrate that the framework outperforms state-of-the-art localization algorithms.

  5. Discriminative Multi-View Interactive Image Re-Ranking.

    PubMed

    Li, Jun; Xu, Chang; Yang, Wankou; Sun, Changyin; Tao, Dacheng

    2017-07-01

    Given unreliable visual patterns and insufficient query information, content-based image retrieval is often suboptimal and requires image re-ranking using auxiliary information. In this paper, we propose discriminative multi-view interactive image re-ranking (DMINTIR), which integrates user relevance feedback capturing users' intentions and multiple features that sufficiently describe the images. In DMINTIR, heterogeneous property features are incorporated in the multi-view learning scheme to exploit their complementarities. In addition, a discriminatively learned weight vector is obtained to reassign updated scores and target images for re-ranking. Compared with other multi-view learning techniques, our scheme not only generates a compact representation in the latent space from the redundant multi-view features but also maximally preserves the discriminative information in feature encoding by the large-margin principle. Furthermore, the generalization error bound of the proposed algorithm is theoretically analyzed and shown to be improved by the interactions between the latent space and discriminant function learning. Experimental results on two benchmark data sets demonstrate that our approach boosts baseline retrieval quality and is competitive with the other state-of-the-art re-ranking strategies.

  6. Simultaneous multislice magnetic resonance fingerprinting with low-rank and subspace modeling.

    PubMed

    Bo Zhao; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L; Setsompop, Kawin

    2017-07-01

    Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has a potential to allow for a 3× speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice.

  7. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors.

    PubMed

    Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung

    2017-05-08

    Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. Existing research using visible light cameras has mainly focused on methods of human detection for daytime hours, when there is outside light; human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras, or thermal cameras, have been used. However, NIR illuminators have limitations in terms of illumination angle and distance, and the illuminator power must be adaptively adjusted depending on whether the object is close or far away. Thermal cameras remain costly, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in indoor environments or on video-based methods that capture and process multiple images, which increases processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk nighttime human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show that the method achieves high-accuracy human detection in a variety of environments and exhibits excellent performance compared to existing methods.

  8. Robust and Accurate Image-Based Georeferencing Exploiting Relative Orientation Constraints

    NASA Astrophysics Data System (ADS)

    Cavegn, S.; Blaser, S.; Nebiker, S.; Haala, N.

    2018-05-01

    Urban environments with extended areas of poor GNSS coverage as well as indoor spaces that often rely on real-time SLAM algorithms for camera pose estimation require sophisticated georeferencing in order to fulfill our high requirements of a few centimeters for absolute 3D point measurement accuracies. Since we focus on image-based mobile mapping, we extended the structure-from-motion pipeline COLMAP with georeferencing capabilities by integrating exterior orientation parameters from direct sensor orientation or SLAM as well as ground control points into bundle adjustment. Furthermore, we exploit constraints for relative orientation parameters among all cameras in bundle adjustment, which leads to a significant robustness and accuracy increase especially by incorporating highly redundant multi-view image sequences. We evaluated our integrated georeferencing approach on two data sets, one captured outdoors by a vehicle-based multi-stereo mobile mapping system and the other captured indoors by a portable panoramic mobile mapping system. We obtained mean RMSE values for check point residuals between image-based georeferencing and tachymetry of 2 cm in an indoor area, and 3 cm in an urban environment where the measurement distances are a multiple compared to indoors. Moreover, in comparison to a solely image-based procedure, our integrated georeferencing approach showed a consistent accuracy increase by a factor of 2-3 at our outdoor test site. Due to pre-calibrated relative orientation parameters, images of all camera heads were oriented correctly in our challenging indoor environment. By performing self-calibration of relative orientation parameters among respective cameras of our vehicle-based mobile mapping system, remaining inaccuracies from suboptimal test field calibration were successfully compensated.

  9. Transmission Geometry Laser Ablation into a Non-Contact Liquid Vortex Capture Probe for Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ovchinnikova, Olga S; Bhandari, Deepak; Lorenz, Matthias

    2014-01-01

    RATIONALE: Capture of material from a laser ablation plume into a continuous flow stream of solvent provides the means for uninterrupted sampling, transport and ionization of collected material for coupling with mass spectral analysis. Reported here is the use of vertically aligned transmission geometry laser ablation in combination with a new non-contact liquid vortex capture probe coupled with electrospray ionization for spot sampling and chemical imaging with mass spectrometry. Methods: A vertically aligned continuous flow liquid vortex capture probe was positioned directly underneath a sample surface in a transmission geometry laser ablation (355 nm, 10 Hz, 7 ns pulse width) setup to capture the ablated material into solution. The outlet of the vortex probe was coupled to the Turbo V ion source of an AB SCIEX TripleTOF 5600+ mass spectrometer. System operation and performance metrics were tested using inked patterns and thin tissue sections. Glass slides and slides designed especially for laser capture microdissection, viz., DIRECTOR slides and PEN 1.0 (polyethylene naphthalate) membrane slides, were used as sample substrates. Results: The estimated capture efficiency of laser-ablated material was 24%, which was enabled by the use of a probe with a large liquid surface area (~2.8 mm2) and by gravity helping direct ablated material vertically down towards the probe. The swirling vortex action of the liquid surface potentially enhanced capture and dissolution of not only particulates, but also gaseous products of the laser ablation. The use of DIRECTOR slides and PEN 1.0 (polyethylene naphthalate) membrane slides as sample substrates enabled effective ablation of a wide range of sample types (basic blue 7, polypropylene glycol, insulin and cytochrome c) without photodamage using a UV laser. Imaging resolution of about 6 µm was demonstrated for stamped ink on DIRECTOR slides, based on the ability to distinguish features present both in the optical and in the chemical image. This imaging resolution was 20 times better than the previous best reported results with laser ablation/liquid sample capture mass spectrometry imaging. Using thin sections of brain tissue, the chemical image of a selected lipid was obtained with an estimated imaging resolution of about 50 µm. Conclusions: A vertically aligned, transmission geometry laser ablation liquid vortex capture probe, electrospray ionization mass spectrometry system provides an effective means for spatially resolved spot sampling and imaging with mass spectrometry.

  10. Transmission geometry laser ablation into a non-contact liquid vortex capture probe for mass spectrometry imaging.

    PubMed

    Ovchinnikova, Olga S; Bhandari, Deepak; Lorenz, Matthias; Van Berkel, Gary J

    2014-08-15

    Capture of material from a laser ablation plume into a continuous flow stream of solvent provides the means for uninterrupted sampling, transport and ionization of collected material for coupling with mass spectral analysis. Reported here is the use of vertically aligned transmission geometry laser ablation in combination with a new non-contact liquid vortex capture probe coupled with electrospray ionization for spot sampling and chemical imaging with mass spectrometry. A vertically aligned continuous flow liquid vortex capture probe was positioned directly underneath a sample surface in a transmission geometry laser ablation (355 nm, 10 Hz, 7 ns pulse width) set up to capture into solution the ablated material. The outlet of the vortex probe was coupled to the Turbo V™ ion source of an AB SCIEX TripleTOF 5600+ mass spectrometer. System operation and performance metrics were tested using inked patterns and thin tissue sections. Glass slides and slides designed especially for laser capture microdissection, viz., DIRECTOR(®) slides and PEN 1.0 (polyethylene naphthalate) membrane slides, were used as sample substrates. The estimated capture efficiency of laser-ablated material was 24%, which was enabled by the use of a probe with large liquid surface area (~2.8 mm(2) ) and with gravity to help direct ablated material vertically down towards the probe. The swirling vortex action of the liquid surface potentially enhanced capture and dissolution not only of particulates, but also of gaseous products of the laser ablation. The use of DIRECTOR(®) slides and PEN 1.0 (polyethylene naphthalate) membrane slides as sample substrates enabled effective ablation of a wide range of sample types (basic blue 7, polypropylene glycol, insulin and cytochrome c) without photodamage using a UV laser. Imaging resolution of about 6 µm was demonstrated for stamped ink on DIRECTOR(®) slides based on the ability to distinguish features present both in the optical and in the chemical image. This imaging resolution was 20 times better than the previous best reported results with laser ablation/liquid sample capture mass spectrometry imaging. Using thin sections of brain tissue the chemical image of a selected lipid was obtained with an estimated imaging resolution of about 50 µm. A vertically aligned, transmission geometry laser ablation liquid vortex capture probe, electrospray ionization mass spectrometry system provides an effective means for spatially resolved spot sampling and imaging with mass spectrometry. Published in 2014. This article is a U.S. Government work and is in the public domain in the USA.

  11. Reducing flicker due to ambient illumination in camera captured images

    NASA Astrophysics Data System (ADS)

    Kim, Minwoong; Bengtson, Kurt; Li, Lisa; Allebach, Jan P.

    2013-02-01

    The flicker artifact dealt with in this paper is the scanning distortion arising when an image is captured by a digital camera using a CMOS imaging sensor with an electronic rolling shutter under strong ambient light sources powered by AC. This type of camera scans a target line-by-line in a frame. Therefore, time differences exist between the lines. This mechanism causes a captured image to be corrupted by the change of illumination. This phenomenon is called the flicker artifact. The non-content area of the captured image is used to estimate a flicker signal that is a key to being able to compensate the flicker artifact. The average signal of the non-content area taken along the scan direction has local extrema where the peaks of flicker exist. The locations of the extrema are very useful information to estimate the desired distribution of pixel intensities assuming that the flicker artifact does not exist. The flicker-reduced images compensated by our approach clearly demonstrate the reduced flicker artifact, based on visual observation.
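
    The core idea, estimating the flicker signal from a content-free image band and dividing it out, can be sketched as below. The margin-based estimate, the grayscale assumption, and the parameter names are illustrative; the paper's estimation of the desired intensity distribution from extrema locations is more involved.

    ```python
    import numpy as np

    def flicker_profile(img, margin=40):
        """Estimate the rolling-shutter flicker signal from a non-content
        margin: average each scan line over the margin columns and normalize
        by the overall mean (sketch; assumes a 2D grayscale image)."""
        strip = img[:, :margin].astype(np.float32)  # assumed content-free band
        row_mean = strip.mean(axis=1)
        return row_mean / row_mean.mean()           # ~1.0 plus AC flicker ripple

    def deflicker(img, margin=40):
        """Divide each scan line by its estimated flicker gain."""
        p = flicker_profile(img, margin)
        return np.clip(img / p[:, None], 0, 255).astype(np.uint8)
    ```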

  12. Cell classification using big data analytics plus time stretch imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Jalali, Bahram; Chen, Claire L.; Mahjoubfar, Ata

    2016-09-01

    We show that blood cells can be classified with high accuracy and high throughput by combining machine learning with time stretch quantitative phase imaging. Our diagnostic system captures quantitative phase images in a flow microscope at millions of frames per second and extracts multiple biophysical features from individual cells including morphological characteristics, light absorption and scattering parameters, and protein concentration. These parameters form a hyperdimensional feature space in which supervised learning and cell classification is performed. We show binary classification of T-cells against colon cancer cells, as well classification of algae cell strains with high and low lipid content. The label-free screening averts the negative impact of staining reagents on cellular viability or cell signaling. The combination of time stretch machine vision and learning offers unprecedented cell analysis capabilities for cancer diagnostics, drug development and liquid biopsy for personalized genomics.

  13. Imaging Mercury's Polar Deposits during MESSENGER's Low-altitude Campaign.

    PubMed

    Chabot, Nancy L; Ernst, Carolyn M; Paige, David A; Nair, Hari; Denevi, Brett W; Blewett, David T; Murchie, Scott L; Deutsch, Ariel N; Head, James W; Solomon, Sean C

    2016-09-28

    Images obtained during MESSENGER's low-altitude campaign in the final year of the mission provide the highest-spatial-resolution views of Mercury's polar deposits. Images for distinct areas of permanent shadow within 35 north polar craters were successfully captured during the campaign. All of these regions of permanent shadow were found to have low-reflectance surfaces with well-defined boundaries. Additionally, brightness variations across the deposits correlate with variations in the biannual maximum surface temperature across the permanently shadowed regions, supporting the conclusion that multiple volatile organic compounds are contained in Mercury's polar deposits, in addition to water ice. A recent large impact event or ongoing bombardment by micrometeoroids could deliver water as well as many volatile organic compounds to Mercury. Either scenario is consistent with the distinctive reflectance properties and well-defined boundaries of Mercury's polar deposits and the presence of volatiles in all available cold traps.

  14. A Bayesian Model of the Memory Colour Effect.

    PubMed

    Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R

    2018-01-01

    According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects.

  15. A Bayesian Model of the Memory Colour Effect

    PubMed Central

    Olkkonen, Maria; Gegenfurtner, Karl R.

    2018-01-01

    According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects. PMID:29760874
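
    The no-free-parameter Bayesian account can be illustrated with a one-dimensional Gaussian prior-times-likelihood computation: the grey setting is pulled toward the object's typical colour in proportion to the relative precisions of prior and sensory estimate. The numbers below are invented for illustration, not the paper's data.

    ```python
    import numpy as np

    # Typical colour prior and sensory (colourimetrically grey) likelihood
    # along one opponent-colour axis; values are illustrative only.
    mu_prior, var_prior = 0.08, 0.02**2
    mu_sens,  var_sens  = 0.00, 0.01**2

    # Gaussian posterior mean: precision-weighted combination of the two cues
    w = var_sens / (var_sens + var_prior)        # weight given to the prior
    mu_post = w * mu_prior + (1 - w) * mu_sens
    print(f"perceived offset from grey: {mu_post:.4f}")  # small typical-colour bias
    ```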

  16. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    PubMed

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2018-04-01

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-the-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
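
    The ASPP idea can be sketched in a few lines: parallel 3x3 convolutions with different dilation rates, fused by summation. The sketch below (PyTorch) illustrates the mechanism with invented channel counts and rates; it is not the released DeepLab code.

    ```python
    import torch
    import torch.nn as nn

    class ASPP(nn.Module):
        """Minimal atrous spatial pyramid pooling: parallel 3x3 convolutions
        with different dilation rates probe multiple effective fields of view
        (a sketch of the idea, not the authors' implementation)."""
        def __init__(self, in_ch, out_ch, rates=(6, 12, 18, 24)):
            super().__init__()
            # padding == dilation keeps the spatial size for 3x3 kernels
            self.branches = nn.ModuleList(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates)

        def forward(self, x):
            return sum(b(x) for b in self.branches)  # fuse multi-scale responses

    feat = torch.randn(1, 256, 33, 33)               # DCNN feature map
    print(ASPP(256, 21)(feat).shape)                 # torch.Size([1, 21, 33, 33])
    ```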

  17. Accelerated x-ray scatter projection imaging using multiple continuously moving pencil beams

    NASA Astrophysics Data System (ADS)

    Dydula, Christopher; Belev, George; Johns, Paul C.

    2017-03-01

    Coherent x-ray scatter varies with angle and photon energy in a manner dependent on the chemical composition of the scattering material, even for amorphous materials. Therefore, images generated from scattered photons can have much higher contrast than conventional projection radiographs. We are developing a scatter projection imaging prototype at the BioMedical Imaging and Therapy (BMIT) facility of the Canadian Light Source (CLS) synchrotron in Saskatoon, Canada. The best images are obtained using step-and-shoot scanning with a single pencil beam and area detector to capture sequentially the scatter pattern for each primary beam location on the sample. Primary x-ray transmission is recorded simultaneously using photodiodes. The technological challenge is to acquire the scatter data in a reasonable time. Using multiple pencil beams producing partially-overlapping scatter patterns reduces acquisition time but increases complexity due to the need for a disentangling algorithm to extract the data. Continuous sample motion, rather than step-and-shoot, also reduces acquisition time at the expense of introducing motion blur. With a five-beam (33.2 keV, 3.5 mm2 beam area) continuous sample motion configuration, a rectangular array of 12 x 100 pixels with 1 mm sampling width has been acquired in 0.4 minutes (3000 pixels per minute). The acquisition speed is 38 times the speed for single beam step-and-shoot. A system model has been developed to calculate detected scatter patterns given the material composition of the object to be imaged. Our prototype development, image acquisition of a plastic phantom and modelling are described.

  18. Knowledge of healthcare professionals about rights of patient’s images

    PubMed Central

    Caires, Bianca Rodrigues; Lopes, Maria Carolina Barbosa Teixeira; Okuno, Meiry Fernanda Pinto; Vancini-Campanharo, Cássia Regina; Batista, Ruth Ester Assayag

    2015-01-01

    Objective To assess knowledge of healthcare professionals about capture and reproduction of images of patients in a hospital setting. Methods A cross-sectional and observational study among 360 healthcare professionals (nursing staff, physical therapists, and physicians), working at a teaching hospital in the city of São Paulo (SP). A questionnaire with sociodemographic information was distributed and data were correlated to capture and reproduction of images at hospitals. Results Of the 360 respondents, 142 had captured images of patients in the last year, and 312 reported seeing other professionals taking photographs of patients. Of the participants who captured images, 61 said they used them for studies and presentation of clinical cases, and 168 professionals reported not knowing of any legislation in the Brazilian Penal Code regarding collection and use of images. Conclusion There is a gap in the training of healthcare professionals regarding the use of patient´s images. It is necessary to include subjects that address this theme in the syllabus of undergraduate courses, and the healthcare organizations should regulate this issue. PMID:26267838

  19. Fly-through viewpoint video system for multi-view soccer movie using viewpoint interpolation

    NASA Astrophysics Data System (ADS)

    Inamoto, Naho; Saito, Hideo

    2003-06-01

    This paper presents a novel method for virtual view generation that allows viewers to fly through a real soccer scene. A soccer match is captured by multiple cameras at a stadium, and images of arbitrary viewpoints are synthesized by view interpolation between two real camera images near the given viewpoint. In the proposed method, the cameras do not need to be strongly calibrated; the epipolar geometry between the cameras is sufficient for the view interpolation. The method can therefore easily be applied to a dynamic event even in a large space, because the effort of camera calibration is reduced. The soccer scene is classified into several regions, and virtual view images are generated based on the epipolar geometry in each region. Superimposition of the images completes virtual views of the whole soccer scene. An application for fly-through observation of a soccer match is introduced, along with the view-synthesis algorithm and experimental results.

  20. Image analysis of corrosion pit initiation on ASTM type A240 stainless steel and ASTM type A 1008 carbon steel

    NASA Astrophysics Data System (ADS)

    Nine, H. M. Zulker

    Metallic corrosion is of growing concern to industrial engineers and scientists. Corrosion attacks metal surfaces and causes structural damage as well as direct and indirect economic losses. Multiple corrosion monitoring tools are available, although they are time-consuming and costly. Given the wide availability of image-capturing devices, image-based corrosion monitoring is an attractive alternative. In this research, a simple and cost-effective corrosion measurement tool was identified and investigated by setting up stainless steel SS 304 and low-carbon steel QD 1008 panels in distilled water, half-saturated sodium chloride, and saturated sodium chloride solutions and performing subsequent RGB image analysis in Matlab. Additionally, open circuit potential and electrochemical impedance spectroscopy results were compared with the RGB analysis to validate the corrosion measurements. Finally, to understand the importance of ambiguity in crisis communication, the communication process between Union Carbide and the Indian Government regarding the 1984 Bhopal incident was analyzed.
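
    The RGB analysis step can be sketched as tracking the per-channel means of panel photographs over exposure time; rising red relative to green and blue is a crude rust indicator. The thesis used Matlab; the sketch below is an equivalent illustration in Python, and the file name is hypothetical.

    ```python
    import numpy as np
    from PIL import Image

    def mean_rgb(path):
        """Mean R, G, B of a panel photo; tracking the drift of these channel
        means over exposure time gives a simple corrosion indicator
        (illustrative sketch, not the thesis's Matlab code)."""
        arr = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
        return arr.reshape(-1, 3).mean(axis=0)

    # e.g., compare day-0 and day-30 panels: a rising R/(G+B) ratio suggests
    # growth of reddish-brown corrosion product on carbon steel.
    r, g, b = mean_rgb("panel_day30.png")   # hypothetical file name
    print(r / (g + b + 1e-9))
    ```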

  1. A novel 3D imaging system for strawberry phenotyping.

    PubMed

    He, Joe Q; Harrison, Richard J; Li, Bo

    2017-01-01

    Accurate and quantitative phenotypic data in plant breeding programmes is vital in breeding to assess the performance of genotypes and to make selections. Traditional strawberry phenotyping relies on the human eye to assess most external fruit quality attributes, which is time-consuming and subjective. 3D imaging is a promising high-throughput technique that allows multiple external fruit quality attributes to be measured simultaneously. A low cost multi-view stereo (MVS) imaging system was developed, which captured data from 360° around a target strawberry fruit. A 3D point cloud of the sample was derived and analysed with custom-developed software to estimate berry height, length, width, volume, calyx size, colour and achene number. Analysis of these traits in 100 fruits showed good concordance with manual assessment methods. This study demonstrates the feasibility of an MVS based 3D imaging system for the rapid and quantitative phenotyping of seven agronomically important external strawberry traits. With further improvement, this method could be applied in strawberry breeding programmes as a cost effective phenotyping technique.
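
    The trait-extraction step can be sketched from a reconstructed point cloud: axis-aligned extents give height, length, and width, and a convex hull gives a volume estimate. The sketch below is an illustrative stand-in for the study's custom software; calyx size, colour, and achene counting require more than is shown here.

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    def berry_dimensions(points):
        """Height/length/width and a convex-hull volume estimate from a fruit
        point cloud (Nx3, metres). Axis assignments are illustrative."""
        extent = points.max(axis=0) - points.min(axis=0)  # axis-aligned size
        hull = ConvexHull(points)                         # watertight volume proxy
        return {"length": extent[0], "width": extent[1],
                "height": extent[2], "volume": hull.volume}

    pts = np.random.standard_normal((2000, 3)) * [0.01, 0.012, 0.015]
    print(berry_dimensions(pts))
    ```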

  2. Real-time three-dimensional ultrasound-assisted axillary plexus block defines soft tissue planes.

    PubMed

    Clendenen, Steven R; Riutort, Kevin; Ladlie, Beth L; Robards, Christopher; Franco, Carlo D; Greengrass, Roy A

    2009-04-01

    Two-dimensional (2D) ultrasound is commonly used for regional block of the axillary brachial plexus. In this technical case report, we described a real-time three-dimensional (3D) ultrasound-guided axillary block. The difference between 2D and 3D ultrasound is similar to the difference between plain radiograph and computer tomography. Unlike 2D ultrasound that captures a planar image, 3D ultrasound technology acquires a 3D volume of information that enables multiple planes of view by manipulating the image without movement of the ultrasound probe. Observation of the brachial plexus in cross-section demonstrated distinct linear hyperechoic tissue structures (loose connective tissue) that initially inhibited the flow of the local anesthesia. After completion of the injection, we were able to visualize the influence of arterial pulsation on the spread of the local anesthesia. Possible advantages of this novel technology over current 2D methods are wider image volume and the capability to manipulate the planes of the image without moving the probe.

  3. A four-lens based plenoptic camera for depth measurements

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe

    2015-04-01

    In previous works, we extended the principles of "variable homography", defined by Zhang and Greenspan, for measuring the height of emergent fibers on glass and non-woven fabrics. This method was defined for working with fabric samples progressing on a conveyor belt. Triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we have retained the advantages of variable homography for measurements along the Z axis, but have reduced the number of acquisitions to a single one by developing an acquisition device characterized by four lenses placed in front of a single image sensor. The idea is to obtain four projected sub-images on a single CCD sensor. The device thus becomes a plenoptic or light field camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation for this device and we propose a new formulation to calculate depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results are very promising.
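
    For illustration only (the paper's variable-homography formulation is not reproduced here): under a simpler pinhole model, two sub-images from lenses separated by a baseline b behave like a stereo pair, and depth follows from the disparity d between matched points. All parameter values below are hypothetical.

    def depth_from_disparity(f_px, baseline_mm, disparity_px):
        """Classic stereo relation Z = f * b / d (placeholder parameters)."""
        if disparity_px <= 0:
            raise ValueError("matched points must show positive disparity")
        return f_px * baseline_mm / disparity_px

    # A feature seen 12.5 px apart in two sub-images with a 4 mm lens pitch:
    print(depth_from_disparity(f_px=2200.0, baseline_mm=4.0, disparity_px=12.5))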

  4. Obstacle Detection and Avoidance of a Mobile Robotic Platform Using Active Depth Sensing

    DTIC Science & Technology

    2014-06-01

    At nearly one tenth the price of a laser range finder, the Xbox Kinect uses an infrared projector and camera to capture images of its environment in three dimensions.

  5. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    PubMed

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study presents a novel photogrammetry system that can realize simultaneous acquisition of multi-angle head images from a single camera position. By aligning two planar mirrors at an angle of 51.4° (360°/7, so the mirror pair acts as a kaleidoscope that adds six reflected views to the direct view), seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. The elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after the measurement of the calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG electrode positioning.

  6. Dense image matching of terrestrial imagery for deriving high-resolution topographic properties of vegetation locations in alpine terrain

    NASA Astrophysics Data System (ADS)

    Niederheiser, R.; Rutzinger, M.; Bremer, M.; Wichmann, V.

    2018-04-01

    The investigation of changes in spatial patterns of vegetation and the identification of potential micro-refugia require detailed topographic and terrain information. However, mapping alpine topography at very detailed scales is challenging due to the limited accessibility of sites. Close-range sensing by photogrammetric dense matching approaches, based on terrestrial images captured with hand-held cameras, offers a light-weight and low-cost solution for retrieving high-resolution measurements even in steep terrain and at locations that are difficult to access. We propose a novel approach for rapid capture of terrestrial images and a highly automated processing chain for retrieving detailed dense point clouds for topographic modelling. For this study, we modelled 249 plot locations. For the analysis of vegetation distribution and location properties, topographic parameters such as slope, aspect, and potential solar irradiation were derived by applying a multi-scale approach utilizing voxel grids and spherical neighbourhoods. The result is a micro-topography archive of 249 alpine locations that includes topographic parameters at multiple scales, ready for biogeomorphological analysis. Compared with regional elevation models at larger scales and traditional 2D gridding approaches to creating elevation models, our analyses in a fully 3D environment yield much more detailed insights into the interrelations between topographic parameters such as potential solar irradiation, surface area, aspect, and roughness.
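
    A minimal sketch (assumptions, not the authors' pipeline) of deriving slope and aspect at a point from a spherical neighbourhood: fit a plane by taking the local covariance's smallest eigenvector as the surface normal, then convert the normal to slope and aspect angles.

    import numpy as np

    def slope_aspect(neighbours):
        """neighbours: Nx3 points (x, y, z) within a spherical neighbourhood."""
        centred = neighbours - neighbours.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(centred.T @ centred)  # ascending order
        normal = eigvecs[:, 0]                  # smallest eigenvalue -> normal
        if normal[2] < 0:                       # orient the normal upwards
            normal = -normal
        slope = np.degrees(np.arccos(normal[2]))                   # 0 deg = flat
        aspect = np.degrees(np.arctan2(normal[0], normal[1])) % 360  # from north
        return slope, aspect

    pts = np.random.rand(50, 3) * [1.0, 1.0, 0.2]   # stand-in neighbourhood
    print(slope_aspect(pts))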

  7. An automated calibration method for non-see-through head mounted displays.

    PubMed

    Gilson, Stuart J; Fitzgibbon, Andrew W; Glennerster, Andrew

    2011-08-15

    Accurate calibration of a head mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet, existing calibration methods are time consuming and depend on human judgements, making them error prone, and are often limited to optical see-through HMDs. Building on our existing approach to HMD calibration (Gilson et al., 2008), we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside an HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in multiple positions. The centroids of the markers on the calibration object are recovered and their locations re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the HMD display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors without the need for error-prone human judgements.
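
    A hedged sketch of the final step only, using standard OpenCV camera calibration on synthetic stand-in data (the grid geometry, poses, and intrinsics below are invented): once marker centroids are expressed in grid coordinates, cv2.calibrateCamera recovers intrinsics, extrinsics, and the reprojection error the authors report magnitudes for.

    import numpy as np
    import cv2

    # Synthetic stand-in: a 9x6 planar grid of points observed from three poses.
    grid = np.zeros((54, 3), np.float32)
    grid[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 20.0    # 20 mm spacing
    K_true = np.array([[1000, 0, 640], [0, 1000, 512], [0, 0, 1]], np.float64)

    obj_pts, img_pts = [], []
    for ry, tz in [(0.0, 500.0), (0.2, 600.0), (-0.2, 550.0)]:
        rvec = np.array([0.1, ry, 0.0])
        tvec = np.array([-80.0, -50.0, tz])
        proj, _ = cv2.projectPoints(grid, rvec, tvec, K_true, None)
        obj_pts.append(grid)
        img_pts.append(proj.reshape(-1, 2).astype(np.float32))

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, (1280, 1024), None, None)
    print("reprojection RMS (px):", rms)
    print("recovered focal lengths:", K[0, 0], K[1, 1])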

  8. Russian Character Recognition using Self-Organizing Map

    NASA Astrophysics Data System (ADS)

    Gunawan, D.; Arisandi, D.; Ginting, F. M.; Rahmat, R. F.; Amalia, A.

    2017-01-01

    The World Tourism Organization (UNWTO) reported in 2014 that 28 million visitors visited Russia. Many visitors may have trouble typing Russian words when using a digital dictionary. This is because Russia and the countries around it use Cyrillic letters, whose shapes differ from Latin letters, and visitors may not be familiar with Cyrillic. This research proposes an alternative way to input Cyrillic words: instead of typing them directly, a camera captures an image of the words as input. The captured image is cropped, then several pre-processing steps are applied, such as noise filtering, binary image processing, segmentation, and thinning. Next, feature extraction is applied to the image. Cyrillic letters in the image are recognized by utilizing the Self-Organizing Map (SOM) algorithm. SOM successfully recognizes 89.09% of Cyrillic letters in computer-generated images and 88.89% of Cyrillic letters in images captured by a smartphone camera. For word recognition, SOM fully recognized 292 words and partially recognized 58 words from images captured by the smartphone camera, yielding a word recognition accuracy of 83.42%.
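
    A minimal self-organizing map sketch (illustrative, not the paper's code; grid size, feature dimension, and training schedule are assumptions): feature vectors extracted from letter images are mapped onto a 2D grid of weight vectors; after training, each node would be labelled with the letter class it attracts, and recognition becomes a nearest-node lookup.

    import numpy as np

    rng = np.random.default_rng(0)
    n_features, grid_w, grid_h = 64, 10, 10
    weights = rng.random((grid_w * grid_h, n_features))
    coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)

    def train(samples, epochs=20, lr0=0.5, sigma0=3.0):
        for epoch in range(epochs):
            lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
            sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighbourhood
            for x in samples:
                bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best match
                dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
                h = np.exp(-dist2 / (2 * sigma ** 2))    # neighbourhood kernel
                weights += lr * h[:, None] * (x - weights)

    train(rng.random((200, n_features)))   # stand-in for letter feature vectors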

  9. Multi-exposure high dynamic range image synthesis with camera shake correction

    NASA Astrophysics Data System (ADS)

    Li, Xudong; Chen, Yongfu; Jiang, Hongzhi; Zhao, Huijie

    2017-10-01

    Machine vision plays an important part in industrial online inspection. Owing to nonuniform illumination conditions and variable working distances, the captured image tends to be over-exposed or under-exposed. As a result, when processing the image, for example for crack inspection, algorithm complexity and computing time increase. Multi-exposure high dynamic range (HDR) image synthesis is used to improve the quality of the captured image, whose dynamic range is limited. Inevitably, camera shake results in ghost effects, which blur the synthesized image to some extent. However, existing exposure fusion algorithms assume that the input images are either perfectly aligned or captured in the same scene; these assumptions limit their application. Widely used registration based on the Scale Invariant Feature Transform (SIFT) is usually time consuming. In order to rapidly obtain a high-quality HDR image without ghost effects, we develop an efficient low dynamic range (LDR) image capturing approach and propose a registration method based on Oriented FAST and Rotated BRIEF (ORB) features and histogram equalization, which eliminates the illumination differences between the LDR images. Fusion is performed after alignment. The experimental results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Compared with other exposure fusion methods, our method is more efficient and can produce HDR images without ghost effects by registering and fusing four multi-exposure images.
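
    A hedged sketch of such a pipeline using standard OpenCV calls (file names, match counts, and thresholds are placeholders; the paper's exact fusion rule is not reproduced): histogram-equalise the exposures so ORB sees comparable intensities, estimate a homography from matched ORB features, then fuse the aligned stack with Mertens exposure fusion.

    import cv2
    import numpy as np

    def align_to_reference(ref, img):
        orb = cv2.ORB_create(2000)
        g1 = cv2.equalizeHist(cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY))
        g2 = cv2.equalizeHist(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
        k1, d1 = orb.detectAndCompute(g1, None)
        k2, d2 = orb.detectAndCompute(g2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        matches = sorted(matches, key=lambda m: m.distance)[:200]
        src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return cv2.warpPerspective(img, H, (ref.shape[1], ref.shape[0]))

    exposures = [cv2.imread(f"exp_{i}.png") for i in range(4)]  # hypothetical files
    aligned = [exposures[0]] + [align_to_reference(exposures[0], e)
                                for e in exposures[1:]]
    fused = cv2.createMergeMertens().process(
        [a.astype(np.float32) / 255 for a in aligned])
    cv2.imwrite("hdr_fused.png", np.clip(fused * 255, 0, 255).astype(np.uint8))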

  10. Investigation of sparsity metrics for autofocusing in digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Fan, Xin; Healy, John J.; Hennelly, Bryan M.

    2017-05-01

    Digital holographic microscopy (DHM) is an optoelectronic technique that is made up of two parts: (i) the recording of the interference pattern of the diffraction pattern of an object and a known reference wavefield using a digital camera, and (ii) the numerical reconstruction of the complex object wavefield using the recorded interferogram and a distance parameter as input. The latter is based on the simulation of optical propagation from the camera plane to a plane at any arbitrary distance from the camera. A key advantage of DHM over conventional microscopy is that both the phase and intensity information of the object can be recovered at any distance using only one capture, and this facilitates the recording of scenes that may change dynamically and that may otherwise go in and out of focus. Autofocusing in traditional microscopy requires mechanical movement of the translation stage or the microscope objective, and multiple image captures that are then compared using some metric. Autofocusing in DHM is similar, except that the sequence of intensity images to which the metric is applied is generated numerically from a single capture. We recently investigated the application of a number of sparsity metrics for DHM autofocusing, and in this paper we extend this work to include more such metrics and apply them over a greater range of biological diatom cells and magnifications/numerical apertures. We demonstrate for the first time that these metrics may be grouped together according to matching behavior following high-pass filtering.
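
    An illustrative autofocus loop under stated assumptions (synthetic field, invented wavelength, pixel pitch, and distance range; the Tamura coefficient shown is one metric of the kind investigated, not necessarily the authors' best performer): propagate the recorded complex field to candidate distances with the angular spectrum method and score each intensity image.

    import numpy as np

    def angular_spectrum(field, wavelength, dx, z):
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, dx)
        fy = np.fft.fftfreq(ny, dx)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent terms dropped
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

    def tamura(intensity):
        return np.sqrt(intensity.std() / intensity.mean())

    field = np.exp(1j * np.random.rand(256, 256))       # stand-in hologram field
    distances = np.linspace(1e-3, 20e-3, 40)
    scores = [tamura(np.abs(angular_spectrum(field, 633e-9, 3.45e-6, z)) ** 2)
              for z in distances]
    print("best focus at z =", distances[int(np.argmax(scores))])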

  11. Perceptual quality prediction on authentically distorted images using a bag of features approach

    PubMed Central

    Ghadiyaram, Deepti; Bovik, Alan C.

    2017-01-01

    Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it achieves quality prediction power better than that of other leading models. PMID:28129417

  12. A Quantitative Three-Dimensional Image Analysis Tool for Maximal Acquisition of Spatial Heterogeneity Data.

    PubMed

    Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios

    2017-02-01

    Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.

  13. Introducing the depth transfer curve for 3D capture system characterization

    NASA Astrophysics Data System (ADS)

    Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas

    2011-03-01

    3D technology has recently made the transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding scene depth. Scene depth is simulated through the strongest depth cue of the brain, namely retinal disparity. This can be achieved by capturing images with horizontally separated cameras. Objects at different depths are projected with different horizontal displacements in the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task, since the intended and/or unintended side effects of 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the depth capture capabilities of 3D cameras.
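
    For orientation (standard parallel-rig stereo geometry, not the paper's measurement procedure): with camera baseline b and focal length f, a point at depth Z is captured with horizontal disparity

        d = \frac{f\,b}{Z}, \qquad |\delta Z| \approx \frac{Z^{2}}{f\,b}\,|\delta d|,

    so captured depth resolution degrades quadratically with distance. A depth transfer curve characterizes this scene-depth-to-captured-disparity mapping for a real camera end to end, by measurement rather than by assuming the ideal model.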

  14. Image stitching and image reconstruction of intestines captured using radial imaging capsule endoscope

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Wu, Yin-Yi; Dung, Lan-Rong; Wu, Hsien-Ming; Weng, Ping-Kuo; Huang, Ker-Jer; Chiu, Luan-Jiau

    2012-05-01

    This study investigates image processing using the radial imaging capsule endoscope (RICE) system. First, an experimental environment is established in which a simulated object with a shape similar to a cylinder allows a triaxial platform to push the RICE into the sample and capture radial images. Then, four algorithms (mean absolute error, mean square error, Pearson correlation coefficient, and deformation processing) are used to stitch the images together. The Pearson correlation coefficient method is the most effective algorithm because it yields the highest peak signal-to-noise ratio relative to the original image, exceeding 80.69. Furthermore, a living animal experiment is carried out. Finally, the Pearson correlation coefficient method and vector deformation processing are used to stitch the images captured in the living animal experiment. This method is very attractive because, unlike the other methods, in which two lenses are required to reconstruct the geometrical image, RICE uses only one lens and one mirror.

  15. Signature detection and matching for document image retrieval.

    PubMed

    Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2009-11-01

    As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from cluttered backgrounds is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.

  16. Improved wheal detection from skin prick test images

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan

    2014-03-01

    The skin prick test is a commonly used method for the diagnosis of allergic diseases (e.g., pollen allergy, food allergy) in allergy clinics. The results of this test are erythema and wheals provoked on the skin where the test is applied. The sensitivity of the patient to a specific allergen is determined by the physical size of the wheal, which can be estimated from images captured by digital cameras. Accurate wheal detection from these images is an important step for precise estimation of wheal size. In this paper, we propose a method for improved wheal detection in prick test images captured by digital cameras. Our method operates by first localizing the test region by detecting calibration marks drawn on the skin. The luminance variation across the localized region is eliminated by applying a color transformation from RGB to YCbCr and discarding the luminance channel. We enhance the contrast of the captured images for the purpose of wheal detection by performing principal component analysis on the blue-difference (Cb) and red-difference (Cr) color channels. Finally, we perform morphological operations on the contrast-enhanced image to detect the wheal on the image plane. Our experiments, performed on images acquired from 36 different patients, show the efficiency of the proposed method for wheal detection in skin prick test images captured in an uncontrolled environment.
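
    A condensed sketch of the described steps (the input file, kernel size, and thresholding choice are guesses, not the authors' parameters): drop luminance in YCbCr, contrast-enhance by projecting the two chroma channels onto their first principal component, then clean up the thresholded result with morphology.

    import cv2
    import numpy as np

    img = cv2.imread("prick_test.png")                  # hypothetical input image
    ycc = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    chroma = ycc[:, :, 1:].reshape(-1, 2).astype(np.float32)  # keep Cr, Cb only
    chroma -= chroma.mean(axis=0)
    _, _, vt = np.linalg.svd(chroma, full_matrices=False)     # PCA via SVD
    proj = chroma @ vt[0]                               # first principal component

    enhanced = proj.reshape(img.shape[:2])
    enhanced = ((enhanced - enhanced.min()) / np.ptp(enhanced) * 255).astype(np.uint8)

    _, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes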

  17. ROCView: prototype software for data collection in jackknife alternative free-response receiver operating characteristic analysis

    PubMed Central

    Thompson, J; Hogg, P; Thompson, S; Manning, D; Szczepura, K

    2012-01-01

    ROCView has been developed as an image display and response capture (IDRC) solution for image display and consistent recording of reader responses in relation to the free-response receiver operating characteristic paradigm. A web-based solution to IDRC for observer response studies allows observations to be completed from any location, assuming that display performance and viewing conditions are consistent with the study being completed. The simple functionality of the software allows observations to be completed without supervision. ROCView can display images from multiple modalities, in a randomised order if required. Following registration, observers are prompted to begin their image evaluation. All data are recorded via mouse clicks, one to localise (mark) and one to score confidence (rate), using either an ordinal or continuous rating scale. Up to nine “mark-rating” pairs can be made per image. Unmarked images are given a default score of zero. Upon completion of the study, both true-positive and false-positive reports can be downloaded and adapted for analysis. ROCView has the potential to be a useful tool in the assessment of modality performance differences for a range of imaging methods. PMID:22573294

  18. Not looking yourself: The cost of self-selecting photographs for identity verification.

    PubMed

    White, David; Burton, Amy L; Kemp, Richard I

    2016-05-01

    Photo-identification is based on the premise that photographs are representative of facial appearance. However, previous studies show that ratings of likeness vary across different photographs of the same face, suggesting that some images capture identity better than others. Two experiments were designed to examine the relationship between likeness judgments and face matching accuracy. In Experiment 1, we compared unfamiliar face matching accuracy for self-selected and other-selected high-likeness images. Surprisingly, images selected by previously unfamiliar viewers - after very limited exposure to a target face - were more accurately matched than self-selected images chosen by the target identity themselves. Results also revealed extremely low inter-rater agreement in ratings of likeness across participants, suggesting that perceptions of image resemblance are inherently unstable. In Experiment 2, we tested whether the cost of self-selection can be explained by this general disagreement in likeness judgments between individual raters. We find that averaging across rankings by multiple raters produces image selections that provide superior identification accuracy. However, the benefit of other-selection persisted for single raters, suggesting that inaccurate representations of self interfere with our ability to judge which images faithfully represent our current appearance.

  19. Comparison of Fundus Autofluorescence Between Fundus Camera and Confocal Scanning Laser Ophthalmoscope–based Systems

    PubMed Central

    Park, Sung Pyo; Siringo, Frank S.; Pensec, Noelle; Hong, In Hwan; Sparrow, Janet; Barile, Gaetano; Tsang, Stephen H.; Chang, Stanley

    2015-01-01

    BACKGROUND AND OBJECTIVE To compare fundus autofluorescence (FAF) imaging via fundus camera (FC) and confocal scanning laser ophthalmoscope (cSLO). PATIENTS AND METHODS FAF images were obtained with a digital FC (530 to 580 nm excitation) and a cSLO (488 nm excitation). Two authors evaluated the correlation of autofluorescence pattern, atrophic lesion size, and image quality between the two devices. RESULTS In 120 eyes, the autofluorescence pattern correlated in 86% of lesions. By lesion subtype, correlation rates were 100% in hemorrhage, 97% in geographic atrophy, 82% in flecks, 75% in drusen, 70% in exudates, 67% in pigment epithelial detachment, 50% in fibrous scars, and 33% in macular hole. The mean lesion size in geographic atrophy was 4.57 ± 2.3 mm2 via cSLO and 3.81 ± 1.94 mm2 via FC (P < .0001). Image quality favored cSLO in 71 eyes. CONCLUSION FAF images were highly correlated between the FC and cSLO. Differences between the two devices were nevertheless apparent: multiple image capture and confocal optics yielded higher image contrast with the cSLO, although acquisition and exposure times were longer. PMID:24221461

  20. Retrospective respiration-gated whole-body photoacoustic computed tomography of mice

    NASA Astrophysics Data System (ADS)

    Xia, Jun; Chen, Wanyi; Maslov, Konstantin; Anastasio, Mark A.; Wang, Lihong V.

    2014-01-01

    Photoacoustic tomography (PAT) is an emerging technique with great potential for preclinical whole-body imaging. To date, most whole-body PAT systems require multiple laser shots to generate one cross-sectional image, yielding a frame rate of <1 Hz. Because a mouse breathes at up to 3 Hz, without proper gating mechanisms, acquired images are susceptible to motion artifacts. Here, we introduce, for the first time to our knowledge, retrospective respiratory gating for whole-body photoacoustic computed tomography. This new method involves simultaneous capture of the animal's respiratory waveform during photoacoustic data acquisition. The recorded photoacoustic signals are sorted and clustered according to the respiratory phase, and an image of the animal at each respiratory phase is subsequently reconstructed from the corresponding cluster. The new method was tested in a ring-shaped confocal photoacoustic computed tomography system with a hardware-limited frame rate of 0.625 Hz. After respiratory gating, we observed sharper vascular and anatomical images at different positions of the animal body. The entire breathing cycle can also be visualized at 20 frames/cycle.
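
    A schematic version of the gating step (all shapes and rates are stand-ins; the per-phase averaging below is a placeholder for the actual tomographic reconstruction): each acquisition is assigned a respiratory phase from the recorded waveform and binned, and one image is then formed per phase bin.

    import numpy as np
    from scipy.signal import hilbert

    n_frames, n_bins = 500, 20
    t = np.arange(n_frames) / 0.625                # 0.625 Hz frame rate (s)
    resp = np.sin(2 * np.pi * 2.3 * t)             # stand-in respiratory waveform
    phase = (np.angle(hilbert(resp)) + np.pi) / (2 * np.pi)   # phase in [0, 1)
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)

    frames = np.random.rand(n_frames, 128, 128)    # stand-in per-shot images
    gated = [frames[bins == b].mean(axis=0)        # one image per phase bin
             for b in range(n_bins) if np.any(bins == b)]
    print(len(gated), "respiratory-phase images")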

  1. Fluorescence laminar optical tomography for brain imaging: system implementation and performance evaluation.

    PubMed

    Azimipour, Mehdi; Sheikhzadeh, Mahya; Baumgartner, Ryan; Cullen, Patrick K; Helmstetter, Fred J; Chang, Woo-Jin; Pashaie, Ramin

    2017-01-01

    We present our effort in implementing a fluorescence laminar optical tomography scanner which is specifically designed for noninvasive three-dimensional imaging of fluorescence proteins in the brains of small rodents. A laser beam, after passing through a cylindrical lens, scans the brain tissue from the surface while the emission signal is captured by the epi-fluorescence optics and is recorded using an electron multiplication CCD sensor. Image reconstruction algorithms are developed based on Monte Carlo simulation to model light–tissue interaction and generate the sensitivity matrices. To solve the inverse problem, we used the iterative simultaneous algebraic reconstruction technique. The performance of the developed system was evaluated by imaging microfabricated silicon microchannels embedded inside a substrate with optical properties close to the brain as a tissue phantom and ultimately by scanning brain tissue in vivo. Details of the hardware design and reconstruction algorithms are discussed and several experimental results are presented. The developed system can specifically facilitate neuroscience experiments where fluorescence imaging and molecular genetic methods are used to study the dynamics of the brain circuitries.
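
    A bare-bones SART iteration (a sketch, not the authors' implementation; the sensitivity matrix and inclusion below are synthetic): given a sensitivity matrix A from light-transport modelling and measurements b, the estimate is updated with row- and column-normalised corrections, with non-negativity enforced between sweeps.

    import numpy as np

    def sart(A, b, iterations=50, relax=0.5):
        row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1
        col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1
        x = np.zeros(A.shape[1])
        for _ in range(iterations):
            residual = (b - A @ x) / row_sums      # row-normalised mismatch
            x += relax * (A.T @ residual) / col_sums
            x = np.maximum(x, 0)                   # fluorescence is non-negative
        return x

    A = np.abs(np.random.rand(200, 400))           # stand-in sensitivity matrix
    truth = np.zeros(400); truth[180:220] = 1.0    # hypothetical inclusion
    x_hat = sart(A, A @ truth)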

  2. Multichannel optical mapping: investigation of depth information

    NASA Astrophysics Data System (ADS)

    Sase, Ichiro; Eda, Hideo; Seiyama, Akitoshi; Tanabe, Hiroki C.; Takatsuki, Akira; Yanagida, Toshio

    2001-06-01

    Near infrared (NIR) light has become a powerful tool for non-invasive imaging of human brain activity. Many systems have been developed to capture the changes in regional brain blood flow and hemoglobin oxygenation which occur in the human cortex in response to neural activity. We have developed a multi-channel reflectance imaging system which can be used as a 'mapping device' and also as a 'multi-channel spectrophotometer'. In the present study, we visualized changes in the hemodynamics of the human occipital region in multiple ways. (1) Stimulating the left and right primary visual cortex independently, by showing sector-shaped checkerboards sequentially over the contralateral visual field, resulted in corresponding changes in the hemodynamics observed by 'mapping' measurement. (2) Simultaneous measurement of functional MRI and NIR (changes in total hemoglobin) during visual stimulation showed good spatial and temporal correlation between the two. (3) Placing multiple channels densely over the occipital region demonstrated spatial patterns more precisely, and depth information was also acquired by placing each pair of illumination and detection fibers at various distances. These results indicate that the optical method can provide data for 3D analysis of human brain function.

  3. Study on the recent severe thunderstorms in northern India

    NASA Astrophysics Data System (ADS)

    Vishwanathan, Gokul; Narayanan, Sunanda; Mrudula, G.

    2016-05-01

    A thunderstorm, resulting from vigorous convective activity, is one of the most spectacular weather phenomena in the atmosphere and is associated with thunder, squall lines, and lightning. On 13 April 2010, a severe storm struck parts of Bangladesh and eastern India; it lasted about 90 minutes, with the most intense portion spanning 30-40 minutes. The severe thunderstorm of 13 April 2010 spawned a large tornado, which lasted about 20 minutes and was the first tornado recorded in Bihar's history. In 2015, Bihar experienced a similar storm on 21 April, during which multiple microbursts were observed. Various meteorological parameters have been analyzed to study the factors affecting the development of these thunderstorms. Satellite images from KALPANA and Meteosat have been analyzed to capture the temporal and spatial evolution of the storms. The satellite images show the development of a convective cloud system in the early afternoon hours that developed further into the severe storms by late evening. Further analysis using the K-index, lifted index, CAPE, etc. also shows the development of multiple cells of convection. Additional analysis of these storms is presented in the paper.

  4. Image charge effects on electron capture by dust grains in dusty plasmas.

    PubMed

    Jung, Y D; Tawara, H

    2001-07-01

    Electron-capture processes by negatively charged dust grains from hydrogenic ions in dusty plasmas are investigated in accordance with the classical Bohr-Lindhard model. The attractive interaction between the electron in a hydrogenic ion and its own image charge inside the dust grain is included to obtain the total interaction energy between the electron and the dust grain. The electron-capture radius is determined by the total interaction energy and the kinetic energy of the released electron in the frame of the projectile dust grain. The classical straight-line trajectory approximation is applied to the motion of the ion in order to visualize the electron-capture cross section as a function of the impact parameter, kinetic energy of the projectile ion, and dust charge. It is found that the image charge inside the dust grain plays a significant role in the electron-capture process near the surface of the dust grain. The electron-capture cross section is found to be quite sensitive to the collision energy and dust charge.

  5. FRAP Analysis: Accounting for Bleaching during Image Capture

    PubMed Central

    Wu, Jun; Shekhar, Nandini; Lele, Pushkar P.; Lele, Tanmay P.

    2012-01-01

    The analysis of Fluorescence Recovery After Photobleaching (FRAP) experiments involves mathematical modeling of the fluorescence recovery process. An important feature of FRAP experiments that tends to be ignored in the modeling is that there can be a significant loss of fluorescence due to bleaching during image capture. In this paper, we explicitly include the effects of bleaching during image capture in the model for the recovery process, instead of correcting for the effects of bleaching using reference measurements. Using experimental examples, we demonstrate the usefulness of such an approach in FRAP analysis. PMID:22912750
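
    One plausible way to write such a model (a sketch; the authors' exact functional form is not reproduced here): let F_0 be the post-bleach intensity, F_inf the recovery plateau, k the recovery rate, and beta an effective decay rate lumping the bleaching incurred by repeated image capture into a continuous exponential. Then

        F(t) = \bigl[ F_{\infty} - (F_{\infty} - F_{0})\, e^{-k t} \bigr]\; e^{-\beta t}.

    Fitting k and beta jointly to the raw recovery curve would replace the usual step of dividing by a reference region to correct for acquisition bleaching.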

  6. A methodology for evaluating detection performance of ultrasonic array imaging algorithms for coarse-grained materials.

    PubMed

    Van Pamel, Anton; Brett, Colin R; Lowe, Michael J S

    2014-12-01

    Improving the ultrasound inspection capability for coarse-grained metals remains of longstanding interest and is expected to become increasingly important for next-generation electricity power plants. Conventional ultrasonic A-, B-, and C-scans have been found to suffer from strong background noise caused by grain scattering, which can severely limit the detection of defects. However, in recent years, array probes and full matrix capture (FMC) imaging algorithms have unlocked exciting possibilities for improvements. To improve and compare these algorithms, we must rely on robust methodologies to quantify their performance. This article proposes such a methodology to evaluate the detection performance of imaging algorithms. For illustration, the methodology is applied to some example data using three FMC imaging algorithms: the total focusing method (TFM), phase-coherent imaging (PCI), and decomposition of the time-reversal operator with multiple scattering filter (DORT MSF). However, it is important to note that this is solely to illustrate the methodology; this article does not attempt the broader investigation of different cases that would be needed to compare the performance of these algorithms in general. The methodology considers the statistics of detection, presenting the detection performance as probability of detection (POD) and probability of false alarm (PFA). A test sample of coarse-grained nickel super alloy, manufactured to represent materials used for future power plant components and containing some simple artificial defects, is used to illustrate the method on the candidate algorithms. The data are captured in pulse-echo mode using 64-element array probes at center frequencies of 1 and 5 MHz. In this particular case, all three algorithms perform very similarly in their flaw detection capabilities.
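
    A compact TFM sketch over FMC data (array geometry, sampling rate, and sound speed are invented for illustration; envelope detection and apodization are omitted): every pixel sums the FMC traces at the transmit-to-pixel-to-receive time of flight.

    import numpy as np

    def tfm(fmc, elem_x, fs, c, xs, zs):
        """fmc: (n_tx, n_rx, n_t) FMC traces; elem_x: element x-positions (m)."""
        n_t = fmc.shape[2]
        image = np.zeros((len(zs), len(xs)))
        for iz, z in enumerate(zs):
            for ix, x in enumerate(xs):
                tof = np.sqrt((elem_x - x) ** 2 + z ** 2) / c   # element->pixel
                idx = np.rint((tof[:, None] + tof[None, :]) * fs).astype(int)
                valid = idx < n_t
                tx, rx = np.nonzero(valid)
                image[iz, ix] = abs(fmc[tx, rx, idx[valid]].sum())
        return image

    fmc = np.random.randn(64, 64, 2000)            # stand-in FMC data set
    elem_x = (np.arange(64) - 31.5) * 0.6e-3       # hypothetical 0.6 mm pitch
    img = tfm(fmc, elem_x, fs=50e6, c=5900.0,
              xs=np.linspace(-0.02, 0.02, 40), zs=np.linspace(0.005, 0.05, 40))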

  7. Light field rendering with omni-directional camera

    NASA Astrophysics Data System (ADS)

    Todoroki, Hiroshi; Saito, Hideo

    2003-06-01

    This paper presents an approach to capturing the visual appearance of a real environment, such as the interior of a room. We propose a method for generating arbitrary viewpoint images by building a light field with an omni-directional camera, which can capture its wide circumference. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror in its upper part, so that it captures the luminosity of the environment over 360 degrees of circumference in a single image. We apply the light field method, one technique of image-based rendering (IBR), to generate arbitrary viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera for constructing the light field, so that we can collect many view-direction images in the light field. Thus our method allows the user to explore a wide scene, achieving a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our lab's interior with an omni-directional camera and successfully generated arbitrary viewpoint images for a virtual tour of the environment.

  8. High Density Aerial Image Matching: State-of-the-Art and Future Prospects

    NASA Astrophysics Data System (ADS)

    Haala, N.; Cavegn, S.

    2016-06-01

    Ongoing innovations in matching algorithms are continuously improving the quality of geometric surface representations generated automatically from aerial images. This development motivated the launch of the joint ISPRS/EuroSDR project "Benchmark on High Density Aerial Image Matching", which aims at the evaluation of photogrammetric 3D data capture in view of the current developments in dense multi-view stereo-image matching. Originally, the test aimed at image-based DSM computation from conventional aerial image flights for different land-use and image-block configurations. The second phase then put an additional focus on high-quality, high-resolution 3D geometric data capture in complex urban areas. This includes both the extension of the test scenario to oblique aerial image flights and the generation of filtered point clouds as an additional output of the respective multi-view reconstruction. The paper uses the preliminary outcomes of the benchmark to demonstrate the state of the art in airborne image matching, with a special focus on high-quality geometric data capture in urban scenarios.

  9. Mapping Land and Water Surface Topography with instantaneous Structure from Motion

    NASA Astrophysics Data System (ADS)

    Dietrich, J.; Fonstad, M. A.

    2012-12-01

    Structure from Motion (SfM) has given researchers an invaluable tool for low-cost, high-resolution 3D mapping of the environment. SfM 3D surface models are commonly constructed from many digital photographs collected with one digital camera (either handheld or attached to an aerial platform). This method works for stationary or very slowly moving objects; objects in motion, however, are impossible to capture with one-camera SfM. With multiple simultaneously triggered cameras, it becomes possible to capture multiple photographs at the same time, which allows for the construction of 3D surface models of moving objects and surfaces: an instantaneous SfM (ISfM) surface model. In river science, ISfM provides a low-cost solution for measuring a number of river variables that researchers normally estimate or are unable to collect over large areas. With ISfM, sufficient coverage of the banks, and RTK-GPS control, it is possible to create a digital surface model of land and water surface elevations across an entire channel, and water surface slopes at any point within the surface model. By setting the cameras to collect time-lapse photography of a scene, it is possible to create multiple surfaces that can be compared using traditional digital surface model differencing. These water surface models could be combined with high-resolution bathymetry to create fully 3D cross sections that could be useful in hydrologic modeling. Multiple temporal image sets could also be used in 2D or 3D particle image velocimetry to create 3D surface velocity maps of a channel. Other applications in earth science include anything where researchers could benefit from temporal surface modeling, such as mass movements, lava flows, and dam removal monitoring. The camera system used for this research consisted of ten pocket digital cameras (Canon A3300) equipped with wireless triggers. The triggers were constructed with an Arduino-style microcontroller and off-the-shelf handheld radios with a maximum range of several kilometers. The cameras are controlled from another microcontroller/radio combination that allows for manual or automatic triggering. The total cost of the camera system was approximately 1500 USD.

  10. NASA CloudSat Captures Hurricane Daniel Transformation

    NASA Image and Video Library

    2006-07-25

    Hurricane Daniel intensified between July 18 and July 23, 2006. NASA's new CloudSat satellite was able to capture and confirm this transformation in its side-view images of Hurricane Daniel, as seen in this series of images.

  11. 3D reconstruction based on light field images

    NASA Astrophysics Data System (ADS)

    Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei

    2018-04-01

    This paper proposes a method of reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. The work first extracts the sub-aperture images from the light field images and uses the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. The structure from motion (SfM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a sparse 3D point cloud. The method shows that 3D reconstruction can be achieved with only two light field captures, rather than the dozen or more captures required by traditional cameras. This effectively addresses the time-consuming and laborious nature of 3D reconstruction based on traditional digital cameras, achieving a more rapid, convenient, and accurate reconstruction.
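
    A two-view skeleton of such a pipeline (hedged: the intrinsics, file names, and thresholds are placeholders, and cv2.SIFT_create is assumed available in the installed OpenCV): SIFT matching between sub-aperture images, pose recovery from the essential matrix, then triangulation into a sparse point cloud. A full SfM run generalises this across all views.

    import cv2
    import numpy as np

    K = np.array([[1400, 0, 960], [0, 1400, 540], [0, 0, 1]], np.float64)
    im1 = cv2.imread("subaperture_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
    im2 = cv2.imread("subaperture_b.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(im1, None)
    k2, d2 = sift.detectAndCompute(im2, None)
    good = [m for m, n in cv2.BFMatcher().knnMatch(d1, d2, k=2)
            if m.distance < 0.7 * n.distance]            # Lowe ratio test

    p1 = np.float32([k1[m.queryIdx].pt for m in good])
    p2 = np.float32([k2[m.trainIdx].pt for m in good])
    E, mask = cv2.findEssentialMat(p1, p2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4 = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
    cloud = (pts4[:3] / pts4[3]).T                       # sparse 3D points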

  12. Compact touchless fingerprint reader based on digital variable-focus liquid lens

    NASA Astrophysics Data System (ADS)

    Tsai, C. W.; Wang, P. J.; Yeh, J. A.

    2014-09-01

    Identity verification in the cyberworld has always been troublesome when critical information and financial transactions must be processed. Biometric identification is the most effective measure to circumvent identity issues on mobile devices. Due to their bulky and pricey optical designs, conventional optical fingerprint readers have been discarded for mobile applications. In this paper, a digital variable-focus liquid lens was adopted to capture a floating finger via fast focus-plane scanning. Simply placing a finger in front of a camera fulfills the fingerprint identification process. The prototyped fingerprint reader scans multiple focal planes from 30 mm to 15 mm in 0.2 s. From the multiple images at various focuses, one image is chosen for extraction of the fingerprint minutiae used for identity verification. In the optical design, a digital liquid lens atop a webcam with a fixed-focus lens module fast-scans a floating finger at preset focus planes. The distance, rolling angle, and pitching angle of the finger are stored as crucial parameters for the fingerprint minutiae matching process. This innovative compact touchless fingerprint reader could be packed into a minute volume of 9.8 × 9.8 × 5 mm once the optical design and multiple focus-plane scan function are optimized.
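
    A hedged stand-in for the frame-selection step (file names and stack size are hypothetical; the paper does not state its focus measure): among the images captured at the preset focal planes, pick the one with the sharpest fingerprint ridges, scored here by the common variance-of-Laplacian focus measure.

    import cv2

    def sharpness(gray):
        """Variance of the Laplacian: higher means sharper ridge detail."""
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    stack = [cv2.imread(f"scan_{i}.png", cv2.IMREAD_GRAYSCALE) for i in range(8)]
    best = max(stack, key=sharpness)    # frame passed on to minutiae extraction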

  13. Spatially selective photonic crystal enhanced fluorescence and application to background reduction for biomolecule detection assays

    PubMed Central

    Chaudhery, Vikram; Huang, Cheng-Sheng; Pokhriyal, Anusha; Polans, James; Cunningham, Brian T.

    2011-01-01

    By combining photonic crystal label-free biosensor imaging with photonic crystal enhanced fluorescence, it is possible to selectively enhance the fluorescence emission from regions of the PC surface based upon the density of immobilized capture molecules. A label-free image of the capture molecules enables determination of optimal coupling conditions of the laser used for fluorescence imaging of the photonic crystal surface on a pixel-by-pixel basis, allowing maximization of fluorescence enhancement factor from regions incorporating a biomolecule capture spot and minimization of background autofluorescence from areas between capture spots. This capability significantly improves the contrast of enhanced fluorescent images, and when applied to an antibody protein microarray, provides a substantial advantage over conventional fluorescence microscopy. Using the new approach, we demonstrate detection limits as low as 0.97 pg/ml for a representative protein biomarker in buffer. PMID:22109210

  14. Spatially selective photonic crystal enhanced fluorescence and application to background reduction for biomolecule detection assays.

    PubMed

    Chaudhery, Vikram; Huang, Cheng-Sheng; Pokhriyal, Anusha; Polans, James; Cunningham, Brian T

    2011-11-07

    By combining photonic crystal label-free biosensor imaging with photonic crystal enhanced fluorescence, it is possible to selectively enhance the fluorescence emission from regions of the PC surface based upon the density of immobilized capture molecules. A label-free image of the capture molecules enables determination of optimal coupling conditions of the laser used for fluorescence imaging of the photonic crystal surface on a pixel-by-pixel basis, allowing maximization of fluorescence enhancement factor from regions incorporating a biomolecule capture spot and minimization of background autofluorescence from areas between capture spots. This capability significantly improves the contrast of enhanced fluorescent images, and when applied to an antibody protein microarray, provides a substantial advantage over conventional fluorescence microscopy. Using the new approach, we demonstrate detection limits as low as 0.97 pg/ml for a representative protein biomarker in buffer.

  15. Whole surface image reconstruction for machine vision inspection of fruit

    NASA Astrophysics Data System (ADS)

    Reese, D. Y.; Lefcourt, A. M.; Kim, M. S.; Lo, Y. M.

    2007-09-01

    Automated imaging systems offer the potential to inspect the quality and safety of fruits and vegetables consumed by the public. Current automated inspection systems allow fruit such as apples to be sorted for quality issues including color and size by looking at a portion of the surface of each fruit. However, to inspect for defects and contamination, the whole surface of each fruit must be imaged. The goal of this project was to develop an effective and economical method for whole surface imaging of apples using mirrors and a single camera. Challenges include mapping the concave stem and calyx regions. To allow the entire surface of an apple to be imaged, apples were suspended or rolled above the mirrors using two parallel music wires. A camera above the apples captured 90 images per sec (640 by 480 pixels). Single or multiple flat or concave mirrors were mounted around the apple in various configurations to maximize surface imaging. Data suggest that the use of two flat mirrors provides inadequate coverage of a fruit but using two parabolic concave mirrors allows the entire surface to be mapped. Parabolic concave mirrors magnify images, which results in greater pixel resolution and reduced distortion. This result suggests that a single camera with two parabolic concave mirrors can be a cost-effective method for whole surface imaging.

  16. Lack of magnetic resonance imaging lesion activity as a treatment target in multiple sclerosis: An evaluation using electronically collected outcomes.

    PubMed

    Conway, Devon S; Thompson, Nicolas R; Cohen, Jeffrey A

    2016-09-01

    The appropriate treatment target in multiple sclerosis (MS) is unclear. Lack of magnetic resonance imaging (MRI) lesion activity, a component of the no-evidence-of-disease-activity concept, has been proposed as a treatment target in MS. We used our MS database to investigate whether aggressively pursuing MRI stability, by changing disease modifying therapy (DMT) when MRI activity is observed, leads to better clinical and imaging outcomes. The Knowledge Program (KP) is a database linked to our electronic medical record that allows capture of patient- and clinician-reported outcomes. Through KP query and chart review, we identified all relapsing-remitting MS patients seen between 1 January 2008 and 31 December 2014 with active MRIs despite DMT. Propensity modeling based on demographic and disease characteristics was used to match DMT switchers to non-switchers. KP and MRI outcomes were compared 18 months after the active MRI using mixed-effects linear regression models. We identified 417 patients who met the criteria for our analysis. After propensity matching, 78 switchers and 91 non-switchers were analyzed. There was no difference in clinical or radiologic outcomes between these groups at 18 months. We did not find a short-term benefit of changing DMT to pursue MRI stability.

  17. VLTI Imaging of a High-Mass Protobinary System: Unveiling the Dynamical Processes in High-Mass Star Formation

    NASA Astrophysics Data System (ADS)

    Kraus, S.; Kluska, J.; Kreplin, A.; Bate, M.; Harries, T.; Hofmann, K.-H.; Hone, E.; Monnier, J.; Weigelt, G.; Anugu, N.; de Wit, W.-J.; Wittkowski, M.

    2017-12-01

    High-mass stars exhibit a significantly higher multiplicity frequency than low-mass stars, likely reflecting differences in how they formed. Theory suggests that high-mass binaries may form by the fragmentation of self-gravitating discs or by alternative scenarios such as disc-assisted capture. Near-infrared interferometric observations reveal the high-mass young stellar object IRAS 17216-3801 to be a close high-mass protobinary with a separation of 0.058 arcseconds (~170 au). This is the closest high-mass protobinary system imaged to date. We also resolve near-infrared excess emission around the individual stars, which is associated with hot dust in circumstellar discs. These discs are strongly misaligned with respect to the binary separation vector, indicating that tidal forces have not yet had time to realign them. We measure a higher accretion rate towards the circumsecondary disc, confirming a hydrodynamic effect whereby the secondary star disrupts the primary star's accretion stream and effectively limits the mass that the primary star can accrete. NACO L'-band imaging may also have resolved the circumbinary disc that feeds accretion onto the circumstellar discs. This discovery demonstrates the unique capabilities of the VLTI, creating exciting new opportunities to study the dynamical processes that govern the architecture of close multiple systems.

  18. Bessel Fourier Orientation Reconstruction (BFOR): An Analytical Diffusion Propagator Reconstruction for Hybrid Diffusion Imaging and Computation of q-Space Indices

    PubMed Central

    Hosseinbor, A. Pasha; Chung, Moo K.; Wu, Yu-Chien; Alexander, Andrew L.

    2012-01-01

    The ensemble average propagator (EAP) describes the 3D average diffusion process of water molecules, capturing both its radial and angular contents. The EAP can thus provide richer information about complex tissue microstructure properties than the orientation distribution function (ODF), an angular feature of the EAP. Recently, several analytical EAP reconstruction schemes for multiple q-shell acquisitions have been proposed, such as diffusion propagator imaging (DPI) and spherical polar Fourier imaging (SPFI). In this study, a new analytical EAP reconstruction method is proposed, called Bessel Fourier orientation reconstruction (BFOR), whose solution is based on heat equation estimation of the diffusion signal for each shell acquisition, and is validated on both synthetic and real datasets. A significant portion of the paper is dedicated to comparing BFOR, SPFI, and DPI using hybrid, non-Cartesian sampling for multiple b-value acquisitions. Ways to mitigate the effects of Gibbs ringing on EAP reconstruction are also explored. In addition to analytical EAP reconstruction, the aforementioned modeling bases can be used to obtain rotationally invariant q-space indices of potential clinical value, an avenue which has not yet been thoroughly explored. Three such measures are computed: zero-displacement probability (Po), mean squared displacement (MSD), and generalized fractional anisotropy (GFA). PMID:22963853
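
    For orientation, the standard definitions underlying these quantities (stated here as background, not as the paper's derivation): the EAP P(R) is the 3D Fourier transform of the normalized q-space signal E(q), and two of the indices mentioned follow directly from it,

        P(\mathbf{R}) = \int_{\mathbb{R}^{3}} E(\mathbf{q})\, e^{-2\pi i\,\mathbf{q}\cdot\mathbf{R}}\, d\mathbf{q},
        \qquad P_{o} = P(\mathbf{0}),
        \qquad \mathrm{MSD} = \int_{\mathbb{R}^{3}} P(\mathbf{R})\, \|\mathbf{R}\|^{2}\, d\mathbf{R}.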

  19. Wave analysis of a plenoptic system and its applications

    NASA Astrophysics Data System (ADS)

    Shroff, Sapna A.; Berkner, Kathrin

    2013-03-01

    Traditional imaging systems directly image a 2D object plane on to the sensor. Plenoptic imaging systems contain a lenslet array at the conventional image plane and a sensor at the back focal plane of the lenslet array. In this configuration the data captured at the sensor is not a direct image of the object. Each lenslet effectively images the aperture of the main imaging lens at the sensor. Therefore the sensor data retains angular light-field information which can be used for a posteriori digital computation of multi-angle images and axially refocused images. If a filter array, containing spectral filters or neutral density or polarization filters, is placed at the pupil aperture of the main imaging lens, then each lenslet images the filters on to the sensor. This enables the digital separation of multiple filter modalities giving single snapshot, multi-modal images. Due to the diversity of potential applications of plenoptic systems, their investigation is increasing. As the application space moves towards microscopes and other complex systems, and as pixel sizes become smaller, the consideration of diffraction effects in these systems becomes increasingly important. We discuss a plenoptic system and its wave propagation analysis for both coherent and incoherent imaging. We simulate a system response using our analysis and discuss various applications of the system response pertaining to plenoptic system design, implementation and calibration.

  20. A Study of Light Level Effect on the Accuracy of Image Processing-based Tomato Grading

    NASA Astrophysics Data System (ADS)

    Prijatna, D.; Muhaemin, M.; Wulandari, R. P.; Herwanto, T.; Saukat, M.; Sugandi, W. K.

    2018-05-01

    Image processing methods have been used in non-destructive testing of agricultural products. Compared to manual methods, image processing may produce more objective and consistent results. The image capturing box installed in the currently used tomato grading machine (TEP-4) is equipped with four fluorescent lamps to illuminate the processed tomatoes. Since the performance of any lamp decreases once its service time exceeds its lifetime, it is predicted that this will affect tomato classification. The objective of this study was to determine the minimum light levels that affect classification accuracy. The study was conducted by varying the light level from minimum to maximum on tomatoes in the image capturing box and then investigating its effect on image characteristics. Research results showed that light intensity affects two variables important for classification, namely the area and color of the captured image. The image processing program was able to correctly determine the weight and classification of tomatoes when the light level was between 30 lx and 140 lx.

  1. Detail-enhanced multimodality medical image fusion based on gradient minimization smoothing filter and shearing filter.

    PubMed

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2018-02-13

    In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a proposed multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. To highlight the detail information in the fused image, the edge layer and the detail layer at each scale are combined with weights into a detail-enhanced layer. Since directional filters are effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A fusion rule based on visual saliency maps is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity-level measurement for fusing the directional coefficients. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift invariance, directional selectivity, and a detail-enhancing property, is efficient in preserving and enhancing the detail information of multimodality medical images.

  2. Multi-Contrast Imaging and Digital Refocusing on a Mobile Microscope with a Domed LED Array.

    PubMed

    Phillips, Zachary F; D'Ambrosio, Michael V; Tian, Lei; Rulison, Jared J; Patel, Hurshal S; Sadras, Nitin; Gande, Aditya V; Switz, Neil A; Fletcher, Daniel A; Waller, Laura

    2015-01-01

    We demonstrate the design and application of an add-on device for improving the diagnostic and research capabilities of CellScope--a low-cost, smartphone-based point-of-care microscope. We replace the single LED illumination of the original CellScope with a programmable domed LED array. By leveraging recent advances in computational illumination, this new device enables simultaneous multi-contrast imaging with brightfield, darkfield, and phase imaging modes. Further, we scan through illumination angles to capture lightfield datasets, which can be used to recover 3D intensity and phase images without any hardware changes. This digital refocusing procedure can be used for either 3D imaging or software-only focus correction, reducing the need for precise mechanical focusing during field experiments. All acquisition and processing is performed on the mobile phone and controlled through a smartphone application, making the computational microscope compact and portable. Using multiple samples and different objective magnifications, we demonstrate that the performance of our device is comparable to that of a commercial microscope. This unique device platform extends the field imaging capabilities of CellScope, opening up new clinical and research possibilities.
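
    The brightfield/darkfield split follows directly from whether an LED's illumination angle falls inside the objective's numerical aperture. A minimal sketch of that standard computational-illumination recipe; array shapes and variable names are assumptions, not from the paper.

      import numpy as np

      def multi_contrast(stack, led_na, objective_na):
          """Combine per-LED images into brightfield/darkfield composites.

          stack        : (n_leds, H, W) array, one image per LED
          led_na       : (n_leds,) illumination NA of each LED
          objective_na : NA of the imaging objective
          """
          inside = led_na <= objective_na
          brightfield = stack[inside].sum(axis=0)   # LEDs within the NA
          darkfield = stack[~inside].sum(axis=0)    # LEDs outside the NA
          return brightfield, darkfield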

  3. Multi-Contrast Imaging and Digital Refocusing on a Mobile Microscope with a Domed LED Array

    PubMed Central

    Phillips, Zachary F.; D'Ambrosio, Michael V.; Tian, Lei; Rulison, Jared J.; Patel, Hurshal S.; Sadras, Nitin; Gande, Aditya V.; Switz, Neil A.; Fletcher, Daniel A.; Waller, Laura

    2015-01-01

    We demonstrate the design and application of an add-on device for improving the diagnostic and research capabilities of CellScope—a low-cost, smartphone-based point-of-care microscope. We replace the single LED illumination of the original CellScope with a programmable domed LED array. By leveraging recent advances in computational illumination, this new device enables simultaneous multi-contrast imaging with brightfield, darkfield, and phase imaging modes. Further, we scan through illumination angles to capture lightfield datasets, which can be used to recover 3D intensity and phase images without any hardware changes. This digital refocusing procedure can be used for either 3D imaging or software-only focus correction, reducing the need for precise mechanical focusing during field experiments. All acquisition and processing is performed on the mobile phone and controlled through a smartphone application, making the computational microscope compact and portable. Using multiple samples and different objective magnifications, we demonstrate that the performance of our device is comparable to that of a commercial microscope. This unique device platform extends the field imaging capabilities of CellScope, opening up new clinical and research possibilities. PMID:25969980

  4. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. Applying this method improves the quality of 3D display images and videos.

  5. Comparison of supervised machine learning algorithms for waterborne pathogen detection using mobile phone fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Ceylan Koydemir, Hatice; Feng, Steve; Liang, Kyle; Nadkarni, Rohan; Benien, Parul; Ozcan, Aydogan

    2017-06-01

    Giardia lamblia is a waterborne parasite that affects millions of people every year worldwide, causing a diarrheal illness known as giardiasis. Timely detection of the cysts of this parasite in drinking water is important to prevent the spread of the disease, especially in resource-limited settings. Here we provide extended experimental testing and evaluation of the performance and repeatability of a field-portable and cost-effective microscopy platform for automated detection and counting of Giardia cysts in water samples, including tap water, non-potable water, and pond water. This compact platform is based on our previous work and is composed of a smartphone-based fluorescence microscope, a disposable sample-processing cassette, and a custom-developed smartphone application. Our mobile phone microscope has a large field of view of 0.8 cm2 and weighs only 180 g, excluding the phone. The custom-developed smartphone application provides a user-friendly graphical interface, guiding the user to capture a fluorescence image of the sample filter membrane and analyze it automatically on our servers using an image processing algorithm and training data consisting of >30,000 images of cysts and >100,000 images of other captured fluorescent particles (e.g., dust). The total time from sample preparation to automated cyst counting is less than an hour for each 10 ml of water sample tested. We compared the sensitivity and specificity of our platform using multiple supervised classification models, including support vector machines and nearest neighbors, and demonstrated that a bootstrap aggregating (i.e., bagging) approach using the raw image file format provides the best performance for automated detection of Giardia cysts. We evaluated the performance of this machine-learning-enabled pathogen detection device with water samples taken from different sources (e.g., tap water, non-potable water, pond water) and achieved a limit of detection of 12 cysts per 10 ml, an average cyst capture efficiency of 79%, and an accuracy of 95%. Providing rapid detection and quantification of waterborne pathogens without the need for a microbiology expert, this field-portable imaging and sensing platform running on a smartphone could be very useful for water quality monitoring in resource-limited settings.
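
    The winning classifier family here, bootstrap aggregating over simple base learners, is straightforward to reproduce. A minimal sketch with scikit-learn on placeholder data; the feature vectors and labels below are synthetic stand-ins, not the paper's dataset.

      import numpy as np
      from sklearn.ensemble import BaggingClassifier
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import cross_val_score

      # X: one feature vector per candidate fluorescent spot; y: 1 = cyst.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 16))                  # synthetic features
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic labels

      # Bagging trains each tree on a bootstrap resample and votes.
      clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                              random_state=0)
      print(cross_val_score(clf, X, y, cv=5).mean())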

  6. Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Gul, M. Shahzeb Khan; Gunturk, Bahadir K.

    2018-05-01

    Light field imaging extends traditional photography by capturing both the spatial and the angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA-based light field cameras is low spatial resolution, due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach in which both the spatial and the angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
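
    A minimal sketch of the kind of network involved: an SRCNN-style three-layer CNN that refines one bicubically upsampled sub-aperture view (PyTorch). The 9-1-5 layer recipe is the classic SRCNN baseline, used here as a stand-in for the paper's exact architecture.

      import torch
      import torch.nn as nn

      class LFSpatialSR(nn.Module):
          """Refine one upsampled sub-aperture view of a light field."""
          def __init__(self):
              super().__init__()
              self.body = nn.Sequential(
                  nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(inplace=True),
                  nn.Conv2d(64, 32, 1),           nn.ReLU(inplace=True),
                  nn.Conv2d(32, 1, 5, padding=2),
              )
          def forward(self, x):
              return self.body(x)

      net = LFSpatialSR()
      view = torch.rand(1, 1, 64, 64)   # one upsampled sub-aperture view
      print(net(view).shape)            # torch.Size([1, 1, 64, 64])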

  7. Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks.

    PubMed

    Gul, M Shahzeb Khan; Gunturk, Bahadir K

    2018-05-01

    Light field imaging extends traditional photography by capturing both the spatial and the angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA-based light field cameras is low spatial resolution, due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach in which both the spatial and the angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.

  8. Optimizing Radiometric Fidelity to Enhance Aerial Image Change Detection Utilizing Digital Single Lens Reflex (DSLR) Cameras

    NASA Astrophysics Data System (ADS)

    Kerr, Andrew D.

    Determining optimal imaging settings and best practices for capturing aerial imagery with consumer-grade digital single lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, and low-cost image data sets. Radiometric optimization, image fidelity, and image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant, contemporary literature on the use of consumer-grade DSLR cameras for remote sensing and the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (exposure value), WB (white balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. The testing was conducted from a terrestrial rather than an airborne platform, owing to the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection and noise minimization. The makeup of the scene, the sensor, and the aerial platform influence the selection of aperture and shutter speed, which, along with other variables, allow estimation of the apparent image motion (AIM) blur in the resulting images. The importance of image edges in the intended application will in part dictate the lowest usable f-stop and allow the user to select a more optimal shutter speed and ISO. The single most important capture variable is exposure bias (EV), with a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occurring around -0.7 to -0.3 EV. The ideal sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
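
    The AIM estimate mentioned above reduces to simple geometry: the ground sample distance (GSD) follows from altitude, focal length, and pixel pitch, and the blur in pixels is platform speed times exposure time divided by GSD. A minimal sketch; all parameter values are illustrative assumptions, not the study's.

      def motion_blur_pixels(ground_speed_m_s, altitude_m, focal_len_mm,
                             pixel_pitch_um, shutter_s):
          """Apparent image motion (AIM) blur, in pixels, for nadir imaging.

          GSD = altitude * pixel_pitch / focal_length
          blur = speed * exposure_time / GSD
          """
          gsd_m = altitude_m * (pixel_pitch_um * 1e-6) / (focal_len_mm * 1e-3)
          return ground_speed_m_s * shutter_s / gsd_m

      # e.g. 30 m/s platform, 100 m AGL, 35 mm lens, 4.8 um pixels, 1/1000 s
      print(motion_blur_pixels(30, 100, 35, 4.8, 1/1000))  # ~2.2 px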

  9. LLNL-G3Dv3: Global P wave tomography model for improved regional and teleseismic travel time prediction: LLNL-G3DV3---GLOBAL P WAVE TOMOGRAPHY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmons, N. A.; Myers, S. C.; Johannesson, G.

    We develop a global-scale P wave velocity model (LLNL-G3Dv3) designed to accurately predict seismic travel times at regional and teleseismic distances simultaneously. The model provides a new image of Earth's interior, but the underlying practical purpose of the model is to provide enhanced seismic event location capabilities. The LLNL-G3Dv3 model is based on ∼2.8 million P and Pn arrivals that are re-processed using our global multiple-event locator called Bayesloc. We construct LLNL-G3Dv3 within a spherical-tessellation-based framework, allowing for explicit representation of undulating and discontinuous layers including the crust and transition zone layers. Using a multiscale inversion technique, regional trends as well as fine details are captured where the data allow. LLNL-G3Dv3 exhibits large-scale structures including cratons and superplumes as well as numerous complex details in the upper mantle, including within the transition zone. In particular, the model reveals new details of a vast network of subducted slabs trapped within the transition zone beneath much of Eurasia, including beneath the Tibetan Plateau. We demonstrate the impact of Bayesloc multiple-event location on the resulting tomographic images through comparison with images produced without the benefit of multiple-event constraints (single-event locations). We find that the multiple-event locations allow for better reconciliation of the large set of direct P phases recorded at 0–97° distance and yield a smoother and more continuous image relative to the single-event locations. Travel times predicted from a 3-D model are also found to be strongly influenced by the initial locations of the input data, even when an iterative inversion/relocation technique is employed.

  10. Structured Illumination Diffuse Optical Tomography for Mouse Brain Imaging

    NASA Astrophysics Data System (ADS)

    Reisman, Matthew David

    As advances in functional magnetic resonance imaging (fMRI) have transformed the study of human brain function, they have also widened the divide between standard research techniques used in humans and those used in mice, where high quality images are difficult to obtain using fMRI given the small volume of the mouse brain. Optical imaging techniques have been developed to study mouse brain networks, which are highly valuable given the ability to study brain disease treatments or development in a controlled environment. A planar imaging technique known as optical intrinsic signal (OIS) imaging has been a powerful tool for capturing functional brain hemodynamics in rodents. Recent wide field-of-view implementations of OIS have provided efficient maps of functional connectivity from spontaneous brain activity in mice. However, OIS requires scalp retraction and is limited to imaging a 2-dimensional view of superficial cortical tissues. Diffuse optical tomography (DOT) is a non-invasive, volumetric neuroimaging technique that has been valuable for bedside imaging of patients in the clinic, but previous DOT systems for rodent neuroimaging have been limited by either sparse spatial sampling or by slow speed. My research has been to develop diffuse optical tomography for whole brain mouse neuroimaging by expanding previous techniques to achieve high spatial sampling using multiple camera views for detection and high speed using structured illumination sources. I have shown the feasibility of this method to perform non-invasive functional neuroimaging in mice and its capabilities of imaging the entire volume of the brain. Additionally, the system has been built with a custom, flexible framework to accommodate the expansion to imaging multiple dynamic contrasts in the brain and populations that were previously difficult or impossible to image, such as infant mice and awake mice. I have contributed to preliminary feasibility studies of these more advanced techniques using OIS, which can now be carried out using the structured illumination diffuse optical tomography technique to perform longitudinal, non-invasive studies of the whole volume of the mouse brain.

  11. Registration of Large Motion Blurred Images

    DTIC Science & Technology

    2016-05-09

    [Abstract excerpt; truncated and duplicated in the source] Registration of large motion-blurred images must handle the dynamics of the capturing system, for example, a drone. CMOS sensors, used in recent times, when employed in these cameras produce two types of ... blur in the captured image when there is camera motion during exposure. However, contemporary CMOS sensors employ an electronic rolling shutter (RS ...

  12. Sparsity-based image monitoring of crystal size distribution during crystallization

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Huo, Yan; Ma, Cai Y.; Wang, Xue Z.

    2017-07-01

    To facilitate monitoring of crystal size distribution (CSD) during a crystallization process with an in-situ imaging system, a sparsity-based image analysis method is proposed for real-time implementation. To cope with image degradation arising from in-situ measurement, subject to particle motion, solution turbulence, and uneven illumination background in the crystallizer, a sparse representation of each real-time captured crystal image is developed using an in-situ image dictionary established in advance, such that the noise components in the captured image can be efficiently removed. Subsequently, the edges of a crystal shape in a captured image are determined from the salience information defined on the denoised crystal images. These edges are used to derive a blur kernel for reconstruction of a denoised image, and a non-blind deconvolution algorithm is given for the real-time reconstruction. Consequently, image segmentation can be easily performed for evaluation of CSD. The crystal image dictionary and blur kernels are updated in a timely manner according to the imaging conditions to improve restoration efficiency. An experimental study on the cooling crystallization of α-type L-glutamic acid (LGA) demonstrates the effectiveness and merit of the proposed method.
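
    The dictionary-based denoising step can be sketched with off-the-shelf tools: learn a patch dictionary from reference images, then re-express patches of a degraded image as sparse codes (scikit-learn). The patch size, atom count, and sparsity level below are illustrative choices, not the paper's.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.feature_extraction.image import (extract_patches_2d,
                                                    reconstruct_from_patches_2d)

      def denoise_with_dictionary(noisy, clean_refs, patch=(8, 8), n_atoms=64):
          """Sparse-representation denoising: learn a patch dictionary from
          reference in-situ images, then re-express noisy patches sparsely."""
          train = np.vstack([extract_patches_2d(r, patch, max_patches=500)
                             .reshape(-1, patch[0] * patch[1])
                             for r in clean_refs])
          dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                             transform_algorithm='omp',
                                             transform_n_nonzero_coefs=4)
          dico.fit(train)
          patches = extract_patches_2d(noisy, patch)
          codes = dico.transform(patches.reshape(-1, patch[0] * patch[1]))
          recon = codes @ dico.components_       # sparse re-synthesis
          return reconstruct_from_patches_2d(recon.reshape(-1, *patch),
                                             noisy.shape)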

  13. Coded aperture solution for improving the performance of traffic enforcement cameras

    NASA Astrophysics Data System (ADS)

    Masoudifar, Mina; Pourreza, Hamid Reza

    2016-10-01

    A coded aperture camera is proposed for automatic license plate recognition (ALPR) systems. It captures images using a noncircular aperture. The aperture pattern is designed for the rapid acquisition of high-resolution images while preserving high spatial frequencies of defocused regions. It is obtained by minimizing an objective function, which computes the expected value of perceptual deblurring error. The imaging conditions and camera sensor specifications are also considered in the proposed function. The designed aperture improves the depth of field (DoF) and subsequently ALPR performance. The captured images can be directly analyzed by the ALPR software up to a specific depth, which is 13 m in our case, though it is 11 m for the circular aperture. Moreover, since the deblurring results of images captured by our aperture yield fewer artifacts than those captured by the circular aperture, images can be first deblurred and then analyzed by the ALPR software. In this way, the DoF and recognition rate can be improved at the same time. Our case study shows that the proposed camera can improve the DoF up to 17 m while it is limited to 11 m in the conventional aperture.
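
    The benefit of a noncircular pattern is that it keeps the aperture's optical transfer function away from deep zeros, so a standard frequency-domain inversion stays well conditioned. A minimal Wiener-deconvolution sketch; the SNR constant is an illustrative assumption, and this generic step stands in for the paper's perceptual deblurring model.

      import numpy as np

      def wiener_deconvolve(blurred, kernel, snr=100.0):
          """Frequency-domain Wiener deconvolution of a defocused image.

          kernel is the (aperture-shaped) PSF; it is zero-padded to the
          image size by the FFT call below.
          """
          K = np.fft.fft2(kernel, s=blurred.shape)
          B = np.fft.fft2(blurred)
          wiener = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
          return np.real(np.fft.ifft2(B * wiener))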

  14. A Tool for Interactive Data Visualization: Application to Over 10,000 Brain Imaging and Phantom MRI Data Sets.

    PubMed

    Panta, Sandeep R; Wang, Runtang; Fries, Jill; Kalyanam, Ravi; Speer, Nicole; Banich, Marie; Kiehl, Kent; King, Margaret; Milham, Michael; Wager, Tor D; Turner, Jessica A; Plis, Sergey M; Calhoun, Vince D

    2016-01-01

    In this paper we propose a web-based approach for quick visualization of big data from brain magnetic resonance imaging (MRI) scans, using a combination of an automated image capture and processing system, nonlinear embedding, and interactive data visualization tools. We draw upon thousands of MRI scans captured via the COllaborative Imaging and Neuroinformatics Suite (COINS). We then interface the output of several analysis pipelines based on structural and functional data to a t-distributed stochastic neighbor embedding (t-SNE) algorithm, which reduces the number of dimensions for each scan in the input data set to two dimensions while preserving the local structure of the data sets. Finally, we interactively display the output of this approach via a web page based on the data-driven documents (D3) JavaScript library. Two distinct approaches were used to visualize the data. In the first approach, we computed multiple quality control (QC) values from pre-processed data, which were used as inputs to the t-SNE algorithm. This approach helps in assessing the quality of each data set relative to others. In the second case, computed variables of interest (e.g., brain volume or voxel values from segmented gray matter images) were used as inputs to the t-SNE algorithm. This approach helps in identifying interesting patterns in the data sets. We demonstrate these approaches using multiple examples from over 10,000 data sets, including (1) quality control measures calculated from phantom data over time, (2) quality control data from human functional MRI data across various studies, scanners, and sites, and (3) volumetric and density measures from human structural MRI data across various studies, scanners, and sites. Results from (1) and (2) show the potential of our approach to combine t-SNE data reduction with interactive color coding of variables of interest to quickly identify visually distinct clusters of data (i.e., data sets with poor QC, clustering of data by site). Results from (3) demonstrate interesting patterns of gray matter and volume and evaluate how they map onto variables including scanner, age, and gender. In sum, the proposed approach allows researchers to rapidly identify and extract meaningful information from big data sets. Such tools are becoming increasingly important as data sets grow larger.
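
    The dimensionality-reduction step is reproducible in a few lines. A minimal sketch with scikit-learn; the QC matrix below is synthetic, whereas in the paper each row would be a scan's QC or volumetric measures.

      import numpy as np
      from sklearn.manifold import TSNE

      # rows = scans, columns = QC metrics (e.g. SNR, motion, ghosting)
      qc = np.random.default_rng(0).normal(size=(500, 10))  # placeholder

      emb = TSNE(n_components=2, perplexity=30, init='pca',
                 random_state=0).fit_transform(qc)
      # emb[:, 0], emb[:, 1] can now be rendered by a D3 scatter plot,
      # colour-coded by site, scanner, or QC pass/fail.
      print(emb.shape)  # (500, 2)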

  15. An Example-Based Super-Resolution Algorithm for Selfie Images

    PubMed Central

    William, Jino Hans; Venkateswaran, N.; Narayanan, Srinath; Ramachandran, Sandeep

    2016-01-01

    A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, fine details are missed. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera, using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine detail and are used as exemplars to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior that learns the correspondence between LR-HR patch pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and the recovery process. The proposed algorithm is evaluated for efficiency and effectiveness, both qualitatively and quantitatively, against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, requiring less than 3 seconds to super-resolve an LR selfie, and effective, preserving sharp details without introducing counterfeit fine details. PMID:27064500
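
    The flavour of example-based SR can be conveyed by its simplest linear relative: a ridge-regularised operator W mapping LR patches to HR patches, learned from rear-camera exemplars. Note that this sketch vectorises patches (one per column), which is precisely what the paper's matrix-value regression avoids, so it is a simplified stand-in rather than the authors' method.

      import numpy as np

      def learn_operator(lr_patches, hr_patches, lam=1e-2):
          """Ridge-regularised linear operator W with HR ~= W @ LR.

          lr_patches, hr_patches : (d_lr, n) and (d_hr, n) matrices,
          one vectorised training patch per column.
          """
          L, H = lr_patches, hr_patches
          return H @ L.T @ np.linalg.inv(L @ L.T + lam * np.eye(L.shape[0]))

      def super_resolve(W, lr_patch_cols):
          # Apply the learned image-pair prior to new selfie patches.
          return W @ lr_patch_cols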

  16. Gender classification under extended operating conditions

    NASA Astrophysics Data System (ADS)

    Rude, Howard N.; Rizki, Mateen

    2014-06-01

    Gender classification is a critical component of a robust image security system. Many techniques exist to perform gender classification using facial features. In contrast, this paper explores gender classification using body features extracted from clothed subjects. Several of the most effective types of features for gender classification identified in the literature were implemented and applied to the newly developed Seasonal Weather And Gender (SWAG) dataset. SWAG contains video clips of approximately 2000 samples of human subjects captured over a period of several months. The subjects wear casual business attire and outer garments appropriate for the specific weather conditions observed in the Midwest. Results from a series of experiments are presented that compare the classification accuracy of systems incorporating various types and combinations of features, applied to multiple looks at subjects at different image resolutions, to determine a baseline performance for gender classification.

  17. Imaging_Earth_With_MUSES

    NASA Image and Video Library

    2017-07-11

    Commercial businesses and scientific researchers have a new capability to capture digital imagery of Earth, thanks to MUSES: the Multiple User System for Earth Sensing facility. This platform on the outside of the International Space Station is capable of holding four different payloads, ranging from high-resolution digital cameras to hyperspectral imagers, which will support Earth science observations in agricultural awareness, air quality, disaster response, fire detection, and many other research topics. MUSES program manager Mike Soutullo explains the system and its unique features including the ability to change and upgrade payloads using the space station’s Canadarm2 and Special Purpose Dexterous Manipulator. For more information about MUSES, please visit: https://www.nasa.gov/mission_pages/station/research/news/MUSES For more on ISS science, https://www.nasa.gov/mission_pages/station/research/index.html or follow us on Twitter @ISS_research

  18. Fully automated laser ray tracing system to measure changes in the crystalline lens GRIN profile.

    PubMed

    Qiu, Chen; Maceo Heilman, Bianca; Kaipio, Jari; Donaldson, Paul; Vaghefi, Ehsan

    2017-11-01

    Measuring the lens gradient refractive index (GRIN) accurately and reliably has proven an extremely challenging technical problem. A fully automated laser ray tracing (LRT) system was built to address this issue. The LRT system captures images of multiple laser projections before and after traversing through an ex vivo lens. These LRT images, combined with accurate measurements of the lens geometry, are used to calculate the lens GRIN profile. Mathematically, this is an ill-conditioned problem; hence, it is essential to apply biologically relevant constraints to produce a feasible solution. The lens GRIN measurements were compared with previously published data. Our GRIN retrieval algorithm produces fast and accurate measurements of the lens GRIN profile. Experiments to study the optics of physiologically perturbed lenses are the future direction of this research.

  19. Fully automated laser ray tracing system to measure changes in the crystalline lens GRIN profile

    PubMed Central

    Qiu, Chen; Maceo Heilman, Bianca; Kaipio, Jari; Donaldson, Paul; Vaghefi, Ehsan

    2017-01-01

    Measuring the lens gradient refractive index (GRIN) accurately and reliably has proven an extremely challenging technical problem. A fully automated laser ray tracing (LRT) system was built to address this issue. The LRT system captures images of multiple laser projections before and after traversing through an ex vivo lens. These LRT images, combined with accurate measurements of the lens geometry, are used to calculate the lens GRIN profile. Mathematically, this is an ill-conditioned problem; hence, it is essential to apply biologically relevant constraints to produce a feasible solution. The lens GRIN measurements were compared with previously published data. Our GRIN retrieval algorithm produces fast and accurate measurements of the lens GRIN profile. Experiments to study the optics of physiologically perturbed lenses are the future direction of this research. PMID:29188093

  20. Real-time Mesoscale Visualization of Dynamic Damage and Reaction in Energetic Materials under Impact

    NASA Astrophysics Data System (ADS)

    Chen, Wayne; Harr, Michael; Kerschen, Nicholas; Maris, Jesus; Guo, Zherui; Parab, Niranjan; Sun, Tao; Fezzaa, Kamel; Son, Steven

    Energetic materials may be subjected to impact and vibration loading. Under these dynamic loadings, local stress or strain concentrations may lead to the formation of hot spots and unintended reaction. To visualize the dynamic damage and reaction processes in polymer-bonded energetic crystals under dynamic compressive loading, a high-speed X-ray phase contrast imaging setup was synchronized with a Kolsky bar and a light gas gun. Controlled compressive loading was applied to PBX specimens with single or multiple energetic crystal particles, and impact-induced damage and reaction processes were captured using the high-speed X-ray imaging setup. Impact velocities were systematically varied to explore the critical conditions for reaction. At lower loading rates, ultrasonic excitations were also applied to progressively damage the crystals, eventually leading to reaction. AFOSR, ONR.

  1. High-repetition-rate interferometric Rayleigh scattering for flow-velocity measurements

    NASA Astrophysics Data System (ADS)

    Estevadeordal, Jordi; Jiang, Naibo; Cutler, Andrew D.; Felver, Josef J.; Slipchenko, Mikhail N.; Danehy, Paul M.; Gord, James R.; Roy, Sukesh

    2018-03-01

    High-repetition-rate interferometric-Rayleigh-scattering (IRS) velocimetry is demonstrated for non-intrusive, high-speed flow-velocity measurements. High temporal resolution is obtained with a quasi-continuous burst-mode laser capable of operating at 10-100 kHz, providing 10-ms bursts with pulse widths of 5-1000 ns and pulse energies > 100 mJ at 532 nm. Coupled with a high-speed camera system, the IRS method is based on imaging the flow field through an etalon with an 8-GHz free spectral range and capturing the Doppler shift of the Rayleigh-scattered light from the flow at multiple points having constructive interference. Seeding of the laser permits a linewidth of < 150 MHz at 532 nm. The technique is demonstrated in a high-speed jet, and high-repetition-rate image sequences are shown.
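
    The measured quantity reduces to the optical Doppler relation: for an angle theta between the incident and collection directions, the shift of the scattered light is df = 2 (v / lambda) sin(theta / 2) for the velocity component along the bisector. A quick numerical check; the geometry and flow speed are illustrative assumptions, not the paper's values.

      import numpy as np

      def rayleigh_doppler_shift(v, lam=532e-9, theta_deg=90.0):
          """Doppler shift (Hz) of Rayleigh-scattered light: theta is the
          angle between incident and scattered directions, v the velocity
          component along the bisector-sensitive direction."""
          theta = np.radians(theta_deg)
          return 2.0 * v / lam * np.sin(theta / 2.0)

      # 500 m/s flow, 532 nm, 90-degree collection: ~1.3 GHz shift, well
      # within the 8-GHz free spectral range of the etalon.
      print(rayleigh_doppler_shift(500) / 1e9)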

  2. Surface geophysical methods for characterising frozen ground in transitional permafrost landscapes

    USGS Publications Warehouse

    Briggs, Martin A.; Campbell, Seth; Nolan, Jay; Walvoord, Michelle Ann; Ntarlagiannis, Dimitrios; Day-Lewis, Frederick D.; Lane, John W.

    2017-01-01

    The distribution of shallow frozen ground is paramount to research in cold regions, and is subject to temporal and spatial changes influenced by climate, landscape disturbance and ecosystem succession. Remote sensing from airborne and satellite platforms is increasing our understanding of landscape-scale permafrost distribution, but typically lacks the resolution to characterise finer-scale processes and phenomena, which are better captured by integrated surface geophysical methods. Here, we demonstrate the use of electrical resistivity imaging (ERI), electromagnetic induction (EMI), ground penetrating radar (GPR) and infrared imaging over multiple summer field seasons around the highly dynamic Twelvemile Lake, Yukon Flats, central Alaska, USA. Twelvemile Lake has generally receded in the past 30 yr, allowing permafrost aggradation in the receded margins, resulting in a mosaic of transient frozen ground adjacent to thick, older permafrost outside the original lakebed. ERI and EMI best evaluated the thickness of shallow, thin permafrost aggradation, which was not clear from frost probing or GPR surveys. GPR most precisely estimated the depth of the active layer, which forward electrical resistivity modelling indicated to be a difficult target for electrical methods, but could be more tractable in time-lapse mode. Infrared imaging of freshly dug soil pit walls captured active-layer thermal gradients at unprecedented resolution, which may be useful in calibrating emerging numerical models. GPR and EMI were able to cover landscape scales (several kilometres) efficiently, and new analysis software showcased here yields calibrated EMI data that reveal the complicated distribution of shallow permafrost in a transitional landscape.

  3. High-dynamic-range imaging for cloud segmentation

    NASA Astrophysics Data System (ADS)

    Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan

    2018-04-01

    Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg - an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
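
    One common multi-exposure fusion route can be sketched with OpenCV: Mertens fusion needs neither exposure times nor a camera response curve. The file names below are hypothetical, and this generic fusion is a stand-in for the paper's HDR radiance-map pipeline rather than a reproduction of it.

      import cv2

      # Bracketed 8-bit sky images taken at different shutter speeds.
      exposures = [cv2.imread(f) for f in ('sky_short.png', 'sky_mid.png',
                                           'sky_long.png')]

      # Weighted per-pixel blend favouring well-exposed, saturated pixels.
      fused = cv2.createMergeMertens().process(exposures)
      cv2.imwrite('sky_fused.png',
                  (fused * 255).clip(0, 255).astype('uint8'))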

  4. Electronic camera-management system for 35-mm and 70-mm film cameras

    NASA Astrophysics Data System (ADS)

    Nielsen, Allan

    1993-01-01

    Military and commercial test facilities have been tasked with increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high-speed 35 mm and 70 mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best; the need for a completely integrated image and data collection system is mandated by the increasingly complex test environment. Instrumentation film cameras have been used on test ranges to capture images for decades. Their high frame rates, coupled with exceptionally high resolution, make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario would consist of multiple cameras located on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas, and the TSPI of the object calculated using triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. Feedback received from test technicians at range facilities throughout the world led Photo-Sonics to design the features of this control system, which include a comprehensive safety management system, full local or remote operation, frame rate accuracy of less than 0.005 percent, and phase-locking capability to IRIG-B. In fact, IRIG-B phase-locked operation of multiple cameras can reduce the time-distance delta of a test object traveling at Mach 1 to less than one inch during data reduction.

  5. General Mode Scanning Probe Microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somnath, Suhas; Jesse, Stephen

    A critical part of SPM measurements is the information transfer from the probe-sample junction to the measurement system. Current information transfer methods heavily compress the information-rich data stream by averaging the data over a time interval, or via heterodyne detection approaches such as lock-in amplifiers and phase-locked loops. As a consequence, highly valuable information at sub-microsecond time scales, or from frequencies outside the measurement band, is lost. We have developed a fundamentally new approach called General Mode (G-mode), in which we capture the complete information stream from the detectors in the microscope. The availability of the complete information allows the microscope operator to analyze the data via information-theory analysis or comprehensive physical models. Furthermore, the complete data stream enables advanced data-driven filtering algorithms, multi-resolution imaging, ultrafast spectroscopic imaging, spatial mapping of multidimensional variability in material properties, etc. Though we applied this approach to scanning probe microscopy, the general philosophy of G-mode can be applied to many other modes of microscopy. G-mode data are captured by completely custom software written in LabVIEW and Matlab. The software generates the waveforms to electrically, thermally, or mechanically excite the SPM probe. It handles real-time communications with the microscope software for operations such as moving the SPM probe position and also controls other instrumentation hardware. The software also controls multiple variants of high-speed data acquisition cards to excite the SPM probe with the excitation waveform and simultaneously measure multiple channels of information from the microscope detectors at sampling rates of 1-100 MHz. The software saves the raw data to the computer and allows the microscope operator to visualize processed or filtered data during the experiment, all while offering a user-friendly interface.

  6. Point process models for localization and interdependence of punctate cellular structures.

    PubMed

    Li, Ying; Majarian, Timothy D; Naik, Armaghan W; Johnson, Gregory R; Murphy, Robert F

    2016-07-01

    Accurate representations of cellular organization for multiple eukaryotic cell types are required for creating predictive models of dynamic cellular function. To this end, we have previously developed the CellOrganizer platform, an open source system for generative modeling of cellular components from microscopy images. CellOrganizer models capture the inherent heterogeneity in the spatial distribution, size, and quantity of different components among a cell population. Furthermore, CellOrganizer can generate quantitatively realistic synthetic images that reflect the underlying cell population. A current focus of the project is to model the complex, interdependent nature of organelle localization. We built upon previous work on developing multiple non-parametric models of organelles or structures that show punctate patterns. The previous models described the relationships between the subcellular localization of puncta and the positions of cell and nuclear membranes and microtubules. We extend these models to consider the relationship to the endoplasmic reticulum (ER), and to consider the relationship between the positions of different puncta of the same type. Our results do not suggest that the punctate patterns we examined are dependent on ER position or inter- and intra-class proximity. With these results, we built classifiers to update previous assignments of proteins to one of 11 patterns in three distinct cell lines. Our generative models demonstrate the ability to construct statistically accurate representations of puncta localization from simple cellular markers in distinct cell types, capturing the complex phenomena of cellular structure interaction with little human input. This protocol represents a novel approach to vesicular protein annotation, a field that is often neglected in high-throughput microscopy. These results suggest that spatial point process models provide useful insight with respect to the spatial dependence between cellular structures. © 2016 International Society for Advancement of Cytometry.

  7. Groupwise Image Registration Guided by a Dynamic Digraph of Images.

    PubMed

    Tang, Zhenyu; Fan, Yong

    2016-04-01

    For groupwise image registration, graph theoretic methods have been adopted for discovering the manifold of images to be registered so that accurate registration of images to a group center image can be achieved by aligning similar images that are linked by the shortest graph paths. However, the image similarity measures adopted to build a graph of images in the extant methods are essentially pairwise measures, not effective for capturing the groupwise similarity among multiple images. To overcome this problem, we present a groupwise image similarity measure that is built on sparse coding for characterizing image similarity among all input images and build a directed graph (digraph) of images so that similar images are connected by the shortest paths of the digraph. Following the shortest paths determined according to the digraph, images are registered to a group center image in an iterative manner by decomposing a large anatomical deformation field required to register an image to the group center image into a series of small ones between similar images. During the iterative image registration, the digraph of images evolves dynamically at each iteration step to pursue an accurate estimation of the image manifold. Moreover, an adaptive dictionary strategy is adopted in the groupwise image similarity measure to ensure fast convergence of the iterative registration procedure. The proposed method has been validated based on both simulated and real brain images, and experiment results have demonstrated that our method was more effective for learning the manifold of input images and achieved higher registration accuracy than state-of-the-art groupwise image registration methods.
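
    The graph machinery underneath is standard. A minimal sketch that, given a pairwise dissimilarity matrix, finds the shortest registration path from each image to the group-centre image (SciPy); the toy random matrix stands in for the paper's sparse-coding-based groupwise measure.

      import numpy as np
      from scipy.sparse.csgraph import shortest_path

      def registration_paths(dissimilarity, center):
          """Shortest paths from every image to the group-centre image
          along a digraph weighted by pairwise dissimilarity. The
          predecessor matrix lets each image be registered step by step
          through its most similar neighbours."""
          dist, pred = shortest_path(dissimilarity, method='D',
                                     directed=True,
                                     return_predecessors=True)
          return dist[:, center], pred

      n = 5
      D = np.random.default_rng(1).random((n, n))   # toy dissimilarities
      np.fill_diagonal(D, 0)
      dist_to_center, pred = registration_paths(D, center=0)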

  8. NASA Satellite Captures Tropical Cyclones Tomas and Ului

    NASA Image and Video Library

    2010-03-17

    NASA Image acquired March 14 - 15, 2010 Two fierce tropical cyclones raged over the South Pacific Ocean in mid-March 2010, the U.S. Navy’s Joint Typhoon Warning Center (JTWC) reported. Over the Solomon Islands, Tropical Cyclone Ului had maximum sustained winds of 130 knots (240 kilometers per hour, 150 miles per hour) and gusts up to 160 knots (300 km/hr, 180 mph). Over Fiji, Tropical Cyclone Tomas had maximum sustained winds of 115 knots (215 km/hr, 132 mph) and gusts up to 140 knots (260 km/hr, 160 mph). The Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra and Aqua satellites captured both storms in multiple passes over the South Pacific on March 15, 2010, local time. The majority of the image is from the morning of March 15 (late March 14, UTC time) as seen by MODIS on the Terra satellite, with the right portion of the image having been acquired earliest. The wedge-shaped area right of center is from Aqua MODIS, and it was taken in the early afternoon of March 15 (local time). Although it packs less powerful winds, according to the JTWC, Tomas stretches across a larger area. It was moving over the northern Fiji islands when Terra MODIS captured the right portion of the image. According to early reports, Tomas forced more than 5,000 people from their homes while the islands sustained damage to crops and buildings. The JTWC reported that Tomas had traveled slowly toward the south and was passing over an area of high sea surface temperatures. (Warm seas provide energy for cyclones.) This storm was expected to intensify before transitioning to an extratropical storm. Ului is more compact and more powerful. A few hours before this image was taken, the storm had been an extremely dangerous Category 5 cyclone with sustained winds of 140 knots (260 km/hr, 160 mph). Ului degraded slightly before dealing the southern Solomon Islands a glancing blow. Initial news reports say that homes were damaged on the islands, but no one was injured. Like Tomas, Ului had been moving westward over an area of high sea surface temperatures. This storm was expected to continue moving westward before turning south and eventually weakening. The high-resolution image provided above is at 500 meters per pixel. The MODIS Rapid Response System provides this image at additional resolutions. NASA image by Jeff Schmaltz, MODIS Rapid Response Team, Goddard Space Flight Center. Caption by Michon Scott and Holli Riebeek. Instrument: Terra - MODIS To learn more about this image go here: earthobservatory.nasa.gov/IOTD/view.php?id=43154.

  9. Preliminary experiments on quantification of skin condition

    NASA Astrophysics Data System (ADS)

    Kitajima, Kenzo; Iyatomi, Hitoshi

    2014-03-01

    In this study, we investigated a preliminary method for assessing skin conditions such as the moisture-retaining property and fineness of the skin using image analysis alone. We captured facial images from volunteer subjects aged between their 30s and 60s with a Pocket Micro (R) device (Scalar Co., Japan). This device has two image-capturing modes: a normal mode and a non-reflection mode aided by an equipped polarization filter. We captured skin images from a total of 68 spots on the subjects' faces using both modes (a total of 136 skin images). The moisture-retaining property of the skin and a subjective evaluation score of skin fineness on a 5-point scale were also obtained in advance as gold standards (mean and SD of 35.15 +/- 3.22 (μS) and 3.45 +/- 1.17, respectively). We extracted a total of 107 image features from each image and built linear regression models for estimating the above criteria with stepwise feature selection. The developed model for estimating skin moisture achieved an MSE of 1.92 (μS) with 6 selected parameters, while the model for skin fineness achieved an MSE of 0.51 scale points with 7 parameters under leave-one-out cross validation. We confirmed that the developed models predicted the moisture-retaining property and fineness of the skin appropriately from captured images alone.

  10. Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy

    NASA Astrophysics Data System (ADS)

    Bucht, Curry; Söderberg, Per; Manneberg, Göran

    2009-02-01

    The corneal endothelium serves as the posterior barrier of the cornea. Factors such as the clarity and refractive properties of the cornea are directly related to the quality of the endothelium. Endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, with a negative impact on sample size. This study was dedicated to the development of fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software, which automatically performed digital enhancement of the images. The digitally enhanced images were transformed using the fast Fourier transform (FFT), and tools were developed and applied for identification and analysis of the relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of the corresponding corneal endothelium, based on well-known diffraction theory. Estimates of the cell density of the corneal endothelium were thus obtained using fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a high correlation was found.
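
    The Fourier route to cell density can be sketched compactly: a regular cell mosaic produces a ring in the power spectrum whose radius is the reciprocal of the mean cell spacing. The sketch below assumes a square-lattice approximation (a hexagonal mosaic differs by a constant factor of about 1.15) and is illustrative rather than the authors' Matlab implementation.

      import numpy as np

      def cell_density_from_fft(img, pixel_um):
          """Estimate endothelial cell density (cells/mm^2) from the
          dominant spatial frequency of the cell mosaic."""
          F = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
          cy, cx = np.array(F.shape) // 2
          yy, xx = np.indices(F.shape)
          r = np.hypot(yy - cy, xx - cx).astype(int)
          radial = np.bincount(r.ravel(), weights=F.ravel())  # ring profile
          peak = radial[5:].argmax() + 5      # skip the DC neighbourhood
          spacing_um = img.shape[0] * pixel_um / peak  # assumes square image
          return 1e6 / spacing_um ** 2        # square-lattice approximation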

  11. Mining the Mind Research Network: A Novel Framework for Exploring Large Scale, Heterogeneous Translational Neuroscience Research Data Sources

    PubMed Central

    Bockholt, Henry J.; Scully, Mark; Courtney, William; Rachakonda, Srinivas; Scott, Adam; Caprihan, Arvind; Fries, Jill; Kalyanam, Ravi; Segall, Judith M.; de la Garza, Raul; Lane, Susan; Calhoun, Vince D.

    2009-01-01

    A neuroinformatics (NI) system is critical to brain imaging research in order to shorten the time between study conception and results. Such a NI system is required to scale well when large numbers of subjects are studied. Further, when multiple sites participate in research projects organizational issues become increasingly difficult. Optimized NI applications mitigate these problems. Additionally, NI software enables coordination across multiple studies, leveraging advantages potentially leading to exponential research discoveries. The web-based, Mind Research Network (MRN), database system has been designed and improved through our experience with 200 research studies and 250 researchers from seven different institutions. The MRN tools permit the collection, management, reporting and efficient use of large scale, heterogeneous data sources, e.g., multiple institutions, multiple principal investigators, multiple research programs and studies, and multimodal acquisitions. We have collected and analyzed data sets on thousands of research participants and have set up a framework to automatically analyze the data, thereby making efficient, practical data mining of this vast resource possible. This paper presents a comprehensive framework for capturing and analyzing heterogeneous neuroscience research data sources that has been fully optimized for end-users to perform novel data mining. PMID:20461147

  12. A moving fluoroscope to capture tibiofemoral kinematics during complete cycles of free level and downhill walking as well as stair descent.

    PubMed

    List, Renate; Postolka, Barbara; Schütz, Pascal; Hitz, Marco; Schwilch, Peter; Gerber, Hans; Ferguson, Stephen J; Taylor, William R

    2017-01-01

    Videofluoroscopy has been shown to provide essential information in the evaluation of the functionality of total knee arthroplasties. However, due to the limitation in the field of view, most systems can only assess knee kinematics during highly restricted movements. To avoid the limitations of a static image intensifier, a moving fluoroscope has been presented as a standalone system that allows tracking of the knee during multiple complete cycles of level- and downhill-walking, as well as stair descent, in combination with the synchronous assessment of ground reaction forces and whole body skin marker measurements. Here, we assess the ability of the system to keep the knee in the field of view of the image intensifier. By measuring ten total knee arthroplasty subjects, we demonstrate that it is possible to maintain the knee to within 1.8 ± 1.4 cm vertically and 4.0 ± 2.6 cm horizontally of the centre of the intensifier throughout full cycles of activities of daily living. Since control of the system is based on real-time feedback of a wire sensor, the system is not dependent on repeatable gait patterns, but is rather able to capture pathological motion patterns with low inter-trial repeatability.

  13. A moving fluoroscope to capture tibiofemoral kinematics during complete cycles of free level and downhill walking as well as stair descent

    PubMed Central

    Postolka, Barbara; Schütz, Pascal; Hitz, Marco; Schwilch, Peter; Gerber, Hans

    2017-01-01

    Videofluoroscopy has been shown to provide essential information in the evaluation of the functionality of total knee arthroplasties. However, due to the limitation in the field of view, most systems can only assess knee kinematics during highly restricted movements. To avoid the limitations of a static image intensifier, a moving fluoroscope has been presented as a standalone system that allows tracking of the knee during multiple complete cycles of level- and downhill-walking, as well as stair descent, in combination with the synchronous assessment of ground reaction forces and whole body skin marker measurements. Here, we assess the ability of the system to keep the knee in the field of view of the image intensifier. By measuring ten total knee arthroplasty subjects, we demonstrate that it is possible to maintain the knee to within 1.8 ± 1.4 cm vertically and 4.0 ± 2.6 cm horizontally of the centre of the intensifier throughout full cycles of activities of daily living. Since control of the system is based on real-time feedback of a wire sensor, the system is not dependent on repeatable gait patterns, but is rather able to capture pathological motion patterns with low inter-trial repeatability. PMID:29016647

  14. Digital Compositing Techniques for Coronal Imaging (Invited review)

    NASA Astrophysics Data System (ADS)

    Espenak, F.

    2000-04-01

    The solar corona exhibits a huge range in brightness which cannot be captured in any single photographic exposure. Short exposures show the bright inner corona and prominences, while long exposures reveal faint details in equatorial streamers and polar brushes. For many years, radial gradient filters and other analog techniques have been used to compress the corona's dynamic range in order to study its morphology. Such techniques demand perfect pointing and tracking during the eclipse, and can be difficult to calibrate. In the past decade, the speed, memory and hard disk capacity of personal computers have rapidly increased as prices continue to drop. It is now possible to perform sophisticated image processing of eclipse photographs on commercially available CPUs. Software programs such as Adobe Photoshop permit combining multiple eclipse photographs into a composite image which compresses the corona's dynamic range and can reveal subtle features and structures. Algorithms and digital techniques used for processing 1998 eclipse photographs will be discussed which are equally applicable to the recent eclipse of 1999 August 11.

  15. Differentiation of bacterial colonies and temporal growth patterns using hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Mehrübeoglu, Mehrube; Buck, Gregory W.; Livingston, Daniel W.

    2014-09-01

    Detection and identification of bacteria are important for health and safety. Hyperspectral imaging offers the potential to capture unique spectral patterns and spatial information from bacteria which can then be used to detect and differentiate bacterial species. Here, hyperspectral imaging has been used to characterize different bacterial colonies and investigate their growth over time. Six bacterial species (Pseudomonas fluorescens, Escherichia coli, Serratia marcescens, Salmonella enterica, Staphylococcus aureus, Enterobacter aerogenes) were grown on tryptic soy agar plates. Hyperspectral data were acquired immediately after, 24 hours after, and 96 hours after incubation. Spectral signatures from bacterial colonies demonstrated repeatable measurements for five out of six species. Spatial variations as well as changes in spectral signatures were observed across temporal measurements within and among species at multiple wavelengths due to strengthening or weakening reflectance signals from growing bacterial colonies based on their pigmentation. Between-class differences and within-class similarities were the most prominent in hyperspectral data collected 96 hours after incubation.

  16. NASA Tech Briefs, February 2008

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Topics discussed include: Optical Measurement of Mass Flow of a Two-Phase Fluid; Selectable-Tip Corrosion-Testing Electrochemical Cell; Piezoelectric Bolt Breakers and Bolt Fatigue Testers; Improved Measurement of B(sub 22) of Macromolecules in a Flow Cell; Measurements by a Vector Network Analyzer at 325 to 508 GHz; Using Light to Treat Mucositis and Help Wounds Heal; Increasing Discharge Capacities of Li-(CF)(sub n) Cells; Dot-in-Well Quantum-Dot Infrared Photodetectors; Integrated Microbatteries for Implantable Medical Devices; Oxidation Behavior of Carbon Fiber-Reinforced Composites; GIDEP Batching Tool; Generic Spacecraft Model for Real-Time Simulation; Parallel-Processing Software for Creating Mosaic Images; Software for Verifying Image-Correlation Tie Points; Flexcam Image Capture Viewing and Spot Tracking; Low-Pt-Content Anode Catalyst for Direct Methanol Fuel Cells; Graphite/Cyanate Ester Face Sheets for Adaptive Optics; Atomized BaF2-CaF7 for Better-Flowing Plasma-Spray Feedstock; Nanophase Nickel-Zirconium Alloys for Fuel Cells; Vacuum Packaging of MEMS With Multiple Internal Seal Rings; Compact Two-Dimensional Spectrometer Optics; and Fault-Tolerant Coding for State Machines.

  17. 3D digital image correlation using single color camera pseudo-stereo system

    NASA Astrophysics Data System (ADS)

    Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang

    2017-10-01

    Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single-camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip, without reducing the spatial resolution. In addition, as in the conventional 3D-DIC system, the center of the two views lies at the center of the CCD chip, which minimizes image distortion relative to the conventional pseudo-stereo system. The two overlapped views on the CCD are separated in the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
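
    A minimal sketch of the color-domain separation step: the abstract states only that the two overlapped views are separated by color, so the assumption here that one view lives in the red channel and the other in the blue channel is illustrative. Each extracted channel then serves as one stereo view for a standard 3D-DIC algorithm.

    ```python
    import cv2

    def split_pseudo_stereo(path):
        """Separate the two overlapped views of a color-encoded pseudo-stereo frame.

        Assumes (for illustration) that one view is encoded in red and the
        other in blue, so the channels of a single color image carry the views.
        """
        img = cv2.imread(path)                 # OpenCV loads in BGR order
        view_a = img[:, :, 2]                  # red channel -> first view
        view_b = img[:, :, 0]                  # blue channel -> second view
        return view_a, view_b
    ```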

  18. Blackboard architecture for medical image interpretation

    NASA Astrophysics Data System (ADS)

    Davis, Darryl N.; Taylor, Christopher J.

    1991-06-01

    There is a growing interest in using sophisticated knowledge-based systems for biomedical image interpretation. We present a principled attempt to use artificial intelligence methodologies in interpreting lateral skull x-ray images. Such radiographs are routinely used in cephalometric analysis to provide quantitative measurements useful to clinical orthodontists. Manual and interactive methods of analysis are known to be error prone, and previous attempts to automate this analysis typically fail to capture the expertise and adaptability required to cope with the variability in biological structure and image quality. An integrated model-based system has been developed which makes use of a blackboard architecture and multiple knowledge sources. A model definition interface allows quantitative models of feature appearance and location to be built from examples, as well as more qualitative modelling constructs. Visual task definition and blackboard control modules allow task-specific knowledge sources to act on information available to the blackboard in a hypothesise-and-test reasoning cycle. Further knowledge-based modules include object selection, location hypothesis, intelligent segmentation, and constraint propagation systems. Alternative solutions to given tasks are permitted.

  19. Using deep learning for content-based medical image retrieval

    NASA Astrophysics Data System (ADS)

    Sun, Qinpei; Yang, Yuanyuan; Sun, Jianyong; Yang, Zhiming; Zhang, Jianguo

    2017-03-01

    Content-based medical image retrieval (CBMIR) has been a highly active research area in recent years. The retrieval performance of a CBMIR system crucially depends on the feature representation, which has been extensively studied by researchers for decades. Although a variety of techniques have been proposed, feature representation remains one of the most challenging problems in current CBMIR research, mainly due to the well-known "semantic gap" between the low-level image pixels captured by machines and the high-level semantic concepts perceived by humans[1]. Recent years have witnessed important advances in machine learning. One breakthrough technique is known as "deep learning". Unlike conventional machine learning methods that often use "shallow" architectures, deep learning mimics the human brain, which is organized in a deep architecture and processes information through multiple stages of transformation and representation. This means that enormous effort need not be spent on extracting features manually. In this presentation, we propose a novel framework which uses deep learning to retrieve medical images, improving the accuracy and speed of CBMIR in an integrated RIS/PACS.
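
    A minimal sketch of a deep-learning retrieval stage of this general kind, using a pretrained CNN as a fixed feature extractor and cosine-similarity ranking. This is a generic stand-in under stated assumptions, not the framework proposed in the presentation; the weights string follows recent torchvision releases.

    ```python
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Pretrained CNN as a fixed feature extractor; drop the final classifier.
    backbone = models.resnet18(weights="IMAGENET1K_V1")
    extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

    @torch.no_grad()
    def embed(batch):
        """batch: (N, 3, 224, 224) normalized tensors -> (N, 512) unit vectors."""
        feats = extractor(batch).flatten(1)
        return F.normalize(feats, dim=1)

    def retrieve(query_vec, index_vecs, k=5):
        """Rank indexed images by cosine similarity to the query vector."""
        sims = index_vecs @ query_vec        # (M,) similarity scores
        return sims.topk(k).indices
    ```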

  20. A Novel Imaging Analysis Method for Capturing Pharyngeal Constriction During Swallowing.

    PubMed

    Schwertner, Ryan W; Garand, Kendrea L; Pearson, William G

    2016-01-01

    Videofluoroscopic imaging of swallowing, known as the Modified Barium Study (MBS), is the standard of care for assessing swallowing difficulty. While the clinical purpose of this radiographic imaging is primarily to assess aspiration risk, valuable biomechanical data are embedded in these studies. Computational analysis of swallowing mechanics (CASM) is an established research methodology for assessing multiple interactions of swallowing mechanics based on coordinates mapping muscle function, including hyolaryngeal movement, pharyngeal shortening, tongue base retraction, and extension of the head and neck; however, coordinates characterizing pharyngeal constriction remain undeveloped. The aim of this study was to establish a method for locating the superior and middle pharyngeal constrictors using hard landmarks as guides on MBS videofluoroscopic imaging, and to test the reliability of this new method. Twenty de-identified, normal MBS videos were randomly selected from a database. Two raters annotated landmarks for the superior and middle pharyngeal constrictors frame-by-frame using a semi-automated MATLAB tracker tool at two time points. Intraclass correlation coefficients were used to assess test-retest reliability between the two raters, with an ICC of 0.99 or greater for all coordinates on the retest measurement. MorphoJ integrated software was used to perform a discriminant function analysis to visualize how all 12 coordinates interact with each other in normal swallowing. The addition of the superior and middle pharyngeal constrictor coordinates to CASM allows for a robust analysis of the multiple components of swallowing mechanics interacting with a wide range of variables in both patient-specific and cohort studies derived from common-use imaging data.

  1. A Novel Imaging Analysis Method for Capturing Pharyngeal Constriction During Swallowing

    PubMed Central

    Schwertner, Ryan W.; Garand, Kendrea L.; Pearson, William G.

    2016-01-01

    Videofluoroscopic imaging of swallowing, known as the Modified Barium Study (MBS), is the standard of care for assessing swallowing difficulty. While the clinical purpose of this radiographic imaging is primarily to assess aspiration risk, valuable biomechanical data are embedded in these studies. Computational analysis of swallowing mechanics (CASM) is an established research methodology for assessing multiple interactions of swallowing mechanics based on coordinates mapping muscle function, including hyolaryngeal movement, pharyngeal shortening, tongue base retraction, and extension of the head and neck; however, coordinates characterizing pharyngeal constriction remain undeveloped. The aim of this study was to establish a method for locating the superior and middle pharyngeal constrictors using hard landmarks as guides on MBS videofluoroscopic imaging, and to test the reliability of this new method. Twenty de-identified, normal MBS videos were randomly selected from a database. Two raters annotated landmarks for the superior and middle pharyngeal constrictors frame-by-frame using a semi-automated MATLAB tracker tool at two time points. Intraclass correlation coefficients were used to assess test-retest reliability between the two raters, with an ICC of 0.99 or greater for all coordinates on the retest measurement. MorphoJ integrated software was used to perform a discriminant function analysis to visualize how all 12 coordinates interact with each other in normal swallowing. The addition of the superior and middle pharyngeal constrictor coordinates to CASM allows for a robust analysis of the multiple components of swallowing mechanics interacting with a wide range of variables in both patient-specific and cohort studies derived from common-use imaging data. PMID:28239682
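
    For reference, a test-retest reliability figure of the kind reported above (ICC of 0.99 or greater) can be computed as ICC(2,1) from a targets-by-raters matrix. The sketch below implements the standard two-way random-effects formula and is not the authors' code.

    ```python
    import numpy as np

    def icc_2_1(Y):
        """ICC(2,1): two-way random effects, absolute agreement, single rater.

        Y: (n_targets, k_raters) array, e.g. one landmark coordinate measured
        by two raters over n_targets frames.
        """
        n, k = Y.shape
        grand = Y.mean()
        ss_total = ((Y - grand) ** 2).sum()
        ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()
        ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()
        ms_rows = ss_rows / (n - 1)                       # between-target MS
        ms_cols = ss_cols / (k - 1)                       # between-rater MS
        ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    ```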

  2. Effective Fingerprint Quality Estimation for Diverse Capture Sensors

    PubMed Central

    Xie, Shan Juan; Yoon, Sook; Shin, Jinwook; Park, Dong Sun

    2010-01-01

    Recognizing the quality of fingerprints in advance can be beneficial for improving the performance of fingerprint recognition systems. The representative features for assessing the quality of fingerprint images from different types of capture sensors are known to vary. In this paper, an effective quality estimation system that can be adapted to different types of capture sensors is designed by modifying and combining a set of features including orientation certainty, local orientation quality, and consistency. The proposed system extracts basic features and generates next-level features which are applicable to various types of capture sensors. The system then uses a Support Vector Machine (SVM) classifier to determine whether or not an image should be accepted as input to the recognition system. The experimental results show that the proposed method performs better than previous methods in terms of accuracy. In addition, the proposed method is able to eliminate residue images from optical and capacitive sensors, and coarse images from thermal sensors. PMID:22163632
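
    A minimal sketch of the final accept/reject classification stage, assuming the three quality features named above have already been extracted per image. The data here are random placeholders; the pipeline is a generic scikit-learn construction, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: one row per fingerprint image; columns stand in for orientation
    # certainty, local orientation quality and consistency (placeholder values).
    X = np.random.rand(200, 3)
    y = np.random.randint(0, 2, 200)        # 1 = accept, 0 = reject

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)
    decisions = clf.predict(X[:5])          # accept/reject for new images
    ```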

  3. Restoration of motion blurred images

    NASA Astrophysics Data System (ADS)

    Gaxiola, Leopoldo N.; Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.

    2017-08-01

    Image restoration is a classic problem in image processing. Image degradations can occur for several reasons, for instance, imperfections of imaging systems, quantization errors, atmospheric turbulence, or relative motion between the camera and objects. Motion blur is a typical degradation in dynamic imaging systems. In this work, we present a method to estimate the parameters of linear motion blur degradation from a captured blurred image. The proposed method analyzes the frequency spectrum of the captured image first to estimate the degradation parameters, and then restores the image with a linear filter. The performance of the proposed method is evaluated by processing synthetic and real-life images. The obtained results are characterized in terms of the accuracy of image restoration given by an objective criterion.
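
    One common way to realize the restoration step is Wiener deconvolution with a linear-motion point-spread function built from the estimated blur length and angle. The sketch below assumes those two parameters have already been recovered from the spectrum; it is a standard formulation, not necessarily the paper's exact filter.

    ```python
    import numpy as np

    def linear_motion_psf(shape, length, angle_deg):
        """Line-shaped PSF of given length (px) and angle, centered in an array."""
        psf = np.zeros(shape)
        cy, cx = shape[0] // 2, shape[1] // 2
        theta = np.deg2rad(angle_deg)
        for t in np.linspace(-length / 2, length / 2, int(length) * 4):
            y = int(round(cy + t * np.sin(theta)))
            x = int(round(cx + t * np.cos(theta)))
            if 0 <= y < shape[0] and 0 <= x < shape[1]:
                psf[y, x] = 1.0
        return psf / psf.sum()

    def wiener_restore(blurred, psf, k=0.01):
        """Restore with a Wiener filter; k is a noise-to-signal ratio estimate."""
        H = np.fft.fft2(np.fft.ifftshift(psf))       # move PSF center to origin
        G = np.fft.fft2(blurred)
        F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
        return np.real(np.fft.ifft2(F_hat))
    ```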

  4. Sensory mediation of stimulus-driven attentional capture in multiple-cue displays.

    PubMed

    Wright, Richard D; Richard, Christian M

    2003-08-01

    Three location-cuing experiments were conducted in order to examine the stimulus-driven control of attentional capture in multiple-cue displays. These displays consisted of one to four simultaneously presented direct location cues. The results indicated that direct location cuing can produce cue effects that are mediated, in part, by nonattentional processing that occurs simultaneously at multiple locations. When single cues were presented in isolation, however, the resulting cue effect appeared to be due to a combination of sensory processing and attentional capture by the cue. This suggests that the faster responses produced by direct cues may be associated with two different components: an attention-related component that can be modulated by goal-driven factors and a nonattentional component that occurs in parallel at multiple direct-cue locations and is minimally affected by goal-driven factors.

  5. Development of Sorting System for Fishes by Feed-forward Neural Networks Using Rotation Invariant Features

    NASA Astrophysics Data System (ADS)

    Shiraishi, Yuhki; Takeda, Fumiaki

    In this research, we have developed a sorting system for fish, which comprises a conveyance part, an image capture part, and a sorting part. In the conveyance part, we have developed an independent conveyance system in order to separate one fish from an intertwined group of fishes. After the image of the separated fish is captured in the image capture part, a rotation-invariant feature is extracted using the two-dimensional fast Fourier transform; the feature is the mean value of the power spectrum at each distance from the origin in the spectral domain. The fish are then classified by three-layered feed-forward neural networks. The experimental results show that the developed system classifies three kinds of fish captured at various angles with a classification ratio of 98.95% for 1044 captured images of five fishes. Further experiments show a classification ratio of 90.7% for 300 fish using the 10-fold cross-validation method.
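
    The rotation-invariant feature described above, the radially averaged power spectrum, can be sketched as follows. The bin count and function name are illustrative.

    ```python
    import numpy as np

    def rotation_invariant_feature(img, n_bins=32):
        """Radially averaged 2-D power spectrum.

        Averaging power over all pixels at the same distance from the
        spectrum origin removes the dependence on object rotation.
        """
        spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        cy, cx = np.array(spec.shape) // 2
        rows, cols = np.indices(spec.shape)
        r = np.hypot(rows - cy, cols - cx)
        bins = np.linspace(0, r.max(), n_bins + 1)
        idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
        sums = np.bincount(idx, weights=spec.ravel(), minlength=n_bins)
        counts = np.bincount(idx, minlength=n_bins)
        return sums / np.maximum(counts, 1)   # mean power per radial bin
    ```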

  6. An improved ASIFT algorithm for indoor panorama image matching

    NASA Astrophysics Data System (ADS)

    Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong

    2017-07-01

    The generation of 3D models of indoor objects and scenes is an attractive tool for digital city, virtual reality, and SLAM purposes. Panoramic images are becoming increasingly common in such applications due to their ability to capture the complete environment in a single image with a large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm for implementing these functions. Compared with the SIFT algorithm, ASIFT generates more feature points and achieves higher matching accuracy, even for panoramic images with obvious distortions. However, the algorithm is time-consuming because of its complex operations, and it does not perform well for some indoor scenes with poor lighting or little texture. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: firstly, the panoramic images are projected into multiple normal perspective images. Secondly, the original ASIFT algorithm is simplified from the affine transformation of tilt and rotation of the images to the tilt affine transformation only. Finally, the results are re-projected into the panoramic image space. Experiments in different environments show that this method not only preserves the precision of feature point extraction and matching, but also greatly reduces the computing time.

  7. Measurement of Separated Flow Structures Using a Multiple-Camera DPIV System. [conducted in the Langley Subsonic Basic Research Tunnel

    NASA Technical Reports Server (NTRS)

    Humphreys, William M., Jr.; Bartram, Scott M.

    2001-01-01

    A novel multiple-camera system for the recording of digital particle image velocimetry (DPIV) images acquired in a two-dimensional separating/reattaching flow is described. The measurements were performed in the NASA Langley Subsonic Basic Research Tunnel as part of an overall series of experiments involving the simultaneous acquisition of dynamic surface pressures and off-body velocities. The DPIV system utilized two frequency-doubled Nd:YAG lasers to generate two coplanar, orthogonally polarized light sheets directed upstream along the horizontal centerline of the test model. A recording system containing two pairs of matched high resolution, 8-bit cameras was used to separate and capture images of illuminated tracer particles embedded in the flow field. Background image subtraction was used to reduce undesirable flare light emanating from the surface of the model, and custom pixel alignment algorithms were employed to provide accurate registration among the various cameras. Spatial cross correlation analysis with median filter validation was used to determine the instantaneous velocity structure in the separating/reattaching flow region illuminated by the laser light sheets. In operation the DPIV system exhibited a good ability to resolve large-scale separated flow structures with acceptable accuracy over the extended field of view of the cameras. The recording system design provided enhanced performance versus traditional DPIV systems by allowing a variety of standard and non-standard cameras to be easily incorporated into the system.
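
    The spatial cross-correlation step at the heart of DPIV can be sketched for a single interrogation-window pair as below. This FFT-based version is a standard formulation with integer-pixel accuracy, not necessarily the custom analysis used in the experiment.

    ```python
    import numpy as np

    def piv_displacement(win_a, win_b):
        """Integer-pixel shift between two interrogation windows.

        win_a, win_b: same-size 2-D arrays cut from the two laser pulses.
        The offset of the correlation peak from the window center gives the
        particle displacement (sign convention depends on pulse order).
        """
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
        corr = np.fft.fftshift(corr)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        center = np.array(corr.shape) // 2
        return peak[0] - center[0], peak[1] - center[1]
    ```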

  8. Novel technologies for assessing dietary intake: evaluating the usability of a mobile telephone food record among adults and adolescents.

    PubMed

    Daugherty, Bethany L; Schap, TusaRebecca E; Ettienne-Gittens, Reynolette; Zhu, Fengqing M; Bosch, Marc; Delp, Edward J; Ebert, David S; Kerr, Deborah A; Boushey, Carol J

    2012-04-13

    The development of a mobile telephone food record has the potential to ameliorate much of the burden associated with current methods of dietary assessment. When using the mobile telephone food record, respondents capture an image of their foods and beverages before and after eating. Methods of image analysis and volume estimation allow for automatic identification and volume estimation of foods. To obtain a suitable image, all foods and beverages and a fiducial marker must be included in the image. Our objectives were to evaluate a defined set of skills among adolescents and adults when using the mobile telephone food record to capture images, and to compare the perceptions and preferences of adults and adolescents regarding their use of the mobile telephone food record. We recruited 135 volunteers (78 adolescents, 57 adults) to use the mobile telephone food record for one or two meals under controlled conditions. Volunteers received instruction in using the mobile telephone food record prior to their first meal, captured images of foods and beverages before and after eating, and participated in a feedback session. We used chi-square tests for comparisons of the set of skills, preferences, and perceptions between adults and adolescents, and the McNemar test for comparisons within the adolescents and adults. Adults were more likely than adolescents to include all foods and beverages in the before and after images, but both age groups had difficulty including the entire fiducial marker. Compared with adolescents, significantly more adults had to capture more than one image before (38% vs 58%, P = .03) and after (25% vs 50%, P = .008) meal session 1 to obtain a suitable image. Despite being less efficient when using the mobile telephone food record, adults were more likely than adolescents to perceive remembering to capture images as easy (P < .001). A majority of both age groups were able to follow the defined set of skills; however, adults were less efficient when using the mobile telephone food record. Additional interactive training will likely be necessary for all users to provide extra practice in capturing images before entering a free-living situation. These results will inform age-specific development of the mobile telephone food record that may translate to a more accurate method of dietary assessment.

  9. How does a newly encountered face become familiar? The effect of within-person variability on adults' and children's perception of identity.

    PubMed

    Baker, Kristen A; Laurence, Sarah; Mondloch, Catherine J

    2017-04-01

    Adults and children aged 6 years and older easily recognize multiple images of a familiar face, but often perceive two images of an unfamiliar face as belonging to different identities. Here we examined the process by which a newly encountered face becomes familiar, defined as accurate recognition of multiple images that capture natural within-person variability in appearance. In Experiment 1 we examined whether exposure to within-person variability in appearance helps children learn a new face. Children aged 6-13 years watched a 10-min video of a woman reading a story; she was filmed on a single day (low variability) or over three days, across which her appearance and filming conditions (e.g., camera, lighting) varied (high variability). After familiarization, participants sorted a set of images comprising novel images of the target identity intermixed with distractors. Compared to participants who received no familiarization, children showed evidence of learning only in the high-variability condition, in contrast to adults, who showed evidence of learning in both the low- and high-variability conditions. Experiment 2 highlighted the efficiency with which adults learn a new face; their accuracy was comparable across training conditions despite variability in the duration (1 vs. 10 min) and type (video vs. static images) of training. Collectively, our findings show that exposure to variability leads to the formation of a robust representation of facial identity, consistent with perceptual learning in other domains (e.g., language), and that the development of face learning is protracted throughout childhood. We discuss possible underlying mechanisms.

  10. Dispersal of Volcanic Ash on Mars: Ash Grain Shape Analysis

    NASA Astrophysics Data System (ADS)

    Langdalen, Z.; Fagents, S. A.; Fitch, E. P.

    2017-12-01

    Many ash dispersal models use spheres as ash-grain analogs in drag calculations. These simplifications introduce inaccuracies in the treatment of drag coefficients, leading to inaccurate settling velocities and dispersal predictions. Therefore, we are investigating the use of a range of shape parameters, calculated using grain dimensions, to derive a better representation of grain shape and effective grain cross-sectional area. Specifically, our goal is to apply our results to the modeling of ash deposition to investigate the proposed volcanic origin of certain fine-grained deposits on Mars. Therefore, we are documenting the dimensions and shapes of ash grains from terrestrial subplinian to plinian deposits, in eight size divisions from 2 mm to 16 μm, employing a high resolution optical microscope. The optical image capture protocol provides an accurate ash grain outline by taking multiple images at different focus heights prior to combining them into a composite image. Image composite mosaics are then processed through ImageJ, a robust scientific measurement software package, to calculate a range of dimensionless shape parameters. Since ash grains rotate as they fall, drag forces act on a changing cross-sectional area. Therefore, we capture images and calculate shape parameters of each grain positioned in three orthogonal orientations. We find that the difference between maximum and minimum aspect ratios of the three orientations of a given grain best quantifies the degree of elongation of that grain. However, the average aspect ratio calculated for each grain provides a good representation of relative differences among grains. We also find that convexity provides the best representation of surface irregularity. For both shape parameters, natural ash grains display notably different shape parameter values than sphere analogs. Therefore, Mars ash dispersal modeling that incorporates shape parameters will provide more realistic predictions of deposit extents because volcanic ash-grain morphologies differ substantially from simplified geometric shapes.
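
    A sketch of how such dimensionless shape parameters might be computed from a single grain silhouette, here with scikit-image. Solidity (area divided by convex-hull area) is used as a simple stand-in for the convexity measure, so the exact definitions may differ from those used in the study.

    ```python
    from skimage import measure

    def grain_shape_params(binary_mask):
        """Dimensionless shape parameters for one ash-grain silhouette.

        binary_mask: 2-D boolean array, True inside the grain outline
        (e.g. thresholded from a composite focus-stacked image).
        """
        props = measure.regionprops(binary_mask.astype(int))[0]
        # Aspect ratio from the fitted ellipse; computed per orientation,
        # so three orthogonal views of a grain give three values.
        aspect_ratio = props.major_axis_length / props.minor_axis_length
        return {"aspect_ratio": aspect_ratio, "solidity": props.solidity}
    ```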

  11. Method and apparatus to monitor a beam of ionizing radiation

    DOEpatents

    Blackburn, Brandon W.; Chichester, David L.; Watson, Scott M.; Johnson, James T.; Kinlaw, Mathew T.

    2015-06-02

    Methods and apparatus to capture images of fluorescence generated by ionizing radiation and determine a position of a beam of ionizing radiation generating the fluorescence from the captured images. In one embodiment, the fluorescence is the result of ionization and recombination of nitrogen in air.

  12. Technology Tips

    ERIC Educational Resources Information Center

    Mathematics Teacher, 2004

    2004-01-01

    Some inexpensive or free tools that enable users to capture and use images in their work are described. The first tip demonstrates methods of using some of the built-in capabilities of the Macintosh and Windows-based PC operating systems, and the second tip describes methods of capturing and creating images using SnagIt.

  13. Device for wavelength-selective imaging

    DOEpatents

    Frangioni, John V.

    2010-09-14

    An imaging device captures both a visible light image and a diagnostic image, the diagnostic image corresponding to emissions from an imaging medium within the object. The visible light image (which may be color or grayscale) and the diagnostic image may be superimposed to display regions of diagnostic significance within a visible light image. A number of imaging media may be used according to an intended application for the imaging device, and an imaging medium may have wavelengths above, below, or within the visible light spectrum. The devices described herein may be advantageously packaged within a single integrated device or other solid state device, and/or employed in an integrated, single-camera medical imaging system, as well as many non-medical imaging systems that would benefit from simultaneous capture of visible-light wavelength images along with images at other wavelengths.

  14. Developing Aesthetically Compelling Visualizations for Documenting and Communicating Alaskan Glacier and Landscape Change

    NASA Astrophysics Data System (ADS)

    Molnia, B. F.

    2016-12-01

    For 50 years I have investigated glacier dynamics and attempted to convey this information to others. Since 2000, my focus has been on capturing and documenting decadal- and century-scale Alaskan glacier and landscape change using precision repeat photography, and on broadly communicating these results through simple, aesthetically compelling, unambiguous visualizations. As a young geologist, I spent the summer of 1968 on the Juneau Icefield, photographing its surface features and margins. Since then, I have taken 150,000 photographs of Alaskan glaciers and collected 5,000 historical Alaskan photographs taken by others, the earliest dating back to 1883. This database and my passion for photographing glaciers became the basis for an on-going investigation aimed at visually documenting glacier and landscape change at more than 200 previously photographed Alaskan locations in Glacier Bay and Kenai Fjords National Parks, Prince William Sound, and the Coast Mountains. Repeat photography is a technique in which a historical and a modern photograph, both having similar fields of view, are compared and contrasted to quantitatively and qualitatively determine their similarities and differences. In precision repeat photography, both photographs have the same field of view, ideally being photographed from the identical location. Since 2000, I have conducted nearly 20 field campaigns to systematically revisit and re-photograph more than 225 fields of view previously captured in the historical photographs. As aesthetics are important in successfully communicating what has changed, substantial time and effort is invested in capturing new, comparable, generally cloud-free photographs at each revisited site. The resulting modern images are then paired with similar field-of-view historical images to produce compelling, aesthetic photo pairs which depict long-term glacier, landscape, and ecosystem changes. As a few sites have multiple historical images, photo triplets or quadruplets are sometimes possible. Several approaches have been tried to produce aesthetically compelling visualizations. These have included sliders, dissolves, adjacent pairs, a website, and DVDs. Providing high-resolution pairs to users and letting them adapt the images to their individual needs has also been very successful.

  15. Improved resistivity imaging of groundwater solute plumes using POD-based inversion

    NASA Astrophysics Data System (ADS)

    Oware, E. K.; Moysey, S. M.; Khan, T.

    2012-12-01

    We propose a new approach for enforcing physics-based regularization in electrical resistivity imaging (ERI) problems. The approach utilizes a basis-constrained inversion where an optimal set of basis vectors is extracted from training data by Proper Orthogonal Decomposition (POD). The key aspect of the approach is that Monte Carlo simulation of flow and transport is used to generate a training dataset, thereby intrinsically capturing the physics of the underlying flow and transport models in a non-parametric form. POD allows these training data to be projected onto a subspace of the original domain, resulting in the extraction of a basis for the inversion that captures characteristics of the groundwater flow and transport system, while simultaneously allowing for dimensionality reduction of the original problem in the projected space. We use two different synthetic transport scenarios in heterogeneous media to illustrate how the POD-based inversion compares with standard Tikhonov and coupled inversion. The first scenario had a single source zone leading to a unimodal solute plume (synthetic #1), whereas the second scenario had two source zones that produced a bimodal plume (synthetic #2). For both coupled inversion and the POD approach, the conceptual flow and transport model used considered only a single source zone for both scenarios. Results were compared based on multiple metrics (concentration root-mean-square error (RMSE), peak concentration, and total solute mass). In addition, results for POD inversion based on three different data densities (120, 300, and 560 data points) and varying numbers of selected basis images (100, 300, and 500) were compared. For synthetic #1, we found that all three methods provided qualitatively reasonable reproduction of the true plume. Quantitatively, the POD inversion performed best overall for each metric considered. Moreover, since synthetic #1 was consistent with the conceptual transport model, a small number of basis vectors (100) contained enough a priori information to constrain the inversion. Increasing the amount of data or the number of selected basis images did not translate into significant improvement in imaging results. For synthetic #2, the RMSE and error in total mass were lowest for the POD inversion. However, the peak concentration was significantly overestimated by the POD approach. Regardless, the POD-based inversion was the only technique that could capture the bimodality of the plume in the reconstructed image, thus providing critical information that could be used to reconceptualize the transport problem. We also found that, in the case of synthetic #2, increasing the number of resistivity measurements and the number of selected basis vectors allowed for significant improvements in the reconstructed images.
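
    The POD step, extracting an optimal basis from Monte Carlo training snapshots, is essentially a singular value decomposition of the mean-removed snapshot matrix. A minimal sketch with illustrative function names:

    ```python
    import numpy as np

    def pod_basis(snapshots, n_modes):
        """Extract a POD basis from Monte Carlo training images.

        snapshots: (n_pixels, n_realizations) matrix, one flattened simulated
        plume image per column. Returns the mean and first n_modes vectors.
        """
        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
        return mean, U[:, :n_modes]

    def project(image, mean, basis):
        """Coefficients of an image in the reduced POD subspace, in which
        the basis-constrained inversion is carried out."""
        return basis.T @ (image.reshape(-1, 1) - mean)
    ```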

  16. Design of a MATLAB(registered trademark) Image Comparison and Analysis Tool for Augmentation of the Results of the Ann Arbor Distortion Test

    DTIC Science & Technology

    2016-06-25

    The equipment used in this procedure includes an Ann Arbor distortion tester with a 50-line grating reticule and an IQeye 720 digital video camera. Images of the distortion in an optical sample were captured digitally with the IQeye 720 camera and imported into MATLAB for comparison and analysis. (Recoverable figure captions: the IQeye 720 video camera and Ann Arbor distortion tester; the computer interface for capturing images seen by the IQeye 720 camera.)

  17. Optical cell monitoring system for underwater targets

    NASA Astrophysics Data System (ADS)

    Moon, SangJun; Manzur, Fahim; Manzur, Tariq; Demirci, Utkan

    2008-10-01

    We demonstrate a cell-based detection system that could be used for monitoring an underwater target volume and environment using a microfluidic chip and a charge-coupled device (CCD). This technique allows us to capture specific cells and enumerate them over a large area on a microchip. The microfluidic chip and a lens-less imaging platform were then merged to monitor cell populations and morphologies as a system that may find use in distributed sensor networks. The chip, featuring surface chemistry and automatic cell imaging, was fabricated from a cover glass slide, double-sided adhesive film, and a transparent polymethylmethacrylate (PMMA) slab. The optically clear chip allows cells to be detected with a CCD sensor. These chips were fabricated with a laser cutter without the use of photolithography. We utilized CD4+ cells, which are captured on the floor of the microfluidic chip through antibody-antigen binding that addresses the specific target cells. Captured CD4+ cells were imaged with a fluorescence microscope to verify the chip's specificity and efficiency. We achieved 70.2 +/- 6.5% capture efficiency and 88.8 +/- 5.4% specificity for CD4+ T lymphocytes (n = 9 devices). Bright-field images of the captured cells in the 24 mm × 4 mm × 50 μm microfluidic chip were obtained with the CCD sensor in one second. We achieved an inexpensive system that rapidly captures cells and images them using a lens-less CCD system. This microfluidic device can be modified for single-cell detection utilizing a cheap light-emitting diode (LED) chip instead of a wide-range CCD system.

  18. Presence capture cameras - a new challenge to the image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the actual technical solutions have only just passed the prototyping phase and they vary a lot. This work concentrates on the quality challenges of presence capture cameras, which still face the same quality issues as previous phases of digital imaging but also numerous new ones. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones, and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features remain valid in presence capture cameras: features like color fidelity, noise removal, resolution, and dynamic range form the basis of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors, and new quality features can be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? This work describes the quality factors which remain valid in presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view, including consideration of how well current measurement methods can be applied to presence capture cameras.

  19. Nonintrusive iris image acquisition system based on a pan-tilt-zoom camera and light stripe projection

    NASA Astrophysics Data System (ADS)

    Yoon, Soweon; Jung, Ho Gi; Park, Kang Ryoung; Kim, Jaihie

    2009-03-01

    Although iris recognition is one of the most accurate biometric technologies, it has not yet been widely used in practical applications. This is mainly due to user inconvenience during the image acquisition phase. Specifically, users must adjust their eye position within a small capture volume at a close distance from the system. To overcome these problems, we propose a novel iris image acquisition system that provides users with an unconstrained environment: a large operating range, freedom of movement from a standing posture, and capture of good-quality iris images in an acceptable time. The proposed system makes the following three contributions compared with previous works: (1) the capture volume is significantly increased by using a pan-tilt-zoom (PTZ) camera guided by light stripe projection, (2) the iris location in the large capture volume is found quickly owing to 1-D vertical face searching from the user's horizontal position obtained by the light stripe projection, and (3) zooming and focusing on the user's irises at a distance are accurate and fast using the 3-D position of the face estimated by the light stripe projection and the PTZ camera. Experimental results show that the proposed system can capture good-quality iris images in 2.479 s on average at a distance of 1.5 to 3 m, while allowing a limited amount of movement by the user.

  20. Feasibility and Use of the Mobile Food Record for Capturing Eating Occasions among Children Ages 3-10 Years in Guam.

    PubMed

    Aflague, Tanisha F; Boushey, Carol J; Guerrero, Rachael T Leon; Ahmad, Ziad; Kerr, Deborah A; Delp, Edward J

    2015-06-02

    Children's readiness to use technology supports the idea of children using mobile applications for dietary assessment. Our goal was to determine if children 3-10 years old could successfully use the mobile food record (mFR) to capture a usable image pair or pairs. Children in Sample 1 were tasked to use the mFR to capture an image pair of one eating occasion while attending summer camp. For Sample 2, children were tasked to record all eating occasions for two consecutive days at two time periods that were two to four weeks apart. Trained analysts evaluated the images. In Sample 1, 90% (57/63) captured one usable image pair. All children (63/63) returned the mFR undamaged. Sixty-two children reported that the mFR was easy to use (89%), that they were willing to use the mFR again (87%), and that the fiducial marker was easy to manage (94%). Children in Sample 2 used the mFR on at least one day at Time 1 (59/63, 94%), at Time 2 (49/63, 78%), and at both times (47/63, 75%). This latter group captured a mean (±SD) of 6.21 ± 4.65 and 5.65 ± 3.26 image pairs at Time 1 and Time 2, respectively. The results support the potential for children to independently record dietary intake using the mFR.

  1. A 12-bit high-speed column-parallel two-step single-slope analog-to-digital converter (ADC) for CMOS image sensors.

    PubMed

    Lyu, Tao; Yao, Suying; Nie, Kaiming; Xu, Jiangtao

    2014-11-17

    A 12-bit high-speed column-parallel two-step single-slope (SS) analog-to-digital converter (ADC) for CMOS image sensors is proposed. The proposed ADC employs a single ramp voltage and multiple reference voltages, and the conversion is divided into a coarse phase and a fine phase to improve the conversion rate. An error calibration scheme is proposed to correct errors caused by offsets among the reference voltages. The digital-to-analog converter (DAC) used for the ramp generator is based on a split-capacitor array with an attenuation capacitor. An analysis of the DAC's linearity performance versus capacitor mismatch and parasitic capacitance is presented. A prototype 1024 × 32 Time Delay Integration (TDI) CMOS image sensor with the proposed ADC architecture has been fabricated in a standard 0.18 μm CMOS process. The proposed ADC has an average power consumption of 128 μW and a conversion rate 6 times higher than the conventional SS ADC. A high-quality image, captured at a line rate of 15.5 k lines/s, shows that the proposed ADC is suitable for high-speed CMOS image sensors.
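
    A behavioral sketch of the coarse/fine conversion idea (not the circuit implementation): the coarse phase selects which reference segment holds the input, and the fine phase ramps only within that segment, which is where the speed-up over a full 12-bit single-slope ramp comes from. The bit split shown is an assumption for illustration.

    ```python
    def two_step_ss_adc(vin, vref=1.0, coarse_bits=4, fine_bits=8):
        """Behavioral model of a two-step single-slope conversion.

        Coarse phase: pick one of 2**coarse_bits reference segments.
        Fine phase: ramp only within that segment, so far fewer clock
        cycles are needed than for a full single-slope ramp.
        """
        seg = vref / (1 << coarse_bits)
        coarse = min(int(vin / seg), (1 << coarse_bits) - 1)
        residue = vin - coarse * seg
        fine = min(int(residue / (seg / (1 << fine_bits))), (1 << fine_bits) - 1)
        return (coarse << fine_bits) | fine

    code = two_step_ss_adc(0.6180)   # -> 2531, i.e. 2531/4096 ~ 0.618 * vref
    ```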

  2. Portable LED-induced autofluorescence spectroscopy for oral cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Yan, Yung-Jhe; Huang, Ting-Wei; Cheng, Nai-Lun; Hsieh, Yao-Fang; Tsai, Ming-Hsui; Chiou, Jin-Chern; Duann, Jeng-Ren; Lin, Yung-Jiun; Yang, Chin-Siang; Ou-Yang, Mang

    2017-04-01

    Oral cancer is a serious and growing problem in many developing and developed countries. To improve the cancer screening procedure, we developed a portable light-emitting-diode (LED)-induced autofluorescence (LIAF) imager that contains LED excitation light sources of two wavelengths and multiple filters to capture ex vivo oral tissue autofluorescence images. Compared with conventional means of oral cancer diagnosis, the LIAF imager is a handier, faster, and more reliable solution. The compact design with a tiny probe allows clinicians to easily observe autofluorescence images of hidden areas located in concave, deep oral cavities. The ex vivo trials conducted in Taiwan present the design and prototype of the portable LIAF imager used for analyzing 31 patients with 221 measurement points. Using the normalized factor of normal tissues under an excitation source with a central wavelength of 365 nm and without a bandpass filter, the results revealed a sensitivity greater than 84%, a specificity of over 76%, an accuracy of about 80%, and an area under the receiver operating characteristic (ROC) curve of about 87%. These results show that LIAF spectroscopy has potential for ex vivo diagnosis and noninvasive examination of oral cancer.

  3. Hadamard multimode optical imaging transceiver

    DOEpatents

    Cooke, Bradly J; Guenther, David C; Tiee, Joe J; Kellum, Mervyn J; Olivas, Nicholas L; Weisse-Bernstein, Nina R; Judd, Stephen L; Braun, Thomas R

    2012-10-30

    Disclosed is a method and system for simultaneously acquiring and producing results for multiple image modes using a common sensor without optical filtering, scanning, or other moving parts. The system and method utilize the Walsh-Hadamard correlation detection process (e.g., functions/matrix) to provide an all-binary structure that permits seamless bridging between the analog and digital domains. An embodiment may capture an incoming optical signal at an optical aperture, convert the optical signal to an electrical signal, pass the electrical signal through a Low-Noise Amplifier (LNA) to create an LNA signal, pass the LNA signal through one or more correlators where each correlator has a corresponding Walsh-Hadamard (WH) binary basis function, calculate a correlation output coefficient for each correlator as a function of the corresponding WH binary basis function in accordance with Walsh-Hadamard mathematical principles, digitize each of the correlation output coefficients by passing it through an Analog-to-Digital Converter (ADC), and perform image mode processing on the digitized correlation output coefficients as desired to produce one or more image modes. Some potential image modes include: multi-channel access, temporal, range, three-dimensional, and synthetic aperture.
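
    A numerical sketch of the correlation stage only: each coefficient is the inner product of the detected waveform with one +/-1 Walsh-Hadamard basis function, and the self-inverse property of Hadamard matrices allows reconstruction. This models the mathematics, not the analog correlator hardware of the patent.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    def wh_coefficients(signal):
        """Correlate a waveform against Walsh-Hadamard binary basis functions.

        signal: 1-D array whose length is a power of two. Each coefficient
        mirrors one per-correlator output described above.
        """
        n = len(signal)
        H = hadamard(n)                 # rows are +/-1 WH basis functions
        return H @ signal / n

    def wh_reconstruct(coeffs):
        """Invert the transform (H @ H = n * I for Sylvester Hadamard matrices)."""
        return hadamard(len(coeffs)) @ coeffs
    ```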

  4. Retinal Image Quality Assessment for Spaceflight-Induced Vision Impairment Study

    NASA Technical Reports Server (NTRS)

    Vu, Amanda Cadao; Raghunandan, Sneha; Vyas, Ruchi; Radhakrishnan, Krishnan; Taibbi, Giovanni; Vizzeri, Gianmarco; Grant, Maria; Chalam, Kakarla; Parsons-Wingerter, Patricia

    2015-01-01

    Long-term exposure to space microgravity poses significant risks for visual impairment. Evidence suggests such vision changes are linked to cephalad fluid shifts, prompting a need to directly quantify microgravity-induced retinal vascular changes. The quality of retinal images used for such vascular remodeling analysis, however, is dependent on imaging methodology. For our exploratory study, we hypothesized that retinal images captured using fluorescein imaging methodologies would be of higher quality in comparison to images captured without fluorescein. A semi-automated image quality assessment was developed using Vessel Generation Analysis (VESGEN) software and MATLAB® image analysis toolboxes. An analysis of ten images found that the fluorescein imaging modality provided a 36% increase in overall image quality (two-tailed p=0.089) in comparison to nonfluorescein imaging techniques.

  5. Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images

    PubMed Central

    Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong

    2015-01-01

    In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages. First, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusion rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposition levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically, and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%, demonstrating the validity and excellent performance of the proposed method compared to other methods. PMID:26703596

  6. Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images.

    PubMed

    Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong

    2015-12-12

    In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages. First, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusion rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposition levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically, and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%, demonstrating the validity and excellent performance of the proposed method compared to other methods.
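
    A minimal sketch of the wavelet fusion idea in step (2), using PyWavelets with one simple fusion rule: average the approximation bands and keep the larger-magnitude detail coefficients. The actual hybrid rule, wavelet choice, and decomposition levels in the paper may differ.

    ```python
    import numpy as np
    import pywt

    def wavelet_fuse(palmprint, vein, wavelet="db2"):
        """Fuse two registered palm images in the wavelet domain.

        Approximation bands are averaged; for each detail band the
        coefficient with the larger magnitude is kept.
        """
        cA1, (cH1, cV1, cD1) = pywt.dwt2(palmprint, wavelet)
        cA2, (cH2, cV2, cD2) = pywt.dwt2(vein, wavelet)
        pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
        fused = ((cA1 + cA2) / 2.0,
                 (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
        return pywt.idwt2(fused, wavelet)   # fused image for LLF extraction
    ```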

  7. Beyond excitation/inhibition imbalance in multidimensional models of neural circuit changes in brain disorders.

    PubMed

    O'Donnell, Cian; Gonçalves, J Tiago; Portera-Cailliau, Carlos; Sejnowski, Terrence J

    2017-10-11

    A leading theory holds that neurodevelopmental brain disorders arise from imbalances in excitatory and inhibitory (E/I) brain circuitry. However, it is unclear whether this one-dimensional model is rich enough to capture the multiple neural circuit alterations underlying brain disorders. Here, we combined computational simulations with analysis of in vivo two-photon Ca2+ imaging data from somatosensory cortex of Fmr1 knock-out (KO) mice, a model of Fragile-X Syndrome, to test the E/I imbalance theory. We found that: (1) The E/I imbalance model cannot account for joint alterations in the observed neural firing rates and correlations; (2) Neural circuit function is vastly more sensitive to changes in some cellular components over others; (3) The direction of circuit alterations in Fmr1 KO mice changes across development. These findings suggest that the basic E/I imbalance model should be updated to higher dimensional models that can better capture the multidimensional computational functions of neural circuits.

  8. Beyond excitation/inhibition imbalance in multidimensional models of neural circuit changes in brain disorders

    PubMed Central

    Gonçalves, J Tiago; Portera-Cailliau, Carlos

    2017-01-01

    A leading theory holds that neurodevelopmental brain disorders arise from imbalances in excitatory and inhibitory (E/I) brain circuitry. However, it is unclear whether this one-dimensional model is rich enough to capture the multiple neural circuit alterations underlying brain disorders. Here, we combined computational simulations with analysis of in vivo two-photon Ca2+ imaging data from somatosensory cortex of Fmr1 knock-out (KO) mice, a model of Fragile-X Syndrome, to test the E/I imbalance theory. We found that: (1) The E/I imbalance model cannot account for joint alterations in the observed neural firing rates and correlations; (2) Neural circuit function is vastly more sensitive to changes in some cellular components over others; (3) The direction of circuit alterations in Fmr1 KO mice changes across development. These findings suggest that the basic E/I imbalance model should be updated to higher dimensional models that can better capture the multidimensional computational functions of neural circuits. PMID:29019321

  9. Micro-scale thermal imaging of CO2 absorption in the thermochemical energy storage of Li metal oxides at high temperature

    NASA Astrophysics Data System (ADS)

    Morikawa, Junko; Takasu, Hiroki; Zamengo, Massimiliano; Kato, Yukitaka

    2017-05-01

    Li-metal oxides (a typical example being lithium orthosilicate, Li4SiO4) are regarded as novel solid carbon dioxide (CO2) absorbents whose uptake is accompanied by an exothermic reaction. At temperatures above 700°C the sorbent is regenerated with the release of the captured CO2 in an endothermic reaction. As the reaction equilibrium of this reversible chemical reaction is controllable only by the partial pressure of CO2, the system is regarded as a potential candidate for chemical heat storage at high temperatures. In this study, we applied our recently developed mobile micro-scale infrared thermal imaging system to observe the heat of the chemical reaction of Li4SiO4 and CO2 at temperatures of 600°C or higher. In order to quantify the micro-scale heat transfer and heat exchange in the chemical reaction, a superimposed signal-processing system was set up to determine the precise temperature. Under an ambient flow of carbon dioxide, a powder of Li4SiO4 with a particle diameter of 50 μm began to glow above 600°C owing to the exothermic reaction heat. The phenomenon accelerated with increasing temperature up to 700°C. At the same time, the reaction product, lithium carbonate (Li2CO3), began to melt in an endothermic phase change above 700°C, and these thermal behaviors were captured by thermal imaging. The direct measurement of multiple thermal phenomena at high temperatures is significant for promoting the efficient design of chemical heat storage materials. This is the first observation of the exothermic heat of the reaction of Li4SiO4 and CO2 at around 700°C by the thermal imaging method.

  10. Quantifying efficacy and limits of unmanned aerial vehicle (UAV) technology for weed seedling detection as affected by sensor resolution.

    PubMed

    Peña, José M; Torres-Sánchez, Jorge; Serrano-Pérez, Angélica; de Castro, Ana I; López-Granados, Francisca

    2015-03-06

    In order to optimize the application of herbicides in weed-crop systems, accurate and timely weed maps of the crop-field are required. In this context, this investigation quantified the efficacy and limitations of remote images collected with an unmanned aerial vehicle (UAV) for early detection of weed seedlings. The ability to discriminate weeds was significantly affected by the imagery spectral (type of camera), spatial (flight altitude) and temporal (the date of the study) resolutions. The colour-infrared images captured at 40 m and 50 days after sowing (date 2), when plants had 5-6 true leaves, had the highest weed detection accuracy (up to 91%). At this flight altitude, the images captured before date 2 had slightly better results than the images captured later. However, this trend changed in the visible-light images captured at 60 m and higher, which had notably better results on date 3 (57 days after sowing) because of the larger size of the weed plants. Our results showed the requirements on spectral and spatial resolutions needed to generate a suitable weed map early in the growing season, as well as the best moment for the UAV image acquisition, with the ultimate objective of applying site-specific weed management operations.

  11. Quantifying Efficacy and Limits of Unmanned Aerial Vehicle (UAV) Technology for Weed Seedling Detection as Affected by Sensor Resolution

    PubMed Central

    Peña, José M.; Torres-Sánchez, Jorge; Serrano-Pérez, Angélica; de Castro, Ana I.; López-Granados, Francisca

    2015-01-01

    In order to optimize the application of herbicides in weed-crop systems, accurate and timely weed maps of the crop-field are required. In this context, this investigation quantified the efficacy and limitations of remote images collected with an unmanned aerial vehicle (UAV) for early detection of weed seedlings. The ability to discriminate weeds was significantly affected by the imagery spectral (type of camera), spatial (flight altitude) and temporal (the date of the study) resolutions. The colour-infrared images captured at 40 m and 50 days after sowing (date 2), when plants had 5–6 true leaves, had the highest weed detection accuracy (up to 91%). At this flight altitude, the images captured before date 2 had slightly better results than the images captured later. However, this trend changed in the visible-light images captured at 60 m and higher, which had notably better results on date 3 (57 days after sowing) because of the larger size of the weed plants. Our results showed the requirements on spectral and spatial resolutions needed to generate a suitable weed map early in the growing season, as well as the best moment for the UAV image acquisition, with the ultimate objective of applying site-specific weed management operations. PMID:25756867

  12. SUMMIT (Serially Unified Multicenter Multiple Sclerosis Investigation): creating a repository of deeply phenotyped contemporary multiple sclerosis cohorts.

    PubMed

    Bove, Riley; Chitnis, Tanuja; Cree, Bruce Ac; Tintoré, Mar; Naegelin, Yvonne; Uitdehaag, Bernard Mj; Kappos, Ludwig; Khoury, Samia J; Montalban, Xavier; Hauser, Stephen L; Weiner, Howard L

    2017-08-01

    There is a pressing need for robust longitudinal cohort studies in the modern treatment era of multiple sclerosis. Our objective was to build a multiple sclerosis (MS) cohort repository to capture the variability of disability accumulation, as well as provide the depth of characterization (clinical, radiologic, genetic, biospecimens) required to adequately model and ultimately predict a patient's course. Serially Unified Multicenter Multiple Sclerosis Investigation (SUMMIT) is an international multi-center, prospectively enrolled cohort with over a decade of comprehensive follow-up on more than 1000 patients from two large North American academic MS centers (Brigham and Women's Hospital (Comprehensive Longitudinal Investigation of Multiple Sclerosis at the Brigham and Women's Hospital (CLIMB; BWH)) and University of California, San Francisco (Expression/genomics, Proteomics, Imaging, and Clinical (EPIC))). It is bringing online more than 2500 patients from additional international MS centers (Basel (Universitätsspital Basel (UHB)), VU University Medical Center MS Center Amsterdam (MSCA), Multiple Sclerosis Center of Catalonia-Vall d'Hebron Hospital (Barcelona clinically isolated syndrome (CIS) cohort), and American University of Beirut Medical Center (AUBMC-Multiple Sclerosis Interdisciplinary Research (AMIR))). We provide evidence for harmonization of two of the initial cohorts in terms of the characterization of demographics, disease, and treatment-related variables; demonstrate several proof-of-principle analyses examining genetic and radiologic predictors of disease progression; and discuss the steps involved in expanding SUMMIT into a repository accessible to the broader scientific community.

  13. Historic Methods for Capturing Magnetic Field Images

    ERIC Educational Resources Information Center

    Kwan, Alistair

    2016-01-01

    I investigated two late 19th-century methods for capturing magnetic field images from iron filings for historical insight into the pedagogy of hands-on physics education methods, and to flesh out teaching and learning practicalities tacit in the historical record. Both methods offer opportunities for close sensory engagement in data-collection…

  14. Techniques of noninvasive optical tomographic imaging

    NASA Astrophysics Data System (ADS)

    Rosen, Joseph; Abookasis, David; Gokhler, Mark

    2006-01-01

    Recently invented methods of optical tomographic imaging through scattering and absorbing media are presented. In the first method, the three-dimensional structure of an object hidden between two biological tissues is recovered from many noisy speckle pictures obtained at the output of a multi-channeled optical imaging system. Objects are recovered from many speckled images observed by a digital camera through two stereoscopic microlens arrays. Each microlens in each array generates a speckle image of the object buried between the layers. In the computer, each image is Fourier transformed jointly with an image of a speckled point-like source captured under the same conditions. A set of the squared magnitudes of the Fourier-transformed pictures is accumulated to form a single average picture. This final picture is again Fourier transformed, resulting in the three-dimensional reconstruction of the hidden object. In the second method, the effect of spatial longitudinal coherence is used for imaging through an absorbing layer whose thickness, or index of refraction, varies along the layer. The technique is based on synthesizing a spatial degree of coherence with multiple peaks. This degree of coherence enables us to scan different sample points at different altitudes simultaneously, and thus decreases the acquisition time. The same multi-peak degree of coherence is also used for imaging through the absorbing layer. All our experiments were performed with a quasi-monochromatic light source; problems of dispersion and inhomogeneous absorption are therefore avoided.
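
    The first method reduces to a short numerical recipe: Fourier transform each speckle image jointly with its point-source reference, accumulate the squared magnitudes, and Fourier transform the average once more. Below is a minimal sketch of that pipeline, assuming a joint-transform arrangement in which equal-sized object and reference speckle frames are placed side by side; the function name and data layout are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def reconstruct_from_speckles(object_images, reference_images):
        """Average the joint power spectra of (object, reference) speckle
        pairs, then Fourier transform the average; the hidden object
        reappears in the off-axis correlation terms."""
        acc = None
        for obj, ref in zip(object_images, reference_images):
            # Side-by-side placement forms the joint input plane
            joint = np.concatenate([obj, ref], axis=1)
            power = np.abs(np.fft.fft2(joint)) ** 2   # squared magnitude
            acc = power if acc is None else acc + power
        acc /= len(object_images)
        # Second Fourier transform of the averaged power spectrum
        return np.abs(np.fft.fftshift(np.fft.fft2(acc)))
    ```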

  15. A 3D camera for improved facial recognition

    NASA Astrophysics Data System (ADS)

    Lewin, Andrew; Orchard, David A.; Scott, Andrew M.; Walton, Nicholas A.; Austin, Jim

    2004-12-01

    We describe a camera capable of recording 3D images of objects. It does this by projecting thousands of spots onto an object and then measuring the range to each spot by determining the parallax from a single frame. A second frame can be captured to record a conventional image, which can then be projected onto the surface mesh to form a rendered skin. The camera is able to locate the images of the spots to a precision of better than one tenth of a pixel, and from this it can determine range to an accuracy of better than 1 mm at 1 meter. The data can be recorded as a set of two images and reconstructed by forming a 'wire mesh' of range points and morphing the 2D image over this structure. The camera can be used to record images of faces and reconstruct the shape of the face, which allows the face to be viewed from various angles. This allows images to be inspected more critically for the purpose of identifying individuals. Multiple images can be stitched together to create full panoramic images of head-sized objects that can be viewed from any direction. The system is being tested with a graph matching system capable of fast and accurate shape comparisons for facial recognition. It can also be used with "models" of heads and faces to provide a means of obtaining biometric data.
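
    The quoted numbers follow from standard triangulation: range is Z = f·B/d for focal length f (in pixels), baseline B, and spot disparity d, so one-tenth-pixel spot localization translates directly into millimeter-level range accuracy. A small sketch under assumed parameters (the focal length and baseline below are hypothetical, not this camera's calibration):

    ```python
    def spot_range(f_px, baseline_m, disparity_px):
        """Triangulated range to one projected spot: Z = f * B / d."""
        return f_px * baseline_m / disparity_px

    def range_error(z_m, f_px, baseline_m, disparity_err_px=0.1):
        """Propagated error dZ = (Z^2 / (f * B)) * dd; with ~0.1 px spot
        localization, sub-millimeter accuracy at 1 m needs f * B of
        roughly 100 m*px or more."""
        return (z_m ** 2 / (f_px * baseline_m)) * disparity_err_px

    # e.g. a hypothetical f = 2000 px and B = 0.1 m gives
    # range_error(1.0, 2000, 0.1) = 0.0005 m, i.e. 0.5 mm at 1 m
    ```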

  16. A design of real time image capturing and processing system using Texas Instrument's processor

    NASA Astrophysics Data System (ADS)

    Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng

    2007-09-01

    In this work, we developed and implemented an image capturing and processing system equipped with the capability of capturing images from an input video in real time. The input video can come from a PC, a video camcorder, or a DVD player. We developed two modes of operation for the system. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and displayed on the PC. In the second mode, the current captured image from the video camcorder (or DVD player) is processed on the board but displayed on the LCD monitor. The major difference between our system and other existing conventional systems is that the image-processing functions are performed on the board instead of the PC (so that the functions can be used for further developments on the board). The user can control the operations of the board through the Graphical User Interface (GUI) provided on the PC. In order to have smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX™) technology to create a link between them. For image processing, we developed three main groups of functions: (1) Point Processing; (2) Filtering; and (3) 'Others'. Point Processing includes rotation, negation, and mirroring. The Filtering category provides median, adaptive, smoothing, and sharpening filters in the time domain. The 'Others' category provides auto-contrast adjustment, edge detection, segmentation, and sepia color; these functions either add an effect to the image or enhance it. We developed and implemented our system using the C/C# programming languages on the TMS320DM642 (DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. It is demonstrated that our system is adequate for real-time image capturing. Our system can be used or applied for applications such as medical imaging, video surveillance, etc.
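
    Point-processing operations map each pixel independently, which is what makes them cheap enough to run on a DSP board. As a minimal illustration of the three named operations (sketched here in Python/NumPy rather than the C/C# of the actual system):

    ```python
    import numpy as np

    def negate(img):
        """Negation: invert the intensities of an 8-bit image."""
        return 255 - img

    def mirror(img):
        """Mirroring: flip the image left-to-right."""
        return img[:, ::-1]

    def rotate90(img):
        """Rotation: turn the image 90 degrees counter-clockwise."""
        return np.rot90(img)
    ```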

  17. The UBIRIS.v2: a database of visible wavelength iris images captured on-the-move and at-a-distance.

    PubMed

    Proença, Hugo; Filipe, Sílvio; Santos, Ricardo; Oliveira, João; Alexandre, Luís A

    2010-08-01

    The iris is regarded as one of the most useful traits for biometric recognition, and the dissemination of nationwide iris-based recognition systems is imminent. However, currently deployed systems rely on heavy imaging constraints to capture near-infrared images with enough quality. Also, all of the publicly available iris image databases contain data corresponding to such imaging constraints and are therefore exclusively suitable for evaluating methods intended to operate in this type of environment. The main purpose of this paper is to announce the availability of the UBIRIS.v2 database, a multisession iris image database which singularly contains data captured in the visible wavelength, at-a-distance (between four and eight meters) and on-the-move. This database is freely available to researchers concerned with visible-wavelength iris recognition and will be useful in assessing the feasibility and specifying the constraints of this type of biometric recognition.

  18. A photonic crystal hydrogel suspension array for the capture of blood cells from whole blood

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Cai, Yunlang; Shang, Luoran; Wang, Huan; Cheng, Yao; Rong, Fei; Gu, Zhongze; Zhao, Yuanjin

    2016-02-01

    Diagnosing hematological disorders based on the separation and detection of cells in the patient's blood is a significant challenge. We have developed a novel barcode particle-based suspension array that can simultaneously capture and detect multiple types of blood cells. The barcode particles are polyacrylamide (PAAm) hydrogel inverse opal microcarriers with characteristic reflection peak codes that remain stable during cell capture on their surfaces. The hydrophilic PAAm hydrogel scaffolds of the barcode particles can entrap various plasma proteins to capture different cells in the blood, with little damage to captured cells. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr06368j

  19. Memory-Based Attention Capture when Multiple Items Are Maintained in Visual Working Memory

    PubMed Central

    Hollingworth, Andrew; Beck, Valerie M.

    2016-01-01

    Efficient visual search requires that attention is guided strategically to relevant objects, and most theories of visual search implement this function by means of a target template maintained in visual working memory (VWM). However, there is currently debate over the architecture of VWM-based attentional guidance. We contrasted a single-item-template hypothesis with a multiple-item-template hypothesis, which differ in their claims about structural limits on the interaction between VWM representations and perceptual selection. Recent evidence from van Moorselaar, Theeuwes, and Olivers (2014) indicated that memory-based capture during search—an index of VWM guidance—is not observed when memory set size is increased beyond a single item, suggesting that multiple items in VWM do not guide attention. In the present study, we maximized the overlap between multiple colors held in VWM and the colors of distractors in a search array. Reliable capture was observed when two colors were held in VWM and both colors were present as distractors, using both the original van Moorselaar et al. singleton-shape search task and a search task that required focal attention to array elements (gap location in outline square stimuli). In the latter task, memory-based capture was consistent with the simultaneous guidance of attention by multiple VWM representations. PMID:27123681

  20. The Art in Visualizing Natural Landscapes from Space

    NASA Astrophysics Data System (ADS)

    Webley, P. W.; Shipman, J. S.; Adams, T.

    2017-12-01

    Satellite remote sensing data can capture the changing Earth at cm resolution, across hundreds of spectral channels, and multiple times per hour. There is an art in combining these datasets together to fully capture the beauty of our planet. The resulting artistic piece can be further transformed by building in an accompanying musical score, allowing for a deeper emotional connection with the public. We make use of visible, near, middle and long wave infrared and radar data as well as different remote sensing techniques to uniquely capture our changing landscape in the spaceborne data. We will generate visually compelling imagery and videos that represent hazardous events from dust storms to landslides and from volcanic eruptions to forest fires. We will demonstrate how specific features of the Earth's landscape can be emphasized through the use of different datasets and color combinations and how, by adding a musical score, we can directly connect with the viewer and heighten their experience. We will also discuss our process to integrate the different aspects of our project together and how it could be developed to capture the beauty of other planets across the solar system using spaceborne imagery and data. Bringing together experts in art installations, composing musical scores, and remote sensing image visualization can lead to new and exciting artistic representations of geoscience data. The resulting product demonstrates there is an art to visualizing remote sensing data to capture the beauty of our planet and that incorporating a musical score can take us all to new places and emotions to enhance our experience.

  1. Automated complete slide digitization: a medium for simultaneous viewing by multiple pathologists.

    PubMed

    Leong, F J; McGee, J O

    2001-11-01

    Developments in telepathology robotic systems have evolved the concept of a 'virtual microscope' handling 'digital slides'. Slide digitization is a method of archiving salient histological features in numerical (digital) form. The value and potential of this have begun to be recognized by several international centres. Automated complete slide digitization has application at all levels of clinical practice and will benefit undergraduate, postgraduate, and continuing education. Unfortunately, as the volume of potential data on a histological slide represents a significant problem in terms of digitization, storage, and subsequent manipulation, the reality of virtual microscopy to date has comprised limited views at inadequate resolution. This paper outlines a system refined in the authors' laboratory, which employs a combination of enhanced hardware, image capture, and processing techniques designed for telepathology. The system is able to scan an entire slide at high magnification and create a library of such slides that may exist on an internet server or be distributed on removable media (such as CD-ROM or DVD). A digital slide allows image data manipulation at a level not possible with conventional light microscopy. Combinations of multiple users, multiple magnifications, annotations, and addition of ancillary textual and visual data are now possible. This demonstrates that with increased sophistication, the applications of telepathology technology need not be confined to second opinion, but can be extended on a wider front. Copyright 2001 John Wiley & Sons, Ltd.

  2. Collaborative sparse priors for multi-view ATR

    NASA Astrophysics Data System (ADS)

    Li, Xuelu; Monga, Vishal

    2018-04-01

    Recent work has seen a surge of sparse representation based classification (SRC) methods applied to automatic target recognition problems. While traditional SRC approaches used the l0 or l1 norm to quantify sparsity, spike and slab priors have established themselves as the gold standard for providing general tunable sparse structures on vectors. In this work, we employ collaborative spike and slab priors that can be applied to matrices to encourage sparsity for the problem of multi-view ATR. That is, target images captured from multiple views are expanded in terms of a training dictionary multiplied with a coefficient matrix. Ideally, for a test image set comprising multiple views of a target, the coefficients corresponding to its identifying class are expected to be active, while the others should be zero; i.e., the coefficient matrix is naturally sparse. We develop a new approach to solve the optimization problem that estimates the sparse coefficient matrix jointly with the sparsity-inducing parameters in the collaborative prior. ATR problems are investigated on the mid-wave infrared (MWIR) database made available by the US Army Night Vision and Electronic Sensors Directorate, which has a rich collection of views. Experimental results show that the proposed joint prior and coefficient estimation method (JPCEM) can: 1) improve accuracy when multiple views rather than a single one are invoked, and 2) outperform state-of-the-art alternatives, particularly when training imagery is limited.
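
    The SRC decision rule itself is compact: sparsely code each view against the training dictionary, then pick the class whose atoms best reconstruct the whole view set. The toy sketch below substitutes a plain l1 (Lasso) penalty for the paper's collaborative spike-and-slab prior, so it illustrates only the decision rule; the function names and solver choice are assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def src_classify(D, labels, test_views, alpha=0.01):
        """Multi-view SRC. D: (d, n) dictionary of vectorized training
        images (one per column); labels: (n,) array with the class of
        each atom; test_views: (d, v) matrix of views of one target."""
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(D, test_views)      # one sparse code per view
        C = lasso.coef_.T             # (n, v) coefficient matrix
        residuals = {}
        for cls in np.unique(labels):
            keep = (labels == cls)[:, None]
            C_cls = np.where(keep, C, 0.0)   # zero out other classes' atoms
            residuals[cls] = np.linalg.norm(test_views - D @ C_cls)
        # the identifying class leaves the smallest joint reconstruction error
        return min(residuals, key=residuals.get)
    ```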

  3. Project Report: Reducing Color Rivalry in Imagery for Conjugated Multiple Bandpass Filter Based Stereo Endoscopy

    NASA Technical Reports Server (NTRS)

    Ream, Allen

    2011-01-01

    A pair of conjugated multiple bandpass filters (CMBF) can be used to create spatially separated pupils in a traditional lens and imaging sensor system, allowing for the passive capture of stereo video. This method is especially useful for surgical endoscopy, where smaller cameras are needed to provide ample room for manipulating tools while also granting improved visualization of scene depth. The significant issue in this process is that, due to the complementary nature of the filters, the colors seen through each filter do not match each other and also differ from colors as seen under a white illumination source. A color correction model was implemented that included optimized filter selection, such that the degree of necessary post-processing correction was minimized, and a chromatic adaptation transformation that attempted to correct the imaged colors' tristimulus values based on the principle of color constancy. Due to fabrication constraints, only dual bandpass filters were feasible. The theoretical average color error after correction between these filters was still above the fusion limit, meaning that rivalry conditions are possible during viewing. This error can be minimized further by designing the filters for a subset of colors corresponding to specific working environments.

  4. In the making: SA-PIV applied to swimming practice

    NASA Astrophysics Data System (ADS)

    van Houwelingen, Josje; van de Water, Willem; Kunnen, Rudie; van Heijst, Gertjan; Clercx, Herman

    2017-11-01

    To understand and optimize propulsion in human swimming, a deep understanding of the hydrodynamics of swimming is required. This is usually based on experiments and numerical simulations under laboratory conditions. In this study, we bring basic fluid mechanics knowledge and experimental flow-measurement techniques into swimming practice itself. A flow visualization setup was built and placed in a regular swimming pool. The measurement volume contains five homogeneous air bubble curtains illuminated by ambient light. The bubbles in these curtains act as tracer particles. The bubble motion is captured by six cameras placed in the side wall of the pool. It is intended to apply SA-PIV (synthetic aperture PIV) to analyze the flow structures on multiple planes in the measurement volume; a sketch of the core PIV step follows below. The system has been calibrated, and the calibration data are used to refocus on the planes of interest. Multiple preprocessing steps need to be executed to obtain images of adequate quality before applying PIV. With a specially programmed video card that processes and analyzes the images in real time, feedback about swimming performance will become possible. We report on the first experimental data obtained by this system.
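
    The core step of any PIV chain, including the SA-PIV analysis planned here, is estimating the displacement of the tracer pattern between two interrogation windows from the peak of their cross-correlation. A minimal FFT-based sketch, with names chosen for illustration:

    ```python
    import numpy as np

    def piv_displacement(win_a, win_b):
        """Displacement (dy, dx) of the tracer pattern between two
        interrogation windows, from the FFT cross-correlation peak."""
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        # circular cross-correlation of a with b via the FFT
        corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
        corr = np.fft.fftshift(corr)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        center = np.array(corr.shape) // 2
        return np.array(peak) - center   # shift in pixels
    ```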

  5. Multi-camera volumetric PIV for the study of jumping fish

    NASA Astrophysics Data System (ADS)

    Mendelson, Leah; Techet, Alexandra H.

    2018-01-01

    Archer fish accurately jump multiple body lengths for aerial prey from directly below the free surface. Multiple fins provide combinations of propulsion and stabilization, enabling prey capture success. Volumetric flow field measurements are crucial to characterizing multi-propulsor interactions during this highly three-dimensional maneuver; however, the fish's behavior also drives unique experimental constraints. Measurements must be obtained in close proximity to the water's surface and in regions of the flow field which are partially-occluded by the fish body. Aerial jump trajectories must also be known to assess performance. This article describes experiment setup and processing modifications to the three-dimensional synthetic aperture particle image velocimetry (SAPIV) technique to address these challenges and facilitate experimental measurements on live jumping fish. The performance of traditional SAPIV algorithms in partially-occluded regions is characterized, and an improved non-iterative reconstruction routine for SAPIV around bodies is introduced. This reconstruction procedure is combined with three-dimensional imaging on both sides of the free surface to reveal the fish's three-dimensional wake, including a series of propulsive vortex rings generated by the tail. In addition, wake measurements from the anal and dorsal fins indicate their stabilizing and thrust-producing contributions as the archer fish jumps.

  6. Computer-assisted sperm analysis (CASA): capabilities and potential developments.

    PubMed

    Amann, Rupert P; Waberski, Dagmar

    2014-01-01

    Computer-assisted sperm analysis (CASA) systems have evolved over approximately 40 years, through advances in devices to capture the image from a microscope, huge increases in computational power concurrent with amazing reduction in size of computers, new computer languages, and updated/expanded software algorithms. Remarkably, basic concepts for identifying sperm and their motion patterns are little changed. Older and slower systems remain in use. Most major spermatology laboratories and semen processing facilities have a CASA system, but the extent of reliance thereon ranges widely. This review describes capabilities and limitations of present CASA technology used with boar, bull, and stallion sperm, followed by possible future developments. Each marketed system is different. Modern CASA systems can automatically view multiple fields in a shallow specimen chamber to capture strobe-like images of 500 to >2000 sperm, at 50 or 60 frames per second, in clear or complex extenders, and in <2 minutes, store information for ≥ 30 frames and provide summary data for each spermatozoon and the population. A few systems evaluate sperm morphology concurrent with motion. CASA cannot accurately predict 'fertility' that will be obtained with a semen sample or subject. However, when carefully validated, current CASA systems provide information important for quality assurance of semen planned for marketing, and for the understanding of the diversity of sperm responses to changes in the microenvironment in research. The four take-home messages from this review are: (1) animal species, extender or medium, specimen chamber, intensity of illumination, imaging hardware and software, instrument settings, technician, etc., all affect accuracy and precision of output values; (2) semen production facilities probably do not need a substantially different CASA system whereas biology laboratories would benefit from systems capable of imaging and tracking sperm in deep chambers for a flexible period of time; (3) software should enable grouping of individual sperm based on one or more attributes so outputs reflect subpopulations or clusters of similar sperm with unique properties; means or medians for the total population are insufficient; and (4) a field-use, portable CASA system for measuring one motion and two or three morphology attributes of individual sperm is needed for field theriogenologists or andrologists working with human sperm outside urban centers; appropriate hardware to capture images and process data apparently are available. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Comparative Accuracy of Facial Models Fabricated Using Traditional and 3D Imaging Techniques.

    PubMed

    Lincoln, Ketu P; Sun, Albert Y T; Prihoda, Thomas J; Sutton, Alan J

    2016-04-01

    The purpose of this investigation was to compare the accuracy of facial models fabricated using facial moulage impression methods with three-dimensional printed (3DP) fabrication methods using soft tissue images obtained from cone beam computed tomography (CBCT) and 3D stereophotogrammetry (3D-SPG) scans. A reference phantom model was fabricated using a 3D-SPG image of a human control form with ten fiducial markers placed on common anthropometric landmarks. This image was converted into the investigation control phantom model (CPM) using 3DP methods. The CPM was attached to a camera tripod for ease of image capture. Three CBCT and three 3D-SPG images of the CPM were captured. The DICOM and STL files from the three 3dMD and three CBCT images were imported for 3DP, and six testing models were made. Reversible hydrocolloid and dental stone were used to make three facial moulages of the CPM, and the impressions/casts were poured in type IV gypsum dental stone. A coordinate measuring machine (CMM) was used to measure the distances between each of the ten fiducial markers. Each measurement was made using one point as a static reference to the other nine points. The same measuring procedures were performed on all specimens. All measurements were compared between specimens and the control. The data were analyzed using ANOVA and Tukey pairwise comparison of the raters, methods, and fiducial markers. The ANOVA multiple comparisons showed significant differences among the three methods (p < 0.05). Further, the interaction of methods versus fiducial markers also showed a significant difference (p < 0.05). The CBCT and facial moulage methods showed the greatest accuracy. 3DP models fabricated using 3D-SPG showed a statistical difference in comparison to the models fabricated using the traditional method of facial moulage and 3DP models fabricated from CBCT imaging. 3DP models fabricated using 3D-SPG were less accurate than the CPM and the models fabricated using facial moulage and CBCT imaging techniques. © 2015 by the American College of Prosthodontists.

  8. Ultrahigh-frame CCD imagers

    NASA Astrophysics Data System (ADS)

    Lowrance, John L.; Mastrocola, V. J.; Renda, George F.; Swain, Pradyumna K.; Kabra, R.; Bhaskaran, Mahalingham; Tower, John R.; Levine, Peter A.

    2004-02-01

    This paper describes the architecture, process technology, and performance of a family of high burst rate CCDs. These imagers employ high speed, low lag photo-detectors with local storage at each photo-detector to achieve image capture at rates greater than 10^6 frames per second. One imager has a 64 x 64 pixel array with 12 frames of storage. A second imager has an 80 x 160 array with 28 frames of storage, and the third imager has a 64 x 64 pixel array with 300 frames of storage. Application areas include capture of rapid mechanical motion, optical wavefront sensing, fluid cavitation research, combustion studies, plasma research, and wind-tunnel-based gas dynamics research.

  9. Comparison of three-dimensional surface-imaging systems.

    PubMed

    Tzou, Chieh-Han John; Artner, Nicole M; Pona, Igor; Hold, Alina; Placheta, Eva; Kropatsch, Walter G; Frey, Manfred

    2014-04-01

    In recent decades, three-dimensional (3D) surface-imaging technologies have gained popularity worldwide, but because most published articles that mention them are technical, clinicians often have difficulty gaining a proper understanding of them. This article aims to provide the reader with relevant information on 3D surface-imaging systems. In it, we compare the most recent technologies to reveal their differences. We assessed five international companies with the latest technologies in 3D surface-imaging systems: 3dMD, Axisthree, Canfield, Crisalix and Dimensional Imaging (Di3D; in alphabetical order). We evaluated their technical equipment, independent validation studies, and corporate backgrounds. The fastest capturing devices are the 3dMD and Di3D systems, capable of capturing images within 1.5 and 1 ms, respectively. All companies provide software for tissue modifications. Additionally, 3dMD, Canfield, and Di3D can fuse computed tomography (CT)/cone-beam computed tomography (CBCT) images into their 3D surface-imaging data. 3dMD and Di3D provide 4D capture systems, which allow capturing the movement of a 3D surface over time. Crisalix differs greatly from the other four systems as it is purely web based and realised via cloud computing. 3D surface-imaging systems are becoming important in today's plastic surgical set-ups, taking surgeons to a new level of communication with patients, surgical planning, and outcome evaluation. The technologies used in 3D surface-imaging systems and their intended fields of application vary among the companies evaluated. Potential users should define their requirements and the intended role of 3D surface-imaging systems in their clinical and research environments before making the final decision to purchase. Copyright © 2014 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  10. 40 CFR 63.3546 - How do I establish the emission capture system and add-on control device operating limits during...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... capture system and add-on control device operating limits during the performance test? 63.3546 Section 63... of key parameters of the valve operating system (e.g., solenoid valve operation, air pressure... minimum operating limit for that specific capture device or system of multiple capture devices. The...

  11. 40 CFR 63.3546 - How do I establish the emission capture system and add-on control device operating limits during...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... capture system and add-on control device operating limits during the performance test? 63.3546 Section 63... of key parameters of the valve operating system (e.g., solenoid valve operation, air pressure... minimum operating limit for that specific capture device or system of multiple capture devices. The...

  12. The application of a low-cost 3D depth camera for patient set-up and respiratory motion management in radiotherapy

    NASA Astrophysics Data System (ADS)

    Tahavori, Fatemeh

    Respiratory motion induces uncertainty in External Beam Radiotherapy (EBRT), which can result in sub-optimal dose delivery to the target tissue and unwanted dose to normal tissue. The conventional approach to managing patient respiratory motion for EBRT within the area of abdominal-thoracic cancer is through the use of internal radiological imaging methods (e.g. Megavoltage imaging or Cone-Beam Computed Tomography) or via surrogate estimates of tumour position using external markers placed on the patient's chest. This latter method uses tracking with video-based techniques and relies on an assumed correlation, or mathematical model, between the external surrogate signal and the internal target position. The marker's trajectory can be used in both respiratory gating techniques and real-time tracking methods. Internal radiological imaging methods bring with them limited temporal resolution and additional radiation burden, which can be addressed by external marker-based methods that carry no such issues. Moreover, by including multiple external markers and placing them closer to the internal target organs, the efficiency of correlation algorithms can be increased. However, the quality of such external monitoring methods is underpinned by the performance of the associated correlation model. Therefore, several new approaches to correlation modelling have been developed as part of this thesis and compared using publicly available datasets. Highly competitive results have been obtained when compared against state-of-the-art methods. Marker-based methods also have the disadvantages of requiring manual set-up time for marker placement and patient positioning, and potential issues with reproducibility of marker placement. This motivates the investigation of non-contact marker-free methods for use in EBRT, which is the main topic of this thesis. The Microsoft Kinect is used as an example of a low-cost consumer-grade 3D depth camera for capturing and analysing external respiratory motion. This thesis presents the first detailed studies of external respiratory motion captured using such low-cost technology and demonstrates its potential in a healthcare environment. Firstly, the fundamental performance of a range of Microsoft Kinect sensors is assessed for use in radiotherapy (and potentially other healthcare applications), in terms of static and dynamic performance using both phantoms and volunteers. Then external respiratory motion is captured using the above technology from a group of 32 healthy volunteers, and Principal Component Analysis (PCA) is applied to a region of interest encompassing the complete anterior surface to characterize breathing style. This work demonstrates that this surface motion can be compactly described by the first two PCA eigenvectors. The reproducibility of subject-specific EBRT set-up using conventional laser-based alignment and marker-based Deep Inspiration Breath Hold (DIBH) methods is also studied using the Microsoft Kinect sensor. A cohort of five healthy female volunteers was repeatedly set up for left-sided breast cancer EBRT, and multiple DIBH episodes were captured over five separate sessions representing multiple fractionated radiotherapy treatment sessions, but without dose delivery. This provided an independent assessment that subjects were set up with variations generally within currently accepted margins of clinical practice. Moreover, this work demonstrated the potential role of consumer-grade 3D depth camera technology as a possible replacement for marker-based set-up and DIBH management procedures. This brings the additional benefits of low cost and potentially improved throughput, as patient set-up could ultimately be fully automated with this technology, and DIBH could be independently monitored without requiring preparatory manual intervention.
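
    The PCA step above reduces each depth frame to a handful of scores that act as breathing signals. A minimal sketch of that analysis, assuming the chest-ROI depth frames are stacked as a (frames × height × width) array; the names are illustrative:

    ```python
    import numpy as np

    def breathing_components(depth_frames, n_components=2):
        """PCA over a depth-frame ROI: per the thesis, the first two
        eigenvectors compactly describe anterior-surface motion."""
        t = depth_frames.shape[0]
        X = depth_frames.reshape(t, -1)      # one flattened surface per row
        X = X - X.mean(axis=0)               # remove the static surface
        # SVD of the centered data yields principal components without
        # forming the (huge) pixel-by-pixel covariance matrix
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        scores = U[:, :n_components] * S[:n_components]  # breathing traces
        modes = Vt[:n_components]                        # spatial eigenvectors
        return scores, modes
    ```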

  13. Capturing and stitching images with a large viewing angle and low distortion properties for upper gastrointestinal endoscopy

    NASA Astrophysics Data System (ADS)

    Liu, Ya-Cheng; Chung, Chien-Kai; Lai, Jyun-Yi; Chang, Han-Chao; Hsu, Feng-Yi

    2013-06-01

    Upper gastrointestinal endoscopies are primarily performed to observe pathologies of the esophagus, stomach, and duodenum. However, when an endoscope is pushed into the esophagus or stomach by the physician, the organs behave like a balloon being gradually inflated. Consequently, their shapes and the depth-of-field of the images change continually, preventing thorough examination of inflammation or anabrosis sites and delaying treatment. In this study, a 2.9-mm image-capturing module and a convoluted mechanism were incorporated into a tube like that of a standard 10-mm upper gastrointestinal endoscope. The scale-invariant feature transform (SIFT) algorithm was adopted to implement disease feature extraction on a koala doll. Following feature extraction, the smoothly varying affine stitching (SVAS) method was employed to resolve stitching distortion problems. Subsequently, the real-time splicing software developed in this study was embedded in an upper gastrointestinal endoscope to obtain a panoramic view of stomach inflammation in the captured images. The results showed that the 2.9-mm image-capturing module can provide approximately 50 verified images in one spin cycle, a viewing angle of 120° can be attained, and less than 10% distortion can be achieved in each image. Therefore, these methods can solve the problems encountered when using a standard 10-mm upper gastrointestinal endoscope with a single camera, such as image distortion and partial inflammation displays. The results also showed that the SIFT algorithm provides the highest correct matching rate, and the SVAS method can be employed to resolve the parallax problems caused by stitching together images of different flat surfaces.
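
    The SIFT-plus-stitching pipeline can be sketched with standard tools: detect and match features between overlapping frames, then estimate a robust warp. The sketch below uses OpenCV and a single global affine transform as a simplified stand-in for the paper's smoothly varying affine stitching (SVAS), which instead fits spatially varying affines to handle parallax.

    ```python
    import cv2
    import numpy as np

    def match_and_warp(img1, img2, ratio=0.75):
        """Match SIFT features between two overlapping frames and warp
        img2 into img1's coordinate frame with one robust affine."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        # Lowe's ratio test discards ambiguous correspondences
        good = [m for m, n in matches if m.distance < ratio * n.distance]
        src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        A, _ = cv2.estimateAffinePartial2D(dst, src)  # RANSAC-based estimate
        h, w = img1.shape[:2]
        return cv2.warpAffine(img2, A, (2 * w, h))    # room for the panorama
    ```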

  14. Building quantitative, three-dimensional atlases of gene expression and morphology at cellular resolution.

    PubMed

    Knowles, David W; Biggin, Mark D

    2013-01-01

    Animals comprise dynamic three-dimensional arrays of cells that express gene products in intricate spatial and temporal patterns that determine cellular differentiation and morphogenesis. A rigorous understanding of these developmental processes requires automated methods that quantitatively record and analyze complex morphologies and their associated patterns of gene expression at cellular resolution. Here we summarize light microscopy-based approaches to establish permanent, quantitative datasets-atlases-that record this information. We focus on experiments that capture data for whole embryos or large areas of tissue in three dimensions, often at multiple time points. We compare and contrast the advantages and limitations of different methods and highlight some of the discoveries made. We emphasize the need for interdisciplinary collaborations and integrated experimental pipelines that link sample preparation, image acquisition, image analysis, database design, visualization, and quantitative analysis. Copyright © 2013 Wiley Periodicals, Inc.

  15. Fires in Central Africa

    NASA Image and Video Library

    2017-12-08

    Widespread agricultural burning continues throughout central Africa. Smoke and fires in several countries were seen by the Suomi NPP satellite. Most of the fires were burning in the southern region of the Democratic Republic of the Congo, Tanzania, Zambia and Angola. NASA-NOAA's Suomi NPP satellite's Visible Infrared Imaging Radiometer Suite (VIIRS) instrument captured a look at multiple fires and smoke on August 1 at 7:55 a.m. EDT (11:55 UTC). Actively burning areas, detected by VIIRS, are outlined in red. Credit: NASA/Jeff Schmaltz/NASA Goddard Rapid Response Team

  16. Integrating musculoskeletal sonography into rehabilitation: Therapists’ experiences with training and implementation

    PubMed Central

    Gray, Julie McLaughlin; Frank, Gelya; Roll, Shawn C.

    2018-01-01

    Musculoskeletal sonography is rapidly extending beyond radiology; however, best practices for successful integration into new practice contexts are unknown. This study explored non-physician experiences with the processes of training and integration of musculoskeletal sonography into rehabilitation. Qualitative data were captured through multiple sources and iterative thematic analysis was used to describe two occupational therapists’ experiences. The dominant emerging theme was competency, in three domains: technical, procedural and analytical. Additionally, three practice considerations were illuminated: (1) understanding imaging within the dynamics of rehabilitation, (2) navigating nuances of interprofessional care, and (3) implications for post-professional training. Findings indicate that sonography training for rehabilitation providers requires multi-level competency development and consideration of practice complexities. These data lay a foundation on which to explore and develop best practices for incorporating sonographic imaging into the clinic as a means for engaging clients as active participants in the rehabilitation process to improve health and rehabilitation outcomes. PMID:28830315

  17. Spectral-spatial classification of hyperspectral image using three-dimensional convolution network

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Yu, Xuchu; Zhang, Pengqiang; Tan, Xiong; Wang, Ruirui; Zhi, Lu

    2018-01-01

    Recently, hyperspectral image (HSI) classification has become a focus of research. However, the complex structure of an HSI makes feature extraction difficult to achieve. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. The design of an improved 3-D convolutional neural network (3D-CNN) model for HSI classification is described. This model extracts features from both the spectral and spatial dimensions through the application of 3-D convolutions, thereby capturing the important discrimination information encoded in multiple adjacent bands. The designed model views the HSI cube data altogether without relying on any pre- or postprocessing. In addition, the model is trained in an end-to-end fashion without any handcrafted features. The designed model was applied to three widely used HSI datasets. The experimental results demonstrate that the 3D-CNN-based method outperforms conventional methods even with limited labeled training samples.
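
    Since the paper's exact architecture is not given in the abstract, here is a minimal 3-D CNN in the same spirit, sketched with PyTorch: the 3-D kernels slide over (bands, height, width) jointly, so adjacent spectral bands are convolved together rather than treated as independent channels. Layer sizes are illustrative, not the authors'.

    ```python
    import torch
    import torch.nn as nn

    class HSI3DCNN(nn.Module):
        """Toy spectral-spatial classifier for HSI patches."""
        def __init__(self, n_classes):
            super().__init__()
            self.features = nn.Sequential(
                # kernels span 7 bands x 3 x 3 pixels, capturing joint
                # spectral-spatial structure across adjacent bands
                nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
                nn.ReLU(),
                nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(16, n_classes)

        def forward(self, x):  # x: (batch, 1, bands, patch_h, patch_w)
            return self.classifier(self.features(x).flatten(1))

    # e.g. logits = HSI3DCNN(n_classes=9)(torch.randn(4, 1, 103, 7, 7))
    ```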

  18. Watching the action unfold: New cryo-EM images capture CRISPR’s interaction with target DNA | Center for Cancer Research

    Cancer.gov

    Using the Nobel-prize-winning technique of cryo-EM, researchers led by CCR Senior Investigator Sriram Subramaniam, Ph.D., have captured a series of highly detailed images of a protein complex belonging to the CRISPR system that can be used by bacteria to recognize and destroy foreign DNA. The images reveal the molecule's form before and after its interaction with DNA and help…

  19. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation

    PubMed Central

    Beijbom, Oscar; Edmunds, Peter J.; Roelfsema, Chris; Smith, Jennifer; Kline, David I.; Neal, Benjamin P.; Dunlap, Matthew J.; Moriarty, Vincent; Fan, Tung-Yung; Tan, Chih-Jui; Chan, Stephen; Treibitz, Tali; Gamst, Anthony; Mitchell, B. Greg; Kriegman, David

    2015-01-01

    Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time-consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey images captured at four Pacific coral reefs. Inter- and intra-annotator variability among six human experts was quantified and compared to semi- and fully-automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys. PMID:26154157

  20. High-dynamic range imaging techniques based on both color-separation algorithms used in conventional graphic arts and the human visual perception modeling

    NASA Astrophysics Data System (ADS)

    Lo, Mei-Chun; Hsieh, Tsung-Hsien; Perng, Ruey-Kuen; Chen, Jiong-Qiao

    2010-01-01

    The aim of this research is to derive illuminant-independent HDR imaging modules that can optimally reconstruct, multispectrally, every color of concern in high-dynamic-range original images for preferable cross-media color reproduction applications. Each module, based on either a broadband or a multispectral approach, incorporates models of perceptual HDR tone-mapping and device characterization. In this study, an HDR digital camera with the xvYCC format was used to capture HDR scene images for testing. A tone-mapping module was derived based on a multiscale representation of the human visual system, using equations similar to the Michaelis-Menten photoreceptor adaptation equation. Additionally, an adaptive bilateral type of gamut mapping algorithm, using a previously derived multiple converging-points approach, was incorporated with or without adaptive Un-sharp Masking (USM) to optimize HDR image rendering. An LCD with the standard color space of Adobe RGB (D65) was used as a soft-proofing platform to display/represent HDR original RGB images and to evaluate both the rendition quality and prediction performance of the derived modules. Another LCD with the standard color space of sRGB was used to test the gamut-mapping algorithms integrated with the derived tone-mapping module.
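
    The photoreceptor adaptation equation referred to above has the Michaelis-Menten (Naka-Rushton) form R = L^n / (L^n + sigma^n), with the semi-saturation constant sigma tied to the adapting luminance. A one-function sketch of global tone compression in that form; the exponent and the log-average adaptation level are common choices assumed here, not values from the paper:

    ```python
    import numpy as np

    def photoreceptor_tonemap(L, n=0.7):
        """Compress HDR luminance L with R = L^n / (L^n + sigma^n),
        where sigma is the scene's log-average (adapting) luminance."""
        eps = 1e-6
        sigma = np.exp(np.mean(np.log(L + eps)))  # global adaptation level
        Ln = np.power(L, n)
        return Ln / (Ln + sigma ** n)             # response mapped into [0, 1)
    ```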

  1. A high sensitivity 20Mfps CMOS image sensor with readout speed of 1Tpixel/sec for visualization of ultra-high speed phenomena

    NASA Astrophysics Data System (ADS)

    Kuroda, R.; Sugawa, S.

    2017-02-01

    Ultra-high speed (UHS) CMOS image sensors with on-chip analog memories placed on the periphery of the pixel array for the visualization of UHS phenomena are overviewed in this paper. The developed UHS CMOS image sensors consist of 400H×256V pixels and 128 memories/pixel, and a readout speed of 1 Tpixel/sec is obtained, enabling 10 Mfps full-resolution video capture with 128 consecutive frames, and 20 Mfps half-resolution video capture with 256 consecutive frames. The first development model was employed in a high-speed video camera and put into practical use in 2012. Through the development of dedicated process technologies, photosensitivity improvement and power consumption reduction were achieved simultaneously, and the performance-improved version has been used in the commercialized high-speed video camera since 2015, offering 10 Mfps with ISO 16,000 photosensitivity. Due to the improved photosensitivity, clear images can be captured and analyzed even under low-light conditions, such as under a microscope, as well as in the capture of UHS light emission phenomena.

  2. SU-E-J-100: Reconstruction of Prompt Gamma Ray Three Dimensional SPECT Image From Boron Neutron Capture Therapy(BNCT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, D; Jung, J; Suh, T

    2014-06-01

    Purpose: To confirm the feasibility of acquiring a three-dimensional single photon emission computed tomography (SPECT) image from boron neutron capture therapy (BNCT) using Monte Carlo simulation. Methods: The pixelated SPECT detector, collimator, and phantom were simulated using the Monte Carlo n-particle extended (MCNPX) simulation tool. A thermal neutron source (<1 eV) was used to react with the boron uptake regions (BUR) in the phantom. Each geometry had a spherical pattern, and three different BURs (regions A, B, and C; density: 2.08 g/cm3) were located in the middle of the brain phantom. The data from 128 projections for each sorting process were used to achieve image reconstruction. The ordered subset expectation maximization (OSEM) reconstruction algorithm was used to obtain a tomographic image with eight subsets and five iterations. Receiver operating characteristic (ROC) curve analysis was used to evaluate the geometric accuracy of the reconstructed image. Results: The OSEM image was compared with the original phantom pattern image. The area under the curve (AUC) was calculated as the gross area under each ROC curve. The three calculated AUC values were 0.738 (region A), 0.623 (region B), and 0.817 (region C). The differences between the center-to-center distances of the boron regions and the corresponding distances between maximum-count points were 0.3 cm, 1.6 cm, and 1.4 cm. Conclusion: The possibility of extracting a 3D BNCT SPECT image was confirmed using the Monte Carlo simulation and the OSEM algorithm. The prospects for obtaining an actual BNCT SPECT image were estimated from the quality of the simulated image and the simulation conditions. When multiple tumor regions must be treated using BNCT, a reasonable model to determine how many useful images can be obtained from SPECT could be provided to BNCT facilities. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, Information and Communication Technologies (ICT) and Future Planning (MSIP) (Grant No. 200900420) and the Radiation Technology Research and Development program (Grant No. 2013043498), Republic of Korea.
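
    OSEM is a subset-accelerated form of the multiplicative MLEM update. The sketch below shows the single-subset (plain MLEM) core, with a dense system matrix standing in for the real projector; OSEM would partition the 128 projections into the eight ordered subsets mentioned above and apply the same update once per subset in each iteration. All names here are illustrative.

    ```python
    import numpy as np

    def mlem(A, proj, n_iters=5):
        """MLEM reconstruction. A: (n_bins, n_vox) system matrix mapping
        voxel activity to detector bins; proj: measured counts."""
        x = np.ones(A.shape[1])                      # uniform start image
        sens = A.sum(axis=0) + 1e-12                 # back-projection of ones
        for _ in range(n_iters):
            expected = A @ x + 1e-12                 # forward projection
            x *= (A.T @ (proj / expected)) / sens    # multiplicative update
        return x
    ```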

  3. Fusion: ultra-high-speed and IR image sensors

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.

    2015-08-01

    Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collision, plasma, spark discharge, an air bag in a car accident, and a tire under sudden braking, generate sudden heat. Researchers in these fields require tools to measure the high-speed motion and heat simultaneously. Ultra-high frame rate imaging is achieved by an in-situ storage image sensor. Each pixel of the sensor is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels. Image signals stored in each pixel are read out after the image capturing operation. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps 1). However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside-illuminated (BSI) in-situ storage image sensor to increase the sensitivity, with a 100% fill factor and a very high quantum efficiency 2). The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the wiring on the front side with more freedom 3). The BSI structure has the further advantage of presenting fewer difficulties in attaching an additional layer, such as a scintillator, on the backside. This paper proposes the development of an ultra-high-speed IR image sensor that combines advanced nano-technologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, with discussion of issues in the integration.

  4. Using Multiple Calibration Indices in Order to Capture the Complex Picture of What Affects Students' Accuracy of Feeling of Confidence

    ERIC Educational Resources Information Center

    Boekaerts, Monique; Rozendaal, Jeroen S.

    2010-01-01

    The present study used multiple calibration indices to capture the complex picture of fifth graders' calibration of feeling of confidence in mathematics. Specifically, the effects of gender, type of mathematical problem, instruction method, and time of measurement (before and after problem solving) on calibration skills were investigated. Fourteen…

  5. Deep Impact Autonomous Navigation : the trials of targeting the unknown

    NASA Technical Reports Server (NTRS)

    Kubitschek, Daniel G.; Mastrodemos, Nickolaos; Werner, Robert A.; Kennedy, Brian M.; Synnott, Stephen P.; Null, George W.; Bhaskaran, Shyam; Riedel, Joseph E.; Vaughan, Andrew T.

    2006-01-01

    On July 4, 2005 at 05:44:34.2 UTC the Impactor Spacecraft (s/c) impacted comet Tempel 1 with a relative speed of 10.3 km/s capturing high-resolution images of the surface of a cometary nucleus just seconds before impact. Meanwhile, the Flyby s/c captured the impact event using both the Medium Resolution Imager (MRI) and the High Resolution Imager (HRI) and tracked the nucleus for the entire 800 sec period between impact and shield attitude transition. The objective of the Impactor s/c was to impact in an illuminated area viewable from the Flyby s/c and capture high-resolution context images of the impact site. This was accomplished by using autonomous navigation (AutoNav) algorithms and precise attitude information from the attitude determination and control subsystem (ADCS). The Flyby s/c had two primary objectives: 1) capture the impact event with the highest temporal resolution possible in order to observe the ejecta plume expansion dynamics; and 2) track the impact site for at least 800 sec to observe the crater formation and capture the highest resolution images possible of the fully developed crater. These two objectives were met by estimating the Flyby s/c trajectory relative to Tempel 1 using the same AutoNav algorithms along with precise attitude information from ADCS and independently selecting the best impact site. This paper describes the AutoNav system, what happened during the encounter with Tempel 1 and what could have happened.

  6. Development of a dual-modality, dual-view smartphone-based imaging system for oral cancer detection

    NASA Astrophysics Data System (ADS)

    Uthoff, Ross D.; Song, Bofan; Birur, Praveen; Kuriakose, Moni Abraham; Sunny, Sumsum; Suresh, Amritha; Patrick, Sanjana; Anbarani, Afarin; Spires, Oliver; Wilder-Smith, Petra; Liang, Rongguang

    2018-02-01

    Oral cancer is a rising health issue in many low and middle income countries (LMIC). Proposed is an implementation of autofluorescence imaging (AFI) and white light imaging (WLI) on a smartphone platform providing inexpensive early detection of cancerous conditions in the oral cavity. Interchangeable modules allow both whole mouth imaging for an overview of the patients' oral health and an intraoral imaging probe for localized information. Custom electronics synchronize image capture and external LED operation for the excitation of tissue fluorescence. A custom Android application captures images and an image processing algorithm provides likelihood estimates of cancerous conditions. Finally, all data can be uploaded to a cloud server where a convolutional neural network classifies the images and a remote specialist can provide diagnosis and triage instructions.

  7. Computational photography with plenoptic camera and light field capture: tutorial.

    PubMed

    Lam, Edmund Y

    2015-11-01

    Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording mediums, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion on early attempts and recent advances in the construction of the plenoptic camera. We will then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we will consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.
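
    The refocusing step the tutorial builds up to can be stated in a few lines: shift every sub-aperture image in proportion to its offset in the aperture plane, then average. A toy shift-and-add sketch over a 4-D light field array, with integer-pixel shifts and an assumed L[u, v, y, x] layout; the tutorial's own notation and Fourier-domain algorithms are more general:

    ```python
    import numpy as np

    def refocus(lightfield, alpha):
        """Shift-and-add refocus of L[u, v, y, x]; alpha selects the
        synthetic focal plane (alpha = 0 keeps the original focus)."""
        U, V, H, W = lightfield.shape
        cu, cv = (U - 1) / 2, (V - 1) / 2
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                dy = int(round(alpha * (u - cu)))    # shift grows with the
                dx = int(round(alpha * (v - cv)))    # sub-aperture offset
                out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
        return out / (U * V)
    ```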

  8. Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy

    NASA Astrophysics Data System (ADS)

    Bucht, Curry; Söderberg, Per; Manneberg, Göran

    2010-02-01

    The corneal endothelium serves as the posterior barrier of the cornea. Factors such as the clarity and refractive properties of the cornea are directly related to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor of the corneal endothelium. Pathological conditions and physical trauma may reduce the endothelial cell density to such an extent that the optical properties of the cornea, and thus clear eyesight, are threatened. Diagnosis of the corneal endothelium through morphometry is an important part of several clinical applications. Morphometry of the corneal endothelium is presently carried out by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, which has a negative impact on sampling size. This study was dedicated to the development and use of fully automated analysis of a very large range of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images, normalizing light and contrast. The digitally enhanced images of the corneal endothelium were Fourier transformed, using the fast Fourier transform (FFT), and stored as new images. Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well-known diffraction theory. Estimated cell densities of the corneal endothelium were obtained using the fully automated analysis software on 292 images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a strong correlation was found.
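
    The diffraction-theory calculation rests on a simple observation: a regular cell mosaic produces a ring in the image's power spectrum, and the ring radius gives the dominant spatial frequency of the cells. A rough sketch of that estimate follows; packing-geometry constants are omitted, so the f² conversion at the end is approximate and illustrative:

    ```python
    import numpy as np

    def cell_density_fft(img, px_per_mm):
        """Estimate endothelial cell density (cells/mm^2) from the ring
        in the image's power spectrum; assumes a square image."""
        F = np.fft.fftshift(np.abs(np.fft.fft2(img - img.mean())))
        h, w = F.shape
        y, x = np.indices(F.shape)
        r = np.hypot(y - h // 2, x - w // 2).astype(int).ravel()
        rmax = min(h, w) // 2
        # radial average of the spectrum; its peak marks the ring radius
        sums = np.bincount(r, F.ravel(), minlength=rmax)[:rmax]
        counts = np.bincount(r, minlength=rmax)[:rmax]
        radial = sums / np.maximum(counts, 1)
        r_peak = np.argmax(radial[2:]) + 2        # skip the DC neighbourhood
        f_cyc_per_mm = r_peak * px_per_mm / w     # cycles per millimetre
        return f_cyc_per_mm ** 2                  # ~one cell per squared cycle
    ```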

  9. A multi-sensor data-driven methodology for all-sky passive microwave inundation retrieval

    NASA Astrophysics Data System (ADS)

    Takbiri, Zeinab; Ebtehaj, Ardeshir M.; Foufoula-Georgiou, Efi

    2017-06-01

    We present a multi-sensor Bayesian passive microwave retrieval algorithm for flood inundation mapping at high spatial and temporal resolutions. The algorithm takes advantage of observations from multiple sensors in optical, short-infrared, and microwave bands, thereby allowing for detection and mapping of the sub-pixel fraction of inundated areas under almost all-sky conditions. The method relies on a nearest-neighbor search and a modern sparsity-promoting inversion method that make use of an a priori dataset in the form of two joint dictionaries. These dictionaries contain almost overlapping observations by the Special Sensor Microwave Imager and Sounder (SSMIS) on board the Defense Meteorological Satellite Program (DMSP) F17 satellite and the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the Aqua and Terra satellites. Evaluation of the retrieval algorithm over the Mekong Delta shows that it captures to a good degree the diurnal variability of inundation due to localized convective precipitation. At longer timescales, the results are consistent with ground-based water level observations, indicating that the method properly captures seasonal inundation patterns in response to regional monsoonal rain. The calculated Euclidean distance, rank correlation, and copula quantile analysis demonstrate good agreement between the outputs of the algorithm and the observed water levels at monthly and daily timescales. The current inundation products are at a resolution of 12.5 km, available twice per day, but a higher resolution (on the order of 5 km and every 3 h) can be achieved using the same algorithm with the dictionary populated by Global Precipitation Measurement (GPM) Microwave Imager (GMI) products.
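
    The full algorithm couples the nearest-neighbor search with a sparsity-promoting inversion; the sketch below keeps only the nearest-neighbor step, averaging the MODIS inundation fractions of the k closest SSMIS dictionary entries with inverse-distance weights. All names and array shapes are illustrative.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def retrieve_inundation(ssmis_obs, dict_ssmis, dict_modis_frac, k=30):
            """k-NN retrieval sketch over the two joint dictionaries.

            ssmis_obs       : (n, c) query brightness-temperature vectors
            dict_ssmis      : (N, c) dictionary of SSMIS signatures
            dict_modis_frac : (N,)   coincident MODIS inundated fractions
            """
            nn = NearestNeighbors(n_neighbors=k).fit(dict_ssmis)
            dist, idx = nn.kneighbors(ssmis_obs)
            w = 1.0 / (dist + 1e-9)                # inverse-distance weights
            w /= w.sum(axis=1, keepdims=True)
            return (w * dict_modis_frac[idx]).sum(axis=1)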

  10. Color appearance and color rendering of HDR scenes: an experiment

    NASA Astrophysics Data System (ADS)

    Parraman, Carinna; Rizzi, Alessandro; McCann, John J.

    2009-01-01

    In order to gain a deeper understanding of the appearance of coloured objects in a three-dimensional scene, this research introduces a multidisciplinary experimental approach. The experiment employed two identical 3-D Mondrians, which were viewed and compared side by side. Each scene was subjected to different lighting conditions. First, we used an illumination cube to diffuse the light and illuminate all the objects from every direction. This produced a low-dynamic-range (LDR) image of the 3-D Mondrian scene. Second, in order to make a high-dynamic-range (HDR) image of the same objects, we used a directional 150 W spotlight and an array of white LEDs assembled in a flashlight. The scenes were significant in that each contained exactly the same three-dimensional painted colour blocks, arranged in the same positions in the still life. The blocks comprised 6 hues and 5 tones from white to black. Participants from the CREATE project were asked to consider the change in the appearance of a selection of colours according to lightness, hue, and chroma, and to rate how the change in illumination affected appearance. We measured the light coming to the eye from still-life surfaces with a colorimeter (Yxy). We captured the scene radiance using multiple exposures with a number of different cameras. We have begun a programme of digital image processing of these scene-capture methods. This multidisciplinary programme continues until 2010, so this paper is an interim report on the initial phases and a description of the ongoing project.

  11. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, which captures the surface motion of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
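
    The registration objective itself is not given in the record, so the sketch below shows only a generic Gauss-Newton loop of the kind used to minimize such a linearized non-linear least-squares objective; the residual and jacobian callables stand in for the paper's depth-fusion and visual-odometry terms.

        import numpy as np

        def gauss_newton(residual, jacobian, x0, iters=20, tol=1e-8):
            """Minimize 0.5 * ||residual(x)||^2: at each step solve the
            linearized system J dx = -r in the least-squares sense."""
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                r = residual(x)                   # stacked residual vector
                J = jacobian(x)                   # Jacobian of r at x
                dx = np.linalg.lstsq(J, -r, rcond=None)[0]
                x = x + dx
                if np.linalg.norm(dx) < tol:      # converged
                    break
            return x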

  12. Visibility through the gaseous smoke in airborne remote sensing using a DSLR camera

    NASA Astrophysics Data System (ADS)

    Chabok, Mirahmad; Millington, Andrew; Hacker, Jorg M.; McGrath, Andrew J.

    2016-08-01

    Visibility and clarity of remotely sensed images acquired by consumer-grade DSLR cameras, mounted on an unmanned aerial vehicle or a manned aircraft, are critical factors in obtaining accurate and detailed information from any area of interest. The presence of substantial haze, fog, or gaseous smoke particles, caused for example by an active bushfire at the time of data capture, will dramatically reduce image visibility and quality. Although most modern hyperspectral imaging sensors are capable of capturing a large number of narrow bands in the shortwave and thermal infrared spectral range, which have the potential to penetrate smoke and haze, the resulting images do not contain sufficient spatial detail to enable locating important objects or to assist search-and-rescue or similar applications that require high-resolution information. We introduce a new method for penetrating gaseous smoke without compromising spatial resolution, using a single modified DSLR camera in conjunction with image processing techniques that effectively improve the visibility of objects in the captured images. This is achieved by modifying a DSLR camera and adding a custom optical filter to enable it to capture wavelengths from 480-1200 nm (R, G, and near infrared) instead of the standard RGB bands (400-700 nm). With this modified camera mounted on an aircraft, images were acquired over an area polluted by gaseous smoke from an active bushfire. Data processed using our proposed method show significant visibility improvements compared with other existing solutions.

  13. System, Apparatus, and Method for Active Debris Removal

    NASA Technical Reports Server (NTRS)

    Hickey, Christopher J. (Inventor); Spehar, Peter T. (Inventor); Griffith, Sr., Anthony D. (Inventor); Kohli, Rajiv (Inventor); Burns, Susan H. (Inventor); Gruber, David J. (Inventor); Lee, David E. (Inventor); Robinson, Travis M. (Inventor); Damico, Stephen J. (Inventor); Smith, Jason T. (Inventor)

    2017-01-01

    Systems, apparatuses, and methods for the removal of orbital debris are provided. In one embodiment, an apparatus includes a spacecraft control unit configured to guide and navigate the apparatus to a target. The apparatus also includes a dynamic object characterization unit configured to characterize the movement of the target and its capture feature. The apparatus further includes a capture and release unit configured to capture a target and deorbit or release it. A collection of these apparatuses can then be employed as multiple independent, individually operated vehicles launched from a single launch vehicle for the purpose of disposing of multiple debris objects.

  14. Speckle-learning-based object recognition through scattering media.

    PubMed

    Ando, Takamasa; Horisaki, Ryoichi; Tanida, Jun

    2015-12-28

    We experimentally demonstrated object recognition through scattering media based on direct machine learning of a number of speckle intensity images. In the experiments, speckle intensity images of amplitude or phase objects on a spatial light modulator between scattering plates were captured by a camera. We used the support vector machine for binary classification of the captured speckle intensity images of face and non-face data. The experimental results showed that speckles are sufficient for machine learning.
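
    As an illustration of the classification step, here is a minimal sketch of binary SVM classification on flattened speckle intensity images; the synthetic stand-in data, image size, and RBF kernel are assumptions, not the paper's actual pipeline.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.random((200, 64 * 64))   # stand-in for captured speckle frames
        y = rng.integers(0, 2, 200)      # stand-in face / non-face labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)
        clf = SVC(kernel="rbf").fit(X_tr, y_tr)   # train on raw intensities
        print("held-out accuracy:", clf.score(X_te, y_te))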

  15. Estimating occupancy and abundance using aerial images with imperfect detection

    USGS Publications Warehouse

    Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.

    2017-01-01

    Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals that are present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models within an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data on sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated the detection probability of sea otters to be 0.76, the same as for visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.
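
    A minimal sketch of the basic N-mixture likelihood underlying the approach (not the authors' full spatial point-process formulation): latent site abundance is Poisson, each overlapping image yields a binomial count, and the latent abundance is marginalized by truncated summation.

        import numpy as np
        from scipy import optimize, stats

        def nmix_nll(params, counts, n_max=200):
            """Negative log-likelihood: N_i ~ Poisson(lam),
            y_it ~ Binomial(N_i, p); counts is (sites, replicate images)."""
            lam = np.exp(params[0])                 # abundance rate > 0
            p = 1.0 / (1.0 + np.exp(-params[1]))    # detection in (0, 1)
            Ns = np.arange(n_max + 1)
            prior = stats.poisson.pmf(Ns, lam)      # P(N)
            ll = 0.0
            for y in counts:
                lik = np.prod(stats.binom.pmf(y[:, None], Ns[None, :], p),
                              axis=0)               # P(y | N) for each N
                ll += np.log((lik * prior).sum() + 1e-300)
            return -ll

        counts = np.array([[3, 4, 2], [0, 1, 0], [5, 5, 6]])  # toy image counts
        res = optimize.minimize(nmix_nll, x0=[np.log(3.0), 0.0],
                                args=(counts,), method="Nelder-Mead")
        print(np.exp(res.x[0]), 1 / (1 + np.exp(-res.x[1])))  # lam_hat, p_hat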

  16. Integrated Georeferencing of Stereo Image Sequences Captured with a Stereovision Mobile Mapping System - Approaches and Practical Results

    NASA Astrophysics Data System (ADS)

    Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.

    2012-07-01

    Stereovision-based mobile mapping systems enable the efficient capture of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates, resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment, which builds the foundation for a wide range of 3d mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3d mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks, and image-based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations - in our case the imaging sensors - normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient, or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed which allow validating and updating the INS/GNSS-based trajectory with independently estimated positions during prolonged GNSS signal outages, in order to raise the georeferencing accuracy to the project requirements.

  17. Suitability of digital camcorders for virtual reality image data capture

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola; Maas, Hans-Gerd

    1998-12-01

    Today's consumer-market digital camcorders offer features which make them appear to be quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine-vision-type CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides a discussion of the technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the lack of a means to synchronize multiple devices, limiting their suitability for 3-D motion data capture. Moreover, the standard video format is interlaced, which is also undesirable for all applications dealing with moving objects or moving cameras. Further disadvantages are computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine-vision-like equipment), this functionality could probably be included by the manufacturers at almost zero cost.

  18. Point-of-care mobile digital microscopy and deep learning for the detection of soil-transmitted helminths and Schistosoma haematobium.

    PubMed

    Holmström, Oscar; Linder, Nina; Ngasala, Billy; Mårtensson, Andreas; Linder, Ewert; Lundin, Mikael; Moilanen, Hannu; Suutala, Antti; Diwan, Vinod; Lundin, Johan

    2017-06-01

    Microscopy remains the gold standard in the diagnosis of neglected tropical diseases. As resource-limited rural areas often lack laboratory equipment and trained personnel, new diagnostic techniques are needed. Low-cost, point-of-care imaging devices show potential in the diagnosis of these diseases, and novel digital image analysis algorithms can be utilized to automate sample analysis. We evaluated the imaging performance of a miniature digital microscopy scanner for the diagnosis of soil-transmitted helminths and Schistosoma haematobium, and trained a deep learning-based image analysis algorithm for automated detection of soil-transmitted helminths in the captured images. A total of 13 iodine-stained stool samples containing Ascaris lumbricoides, Trichuris trichiura and hookworm eggs and 4 urine samples containing Schistosoma haematobium were digitized using a reference whole-slide scanner and the mobile microscopy scanner. Parasites in the images were identified by visual examination and, for the stool samples, by analysis with a deep learning-based image analysis algorithm. Results were compared between the digital and visual analysis of the images showing helminth eggs. Parasite identification by visual analysis of digital slides captured with the mobile microscope was feasible for all analyzed parasites. Although the spatial resolution of the reference slide scanner is higher, the resolution of the mobile microscope is sufficient for reliable identification and classification of all parasites studied. Digital image analysis of stool sample images captured with the mobile microscope showed high sensitivity for detection of all helminths studied (range of sensitivity = 83.3-100%) in the test set (n = 217) of manually labeled helminth eggs. In this proof-of-concept study, the imaging performance of a mobile digital microscope was sufficient for visual detection of soil-transmitted helminths and Schistosoma haematobium. Furthermore, we show that deep learning-based image analysis can be utilized for the automated detection and classification of helminths in the captured images.
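
    The record does not describe the network architecture, so the PyTorch sketch below shows only the general shape of a small CNN patch classifier that such a pipeline might use; the class name, patch size, and four-way label set (three helminth species plus background) are illustrative assumptions.

        import torch
        import torch.nn as nn

        class EggNet(nn.Module):
            """Tiny CNN for classifying egg-candidate patches."""
            def __init__(self, n_classes=4):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),        # global average pooling
                )
                self.classifier = nn.Linear(64, n_classes)

            def forward(self, x):                   # x: (B, 3, H, W) patches
                return self.classifier(self.features(x).flatten(1))

        model = EggNet()
        logits = model(torch.randn(8, 3, 96, 96))   # 8 dummy RGB patches
        print(logits.shape)                         # torch.Size([8, 4])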

  20. Multispectral laser-induced fluorescence imaging system for large biological samples

    NASA Astrophysics Data System (ADS)

    Kim, Moon S.; Lefcourt, Alan M.; Chen, Yud-Ren

    2003-07-01

    A laser-induced fluorescence imaging system developed to capture multispectral fluorescence emission images simultaneously from a relatively large target object is described. With an expanded, 355-nm Nd:YAG laser as the excitation source, the system captures fluorescence emission images in the blue, green, red, and far-red regions of the spectrum centered at 450, 550, 678, and 730 nm, respectively, from a 30-cm-diameter target area in ambient light. Images of apples and of pork meat artificially contaminated with diluted animal feces have demonstrated the versatility of fluorescence imaging techniques for potential applications in food safety inspection. Regions of contamination, including sites that were not readily visible to the human eye, could easily be identified from the images.

  1. Embedded image processing engine using ARM cortex-M4 based STM32F407 microcontroller

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samaiya, Devesh, E-mail: samaiya.devesh@gmail.com

    2014-10-06

    Due to advancements in low-cost, easily available, yet powerful hardware and the revolution in open-source software, the urge to build newer, more interactive machines and electronic systems has increased manifold among engineers. To make systems more interactive, designers need easy-to-use sensor systems. Giving machines the boon of vision has never been easy; while no longer impossible these days, it remains difficult and expensive. This work presents a low-cost, moderate-performance, programmable image processing engine. The engine is able to capture real-time images, store them in permanent storage, and perform preprogrammed image processing operations on the captured images.

  2. Raspberry Pi-powered imaging for plant phenotyping.

    PubMed

    Tovar, Jose C; Hoyer, J Steen; Lin, Andy; Tielking, Allison; Callen, Steven T; Elizabeth Castillo, S; Miller, Michael; Tessman, Monica; Fahlgren, Noah; Carrington, James C; Nusinow, Dmitri A; Gehan, Malia A

    2018-03-01

    Image-based phenomics is a powerful approach to capture and quantify plant diversity. However, commercial platforms that make consistent image acquisition easy are often cost-prohibitive. To make high-throughput phenotyping methods more accessible, low-cost microcomputers and cameras can be used to acquire plant image data. We used low-cost Raspberry Pi computers and cameras to manage and capture plant image data. Detailed here are three different applications of Raspberry Pi-controlled imaging platforms for seed and shoot imaging. Images obtained from each platform were suitable for extracting quantifiable plant traits (e.g., shape, area, height, color) en masse using open-source image processing software such as PlantCV. This protocol describes three low-cost platforms for image acquisition that are useful for quantifying plant diversity. When coupled with open-source image processing tools, these imaging platforms provide viable low-cost solutions for incorporating high-throughput phenomics into a wide range of research programs.
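
    The paper extracts traits with the open-source PlantCV toolkit; as a rough stand-in, the sketch below performs the same kind of trait extraction (area, height, mean color) with plain OpenCV. The green HSV range and function name are illustrative assumptions.

        import cv2
        import numpy as np

        def plant_traits(path):
            """Segment the plant by green hue, then report projected area,
            vertical extent, and mean color of the masked pixels."""
            bgr = cv2.imread(path)
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, (35, 60, 60), (85, 255, 255))  # green range
            ys, xs = np.nonzero(mask)
            if len(ys) == 0:
                return None                             # no plant found
            return {
                "area_px": len(ys),                     # projected leaf area
                "height_px": int(ys.max() - ys.min()),  # vertical extent
                "mean_bgr": bgr[mask > 0].mean(axis=0).tolist(),
            }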

  3. A new 4-dimensional imaging system for jaw tracking.

    PubMed

    Lauren, Mark

    2014-01-01

    A non-invasive 4D imaging system that produces high resolution time-based 3D surface data has been developed to capture jaw motion. Fluorescent microspheres are brushed onto both tooth and soft-tissue areas of the upper and lower arches to be imaged. An extraoral hand-held imaging device, operated about 12 cm from the mouth, captures a time-based set of perspective image triplets of the patch areas. Each triplet, containing both upper and lower arch data, is converted to a high-resolution 3D point mesh using photogrammetry, providing the instantaneous relative jaw position. Eight 3D positions per second are captured. Using one of the 3D frames as a reference, a 4D model can be constructed to describe the incremental free body motion of the mandible. The surface data produced by this system can be registered to conventional 3D models of the dentition, allowing them to be animated. Applications include integration into prosthetic CAD and CBCT data.

  4. Visual Object Recognition and Tracking of Tools

    NASA Technical Reports Server (NTRS)

    English, James; Chang, Chu-Yin; Tardella, Neil

    2011-01-01

    A method has been created to automatically build an algorithm off-line, using computer-aided design (CAD) models, and to apply it at runtime. The object type is discriminated, and the position and orientation are identified. This system can work with a single image and can provide improved performance using multiple images provided from videos. The spatial processing unit uses three stages: (1) segmentation; (2) initial type, pose, and geometry (ITPG) estimation; and (3) refined type, pose, and geometry (RTPG) calculation. The image segmentation module finds all the tools in an image and isolates them from the background. For this, the system uses edge detection and thresholding to find the pixels that are part of a tool. After the pixels are identified, nearby pixels are grouped into blobs. These blobs represent the potential tools in the image and are the product of the segmentation algorithm. The second module uses matched filtering (or template matching). This approach condenses synthetic images into an image subspace that captures key information. Three degrees of orientation, three degrees of position, and any number of degrees of freedom in geometry change are included. To do this, a template-matching framework is applied. This framework uses an off-line system for calculating template images, measurement images, and the measurements of the template images. These results are used online to match segmented tools against the templates. The final module is the RTPG processor. Its role is to find the exact states of the tools given initial conditions provided by the ITPG module. The requirement that initial conditions exist allows this module to use a local search (whereas the ITPG module had global scope). To perform the local search, 3D model matching is used, where a synthetic image of the object is created and compared to the sensed data. The availability of low-cost PC graphics hardware allows rapid creation of synthetic images. In this approach, a function of orientation, distance, and articulation is defined as a metric on the difference between the captured image and a synthetic image with an object in the given orientation, distance, and articulation. The synthetic image is created using a model that is looked up in an object-model database. A composable software architecture is used for implementation. Video is first preprocessed to remove sensor anomalies (like dead pixels), and then is processed sequentially by a prioritized list of tracker-identifiers.
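
    As a minimal illustration of the matched-filtering stage, the snippet below runs normalized cross-correlation template matching with OpenCV; the file names are placeholders, and the real system matches against many templates spanning pose and geometry rather than the single template shown here.

        import cv2

        # Placeholder file names; grayscale keeps the correlation simple.
        image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
        template = cv2.imread("tool_template.png", cv2.IMREAD_GRAYSCALE)

        # Slide the template over the image; take the best-scoring location
        # as the initial pose hypothesis handed to the refinement stage.
        scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
        _, best, _, loc = cv2.minMaxLoc(scores)
        print("match score %.3f at x=%d, y=%d" % (best, loc[0], loc[1]))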

  5. Memory-based attention capture when multiple items are maintained in visual working memory.

    PubMed

    Hollingworth, Andrew; Beck, Valerie M

    2016-07-01

    Efficient visual search requires that attention is guided strategically to relevant objects, and most theories of visual search implement this function by means of a target template maintained in visual working memory (VWM). However, there is currently debate over the architecture of VWM-based attentional guidance. We contrasted a single-item-template hypothesis with a multiple-item-template hypothesis, which differ in their claims about structural limits on the interaction between VWM representations and perceptual selection. Recent evidence from van Moorselaar, Theeuwes, and Olivers (2014) indicated that memory-based capture during search, an index of VWM guidance, is not observed when memory set size is increased beyond a single item, suggesting that multiple items in VWM do not guide attention. In the present study, we maximized the overlap between multiple colors held in VWM and the colors of distractors in a search array. Reliable capture was observed when 2 colors were held in VWM and both colors were present as distractors, using both the original van Moorselaar et al. singleton-shape search task and a search task that required focal attention to array elements (gap location in outline square stimuli). In the latter task, memory-based capture was consistent with the simultaneous guidance of attention by multiple VWM representations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    NASA Astrophysics Data System (ADS)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still retain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes on ice suffers from diverse image quality and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure, tracking lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, leading to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of video captured by home webcams, detecting misbehaving users is a highly challenging task. We propose SafeVchat, the first solution that achieves a satisfactory detection rate, using facial features and a skin color model. To harness all the features in the scene, we further developed another system using multiple types of local descriptors within a Bag-of-Visual-Words framework, as sketched below. In addition, an investigation of a new contour feature for detecting obscene content is presented.
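
    A compact sketch of a Bag-of-Visual-Words pipeline of the kind described, using ORB descriptors and a k-means vocabulary; the descriptor choice, vocabulary size, and helper name are illustrative assumptions.

        import cv2
        import numpy as np
        from sklearn.cluster import KMeans

        def bovw_histograms(images, k=100):
            """Pool local descriptors from all frames, cluster them into a
            k-word vocabulary, and describe each frame by its normalized
            word histogram (the feature vector fed to a classifier)."""
            orb = cv2.ORB_create()
            per_image = []
            for img in images:
                _, des = orb.detectAndCompute(img, None)
                if des is None:
                    des = np.zeros((0, 32), np.uint8)
                per_image.append(des.astype(np.float32))
            vocab = KMeans(n_clusters=k, n_init=4).fit(np.vstack(per_image))
            hists = []
            for des in per_image:
                h = np.zeros(k)
                if len(des):
                    words, counts = np.unique(vocab.predict(des),
                                              return_counts=True)
                    h[words] = counts
                hists.append(h / max(h.sum(), 1.0))   # normalize histogram
            return np.array(hists)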

  7. Full-color digitized holography for large-scale holographic 3D imaging of physical and nonphysical objects.

    PubMed

    Matsushima, Kyoji; Sonobe, Noriaki

    2018-01-01

    Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.
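
    A compact sketch of the two textbook steps this record relies on: four-step phase-shifting recovery of the complex field, followed by angular-spectrum propagation to reconstruct the scene plane by plane. Names, parameters, and the phase-shift sign convention are illustrative, not taken from the paper.

        import numpy as np

        def four_step_field(I0, I90, I180, I270):
            """Complex object field (up to scale) from four interferograms
            with the reference phase shifted by 0, 90, 180, 270 degrees;
            the sign of the imaginary part depends on the shift direction."""
            return (I0 - I180) + 1j * (I90 - I270)

        def angular_spectrum(u0, wavelength, dx, z):
            """Propagate field u0 a distance z (same units as wavelength, dx)
            to refocus on another plane; evanescent waves are clamped."""
            n, m = u0.shape
            fy = np.fft.fftfreq(n, dx)[:, None]
            fx = np.fft.fftfreq(m, dx)[None, :]
            arg = 1.0 / wavelength**2 - fx**2 - fy**2
            H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
            return np.fft.ifft2(np.fft.fft2(u0) * H)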

  8. Can light-field photography ease focusing on the scalp and oral cavity?

    PubMed

    Taheri, Arash; Feldman, Steven R

    2013-08-01

    Capturing a well-focused image using an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data regarding the color, intensity, and direction of rays of light. Because information about the direction of the rays is available, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity. The related computer software was used to focus on the scalp or on different parts of the oral cavity. The final pictures were compared with pictures taken with conventional, compact digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to click repeatedly between the hairs at different points to select the scalp for focusing. A major drawback of the system was the resolution of the resulting pictures, which was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use. They can capture more information across the full depth of field compared with conventional cameras. However, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. Quantitative assessment of multiple sclerosis lesion load using CAD and expert input

    NASA Astrophysics Data System (ADS)

    Gertych, Arkadiusz; Wong, Alexis; Sangnil, Alan; Liu, Brent J.

    2008-03-01

    Multiple sclerosis (MS) is a frequently encountered neurological disease with a progressive but variable course affecting the central nervous system. Outline-based lesion quantification in the assessment of lesion load (LL) performed on magnetic resonance (MR) images is clinically useful and provides information about development and change, reflecting overall disease burden. Methods of LL assessment that rely on human input are tedious, have higher intra- and inter-observer variability, and are more time-consuming than computerized automatic (CAD) techniques. At present it seems that methods based on human lesion identification preceded by non-interactive outlining by CAD are the best LL quantification strategies. We have developed a CAD that automatically quantifies MS lesions, displays a 3-D lesion map, and appends radiological findings to the original images according to the current DICOM standard. The CAD is also capable of displaying and tracking changes and comparing a patient's separate MRI studies to determine disease progression. The findings are exported to a separate imaging tool for review and final approval by an expert. Capturing and standardized archiving of manual contours is also implemented. Similarity coefficients calculated from LL quantities in the collected exams show good correlation between CAD-derived results and expert readings. Combining the CAD approach with expert interaction may improve the diagnostic work-up of MS patients through better reproducibility of LL assessment and reduced reading time for single or comparative MR exams. Inclusion of CAD-generated outlines as DICOM-compliant overlays in the image data can serve as a better reference in MS progression tracking.
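
    The record does not name its similarity coefficients; a common choice for comparing CAD lesion outlines against expert outlines is the Dice coefficient, sketched here under that assumption.

        import numpy as np

        def dice(a, b):
            """Dice similarity between two binary lesion masks."""
            a, b = a.astype(bool), b.astype(bool)
            inter = np.logical_and(a, b).sum()      # overlapping voxels
            denom = a.sum() + b.sum()
            return 2.0 * inter / denom if denom else 1.0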

  10. Web surveillance system using platform-based design

    NASA Astrophysics Data System (ADS)

    Lin, Shin-Yo; Tsai, Tsung-Han

    2004-04-01

    A SOPC platform-based design methodology for multimedia communications is developed. We embed a soft-core processor to perform image compression in an FPGA, and plug an Ethernet daughter board into the SOPC development platform. On this basis, a web surveillance platform is presented. The web surveillance system consists of three parts: image capture, web server and JPEG compression. In this architecture, the user can control the surveillance system remotely. Through the IP address configured on the Ethernet daughter board, the user can access the surveillance system via a browser. When the user accesses the surveillance system, the CMOS sensor captures the remote image and feeds it to the embedded processor, which immediately performs JPEG compression. The user then receives the compressed data via Ethernet. The whole system is implemented on an APEX20K200E484-2X device.

  11. A Pixel Correlation Technique for Smaller Telescopes to Measure Doubles

    NASA Astrophysics Data System (ADS)

    Wiley, E. O.

    2013-04-01

    Pixel correlation uses the same reduction techniques as speckle imaging but relies on autocorrelation among captured pixel hits rather than true speckles. A video camera operating at exposure times (8-66 milliseconds) similar to lucky imaging captures 400-1,000 video frames. The AVI files are converted to bitmap images and analyzed with the interferometric algorithms in REDUC using all frames. This results in a series of correlograms from which theta and rho can be measured. Results using a 20 cm (8") Dall-Kirkham working at f/22.5 are presented for doubles with separations between 1" and 5.7" under average seeing conditions. I conclude that this form of visualizing and analyzing visual double stars is a viable alternative to lucky imaging that can be employed with telescopes whose apertures are too small to capture a sufficient number of speckles for true speckle interferometry.
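
    A minimal sketch of the underlying reduction (not the REDUC implementation itself): average the power spectrum over all frames and inverse-transform it, per the Wiener-Khinchin theorem, to obtain the mean autocorrelation whose secondary peaks yield rho and theta.

        import numpy as np

        def mean_autocorrelation(frames):
            """frames: iterable of 2D arrays (the converted bitmap images)."""
            acc = np.zeros(frames[0].shape)
            for f in frames:
                f = f - f.mean()                     # suppress the DC spike
                acc += np.abs(np.fft.fft2(f)) ** 2   # per-frame power spectrum
            corr = np.fft.fftshift(np.fft.ifft2(acc / len(frames)).real)
            return corr / corr.max()                 # peak-normalized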

  12. Automatically detect and track infrared small targets with kernel Fukunaga-Koontz transform and Kalman prediction.

    PubMed

    Liu, Ruiming; Liu, Erqi; Yang, Jie; Zeng, Yong; Wang, Fanglin; Cao, Yuan

    2007-11-01

    Fukunaga-Koontz transform (FKT), stemming from principal component analysis (PCA), is used in many pattern recognition and image-processing fields. It cannot capture the higher-order statistical properties of natural images, so its detection performance is unsatisfactory. PCA has been extended into kernel PCA in order to capture higher-order statistics. However, thus far no one has explicitly proposed a kernel FKT (KFKT) or studied its detection performance. To accurately detect potential small targets in infrared images, we first extend FKT into KFKT to capture the higher-order statistical properties of images. Then a framework based on Kalman prediction and KFKT, which can automatically detect and track small targets, is developed. Experimental results show that KFKT outperforms FKT and that the proposed framework can automatically detect and track infrared point targets.

  14. Lifting Scheme DWT Implementation in a Wireless Vision Sensor Network

    NASA Astrophysics Data System (ADS)

    Ong, Jia Jan; Ang, L.-M.; Seng, K. P.

    This paper presents a practical implementation of a Wireless Visual Sensor Network (WVSN) with DWT processing on the visual nodes. A conventional WVSN consists of visual nodes that capture video and transmit it to the base station without processing. Limited network bandwidth restricts the implementation of real-time video streaming from remote visual nodes over wireless communication. Three layers of DWT filters are implemented to process the captured image from the camera. Once all the wavelet coefficients are produced, it is possible to transmit only the low-frequency band coefficients and obtain an approximate image at the base station. This reduces the amount of power required for transmission. When necessary, transmitting all the wavelet coefficients reproduces the full detail of the image, similar to the image captured at the visual node. The visual node combines a CMOS camera, a Xilinx Spartan-3L FPGA and a wireless ZigBee® network that uses the Ember EM250 chip.
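
    As an illustration of the lifting scheme, here is one level of the Le Gall 5/3 lifting DWT on an even-length 1D signal; the node's three-layer 2D filter bank applies steps like this to rows and columns, and the periodic boundary handling here is a simplification of the usual symmetric extension. Transmitting only the returned approximation band realizes the low-power mode described above; sending the detail band as well allows full reconstruction.

        import numpy as np

        def lift53_forward(x):
            """One level of the Le Gall 5/3 lifting DWT (even-length input).
            Returns the approximation band a and the detail band d."""
            x = np.asarray(x, dtype=np.int64)
            even, odd = x[0::2], x[1::2]
            # predict step: detail = odd - floor((left + right) / 2)
            d = odd - ((even + np.roll(even, -1)) >> 1)
            # update step: approx = even + floor((d_left + d + 2) / 4)
            a = even + ((d + np.roll(d, 1) + 2) >> 2)
            return a, d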

  15. A smartphone application for psoriasis segmentation and classification (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas B.; Horita, Timothy; Shi, Kevin; Khan Munia, Tamanna Tabassum; Tavakolian, Kouhyar; Alhashim, Minhal; Fazel-Rezai, Reza

    2017-02-01

    Psoriasis is a chronic skin disease affecting approximately 125 million people worldwide. Currently, dermatologists monitor changes in psoriasis by clinical evaluation or by measuring psoriasis severity scores over time, which leads to subjective management of this condition. The goal of this paper is to develop a reliable assessment system to quantitatively assess changes in erythema and the intensity of scaling of psoriatic lesions. A smartphone-deployable mobile application is presented that uses the smartphone camera and cloud-based image processing to analyze physiological characteristics of psoriasis lesions and identify the type and stage of the scaling and erythema. The application aims to automatically evaluate the Psoriasis Area Severity Index (PASI) by measuring the severity and extent of psoriasis. The mobile application performs the following core functions: (1) it captures text information from user input to create a profile in a HIPAA-compliant database; (2) it captures an image of the skin with psoriasis, together with image-related information entered by the user; (3) it color-corrects the image for the ambient lighting conditions using a calibration procedure based on capturing a Macbeth ColorChecker image; (4) it transmits the color-corrected image to a cloud-based engine for image processing. In the cloud, the algorithm first removes the non-skin background to ensure the psoriasis segmentation is applied only to skin regions; the segmentation algorithm then estimates the erythema and scaling boundary regions of the lesion. We analyzed 10 psoriasis images captured by cellphone, determined the PASI score for each subject during our pilot study, and correlated it with changes in severity scores given by dermatologists. The success of this work enables a smartphone application for psoriasis severity assessment during long-term treatment.
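
    The calibration step can be illustrated with a least-squares color-correction matrix fitted to the 24 ColorChecker patches; the affine formulation and 8-bit clipping are assumptions, as the record does not state the exact calibration model.

        import numpy as np

        def color_correction_matrix(measured, reference):
            """Fit a 4x3 affine matrix mapping measured patch RGBs to the
            reference RGBs; both inputs are (24, 3) arrays."""
            M = np.hstack([measured, np.ones((len(measured), 1))])  # (24, 4)
            ccm, *_ = np.linalg.lstsq(M, reference, rcond=None)     # (4, 3)
            return ccm

        def apply_ccm(img, ccm):
            """Apply the fitted matrix to every pixel of an 8-bit image."""
            flat = img.reshape(-1, 3).astype(float)
            flat = np.hstack([flat, np.ones((len(flat), 1))]) @ ccm
            return np.clip(flat, 0, 255).reshape(img.shape)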

  16. Digital sensing and sizing of vesicular stomatitis virus pseudotypes in complex media: a model for Ebola and Marburg detection.

    PubMed

    Daaboul, George G; Lopez, Carlos A; Chinnala, Jyothsna; Goldberg, Bennett B; Connor, John H; Ünlü, M Selim

    2014-06-24

    Rapid, sensitive, and direct label-free capture and characterization of nanoparticles from complex media such as blood or serum will broadly impact medicine and the life sciences. We demonstrate identification of virus particles in complex samples for replication-competent wild-type vesicular stomatitis virus (VSV), defective VSV, and Ebola- and Marburg-pseudotyped VSV with high sensitivity and specificity. Size discrimination of the imaged nanoparticles (virions) allows differentiation between modified viruses having different genome lengths and facilitates a reduction in the counting of nonspecifically bound particles to achieve a limit of detection (LOD) of 5 × 10³ pfu/mL for the Ebola and Marburg VSV pseudotypes. We demonstrate the simultaneous detection of multiple viruses in a single sample (composed of serum or whole blood) for screening applications and uncompromised detection capabilities in samples contaminated with high levels of bacteria. By employing affinity-based capture, size discrimination, and a "digital" detection scheme to count single virus particles, we show that a robust and sensitive virus/nanoparticle sensing assay can be established for targets in complex samples. The nanoparticle microscopy system is termed the Single Particle Interferometric Reflectance Imaging Sensor (SP-IRIS) and is capable of high-throughput and rapid sizing of large numbers of biological nanoparticles on an antibody microarray for research and diagnostic applications.

  17. TH-CD-201-10: Highly Efficient Synchronized High-Speed Scintillation Camera System for Measuring Proton Range, SOBP and Dose Distributions in a 2D-Plane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goddu, S; Sun, B; Grantham, K

    2016-06-15

    Purpose: Proton therapy (PT) delivery is complex and extremely dynamic; therefore, quality assurance testing is vital but highly time-consuming. We have developed a High-Speed Scintillation-Camera-System (HS-SCS) for simultaneously measuring multiple beam characteristics. Methods: A high-speed camera was placed in a light-tight housing and a dual-layer neutron shield. The HS-SCS is synchronized with a synchrocyclotron to capture individual proton-beam-pulses (PBPs) at ∼504 frames/sec. The PBPs from the synchrocyclotron trigger the HS-SCS to open its shutter for a programmed exposure time. Light emissions within a 30×30×5 cm3 plastic scintillator (BC-408) were captured by a CCD camera as individual images revealing the dose deposition in a 2D plane, with a resolution of 0.7 mm for range and SOBP measurements and 1.67 mm for profiles. The CCD response and signal-to-noise ratio (SNR) were characterized for varying exposure times and gains at different light intensities using a TV-Optoliner system. Software tools were developed to analyze ∼5000 images and extract different beam parameters. Quenching correction factors were established by comparing scintillation Bragg peaks with water-scanned ionization-chamber measurements. Quenching-corrected Bragg peaks were integrated to ascertain the proton-beam range (PBR), the width of the Spread-Out Bragg Peak (MOD) and distal.

  18. Live Cell Imaging and 3D Analysis of Angiotensin Receptor Type 1a Trafficking in Transfected Human Embryonic Kidney Cells Using Confocal Microscopy.

    PubMed

    Kadam, Parnika; McAllister, Ryan; Urbach, Jeffrey S; Sandberg, Kathryn; Mueller, Susette C

    2017-03-27

    Live-cell imaging is used to simultaneously capture time-lapse images of angiotensin type 1a receptors (AT1aR) and intracellular compartments in transfected human embryonic kidney-293 (HEK) cells following stimulation with angiotensin II (Ang II). HEK cells are transiently transfected with plasmid DNA containing AT1aR tagged with enhanced green fluorescent protein (EGFP). Lysosomes are identified with a red fluorescent dye. Live-cell images are captured on a laser scanning confocal microscope after Ang II stimulation and analyzed by software in three dimensions (3D, voxels) over time. Live-cell imaging enables investigations into receptor trafficking and avoids confounds associated with fixation, and in particular, the loss or artefactual displacement of EGFP-tagged membrane receptors. Thus, as individual cells are tracked through time, the subcellular localization of receptors can be imaged and measured. Images must be acquired sufficiently rapidly to capture rapid vesicle movement. Yet, at faster imaging speeds, the number of photons collected is reduced. Compromises must also be made in the selection of imaging parameters like voxel size in order to gain imaging speed. Significant applications of live-cell imaging are to study protein trafficking, migration, proliferation, cell cycle, apoptosis, autophagy and protein-protein interaction and dynamics, to name but a few.

  19. Method used to test the imaging consistency of binocular camera's left-right optical system

    NASA Astrophysics Data System (ADS)

    Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui

    2016-09-01

    For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor influencing the overall imaging consistency. Conventional optical-system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, the left and right optical systems capture images at normal exposure times under the same conditions. Second, a contour image is obtained based on a multiple-threshold segmentation result, and the boundary is determined using the slope of contour lines near the pseudo-contour line. Third, a gray-level constraint based on the corresponding coordinates of the left-right images is established, and the imaging consistency is evaluated through the standard deviation σ of the grayscale difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for imaging-consistency testing of binocular cameras. When the 3σ distribution of the imaging gray difference D(x, y) between the left and right optical systems of the binocular camera does not exceed 5%, the design requirements are considered to have been achieved. This method is effective and paves the way for imaging-consistency testing of binocular cameras.
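
    A minimal sketch of the evaluation metric, assuming the left and right images are already registered pixel-to-pixel; the normalization and the exact definition of D(x, y) are simplified relative to the paper.

        import numpy as np

        def imaging_consistency(left, right):
            """Grayscale difference D(x, y) between normalized left/right
            images of the integrating sphere, and its standard deviation."""
            L = left.astype(float) / left.max()
            R = right.astype(float) / right.max()
            D = L - R
            return D, D.std()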

  20. Investigating Mars: Russell Crater - False Color

    NASA Image and Video Library

    2017-08-11

    This image shows the western part of the dune field on the floor of Russell Crater. This is a false color image of Russell Crater and its surroundings. Sand dunes usually appear "blue" in false color images. Russell Crater is located in Noachis Terra. A spectacular dune ridge and other dune forms on the crater floor have prompted extensive imaging. The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single-band image. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69,000 times, and holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering each entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 59591 Latitude: -54.471 Longitude: 13.1288 Instrument: VIS Captured: 2015-05-21 10:57 https://photojournal.jpl.nasa.gov/catalog/PIA21808
