Terrain detection and classification using single polarization SAR
Chow, James G.; Koch, Mark W.
2016-01-19
The various technologies presented herein relate to identifying manmade and/or natural features in a radar image. Two radar images (e.g., single-polarization SAR images) can be captured for a common scene. The first image is captured at a first instance and the second image at a second instance, with sufficient time between the two captures that temporal decorrelation occurs for natural surfaces in the scene and only manmade surfaces, e.g., a road, produce correlated pixels. An LCCD image comprising the correlated and decorrelated pixels can be generated from the two radar images. A median image can be generated from a plurality of radar images, whereby any features in the median image can be identified. A superpixel operation can be performed on the LCCD image and the median image, thereby enabling a feature(s) in the LCCD image to be classified.
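As a rough illustration of the computation underlying coherence-based change detection products such as the LCCD image above, the block-wise sample coherence between two co-registered complex SAR images can be sketched as follows. This is a simplified sketch, not the patented method; the function name, block size `b`, and array conventions are illustrative assumptions.

```python
import numpy as np

def block_coherence(img1, img2, b=8, eps=1e-12):
    """Block-wise sample coherence of two co-registered complex SAR images.

    Values near 1 indicate temporally stable (typically manmade) surfaces;
    values near 0 indicate decorrelated (typically natural) surfaces.
    """
    h, w = (img1.shape[0] // b) * b, (img1.shape[1] // b) * b

    def blocks(a):
        # Partition the (cropped) image into non-overlapping b-by-b blocks.
        return a[:h, :w].reshape(h // b, b, w // b, b)

    cross = blocks(img1 * np.conj(img2)).sum(axis=(1, 3))
    p1 = blocks(np.abs(img1) ** 2).sum(axis=(1, 3))
    p2 = blocks(np.abs(img2) ** 2).sum(axis=(1, 3))
    return np.abs(cross) / np.sqrt(p1 * p2 + eps)
```

With identical inputs the coherence is 1 in every block; with two independent speckle realizations it falls toward 0, which is the contrast the classification step exploits.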
Display of travelling 3D scenes from single integral-imaging capture
NASA Astrophysics Data System (ADS)
Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro
2016-06-01
Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate a sequence of images that simulates a camera travelling through the scene from a single integral image. The application of this method makes it possible to improve the quality of 3D display images and videos.
2013 R&D 100 Award: Movie-mode electron microscope captures nanoscale
Lagrange, Thomas; Reed, Bryan
2018-01-26
A new instrument developed by LLNL scientists and engineers, the Movie Mode Dynamic Transmission Electron Microscope (MM-DTEM), captures billionth-of-a-meter-scale images with frame rates more than 100,000 times faster than those of conventional techniques. The work was done in collaboration with a Pleasanton-based company, Integrated Dynamic Electron Solutions (IDES) Inc. Using this revolutionary imaging technique, a range of fundamental and technologically important material and biological processes can be captured in action, in complete billionth-of-a-meter detail, for the first time. The primary application of MM-DTEM is the direct observation of fast processes, including microstructural changes, phase transformations and chemical reactions, that shape real-world performance of nanostructured materials and potentially biological entities. The instrument could prove especially valuable in the direct observation of macromolecular interactions, such as protein-protein binding and host-pathogen interactions. While an earlier version of the technology, Single Shot-DTEM, could capture a single snapshot of a rapid process, MM-DTEM captures a multiframe movie that reveals complex sequences of events in detail. It is the only existing technology that can capture multiple electron microscopy images in the span of a single microsecond.
2013 R&D 100 Award: Movie-mode electron microscope captures nanoscale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagrange, Thomas; Reed, Bryan
2014-04-03
A new instrument developed by LLNL scientists and engineers, the Movie Mode Dynamic Transmission Electron Microscope (MM-DTEM), captures billionth-of-a-meter-scale images with frame rates more than 100,000 times faster than those of conventional techniques. The work was done in collaboration with a Pleasanton-based company, Integrated Dynamic Electron Solutions (IDES) Inc. Using this revolutionary imaging technique, a range of fundamental and technologically important material and biological processes can be captured in action, in complete billionth-of-a-meter detail, for the first time. The primary application of MM-DTEM is the direct observation of fast processes, including microstructural changes, phase transformations and chemical reactions, that shape real-world performance of nanostructured materials and potentially biological entities. The instrument could prove especially valuable in the direct observation of macromolecular interactions, such as protein-protein binding and host-pathogen interactions. While an earlier version of the technology, Single Shot-DTEM, could capture a single snapshot of a rapid process, MM-DTEM captures a multiframe movie that reveals complex sequences of events in detail. It is the only existing technology that can capture multiple electron microscopy images in the span of a single microsecond.
Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Gul, M. Shahzeb Khan; Gunturk, Bahadir K.
2018-05-01
Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing the light field. A major drawback of MLA based light field cameras is low spatial resolution, because a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach. Both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks.
Gul, M Shahzeb Khan; Gunturk, Bahadir K
2018-05-01
Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing the light field. A major drawback of MLA based light field cameras is low spatial resolution, because a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach. Both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
Device for wavelength-selective imaging
Frangioni, John V.
2010-09-14
An imaging device captures both a visible light image and a diagnostic image, the diagnostic image corresponding to emissions from an imaging medium within the object. The visible light image (which may be color or grayscale) and the diagnostic image may be superimposed to display regions of diagnostic significance within a visible light image. A number of imaging media may be used according to an intended application for the imaging device, and an imaging medium may have wavelengths above, below, or within the visible light spectrum. The devices described herein may be advantageously packaged within a single integrated device or other solid state device, and/or employed in an integrated, single-camera medical imaging system, as well as many non-medical imaging systems that would benefit from simultaneous capture of visible-light wavelength images along with images at other wavelengths.
Platform control for space-based imaging: the TOPSAT mission
NASA Astrophysics Data System (ADS)
Dungate, D.; Morgan, C.; Hardacre, S.; Liddle, D.; Cropp, A.; Levett, W.; Price, M.; Steyn, H.
2004-11-01
This paper describes the imaging mode ADCS design for the TOPSAT satellite, an Earth observation demonstration mission targeted at military applications. The baselined orbit for TOPSAT is a 600-700km sun synchronous orbit from which images up to 30° off track can be captured. For this baseline, the imaging camera provides a resolution of 2.5m and a nominal image size of 15x15km. The ADCS design solution for the imaging mode uses a moving demand approach to enable a single control algorithm solution for both the preparatory reorientation prior to image capture and the post-capture return to nadir pointing. During image capture proper, control is suspended to minimise the disturbances experienced by the satellite from the wheels. Prior to each imaging sequence, the moving demand attitude and rate profiles are calculated such that the correct attitude and rate are achieved at the correct orbital position, enabling the correct target area to be captured.
NASA Astrophysics Data System (ADS)
Kerr, Andrew D.
Determining optimal imaging settings and best practices related to the capture of aerial imagery using consumer-grade digital single-lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, and low-cost image data sets. Radiometric optimization, image fidelity, and image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant contemporary literature on the utilization of consumer-grade DSLR cameras for remote sensing and the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (exposure value), WB (white balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial, rather than an airborne, collection platform, due to the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection and noise minimization. The makeup of the scene, the sensor, and the aerial platform influences the selection of aperture and shutter speed, which, along with other variables, allows estimation of the apparent image motion (AIM) blur in the resulting images. The importance of the image edges in the application will in part dictate the lowest usable f-stop and allow the user to select a more optimal shutter speed and ISO.
The single most important camera capture variable is exposure bias (EV), with a full dynamic range, wide distribution of DN values, and high visual contrast and acuity occurring around -0.7 to -0.3 EV exposure bias. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
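The apparent-image-motion estimate discussed above reduces to ground speed multiplied by exposure time and divided by the ground sample distance. A minimal sketch of that arithmetic (the function and parameter names are assumptions for illustration, not from the thesis):

```python
def motion_blur_pixels(ground_speed_mps, shutter_s, gsd_m):
    """Apparent image motion (AIM) expressed as blur length in pixels.

    ground_speed_mps: platform speed over ground, in m/s
    shutter_s:        exposure (shutter) time, in seconds
    gsd_m:            ground sample distance of one pixel, in meters
    """
    return ground_speed_mps * shutter_s / gsd_m
```

For example, a platform moving at 30 m/s with a 1/1000 s shutter and a 3 cm GSD smears the image by about one pixel, which is a common rule-of-thumb upper bound when sharp edges matter for change detection.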
Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo
2008-01-01
Background Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods Using the combination of microlens-enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large-scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum-resolution confocal panorama images of several gigapixels in size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high-content analysis (HCA) instrument for automated screening processes. PMID:18627634
Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo
2008-07-16
Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens-enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large-scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum-resolution confocal panorama images of several gigapixels in size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high-content analysis (HCA) instrument for automated screening processes.
Enhanced image capture through fusion
NASA Technical Reports Server (NTRS)
Burt, Peter J.; Hanna, Keith; Kolczynski, Raymond J.
1993-01-01
Image fusion may be used to combine images from different sensors, such as IR and visible cameras, to obtain a single composite with extended information content. Fusion may also be used to combine multiple images from a given sensor to form a composite image in which information of interest is enhanced. We present a general method for performing image fusion and show that this method is effective for diverse fusion applications. We suggest that fusion may provide a powerful tool for enhanced image capture with broad utility in image processing and computer vision.
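One common family of fusion rules in the spirit of this work combines images in a multiresolution (Laplacian pyramid) representation, keeping at each level the coefficient with the larger magnitude. The sketch below is a generic select-max pyramid fusion, not the authors' specific method; the level count and the simple 2x2 average/repeat resampling are illustrative assumptions, and image sides must be divisible by 2**levels.

```python
import numpy as np

def _down(a):
    # 2x downsample by averaging each 2x2 block.
    return a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).mean(axis=(1, 3))

def _up(a):
    # 2x upsample by pixel replication.
    return a.repeat(2, axis=0).repeat(2, axis=1)

def fuse(img_a, img_b, levels=3):
    """Select-max Laplacian-pyramid fusion of two registered images."""
    def lap_pyr(img):
        pyr, cur = [], img
        for _ in range(levels):
            d = _down(cur)
            pyr.append(cur - _up(d))   # detail (Laplacian) level
            cur = d
        pyr.append(cur)                # low-pass residual
        return pyr

    pa, pb = lap_pyr(img_a), lap_pyr(img_b)
    # Keep the stronger detail coefficient at each level; average the residual.
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    out = fused[-1]
    for lvl in reversed(fused[:-1]):
        out = _up(out) + lvl           # exact pyramid reconstruction
    return out
```

Because the pyramid decomposition is exactly invertible here, fusing an image with itself returns the image unchanged, a useful sanity check on the transform.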
Chang, Sung-A; Lee, Sang-Chol; Kim, Eun-Young; Hahm, Seung-Hee; Jang, Shin Yi; Park, Sung-Ji; Choi, Jin-Oh; Park, Seung Woo; Choe, Yeon Hyeon; Oh, Jae K
2011-08-01
With recent developments in echocardiographic technology, a new system using real-time three-dimensional echocardiography (RT3DE) that allows single-beat acquisition of the entire volume of the left ventricle and incorporates algorithms for automated border detection has been introduced. Provided that these techniques are acceptably reliable, three-dimensional echocardiography may be much more useful for clinical practice. The aim of this study was to evaluate the feasibility and accuracy of left ventricular (LV) volume measurements by RT3DE using the single-beat full-volume capture technique. One hundred nine consecutive patients scheduled for cardiac magnetic resonance imaging and RT3DE using the single-beat full-volume capture technique on the same day were recruited. LV end-systolic volume, end-diastolic volume, and ejection fraction were measured using an auto-contouring algorithm from data acquired on RT3DE. The data were compared with the same measurements obtained using cardiac magnetic resonance imaging. Volume measurements on RT3DE with single-beat full-volume capture were feasible in 84% of patients. Both interobserver and intraobserver variability of three-dimensional measurements of end-systolic and end-diastolic volumes showed excellent agreement. Pearson's correlation analysis showed a close correlation of end-systolic and end-diastolic volumes between RT3DE and cardiac magnetic resonance imaging (r = 0.94 and r = 0.91, respectively, P < .0001 for both). Bland-Altman analysis showed reasonable limits of agreement. After application of the auto-contouring algorithm, the rate of successful auto-contouring (cases requiring minimal manual corrections) was <50%. RT3DE using single-beat full-volume capture is an easy and reliable technique to assess LV volume and systolic function in clinical practice. 
However, the image quality and low frame rate still limit its application for dilated left ventricles, and the automated volume analysis program needs more development to make it clinically efficacious.
Geometric rectification of camera-captured document images.
Liang, Jian; DeMenthon, Daniel; Doermann, David
2008-04-01
Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.
Real-time Avatar Animation from a Single Image.
Saragih, Jason M; Lucey, Simon; Cohn, Jeffrey F
2011-01-01
A real time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames-per-second), and requires only a single image of the avatar and user. The user's facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters.
Real-time Avatar Animation from a Single Image
Saragih, Jason M.; Lucey, Simon; Cohn, Jeffrey F.
2014-01-01
A real time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames-per-second), and requires only a single image of the avatar and user. The user’s facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters. PMID:24598812
Integration of image capture and processing: beyond single-chip digital camera
NASA Astrophysics Data System (ADS)
Lim, SukHwan; El Gamal, Abbas
2001-05-01
An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of CMOS image sensors to enable new applications such as multiple capture for enhancing dynamic range and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to perform applications such as real-time optical flow estimation.
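The "multiple capture" idea for dynamic-range extension mentioned above can be illustrated with a simple exposure-weighted estimator: each pixel's radiance is estimated from the captures in which it is not saturated. This is a sketch under the assumption of a linear sensor response; the saturation threshold and function names are illustrative, not from the paper.

```python
import numpy as np

def radiance_from_captures(captures, times, sat=0.95):
    """Fuse several linear-response captures of one scene into a radiance map.

    captures: array of shape (n, h, w), pixel values normalized to [0, 1]
    times:    exposure time of each capture, in seconds
    Saturated samples are excluded; if every capture of a pixel saturates,
    the shortest exposure is used as a lower-bound estimate.
    """
    captures = np.asarray(captures, dtype=float)
    t = np.asarray(times, dtype=float).reshape(-1, 1, 1)
    valid = captures < sat                         # mask out clipped samples
    num = np.where(valid, captures / t, 0.0).sum(axis=0)
    den = valid.sum(axis=0)
    fallback = captures[np.argmin(times)] / np.min(times)
    return np.where(den > 0, num / np.maximum(den, 1), fallback)
```

A pixel that clips in the long exposure but not in the short one is recovered from the short capture alone, which is exactly how the extra frames extend usable dynamic range.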
Parallel Computing for the Computed-Tomography Imaging Spectrometer
NASA Technical Reports Server (NTRS)
Lee, Seungwon
2008-01-01
This software computes the tomographic reconstruction of spatial-spectral data from raw detector images of the Computed-Tomography Imaging Spectrometer (CTIS), which enables transient-level, multi-spectral imaging by capturing spatial and spectral information in a single snapshot.
Rectification of curved document images based on single view three-dimensional reconstruction.
Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang
2016-10-01
Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role for document digitalization systems using a camera for image capturing. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption on the corresponding 3D shape, our framework is more flexible and practical since it only requires a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text line, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. In order to examine the performance of our proposed method, both qualitative and quantitative evaluation and comparison with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements on both visual distortion removal and OCR accuracy.
NASA Technical Reports Server (NTRS)
Camci, C.; Kim, K.; Hippensteele, S. A.
1992-01-01
A new image processing based color capturing technique for the quantitative interpretation of liquid crystal images used in convective heat transfer studies is presented. This method is highly applicable to the surfaces exposed to convective heating in gas turbine engines. It is shown that, in the single-crystal mode, many of the colors appearing on the heat transfer surface correlate strongly with the local temperature. A very accurate quantitative approach using an experimentally determined linear hue vs temperature relation is found to be possible. The new hue-capturing process is discussed in terms of the strength of the light source illuminating the heat transfer surface, the effect of the orientation of the illuminating source with respect to the surface, crystal layer uniformity, and the repeatability of the process. The present method is more advantageous than the multiple filter method because of its ability to generate many isotherms simultaneously from a single-crystal image at a high resolution in a very time-efficient manner.
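The linear hue-to-temperature relation described above amounts to a least-squares line fit from calibration points, followed by a per-pixel lookup over the hue image. A minimal sketch (the calibration values and function names are made up for illustration):

```python
import numpy as np

def fit_hue_to_temperature(hue_samples, temp_samples):
    """Least-squares linear fit T = a * hue + b from calibration points."""
    a, b = np.polyfit(hue_samples, temp_samples, deg=1)
    return a, b

def hue_image_to_temperature(hue_img, a, b):
    """Apply the calibrated linear relation to a whole hue image at once."""
    return a * np.asarray(hue_img, dtype=float) + b
```

Applied to a single-crystal hue image, the second function yields a dense temperature map, which is why this approach produces many isotherms simultaneously rather than one isotherm per filter.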
A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.
Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G
2017-08-01
Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly-detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and angles between the beam and surface normal, and the detector and surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material, and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations.
Carter, Erik P; Seymour, Elif Ç; Scherr, Steven M; Daaboul, George G; Freedman, David S; Selim Ünlü, M; Connor, John H
2017-01-01
This chapter describes an approach for the label-free imaging and quantification of intact Ebola virus (EBOV) and EBOV viruslike particles (VLPs) using a light microscopy technique. In this technique, individual virus particles are captured onto a silicon chip that has been printed with spots of virus-specific capture antibodies. These captured virions are then detected using an optical approach called interference reflectance imaging. This approach allows for the detection of each virus particle that is captured on an antibody spot and can resolve the filamentous structure of EBOV VLPs without the need for electron microscopy. Capture of VLPs and virions can be done from a variety of sample types ranging from tissue culture medium to blood. The technique also allows automated quantitative analysis of the number of virions captured. This can be used to identify the virus concentration in an unknown sample. In addition, this technique offers the opportunity to easily image virions captured from native solutions without the need for additional labeling approaches while offering a means of assessing the range of particle sizes and morphologies in a quantitative manner.
Scheimpflug with computational imaging to extend the depth of field of iris recognition systems
NASA Astrophysics Data System (ADS)
Sinharoy, Indranil
Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances beyond which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits the iris image capture to a small volume, the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with modern computational imaging techniques to extend the capture volume? The solution we found is surprisingly simple, yet it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration, and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporate the pupil parameters. The model revealed that we could register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration, making AFS suitable for real-time performance. We have demonstrated up to an order of magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution and signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure, conventional image for the same DOF and brightness level.
The net reduction in capture time can significantly relax the constraints on subject movement during iris acquisition, making it less restrictive.
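After capture and registration, the blending step of a focus-stacking pipeline like AFS can be sketched as a per-pixel pick of the sharpest source frame, with sharpness measured by a discrete Laplacian. This is a generic focus-stacking sketch, not the thesis's algorithm; the wrap-around edge handling via `np.roll` is a simplifying assumption.

```python
import numpy as np

def sharpness(img):
    """Absolute response of a 4-neighbor Laplacian (wrap-around at edges)."""
    return np.abs(4 * img
                  - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)
                  - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))

def focus_stack(images):
    """Blend registered images by taking each pixel from its sharpest frame."""
    stack = np.asarray(images, dtype=float)                 # (n, h, w)
    idx = np.argmax([sharpness(im) for im in stack], axis=0)
    return np.take_along_axis(stack, idx[None], axis=0)[0]
```

In practice the per-pixel decision map is usually smoothed before blending to avoid seams; the hard argmax above is the bare-bones version of the idea.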
The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor.
Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-Ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji
2018-03-05
The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from a single pixel's photodiode among different exposure taps and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
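The classical solve that this work extends to dynamic scenes is a per-pixel linear system: with at least three known unit lighting directions, intensities under a Lambertian model satisfy lights @ (albedo * normal) = I. A minimal sketch (lighting directions in the example are illustrative; real data needs radiometric calibration and shadow handling):

```python
import numpy as np

def photometric_stereo(images, lights):
    """Recover per-pixel unit normals and albedo under the Lambertian model.

    images: array (n, h, w) of intensities, n >= 3
    lights: array (n, 3) of unit lighting directions
    Solves lights @ g = I per pixel in the least-squares sense,
    where g = albedo * normal.
    """
    imgs = np.asarray(images, dtype=float)
    n, h, w = imgs.shape
    I = imgs.reshape(n, -1)                                  # (n, pixels)
    g, *_ = np.linalg.lstsq(np.asarray(lights, dtype=float), I, rcond=None)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-12)                  # unit vectors
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

The multi-tap sensor's contribution in the paper is supplying these n differently-lit images at almost the same instant, so the static-scene assumption of this solve is approximately satisfied even for moving subjects.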
NASA Astrophysics Data System (ADS)
Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha
2012-09-01
Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation will generate an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if they are implemented by hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images with hardware without using frame memory. The method utilizes histogram matching of two preview images, which are exposed for a long and short time, respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, this method does not require frame memory to buffer image data, and therefore can greatly save the cost of CIS. This method not only supports single image capture, but also bracketing to capture three images at a time. The proposed method was implemented by hardware description language and verified by a field-programmable gate array with a 5 M CIS. Simulations show that the system can perform in real time with a low cost and can correct the color of under-exposed images well.
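The histogram-matching step behind a look-up table of this kind can be sketched as classical CDF matching between the short- and long-exposure previews: each 8-bit level of the short exposure maps to the long-exposure level with the same cumulative frequency. This is a generic software sketch of histogram matching, not the authors' hardware ILUT.

```python
import numpy as np

def build_matching_lut(short_img, long_img):
    """8-bit LUT mapping the short exposure's histogram onto the long one's."""
    s_cdf = np.cumsum(np.bincount(short_img.ravel(), minlength=256))
    r_cdf = np.cumsum(np.bincount(long_img.ravel(), minlength=256))
    s_cdf = s_cdf / s_cdf[-1]
    r_cdf = r_cdf / r_cdf[-1]
    # For each source level, find the reference level with the same CDF value.
    return np.clip(np.searchsorted(r_cdf, s_cdf), 0, 255).astype(np.uint8)

def correct(img, lut):
    """Apply the LUT to an under-exposed capture (pure table lookup)."""
    return lut[img]
```

Because correction is a single 256-entry table lookup per pixel, it needs no frame buffer, which mirrors the memory-free property the abstract emphasizes.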
4D multiple-cathode ultrafast electron microscopy
Baskin, John Spencer; Liu, Haihua; Zewail, Ahmed H.
2014-01-01
Four-dimensional multiple-cathode ultrafast electron microscopy is developed to enable the capture of multiple images at ultrashort time intervals for a single microscopic dynamic process. The dynamic process is initiated in the specimen by one femtosecond light pulse and probed by multiple packets of electrons generated by one UV laser pulse impinging on multiple, spatially distinct, cathode surfaces. Each packet is distinctly recorded, with timing and detector location controlled by the cathode configuration. In the first demonstration, two packets of electrons on each image frame (of the CCD) probe different times, separated by 19 picoseconds, in the evolution of the diffraction of a gold film following femtosecond heating. Future elaborations of this concept to extend its capabilities and expand the range of applications of 4D ultrafast electron microscopy are discussed. The proof-of-principle demonstration reported here provides a path toward the imaging of irreversible ultrafast phenomena of materials, and opens the door to studies involving the single-frame capture of ultrafast dynamics using single-pump/multiple-probe, embedded stroboscopic imaging. PMID:25006261
Comparison of digital intraoral scanners by single-image capture system and full-color movie system.
Yamamoto, Meguru; Kataoka, Yu; Manabe, Atsufumi
2017-01-01
The use of dental computer-aided design/computer-aided manufacturing (CAD/CAM) restoration is rapidly increasing. This study was performed to evaluate the marginal and internal cement thickness and the adhesive gap of internal cavities comprising CAD/CAM materials using two digital impression acquisition methods and micro-computed tomography. Images obtained by a single-image acquisition system (Bluecam Ver. 4.0) and a full-color video acquisition system (Omnicam Ver. 4.2) were divided into the BL and OM groups, respectively. Silicone impressions were prepared from an ISO-standard metal mold, and CEREC Stone BC and New Fuji Rock IMP were used to create working models (n=20) in the BL and OM groups (n=10 per group), respectively. Individual inlays were designed in a conventional manner using designated software, and all restorations were prepared using CEREC inLab MC XL. These were assembled with the corresponding working models used for measurement, and the level of fit was examined by three-dimensional analysis based on micro-computed tomography. Significant differences in the marginal and internal cement thickness and adhesive gap spacing were found between the OM and BL groups. The full-color movie capture system appears to be a better restoration system than the single-image capture system.
Weng, Sheng; Chen, Xu; Xu, Xiaoyun; Wong, Kelvin K.; Wong, Stephen T. C.
2016-01-01
In coherent anti-Stokes Raman scattering (CARS) and second harmonic generation (SHG) imaging, backward and forward generated photons exhibit different image patterns and thus capture salient intrinsic information of tissues from different perspectives. However, they are often mixed in collection using traditional image acquisition methods and thus are hard to interpret. We developed a multimodal scheme using a single central fiber and multimode fiber bundle to simultaneously collect and differentiate images formed by these two types of photons and evaluated the scheme in an endomicroscopy prototype. The ratio of these photons collected was calculated for the characterization of tissue regions with strong or weak epi-photon generation while different image patterns of these photons at different tissue depths were revealed. This scheme provides a new approach to extract and integrate information captured by backward and forward generated photons in dual CARS/SHG imaging synergistically for biomedical applications. PMID:27375938
Touch HDR: photograph enhancement by user controlled wide dynamic range adaptation
NASA Astrophysics Data System (ADS)
Verrall, Steve; Siddiqui, Hasib; Atanassov, Kalin; Goma, Sergio; Ramachandra, Vikas
2013-03-01
High Dynamic Range (HDR) technology enables photographers to capture a greater range of tonal detail. HDR is typically used to bring out detail in a dark foreground object set against a bright background. HDR technologies include multi-frame HDR and single-frame HDR. Multi-frame HDR requires the combination of a sequence of images taken at different exposures. Single-frame HDR requires histogram equalization post-processing of a single image, a technique referred to as local tone mapping (LTM). Images generated using HDR technology can look less natural than their non-HDR counterparts. Sometimes it is only desired to enhance small regions of an original image. For example, it may be desired to enhance the tonal detail of one subject's face while preserving the original background. The Touch HDR technique described in this paper achieves these goals by enabling selective blending of HDR and non-HDR versions of the same image to create a hybrid image. The HDR version of the image can be generated by either multi-frame or single-frame HDR. Selective blending can be performed as a post-processing step, for example, as a feature of a photo editor application, at any time after the image has been captured. HDR and non-HDR blending is controlled by a weighting surface, which is configured by the user through a sequence of touches on a touchscreen.
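The selective blend can be sketched as a per-pixel weighted average of the HDR and non-HDR images under a weighting surface. As a stand-in for the user-configured surface, this sketch assumes a single Gaussian bump centred on one touch point (the paper's surface is built from a sequence of touches):

```python
import numpy as np

def touch_weight(shape, cy, cx, sigma):
    """Hypothetical weighting surface: a Gaussian centred on the
    user's touch point (cy, cx); 1 at the touch, ~0 far away."""
    y, x = np.mgrid[:shape[0], :shape[1]]
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))

def blend(non_hdr, hdr, weight):
    """Hybrid image: HDR detail where the weight is high, the
    original (non-HDR) image elsewhere."""
    return weight * hdr + (1.0 - weight) * non_hdr
```

At the touch point the hybrid equals the HDR image; away from it, the original background is preserved.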
Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.
Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K
2010-09-01
We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) We present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.
High-dynamic-range imaging for cloud segmentation
NASA Astrophysics Data System (ADS)
Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan
2018-04-01
Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg - an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
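Multi-exposure fusion, the basis of the HDR image generation described above, can be illustrated with a minimal well-exposedness-weighted average in the spirit of Mertens-style fusion. This is a simplified sketch, not the HDRCloudSeg pipeline; the mid-grey target and sigma are assumed values:

```python
import numpy as np

def fuse_exposures(stack, sigma=0.2):
    """Weighted multi-exposure fusion: each pixel favours the exposure
    in which it is best exposed (intensity closest to mid-grey 0.5)."""
    stack = np.asarray(stack, dtype=float)          # (n, H, W), values in [0, 1]
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    w /= w.sum(axis=0, keepdims=True)               # normalize weights per pixel
    return (w * stack).sum(axis=0)
```

Fusing an under-exposed, a well-exposed, and an over-exposed frame recovers values close to the well-exposed one, which is why circumsolar and horizon regions both retain detail in the fused map.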
Design framework for a spectral mask for a plenoptic camera
NASA Astrophysics Data System (ADS)
Berkner, Kathrin; Shroff, Sapna A.
2012-01-01
Plenoptic cameras are designed to capture different combinations of light rays from a scene, sampling its lightfield. Such camera designs, which capture directional ray information, enable applications such as digital refocusing, rotation, or depth estimation. Only a few designs address capturing spectral information of the scene. It has been demonstrated that by modifying a plenoptic camera with a filter array containing different spectral filters inserted in the pupil plane of the main lens, sampling of the spectral dimension of the plenoptic function is performed. As a result, the plenoptic camera is turned into a single-snapshot multispectral imaging system that trades off spatial against spectral information captured with a single sensor. Little work has so far analyzed the effects of diffraction and aberrations of the optical system on the performance of the spectral imager. In this paper we demonstrate simulation of a spectrally coded plenoptic camera optical system via wave propagation analysis, evaluate the quality of the spectral measurements captured at the detector plane, and demonstrate opportunities for optimization of the spectral mask for a few sample applications.
NASA Astrophysics Data System (ADS)
Mefleh, Fuad N.; Baker, G. Hamilton; Kwartowitz, David M.
2014-03-01
In our previous work we presented a novel image-guided surgery (IGS) system, Kit for Navigation by Image Focused Exploration (KNIFE) [1,2]. KNIFE has been demonstrated to be effective in guiding mock clinical procedures with the tip of an electromagnetically tracked catheter overlaid onto a pre-captured bi-plane fluoroscopic loop. Representation of the catheter in KNIFE differs greatly from what is captured by the fluoroscope, due to distortions and other properties of fluoroscopic images. When imaged by a fluoroscope, catheters can be visualized due to the inclusion of radiopaque materials (i.e., Bi, Ba, W) in the polymer blend [3]. However, in KNIFE catheter location is determined using a single tracking seed located in the catheter tip that is represented as a single point overlaid on pre-captured fluoroscopic images. To bridge the gap in catheter representation between KNIFE and traditional methods we constructed a catheter with five tracking seeds positioned along the distal 70 mm of the catheter. We have investigated the use of four spline interpolation methods for estimating true catheter shape and have assessed their estimation error. In this work we present a method for the evaluation of interpolation algorithms with respect to catheter shape determination.
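One representative spline choice for estimating catheter shape from the five tracked seed positions is a Catmull-Rom spline, which passes through every seed. The abstract does not name the four methods evaluated, so this is a hedged example of one plausible candidate; the seed coordinates below are hypothetical:

```python
import numpy as np

def catmull_rom(points, samples_per_seg=10):
    """Interpolate a smooth 3D curve through ordered seed positions
    using a uniform Catmull-Rom spline."""
    p = np.asarray(points, dtype=float)
    p = np.vstack([p[0], p, p[-1]])       # duplicate ends so curve spans all seeds
    out = []
    for i in range(1, len(p) - 2):
        p0, p1, p2, p3 = p[i - 1], p[i], p[i + 1], p[i + 2]
        t = np.linspace(0.0, 1.0, samples_per_seg, endpoint=False)[:, None]
        out.append(0.5 * ((2 * p1)
                          + (-p0 + p2) * t
                          + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                          + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3))
    out.append(p[-2][None, :])            # close the curve at the last seed
    return np.vstack(out)
```

For five seeds spaced along the distal 70 mm, the interpolated curve passes through the first and last seeds exactly, which is the property one would check against ground-truth catheter shape.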
NASA Astrophysics Data System (ADS)
Kabra, Saurabh; Kelleher, Joe; Kockelmann, Winfried; Gutmann, Matthias; Tremsin, Anton
2016-09-01
Single crystals of a partially twinned magnetic shape memory alloy, Ni2MnGa, were imaged using neutron diffraction and energy-resolved imaging techniques at the ISIS spallation neutron source. Single crystal neutron diffraction showed that the crystal produces two twin variants with a specific crystallographic relationship. Transmission images were captured using a time of flight MCP/Timepix neutron counting detector. The twinned and untwinned regions were clearly distinguishable in images corresponding to narrow-energy transmission images. Further, the spatially-resolved transmission spectra were used to elucidate the orientations of the crystallites in the different volumes of the crystal.
The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor †
Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji
2018-01-01
The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes. PMID:29510599
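The classical photometric stereo formulation that the multi-tap sensor feeds can be written as a per-pixel least-squares problem I = L·n under a Lambertian model: with at least three known unit light directions L, the albedo-scaled normal is recovered from the measured intensities. A minimal sketch (not the authors' real-time implementation):

```python
import numpy as np

def photometric_stereo(images, lights):
    """Classical photometric stereo: solve I = L @ n per pixel from
    >= 3 images under known lighting directions (Lambertian model)."""
    L = np.asarray(lights, dtype=float)           # (k, 3) unit light vectors
    I = np.stack([im.ravel() for im in images])   # (k, P) intensities
    g, *_ = np.linalg.lstsq(L, I, rcond=None)     # (3, P) albedo-scaled normals
    albedo = np.linalg.norm(g, axis=0)
    n = g / np.maximum(albedo, 1e-12)             # unit surface normals
    h, w = images[0].shape
    return n.T.reshape(h, w, 3), albedo.reshape(h, w)
```

On synthetic images of a tilted plane rendered under three known lights, the routine recovers the true normal and unit albedo exactly, since the system is square and noise-free.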
Capturing the plenoptic function in a swipe
NASA Astrophysics Data System (ADS)
Lawson, Michael; Brookes, Mike; Dragotti, Pier Luigi
2016-09-01
Blur in images, caused by camera motion, is typically thought of as a problem. The approach described in this paper shows instead that it is possible to use the blur caused by the integration of light rays at different positions along a moving camera trajectory to extract information about the light rays present within the scene. Retrieving the light rays of a scene from different viewpoints is equivalent to retrieving the plenoptic function of the scene. In this paper, we focus on a specific case in which the blurred image of a scene, containing a flat plane with a texture signal that is a sum of sine waves, is analysed to recreate the plenoptic function. The image is captured by a single lens camera with shutter open, moving in a straight line between two points, resulting in a swiped image. It is shown that finite rate of innovation sampling theory can be used to recover the scene geometry and therefore the epipolar plane image from the single swiped image. This epipolar plane image can be used to generate unblurred images for a given camera location.
Matsushima, Kyoji; Sonobe, Noriaki
2018-01-01
Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.
Compressive Coded-Aperture Multimodal Imaging Systems
NASA Astrophysics Data System (ADS)
Rueda-Chacon, Hoover F.
Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, polarization, and others. For instance, spectral images are modeled as 3D cubes with two spatial and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectrum, or depth are referred to as 4D images. Nature itself spans different physical domains, so imaging our real world demands capturing information in at least six different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to three spectral channels in real time by the use of color sensors. Capturing more spectral channels requires scanning methodologies, which demand long acquisition times. To date, multimodal imaging generally requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene; different fusion techniques are then employed to merge all the individual information into a single image. Therefore, new ways to efficiently capture more than three spectral channels of 3D time-varying spatial information, in a single or few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures, which follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization algorithm to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS).
To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures block or pass the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In the first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which allow the input image to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass, and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in reconstruction quality compared with conventional block-unblock coded-aperture-based CSI architectures. On another front, due to the rich information contained in the infrared spectrum as well as the depth domain, this thesis aims to explore multimodal imaging by extending the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain, and it also proposes, for the first time, a new imaging device that captures 4D data cubes (2D spatial + 1D spectral + depth) with as few as a single snapshot. Due to the snapshot advantage of this camera, video sequences are possible, thus enabling the joint capture of 5D imagery. It aims to create super-human sensing that will enable the perception of our world in new and exciting ways.
With this, we intend to advance the state of the art in compressive sensing systems to extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer and robotic vision, because they would allow an artificial intelligence to make informed decisions about not only the location of objects within a scene but also their material properties.
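The CS reconstruction step described above, estimating a sparse signal from linear random projections, is commonly solved with proximal-gradient methods. A minimal iterative soft-thresholding (ISTA) sketch for a generic 1D problem y = Φx (a toy stand-in for the 3D spatio-spectral cube, with assumed parameters):

```python
import numpy as np

def ista(Phi, y, lam=0.01, n_iter=500):
    """Iterative soft-thresholding: recover a sparse x from
    compressive measurements y = Phi @ x by minimizing
    ||y - Phi x||_2^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2      # 1 / Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        r = x + step * Phi.T @ (y - Phi @ x)      # gradient step on the data term
        x = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # shrinkage
    return x
```

With a random Gaussian Φ of 40 rows and a 3-sparse signal of length 100, the sparse vector is recovered to small error, illustrating sub-Nyquist sampling.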
Depth-aware image seam carving.
Shen, Jianbing; Wang, Dapeng; Li, Xuelong
2013-10-01
Image seam carving algorithms should preserve important and salient objects as much as possible when changing the image size, while removing only the secondary objects in the scene. However, it is still difficult to determine the important and salient objects so as to avoid distorting them after resizing the input image. In this paper, we develop a novel depth-aware single image seam carving approach by taking advantage of modern depth cameras such as the Kinect sensor, which captures the RGB color image and its corresponding depth map simultaneously. By considering both the depth information and the just noticeable difference (JND) model, we develop an efficient JND-based significance computation approach using multiscale graph-cut-based energy optimization. Our method achieves better seam carving performance by removing fewer seams from near objects and more seams from distant objects. To the best of our knowledge, our algorithm is the first work to use the true depth map captured by a Kinect depth camera for single image seam carving. The experimental results demonstrate that the proposed approach produces better seam carving results than previous content-aware seam carving methods.
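The idea of biasing seams away from near objects can be sketched with a standard dynamic-programming seam and a depth-weighted energy. This is a simplified illustration (plain gradient energy scaled by nearness), not the paper's JND/graph-cut formulation:

```python
import numpy as np

def depth_aware_energy(gray, depth, alpha=2.0):
    """Gradient energy boosted for near objects (small depth values),
    so seams preferentially pass through distant regions."""
    gy, gx = np.gradient(gray.astype(float))
    grad = np.abs(gx) + np.abs(gy)
    nearness = 1.0 - depth / (depth.max() + 1e-12)   # 1 = nearest
    return grad * (1.0 + alpha * nearness)

def find_seam(energy):
    """Minimum-cumulative-energy vertical seam via dynamic programming."""
    h, w = energy.shape
    cost = energy.copy()
    for i in range(1, h):
        left = np.roll(cost[i - 1], 1);  left[0] = np.inf
        right = np.roll(cost[i - 1], -1); right[-1] = np.inf
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):       # backtrack through the cheapest neighbours
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return seam
```

On a toy image with a high-gradient near object spanning a few columns, the recovered seam routes around those columns entirely.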
Single-molecule imaging of DNA polymerase I (Klenow fragment) activity by atomic force microscopy
NASA Astrophysics Data System (ADS)
Chao, J.; Zhang, P.; Wang, Q.; Wu, N.; Zhang, F.; Hu, J.; Fan, C. H.; Li, B.
2016-03-01
We report a DNA origami-facilitated single-molecule platform that exploits atomic force microscopy to study DNA replication. We imaged several functional activities of the Klenow fragment of E. coli DNA polymerase I (KF), including binding, moving, and dissociation from the template DNA. Upon completion of these actions, a double-stranded DNA molecule was formed. Furthermore, the direction of KF activities was captured and then confirmed by shifting the KF binding sites on the template DNA. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr06544e
Introducing the depth transfer curve for 3D capture system characterization
NASA Astrophysics Data System (ADS)
Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas
2011-03-01
3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing an image by horizontally separated cameras. Objects at different depths will be projected with different horizontal displacement on the left and right camera images. These images, when fed separately to either eye, leads to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task since the intended and/or unintended side effects from 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the 3D cameras' depth capture capabilities.
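The disparity-depth relation underlying the evaluation, horizontal displacement between the left and right camera images as a function of scene depth, follows the pinhole stereo model Z = f·B/d. A minimal sketch (focal length and baseline values are illustrative, not from the paper):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo relation Z = f * B / d: objects at different
    depths project with different horizontal displacement (disparity)
    on the left and right camera images."""
    return focal_px * baseline_m / disparity_px
```

The inverse relation is what a depth transfer curve characterizes: halving the measured disparity doubles the inferred depth, so depth resolution degrades for distant objects.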
Plenoptic imaging with second-order correlations of light
NASA Astrophysics Data System (ADS)
Pepe, Francesco V.; Scarcelli, Giuliano; Garuccio, Augusto; D'Angelo, Milena
2016-01-01
Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. We demonstrate that it is possible to implement plenoptic imaging through second-order correlations of chaotic light, thus making it possible to overcome the typical limitations of classical plenoptic devices.
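The second-order correlation used in such schemes is the frame-averaged intensity covariance between two detector planes, G2(xa, xb) = ⟨Ia·Ib⟩ − ⟨Ia⟩⟨Ib⟩. A toy numerical sketch with simulated chaotic (exponentially distributed) intensities, not the authors' optical setup:

```python
import numpy as np

def second_order_correlation(frames_a, frames_b):
    """Frame-averaged intensity correlation G2(x_a, x_b) =
    <I_a I_b> - <I_a><I_b> between two detector arrays."""
    A = np.asarray(frames_a, float)   # (n_frames, Na)
    B = np.asarray(frames_b, float)   # (n_frames, Nb)
    return (A[:, :, None] * B[:, None, :]).mean(0) - np.outer(A.mean(0), B.mean(0))
```

Correlating a chaotic-light detector array with itself yields a strong diagonal (correlated speckle) and near-zero off-diagonal terms for statistically independent pixels, which is the signal that correlation plenoptic imaging exploits.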
NASA Astrophysics Data System (ADS)
Sewell, Everest; Ferguson, Kevin; Jacobs, Jeffrey; Greenough, Jeff; Krivets, Vitaliy
2016-11-01
We describe experiments of single-shock Richtmyer-Meshkov Instability (RMI) performed on the shock tube apparatus at the University of Arizona in which the initial conditions are volumetrically imaged prior to shock wave arrival. Initial perturbations play a major role in the evolution of RMI, and previous experimental efforts only capture a single plane of the initial condition. The method presented uses a rastered laser sheet to capture additional images throughout the depth of the initial condition immediately before the shock arrival time. These images are then used to reconstruct a volumetric approximation of the experimental perturbation. Analysis of the initial perturbations is performed, and then used as initial conditions in simulations using the hydrodynamics code ARES, developed at Lawrence Livermore National Laboratory (LLNL). Experiments are presented and comparisons are made with simulation results.
NASA Astrophysics Data System (ADS)
Wong, Erwin
2000-03-01
Traditional methods of linear imaging limit the viewer to a single fixed-point perspective. By means of a single-lens, multiple-perspective mirror system, a 360-degree representation of the area around the camera is reconstructed. This reconstruction is used to overcome the limitations of a traditional camera by providing the viewer with many different perspectives. By constructing the mirror as a hemispherical surface with multiple focal lengths at various diameters on the mirror, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction are performed by computer processing of the image obtained by the camera; the image presents up to five distinguishable viewpoints from which a computer can extrapolate pseudo-perspective data. Geometric and depth-of-field information can be extrapolated via comparison and isolation of objects within a virtual scene post-processed by the computer. Combining these data with scene-rendering software provides the viewer with the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination of the workings of the mirror relay system is provided, including possible image extrapolation and correction methods. Generation of virtual interpolated and constructed data is also discussed.
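The first processing step for any such catadioptric capture is unwrapping the circular mirror image into a panoramic strip by sampling along rays from the image centre. A minimal nearest-neighbour sketch (radii and output size are illustrative, and the mirror's actual projection geometry is not modelled):

```python
import numpy as np

def unwarp_panorama(omni, r_inner, r_outer, out_w=360, out_h=40):
    """Unwrap a circular omnidirectional (mirror) image into a
    360-degree panoramic strip by polar-to-cartesian resampling."""
    h, w = omni.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    pano = np.zeros((out_h, out_w))
    for v in range(out_h):
        r = r_inner + (r_outer - r_inner) * v / (out_h - 1)
        for u in range(out_w):
            theta = 2 * np.pi * u / out_w
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < h and 0 <= x < w:
                pano[v, u] = omni[y, x]   # nearest-neighbour sample on the ring
    return pano
```

Each output row samples a ring of constant radius, so a radius-valued test image unwraps into rows of (nearly) constant value.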
Surface chemistry and morphology in single particle optical imaging
NASA Astrophysics Data System (ADS)
Ekiz-Kanik, Fulya; Sevenler, Derin Deniz; Ünlü, Neşe Lortlar; Chiari, Marcella; Ünlü, M. Selim
2017-05-01
Biological nanoparticles such as viruses and exosomes are important biomarkers for a range of medical conditions, from infectious diseases to cancer. Biological sensors that detect whole viruses and exosomes with high specificity, yet without additional labeling, are promising because they reduce the complexity of sample preparation and may improve measurement quality by retaining information about the nanoscale physical structure of the bio-nanoparticle (BNP). Towards this end, a variety of BNP biosensor technologies have been developed, several of which are capable of enumerating the precise number of detected viruses or exosomes and analyzing physical properties of each individual particle. Optical imaging techniques are promising candidates among a broad range of label-free nanoparticle detectors. These imaging BNP sensors detect the binding of single nanoparticles on a flat surface functionalized with a specific capture molecule or an array of multiplexed capture probes. The functionalization step confers all molecular specificity for the sensor's target but can introduce an unforeseen problem: a rough and inhomogeneous surface coating can be a source of noise, as these sensors detect small local changes in optical refractive index. In this paper, we review several optical technologies for label-free BNP detectors with a focus on imaging systems. We compare surface-imaging methods including dark-field, surface plasmon resonance imaging, and interference reflectance imaging. We discuss the importance of ensuring consistently uniform and smooth surface coatings of capture molecules for these types of biosensors and finally summarize several methods that have been developed towards addressing this challenge.
Ultrafast chirped optical waveform recording using referenced heterodyning and a time microscope
Bennett, Corey Vincent
2010-06-15
A new technique for capturing both the amplitude and phase of an optical waveform is presented. This technique can capture signals with many THz of bandwidth in a single shot (e.g., temporal resolution of about 44 fs), or be operated repetitively at a high rate. That is, each temporal window (or frame) is captured single shot, in real time, but the process may be run repeatedly or single-shot. This invention expands upon previous work in temporal imaging by adding heterodyning, which can be self-referenced for improved precision and stability, to convert frequency chirp (the second derivative of phase with respect to time) into a time-varying intensity modulation. By also including a variety of possible demultiplexing techniques, this process is scalable to recording continuous signals.
Ultrafast chirped optical waveform recorder using referenced heterodyning and a time microscope
Bennett, Corey Vincent [Livermore, CA
2011-11-22
A new technique for capturing both the amplitude and phase of an optical waveform is presented. This technique can capture signals with many THz of bandwidth in a single shot (e.g., temporal resolution of about 44 fs), or be operated repetitively at a high rate. That is, each temporal window (or frame) is captured single shot, in real time, but the process may be run repeatedly or single-shot. This invention expands upon previous work in temporal imaging by adding heterodyning, which can be self-referenced for improved precision and stability, to convert frequency chirp (the second derivative of phase with respect to time) into a time-varying intensity modulation. By also including a variety of possible demultiplexing techniques, this process is scalable to recording continuous signals.
Integral imaging with Fourier-plane recording
NASA Astrophysics Data System (ADS)
Martínez-Corral, M.; Barreiro, J. C.; Llavador, A.; Sánchez-Ortiga, E.; Sola-Pikabea, J.; Scrofani, G.; Saavedra, G.
2017-05-01
Integral Imaging is well known for its capability of recording both the spatial and the angular information of three-dimensional (3D) scenes. Based on this idea, the plenoptic concept has been developed over the past two decades, leading to a new camera design that captures the spatial-angular information with a single sensor and a single shot. However, the classical plenoptic design presents two drawbacks: one is the oblique recording made by external microlenses; the other is the loss of information due to diffraction effects. In this contribution we report a change in the paradigm and propose the combination of a telecentric architecture and Fourier-plane recording. This new capture geometry permits substantial improvements in resolution, depth of field, and computation time.
NASA Astrophysics Data System (ADS)
Sewell, Everest; Ferguson, Kevin; Greenough, Jeffrey; Jacobs, Jeffrey
2014-11-01
We describe new experiments on single-shock Richtmyer-Meshkov instability (RMI) performed on the shock tube apparatus at the University of Arizona, in which the initial conditions are volumetrically imaged prior to shock wave arrival. The initial perturbation plays a major role in the evolution of the RMI, and previous experimental efforts captured only a narrow slice of the initial condition. The method presented uses a rastered laser sheet to capture additional images through the depth of the initial condition shortly before the experimental start time. These images are then used to reconstruct a volumetric approximation of the experimental perturbation, which is simulated using the hydrodynamics code ARES, developed at Lawrence Livermore National Laboratory (LLNL). Comparison is made between the time evolution of the interface width and the mixedness ratio measured from the experiments and the predictions of the numerical simulations.
Plenoptic background oriented schlieren imaging
NASA Astrophysics Data System (ADS)
Klemkowsky, Jenna N.; Fahringer, Timothy W.; Clifford, Christopher J.; Bathel, Brett F.; Thurow, Brian S.
2017-09-01
The combination of the background oriented schlieren (BOS) technique with the unique imaging capabilities of a plenoptic camera, termed plenoptic BOS, is introduced as a new addition to the family of schlieren techniques. Compared to conventional single camera BOS, plenoptic BOS is capable of sampling multiple lines-of-sight simultaneously. Displacements from each line-of-sight are collectively used to build a four-dimensional displacement field, which is a vector function structured similarly to the original light field captured in a raw plenoptic image. The displacement field is used to render focused BOS images, which qualitatively are narrow depth of field slices of the density gradient field. Unlike focused schlieren methods that require manually changing the focal plane during data collection, plenoptic BOS synthetically changes the focal plane position during post-processing, such that all focal planes are captured in a single snapshot. Through two different experiments, this work demonstrates that plenoptic BOS is capable of isolating narrow depth of field features, qualitatively inferring depth, and quantitatively estimating the location of disturbances in 3D space. Such results motivate future work to transition this single-camera technique towards quantitative reconstructions of 3D density fields.
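The per-window displacement estimation that underlies BOS can be sketched as follows; the synthetic random background, the known integer shift, and the FFT cross-correlation are illustrative assumptions, not the authors' processing chain:

```python
import numpy as np

# Illustrative sketch (assumption, not the authors' code): BOS recovers the
# apparent shift of a background pattern by cross-correlating a reference
# window with the distorted image. Here a random pattern is shifted by a
# known integer displacement and recovered via FFT cross-correlation.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))                  # undisturbed background window
dy, dx = 3, 5                               # "density-gradient" induced shift
img = np.roll(ref, (dy, dx), axis=(0, 1))   # distorted view of the background
corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
peak = np.unravel_index(np.argmax(corr), corr.shape)  # correlation peak
# peak recovers (dy, dx); repeating per window and per line-of-sight builds
# the four-dimensional displacement field described above
```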
NASA Astrophysics Data System (ADS)
Bolan, Jeffrey; Hall, Elise; Clifford, Chris; Thurow, Brian
The Light-Field Imaging Toolkit (LFIT) is a collection of MATLAB functions designed to facilitate the rapid processing of raw light field images captured by a plenoptic camera. An included graphical user interface streamlines the necessary post-processing steps associated with plenoptic images. The generation of perspective shifted views and computationally refocused images is supported, in both single image and animated formats. LFIT performs necessary calibration, interpolation, and structuring steps to enable future applications of this technology.
NASA Technical Reports Server (NTRS)
Wilson, Daniel W. (Inventor); Johnson, William R. (Inventor); Bearman, Gregory H. (Inventor)
2011-01-01
Computed tomography imaging spectrometers ("CTISs") employing a single lens are provided. The CTISs may be either transmissive or reflective, and the single lens is either configured to transmit and receive uncollimated light (in transmissive systems), or is configured to reflect and receive uncollimated light (in reflective systems). An exemplary transmissive CTIS includes a focal plane array detector, a single lens configured to transmit and receive uncollimated light, a two-dimensional grating, and a field stop aperture. An exemplary reflective CTIS includes a focal plane array detector, a single mirror configured to reflect and receive uncollimated light, a two-dimensional grating, and a field stop aperture.
Porter, Glenn; Ebeyan, Robert; Crumlish, Charles; Renshaw, Adrian
2015-03-01
The photographic preservation of fingermark impression evidence found on ammunition cases remains problematic due to the cylindrical shape of the deposition substrate preventing complete capture of the impression in a single image. A novel method was developed for the photographic recovery of fingermarks from curved surfaces using digital imaging. The process involves the digital construction of a complete impression image made from several different images captured from multiple camera perspectives. Fingermark impressions deposited onto 9-mm and 0.22-caliber brass cartridge cases and a plastic 12-gauge shotgun shell were tested using various image parameters, including digital stitching method, number of images per 360° rotation of shell, image cropping, and overlap. The results suggest that this method may be successfully used to recover fingermark impression evidence from the surfaces of ammunition cases or other similar cylindrical surfaces. © 2014 American Academy of Forensic Sciences.
Single-shot three-dimensional reconstruction based on structured light line pattern
NASA Astrophysics Data System (ADS)
Wang, ZhenZhou; Yang, YongMing
2018-07-01
Reconstruction of an object in a single shot is of great importance in many applications in which the object is moving or its shape is non-rigid and changes irregularly. In this paper, we propose a single-shot structured light 3D imaging technique that calculates the phase map from a distorted line pattern. The technique uses image processing to segment and cluster the projected structured light line pattern from one single captured image. The coordinates of the clustered lines are extracted to form a low-resolution phase matrix, which is then transformed to a full-resolution phase map by spline interpolation. The 3D shape of the object is computed from the full-resolution phase map and the 2D camera coordinates. Experimental results show that the proposed method reconstructs the three-dimensional shape of the object robustly from a single image.
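The low-resolution-to-full-resolution phase step described above can be sketched as follows; the paper uses spline interpolation, while this minimal sketch substitutes separable linear interpolation, and the `upsample_phase` helper and the 2x2 sample values are hypothetical:

```python
import numpy as np

# Illustrative sketch (assumed helper, not the authors' implementation):
# the coordinates of the clustered stripe lines give a coarse phase matrix;
# upsampling it (spline in the paper, separable linear here) yields a
# full-resolution phase map for triangulation.
def upsample_phase(low, shape):
    h, w = low.shape
    H, W = shape
    # interpolate along rows, then along columns, on normalized grids
    xs = np.linspace(0, w - 1, W)
    rows = np.array([np.interp(xs, np.arange(w), r) for r in low])
    ys = np.linspace(0, h - 1, H)
    return np.array([np.interp(ys, np.arange(h), c) for c in rows.T]).T

low = np.array([[0.0, 1.0], [2.0, 3.0]])   # hypothetical 2x2 phase samples
full = upsample_phase(low, (5, 5))         # 5x5 full-resolution phase map
```

The corner samples are preserved exactly and interior values vary smoothly between them, which is the property the 3D triangulation step relies on.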
Single Molecule Visualization of Protein-DNA Complexes: Watching Machines at Work
NASA Astrophysics Data System (ADS)
Kowalczykowski, Stephen
2013-03-01
We can now watch individual proteins acting on single molecules of DNA. Such imaging provides unprecedented interrogation of fundamental biophysical processes. Visualization is achieved through the application of two complementary procedures. In one, single DNA molecules are attached to a polystyrene bead and are then captured by an optical trap. The DNA, a worm-like coil, is extended either by the force of solution flow in a micro-fabricated channel, or by capturing the opposite DNA end in a second optical trap. In the second procedure, DNA is attached by one end to a glass surface. The coiled DNA is elongated either by continuous solution flow or by subsequently tethering the opposite end to the surface. Protein action is visualized by fluorescent reporters: fluorescent dyes that bind double-stranded DNA (dsDNA), fluorescent biosensors for single-stranded DNA (ssDNA), or fluorescently-tagged proteins. Individual molecules are imaged using either epifluorescence microscopy or total internal reflection fluorescence (TIRF) microscopy. Using these approaches, we imaged the search for DNA sequence homology conducted by the RecA-ssDNA filament. The manner by which RecA protein finds a single homologous sequence in the genome had remained undefined for almost 30 years. Single-molecule imaging revealed that the search occurs through a mechanism termed "intersegmental contact sampling," in which the randomly coiled structure of DNA is essential for reiterative sampling of DNA sequence identity: an example of parallel processing. In addition, the assembly of RecA filaments on single molecules of single-stranded DNA was visualized. Filament assembly requires nucleation of a protein dimer on DNA, and subsequent growth occurs via monomer addition. Furthermore, we discovered a class of proteins that catalyzed both nucleation and growth of filaments, revealing how the cell controls assembly of this protein-DNA complex.
Thege, Fredrik I; Lannin, Timothy B; Saha, Trisha N; Tsai, Shannon; Kochman, Michael L; Hollingsworth, Michael A; Rhim, Andrew D; Kirby, Brian J
2014-05-21
We have developed and optimized a microfluidic device platform for the capture and analysis of circulating pancreatic cells (CPCs) and pancreatic circulating tumor cells (CTCs). Our platform uses parallel anti-EpCAM and cancer-specific mucin 1 (MUC1) immunocapture in a silicon microdevice. Using a combination of anti-EpCAM and anti-MUC1 capture in a single device, we are able to achieve efficient capture while extending immunocapture beyond single-marker recognition. We have also detected a known oncogenic KRAS mutation in cells spiked into whole blood using immunocapture, RNA extraction, RT-PCR, and Sanger sequencing. To allow for downstream single-cell genetic analysis, intact nuclei were released from captured cells by using targeted membrane lysis. We have developed a staining protocol for clinical samples, including standard CTC markers (DAPI, cytokeratin (CK), and CD45) and a novel marker of carcinogenesis in CPCs, mucin 4 (MUC4). We have also demonstrated a semi-automated approach to image analysis and CPC identification, suitable for clinical hypothesis generation. Initial results from immunocapture of a clinical pancreatic cancer patient sample show that parallel capture may capture more of the heterogeneity of the CPC population. With this platform, we aim to develop a diagnostic biomarker for early pancreatic carcinogenesis and patient risk stratification.
Superresolution with the focused plenoptic camera
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew
2011-03-01
Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.
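The per-channel treatment of the color filter array can be sketched as follows, assuming an RGGB Bayer layout; the layout choice and the synthetic mosaic are illustrative assumptions, not the authors' rendering code:

```python
import numpy as np

# Illustrative sketch (assumed RGGB layout, not the paper's renderer):
# treating the Bayer channels of a raw plenoptic capture separately
# preserves the precise sub-pixel sampling that joint demosaicing
# (interpolating colors before superresolution) would resample away.
raw = np.arange(64, dtype=float).reshape(8, 8)  # hypothetical RGGB mosaic
R  = raw[0::2, 0::2]   # red sites
G1 = raw[0::2, 1::2]   # green sites on red rows
G2 = raw[1::2, 0::2]   # green sites on blue rows
B  = raw[1::2, 1::2]   # blue sites
# each channel is a consistent quarter-resolution sampling grid that can be
# superresolved on its own before the color planes are recombined
```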
Al Hares, Ghaith; Eschweiler, Jörg; Radermacher, Klaus
2015-06-01
The development of detailed and specific knowledge of the biomechanical behavior of loaded knee structures has received increased attention in recent years. Stress magnetic resonance imaging techniques have been introduced in previous work to study knee kinematics under load conditions. Previous studies captured the knee movement either in atypically loaded supine positions, or in upright positions with the help of inclined supporting backrests, which are insufficient for movement capture under full-body weight-bearing conditions. In this work, we used a combined magnetic resonance imaging approach for the measurement and assessment of knee kinematics under full-body weight-bearing in single-legged stance. The proposed method is based on registration of high-resolution static magnetic resonance imaging data acquired in the supine position with low-resolution, quasi-static upright magnetic resonance imaging data acquired in loaded positions at different degrees of knee flexion. The proposed method was applied for the measurement of tibiofemoral kinematics in 10 healthy volunteers. The combined magnetic resonance imaging approach allows the non-invasive measurement of knee kinematics in single-legged stance and under physiological loading conditions. We believe that this method can provide enhanced understanding of loaded knee kinematics. © IMechE 2015.
NASA Astrophysics Data System (ADS)
Boaggio, K.; Bandamede, M.; Bancroft, L.; Hurler, K.; Magee, N. B.
2016-12-01
We report on details of continuing instrument development and deployment of a novel balloon-borne device for capturing and characterizing atmospheric ice and aerosol particles, the Ice Cryo Encapsulator by Balloon (ICE-Ball). The device is designed to capture and preserve cirrus ice particles, maintaining them at cold equilibrium temperatures, so that high-altitude particles can be recovered, transferred intact, and then imaged under SEM at an unprecedented resolution (approximately 3 nm maximum resolution). In addition to cirrus ice particles, high-altitude aerosol particles are also captured, imaged, and analyzed for geometry, chemical composition, and activity as ice-nucleating particles. Prototype versions of ICE-Ball have successfully captured and preserved high-altitude ice particles and aerosols, then returned them for recovery, SEM imaging, and analysis. New improvements include (1) the ability to capture particles from multiple narrowly-defined altitudes on a single payload, (2) high-quality measurements of coincident temperature, humidity, and high-resolution video at capture altitude, (3) the ability to capture particles during both ascent and descent, (4) better characterization of particle collection volume and collection efficiency, and (5) improved isolation and characterization of the capture-cell cryo environment. This presentation provides detailed capability specifications for anyone interested in using measurements, collaborating on continued instrument development, or including this instrument in ongoing or future field campaigns.
Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope
Adams, Jesse K.; Boominathan, Vivek; Avants, Benjamin W.; Vercosa, Daniel G.; Ye, Fan; Baraniuk, Richard G.; Robinson, Jacob T.; Veeraraghavan, Ashok
2017-01-01
Modern biology increasingly relies on fluorescence microscopy, which is driving demand for smaller, lighter, and cheaper microscopes. However, traditional microscope architectures suffer from a fundamental trade-off: As lenses become smaller, they must either collect less light or image a smaller field of view. To break this fundamental trade-off between device size and performance, we present a new concept for three-dimensional (3D) fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred micrometers above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world’s tiniest and lightest microscope. FlatScope is a lensless microscope that is scarcely larger than an image sensor (roughly 0.2 g in weight and less than 1 mm thick) and yet able to produce micrometer-resolution, high–frame rate, 3D fluorescence movies covering a total volume of several cubic millimeters. The ability of FlatScope to reconstruct full 3D images from a single frame of captured sensor data allows us to image 3D volumes roughly 40,000 times faster than a laser scanning confocal microscope while providing comparable resolution. We envision that this new flat fluorescence microscopy paradigm will lead to implantable endoscopes that minimize tissue damage, arrays of imagers that cover large areas, and bendable, flexible microscopes that conform to complex topographies. PMID:29226243
Label inspection of approximate cylinder based on adverse cylinder panorama
NASA Astrophysics Data System (ADS)
Lin, Jianping; Liao, Qingmin; He, Bei; Shi, Chenbo
2013-12-01
This paper presents a machine vision system for automated label inspection, with the goal of reducing labor cost and ensuring consistent product quality. Firstly, the images captured by each single camera are distorted, since the inspected object is approximately cylindrical. Therefore, this paper proposes an algorithm based on adverse cylinder projection, in which label images are rectified by distortion compensation. Secondly, to overcome the limited field of view of each single camera, our method combines the images from all single cameras and builds a panorama for label inspection. Thirdly, considering the shake of production lines and errors in the electronic signal, we design a real-time image registration to calculate offsets between the template and the inspected images. Experimental results demonstrate that our system is accurate, real-time, and applicable to numerous real-time inspections of approximately cylindrical objects.
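The cylinder distortion compensation can be illustrated with a simplified geometric sketch; the near-orthographic projection model and the `rectify_columns` helper are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

# Illustrative sketch (geometric assumption, not the paper's exact model):
# under near-orthographic viewing, a point at arc position R*theta on a
# cylinder of radius R projects to image offset x = R*sin(theta). Inverting
# this maps each distorted image column back to its arc length on the label.
def rectify_columns(x, radius):
    x = np.asarray(x, dtype=float)
    return radius * np.arcsin(np.clip(x / radius, -1.0, 1.0))

cols = np.array([0.0, 50.0, 86.6])          # pixel offsets from cylinder axis
arc = rectify_columns(cols, radius=100.0)   # compensated (unrolled) positions
# arc grows faster than cols toward the limb, undoing the label compression
```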
Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image
Wen, Wei; Khatibi, Siamak
2017-01-01
Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem. In these solutions, the fill factor is known. However, the fill factor is kept as an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained; these are the result of fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated by the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly, with significant stability and accuracy, from one single arbitrary image, according to the low standard deviation of the estimated fill factors from each of the images and for each camera. PMID:28335459
Agreement and reading time for differently-priced devices for the digital capture of X-ray films.
Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés
2012-03-01
We assessed the reliability of three digital capture devices: a film digitizer (which cost US $15,000), a flat-bed scanner (US $1800) and a digital camera (US $450). Reliability was measured as the agreement between six observers when reading images acquired from a single device and also in terms of the pair-device agreement. The images were 136 chest X-ray cases. The variables measured were the interstitial opacities distribution, interstitial patterns, nodule size and percentage pneumothorax size. The agreement between the six readers when reading images acquired from a single device was similar for the three devices. The pair-device agreements were moderate for all variables. There were significant differences in reading-time between devices: the mean reading-time for the film digitizer was 93 s, it was 59 s for the flat-bed scanner and 70 s for the digital camera. Despite the differences in their cost, there were no substantial differences in the performance of the three devices.
2017-07-13
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Melas Chasma. Orbit Number: 59750 Latitude: -10.5452 Longitude: 290.307 Instrument: VIS Captured: 2015-06-03 12:33 https://photojournal.jpl.nasa.gov/catalog/PIA21705
2015-08-21
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Melas Chasma. Orbit Number: 10289 Latitude: -9.9472 Longitude: 285.933 Instrument: VIS Captured: 2004-04-09 12:43 http://photojournal.jpl.nasa.gov/catalog/PIA19756
Optical cell monitoring system for underwater targets
NASA Astrophysics Data System (ADS)
Moon, SangJun; Manzur, Fahim; Manzur, Tariq; Demirci, Utkan
2008-10-01
We demonstrate a cell-based detection system that could be used for monitoring an underwater target volume and environment using a microfluidic chip and a charge-coupled device (CCD). This technique allows us to capture specific cells and enumerate them over a large area on a microchip. The microfluidic chip and a lens-less imaging platform were then merged to monitor cell populations and morphologies as a system that may find use in distributed sensor networks. The chip, featuring surface chemistry and automatic cell imaging, was fabricated from a cover glass slide, double-sided adhesive film, and a transparent poly(methyl methacrylate) (PMMA) slab. The optically clear chip allows cells to be detected with a CCD sensor. These chips were fabricated with a laser cutter without the use of photolithography. We utilized CD4+ cells that are captured on the floor of a microfluidic chip owing to the ability to address specific target cells using antibody-antigen binding. Captured CD4+ cells were imaged with a fluorescence microscope to verify the chip's specificity and efficiency. We achieved 70.2 +/- 6.5% capture efficiency and 88.8 +/- 5.4% specificity for CD4+ T lymphocytes (n = 9 devices). Bright-field images of the captured cells in the 24 mm × 4 mm × 50 μm microfluidic chip were obtained with the CCD sensor in one second. We achieved an inexpensive system that rapidly captures cells and images them using a lens-less CCD system. This microfluidic device can be modified for use in single-cell detection utilizing an inexpensive light-emitting diode (LED) chip instead of a wide-range CCD system.
Metasurface optics for full-color computational imaging.
Colburn, Shane; Zhan, Alan; Majumdar, Arka
2018-02-01
Conventional imaging systems comprise large and expensive optical components that successively mitigate aberrations. Metasurface optics offers a route to miniaturize imaging systems by replacing bulky components with flat and compact implementations. The diffractive nature of these devices, however, induces severe chromatic aberrations, and current multiwavelength and narrowband achromatic metasurfaces cannot support full visible spectrum imaging (400 to 700 nm). We combine principles of both computational imaging and metasurface optics to build a system with a single metalens of numerical aperture ~0.45, which generates in-focus images under white light illumination. Our metalens exhibits a spectrally invariant point spread function that enables computational reconstruction of captured images with a single digital filter. This work connects computational imaging and metasurface optics and demonstrates the capabilities of combining these disciplines by simultaneously reducing aberrations and downsizing imaging systems using simpler optics.
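The single-digital-filter reconstruction enabled by a spectrally invariant point spread function can be sketched with a Wiener filter; the abstract does not specify which filter is used, so the Wiener form, the SNR constant, and the box PSF below are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch (assumed filter, not the authors' pipeline): when the
# PSF is the same at every wavelength, one filter built from that PSF can
# restore every captured image.
def wiener_deconvolve(image, psf, snr=100.0):
    H = np.fft.fft2(psf, s=image.shape)
    G = np.conj(H) / (np.abs(H)**2 + 1.0 / snr)  # single filter, built once
    return np.fft.ifft2(np.fft.fft2(image) * G).real

# hypothetical blur: a small box PSF applied to a point source
psf = np.zeros((32, 32))
psf[:3, :3] = 1.0 / 9.0
point = np.zeros((32, 32))
point[16, 16] = 1.0
blurred = np.fft.ifft2(np.fft.fft2(point) * np.fft.fft2(psf)).real
restored = wiener_deconvolve(blurred, psf)
# restored concentrates the spread energy back toward the original point
```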
Single photon detection imaging of Cherenkov light emitted during radiation therapy
NASA Astrophysics Data System (ADS)
Adamson, Philip M.; Andreozzi, Jacqueline M.; LaRochelle, Ethan; Gladstone, David J.; Pogue, Brian W.
2018-03-01
Cherenkov imaging during radiation therapy has been developed as a tool for dosimetry, which could have applications in patient delivery verification or in regular quality audit. The cameras used are intensified imaging sensors, either ICCD or ICMOS cameras, which provide (1) nanosecond time gating and (2) amplification by 10^3-10^4; together these allow (1) real-time capture at 10-30 frames per second, (2) sensitivity at the single-photon-event level, and (3) the ability to suppress background light from the ambient room. However, the capability to achieve single photon imaging has not been fully analyzed to date, and as such was the focus of this study. The ability to quantitatively characterize how a single photon event appears in amplified camera imaging from the Cherenkov images was analyzed with image processing. The signal seen at normal gain levels appears as a blur of about 90 counts in the CCD detector, after going through the chain of photocathode detection, amplification through a microchannel-plate PMT, excitation of a phosphor screen, and imaging onto the CCD. The analysis of single photon events requires careful interpretation of the fixed-pattern noise, statistical quantum noise distributions, and the spatial spread of each pulse through the ICCD.
Single-Image Distance Measurement by a Smart Mobile Device.
Chen, Shangwen; Fang, Xianyong; Shen, Jianbing; Wang, Linbo; Shao, Ling
2017-12-01
Existing distance measurement methods either require multiple images and special photographing poses or only measure height with a special view configuration. We propose a novel image-based method that can measure various types of distance from a single image captured by a smart mobile device. The embedded accelerometer is used to determine the view orientation of the device. Consequently, pixels can be back-projected to the ground, thanks to an efficient calibration method using two known distances. The distance in pixels is then transformed to a real distance in centimeters with a linear model parameterized by the magnification ratio. Various types of distance specified in the image can be computed accordingly. Experimental results demonstrate the effectiveness of the proposed method.
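The back-projection geometry behind such single-image measurement reduces to one triangle in the simplest case; the `ground_distance` helper, the flat-ground assumption, and the parameter values below are hypothetical, not the authors' full calibration:

```python
import math

# Illustrative sketch (simplified model, not the paper's full method): with
# the camera at height h above flat ground, and the accelerometer giving the
# depression angle of a back-projected pixel ray below the horizontal, the
# ground distance to the imaged point follows from a right triangle.
def ground_distance(height_cm, depression_deg):
    return height_cm / math.tan(math.radians(depression_deg))

d = ground_distance(height_cm=150.0, depression_deg=30.0)
# d ≈ 259.8 cm for a phone held 1.5 m high looking 30 degrees below horizontal
```

The paper's two-known-distance calibration and magnification-ratio model refine this basic relation for arbitrary pixels and device tilts.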
Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback
Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie
2017-01-01
An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistance sensors or infrared sensors, the imaging sensor realizes a finer perception of the environmental light and can thus guide more precise lighting control. Before the system operates, a large set of typical lighting images for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets, from which the cluster benchmarks of the objective LEEMs can be obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system operates, it first captures the lighting image using a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or multiple-LEEMs-based control is applied to obtain an optimal lighting effect. Many experimental results have shown that the proposed system can tune the LED lamp automatically according to changes in environmental luminance. PMID:28208781
2016-10-11
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows dust devil tracks (dark blue linear feature) in Terra Cimmeria. Orbit Number: 43463 Latitude: -53.1551 Longitude: 125.069 Instrument: VIS Captured: 2011-10-01 23:55 http://photojournal.jpl.nasa.gov/catalog/PIA21009
2017-06-01
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Russell Crater in Noachis Terra. Orbit Number: 59591 Latitude: -54.471 Longitude: 13.1288 Instrument: VIS Captured: 2015-05-21 10:57 https://photojournal.jpl.nasa.gov/catalog/PIA21674
NASA Astrophysics Data System (ADS)
Liu, Ya-Cheng; Chung, Chien-Kai; Lai, Jyun-Yi; Chang, Han-Chao; Hsu, Feng-Yi
2013-06-01
Upper gastrointestinal endoscopies are primarily performed to observe pathologies of the esophagus, stomach, and duodenum. However, when an endoscope is pushed into the esophagus or stomach by the physician, the organs behave like a balloon being gradually inflated. Consequently, their shapes and the depth of field of the images change continually, preventing thorough examination of the position of inflammation or anabrosis, which delays the curing period. In this study, a 2.9-mm image-capturing module and a convoluted mechanism were incorporated into a tube like that of a standard 10-mm upper gastrointestinal endoscope. The scale-invariant feature transform (SIFT) algorithm was adopted to implement disease feature extraction on a koala doll. Following feature extraction, the smoothly varying affine stitching (SVAS) method was employed to resolve stitching distortion problems. Subsequently, the real-time splice software developed in this study was embedded in an upper gastrointestinal endoscope to obtain a panoramic view of stomach inflammation in the captured images. The results showed that the 2.9-mm image-capturing module can provide approximately 50 verified images in one spin cycle, a viewing angle of 120° can be attained, and less than 10% distortion can be achieved in each image. Therefore, these methods can solve the problems encountered when using a standard 10-mm upper gastrointestinal endoscope with a single camera, such as image distortion and partial inflammation display. The results also showed that the SIFT algorithm provides the highest correct matching rate, and the SVAS method can be employed to resolve the parallax problems caused by stitching together images of different flat surfaces.
Visibility through the gaseous smoke in airborne remote sensing using a DSLR camera
NASA Astrophysics Data System (ADS)
Chabok, Mirahmad; Millington, Andrew; Hacker, Jorg M.; McGrath, Andrew J.
2016-08-01
Visibility and clarity of remotely sensed images acquired by consumer-grade DSLR cameras, mounted on an unmanned aerial vehicle or a manned aircraft, are critical factors in obtaining accurate and detailed information from any area of interest. The presence of substantial haze, fog, or gaseous smoke particles, caused, for example, by an active bushfire at the time of data capture, dramatically reduces image visibility and quality. Although most modern hyperspectral imaging sensors can capture a large number of narrow bands in the shortwave and thermal infrared spectral range, which have the potential to penetrate smoke and haze, the resulting images do not contain sufficient spatial detail to locate important objects or to assist search-and-rescue or similar applications that require high-resolution information. We introduce a new method for penetrating gaseous smoke without compromising spatial resolution, using a single modified DSLR camera in conjunction with image processing techniques that effectively improve the visibility of objects in the captured images. This is achieved by modifying a DSLR camera and adding a custom optical filter so that it captures wavelengths from 480-1200 nm (R, G, and near infrared) instead of the standard RGB bands (400-700 nm). With this modified camera mounted on an aircraft, images were acquired over an area polluted by gaseous smoke from an active bushfire. Data processed with the proposed method show significant visibility improvements compared with other existing solutions.
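The underlying idea, that the NIR band preserves scene contrast through smoke while the visible bands are washed out, can be illustrated with a simple per-pixel blend toward the co-registered NIR channel. The weighting scheme and function name below are assumptions for illustration, not the paper's processing chain.

```python
def nir_blend(visible, nir, alpha=0.6):
    """Blend 2D visible and NIR intensity images (nested lists, 0-255).

    alpha controls how strongly the smoke-penetrating NIR channel
    replaces the smoke-washed visible intensities.
    """
    return [[round((1 - alpha) * v + alpha * n) for v, n in zip(vr, nr)]
            for vr, nr in zip(visible, nir)]

vis = [[100, 100], [100, 100]]   # visible band: flattened by smoke
nir = [[40, 200], [200, 40]]     # NIR band: scene contrast survives
print(nir_blend(vis, nir))       # [[64, 160], [160, 64]]
```

After the blend, contrast from the NIR channel re-emerges in the fused image while the spatial resolution of the DSLR sensor is retained.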
A quartz nanopillar hemocytometer for high-yield separation and counting of CD4+ T lymphocytes
NASA Astrophysics Data System (ADS)
Kim, Dong-Joo; Seol, Jin-Kyeong; Wu, Yu; Ji, Seungmuk; Kim, Gil-Sung; Hyung, Jung-Hwan; Lee, Seung-Yong; Lim, Hyuneui; Fan, Rong; Lee, Sang-Kwon
2012-03-01
We report the development of a novel quartz nanopillar (QNP) array cell separation system capable of selectively capturing and isolating a single cell population including primary CD4+ T lymphocytes from the whole pool of splenocytes. Integrated with a photolithographically patterned hemocytometer structure, the streptavidin (STR)-functionalized-QNP (STR-QNP) arrays allow for direct quantitation of captured cells using high content imaging. This technology exhibits an excellent separation yield (efficiency) of ~95.3 ± 1.1% for the CD4+ T lymphocytes from the mouse splenocyte suspensions and good linear response for quantitating captured CD4+ T-lymphoblasts, which is comparable to flow cytometry and outperforms any non-nanostructured surface capture techniques, i.e. cell panning. This nanopillar hemocytometer represents a simple, yet efficient cell capture and counting technology and may find immediate applications for diagnosis and immune monitoring in the point-of-care setting.
Electronic supplementary information (ESI) available. See DOI: 10.1039/c2nr11338d
Human iris three-dimensional imaging at micron resolution by a micro-plenoptic camera
Chen, Hao; Woodward, Maria A.; Burke, David T.; Jeganathan, V. Swetha E.; Demirci, Hakan; Sick, Volker
2017-01-01
A micro-plenoptic system was designed to capture the three-dimensional (3D) topography of the anterior iris surface by simple single-shot imaging. Within a depth of field of 2.4 mm, a depth resolution of 10 µm can be achieved with accuracy (systematic errors) and precision (random errors) below 20%. We demonstrated the application of our micro-plenoptic imaging system on two healthy irides, an iris with naevi, and an iris with melanoma. The ridges and folds on the healthy irides, with height differences of 10-80 µm, were effectively captured. The front surface of the iris naevi was flat, and the iris melanoma was 50 ± 10 µm higher than the surrounding iris. The micro-plenoptic imaging system has great potential for iris disease diagnosis and continued, simple monitoring. PMID:29082081
Depth measurements through controlled aberrations of projected patterns.
Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim
2012-03-12
Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments, without major modifications to current cameras, remains uncommon. Our goal is a simple modification to a conventional camera that allows three-dimensional reconstruction. We require such an imaging system to have coincident imaging and illumination paths, and we require that any three-dimensional modification to a camera also permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. The astigmatic projected pattern creates two different focus depths for the horizontal and vertical features of the pattern, thereby encoding depth. Post-processing designed around the projected pattern and optical system exploits this differential focus: the distance of an object at a particular transverse position from the camera is correlated to ratios of particular wavelet coefficients. We present details of the construction, calibration, and images produced by this system, and discuss how the projected pattern design and the image processing algorithms are linked.
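The depth cue described above can be sketched numerically: because the astigmatic projector focuses horizontal and vertical pattern features at different depths, the ratio of horizontal to vertical detail energy in an image patch varies with distance. Here simple first differences stand in for the paper's wavelet coefficients; this is a hedged illustration, not the authors' algorithm.

```python
def detail_energies(img):
    """Return (horizontal, vertical) first-difference energies of a 2D list."""
    # differences along rows respond to left-right (vertical-edge) detail
    h = sum((row[i + 1] - row[i]) ** 2
            for row in img for i in range(len(row) - 1))
    # differences down columns respond to top-bottom (horizontal-edge) detail
    v = sum((img[r + 1][c] - img[r][c]) ** 2
            for r in range(len(img) - 1) for c in range(len(img[0])))
    return h, v

# A patch whose intensity varies only left-to-right: row-wise detail dominates,
# hinting that the vertically oriented pattern features are in focus here.
patch = [[0, 10, 0, 10]] * 4
h, v = detail_energies(patch)
print(h, v)  # 1200 0
```

In the actual system this ratio, computed from wavelet coefficients at each transverse position, is calibrated against known object distances.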
Video Imaging System Particularly Suited for Dynamic Gear Inspection
NASA Technical Reports Server (NTRS)
Broughton, Howard (Inventor)
1999-01-01
A digital video imaging system that captures the image of a single tooth of interest on a rotating gear is disclosed. The system detects the complete rotation of the gear and divides that rotation into discrete time intervals, so that the precise instant at which each tooth of interest reaches a desired location is known; at that instant the tooth is illuminated in unison with a digital video camera so as to record a single digital image for each tooth. The digital images are available for instantaneous analysis of the tooth of interest, or can be stored to provide a history that may be used to predict gear failure, such as gear fatigue. The imaging system is completely automated by a controlling program so that it may run for several days acquiring images without supervision from the user.
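The interval division described above reduces to simple timing arithmetic: given the gear speed and tooth count, each tooth reaches the illuminated position at a fixed offset after an index pulse. The function below is a hedged sketch of that computation (names and parameters are illustrative; the patent's controller logic is not reproduced).

```python
def tooth_trigger_times(rpm, n_teeth):
    """Seconds after an index pulse at which each tooth reaches the camera."""
    period = 60.0 / rpm          # seconds per full revolution
    dt = period / n_teeth        # seconds allotted to each tooth
    return [i * dt for i in range(n_teeth)]

times = tooth_trigger_times(rpm=60, n_teeth=20)
print(times[:3])  # [0.0, 0.05, 0.1] -> one strobe/exposure every 50 ms
```

A controller would fire the strobe and camera shutter at each of these offsets, re-synchronizing on every revolution to track speed drift.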
Combining fluorescence imaging with Hi-C to study 3D genome architecture of the same single cell.
Lando, David; Basu, Srinjan; Stevens, Tim J; Riddell, Andy; Wohlfahrt, Kai J; Cao, Yang; Boucher, Wayne; Leeb, Martin; Atkinson, Liam P; Lee, Steven F; Hendrich, Brian; Klenerman, Dave; Laue, Ernest D
2018-05-01
Fluorescence imaging and chromosome conformation capture assays such as Hi-C are key tools for studying genome organization. However, traditionally, they have been carried out independently, making integration of the two types of data difficult to perform. By trapping individual cell nuclei inside a well of a 384-well glass-bottom plate with an agarose pad, we have established a protocol that allows both fluorescence imaging and Hi-C processing to be carried out on the same single cell. The protocol identifies 30,000-100,000 chromosome contacts per single haploid genome in parallel with fluorescence images. Contacts can be used to calculate intact genome structures to better than 100-kb resolution, which can then be directly compared with the images. Preparation of 20 single-cell Hi-C libraries using this protocol takes 5 d of bench work by researchers experienced in molecular biology techniques. Image acquisition and analysis require basic understanding of fluorescence microscopy, and some bioinformatics knowledge is required to run the sequence-processing tools described here.
Russell Crater Dunes - False Color
2017-07-07
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of the large dune form on the floor of Russell Crater. Orbit Number: 59672 Latitude: -54.337 Longitude: 13.1087 Instrument: VIS Captured: 2015-05-28 02:39 https://photojournal.jpl.nasa.gov/catalog/PIA21701
Zhang, Quan Bin; Sun, Jing Ping; Gao, Rui Feng; Lee, Alex Pui-Wai; Feng, Yan Lin; Liu, Xiao Rong; Sheng, Wei; Liu, Feng; Yang, Xing Sheng; Fang, Fang; Yu, Cheuk-Man
2013-10-09
The lack of an accurate noninvasive method for assessing right ventricular (RV) volume and function has been a major deficiency of two-dimensional (2D) echocardiography. The aim of our study was to test the feasibility of single-beat full-volume capture with a real-time three-dimensional echocardiography (3DE) imaging system for the evaluation of RV volumes and function, validated against cardiac magnetic resonance imaging (CMRI). Sixty-one subjects (16 normal subjects, 20 patients with hypertension, 16 with pulmonary heart disease, and 9 with coronary heart disease) were studied. RV volume and function assessments using 3DE were compared with manual tracing with CMRI as the reference method. Fifty-nine of the 61 patients (96.7%; 36 male; mean age 62 ± 15 years) had adequate three-dimensional echocardiographic data sets for analysis. The mean RV end-diastolic volume (EDV) was 105 ± 38 ml, end-systolic volume (ESV) was 60 ± 30 ml, and RV ejection fraction (EF) was 44 ± 11% by CMRI, versus an EDV of 103 ± 38 ml, ESV of 60 ± 28 ml, and RV EF of 41 ± 13% by 3DE. The correlations and agreements between measurements from the two methods were acceptable. RV volumes and function can thus be analyzed with 3DE software in most subjects with or without heart disease, indicating that single-beat full-volume capture with real-time 3DE is a feasible alternative to CMRI. © 2013.
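The ejection fraction quoted above follows from the standard volumetric formula (this is textbook arithmetic, not anything specific to this study):

```python
def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction in percent from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Using the mean CMRI volumes reported in the abstract:
print(round(ejection_fraction(105, 60), 1))  # 42.9
```

The 42.9% computed from the reported mean volumes sits within the stated 44 ± 11% CMRI range (means of ratios need not equal the ratio of means).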
Identifying and Overcoming Obstacles to Point-of-Care Data Collection for Eye Care Professionals
Lobach, David F.; Silvey, Garry M.; Macri, Jennifer M.; Hunt, Megan; Kacmaz, Roje O.; Lee, Paul P.
2005-01-01
Supporting data entry by clinicians is considered one of the greatest challenges in implementing electronic health records. In this paper we describe a formative evaluation study, using three different methodologies, through which we identified obstacles to point-of-care data entry for eye care and then used the formative process to develop and test solutions to overcome these obstacles. The greatest obstacles were supporting free-text annotation of clinical observations and accommodating the creation of detailed diagrams in multiple colors. To support free-text entry, we arrived at an approach that captures an image of a free-text note and associates this image with related data elements in an encounter note. The detailed diagrams included a color palette that allowed changing pen color with a single stroke, and the diagrams were likewise captured as images associated with related data elements. During observed sessions with simulated patients, these approaches satisfied the clinicians' documentation needs by capturing the full range of clinical complexity that arises in practice. PMID:16779083
The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications
Park, Keunyeol; Song, Minkyu
2018-01-01
This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. To recognize an iris image, an image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital conversion of multi-bit data through the analog-to-digital converter (ADC) in the CIS. To reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit, edge-detected image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² in a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency. PMID:29495273
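The XOR edge-detection idea is easy to see in software: after thresholding to single-bit pixels, XOR-ing each pixel with its right neighbour marks transitions. The sensor performs this in hardware with an XOR gate; the sketch below is only an illustrative software analogue of that operation.

```python
def xor_edges(bits):
    """Edge map of a 2D single-bit image (nested lists of 0/1).

    A 1 in the output marks a 0->1 or 1->0 transition between
    horizontally adjacent pixels, i.e. a vertical edge.
    """
    return [[row[i] ^ row[i + 1] for i in range(len(row) - 1)]
            for row in bits]

frame = [[0, 0, 1, 1],
         [1, 1, 1, 0]]
print(xor_edges(frame))  # [[0, 1, 0], [0, 0, 1]]
```

Because only single-bit edge data leaves the array, the multi-bit ADC conversion step, and the frame-rate penalty it carries, is avoided.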
Zhu, Zhonglin; Li, Guoan
2013-01-01
Fluoroscopic imaging, using either a single image or dual images, has been widely applied to measure in vivo human knee joint kinematics. However, few studies have compared the advantages of single versus dual fluoroscopic images. Furthermore, due to the size limitation of the image intensifiers, it is possible that only a portion of the knee joint is captured by the fluoroscope during dynamic knee joint motion. In this paper, we present a systematic evaluation of an automatic 2D-3D image matching method for reproducing spatial knee joint positions using either single or dual fluoroscopic image techniques. The data indicated that, for the femur and tibia, spatial positions could be determined with an accuracy and precision better than 0.2 mm in translation and 0.4° in orientation when dual fluoroscopic images were used. Using single fluoroscopic images, the method produced satisfactory accuracy for joint positions in the imaging plane (on average up to 0.5 mm in translation and 1.3° in rotation), but large variations along the out-of-plane direction (on average up to 4.0 mm in translation and 2.2° in rotation). The precision of single fluoroscopic images in determining actual knee positions was worse than the corresponding accuracy. The data also indicated that, when using the dual fluoroscopic image technique, even if the knee joint outlines in one image were incomplete by 80%, the algorithm could still reproduce the joint positions with high precision. PMID:21806411
Kernel-aligned multi-view canonical correlation analysis for image recognition
NASA Astrophysics Data System (ADS)
Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao
2016-09-01
Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a two-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have good discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets demonstrate the effectiveness of our proposed method.
2015-10-08
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of the floor of Melas Chasma. The dark blue region in this false color image is sand dunes. Orbit Number: 12061 Latitude: -12.2215 Longitude: 289.105 Instrument: VIS Captured: 2004-09-02 10:11 http://photojournal.jpl.nasa.gov/catalog/PIA19793
NASA Astrophysics Data System (ADS)
Tseng, Yolanda D.; Wootton, Landon; Nyflot, Matthew; Apisarnthanarax, Smith; Rengan, Ramesh; Bloch, Charles; Sandison, George; St. James, Sara
2018-01-01
Four-dimensional computed tomography (4DCT) scans are routinely used in radiation therapy to determine the internal treatment volume for targets that are moving (e.g., lung tumors). These studies allow clinicians to create target volumes based upon the motion of the tumor during the imaging study. The purpose of this work is to determine whether a target volume based on a single 4DCT scan at simulation is sufficient to capture thoracic motion. Phantom studies were performed to determine expected differences between volumes contoured on 4DCT scans and those on evaluation CT scans (slow scans). Evaluation CT scans acquired during treatment of 11 patients were compared to the 4DCT scans used for treatment planning. The images were assessed to determine whether the target remained within the target volume determined from the first 4DCT scan. A total of 55 slow scans were compared to the 11 planning 4DCT scans. Small differences were observed in phantom between the 4DCT volumes and the slow-scan volumes, with a maximum of 2.9%, which can be attributed to minor differences in contouring and to the ability of the 4DCT scan to adequately capture motion at the apex and base of the motion trajectory. Larger differences were observed in the patients studied, up to a maximum volume difference of 33.4%. These results demonstrate that a single 4DCT scan is not adequate to capture all thoracic motion throughout treatment.
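The percentage figures above are simple relative-volume comparisons; the sketch below shows the obvious form of that metric (the function name and the convention of normalizing by the planning volume are assumptions for illustration, since the abstract does not state the exact definition).

```python
def percent_volume_difference(planning_cc, observed_cc):
    """Unsigned difference of an observed volume from the planning volume, in %."""
    return 100.0 * abs(observed_cc - planning_cc) / planning_cc

# Illustrative values only: a 2.9% phantom-level discrepancy vs. a 33.4%
# patient-level discrepancy, mirroring the magnitudes reported above.
print(round(percent_volume_difference(100.0, 102.9), 1))  # 2.9
print(round(percent_volume_difference(100.0, 66.6), 1))   # 33.4
```

Tracking this number across the 55 slow scans is what reveals that a single simulation 4DCT under-represents the motion seen later in treatment.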
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hua, Xin; Szymanski, Craig; Wang, Zhaoying
2016-01-01
Chemical imaging of single cells is important in capturing biological dynamics. Single-cell correlative imaging is realized between structured illumination microscopy (SIM) and time-of-flight secondary ion mass spectrometry (ToF-SIMS) using the System for Analysis at the Liquid Vacuum Interface (SALVI), a multimodal microreactor. SIM characterized cells and guided subsequent ToF-SIMS analysis. Dynamic ToF-SIMS provided time- and space-resolved molecular mapping of cells. Lipid fragments were identified in the hydrated cell membrane. Principal component analysis was used to elucidate chemical component differences among mouse lung cells that took up zinc oxide nanoparticles. Our results provide submicron chemical spatial mapping for investigations of cell dynamics at the molecular level.
Quantitative single-molecule imaging by confocal laser scanning microscopy.
Vukojevic, Vladana; Heidkamp, Marcus; Ming, Yu; Johansson, Björn; Terenius, Lars; Rigler, Rudolf
2008-11-25
A new approach to quantitative single-molecule imaging by confocal laser scanning microscopy (CLSM) is presented. It relies on fluorescence intensity distribution to analyze the molecular occurrence statistics captured by digital imaging and enables direct determination of the number of fluorescent molecules and their diffusion rates without resorting to temporal or spatial autocorrelation analyses. Digital images of fluorescent molecules were recorded by using fast scanning and avalanche photodiode detectors. In this way the signal-to-background ratio was significantly improved, enabling direct quantitative imaging by CLSM. The potential of the proposed approach is demonstrated by using standard solutions of fluorescent dyes, fluorescently labeled DNA molecules, quantum dots, and the Enhanced Green Fluorescent Protein in solution and in live cells. The method was verified by using fluorescence correlation spectroscopy. The relevance for biological applications, in particular, for live cell imaging, is discussed.
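The simplest reading of counting molecules from an intensity image is that, given a calibrated mean brightness per fluorophore, the expected molecule number is intensity divided by that brightness. The sketch below shows only this naive estimator as an assumption for illustration; the paper's actual analysis of the fluorescence intensity distribution is more sophisticated.

```python
def estimate_molecule_count(intensities, brightness_per_molecule):
    """Naive total molecule estimate over all pixels.

    intensities: background-subtracted photon counts per pixel.
    brightness_per_molecule: calibrated mean counts from one fluorophore.
    """
    return sum(intensities) / brightness_per_molecule

pixels = [120, 80, 0, 40]  # illustrative photon counts
print(estimate_molecule_count(pixels, brightness_per_molecule=40.0))  # 6.0
```

The avalanche-photodiode detection described above matters here: the improved signal-to-background ratio is what makes per-pixel counts interpretable in terms of molecule numbers at all.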
Járvás, Gábor; Varga, Tamás; Szigeti, Márton; Hajba, László; Fürjes, Péter; Rajta, István; Guttman, András
2018-02-01
As a continuation of our previously published work, this paper presents a detailed evaluation of a microfabricated cell capture device utilizing a doubly tilted micropillar array. The device was fabricated using a novel hybrid technology based on the combination of proton beam writing and conventional lithography techniques. Tilted pillars offer unique flow characteristics and support enhanced fluidic interaction for improved immunoaffinity-based cell capture. The performance of the microdevice was evaluated by an in-house-developed single-cell tracking system based on image sequence analysis. Individual cell tracking allowed in-depth analysis of the cell-chip surface interaction mechanism from a hydrodynamic point of view. Simulation results were validated using the hybrid device and the optimized surface functionalization procedure. Finally, the cell capture capability of this new-generation microdevice was demonstrated by efficiently arresting cells from a HT29 cell-line suspension. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Dual-slit confocal light sheet microscopy for in vivo whole-brain imaging of zebrafish
Yang, Zhe; Mei, Li; Xia, Fei; Luo, Qingming; Fu, Ling; Gong, Hui
2015-01-01
In vivo functional imaging at single-neuron resolution is an important approach to visualizing biological processes in neuroscience. Light sheet microscopy (LSM) is a cutting-edge in vivo imaging technique that provides micron-scale spatial resolution at high frame rate. Due to the scattering and absorption of tissue, however, conventional LSM is inadequate to resolve cells because of the attenuated signal-to-noise ratio (SNR). Using dual-beam illumination and confocal dual-slit detection, a dual-slit confocal LSM is demonstrated here that obtains SNR-enhanced images at a frame rate twice as high as that of the line confocal LSM method. Through theoretical calculations and experiments, the correlation between the slit width and SNR was determined to optimize image quality. In vivo whole-brain structural imaging stacks and functional imaging sequences of single slices were obtained for analysis of calcium activities at single-cell resolution. A two-fold increase in imaging speed over conventional confocal LSM makes it possible to capture sequences of neuronal activity and helps reveal potential functional connections in the whole zebrafish brain. PMID:26137381
Simultaneous optical and electrical recording of a single ion-channel.
Ide, Toru; Takeuchi, Yuko; Aoki, Takaaki; Yanagida, Toshio
2002-10-01
In recent years, the single-molecule imaging technique has proven to be a valuable tool in solving many basic problems in biophysics. Techniques for measuring single-molecule function were initially developed to study the electrophysiological properties of channel proteins; however, the technology to visualize single channels at work has not received as much attention. In this study, we have, for the first time, simultaneously measured the optical and electrical properties of single-channel proteins. The large-conductance calcium-activated potassium channel (BK channel), labeled with fluorescent dye molecules, was incorporated into a planar bilayer membrane, and the fluorescence image was captured with a total internal reflection fluorescence microscope simultaneously with single-channel current recording. This innovative technology will greatly advance the study of channel proteins as well as of signal transduction processes that involve ion permeation.
A design of a high speed dual spectrometer by single line scan camera
NASA Astrophysics Data System (ADS)
Palawong, Kunakorn; Meemon, Panomsak
2018-03-01
A spectrometer that can capture two orthogonal polarization components of a light beam is in demand for polarization-sensitive imaging systems. Here, we describe the design and implementation of a high-speed spectrometer for simultaneous capture of the two orthogonal polarization components, i.e. the vertical and horizontal components, of a light beam. The design consists of a polarization beam splitter, two polarization-maintaining optical fibers, two collimators, a single line-scan camera, a focusing lens, and a reflection blazed grating. The two beam paths were aligned to be symmetrically incident on the blazed and reverse-blazed sides of the reflection grating, respectively. The two diffracted beams pass through the same focusing lens and are focused on the single line-scan sensor of a CMOS camera, with each of the two orthogonal-polarization spectra imaged onto 1000 pixels. With the proposed setup, the amplitude and shape of the two detected spectra can be controlled by rotating the collimators. The technique for optical alignment of the spectrometer is presented and discussed. The two orthogonal polarization spectra can be captured simultaneously at a speed of 70,000 spectra per second. This high-speed dual spectrometer simultaneously detects two orthogonal polarizations, an important component for the development of polarization-sensitive optical coherence tomography. The performance of the spectrometer has been measured and analyzed.
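Since both spectra land on one line-scan sensor, each readout frame must be split into the two 1000-pixel spectra described above. The sketch below shows that readout split; the exact pixel offsets and any guard gap between the spectra are assumptions for illustration.

```python
def split_spectra(line, n=1000, gap=0):
    """Return (first, second) polarization spectra from one line-scan readout.

    line: the full 1D pixel readout of the camera.
    n:    pixels per spectrum (1000 in the design above).
    gap:  unused guard pixels between the two spectra, if any.
    """
    return line[:n], line[n + gap:2 * n + gap]

frame = list(range(2000))          # stand-in for one camera line
s_pol, p_pol = split_spectra(frame)
print(len(s_pol), len(p_pol))      # 1000 1000
```

At 70,000 lines per second this split is the per-frame bookkeeping step that turns one camera stream into two synchronized polarization channels.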
Image-Based Reverse Engineering and Visual Prototyping of Woven Cloth.
Schroder, Kai; Zinke, Arno; Klein, Reinhard
2015-02-01
Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how best to represent and capture cloth models, specifically for computer-aided design of cloth. Previous methods produce highly realistic images; however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation, and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.
Correlation Plenoptic Imaging.
D'Angelo, Milena; Pepe, Francesco V; Garuccio, Augusto; Scarcelli, Giuliano
2016-06-03
Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. However, in standard plenoptic imaging systems, the maximum spatial and angular resolutions are fundamentally linked; thereby, the maximum achievable depth of field is inversely proportional to the spatial resolution. We propose to take advantage of the second-order correlation properties of light to overcome this fundamental limitation. In this Letter, we demonstrate that the correlation in both momentum and position of chaotic light leads to the enhanced refocusing power of correlation plenoptic imaging with respect to standard plenoptic imaging.
Danly, C R; Day, T H; Fittinghoff, D N; Herrmann, H; Izumi, N; Kim, Y H; Martinez, J I; Merrill, F E; Schmidt, D W; Simpson, R A; Volegov, P L; Wilde, C H
2015-04-01
Neutron and x-ray imaging provide critical information about the geometry and hydrodynamics of inertial confinement fusion implosions. However, existing diagnostics at Omega and the National Ignition Facility (NIF) cannot produce images in both neutrons and x-rays along the same line of sight. This leads to difficulty comparing these images, which capture different parts of the plasma geometry, for the asymmetric implosions seen in present experiments. Further, even when opposing port neutron and x-ray images are available, they use different detectors and cannot provide positive information about the relative positions of the neutron and x-ray sources. A technique has been demonstrated on implosions at Omega that can capture x-ray images along the same line of sight as the neutron images. The technique is described, and data from a set of experiments are presented, along with a discussion of techniques for coregistration of the various images. It is concluded that the technique is viable and could provide valuable information if implemented on NIF in the near future.
Chang, Sung-A; Kim, Hyung-Kwan; Lee, Sang-Chol; Kim, Eun-Young; Hahm, Seung-Hee; Kwon, Oh Min; Park, Seung Woo; Choe, Yeon Hyeon; Oh, Jae K
2013-04-01
Left ventricular (LV) mass is an important prognostic indicator in hypertrophic cardiomyopathy. Although LV mass can be easily calculated using conventional echocardiography, it is based on geometric assumptions and has inherent limitations in asymmetric left ventricles. Real-time three-dimensional echocardiographic (RT3DE) imaging with single-beat capture provides an opportunity for the accurate estimation of LV mass. The aim of this study was to validate this new technique for LV mass measurement in patients with hypertrophic cardiomyopathy. Sixty-nine patients with adequate two-dimensional (2D) and three-dimensional echocardiographic image quality underwent cardiac magnetic resonance (CMR) imaging and echocardiography on the same day. Real-time three-dimensional echocardiographic images were acquired using an Acuson SC2000 system, and CMR-determined LV mass was considered the reference standard. Left ventricular mass was derived using the formula of the American Society of Echocardiography (M-mode mass), the 2D-based truncated ellipsoid method (2D mass), and the RT3DE technique (RT3DE mass). The mean time for RT3DE analysis was 5.85 ± 1.81 min. Intraclass correlation analysis showed a close relationship between RT3DE and CMR LV mass (r = 0.86, P < .0001). However, LV mass by the M-mode or 2D technique showed smaller intraclass correlation coefficients with CMR-determined mass (r = 0.48, P = .01, and r = 0.71, P < .001, respectively). Bland-Altman analysis showed reasonable limits of agreement between LV mass by RT3DE imaging and by CMR, with a smaller positive bias (19.5 g [9.1%]) than the M-mode and 2D methods (-35.1 g [-20.2%] and 30.6 g [17.6%], respectively). RT3DE measurement of LV mass using the single-beat capture technique is practical and more accurate than 2D or M-mode estimation in patients with hypertrophic cardiomyopathy.
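The "formula of the American Society of Echocardiography" referenced above is the standard linear cube formula with the Devereux correction; a sketch, using illustrative (hypothetical) end-diastolic measurements in cm:

```python
def lv_mass_ase(ivsd_cm, lvidd_cm, pwtd_cm):
    """ASE-recommended (Devereux-corrected) cube-formula LV mass in grams,
    from end-diastolic septal thickness, cavity diameter, and posterior
    wall thickness, all in cm."""
    return 0.8 * 1.04 * ((ivsd_cm + lvidd_cm + pwtd_cm) ** 3 - lvidd_cm ** 3) + 0.6

# illustrative values (not from the study): 1.0 cm walls, 5.0 cm cavity
print(round(lv_mass_ase(1.0, 5.0, 1.0), 1))  # -> 182.0
```

The cube terms reflect the prolate-ellipsoid geometric assumption the abstract mentions, which is exactly what breaks down in asymmetric ventricles.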
NASA Astrophysics Data System (ADS)
Yu, Wei; Tian, Xiaolin; He, Xiaoliang; Song, Xiaojun; Xue, Liang; Liu, Cheng; Wang, Shouyu
2016-08-01
Microscopy based on the transport of intensity equation provides quantitative phase distributions, which opens another perspective for cellular observation. However, it requires multi-focal image capture, and the mechanical and electrical scanning involved limits its real-time capability in sample detection. Here, in order to break through this restriction, real-time quantitative phase microscopy based on a single-shot transport of intensity equation method is proposed. A programmed phase mask is designed to realize simultaneous multi-focal image recording without any scanning; thus, phase distributions can be quantitatively retrieved in real time. It is believed the proposed method can be applied in various biological and medical applications, especially live cell imaging.
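For a uniform intensity I0, the transport of intensity equation reduces to a Poisson equation, ∇²φ = -(k/I0) ∂I/∂z, which can be inverted with FFTs (Teague's approach). The sketch below is a generic round trip on synthetic data under that uniform-intensity assumption, not the authors' phase-mask implementation.

```python
import numpy as np

def tie_phase(dIdz, I0, wavelength, pixel=1.0):
    """Recover phase from the axial intensity derivative via the transport
    of intensity equation, assuming uniform intensity I0 (FFT Poisson solve)."""
    n = dIdz.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=pixel) * 2 * np.pi
    kx, ky = np.meshgrid(fx, fx, indexing="ij")
    lap = -(kx**2 + ky**2)
    lap[0, 0] = 1.0                      # avoid divide-by-zero at DC
    rhs = -k * dIdz / I0                 # uniform-intensity TIE: lap(phi) = rhs
    phi_hat = np.fft.fft2(rhs) / lap
    phi_hat[0, 0] = 0.0                  # phase is defined up to a constant
    return np.fft.ifft2(phi_hat).real

# round trip: synthesize dI/dz from a known zero-mean phase and recover it
n, lam, I0 = 64, 0.5, 1.0
x = np.linspace(-1, 1, n)
phi = np.exp(-(x[:, None]**2 + x[None, :]**2) / 0.1)
phi -= phi.mean()
fx = np.fft.fftfreq(n) * 2 * np.pi
kx, ky = np.meshgrid(fx, fx, indexing="ij")
lap_phi = np.fft.ifft2(-(kx**2 + ky**2) * np.fft.fft2(phi)).real
dIdz = -(I0 / (2 * np.pi / lam)) * lap_phi
print(np.allclose(tie_phase(dIdz, I0, lam), phi, atol=1e-8))  # -> True
```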
Real-time computer treatment of THz passive device images with the high image quality
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2012-06-01
We demonstrate real-time computer code that significantly improves the quality of images captured by passive THz imaging systems. The code is not designed solely for passive THz devices: it can be applied to any such device, and to active THz imaging systems as well. We applied our code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that processing images produced by different companies usually requires different spatial filters. The performance of the current version of the code exceeds one image per second for a THz image with more than 5000 pixels and 24-bit number representation. Processing a single THz image produces about 20 output images simultaneously, corresponding to various spatial filters. The code allows the number of pixels in processed images to be increased without noticeable reduction of image quality, and its performance can be increased many times by using parallel image-processing algorithms. We develop original spatial filters that allow one to see objects smaller than 2 cm in imagery produced by passive THz devices capturing objects hidden under opaque clothes. For images with high noise, we develop an approach that suppresses the noise during computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of liquid explosive, ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and are a very promising solution for the security problem.
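The paper does not disclose its spatial filters; as a generic example of the kind of spatial filtering commonly used to suppress impulsive noise in low-quality thermal/THz imagery, here is a plain 3×3 median filter sketch:

```python
import numpy as np

def median3x3(img):
    """3x3 median filter with edge replication, a typical spatial filter
    for suppressing impulsive (salt-and-pepper) noise."""
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0)

# synthetic check: 5% salt noise on a flat image
rng = np.random.default_rng(1)
clean = np.full((32, 32), 0.5)
noisy = clean.copy()
noisy[rng.random(clean.shape) < 0.05] = 1.0
err_before = np.abs(noisy - clean).mean()
err_after = np.abs(median3x3(noisy) - clean).mean()
print(err_after < err_before)  # -> True
```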
Bio-inspired color image enhancement
NASA Astrophysics Data System (ADS)
Meylan, Laurence; Susstrunk, Sabine
2004-06-01
Capturing and rendering an image that fulfills the observer's expectations is a difficult task. This is because the signal reaching the eye is processed by a complex mechanism before forming a percept, whereas a capturing device only retains the physical value of light intensities. It is especially difficult to render complex scenes with highly varying luminances. For example, a picture taken inside a room where objects are visible through the windows will not be rendered correctly by a global technique: either details in the dim room will be hidden in shadow, or the objects viewed through the window will be too bright. The image has to be treated locally to resemble more closely what the observer remembers. The purpose of this work is to develop a technique for rendering images based on human local adaptation. We take inspiration from a model of color vision called Retinex, which determines the perceived color given the spatial relationships of the captured signals. Retinex has been used as a computational model for image rendering. In this article, we propose a new solution inspired by Retinex that is based on a single filter applied to the luminance channel. All parameters are image-dependent, so the process requires no parameter tuning, which makes the method more flexible than existing ones. The presented results show that our method suitably enhances high dynamic range images.
Application of whole slide image markup and annotation for pathologist knowledge capture.
Campbell, Walter S; Foster, Kirk W; Hinrichs, Steven H
2013-01-01
The ability to transfer image markup and annotation data from one scanned image of a slide to a newly acquired image of the same slide within a single vendor platform was investigated. The goal was to study the ability to use image markup and annotation data files as a mechanism to capture and retain pathologist knowledge without retaining the entire whole slide image (WSI) file. Accepted mathematical principles were investigated as a method to overcome variations in scans of the same glass slide and to accurately associate image markup and annotation data across different WSI of the same glass slide. Trilateration was used to link fixed points within the image and slide to the placement of markups and annotations of the image in a metadata file. Variation in markup and annotation placement between WSI of the same glass slide was reduced from over 80 μm to less than 4 μm in the x-axis and from 17 μm to 6 μm in the y-axis (P < 0.025). This methodology allows for the creation of a highly reproducible image library of histopathology images and interpretations for educational and research use.
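A minimal sketch of the trilateration step: given three non-collinear fiducial points with known coordinates and the measured distances from a markup point to each, the point's coordinates follow from a 2×2 linear solve. The fiducial and markup coordinates here are hypothetical.

```python
def trilaterate(anchors, dists):
    """Locate a 2-D point from its distances to three non-collinear fiducial
    points (classic trilateration via a 2x2 linear solve)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    # subtracting the circle equations pairwise yields two linear equations
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# recover a markup point from its distances to three slide fiducials
anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
point = (30.0, 40.0)
dists = [((point[0] - x)**2 + (point[1] - y)**2) ** 0.5 for x, y in anchors]
x, y = trilaterate(anchors, dists)
print(round(x, 6), round(y, 6))  # -> 30.0 40.0
```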
Performance characterization of structured light-based fingerprint scanner
NASA Astrophysics Data System (ADS)
Hassebrook, Laurence G.; Wang, Minghao; Daley, Raymond C.
2013-05-01
Our group believes that fingerprint capture technology is in transition toward 3-D non-contact capture. More specifically, we believe that systems based on structured light illumination provide the highest level of depth measurement accuracy. However, for these new technologies to be fully accepted by the biometric community, they must be compliant with federal standards of performance. At present, these standards do not exist for this new biometric technology. We propose and define a set of test procedures to be used to verify compliance with the Federal Bureau of Investigation's image quality specification for Personal Identity Verification single fingerprint capture devices. The proposed test procedures include: geometric accuracy, lateral resolution based on intensity or depth, gray level uniformity, and flattened fingerprint image quality. Several 2-D contact analogies, performance tradeoffs, and optimization dilemmas are evaluated, and proposed solutions are presented.
NASA Astrophysics Data System (ADS)
Li, Qingyun; Karnowski, Karol; Villiger, Martin; Sampson, David D.
2017-04-01
A fibre-based full-range polarisation-sensitive optical coherence tomography system is developed to enable complete capture of the structural and birefringence properties of the anterior segment of the human eye in a single acquisition. The system uses a wavelength swept source centered at 1.3 μm, passively depth-encoded, orthogonal polarisation states in the illumination path and polarisation-diversity detection. Off-pivot galvanometer scanning is used to extend the imaging range and compensate for sensitivity drop-off. A Mueller matrix-based method is used to analyse data. We demonstrate the performance of the system and discuss issues relating to its optimisation.
NASA Astrophysics Data System (ADS)
Kwok, Ngaiming; Shi, Haiyan; Peng, Yeping; Wu, Hongkun; Li, Ruowei; Liu, Shilong; Rahman, Md Arifur
2018-04-01
Restoring images captured under low illumination is an essential front-end process for most image-based applications. The Center-Surround Retinex algorithm has been a popular approach for improving image brightness. However, in its basic form this algorithm is known to produce color degradations. In order to mitigate this problem, here the Single-Scale Retinex algorithm is modified to act as an edge extractor, while illumination is recovered through a non-linear intensity mapping stage. The derived edges are then integrated with the mapped image to produce the enhanced output. Furthermore, to reduce color distortion, the process is conducted in the magnitude-sorted domain instead of the conventional Red-Green-Blue (RGB) color channels. Experimental results have shown that improvements in mean brightness, colorfulness, saturation, and information content can be obtained.
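A sketch of the two ingredients named above: single-scale Retinex (log image minus log of a Gaussian-blurred image) used as a detail/edge extractor, plus a simple non-linear (gamma) intensity mapping. The blend weight and the magnitude-sorted-domain processing of the actual method are simplified away, so this is only an assumed, generic pipeline.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via FFT-based circular convolution (square images)."""
    n = img.shape[0]
    x = np.fft.fftfreq(n) * n
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    G = np.fft.fft(g)
    out = np.fft.ifft(np.fft.fft(img, axis=0) * G[:, None], axis=0).real
    return np.fft.ifft(np.fft.fft(out, axis=1) * G[None, :], axis=1).real

def ssr_edges(img, sigma=8.0, eps=1e-6):
    """Single-scale Retinex response log(I) - log(G * I), used as an edge map."""
    return np.log(img + eps) - np.log(gaussian_blur(img, sigma) + eps)

def gamma_map(img, gamma=0.5):
    """Non-linear intensity mapping that lifts dark regions."""
    return np.clip(img, 0, 1) ** gamma

rng = np.random.default_rng(2)
dark = rng.random((64, 64)) * 0.2            # a dim input image
enhanced = gamma_map(dark) + 0.1 * ssr_edges(dark)
print(gamma_map(dark).mean() > dark.mean())  # -> True (brightness is lifted)
```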
Comparison of Wide-Field Fluorescein Angiography and Nine-Field Montage Angiography in Uveitis
Nicholson, Benjamin P.; Nigam, Divya; Miller, Darby; Agrón, Elvira; Dalal, Monica; Jacobs-El, Naima; Lima, Breno da Rocha; Cunningham, Denise; Nussenblatt, Robert; Sen, H. Nida
2014-01-01
Purpose: To qualitatively and quantitatively compare Optos fundus camera fluorescein angiographic images of retinal vascular leakage with 9-field montage Topcon fluorescein angiography (FA) images in patients with uveitis. We hypothesized that Optos images reveal more leakage in uveitis patients. Design: Retrospective, observational case series. Methods: Images of all uveitis patients imaged with same-sitting Optos FA and 9-field montage FA during a 9-month period at a single institution (52 eyes of 31 patients) were graded for the total area of retinal vascular leakage. The main outcome measure was area of fluorescein leakage. Results: The area of apparent FA leakage was greater in Optos images than in 9-field montage images (median 22.5 mm2 vs. 4.8 mm2, P<0.0001). Twenty-two of 49 (45%) eyes with gradable photos had at least 25% more leakage on the Optos image than on the montage image; two (4.1%) had at least 25% less leakage on Optos, and 25 (51%) were similar between the two modalities. Two eyes had no apparent retinal vascular leakage on 9-field montage but were found to have apparent leakage on Optos images. Twenty-three of the 49 eyes had posterior pole leakage, and of these 17 (73.9%) showed more posterior pole leakage on the Optos image. A single 200-degree Optos FA image captured a mean of 1.50× the area captured by montage photography. Conclusion: More retinal vascular pathology, both in the periphery and the posterior pole, is seen with Optos FA in uveitis patients when compared with 9-field montage. The clinical implications of Optos FA findings have yet to be determined. PMID:24321475
2017-02-15
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Gale Crater. Basaltic sands are dark blue in this type of false color combination. The Curiosity Rover is located in another portion of Gale Crater, far southwest of this image. Orbit Number: 51803 Latitude: -4.39948 Longitude: 138.116 Instrument: VIS Captured: 2013-08-18 09:04 http://photojournal.jpl.nasa.gov/catalog/PIA21312
Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji
2016-02-22
In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
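The temporal compression can be written per pixel as y = Σ_t S[t]·x[t], with per-aperture shutter codes S. A toy forward model and a naive least-squares reconstruction are sketched below; the actual sensor reconstructs with compressed-sensing priors, which allow fewer apertures than frames, so this is only an idealized illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
T, A, P = 8, 12, 100                     # frames, apertures, pixels

# per-aperture temporal shutter codes; an identity block guarantees full
# column rank in this toy (the real sensor uses random focal-plane codes)
codes = np.vstack([np.eye(T),
                   rng.integers(0, 2, size=(A - T, T)).astype(float)])

video = rng.random((T, P))               # the fast scene: T frames of P pixels
captured = codes @ video                 # each aperture integrates coded frames

# naive reconstruction by per-pixel least squares
recovered = np.linalg.lstsq(codes, captured, rcond=None)[0]
print(np.allclose(recovered, video))     # -> True
```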
NASA Astrophysics Data System (ADS)
Zheng, Qiang; Li, Honglun; Fan, Baode; Wu, Shuanhu; Xu, Jindong
2017-12-01
Active contour model (ACM) has been one of the most widely utilized methods in magnetic resonance (MR) brain image segmentation because of its ability to capture topology changes. However, most existing ACMs consider only single-slice information in MR brain image data, i.e., the information used in ACM-based segmentation is extracted from only one slice of the MR brain image. This cannot take full advantage of the information in adjacent slices, and cannot support local segmentation of MR brain images. In this paper, a novel ACM is proposed to solve this problem; it is based on a multivariate local Gaussian distribution and combines information from adjacent slices in the MR brain image data. The segmentation is finally achieved through maximum likelihood estimation. Experiments demonstrate the advantages of the proposed ACM over single-slice ACMs in local segmentation of MR brain image series.
Accurate reconstruction of hyperspectral images from compressive sensing measurements
NASA Astrophysics Data System (ADS)
Greer, John B.; Flake, J. C.
2013-05-01
The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher, 2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al., 2008] and Dual Disperser (CASSI-DD) [Gehm et al., 2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes: an AVIRIS image of Cuprite, Nevada, and the HYMAP Urban image. To measure the accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.
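As a minimal stand-in for the Split Bregman reconstruction, the sketch below recovers a synthetic sparse signal from random CS measurements with plain iterative soft thresholding (ISTA). The paper's TV/spectral-smoothing model is considerably more elaborate; this only illustrates the "fewer measurements than unknowns" principle.

```python
import numpy as np

def ista(A, y, lam=0.02, iters=500):
    """Iterative soft thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + A.T @ (y - A @ x) / L    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
    return x

# synthetic CS problem: 60 measurements of a 128-sample, 5-sparse signal
rng = np.random.default_rng(4)
m, n, k = 60, 128, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x_hat = ista(A, y)
resid0 = np.linalg.norm(y)
resid = np.linalg.norm(y - A @ x_hat)
print(resid < 0.1 * resid0)  # -> True (measurements are explained well)
```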
Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa
2016-08-08
We developed multiocular 1/3-inch 2.75-μm-pixel-size 2.1M-pixel image sensors by co-design of both an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field, with the on-chip beam-splitter horizontally dividing rays according to incident angle and the inner meta-micro-lens collecting the divided rays into pixels with small optical loss. Cross-talk between adjacent light field images of a fabricated binocular image sensor and of a quad-ocular image sensor is as low as 6% and 7%, respectively. By selecting two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce the 3D fatigue of viewers.
Light field image denoising using a linear 4D frequency-hyperfan all-in-focus filter
NASA Astrophysics Data System (ADS)
Dansereau, Donald G.; Bongiorno, Daniel L.; Pizarro, Oscar; Williams, Stefan B.
2013-02-01
Imaging in low light is problematic as sensor noise can dominate imagery, and increasing illumination or aperture size is not always effective or practical. Computational photography offers a promising solution in the form of the light field camera, which by capturing redundant information offers an opportunity for elegant noise rejection. We show that the light field of a Lambertian scene has a 4D hyperfan-shaped frequency-domain region of support at the intersection of a dual-fan and a hypercone. By designing and implementing a filter with an appropriately shaped passband we accomplish denoising with a single all-in-focus linear filter. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenselet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including synthetic focus, fan-shaped antialiasing filters, and a range of modern nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, over a variety of metrics, and in real-world scenarios. Finally, we show that the hyperfan's performance scales with aperture count.
Balsam, Joshua; Bruck, Hugh Alan; Kostov, Yordan; Rasooly, Avraham
2012-01-01
Optical technologies are important for biological analysis. Current biomedical optical analyses rely on high-cost, high-sensitivity optical detectors such as photomultipliers, avalanche photodiodes or cooled CCD cameras. In contrast, Webcams, mobile phones and other popular consumer electronics use lower-sensitivity, lower-cost optical components such as photodiodes or CMOS sensors. In order for consumer electronics devices, such as webcams, to be useful for biomedical analysis, they must have increased sensitivity. We combined two strategies to increase the sensitivity of a CMOS-based fluorescence detector. We captured hundreds of low-sensitivity images using a Webcam in video mode, instead of the single image typically used with cooled CCD devices. We then used a computational approach, an image-stacking algorithm, to remove the noise by combining all of the images into a single image. While video mode is widely used for dynamic scene imaging (e.g. movies or time-lapse photography), it is not normally used to produce a single static image; doing so removes noise and increases sensitivity by more than thirtyfold. The portable, battery-operated Webcam-based fluorometer system developed here consists of five modules: (1) a low-cost CMOS Webcam to monitor light emission, (2) a plate to perform assays, (3) filters and a multi-wavelength LED illuminator for fluorophore excitation, (4) a portable computer to acquire and analyze images, and (5) image-stacking software for image enhancement. The samples consisted of various concentrations of fluorescein, ranging from 30 μM to 1000 μM, in a 36-well miniature plate. In single-frame mode, the fluorometer's limit of detection (LOD) for fluorescein is ∼1000 μM, which is relatively insensitive. However, when used in video mode combined with image-stacking enhancement, the LOD is dramatically reduced to 30 μM, a sensitivity similar to that of state-of-the-art ELISA plate photomultiplier-based readers.
Numerous medical diagnostic assays rely on optical and fluorescence readers. Our novel combination of detection technologies, which is new to biodetection, may enable the development of new low-cost optical detectors based on an inexpensive Webcam (<$10). It has the potential to form the basis for high-sensitivity, low-cost medical diagnostics in resource-poor settings.
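The core of the stacking step is frame averaging, which reduces zero-mean sensor noise roughly as √N for N frames; a sketch on synthetic frames (the actual software may also align and weight frames):

```python
import numpy as np

rng = np.random.default_rng(5)
signal = np.full((48, 64), 0.2)          # a faint, static fluorescence image
# 256 video frames, each corrupted by zero-mean Gaussian sensor noise
frames = signal + rng.normal(0.0, 0.1, size=(256,) + signal.shape)

stacked = frames.mean(axis=0)            # stack by averaging all frames

noise_single = (frames[0] - signal).std()
noise_stacked = (stacked - signal).std()
print(noise_single / noise_stacked)      # roughly sqrt(256) = 16
```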
Developing stereo image based robot control system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suprijadi; Pambudi, I. R.; Woran, M.
Image processing is applied in various fields and for various purposes. In the last decade, image-based systems have spread rapidly with the increasing performance of hardware and microprocessors. Many fields of science and technology use these methods, especially medicine and instrumentation. New stereovision techniques that deliver a 3-dimensional image or movie are very interesting, but have found few applications in control systems. A stereo image carries pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The result shows the robot moves automatically based on stereovision captures.
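A toy illustration of the disparity information a stereo pair adds: sum-of-absolute-differences block matching recovering a known horizontal shift between two synthetic views. Turning a disparity map into wheel commands is beyond this sketch, and this is a generic method rather than the authors' system.

```python
import numpy as np

def sad_disparity(left, right, patch, max_d):
    """Per-pixel horizontal disparity by sum-of-absolute-differences block
    matching (right view assumed shifted left relative to the left view)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    r = patch // 2
    for y in range(r, h - r):
        for x in range(r + max_d, w - r):
            block = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(block - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(max_d + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# synthetic pair: the whole textured scene at a single known disparity
rng = np.random.default_rng(6)
left = rng.random((32, 48))
true_d = 4
right = np.roll(left, -true_d, axis=1)
disp = sad_disparity(left, right, patch=5, max_d=8)
print(np.bincount(disp[8:-8, 16:-16].ravel()).argmax())  # -> 4
```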
Fast Fourier single-pixel imaging via binary illumination.
Zhang, Zibang; Wang, Xueying; Zheng, Guoan; Zhong, Jingang
2017-09-20
Fourier single-pixel imaging (FSI) employs Fourier basis patterns for encoding spatial information and is capable of reconstructing high-quality two-dimensional and three-dimensional images. Fourier-domain sparsity in natural scenes allows FSI to recover sharp images from undersampled data. The original FSI demonstration, however, requires grayscale Fourier basis patterns for illumination. This requirement imposes a limitation on the imaging speed as digital micro-mirror devices (DMDs) generate grayscale patterns at a low refreshing rate. In this paper, we report a new strategy to increase the speed of FSI by two orders of magnitude. In this strategy, we binarize the Fourier basis patterns based on upsampling and error diffusion dithering. We demonstrate a 20,000 Hz projection rate using a DMD and capture 256-by-256-pixel dynamic scenes at a speed of 10 frames per second. The reported technique substantially accelerates image acquisition speed of FSI. It may find broad imaging applications at wavebands that are not accessible using conventional two-dimensional image sensors.
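The binarization step described above can be sketched with the classic Floyd-Steinberg error diffusion kernel; the upsampling stage of the reported strategy is omitted, and the "Fourier basis pattern" here is just a toy cosine fringe.

```python
import numpy as np

def floyd_steinberg(img):
    """Binarize a grayscale pattern in [0,1] by Floyd-Steinberg error
    diffusion; quantization error is pushed onto unvisited neighbours."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                out[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                out[y + 1, x + 1] += err * 1 / 16
    return out

# binarize a toy Fourier basis pattern: a 2-D cosine fringe
n = 64
u = np.arange(n)
pattern = 0.5 + 0.5 * np.cos(2 * np.pi * (3 * u[None, :] + 2 * u[:, None]) / n)
binary = floyd_steinberg(pattern)
print(set(np.unique(binary)) == {0.0, 1.0})          # -> True
print(abs(binary.mean() - pattern.mean()) < 0.05)    # -> True (mean preserved)
```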
Universal Stochastic Multiscale Image Fusion: An Example Application for Shale Rock.
Gerke, Kirill M; Karsanina, Marina V; Mallants, Dirk
2015-11-02
Spatial data captured with sensors of different resolution would provide a maximum degree of information if the data were to be merged into a single image representing all scales. We develop a general solution for merging multiscale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images of shale rock representing macro, micro and nanoscale spatial information on mineral, organic matter and porosity distribution. Merging multiscale images of shale rock is pivotal to quantify more reliably petrophysical properties needed for production optimization and environmental impacts minimization. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic for implementation of other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. The methodology can be further used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Practical applications are not limited to petroleum engineering or more broadly geosciences, but will also find their way in material sciences, climatology, and remote sensing.
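A building block of such stochastic reconstructions is the two-point correlation (probability) function. The sketch below computes a directional S2(r) for a binary image; it is a generic illustration with a hypothetical function name, not the paper's rescaling procedure.

```python
import numpy as np

def two_point_probability(img, max_r):
    """Directional two-point probability S2(r) along the x axis:
    the probability that two pixels a distance r apart both lie in phase 1."""
    img = (img > 0).astype(np.float64)
    s2 = []
    for r in range(max_r + 1):
        a, b = img[:, :img.shape[1]-r], img[:, r:]
        s2.append((a * b).mean())
    return np.array(s2)

# For an uncorrelated random medium with volume fraction p,
# S2(0) = p and S2(r) tends to p**2 for r > 0.
rng = np.random.default_rng(1)
img = (rng.random((200, 200)) < 0.3).astype(int)
s2 = two_point_probability(img, 5)
```

In multiscale fusion one would compute such functions at each scale, rescale the lag axis to a common resolution, and use them as reconstruction targets.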
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has improved drastically in response to the demand for high-quality digital images; a digital still camera now offers several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach instead places sensors of different spatio-temporal resolution in a single camera cabinet to capture high-resolution and high-frame-rate information separately. We built a prototype camera that captures high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos to show the camera's utility.
Markerless identification of key events in gait cycle using image flow.
Vishnoi, Nalini; Duric, Zoran; Gerber, Naomi Lynn
2012-01-01
Gait analysis has been an interesting area of research for several decades. In this paper, we propose image-flow-based methods to compute the motion and velocities of different body segments automatically, using a single inexpensive video camera. We then identify and extract different events of the gait cycle (double-support, mid-swing, toe-off and heel-strike) from video images. Experiments were conducted in which four walking subjects were captured from the sagittal plane. Automatic segmentation was performed to isolate the moving body from the background. The head excursion and the shank motion were then computed to identify the key frames corresponding to different events in the gait cycle. Our approach does not require calibrated cameras or special markers to capture movement. We have also compared our method with the Optotrak 3D motion capture system and found our results in good agreement with the Optotrak results. The development of our method has potential use in the markerless and unencumbered video capture of human locomotion. Monitoring gait in homes and communities provides a useful application for the aged and the disabled. Our method could potentially be used as an assessment tool to determine gait symmetry or to establish the normal gait pattern of an individual.
Single camera volumetric velocimetry in aortic sinus with a percutaneous valve
NASA Astrophysics Data System (ADS)
Clifford, Chris; Thurow, Brian; Midha, Prem; Okafor, Ikechukwu; Raghav, Vrishank; Yoganathan, Ajit
2016-11-01
Cardiac flows have long been understood to be highly three dimensional, yet traditional in vitro techniques used to capture these complexities are costly and cumbersome. Thus, two dimensional techniques are primarily used for heart valve flow diagnostics. The recent introduction of plenoptic camera technology allows for traditional cameras to capture both spatial and angular information from a light field through the addition of a microlens array in front of the image sensor. When combined with traditional particle image velocimetry (PIV) techniques, volumetric velocity data may be acquired with a single camera using off-the-shelf optics. Particle volume pairs are reconstructed from raw plenoptic images using a filtered refocusing scheme, followed by three-dimensional cross-correlation. This technique was applied to the sinus region (known for having highly three-dimensional flow structures) of an in vitro aortic model with a percutaneous valve. Phase-locked plenoptic PIV data was acquired at two cardiac outputs (2 and 5 L/min) and 7 phases of the cardiac cycle. The volumetric PIV data was compared to standard 2D-2C PIV. Flow features such as recirculation and stagnation were observed in the sinus region in both cases.
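The three-dimensional cross-correlation step at the core of volumetric PIV can be illustrated with a toy example. The sketch below recovers a known integer shift between two synthetic particle volumes via FFT-based correlation; real PIV adds interrogation windows and sub-voxel peak fitting, which are omitted here.

```python
import numpy as np

def volume_displacement(vol_a, vol_b):
    """Find the integer 3-D shift between two particle volumes via
    FFT-based cross-correlation (the core step of volumetric PIV)."""
    F = np.fft.fftn(vol_a)
    G = np.fft.fftn(vol_b)
    corr = np.fft.ifftn(F.conj() * G).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the volume to negative displacements.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

rng = np.random.default_rng(2)
vol_a = rng.random((16, 16, 16))
vol_b = np.roll(vol_a, (2, -1, 3), axis=(0, 1, 2))  # known displacement
```

Dividing the reconstructed volumes into small interrogation windows and applying this per window yields the dense velocity field.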
Convolutional Sparse Coding for RGB+NIR Imaging.
Hu, Xuemei; Heide, Felix; Dai, Qionghai; Wetzstein, Gordon
2018-04-01
Emerging sensor designs increasingly rely on novel color filter arrays (CFAs) to sample the incident spectrum in unconventional ways. In particular, capturing a near-infrared (NIR) channel along with conventional RGB color is an exciting new imaging modality. RGB+NIR sensing has broad applications in computational photography, such as low-light denoising; in computer vision, such as facial recognition and tracking; and it paves the way toward low-cost single-sensor RGB and depth imaging using structured illumination. However, cost-effective commercial CFAs suffer from severe spectral cross talk. This cross talk represents a major challenge in high-quality RGB+NIR imaging, rendering existing spatially multiplexed sensor designs impractical. In this work, we introduce a new approach to RGB+NIR image reconstruction using learned convolutional sparse priors. We demonstrate high-quality color and NIR imaging for challenging scenes, even including high-frequency structured NIR illumination. The effectiveness of the proposed method is validated on a large data set of experimental captures, and simulated benchmark results demonstrate that this work achieves unprecedented reconstruction quality.
NASA Astrophysics Data System (ADS)
Jaiswal, Mayoore; Horning, Matt; Hu, Liming; Ben-Or, Yau; Champlin, Cary; Wilson, Benjamin; Levitz, David
2018-02-01
Cervical cancer is the fourth most common cancer among women worldwide and is especially prevalent in low resource settings due to lack of screening and treatment options. Visual inspection with acetic acid (VIA) is a widespread and cost-effective screening method for cervical pre-cancer lesions, but accuracy depends on the experience level of the health worker. Digital cervicography, capturing images of the cervix, enables review by an off-site expert or potentially a machine learning algorithm. These reviews require images of sufficient quality. However, image quality varies greatly across users. A novel algorithm was developed to evaluate the sharpness of images captured with MobileODT's digital cervicography device (EVA System), in order to eventually provide feedback to the health worker. The key challenges are that the algorithm evaluates only a single image of each cervix; that it must be robust to the variability in cervix images yet fast enough to run in real time on a mobile device; and that the machine learning model must be small enough to fit in a mobile device's memory, train on a small imbalanced dataset, and run in real time. In this paper, the focus scores of a preprocessed image and a Gaussian-blurred version of the image are calculated using established methods and used as features. A feature selection metric is proposed to select the top features, which are then used in a random forest classifier to produce the final focus score. The resulting model, based on nine calculated focus scores, achieved significantly better accuracy than any single focus measure when tested on a holdout set of images. The area under the receiver operating characteristic curve was 0.9459.
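The paired focus features described above (a score on the image and on a blurred copy) can be sketched generically. The variance-of-Laplacian measure and the box blur below are illustrative stand-ins, not the nine measures the authors selected, and the function names are hypothetical.

```python
import numpy as np

def laplacian_variance(img):
    """Focus score: variance of a 3x3 Laplacian response. Sharp images
    have strong high-frequency content and therefore a high score."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def box_blur(img, k=5):
    """Crude stand-in for the Gaussian blur used to form the second feature."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(3)
sharp = rng.random((64, 64))
blurred = box_blur(sharp)
# Feature pair (score of image, score of its blurred copy); a classifier
# such as a random forest would consume several such pairs per image.
f_sharp = laplacian_variance(sharp)
f_blur = laplacian_variance(blurred)
```

The ratio between the two scores is what makes the feature robust: blurring a sharp image collapses the score, blurring an already-soft image barely changes it.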
Improved depth estimation with the light field camera
NASA Astrophysics Data System (ADS)
Wang, Huachun; Sang, Xinzhu; Chen, Duo; Guo, Nan; Wang, Peng; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu
2017-10-01
Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is available in a single capture. Lytro, Inc. also provides depth estimation from a single-shot capture with a light-field camera such as the Lytro Illum. This Lytro depth estimate contains much correct depth information and can be used for a higher quality estimation. In this paper, we present a novel, simple and principled algorithm that computes dense depth estimates by combining defocus, correspondence and Lytro depth estimations. We analyze 2D epipolar images (EPIs) to get defocus and correspondence depth maps. Defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance from EPIs. Lytro depth can be extracted from the Lytro Illum with software. We then show how to combine the three cues into a high quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction, as well as light field displays.
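The correspondence cue, angular variance of the EPI, can be illustrated with a synthetic example. In the sketch below, a texture translating one pixel per angular step has (near-)zero variance when the EPI is sheared by the true slope; the function name is hypothetical and the defocus cue is omitted.

```python
import numpy as np

def correspondence_cost(epi, slope):
    """Variance across the angular axis after shearing the EPI by `slope`.
    For a Lambertian scene the variance is minimal at the true slope."""
    u = epi.shape[0]
    center = u // 2
    sheared = np.stack([np.roll(row, -int(round(slope * (i - center))))
                        for i, row in enumerate(epi)])
    return sheared.var(axis=0)

# Synthetic EPI: a texture translating one pixel per angular step (slope 1).
rng = np.random.default_rng(5)
line = rng.random(32)
epi = np.stack([np.roll(line, i - 3) for i in range(7)])
costs = {s: correspondence_cost(epi, s).mean() for s in (0, 1, 2)}
```

Sweeping `slope` over a range and taking the per-pixel minimizer of this cost is one simple way to turn an EPI into a correspondence depth map.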
A single-pixel X-ray imager concept and its application to secure radiographic inspections
Gilbert, Andrew J.; Miller, Brian W.; Robinson, Sean M.; ...
2017-07-01
Imaging technology is generally considered too invasive for arms control inspections due to the concern that it cannot properly secure sensitive features of the inspected item. However, this same sensitive information, which could include direct information on the form and function of the items under inspection, could be used for robust arms control inspections. The single-pixel X-ray imager (SPXI) is introduced as a method to make such inspections, capturing the salient spatial information of an object in a secure manner while never forming an actual image. We built this method on the theory of compressive sensing and the single pixel optical camera. The performance of the system is quantified using simulated inspections of simple objects. Measures of the robustness and security of the method are introduced and used to determine how robust and secure such an inspection would be. In particular, it is found that an inspection with low noise (<1%) and high undersampling (>256×) exhibits high robustness and security.
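The measurement model behind such a single-pixel inspection can be sketched in a few lines: each patterned exposure yields one scalar, the inner product of the pattern with the scene. The toy below uses (over-)complete sampling and a plain least-squares solve; an actual SPXI inspection undersamples heavily and relies on a compressive-sensing reconstruction instead. All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8                       # 8x8 scene, 64 unknowns
scene = np.zeros((n, n))
scene[2:6, 3:5] = 1.0       # a simple object
x_true = scene.ravel()

# Each measurement: project one random binary pattern, record one total flux.
m = 80                      # over-complete here; SPXI uses m << n*n
A = rng.integers(0, 2, size=(m, n * n)).astype(float)
y = A @ x_true

# With enough measurements a least-squares solve recovers the scene exactly;
# with m << n*n a sparsity-promoting solver (e.g. TV or l1) is used instead.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

The security argument in the paper rests on the fact that only the scalars `y` (and derived metrics), never a reconstructed `x_hat`, need to leave the instrument.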
A four-lens based plenoptic camera for depth measurements
NASA Astrophysics Data System (ADS)
Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe
2015-04-01
In previous works, we extended the principles of "variable homography", defined by Zhang and Greenspan, for measuring the height of emergent fibers on glass and non-woven fabrics. This method was defined for fabric samples progressing on a conveyor belt, and triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we retain the advantages of variable homography for measurements along the Z axis, but reduce the number of acquisitions to a single one by developing an acquisition device with 4 lenses placed in front of a single image sensor. The idea is to obtain four projected sub-images on a single CCD sensor. The device thus becomes a plenoptic or light-field camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation to this device and propose a new formulation to calculate depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results are very promising.
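The homography machinery underlying the method can be illustrated generically. The sketch below maps points through a 3x3 homography; the specific matrix (pure horizontal parallax for a reference plane) is a hypothetical stand-in for the calibrated inter-lens homographies, and the height-from-residual step is only described in the comment.

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography (homogeneous coordinates)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Hypothetical homography between two of the four sub-images: for a plane at
# the reference height, points in sub-image A map exactly to sub-image B. A
# point off that plane violates the mapping, and in variable-homography
# schemes the residual parallax encodes its height along Z.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])   # pure 5-px parallax for the reference plane
pts = np.array([[10.0, 20.0], [30.0, 40.0]])
mapped = apply_homography(H, pts)
```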
Utilizing Light-field Imaging Technology in Neurosurgery.
Chen, Brian R; Buchanan, Ian A; Kellis, Spencer; Kramer, Daniel; Ohiorhenuan, Ifije; Blumenfeld, Zack; Grisafe Ii, Dominic J; Barbaro, Michael F; Gogia, Angad S; Lu, James Y; Chen, Beverly B; Lee, Brian
2018-04-10
Traditional still cameras can only focus on a single plane for each image while rendering everything outside of that plane out of focus. However, new light-field imaging technology makes it possible to adjust the focus plane after an image has already been captured. This technology allows the viewer to interactively explore an image with objects and anatomy at varying depths and clearly focus on any feature of interest by selecting that location during post-capture viewing. These images with adjustable focus can serve as valuable educational tools for neurosurgical residents. We explore the utility of light-field cameras and review their strengths and limitations compared to other conventional types of imaging. The strength of light-field images is the adjustable focus, as opposed to the fixed-focus of traditional photography and video. A light-field image also is interactive by nature, as it requires the viewer to select the plane of focus and helps with visualizing the three-dimensional anatomy of an image. Limitations include the relatively low resolution of light-field images compared to traditional photography and video. Although light-field imaging is still in its infancy, there are several potential uses for the technology to complement traditional still photography and videography in neurosurgical education.
Virtual view image synthesis for eye-contact in TV conversation system
NASA Astrophysics Data System (ADS)
Murayama, Daisuke; Kimura, Keiichi; Hosaka, Tadaaki; Hamamoto, Takayuki; Shibuhisa, Nao; Tanaka, Seiichi; Sato, Shunichi; Saito, Sakae
2010-02-01
Eye-contact plays an important role in human communication in the sense that it can convey unspoken information. However, it is highly difficult to realize eye-contact in teleconferencing systems because of camera configurations. Conventional methods to overcome this difficulty mainly resorted to space-consuming optical devices such as half mirrors. In this paper, we propose an alternative approach that achieves eye-contact through arbitrary view image synthesis. In our method, multiple images captured by real cameras are converted to the virtual viewpoint (the center of the display) by homography, and evaluation of matching errors among these projected images provides the depth map and the virtual image. Furthermore, we also propose a simpler version of this method that uses a single camera to save computational costs, in which only one real image is transformed to the virtual viewpoint based on the hypothesis that the subject is located at a predetermined distance. In this simple implementation, eye regions are separately generated by comparison with pre-captured frontal face images. Experimental results of both methods show that the synthesized virtual images enable eye-contact favorably.
Fast single image dehazing based on image fusion
NASA Astrophysics Data System (ADS)
Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian
2015-01-01
Images captured in foggy weather conditions often show faded colors and reduced contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts the assumption that the degradation level caused by haze is the same within each region, which is similar to the Retinex theory, and uses a simple Gaussian filter to get the coarse medium transmission. Then, pixel-level fusion is performed between the initial medium transmission and the coarse medium transmission. The proposed method can recover a high-quality haze-free image based on the physical model, and its complexity is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method allows a very fast implementation and achieves better restoration of visibility and color fidelity compared to some state-of-the-art methods.
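The first step mentioned above, estimating the initial transmission from the dark channel prior, can be sketched as follows. The patch size, the `omega` value, and the synthetic two-region image are illustrative choices, not the paper's parameters.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel: per-pixel minimum over color channels and a local patch."""
    mins = img.min(axis=2)
    pad = patch // 2
    p = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.full((h, w), np.inf)
    for dy in range(patch):
        for dx in range(patch):
            out = np.minimum(out, p[dy:dy + h, dx:dx + w])
    return out

def transmission(img, airlight, omega=0.95, patch=3):
    """Initial medium transmission: t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / airlight, patch)

# Haze-free regions have a near-zero dark channel, so t stays close to 1;
# heavily hazed (bright, colorless) regions push t toward 1 - omega.
img = np.zeros((8, 8, 3))
img[:4] = [0.9, 0.9, 0.9]      # hazy half: bright in every channel
img[4:] = [0.8, 0.1, 0.05]     # colorful, haze-free half
A = np.array([1.0, 1.0, 1.0])  # estimated atmospheric light
t = transmission(img, A)
```

The paper's coarse transmission (Gaussian-filtered) and the pixel-level fusion step would then refine this initial `t` before inverting the haze model.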
X-ray ‘ghost images’ could cut radiation doses
NASA Astrophysics Data System (ADS)
Chen, Sophia
2018-03-01
On its own, a single-pixel camera captures pictures that are pretty dull: squares that are completely black, completely white, or some shade of gray in between. All it does, after all, is detect brightness. Yet by connecting a single-pixel camera to a patterned light source, a team of physicists in China has made detailed x-ray images using a statistical technique called ghost imaging, first pioneered 20 years ago in infrared and visible light. Researchers in the field say future versions of this system could take clear x-ray photographs with cheap cameras—no need for lenses and multipixel detectors—and less cancer-causing radiation than conventional techniques.
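The statistical reconstruction behind ghost imaging is compact enough to simulate directly: the object appears in the covariance between the single-pixel (bucket) signal and the illumination patterns. The sketch below is a visible-light-style toy with made-up sizes, not the x-ray apparatus described.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 16
obj = np.zeros((n, n))
obj[5:11, 5:11] = 1.0            # transmissive square "object"

m = 20000                        # number of patterned illuminations
patterns = rng.random((m, n, n))
bucket = (patterns * obj).sum(axis=(1, 2))   # single-pixel (bucket) signal

# Ghost image: covariance of the bucket signal with each pattern pixel,
# G(x) = <B * S(x)> - <B><S(x)>, which is proportional to the object.
G = (bucket[:, None, None] * patterns).mean(axis=0) \
    - bucket.mean() * patterns.mean(axis=0)
```

More measurements sharpen the estimate (the per-pixel noise falls as 1/sqrt(m)), which is exactly the dose-versus-quality trade-off the article discusses.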
Super-resolved all-refocused image with a plenoptic camera
NASA Astrophysics Data System (ADS)
Wang, Xiang; Li, Lin; Hou, Guangqi
2015-12-01
This paper proposes an approach to producing super-resolved all-in-focus images with a plenoptic camera. A plenoptic camera can be made by placing a micro-lens array between the lens and the sensor of a conventional camera. This kind of camera captures both the angular and spatial information of the scene in a single shot. A sequence of digitally refocused images, focused at different depths, can be produced by processing the 4D light field captured by the plenoptic camera. The number of pixels in a refocused image equals the number of micro-lenses in the array, so a limited number of micro-lenses yields low-resolution refocused images lacking fine detail. Such lost details, which are often high-frequency information, are important for the in-focus part of a refocused image, so we super-resolve these in-focus parts. Image segmentation based on random walks, applied to the depth map produced from the 4D light field data, separates the foreground and background of the refocused image, and a focus evaluation function determines which refocused image has the clearest foreground and which has the clearest background. We then apply single-image super-resolution based on sparse signal representation to the in-focus parts of these selected refocused images. Finally, we obtain the super-resolved all-in-focus image by digitally merging the in-focus background and foreground parts, so that more spatial detail is kept in the output images. Our method enhances the resolution of the refocused image, and only the refocused images with the clearest foreground and background need to be super-resolved.
Single-snapshot 2D color measurement by plenoptic imaging system
NASA Astrophysics Data System (ADS)
Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana
2014-03-01
Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor high color fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing and color estimation processing. Optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of displays and show that it achieves color accuracy of ΔE<0.01.
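Color accuracy figures such as ΔE<0.01 come from comparing measurements in CIELAB space. The sketch below applies the standard CIE XYZ-to-L*a*b* conversion and the Euclidean ΔE*ab distance; the sample tristimulus values are illustrative, not measurements from the paper's prototype.

```python
import numpy as np

def xyz_to_lab(xyz, white):
    """CIE 1976 L*a*b* from XYZ tristimulus values (standard CIE formulas)."""
    def f(t):
        d = 6 / 29
        return np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4 / 29)
    x, y, z = f(xyz[0]/white[0]), f(xyz[1]/white[1]), f(xyz[2]/white[2])
    L = 116 * y - 16
    a = 500 * (x - y)
    b = 200 * (y - z)
    return np.array([L, a, b])

def delta_e(xyz1, xyz2, white):
    """ΔE*ab: Euclidean distance between two colors in Lab space."""
    return float(np.linalg.norm(xyz_to_lab(xyz1, white) - xyz_to_lab(xyz2, white)))

white = np.array([95.047, 100.0, 108.883])   # D65 reference white
c1 = np.array([41.24, 21.26, 1.93])          # XYZ of sRGB primary red
c2 = c1 * 1.001                              # a barely different measurement
```

A 0.1% perturbation of the tristimulus values yields a ΔE far below 1, which gives a feel for how tight a ΔE<0.01 specification is.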
Eye vergence responses during a visual memory task.
Solé Puig, Maria; Romeo, August; Cañete Crespillo, Jose; Supèr, Hans
2017-02-08
A previous report showed that covertly attending to visual stimuli produces a small convergence of the eyes, and that visual stimuli can give rise to different modulations of the angle of eye vergence depending on their power to capture attention. Working memory is highly dependent on attention. Therefore, in this study we assessed vergence responses in a memory task. Participants scanned a set of 8 or 12 images for 10 s, and thereafter were presented with a series of single images. One half were repeat images, that is, they belonged to the initial set, and the other half were novel images. Participants were asked to indicate whether or not the images were included in the initial image set. We observed that the eyes converge while scanning the set of images and during the presentation of the single images. The convergence was stronger for remembered images than for nonremembered images. Modulation in pupil size did not correspond to behavioural responses. The correspondence between vergence and the coding/retrieval processes of memory strengthens the idea of a role for vergence in the attentional processing of visual information.
3D surface pressure measurement with single light-field camera and pressure-sensitive paint
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth
2018-05-01
A novel technique that simultaneously measures three-dimensional model geometry, as well as surface pressure distribution, with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a hardware setup similar to the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for relatively large curvature models, and the pressure results compare well with Schlieren tests, analytical calculations, and numerical simulations.
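Intensity-based PSP converts a wind-off/wind-on intensity ratio to pressure through a Stern-Volmer calibration. The round-trip sketch below uses hypothetical calibration coefficients A and B; real calibrations are fit per paint batch and temperature.

```python
import numpy as np

def pressure_from_intensity(i_ref, i_on, p_ref, a=0.2, b=0.8):
    """Stern-Volmer calibration for intensity-based PSP:
        I_ref / I = A + B * (p / p_ref)
    so pressure follows from the wind-off / wind-on intensity ratio.
    A and B here are hypothetical calibration coefficients."""
    return p_ref * (i_ref / i_on - a) / b

# Round trip: synthesize the wind-on intensity for a known pressure field,
# then recover the pressure from the intensity ratio.
p_ref = 101.325                            # kPa, reference (wind-off) pressure
p_true = np.array([50.0, 101.325, 150.0])  # kPa
i_ref = 1000.0                             # wind-off intensity (counts)
i_on = i_ref / (0.2 + 0.8 * p_true / p_ref)
p = pressure_from_intensity(i_ref, i_on, p_ref)
```

In the LF-3DPSP arrangement this per-pixel conversion is applied after the light-field refocusing step, so the ratio is formed on the reconstructed 3D surface rather than on a flat image.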
Whole surface image reconstruction for machine vision inspection of fruit
NASA Astrophysics Data System (ADS)
Reese, D. Y.; Lefcourt, A. M.; Kim, M. S.; Lo, Y. M.
2007-09-01
Automated imaging systems offer the potential to inspect the quality and safety of fruits and vegetables consumed by the public. Current automated inspection systems allow fruit such as apples to be sorted for quality issues including color and size by looking at a portion of the surface of each fruit. However, to inspect for defects and contamination, the whole surface of each fruit must be imaged. The goal of this project was to develop an effective and economical method for whole surface imaging of apples using mirrors and a single camera. Challenges include mapping the concave stem and calyx regions. To allow the entire surface of an apple to be imaged, apples were suspended or rolled above the mirrors using two parallel music wires. A camera above the apples captured 90 images per sec (640 by 480 pixels). Single or multiple flat or concave mirrors were mounted around the apple in various configurations to maximize surface imaging. Data suggest that the use of two flat mirrors provides inadequate coverage of a fruit but using two parabolic concave mirrors allows the entire surface to be mapped. Parabolic concave mirrors magnify images, which results in greater pixel resolution and reduced distortion. This result suggests that a single camera with two parabolic concave mirrors can be a cost-effective method for whole surface imaging.
NASA Astrophysics Data System (ADS)
Pogue, B. W.; Krishnaswamy, V.; Jermyn, M.; Bruza, P.; Miao, T.; Ware, William; Saunders, S. L.; Andreozzi, J. M.; Gladstone, D. J.; Jarvis, L. A.
2017-05-01
Cherenkov imaging has been shown to allow near real time imaging of the beam entrance and exit on patient tissue, with an appropriate intensified camera and associated image processing. A dedicated system has been developed for research into full torso imaging of whole breast irradiation, where the dual camera system captures the beam shape for all beamlets used in this treatment protocol. Particularly challenging verification measurements exist in dynamic wedge, field in field, and boost delivery, and the system was designed to capture these as they are delivered. Two intensified CMOS (ICMOS) cameras were developed and mounted in a breast treatment room, and pilot studies of intensity and stability were completed. Software tools to contour the treatment area have been developed and are being tested prior to initiation of the full trial. At present, it is possible to record delivery of individual beamlets as small as a single MLC thickness, and readout at 20 frames per second is achieved. Statistical analysis of system repeatability and stability is presented, as well as pilot human studies.
Wu, Yicong; Chandris, Panagiotis; Winter, Peter W.; Kim, Edward Y.; Jaumouillé, Valentin; Kumar, Abhishek; Guo, Min; Leung, Jacqueline M.; Smith, Corey; Rey-Suarez, Ivan; Liu, Huafeng; Waterman, Clare M.; Ramamurthi, Kumaran S.; La Riviere, Patrick J.; Shroff, Hari
2016-01-01
Most fluorescence microscopes are inefficient, collecting only a small fraction of the emitted light at any instant. Besides wasting valuable signal, this inefficiency also reduces spatial resolution and causes imaging volumes to exhibit significant resolution anisotropy. We describe microscopic and computational techniques that address these problems by simultaneously capturing and subsequently fusing and deconvolving multiple specimen views. Unlike previous methods that serially capture multiple views, our approach improves spatial resolution without introducing any additional illumination dose or compromising temporal resolution relative to conventional imaging. When applying our methods to single-view wide-field or dual-view light-sheet microscopy, we achieve a twofold improvement in volumetric resolution (~235 nm × 235 nm × 340 nm) as demonstrated on a variety of samples including microtubules in Toxoplasma gondii, SpoVM in sporulating Bacillus subtilis, and multiple protein distributions and organelles in eukaryotic cells. In every case, spatial resolution is improved with no drawback by harnessing previously unused fluorescence. PMID:27761486
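The fuse-and-deconvolve step can be illustrated in one dimension with a joint Richardson-Lucy iteration over two differently blurred views of the same signal. This is a generic sketch, not the authors' pipeline: the PSFs, signal, and iteration count are all made up.

```python
import numpy as np

def blur(x, psf):
    return np.convolve(x, psf, mode="same")

def joint_richardson_lucy(views, psfs, n_iter=200):
    """Jointly deconvolve several views of the same signal, each blurred by
    its own PSF, by cycling multiplicative RL updates over the views."""
    est = np.full_like(views[0], views[0].mean())
    for _ in range(n_iter):
        for v, psf in zip(views, psfs):
            ratio = v / np.maximum(blur(est, psf), 1e-12)
            est = est * blur(ratio, psf[::-1])   # update with the flipped PSF
    return est

# Ground truth: two spikes; two views blurred by different (made-up) PSFs.
truth = np.zeros(32)
truth[10], truth[20] = 4.0, 2.0
psf_a = np.array([0.25, 0.5, 0.25])
psf_b = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
views = [blur(truth, psf_a), blur(truth, psf_b)]
fused = joint_richardson_lucy(views, [psf_a, psf_b])
```

Because each view constrains the estimate along its own blur direction, the joint update recovers sharper structure than deconvolving either view alone, which is the intuition behind fusing views with anisotropic PSFs.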
Witmer, Matthew T; Parlitsis, George; Patel, Sarju; Kiss, Szilárd
2013-01-01
To compare ultra-widefield fluorescein angiography imaging using the Optos® Optomap® and the Heidelberg Spectralis® noncontact ultra-widefield module. Five patients (ten eyes) underwent ultra-widefield fluorescein angiography using the Optos® panoramic P200Tx imaging system and the noncontact ultra-widefield module in the Heidelberg Spectralis® HRA+OCT system. The images were obtained as a single, nonsteered shot centered on the macula. The area of imaged retina was outlined and quantified using Adobe® Photoshop® CS5 software. The total area and the area within each of four visualized quadrants were calculated and compared between the two imaging modalities. Three masked reviewers also evaluated each quadrant per eye (40 total quadrants) to determine which modality imaged the retinal vasculature most peripherally. Optos® imaging captured a total retinal area averaging 151,362 pixels (range, 116,998 to 205,833 pixels), while the average area captured using the Heidelberg Spectralis® was 101,786 pixels (range, 73,424 to 116,319 pixels) (P = 0.0002). The average area per individual quadrant imaged by the Optos® versus the Heidelberg Spectralis® was 32,373 vs 32,789 pixels superiorly (P = 0.91), 24,665 vs 26,117 pixels inferiorly (P = 0.71), 47,948 vs 20,645 pixels temporally (P = 0.0001), and 46,374 vs 22,234 pixels nasally (P = 0.0001). The Heidelberg Spectralis® was able to image the superior and inferior retinal vasculature to a more distal point than was the Optos® in nine of ten eyes (18 of 20 quadrants). The Optos® was able to image the nasal and temporal retinal vasculature to a more distal point than was the Heidelberg Spectralis® in ten of ten eyes (20 of 20 quadrants). Both the Optos® and the Heidelberg Spectralis® ultra-widefield imaging systems are excellent modalities that provide views of the peripheral retina.
On a single nonsteered image, the Optos® Optomap® covered a significantly larger total retinal surface area, with greater image variability, than did the Heidelberg Spectralis® ultra-widefield module. The Optos® captured an appreciably wider view of the retina temporally and nasally, albeit with peripheral distortion, while the ultra-widefield Heidelberg Spectralis® module was able to image the superior and inferior retinal vasculature more peripherally. The clinical significance of these findings, as well as the area imaged on steered montaged images, remains to be determined.
Performance assessment of a single-pixel compressive sensing imaging system
NASA Astrophysics Data System (ADS)
Du Bosq, Todd W.; Preece, Bradley L.
2016-05-01
Conventional electro-optical and infrared (EO/IR) systems capture an image by measuring the light incident at each of the millions of pixels in a focal plane array. Compressive sensing (CS) instead captures a smaller number of unconventional measurements from the scene, and then uses a companion process known as sparse reconstruction to recover the image as if a fully populated array satisfying the Nyquist criterion had been used. Therefore, CS operates under the assumption that signal acquisition and data compression can be accomplished simultaneously. CS has the potential to acquire an image with information content equivalent to that of a large-format array while using smaller, cheaper, and lower-bandwidth components. However, the benefits of CS do not come without compromise. The CS architecture chosen must effectively balance physical considerations (SWaP-C), reconstruction accuracy, and reconstruction speed to meet operational requirements. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of the two-handheld-object target set was collected using a passive SWIR single-pixel CS camera for various ranges, mirror resolutions, and numbers of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled with the Night Vision Integrated Performance Model (NV-IPM) by mapping the nonlinear degradations to an equivalent linear shift-invariant model. Finally, the limitations of CS modeling techniques are discussed.
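To make the sparse-reconstruction step concrete, here is a minimal sketch, not the camera's actual algorithm: ISTA (iterative shrinkage-thresholding) recovering a synthetic sparse scene from under-sampled measurements. The sizes, the ±1 patterns (realizable on a DMD by differencing two binary masks), and the regularization weight are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-pixel CS problem: recover a sparse n-pixel scene x from
# m < n measurements y = A @ x, one row of A per DMD pattern.
n, m, k = 256, 96, 8                     # pixels, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by gradient steps
# followed by soft-thresholding (the sparse-reconstruction step).
lam = 0.01
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    x = x - A.T @ (A @ x - y) / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative reconstruction error:", err)
```

With far fewer measurements than pixels, the l1 penalty is what makes the underdetermined system recoverable for sparse scenes.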
Low-cost, high-performance and efficiency computational photometer design
NASA Astrophysics Data System (ADS)
Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly
2014-05-01
Researchers at the University of Alaska Anchorage and the University of Colorado Boulder have built a low-cost, high-performance, high-efficiency drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible-spectrum cameras with near- to long-wavelength infrared detectors and high-resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time-correlate read-out, capture, and image-process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high-definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard-definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time-correlated to megapixel high-definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field monitoring applications that need multispectral and three-dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the Arctic, including volcanic plumes, ice formation, and Arctic marine life.
Dendrimer probes for enhanced photostability and localization in fluorescence imaging.
Kim, Younghoon; Kim, Sung Hoon; Tanyeri, Melikhan; Katzenellenbogen, John A; Schroeder, Charles M
2013-04-02
Recent advances in fluorescence microscopy have enabled high-resolution imaging and tracking of single proteins and biomolecules in cells. To achieve high spatial resolutions in the nanometer range, bright and photostable fluorescent probes are critically required. From this view, there is a strong need for development of advanced fluorescent probes with molecular-scale dimensions for fluorescence imaging. Polymer-based dendrimer nanoconjugates hold strong potential to serve as versatile fluorescent probes due to an intrinsic capacity for tailored spectral properties such as brightness and emission wavelength. In this work, we report a new, to our knowledge, class of molecular probes based on dye-conjugated dendrimers for fluorescence imaging and single-molecule fluorescence microscopy. We engineered fluorescent dendritic nanoprobes (FDNs) to contain multiple organic dyes and reactive groups for target-specific biomolecule labeling. The photophysical properties of dye-conjugated FDNs (Cy5-FDNs and Cy3-FDNs) were characterized using single-molecule fluorescence microscopy, which revealed greatly enhanced photostability, increased probe brightness, and improved localization precision in high-resolution fluorescence imaging compared to single organic dyes. As proof-of-principle demonstration, Cy5-FDNs were used to assay single-molecule nucleic acid hybridization and for immunofluorescence imaging of microtubules in cytoskeletal networks. In addition, Cy5-FDNs were used as reporter probes in a single-molecule protein pull-down assay to characterize antibody binding and target protein capture. In all cases, the photophysical properties of FDNs resulted in enhanced fluorescence imaging via improved brightness and/or photostability. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Q-sort assessment vs visual analog scale in the evaluation of smile esthetics.
Schabel, Brian J; McNamara, James A; Franchi, Lorenzo; Baccetti, Tiziano
2009-04-01
This study was designed to compare the reliability of the Q-sort and visual analog scale (VAS) methods for the assessment of smile esthetics. Furthermore, agreement between orthodontists and parents of orthodontic patients, and between male and female raters, was assessed in terms of subjective evaluation of the smile. Clinical photographs and digital video captures of 48 orthodontically treated patients were rated by 2 panels: 25 experienced orthodontists (15 men, 10 women) and 20 parents of the patients (8 men, 12 women). Interrater reliability of the Q-sort and VAS methods was evaluated by using single-measure and average-measure intraclass correlation (ICC). Kappa agreement and the McNemar test were used to evaluate agreement between orthodontists and parents, and between men and women, for "attractive" and "unattractive" images of smiles captured with clinical photography. The single-measure ICC coefficients showed fair to good reliability of the Q-sort and poor reliability of the VAS for measuring esthetic preferences of an individual orthodontist or parent. Both rating groups agreed significantly (P > 0.05) on the total percentage of "attractive" images of smiles captured with clinical photography. Men and women, however, significantly disagreed on the total percentages of "attractive" and "unattractive" smiles. Women rated higher percentages of both image groups as "attractive" than did their male counterparts. The Q-sort was more reliable than the VAS for measuring smile esthetics. Orthodontists and parents of orthodontic patients agreed with respect to "attractive" and "unattractive" smiles. Men and women agreed poorly with respect to "attractive" and "unattractive" smiles.
Plenoptic Image Motion Deblurring.
Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo
2018-04-01
We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.
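The iterative synthesize-and-update loop described above can be illustrated with a much simpler relative: non-blind Richardson-Lucy deconvolution, the kind of image-update step that blind schemes alternate with a PSF update. This sketch uses a synthetic image and an assumed known 5×5 blur on a standard (non-plenoptic) image model; it is not the paper's method.

```python
import numpy as np
from scipy.signal import fftconvolve

# Synthetic sharp image and an assumed, known box-blur PSF.
rng = np.random.default_rng(5)
img = np.zeros((32, 32))
img[8:24, 8:24] = rng.uniform(0.5, 1.0, (16, 16))
psf = np.ones((5, 5)) / 25.0
blurred = fftconvolve(img, psf, mode="same")

# Richardson-Lucy: multiplicative updates that re-blur the current
# estimate, compare against the observed blurry image, and correct.
est = np.full_like(img, 0.5)             # flat initial estimate
for _ in range(50):
    ratio = blurred / (fftconvolve(est, psf, mode="same") + 1e-12)
    est *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")

err_blurred = np.linalg.norm(blurred - img)
err_deconv = np.linalg.norm(est - img)
print(err_blurred, err_deconv)
```

A blind variant would interleave an analogous update for the PSF; the paper's contribution is making the re-blur (forward synthesis) step tractable for the high-dimensional plenoptic imaging model.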
High Dynamic Range Pixel Array Detector for Scanning Transmission Electron Microscopy.
Tate, Mark W; Purohit, Prafull; Chamberlain, Darol; Nguyen, Kayla X; Hovden, Robert; Chang, Celesta S; Deb, Pratiti; Turgut, Emrah; Heron, John T; Schlom, Darrell G; Ralph, Daniel C; Fuchs, Gregory D; Shanks, Katherine S; Philipp, Hugh T; Muller, David A; Gruner, Sol M
2016-02-01
We describe a hybrid pixel array detector (electron microscope pixel array detector, or EMPAD) adapted for use in electron microscope applications, especially as a universal detector for scanning transmission electron microscopy. The 128×128 pixel detector consists of a 500 µm thick silicon diode array bump-bonded pixel-by-pixel to an application-specific integrated circuit. The in-pixel circuitry provides a 1,000,000:1 dynamic range within a single frame, allowing the direct electron beam to be imaged while still maintaining single electron sensitivity. A 1.1 kHz framing rate enables rapid data collection and minimizes sample drift distortions while scanning. By capturing the entire unsaturated diffraction pattern in scanning mode, one can simultaneously capture bright field, dark field, and phase contrast information, as well as being able to analyze the full scattering distribution, allowing true center of mass imaging. The scattering is recorded on an absolute scale, so that information such as local sample thickness can be directly determined. This paper describes the detector architecture, data acquisition system, and preliminary results from experiments with 80-200 keV electron beams.
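The simultaneous bright-field, dark-field, and center-of-mass readout amounts to different reductions over the same 4D stack of unsaturated diffraction patterns. A minimal sketch follows; the synthetic data, scan size, and mask radii are illustrative assumptions, not EMPAD specifics.

```python
import numpy as np

# data[sy, sx, ky, kx]: one 128x128 diffraction pattern per scan position.
ny, nx, det = 4, 4, 128
rng = np.random.default_rng(1)
data = rng.poisson(2.0, size=(ny, nx, det, det)).astype(float)

ky, kx = np.mgrid[0:det, 0:det]
r = np.hypot(ky - det / 2, kx - det / 2)
bf_mask = r < 10          # bright-field: central disc (assumed radius)
df_mask = r > 30          # annular dark-field: high angles (assumed radius)

# All signals come from the same captured patterns, simultaneously.
bf = (data * bf_mask).sum(axis=(2, 3))
adf = (data * df_mask).sum(axis=(2, 3))

# Center of mass of each pattern: the phase-contrast signal.
tot = data.sum(axis=(2, 3))
com_y = (data * ky).sum(axis=(2, 3)) / tot
com_x = (data * kx).sum(axis=(2, 3)) / tot

print(bf.shape, adf.shape, com_y.shape)
```

Because the full pattern is recorded on an absolute scale, any virtual detector geometry can be applied after the fact rather than being fixed in hardware.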
Two-photon voltage imaging using a genetically encoded voltage indicator
Akemann, Walther; Sasaki, Mari; Mutoh, Hiroki; Imamura, Takeshi; Honkura, Naoki; Knöpfel, Thomas
2013-01-01
Voltage-sensitive fluorescent proteins (VSFPs) are a family of genetically-encoded voltage indicators (GEVIs) reporting membrane voltage fluctuations from genetically-targeted cells, from cell cultures to whole brains in awake mice, as demonstrated earlier using 1-photon (1P) fluorescence excitation imaging. However, in-vivo 1P imaging captures optical signals only from superficial layers and does not optically resolve single neurons. Two-photon (2P) excitation imaging, on the other hand, has not yet been convincingly applied to GEVI experiments. Here we show that 2P imaging of VSFP Butterfly 1.2-expressing pyramidal neurons in layer 2/3 reports optical membrane voltage in brain slices consistent with 1P imaging but with a 2-3 times larger ΔR/R value. 2P imaging of mouse cortex in-vivo achieved cellular resolution throughout layer 2/3. In somatosensory cortex we recorded sensory responses to single whisker deflections in anesthetized mice at full-frame video rate. Our results demonstrate the feasibility of GEVI-based functional 2P imaging in mouse cortex. PMID:23868559
Virtual performer: single camera 3D measuring system for interaction in virtual space
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Taneji, Shoto
2006-10-01
The authors developed interaction media systems in 3D virtual space. In these systems, a musician virtually plays an instrument such as the theremin in the virtual space, or a performer puts on a show using a virtual character such as a puppet. This interactive virtual media system consists of image capture, measurement of the performer's position, motion detection and recognition, and video image synthesis using a personal computer. In this paper, we propose some applications of interaction media systems: a virtual musical instrument and a superimposed CG character. Moreover, this paper describes the method of measuring the positions of the performer, his/her head, and both eyes using a single camera.
Wang, Dingzhong; Tang, Wei; Wu, Xiaojie; Wang, Xinyi; Chen, Gengjia; Chen, Qiang; Li, Na; Liu, Feng
2012-08-21
Toehold-mediated strand displacement reaction (SDR) is first introduced to develop a simple quartz crystal microbalance (QCM) biosensor without an enzyme or label at normal temperature for highly selective and sensitive detection of single-nucleotide polymorphism (SNP) in the p53 tumor suppressor gene. A hairpin capture probe with an external toehold is designed and immobilized on the gold electrode surface of QCM. A successive SDR is initiated by the target sequence hybridization with the toehold domain and ends with the unfolding of the capture probe. Finally, the open-loop capture probe hybridizes with the streptavidin-coupled reporter probe as an efficient mass amplifier to enhance the QCM signal. The proposed biosensor displays remarkable specificity to target the p53 gene fragment against single-base mutant sequences (e.g., the largest discrimination factor is 63 to C-C mismatch) and high sensitivity with the detection limit of 0.3 nM at 20 °C. As the crucial component of the fabricated biosensor for providing the high discrimination capability, the design rationale of the capture probe is further verified by fluorescence sensing and atomic force microscopy imaging. Additionally, a recovery of 84.1% is obtained when detecting the target sequence in spiked HeLa cells lysate, demonstrating the feasibility of employing this biosensor in detecting SNPs in biological samples.
Demonstration of a single-wavelength spectral-imaging-based Thai jasmine rice identification
NASA Astrophysics Data System (ADS)
Suwansukho, Kajpanya; Sumriddetchkajorn, Sarun; Buranasiri, Prathan
2011-07-01
A single-wavelength spectral-imaging-based Thai jasmine rice breed identification is demonstrated. Our nondestructive identification approach relies on a combination of fluorescent imaging and simple image processing techniques. In particular, we apply simple image thresholding, blob filtering, and image subtracting processes to either a 545 nm or a 575 nm image in order to identify our desired Thai jasmine rice breed from others. Other key advantages include no waste product and fast identification time. In our demonstration, UVC light is used as the exciting light, a liquid crystal tunable optical filter is used as the wavelength selector, and a digital camera with 640 × 480 active pixels is used to capture the desired spectral image. Eight Thai rice breeds having similar size and shape are tested. Our experimental proof of concept shows that, by suitably applying image thresholding, blob filtering, and image subtracting processes to the selected fluorescent image, the Thai jasmine rice breed can be identified with measured false acceptance rates of <22.9% and <25.7% for spectral images at the 545 and 575 nm wavelengths, respectively. The measured identification time is 25 ms, showing high potential for real-time applications.
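The threshold, blob-filter, subtract chain described above might look like the following sketch. The synthetic image, the threshold, and the minimum blob size are assumed values, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

# Synthetic fluorescence image: low background, one grain-sized bright
# region, and one small bright speck standing in for noise.
rng = np.random.default_rng(2)
img = rng.uniform(0, 0.2, (64, 64))
img[10:18, 10:18] += 0.8                       # grain-sized blob
img[40:42, 40:42] += 0.8                       # small speck

mask = img > 0.5                               # image thresholding
labels, n = ndimage.label(mask)                # connected components
sizes = ndimage.sum(mask, labels, range(1, n + 1))
keep = np.isin(labels, 1 + np.flatnonzero(sizes >= 16))  # blob filtering

reference = np.zeros_like(img)                 # e.g. the other-band image
reference[10:18, 10:18] = 1.0
residual = keep.astype(float) - (reference > 0.5)        # image subtracting

print(int(keep.sum()), float(np.abs(residual).sum()))   # 64 kept pixels, 0.0 residual
```

The blob-size filter removes the speck, and the subtraction stage then flags any grain whose fluorescent signature differs from the reference breed.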
NASA Astrophysics Data System (ADS)
Kittle, David S.; Patil, Chirag G.; Mamelak, Adam; Hansen, Stacey; Perry, Jeff; Ishak, Laura; Black, Keith L.; Butte, Pramod V.
2016-03-01
Current surgical microscopes are limited in sensitivity for NIR fluorescence. Recent developments in tumor markers attached with NIR dyes require newer, more sensitive imaging systems with high resolution to guide surgical resection. We report on a small, single camera solution enabling advanced image processing opportunities previously unavailable for ultra-high sensitivity imaging of these agents. The system captures both visible reflectance and NIR fluorescence at 300 fps while displaying full HD resolution video at 60 fps. The camera head has been designed to easily mount onto the Zeiss Pentero microscope head for seamless integration into surgical procedures.
Orthoscopic real-image display of digital holograms.
Makowski, P L; Kozacki, T; Zaperty, W
2017-10-01
We present a practical solution for the long-standing problem of depth inversion in real-image holographic display of digital holograms. It relies on a field lens inserted in front of the spatial light modulator device addressed by a properly processed hologram. The processing algorithm accounts for pixel size and wavelength mismatch between capture and display devices in a way that prevents image deformation. Complete images of large dimensions are observable from one position with a naked eye. We demonstrate the method experimentally on a 10-cm-long 3D object using a single full-HD spatial light modulator, but it can supplement most holographic displays designed to form a real image, including circular wide angle configurations.
High-resolution, high-throughput imaging with a multibeam scanning electron microscope
EBERLE, AL; MIKULA, S; SCHALEK, R; LICHTMAN, J; TATE, ML KNOTHE; ZEIDLER, D
2015-01-01
Electron–electron interactions and detector bandwidth limit the maximal imaging speed of single-beam scanning electron microscopes. We use multiple electron beams in a single column and detect secondary electrons in parallel to increase the imaging speed by close to two orders of magnitude and demonstrate imaging for a variety of samples ranging from biological brain tissue to semiconductor wafers. Lay Description: The composition of our world and our bodies on the very small scale has always fascinated people, making them search for ways to make this visible to the human eye. Where light microscopes reach their resolution limit at a certain magnification, electron microscopes can go beyond. But their capability of visualizing extremely small features comes at the cost of a very small field of view. Some of the questions researchers seek to answer today deal with the ultrafine structure of brains, bones or computer chips. Capturing these objects with electron microscopes takes a lot of time – maybe even exceeding the life span of a human being – or requires new tools that do the job much faster. A new type of scanning electron microscope scans with 61 electron beams in parallel, acquiring 61 adjacent images of the sample in the time a conventional scanning electron microscope captures one of these images. In principle, the multibeam scanning electron microscope’s field of view is 61 times larger, and therefore coverage of the sample surface can be accomplished in less time. This enables researchers to think about large-scale projects, for example in the rather new field of connectomics. A very good introduction to imaging a brain at nanometre resolution can be found within course material from Harvard University on http://www.mcb80x.org/# as featured media entitled ‘connectomics’. PMID:25627873
Single Pixel Black Phosphorus Photodetector for Near-Infrared Imaging.
Miao, Jinshui; Song, Bo; Xu, Zhihao; Cai, Le; Zhang, Suoming; Dong, Lixin; Wang, Chuan
2018-01-01
Infrared imaging systems have a wide range of military and civil applications, and 2D nanomaterials have recently emerged as potential sensing materials that may outperform conventional ones such as HgCdTe, InGaAs, and InSb. As an example, 2D black phosphorus (BP) thin film has a thickness-dependent direct bandgap with low shot noise and noncryogenic operation for visible to mid-infrared photodetection. In this paper, the use of a single-pixel photodetector made with few-layer BP thin film for near-infrared imaging applications is demonstrated. The imaging is achieved by combining the photodetector with a digital micromirror device to encode and subsequently reconstruct the image based on a compressive sensing algorithm. Stationary images of a near-infrared laser spot (λ = 830 nm) with up to 64 × 64 pixels are captured using this single-pixel BP camera with 2000 measurements, about half of the total number of pixels. The imaging platform demonstrated in this work circumvents the grand challenges of scalable BP material growth for photodetector array fabrication and shows the efficacy of utilizing the outstanding performance of the BP photodetector for future high-speed infrared camera applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Combined optical coherence tomography and hyper-spectral imaging using a double clad fiber coupler
NASA Astrophysics Data System (ADS)
Guay-Lord, Robin; Lurie, Kristen L.; Attendu, Xavier; Mageau, Lucas; Godbout, Nicolas; Ellerbee Bowden, Audrey K.; Strupler, Mathias; Boudoux, Caroline
2016-03-01
This proceedings paper presents the combination of Optical Coherence Tomography (OCT) and Hyper-Spectral Imaging (HSI) using a double-clad optical fiber. The single-mode core of the fiber is used to transmit OCT signals, while the cladding, with its large collection area, provides an efficient way to capture the reflectance spectrum of the sample. The combination of both methods enables three-dimensional acquisition of sample morphology with OCT, enhanced by the molecular information contained in its hyper-spectral image. We believe that the combination of these techniques could result in endoscopes with enhanced tissue identification capability.
3D digital image correlation using single color camera pseudo-stereo system
NASA Astrophysics Data System (ADS)
Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang
2017-10-01
Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip and without reducing the spatial resolution. In addition, as in the conventional 3D-DIC system, the center of the two views lies at the center of the CCD chip, which minimizes the image distortion relative to the conventional pseudo-stereo system. The two overlapped views in the CCD are separated in the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
Calibration for single multi-mode fiber digital scanning microscopy imaging system
NASA Astrophysics Data System (ADS)
Yin, Zhe; Liu, Guodong; Liu, Bingguo; Gan, Yu; Zhuang, Zhitao; Chen, Fengdong
2015-11-01
Single multimode fiber (MMF) digital scanning imaging systems are a development trend in modern endoscopy. We concentrate on the calibration method for the imaging system. The calibration method comprises two processes: forming scanning focused spots and calibrating the couple factors, which vary with position. An adaptive parallel coordinate (APC) algorithm is adopted to form the focused spots at the multimode fiber (MMF) output. Compared with other algorithms, APC has several merits: high speed, a small amount of calculation, and no iterations. The ratio of the optical power captured by the MMF to the intensity of the focused spot is called the couple factor. We set up the calibration experimental system to form the scanning focused spots and calculate the couple factors for different object positions. The experimental results show that the couple factor is higher in the center than at the edge.
NASA Astrophysics Data System (ADS)
Rios, Laura
A chemical reaction is fundamentally initiated by the restructuring of a chemical bond. Chemical reactions occur so quickly that their exact trajectory is unknown. To unlock the secret, one would first seek to know the inner workings of a single molecule, and therein, a single chemical bond. However, the task is no small feat. Single molecule studies require the exquisite spatial resolution afforded by relatively new technologies, and ultrafast laser techniques. The overarching theme of my dissertation is the path towards achieving the space-time limit in chemistry: namely, the ability to record the structural changes of individual molecules during a reaction, one event at a time. A scanning tunneling microscope (STM) is used to image the molecules and manipulate their electronic environments. STM has the capacity to create topographical images of molecules with Angstrom (10^-10 m, the size of an atom) resolution, and can also probe the molecule electronically by use of a tunneling current (It). STM images reflect changes in the potential energy surface (PES), and help us understand how molecules interact with surfaces and each other, thereby accessing the fundamental problem of catalysis and chemical reactions. In addition to seeing the molecule, we use Raman spectroscopy to track its molecular changes with chemical specificity. I combine these experimental tools to investigate tip-enhanced Raman spectra (TERS) of single molecules within the confines of an STM. These methods were used to report the conformational change of a single azobenzene-thiol derivative molecule. Although we were able to definitively isolate a single-molecule signature, imaging the single molecule in real space and time proved elusive. Additionally, I report on a conductance switch based on the observable change of the topographic STM images of a radical anion mediated by the spin flip of a single electron on a single molecule.
This is effectively the smallest achievable architecture of molecular electronics, negating the need for heat dissipation in small systems. A related work found how physisorption potentials of molecules on metals could be experimentally visualized and modeled by STM, thus allowing us to use the STM tip as a driver for molecular motion on surfaces. Throughout this work, we noted that a dominant feature of single-molecule chemistry is intensity and spectral fluctuations that are difficult to characterize, as the molecule contorts wildly when it experiences distinct and powerful electromagnetic fields and field gradients. This much is evident in the last experiment, and chapter, of this thesis. Raman spectra associated with cobalt(II) tetraphenyl porphyrin (CoTPP) axially coordinated with bipyridyl ethylene (BPE) were captured with Raman mapping at nanometer resolution. However, the stochastic appearance of Raman lines and low-resolution images made it difficult to ascertain which molecule we captured. The preliminary results as well as follow-up control experiments are discussed. While each experiment constitutes in and of itself an important, individual contribution, their sum establishes the principles of seeing single-molecule chemistry.
A 176×144 148dB adaptive tone-mapping imager
NASA Astrophysics Data System (ADS)
Vargas-Sierra, S.; Liñán-Cembrano, G.; Rodríguez-Vázquez, A.
2012-03-01
This paper presents a 176 × 144 (QCIF) HDR image sensor where visual information is simultaneously captured and adaptively compressed by means of an in-pixel tone mapping scheme. The tone mapping curve (TMC) is calculated from the histogram of a time-stamp image captured in the previous frame, which serves as a probability indicator of the distribution of illuminations within the present frame. The chip produces 7-bit/pixel images that can map illuminations from 311 μlux to 55.3 klux in a single frame in a way that each pixel decides when to stop observing photocurrent integration, with extreme values captured at 8 s and 2.34 μs respectively. Pixel size is 33 × 33 μm2, which includes a 3 × 3 μm2 Nwell-Psubstrate photodiode and an autozeroing technique for establishing the reset voltage, which cancels most of the offset contributions created by the analog processing circuitry. Dark signal (10.8 mV/s) effects in the final image are attenuated by an automatic programming of the DAC top voltage. Measured characteristics are: sensitivity 5.79 V/lux·s, FWC 12.2 ke-, conversion factor 129 e-/DN, and read noise 25 e-. The chip has been designed in the 0.35 μm OPTO technology from Austriamicrosystems (AMS). Due to the focal-plane operation, this architecture is especially well suited to implementation in a 3D (vertical stacking) technology using per-pixel TSVs.
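A TMC built from the previous frame's histogram is essentially a scaled cumulative distribution: frequently occurring illuminations get more of the 7-bit output range. A sketch with an assumed bin count and synthetic log-normal illumination data (not the chip's actual circuit behavior):

```python
import numpy as np

rng = np.random.default_rng(3)
prev = rng.lognormal(mean=0.0, sigma=2.0, size=10000)   # HDR illuminations

# Histogram of the previous frame over log-spaced bins, then its CDF
# scaled to 7-bit codes gives a monotone tone-mapping curve.
edges = np.logspace(np.log10(prev.min()), np.log10(prev.max()), 129)
hist, edges = np.histogram(prev, bins=edges)
cdf = np.cumsum(hist) / hist.sum()
tmc = np.round(cdf * 127).astype(np.uint8)              # 7-bit output codes

# Apply the TMC to the "current" frame by bin lookup.
cur = rng.lognormal(mean=0.0, sigma=2.0, size=10000)
idx = np.clip(np.searchsorted(edges, cur) - 1, 0, 127)
mapped = tmc[idx]
print(mapped.min(), mapped.max())
```

Because the curve follows the illumination statistics, the compression adapts frame to frame instead of being fixed like a gamma curve.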
Lyu, Tao; Yao, Suying; Nie, Kaiming; Xu, Jiangtao
2014-11-17
A 12-bit high-speed column-parallel two-step single-slope (SS) analog-to-digital converter (ADC) for CMOS image sensors is proposed. The proposed ADC employs a single ramp voltage and multiple reference voltages, and the conversion is divided into a coarse phase and a fine phase to improve the conversion rate. An error calibration scheme is proposed to correct errors caused by offsets among the reference voltages. The digital-to-analog converter (DAC) used for the ramp generator is based on a split-capacitor array with an attenuation capacitor. An analysis of the DAC's linearity performance versus capacitor mismatch and parasitic capacitance is presented. A prototype 1024 × 32 Time Delay Integration (TDI) CMOS image sensor with the proposed ADC architecture has been fabricated in a standard 0.18 μm CMOS process. The proposed ADC has an average power consumption of 128 μW and a conversion rate 6 times higher than that of the conventional SS ADC. A high-quality image, captured at a line rate of 15.5 k lines/s, shows that the proposed ADC is suitable for high-speed CMOS image sensors.
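The coarse/fine split can be illustrated with a behavioral model (not the paper's circuit, and the 4/8-bit split is an assumed example): a coarse compare picks one of 2^4 reference segments, then a fine ramp resolves 2^8 levels inside it, so 12 bits cost 2^4 + 2^8 steps instead of 2^12.

```python
# Behavioral sketch of a two-step single-slope conversion.
def two_step_ss(vin, vref=1.0, coarse_bits=4, fine_bits=8):
    seg = vref / 2 ** coarse_bits
    coarse = min(int(vin / seg), 2 ** coarse_bits - 1)   # coarse compare
    lsb = seg / 2 ** fine_bits
    fine = min(int((vin - coarse * seg) / lsb), 2 ** fine_bits - 1)
    code = (coarse << fine_bits) | fine                  # 12-bit result
    cycles = 2 ** coarse_bits + 2 ** fine_bits           # vs 2**12 for plain SS
    return code, cycles

code, cycles = two_step_ss(0.5)
print(code, cycles)   # 2048 272: mid-scale code in 272 cycles instead of 4096
```

The cycle count also shows why offsets among the reference voltages matter: an error in the coarse segment boundary shifts the whole fine range, which is what the paper's calibration scheme corrects.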
Fusion of light-field and photogrammetric surface form data
NASA Astrophysics Data System (ADS)
Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard K.
2017-08-01
Photogrammetry-based systems are able to produce 3D reconstructions of an object given a set of images taken from different orientations. In this paper, we implement a light-field camera within a photogrammetry system in order to capture additional depth information alongside the photogrammetric point cloud. Compared to a traditional camera that only captures the intensity of the incident light, a light-field camera also provides angular information for each pixel. In principle, this additional information allows 2D images to be reconstructed at a given focal plane, and hence a depth map can be computed. Through the fusion of light-field and photogrammetric data, we show that it is possible to improve the measurement uncertainty for a millimetre-scale 3D object compared to that of the individual systems. By imaging a series of test artefacts from various positions, individual point clouds were produced from depth-map information and from triangulation of corresponding features between images. Using both measurements, data fusion methods were implemented in order to provide a single point cloud with reduced measurement uncertainty.
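Why fusing two independent measurements reduces uncertainty can be seen with the standard inverse-variance weighting rule. This is a generic data-fusion sketch, not the paper's specific method; the variance values would come from calibrating each system and are illustrative here.

```python
def fuse_points(p_photo, var_photo, p_lf, var_lf):
    """Inverse-variance weighted fusion of two independent estimates of
    the same coordinate (e.g. photogrammetry vs light-field depth).
    The fused variance is always below the smaller input variance."""
    w1, w2 = 1.0 / var_photo, 1.0 / var_lf
    fused = (w1 * p_photo + w2 * p_lf) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Two estimates of one depth coordinate (mm); the fused variance shrinks.
fused, fused_var = fuse_points(1.00, 0.04, 1.20, 0.01)
```

Applied per corresponding point, this yields a single point cloud whose per-point uncertainty is lower than either input cloud's, which is the qualitative result the paper reports.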
Image Alignment for Multiple Camera High Dynamic Range Microscopy.
Eastwood, Brian S; Childs, Elisabeth C
2012-01-09
This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
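The key idea above, that calibrated exposure-normalised ("radiant power") images make content comparable across exposures, can be demonstrated with a toy alignment. Here a simple phase-correlation shift estimate stands in for the paper's feature-descriptor matching; the synthetic scene, exposure values, and function names are illustrative.

```python
import numpy as np

def to_radiance(img, exposure_time):
    # With a linear (or calibrated) camera response, dividing by exposure
    # time yields radiant-power images comparable across exposures.
    return img.astype(np.float64) / exposure_time

def translation_by_phase_correlation(a, b):
    """Estimate the integer (row, col) shift to apply to b (via np.roll)
    to align it with a.  A stand-in for descriptor-based alignment."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12
    corr = np.fft.ifft2(F).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(d if d <= s // 2 else d - s for d, s in zip(idx, corr.shape))

# Synthetic pair: same scene, 4x exposure difference, shifted by (3, 5).
rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 1.0, (64, 64))
short = scene * 1.0                                  # exposure time 1.0
long_ = np.roll(scene, (3, 5), axis=(0, 1)) * 4.0    # exposure time 4.0
shift = translation_by_phase_correlation(to_radiance(short, 1.0),
                                         to_radiance(long_, 4.0))
```

Correlating the raw intensity images would be confounded by the 4x brightness difference; on the radiance images the shift is recovered exactly.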
Single shot laser speckle based 3D acquisition system for medical applications
NASA Astrophysics Data System (ADS)
Khan, Danish; Shirazi, Muhammad Ayaz; Kim, Min Young
2018-06-01
The state-of-the-art techniques used by medical practitioners to extract the three-dimensional (3D) geometry of different body parts, such as laser line profiling or structured light scanning, require a series of images/frames. Movement of the patients during the scanning process often leads to inaccurate measurements due to sequential image acquisition. Single-shot structured light techniques are robust to motion, but their prevalent challenges are low point density and algorithm complexity. In this research, a single-shot 3D measurement system is presented that extracts the 3D point cloud of human skin by projecting a laser speckle pattern and using a single pair of images captured by two synchronized cameras. In contrast to conventional laser speckle 3D measurement systems that realize stereo correspondence by digital correlation of projected speckle patterns, the proposed system employs the KLT tracking method to locate the corresponding points. The 3D point cloud contains no outliers and sufficient quality of 3D reconstruction is achieved. The 3D shape acquisition of human body parts validates the potential application of the proposed system in the medical industry.
Least squares restoration of multichannel images
NASA Technical Reports Server (NTRS)
Galatsanos, Nikolas P.; Katsaggelos, Aggelos K.; Chin, Roland T.; Hillery, Allen D.
1991-01-01
Multichannel restoration using both within- and between-channel deterministic information is considered. A multichannel image is a set of image planes that exhibit cross-plane similarity. Existing optimal restoration filters for single-plane images yield suboptimal results when applied to multichannel images, since between-channel information is not utilized. Multichannel least squares restoration filters are developed using the set theoretic and the constrained optimization approaches. A geometric interpretation of the estimates of both filters is given. Color images (three-channel imagery with red, green, and blue components) are considered. Constraints that capture the within- and between-channel properties of color images are developed. Issues associated with the computation of the two estimates are addressed. A spatially adaptive, multichannel least squares filter that utilizes local within- and between-channel image properties is proposed. Experiments using color images are described.
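A single-channel, 1D constrained least squares restoration illustrates the building block that the multichannel filters generalise (the paper stacks channels and adds cross-channel terms to the same normal equations). The circular blur kernel, Laplacian regulariser, and λ below are illustrative choices, not the paper's.

```python
import numpy as np

# Constrained least squares restoration: minimise ||y - Hx||^2 + lam*||Cx||^2.
N = 64
rng = np.random.default_rng(3)
x = rng.uniform(0, 1, N)                        # original signal
h = np.zeros(N); h[:3] = 1 / 3                  # 3-tap circular blur
H = np.array([np.roll(h, k) for k in range(N)]).T   # circulant blur matrix
y = H @ x + 0.01 * rng.standard_normal(N)       # blurred + noise

c = np.zeros(N); c[0], c[1], c[-1] = -2, 1, 1   # circular Laplacian (smoothness)
C = np.array([np.roll(c, k) for k in range(N)]).T
lam = 0.001
x_hat = np.linalg.solve(H.T @ H + lam * C.T @ C, H.T @ y)
```

The multichannel extension replaces `H` and `C` with block matrices whose off-diagonal blocks encode between-channel (e.g. red-green-blue) correlations, which is exactly the information a per-channel filter discards.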
Witmer, Matthew T; Parlitsis, George; Patel, Sarju; Kiss, Szilárd
2013-01-01
Purpose To compare ultra-widefield fluorescein angiography imaging using the Optos® Optomap® and the Heidelberg Spectralis® noncontact ultra-widefield module. Methods Five patients (ten eyes) underwent ultra-widefield fluorescein angiography using the Optos® panoramic P200Tx imaging system and the noncontact ultra-widefield module in the Heidelberg Spectralis® HRA+OCT system. The images were obtained as a single, nonsteered shot centered on the macula. The area of imaged retina was outlined and quantified using Adobe® Photoshop® C5 software. The total area and the area within each of four visualized quadrants were calculated and compared between the two imaging modalities. Three masked reviewers also evaluated each quadrant per eye (40 total quadrants) to determine which modality imaged the retinal vasculature most peripherally. Results Optos® imaging captured a total retinal area averaging 151,362 pixels (range 116,998-205,833 pixels), while the area captured using the Heidelberg Spectralis® averaged 101,786 pixels (range 73,424-116,319 pixels) (P = 0.0002). The average area per individual quadrant imaged by Optos® versus the Heidelberg Spectralis® was 32,373 vs 32,789 pixels superiorly (P = 0.91), 24,665 vs 26,117 pixels inferiorly (P = 0.71), 47,948 vs 20,645 pixels temporally (P = 0.0001), and 46,374 vs 22,234 pixels nasally (P = 0.0001). The Heidelberg Spectralis® was able to image the superior and inferior retinal vasculature to a more distal point than the Optos® in nine of ten eyes (18 of 20 quadrants). The Optos® was able to image the nasal and temporal retinal vasculature to a more distal point than the Heidelberg Spectralis® in ten of ten eyes (20 of 20 quadrants). Conclusion The Optos® and Heidelberg Spectralis® ultra-widefield imaging systems are both excellent modalities for ultra-widefield fluorescein angiography, providing views of the peripheral retina.
On a single nonsteered image, the Optos® Optomap® covered a significantly larger total retinal surface area, with greater image variability, than did the Heidelberg Spectralis® ultra-widefield module. The Optos® captured an appreciably wider view of the retina temporally and nasally, albeit with peripheral distortion, while the ultra-widefield Heidelberg Spectralis® module was able to image the superior and inferior retinal vasculature more peripherally. The clinical significance of these findings as well as the area imaged on steered montaged images remains to be determined. PMID:23458976
NASA Astrophysics Data System (ADS)
Welch, Kyle; Kumar, Santosh; Hong, Jiarong; Cheng, Xiang
2017-11-01
Understanding the 3D flow induced by microswimmers is paramount to revealing how they interact with each other and their environment. While many studies have measured 2D projections of flow fields around single microorganisms, reliable 3D measurement remains elusive due to the difficulty of imaging fast 3D fluid flows at submicron spatial and millisecond temporal scales. Here, we present a precision measurement of the 3D flow field induced by motile planktonic algae cells, Chlamydomonas reinhardtii. We manually capture and hold stationary a single alga using a micropipette, while still allowing it to beat its flagella in the breaststroke pattern characteristic of C. reinhardtii. The 3D flow field around the alga is then tracked by employing fast holographic imaging of 1 μm tracer particles, which leads to a spatial resolution of 100 nm along the optical axis and 40 nm in the imaging plane normal to the optical axis. We image the flow around a single alga continuously through thousands of flagellar beat cycles and aggregate the data into a complete 3D flow field. Our study demonstrates the power of holography in imaging fast, complex microscopic flow structures and provides crucial information for understanding the detailed locomotion of swimming microorganisms.
A quantitative damage imaging technique based on enhanced CCRTM for composite plates using 2D scan
NASA Astrophysics Data System (ADS)
He, Jiaze; Yuan, Fuh-Gwo
2016-10-01
A two-dimensional (2D) non-contact areal scan system was developed to image and quantify impact damage in a composite plate using an enhanced zero-lag cross-correlation reverse-time migration (E-CCRTM) technique. The system comprises a single piezoelectric wafer mounted on the composite plate and a laser Doppler vibrometer (LDV) for scanning a region in the vicinity of the PZT to capture the scattered wavefield. The proposed damage imaging technique takes into account the amplitude, phase, geometric spreading, and all of the frequency content of the Lamb waves propagating in the plate; thus, the reflectivity coefficients of the delamination can be calculated and potentially related to damage severity. Comparisons are made in terms of damage imaging quality between 2D areal scans and 1D line scans, as well as between the proposed and existing imaging conditions. The experimental results show that the 2D E-CCRTM performs robustly when imaging and quantifying impact damage in large-scale composites using a single PZT actuator with a nearby areal scan using LDV.
An enhanced CCRTM (E-CCRTM) damage imaging technique using a 2D areal scan for composite plates
NASA Astrophysics Data System (ADS)
He, Jiaze; Yuan, Fuh-Gwo
2016-04-01
A two-dimensional (2-D) non-contact areal scan system was developed to image and quantify impact damage in a composite plate using an enhanced zero-lag cross-correlation reverse-time migration (E-CCRTM) technique. The system comprises a single piezoelectric actuator mounted on the composite plate and a laser Doppler vibrometer (LDV) for scanning a region to capture the scattered wavefield in the vicinity of the PZT. The proposed damage imaging technique takes into account the amplitude, phase, geometric spreading, and all of the frequency content of the Lamb waves propagating in the plate; thus, the reflectivity coefficients of the delamination can be calculated and potentially related to damage severity. Comparisons are made in terms of damage imaging quality between 2-D areal scans and linear scans as well as between the proposed and existing imaging conditions. The experimental results show that the 2-D E-CCRTM performs robustly when imaging and quantifying impact damage in large-scale composites using a single PZT actuator with a nearby areal scan using LDV.
An efficient multiple exposure image fusion in JPEG domain
NASA Astrophysics Data System (ADS)
Hebbalaguppe, Ramya; Kakarala, Ramakrishna
2012-01-01
In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings like ISO sensitivity, exposure time, and aperture for low-light image capture results in noise amplification, motion blur, and reduction of depth-of-field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposed images, image fusion, artifact removal, and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image-processing engine. The artifact-removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience available for JPEG.
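The boosting-then-fusion pipeline can be sketched in the pixel domain (the paper works on JPEG blocks, so this is a simplified stand-in; the sigmoid gain, weighting kernel, and function names are illustrative, not the paper's).

```python
import numpy as np

def sigmoid_boost(img, gain=8.0, midpoint=0.5):
    """Sigmoidal boosting of a normalised short-exposure image: lifts
    mid-tones while keeping values in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-gain * (img - midpoint)))

def fuse(images, exposure_times):
    """Per-pixel weighted fusion: weights favour well-exposed pixels
    (near mid-grey) and longer exposures (higher SNR)."""
    images = [np.asarray(im, dtype=np.float64) for im in images]
    weights = [np.exp(-((im - 0.5) ** 2) / 0.08) * t
               for im, t in zip(images, exposure_times)]
    wsum = np.sum(weights, axis=0) + 1e-12
    return np.sum([w * im for w, im in zip(weights, images)], axis=0) / wsum

# Short exposure boosted before fusion with a longer exposure.
short = sigmoid_boost(np.full((4, 4), 0.4))
long_ = np.full((4, 4), 0.6)
hdr = fuse([short, long_], [1.0, 4.0])
```

In the JPEG-domain version the same per-region weighting is applied macroblock by macroblock, which is what allows the single-pass, single-macroblock memory footprint.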
Image analysis driven single-cell analytics for systems microbiology.
Balomenos, Athanasios D; Tsakanikas, Panagiotis; Aspridou, Zafiro; Tampakaki, Anastasia P; Koutsoumanis, Konstantinos P; Manolakos, Elias S
2017-04-04
Time-lapse microscopy is an essential tool for capturing and correlating bacterial morphology and gene expression dynamics at single-cell resolution. However, state-of-the-art computational methods are limited in the complexity of cell movies they can analyze and in their degree of automation. The proposed Bacterial image analysis driven Single Cell Analytics (BaSCA) computational pipeline addresses these limitations, thus enabling high-throughput systems microbiology. BaSCA can segment and track multiple bacterial colonies and single cells as they grow and divide over time (cell segmentation and lineage tree construction) to give rise to dense communities with thousands of interacting cells in the field of view. It combines advanced image processing and machine learning methods to deliver very accurate bacterial cell segmentation and tracking (F-measure over 95%), even when processing images of imperfect quality with several overcrowded colonies in the field of view. In addition, BaSCA extracts on the fly a plethora of single-cell properties, which are organized into a database summarizing the analysis of the cell movie. We present alternative ways to analyze and visually explore the spatiotemporal evolution of single-cell properties in order to understand trends and epigenetic effects across cell generations. The robustness of BaSCA is demonstrated across different imaging modalities and microscopy types. BaSCA can be used to analyze cell movies accurately and efficiently, both at high resolution (single-cell level) and at large scale (communities with many dense colonies), as needed to shed light on, e.g., how bacterial community effects and epigenetic information transfer play a role in phenomena important for human health, such as biofilm formation and the emergence of persisters. Moreover, it enables studying the role of single-cell stochasticity without losing sight of the community effects that may drive it.
NASA Astrophysics Data System (ADS)
Kim, Moon Sung; Lee, Kangjin; Chao, Kaunglin; Lefcourt, Alan; Cho, Byung-Kwan; Jun, Won
We developed a push-broom, line-scan imaging system capable of simultaneous measurement of reflectance and fluorescence. The system allows multitasking inspection for quality and safety attributes of apples due to its ability to capture fluorescence and reflectance simultaneously and its selectivity in multispectral bands. A multitasking image-based inspection system is thus suggested for online applications, in which a single imaging device can address a multitude of both safety and quality inspection needs. The presented multitask inspection approach may provide an economically viable means for food processing industries to meet their dynamic and specific online inspection and sorting needs.
Haston, Elspeth; Cubey, Robert; Pullan, Martin; Atkins, Hannah; Harris, David J
2012-01-01
Digitisation programmes in many institutes frequently involve disparate and irregular funding, diverse selection criteria and scope, with different members of staff managing and operating the processes. These factors have influenced the decision at the Royal Botanic Garden Edinburgh to develop an integrated workflow for the digitisation of herbarium specimens which is modular and scalable, enabling a single overall workflow to be used for all digitisation projects. This integrated workflow comprises three principal elements: a specimen workflow, a data workflow and an image workflow. The specimen workflow is strongly linked to curatorial processes which will impact on the prioritisation, selection and preparation of the specimens. The importance of including a conservation element within the digitisation workflow is highlighted. The data workflow includes the concept of three main categories of collection data: label data, curatorial data and supplementary data. It is shown that each category of data has its own properties which influence the timing of data capture within the workflow. Software has been developed for the rapid capture of curatorial data, and optical character recognition (OCR) software is being used to increase the efficiency of capturing label data and supplementary data. The large number and size of the images has necessitated the inclusion of automated systems within the image workflow.
Novel snapshot hyperspectral imager for fluorescence imaging
NASA Astrophysics Data System (ADS)
Chandler, Lynn; Chandler, Andrea; Periasamy, Ammasi
2018-02-01
Hyperspectral imaging has emerged as a new technique for the identification and classification of biological tissue. Benefitting from recent developments in sensor technology, the new class of hyperspectral imagers can capture entire hypercubes in a single shot and shows great potential for real-time imaging in the biomedical sciences. This paper explores the use of a SnapShot imager in fluorescence imaging via microscopy for the first time. Utilizing the latest imaging sensor, the Snapshot imager is both compact and attachable via C-mount to any commercially available light microscope. Using this setup, fluorescence hypercubes of several cells were generated, containing both spatial and spectral information. The fluorescence images were acquired in a single shot over the full emission range from visible to near infrared (VIS-IR). The paper presents hypercube images obtained from example tissues (475-630 nm). This study demonstrates the potential for real-time monitoring applications in cell biology and biomedicine.
Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei
2012-12-01
Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
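The mapping at the heart of the method, taking symmetric positive definite (SPD) covariance matrices into a vector space via the matrix logarithm, can be sketched directly. This is the standard log-Euclidean construction, not the paper's full subspace-learning algorithm; the toy matrices are illustrative.

```python
import numpy as np

def spd_logm(M):
    """Matrix logarithm of a symmetric positive definite matrix via
    eigendecomposition: V diag(log w) V^T."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean distance: map SPD matrices (e.g. covariance
    descriptors of image patches) into the vector space of symmetric
    matrices with the matrix log, then use the Frobenius norm there."""
    return np.linalg.norm(spd_logm(A) - spd_logm(B))

# Toy covariance descriptors of two image regions
A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.eye(2)
d = log_euclidean_distance(A, B)
```

Once the matrices live in this flat log-domain, ordinary linear operations (means, incremental subspace updates) become valid, which is what enables the incremental subspace learning the abstract describes.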
Video capture virtual reality as a flexible and effective rehabilitation tool
Weiss, Patrice L; Rand, Debbie; Katz, Noomi; Kizony, Rachel
2004-01-01
Video capture virtual reality (VR) uses a video camera and software to track movement in a single plane without the need to place markers on specific bodily locations. The user's image is thereby embedded within a simulated environment such that it is possible to interact with animated graphics in a completely natural manner. Although this technology first became available more than 25 years ago, it is only within the past five years that it has been applied in rehabilitation. The objective of this article is to describe the way this technology works, to review its assets relative to other VR platforms, and to provide an overview of some of the major studies that have evaluated the use of video capture technologies for rehabilitation. PMID:15679949
Programmable Real-time Clinical Photoacoustic and Ultrasound Imaging System
Kim, Jeesu; Park, Sara; Jung, Yuhan; Chang, Sunyeob; Park, Jinyong; Zhang, Yumiao; Lovell, Jonathan F.; Kim, Chulhong
2016-01-01
Photoacoustic imaging has attracted interest for its capacity to capture functional spectral information with high spatial and temporal resolution in biological tissues. Several photoacoustic imaging systems have been commercialized recently, but they are variously limited by non-clinically relevant designs, immobility, single anatomical utility (e.g., breast only), or non-programmable interfaces. Here, we present a real-time clinical photoacoustic and ultrasound imaging system which consists of an FDA-approved clinical ultrasound system integrated with a portable laser. The system is completely programmable, has an intuitive user interface, and can be adapted for different applications by switching handheld imaging probes with various transducer types. The customizable photoacoustic and ultrasound imaging system is intended to meet the diverse needs of medical researchers performing both clinical and preclinical photoacoustic studies. PMID:27731357
Automated camera-phone experience with the frequency of imaging necessary to capture diet.
Arab, Lenore; Winter, Ashley
2010-08-01
Camera-enabled cell phones provide an opportunity to strengthen dietary recall through automated imaging of foods eaten during a specified period. To explore the frequency of imaging needed to capture all foods eaten, we examined the number of images of individual foods consumed in a pilot study of automated imaging using camera phones set to an image-capture frequency of one snapshot every 10 seconds. Food images were tallied from 10 young adult subjects who wore the phone continuously during the work day and consented to share their images. Based on the number of images received for each eating experience, the pilot data suggest that automated capturing of images at a frequency of once every 10 seconds is adequate for recording foods consumed during regular meals, whereas a greater frequency of imaging is necessary to capture snacks and beverages eaten quickly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, Andrew J.; Miller, Brian W.; Robinson, Sean M.
Imaging technology is generally considered too invasive for arms control inspections due to the concern that it cannot properly secure sensitive features of the inspected item. However, this same sensitive information, which could include direct information on the form and function of the items under inspection, could enable robust arms control inspections. The single-pixel X-ray imager (SPXI) is introduced as a method to make such inspections, capturing the salient spatial information of an object in a secure manner while never forming an actual image. The method builds on the theory of compressive sensing and the single-pixel optical camera. The performance of the system is quantified using simulated inspections of simple objects. Measures of the robustness and security of the method are introduced and used to determine how robust and secure such an inspection would be. In particular, it is found that an inspection with low noise (<1%) and high undersampling (>256×) exhibits high robustness and security.
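The compressive-sensing measurement model behind any single-pixel camera can be sketched numerically. This is the generic textbook formulation with an ISTA solver, not SPXI's secure inspection protocol; the problem sizes, mask design, and λ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Scene": a 64-pixel image with a few bright spots, flattened to a vector.
n, m = 64, 32                      # 32 single-pixel measurements (undersampled)
x = np.zeros(n)
x[[5, 17, 40]] = [1.0, 0.7, 0.5]

# Each measurement: the scene modulated by one random +/-1 mask, summed
# onto a single detector pixel.
Phi = rng.choice([-1.0, 1.0], size=(m, n))
y = Phi @ x

# Sparse recovery with ISTA (iterative shrinkage-thresholding).
L = np.linalg.norm(Phi, 2) ** 2    # Lipschitz constant of the data term
lam = 0.01
x_hat = np.zeros(n)
for _ in range(5000):
    grad = Phi.T @ (Phi @ x_hat - y)
    z = x_hat - grad / L
    x_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
```

The salient spatial information (where the bright spots are) is recovered from half as many measurements as pixels, while no single measurement resembles an image of the scene, the property the secure-inspection argument rests on.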
Bright Lu2O3:Eu thin-film scintillators for high-resolution radioluminescence microscopy
Sengupta, Debanti; Miller, Stuart; Marton, Zsolt; Chin, Frederick; Nagarkar, Vivek
2015-01-01
We investigate the performance of a new thin-film Lu2O3:Eu scintillator for single-cell radionuclide imaging. Imaging the metabolic properties of heterogeneous cell populations in real time is an important challenge with clinical implications. We have developed an innovative technique called radioluminescence microscopy, to quantitatively and sensitively measure radionuclide uptake in single cells. The most important component of this technique is the scintillator, which converts the energy released during radioactive decay into luminescent signals. The sensitivity and spatial resolution of the imaging system depend critically on the characteristics of the scintillator, i.e. the material used and its geometrical configuration. Scintillators fabricated using conventional methods are relatively thick, and therefore do not provide optimal spatial resolution. We compare a thin-film Lu2O3:Eu scintillator to a conventional 500 μm thick CdWO4 scintillator for radioluminescence imaging. Despite its thinness, the unique scintillation properties of the Lu2O3:Eu scintillator allow us to capture single positron decays with over fourfold higher sensitivity, a significant achievement. The thin-film Lu2O3:Eu scintillators also yield radioluminescence images where individual cells appear smaller and better resolved on average than with the CdWO4 scintillators. Coupled with the thin-film scintillator technology, radioluminescence microscopy can yield valuable and clinically relevant data on the metabolism of single cells. PMID:26183115
ERIC Educational Resources Information Center
Livitz, Gennady
2011-01-01
Color is a complex and rich perceptual phenomenon that relates physical properties of light to certain perceptual qualia associated with vision. Hering's opponent color theory, widely regarded as capturing the most fundamental aspects of color phenomenology, suggests that certain unique hues are mutually exclusive as components of a single color.…
Land-markings: 12 Journeys through 9/11 Living Memorials [DVD
Erika S. Svendsen; Lindsay K. Campbell; Phu Duong
2007-01-01
The Land-markings DVD was created from a multimedia exhibition of 12 digitally authored journeys through more than 700 living memorials nationwide. Land-markings captures stories and images of how we use the landscape as a way to remember people, places, and events. Ranging from single tree plantings, to the creation of new parks, to the restoration of existing forests...
NASA Astrophysics Data System (ADS)
Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji
2012-03-01
We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.
A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.
Qian, Shuo; Sheng, Yang
2011-11-01
Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study presents a novel photogrammetry system that can acquire multi-angle head images simultaneously from a single camera position. By aligning two planar mirrors at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. It is found that the elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after measurement of the calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.
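Each mirror view acts as a virtual camera, so the 3D reconstruction step reduces to standard multi-view triangulation. Below is a minimal two-view linear (DLT) triangulation sketch with an assumed toy calibration, two views rather than the paper's seven, and illustrative intrinsics.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices (a mirror view acts as a virtual
    camera); uv1, uv2: corresponding pixel coordinates."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null-space solution, homogeneous
    return X[:3] / X[3]

# Toy example: two calibrated views of an "electrode" at (0.1, 0.2, 5.0)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.1, 0.2, 5.0])
uv1 = (P1 @ np.append(X_true, 1.0)); uv1 = uv1[:2] / uv1[2]
uv2 = (P2 @ np.append(X_true, 1.0)); uv2 = uv2[:2] / uv2[2]
X_rec = triangulate(P1, P2, uv1, uv2)
```

With seven mirror views, the same linear system simply gains two rows per additional view, over-determining the electrode position and improving accuracy.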
NASA Astrophysics Data System (ADS)
Chatterjee, Amit; Bhatia, Vimal; Prakash, Shashi
2017-08-01
A fingerprint is a unique, unalterable and easily collected biometric of a human being. Although it is a 3D biological characteristic, traditional methods are designed to provide only a 2D image. This touch-based mapping of a 3D shape to a 2D image loses information and leads to nonlinear distortions. Moreover, as only topographic details are captured, conventional systems are potentially vulnerable to spoofing materials (e.g. artificial fingers, dead fingers, false prints, etc.). In this work, we demonstrate an anti-spoof touchless 3D fingerprint detection system using a combination of single-shot fringe projection and biospeckle analysis. For fingerprint detection using fringe projection, light from a low-power LED source illuminates a finger through a sinusoidal grating. The fringe pattern, modulated by features on the fingertip, is captured using a CCD camera. Frequency filtering based on the Fourier-transform method is used for the reconstruction of the 3D fingerprint from the captured fringe pattern. In the next step, for spoof detection using biospeckle analysis, a visuo-numeric algorithm based on a modified structural function and a non-normalized histogram is proposed. High-activity biospeckle patterns are generated by the interaction of collimated laser light with the internal fluid flow of a real finger sample. This activity reduces abruptly in the case of layered fake prints, and is almost absent in dead or fake fingers. Furthermore, the proposed setup is fast, low-cost, involves non-mechanical scanning and is highly stable.
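The Fourier-transform reconstruction step can be sketched in a few lines of NumPy: take the 2D spectrum of the fringe image, keep one carrier lobe with a band-pass mask, shift it to DC, and take the angle of the inverse transform to obtain the wrapped phase. This is a generic sketch of the Takeda-style method, not the authors' exact implementation; the rectangular mask and function names are illustrative.

```python
import numpy as np

def ft_phase(fringe, carrier_col, halfwidth):
    """Wrapped phase from a single fringe image via the Fourier-transform
    method: band-pass one carrier lobe, shift it to DC, inverse-transform."""
    F = np.fft.fftshift(np.fft.fft2(fringe))
    mask = np.zeros(F.shape)
    c = fringe.shape[1] // 2 + carrier_col          # column of the +f0 lobe
    mask[:, c - halfwidth:c + halfwidth + 1] = 1.0  # rectangular band-pass
    lobe = np.roll(F * mask, -carrier_col, axis=1)  # move the carrier to DC
    return np.angle(np.fft.ifft2(np.fft.ifftshift(lobe)))
```

For an undistorted carrier the recovered phase is flat; surface height modulates the carrier and appears as local phase deviations.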
Daaboul, George G; Lopez, Carlos A; Chinnala, Jyothsna; Goldberg, Bennett B; Connor, John H; Ünlü, M Selim
2014-06-24
Rapid, sensitive, and direct label-free capture and characterization of nanoparticles from complex media such as blood or serum will broadly impact medicine and the life sciences. We demonstrate identification of virus particles in complex samples for replication-competent wild-type vesicular stomatitis virus (VSV), defective VSV, and Ebola- and Marburg-pseudotyped VSV with high sensitivity and specificity. Size discrimination of the imaged nanoparticles (virions) allows differentiation between modified viruses having different genome lengths and facilitates a reduction in the counting of nonspecifically bound particles to achieve a limit-of-detection (LOD) of 5 × 10³ pfu/mL for the Ebola and Marburg VSV pseudotypes. We demonstrate the simultaneous detection of multiple viruses in a single sample (composed of serum or whole blood) for screening applications and uncompromised detection capabilities in samples contaminated with high levels of bacteria. By employing affinity-based capture, size discrimination, and a "digital" detection scheme to count single virus particles, we show that a robust and sensitive virus/nanoparticle sensing assay can be established for targets in complex samples. The nanoparticle microscopy system is termed the Single Particle Interferometric Reflectance Imaging Sensor (SP-IRIS) and is capable of high-throughput and rapid sizing of large numbers of biological nanoparticles on an antibody microarray for research and diagnostic applications.
Direct single-shot phase retrieval from the diffraction pattern of separated objects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leshem, Ben; Xu, Rui; Dallal, Yehonatan
The non-crystallographic phase problem arises in numerous scientific and technological fields. An important application is coherent diffractive imaging. Recent advances in X-ray free-electron lasers allow capturing of the diffraction pattern from a single nanoparticle before it disintegrates, in so-called ‘diffraction before destruction’ experiments. Presently, the phase is reconstructed by iterative algorithms, imposing a non-convex computational challenge, or by Fourier holography, requiring a well-characterized reference field. Here we present a convex scheme for single-shot phase retrieval for two (or more) sufficiently separated objects, demonstrated in two dimensions. In our approach, the objects serve as unknown references to one another, reducing the phase problem to a solvable set of linear equations. We establish our method numerically and experimentally in the optical domain and demonstrate a proof-of-principle single-shot coherent diffractive imaging using X-ray free-electron laser pulses. Lastly, our scheme alleviates several limitations of current methods, offering a new pathway towards direct reconstruction of complex objects.
Direct single-shot phase retrieval from the diffraction pattern of separated objects
Leshem, Ben; Xu, Rui; Dallal, Yehonatan; ...
2016-02-22
Noise reduction techniques for Bayer-matrix images
NASA Astrophysics Data System (ADS)
Kalevo, Ossi; Rantanen, Henry
2002-04-01
In this paper, some arrangements for applying Noise Reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data. This requires that the raw Bayer-matrix image data, available from the image sensor, first be interpolated using a Color Filter Array Interpolation (CFAI) method. Another choice is to process the raw Bayer-matrix image data directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multi-stage median, multi-stage median hybrid and median-rational filters. The comparison is based on the quality of the output image, the processing power requirements and the amount of memory needed. A solution that improves the preservation of details when NR filtering is performed before the CFAI is also proposed.
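The pre-CFAI option discussed above can be illustrated with a simple median filter that operates on each of the four Bayer subplanes (R, G1, G2, B) separately, so that samples of different colors are never mixed. This is a simplified sketch, not one of the paper's specific filters:

```python
import numpy as np

def bayer_median(raw, ksize=3):
    """Median-filter a Bayer mosaic before CFA interpolation by filtering
    each of the four Bayer subplanes independently."""
    out = raw.astype(float).copy()
    pad = ksize // 2
    for dy in (0, 1):
        for dx in (0, 1):
            plane = raw[dy::2, dx::2].astype(float)
            p = np.pad(plane, pad, mode='edge')
            h, w = plane.shape
            # stack all ksize*ksize shifted copies, take the per-pixel median
            windows = [p[i:i + h, j:j + w]
                       for i in range(ksize) for j in range(ksize)]
            out[dy::2, dx::2] = np.median(np.stack(windows), axis=0)
    return out
```

An isolated hot pixel in the raw mosaic is removed without bleeding into neighbouring samples of a different color, which is the main argument for filtering before interpolation.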
Region-based multifocus image fusion for the precise acquisition of Pap smear images.
Tello-Mijares, Santiago; Bescós, Jesús
2018-05-01
A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolau source (Pap smear) images is presented. These images, each captured at a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with high preservation of the original pixel information while achieving negligible visibility of fusion artifacts. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged into a single combined image; finally, this image is processed with an adaptive artifact removal process. The combination of a region-oriented approach, instead of block-based approaches, and a minimum modification of the value of focused pixels in the original images achieves a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
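The select-the-sharpest principle behind such fusion can be shown with a much simpler block-based variant (the paper itself uses mean-shift regions and an artifact-removal stage; the fixed blocks and variance-of-Laplacian focus measure below are illustrative stand-ins):

```python
import numpy as np

def laplacian_energy(img):
    """Crude focus measure: variance of a 4-neighbour Laplacian."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def fuse_multifocus(stack, block=16):
    """For each block, copy pixels from the image of the stack whose
    block scores highest on the focus measure."""
    h, w = stack[0].shape
    fused = np.zeros((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            tiles = [im[y:y + block, x:x + block] for im in stack]
            best = max(range(len(stack)),
                       key=lambda k: laplacian_energy(tiles[k]))
            fused[y:y + block, x:x + block] = tiles[best]
    return fused
```

Copying original pixels from the winning image, rather than blending, is what keeps the fused result free of averaging artifacts, the same design choice the region-based method makes.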
Cameras and settings for optimal image capture from UAVs
NASA Astrophysics Data System (ADS)
Smith, Mike; O'Connor, James; James, Mike R.
2017-04-01
Aerial image capture has become very common within the geosciences due to the increasing affordability of low-payload (<20 kg) Unmanned Aerial Vehicles (UAVs) in consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured with consumer-grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion, which can make experiments difficult to reproduce accurately. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well-exposed and suitable imagery are derived. This leads to a discussion of how to optimise the platform, camera, lens and imaging settings relevant to image-quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and make these comparable with future studies. We recommend providing open-access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.
System for objective assessment of image differences in digital cinema
NASA Astrophysics Data System (ADS)
Fliegel, Karel; Krasula, Lukáš; Páta, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek
2014-09-01
There is high demand for quick digitization and subsequent image restoration of archived film records. Digitization is very urgent in many cases because various invaluable pieces of cultural heritage are stored on aging media. Only selected records can be reconstructed perfectly using painstaking manual or semi-automatic procedures. This paper aims to determine the quality requirements on the restoration process needed to obtain a visual perception of the digitally restored film acceptably close to that of the original analog film copy. This knowledge is very important for preserving the original artistic intention of the movie producers. A subjective experiment with artificially distorted images was conducted to determine the visual impact of common image distortions in digital cinema. Typical color and contrast distortions were introduced, and test images were presented to viewers using a digital projector. Based on the outcome of this subjective evaluation, a system for objective assessment of image distortions has been developed and its performance tested. The system utilizes a calibrated digital single-lens reflex camera and subsequent analysis of suitable features of images captured from the projection screen. The evaluation of captured image data has been optimized to obtain predicted differences between the reference and distorted images while achieving high correlation with the results of the subjective assessment. The system can be used to objectively determine the difference between analog film and digital cinema images on the projection screen.
Optical aberration correction for simple lenses via sparse representation
NASA Astrophysics Data System (ADS)
Cui, Jinlin; Huang, Wei
2018-04-01
Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and can be easily processed. However, they suffer from optical aberrations that limit high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many prior point spread functions, calibrated at different depths, are successfully used for restoring visual images in a short time; this can generally be applied to non-blind deconvolution methods to address the excessive processing time caused by the large number of point spread functions. The optical design software CODE V is applied to examine the reliability of the proposed method by simulation. The simulation results reveal that the suggested method outperforms traditional methods. Moreover, the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. In particular, the prior information obtained with CODE V can be used to process real images from a single-lens camera, which provides an alternative approach to conveniently and accurately obtaining the point spread functions of single-lens cameras.
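The restoration step that a bank of depth-calibrated PSFs feeds can be illustrated with a basic non-blind frequency-domain (Wiener) deconvolution; this is a generic stand-in for the sparse-representation method described above, with illustrative names:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Non-blind Wiener deconvolution with a known, centered PSF.
    k is a regularization constant that suppresses noise amplification."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))
```

In a depth-dependent pipeline, this routine would be called with the PSF calibrated for the depth of each scene region; choosing the right PSF per region is exactly where the per-PSF processing cost discussed above arises.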
Integrated telemedicine workstation for intercontinental grand rounds
NASA Astrophysics Data System (ADS)
Willis, Charles E.; Leckie, Robert G.; Brink, Linda; Goeringer, Fred
1995-04-01
The Telemedicine Spacebridge to Moscow was a series of intercontinental sessions sponsored jointly by NASA and the Moscow Academy of Medicine. To improve the quality of medical images presented, the MDIS Project developed a workstation for acquisition, storage, and interactive display of radiology and pathology images. The workstation was based on a Macintosh IIfx platform with a laser digitizer for radiographs and video capture capability for microscope images. Images were transmitted via the Russian Lyoutch satellite, which had only a single video channel available and no high-speed data channels. Two workstations were configured: one for use at the Uniformed Services University of the Health Sciences in Bethesda, MD, and the other for use at the Hospital of the Interior in Moscow, Russia. The two workstations were used many times during 16 sessions. As clinicians used the systems, we modified the original configuration to improve interactive use. This project demonstrated that numerous acquisition and output devices could be brought together in a single interactive workstation. The video images were satisfactory for remote consultation in a grand rounds format.
A programmable light engine for quantitative single molecule TIRF and HILO imaging.
van 't Hoff, Marcel; de Sars, Vincent; Oheim, Martin
2008-10-27
We report on a simple yet powerful implementation of objective-type total internal reflection fluorescence (TIRF) and highly inclined and laminated optical sheet (HILO, a type of dark-field) illumination. Instead of focusing the illuminating laser beam to a single spot close to the edge of the microscope objective, we scan the focused spot in a circular orbit during the acquisition of a fluorescence image, thereby illuminating the sample from various directions. We measure parameters relevant for quantitative image analysis during fluorescence image acquisition by capturing an image of the excitation light distribution in an equivalent objective back focal plane (BFP). Operating at scan rates above 1 MHz, our programmable light engine allows directional averaging by circularly spinning the spot even for sub-millisecond exposure times. We show that restoring the symmetry of TIRF/HILO illumination reduces scattering and produces an evenly lit field-of-view that affords on-line analysis of evanescent-field excited fluorescence without pre-processing. Utilizing crossed acousto-optical deflectors, our device generates arbitrary intensity profiles in the BFP, permitting variable-angle, multi-color illumination, as well as rapid exchange of objective lenses.
Fluorescence lifetime imaging microscopy using near-infrared contrast agents.
Nothdurft, R; Sarder, P; Bloch, S; Culver, J; Achilefu, S
2012-08-01
Although single-photon fluorescence lifetime imaging microscopy (FLIM) is widely used to image molecular processes using a wide range of excitation wavelengths, the captured emission of this technique is confined to the visible spectrum. Here, we explore the feasibility of utilizing near-infrared (NIR) fluorescent molecular probes with emission >700 nm for FLIM of live cells. The confocal microscope is equipped with a 785 nm laser diode, a red-enhanced photomultiplier tube, and a time-correlated single photon counting card. We demonstrate that our system reports the lifetime distributions of the NIR fluorescent dyes cypate and DTTCI in cells. In cells labelled separately or jointly with these dyes, NIR FLIM successfully distinguishes their lifetimes, providing a method to sort different cell populations. In addition, lifetime distributions of cells co-incubated with these dyes allow estimation of the dyes' relative concentrations in complex cellular microenvironments. With the heightened interest in fluorescence lifetime-based small animal imaging using NIR fluorophores, this technique further serves as a bridge between in vitro spectroscopic characterization of new fluorophore lifetimes and in vivo tissue imaging. © 2012 The Author. Journal of Microscopy © 2012 Royal Microscopical Society.
Fluorescence Lifetime Imaging Microscopy Using Near-Infrared Contrast Agents
Nothdurft, Ralph; Sarder, Pinaki; Bloch, Sharon; Culver, Joseph; Achilefu, Samuel
2013-01-01
PMID:22788550
Cassini "Noodle" Mosaic of Saturn
2017-07-24
This mosaic of images combines views captured by NASA's Cassini spacecraft as it made the first dive of the mission's Grand Finale on April 26, 2017. It shows a vast swath of Saturn's atmosphere, from the north polar vortex to the boundary of the hexagon-shaped jet stream, to details in bands and swirls at middle latitudes and beyond. The mosaic is a composite of 137 images captured as Cassini made its first dive toward the gap between Saturn and its rings. It is an update to a previously released image product. In the earlier version, the images were presented as individual movie frames, whereas here they have been combined into a single, continuous mosaic. The mosaic is presented as a still image as well as a video that pans across its length. Imaging scientists referred to this long, narrow mosaic as a "noodle" in planning the image sequence. The first frame of the mosaic is centered on Saturn's north pole, and the last frame is centered on a region at 18 degrees north latitude. During the dive, the spacecraft's altitude above the clouds changed from 45,000 to 3,200 miles (72,400 to 8,374 kilometers), while the image scale changed from 5.4 miles (8.7 kilometers) per pixel to 0.6 mile (1 kilometer) per pixel. The bottom of the mosaic (near the end of the movie) has a curved shape. This is where the spacecraft rotated to point its high-gain antenna in the direction of motion as a protective measure before crossing Saturn's ring plane. The images in this sequence were captured in visible light using the Cassini spacecraft's wide-angle camera. The original versions of these images, as sent by the spacecraft, have a size of 512 by 512 pixels. The small image size was chosen in order to allow the camera to take images quickly as Cassini sped over Saturn. These images of the planet's curved surface were projected onto a flat plane before being combined into a mosaic. Each image was mapped in stereographic projection centered at 55 degrees north latitude.
A movie is available at https://photojournal.jpl.nasa.gov/catalog/PIA21617
A smart core-sheath nanofiber that captures and releases red blood cells from the blood
NASA Astrophysics Data System (ADS)
Shi, Q.; Hou, J.; Zhao, C.; Xin, Z.; Jin, J.; Li, C.; Wong, S.-C.; Yin, J.
2016-01-01
A smart core-sheath nanofiber for non-adherent cell capture and release is demonstrated. The nanofibers are fabricated by single-spinneret electrospinning of poly(N-isopropylacrylamide) (PNIPAAm), polycaprolactone (PCL) and nattokinase (NK) solution blends. The self-assembly of PNIPAAm and PCL blends during the electrospinning generates the core-sheath PCL/PNIPAAm nanofibers with PNIPAAm as the sheath. The PNIPAAm-based core-sheath nanofibers are switchable between hydrophobicity and hydrophilicity with temperature change and enhance stability in the blood. When the nanofibers come in contact with blood, the NK is released from the nanofibers to resist platelet adhesion on the nanofiber surface, facilitating the direct capture and isolation of red blood cells (RBCs) from the blood above phase-transition temperature of PNIPAAm. Meanwhile, the captured RBCs are readily released from the nanofibers with temperature stimuli in an undamaged manner. The release efficiency of up to 100% is obtained while maintaining cellular integrity and function. This work presents promising nanofibers to effectively capture non-adherent cells and release for subsequent molecular analysis and diagnosis of single cells. Electronic supplementary information (ESI) available: Electrospinning of polymer nanofibers; FTIR spectra and XPS spectra of PCL, PNIPAAm and PCL/PNIPAAm nanofibers; SEM images of PCL/PNIPAAm nanofibers with varied composition; PNIPAAm content on the sheath of nanofibers; stability of core-sheath PCL/PNIPAAm nanofibers; platelet adhesion on the PCL/PNIPAAm nanofibers in the presence of NK; protein adsorption on nanofibers. See DOI: 10.1039/c5nr07070h
Full-color stereoscopic single-pixel camera based on DMD technology
NASA Astrophysics Data System (ADS)
Salvador-Balaguer, Eva; Clemente, Pere; Tajahuerce, Enrique; Pla, Filiberto; Lancis, Jesús
2017-02-01
Imaging systems based on microstructured illumination and single-pixel detection offer several advantages over conventional imaging techniques. They are an effective method for imaging through scattering media, even in the dynamic case. They work efficiently under low light levels, and the simplicity of the detector makes it easy to design imaging systems working outside the visible spectrum and to acquire multidimensional information. In particular, several approaches have been proposed to record 3D information. The technique is based on sampling the object with a sequence of microstructured light patterns codified onto a programmable spatial light modulator while the light intensity is measured with a single-pixel detector. The image is retrieved computationally from the photocurrent fluctuations provided by the detector. In this contribution we describe an optical system able to produce full-color stereoscopic images using few and simple optoelectronic components. In our setup we use an off-the-shelf digital light projector (DLP) based on a digital micromirror device (DMD) to generate the light patterns. To capture the color of the scene we take advantage of the codification procedure used by the DLP for color video projection. To record stereoscopic views we use a 90° beam splitter and two mirrors, allowing us to project the patterns from two different viewpoints. By using a single monochromatic photodiode we obtain a pair of color images that can be used as input to a 3D display. To reduce the time needed to project the patterns we use a compressive sampling algorithm. Experimental results are shown.
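The measurement-and-reconstruction principle of such a single-pixel camera can be sketched with orthogonal Hadamard patterns: each pattern is projected, the photodiode records one inner product, and the image is recovered from the measurement vector. This sketch ignores the color multiplexing, stereo optics and compressive sampling described above, and real systems project non-negative (complementary) patterns rather than ±1 values:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def single_pixel_capture(scene):
    """One scalar detector reading per projected pattern: y = H @ x."""
    x = scene.ravel()
    return hadamard(x.size) @ x

def reconstruct(measurements):
    """H is orthogonal (H @ H.T = n * I), so x = H.T @ y / n."""
    n = measurements.size
    return hadamard(n).T @ measurements / n
```

Compressive sampling shortens acquisition by projecting only a subset of the patterns and solving a sparsity-regularized inverse problem instead of the direct inversion shown here.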
Bioinformatics approaches to single-cell analysis in developmental biology.
Yalcin, Dicle; Hakguder, Zeynep M; Otu, Hasan H
2016-03-01
Individual cells within the same population show various degrees of heterogeneity, which may be better handled with single-cell analysis to address biological and clinical questions. Single-cell analysis is especially important in developmental biology, as subtle spatial and temporal differences in cells have significant associations with cell fate decisions during differentiation and with the description of a particular state of a cell exhibiting an aberrant phenotype. Biotechnological advances, especially in the area of microfluidics, have led to robust, massively parallel and multi-dimensional capturing, sorting, and lysis of single cells and amplification of related macromolecules, which have enabled the use of imaging and omics techniques on single cells. There have been improvements in computational single-cell image analysis in developmental biology regarding feature extraction, segmentation, image enhancement and machine learning, handling limitations of optical resolution to gain new perspectives from the raw microscopy images. Omics approaches, such as transcriptomics, genomics and epigenomics, targeting gene and small RNA expression, single nucleotide and structural variations, and methylation and histone modifications, rely heavily on high-throughput sequencing technologies. Although there are well-established bioinformatics methods for the analysis of sequence data, there are limited bioinformatics approaches that address experimental design, sample size considerations, amplification bias, normalization, differential expression, coverage, clustering and classification issues specifically applied at the single-cell level. In this review, we summarize biological and technological advancements, discuss challenges faced in the aforementioned data acquisition and analysis issues and present future prospects for the application of single-cell analyses to developmental biology. © The Author 2015.
Micro-optical system based 3D imaging for full HD depth image capturing
NASA Astrophysics Data System (ADS)
Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan
2012-03-01
A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure having diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The suggested novel optical shutter device enables capturing of a full HD depth image with depth accuracy at the mm scale, the largest depth-image resolution among state-of-the-art systems, which have been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype and image test results.
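The role of the 20 MHz modulation can be made concrete with the standard phase-based TOF relations (generic formulas, not specific to this device):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(phase_rad, f_mod):
    """Depth from the measured phase delay of amplitude-modulated light.
    The light travels out and back, hence the factor 4*pi."""
    return C * phase_rad / (4 * math.pi * f_mod)

def unambiguous_range(f_mod):
    """Maximum depth before the phase wraps past 2*pi: c / (2 * f_mod)."""
    return C / (2 * f_mod)
```

At 20 MHz the unambiguous range is about 7.5 m; millimeter depth accuracy then corresponds to resolving the modulation phase to a small fraction of a degree.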
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halls, B. R.; Roy, S.; Gord, J. R.
Flash x-ray radiography is used to capture quantitative, two-dimensional, line-of-sight-averaged, single-shot liquid distribution measurements in impinging jet sprays. The accuracy of utilizing broadband x-ray radiation from compact flash tube sources is investigated for a range of conditions by comparing the data with radiographic high-speed measurements from a narrowband, high-intensity synchrotron x-ray facility at the Advanced Photon Source (APS) of Argonne National Laboratory. The path length of the liquid jets is varied to evaluate the effects of energy-dependent x-ray attenuation, also known as spectral beam hardening. The spatial liquid distributions from flash x-ray and synchrotron-based radiography are compared, along with spectral characteristics using Taylor's hypothesis. The results indicate that quantitative, single-shot imaging of liquid distributions can be achieved using broadband x-ray sources with nanosecond temporal resolution. Practical considerations for optimizing the imaging system performance are discussed, including the coupled effects of x-ray bandwidth, contrast, sensitivity, spatial resolution, temporal resolution, and spectral beam hardening.
Objective analysis of image quality of video image capture systems
NASA Astrophysics Data System (ADS)
Rowberg, Alan H.
1990-07-01
As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those that use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire patterns. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests that were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide.
While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are clinical implications and some anatomy or pathology may not be visualized if an image capture system is used improperly.
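The slew-rate test image described above (one-pixel alternating lines after ten-pixel equilibration strips) is straightforward to synthesize. A sketch of the pattern plus a simple contrast check that distinguishes a system resolving the lines from one that blurs them to mid-gray; the 64x8 size and the contrast threshold are arbitrary choices, not taken from the study:

```python
import numpy as np

def slew_test_pattern(width=64, height=8):
    """One test pattern: a 10-pixel white equilibration strip, then
    alternating single-pixel black/white columns. 0 = black, 255 = white."""
    row = np.zeros(width, dtype=np.uint8)
    row[:10] = 255        # equilibration strip
    row[10::2] = 255      # alternating 1-pixel lines
    return np.tile(row, (height, 1))

def resolves_lines(image, col_start=10):
    """A capture system that blurs the pattern to mid-gray fails this check."""
    region = image[:, col_start:]
    contrast = int(region.max()) - int(region.min())   # int() avoids uint8 wraparound
    return contrast > 128

img = slew_test_pattern()
assert resolves_lines(img)                # pristine pattern: full contrast
blurred = np.full_like(img, 128)          # slew-limited capture -> uniform gray
assert not resolves_lines(blurred)
```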
Multi-spectral imaging with infrared sensitive organic light emitting diode
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-01-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive, epitaxially grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding, which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low-cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR-sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images, which are then recorded by a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions. PMID:25091589
Light field imaging and application analysis in THz
NASA Astrophysics Data System (ADS)
Zhang, Hongfei; Su, Bo; He, Jingsuo; Zhang, Cong; Wu, Yaxiong; Zhang, Shengbo; Zhang, Cunlin
2018-01-01
The light field includes both direction and location information, and light field imaging can capture the whole light field in a single exposure. The four-dimensional light field function model represented by two-plane parameters, proposed by Levoy, is adopted here. Light field acquisition is based on microlens arrays, camera arrays, or coded masks. We process the captured light field data to synthesize light field images. The processing techniques for light field data include refocusing rendering, synthetic aperture, and microscopic imaging. By introducing light field imaging into the THz regime, the efficiency of 3D imaging is higher than that of conventional THz 3D imaging technology. Its advantages over visible light field imaging include large depth of field, wide dynamic range, and true three-dimensional reconstruction. It has broad application prospects.
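The refocusing rendering mentioned above can be sketched on a toy one-dimensional light field under the two-plane parameterization: each view is shifted in proportion to its aperture coordinate and the views are averaged, so a point at the chosen depth adds up coherently. The sizes and the disparity slope below are illustrative choices, not from this work:

```python
import numpy as np

# Toy 1D light field L[u, s]: u indexes the aperture (view), s the sensor pixel.
def refocus(lightfield, alpha):
    """Shift-and-add synthetic refocusing over the two-plane parameterization."""
    n_views, n_pix = lightfield.shape
    out = np.zeros(n_pix)
    for u in range(n_views):
        shift = int(round(alpha * (u - n_views // 2)))
        out += np.roll(lightfield[u], -shift)
    return out / n_views

# Toy scene: one point whose disparity slope is 1 pixel per view.
n_views, n_pix = 5, 21
lf = np.zeros((n_views, n_pix))
for u in range(n_views):
    lf[u, 10 + (u - n_views // 2)] = 1.0    # slope matches alpha = 1

sharp = refocus(lf, alpha=1.0)     # correct focal plane: energy concentrates
blurry = refocus(lf, alpha=0.0)    # wrong plane: energy spreads over 5 pixels
assert sharp.max() > blurry.max()
```

Choosing `alpha` selects the focused plane, and cropping the views before summation sets the field of view, which is how a refocusable display and a simulated camera travel can both come from a single capture.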
Optomechanical System Development of the AWARE Gigapixel Scale Camera
NASA Astrophysics Data System (ADS)
Son, Hui S.
Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-26
... Images, and Components Thereof; Receipt of Complaint; Solicitation of Comments Relating to the Public... Devices for Capturing and Transmitting Images, and Components Thereof, DN 2869; the Commission is... importation of certain electronic devices for capturing and transmitting images, and components thereof. The...
Mass Spectrometric Imaging Using Laser Ablation and Solvent Capture by Aspiration (LASCA)
NASA Astrophysics Data System (ADS)
Brauer, Jonathan I.; Beech, Iwona B.; Sunner, Jan
2015-09-01
A novel interface for ambient, laser ablation-based mass spectrometric imaging (MSI) referred to as laser ablation and solvent capture by aspiration (LASCA) is presented and its performance demonstrated using selected, unaltered biological materials. LASCA employs a pulsed 2.94 μm laser beam for specimen ablation. Ablated materials in the laser plumes are collected on a hanging solvent droplet with electric field-enhanced trapping, followed by aspiration of droplets and remaining plume material in the form of a coarse aerosol into a collection capillary. The gas and liquid phases are subsequently separated in a 10 μL-volume separatory funnel, and the solution is analyzed with electrospray ionization in a high mass resolution Q-ToF mass spectrometer. The LASCA system separates the sampling and ionization steps in MSI and combines high efficiencies of laser plume sampling and of electrospray ionization (ESI) with high mass resolution MS. Up to 2000 different compounds are detected from a single ablation spot (pixel). Using the LASCA platform, rapid (6 s per pixel), high sensitivity, high mass-resolution ambient imaging of "as-received" biological material is achieved routinely and reproducibly.
NASA Astrophysics Data System (ADS)
Wang, Xiaohui; Couwenhoven, Mary E.; Foos, David H.; Doran, James; Yankelevitz, David F.; Henschke, Claudia I.
2008-03-01
An image-processing method has been developed to improve the visibility of tube and catheter features in portable chest x-ray (CXR) images captured in the intensive care unit (ICU). The image-processing method is based on a multi-frequency approach, wherein the input image is decomposed into different spatial frequency bands, and those bands that contain the tube and catheter signals are individually enhanced by nonlinear boosting functions. Using a random sampling strategy, 50 cases were retrospectively selected for the study from a large database of portable CXR images that had been collected from multiple institutions over a two-year period. All images used in the study were captured using photo-stimulable, storage phosphor computed radiography (CR) systems. Each image was processed two ways. The images were processed with default image processing parameters such as those used in clinical settings (control). The 50 images were then separately processed using the new tube and catheter enhancement algorithm (test). Three board-certified radiologists participated in a reader study to assess differences in both detection-confidence performance and diagnostic efficiency between the control and test images. Images were evaluated on a diagnostic-quality, 3-megapixel monochrome monitor. Two scenarios were studied: the baseline scenario, representative of today's workflow (a single-control image presented with the window/level adjustments enabled) vs. the test scenario (a control/test image pair presented with a toggle enabled and the window/level settings disabled). The radiologists were asked to read the images in each scenario as they normally would for clinical diagnosis. Trend analysis indicates that the test scenario offers improved reading efficiency while providing as good or better detection capability compared to the baseline scenario.
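The multi-frequency approach described above can be sketched in miniature: split the image into a low-pass band and a detail band, apply a nonlinear (saturating) boost to the detail band that carries the fine tube and catheter signals, and recombine. The box blur, the tanh boosting function, and the gain below are stand-ins for the study's unspecified decomposition and boosting functions:

```python
import numpy as np

def box_blur(img, k=5):
    """Simple separable box blur standing in for the low-pass stage."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def enhance(img, gain=2.0):
    """Two-band version of the multi-frequency scheme: low-pass band plus
    a nonlinearly boosted detail band, recombined and clipped."""
    low = box_blur(img.astype(float))
    detail = img - low
    boosted = np.tanh(gain * detail / 64.0) * 64.0   # saturating nonlinear boost
    return np.clip(low + boosted, 0, 255)

# A faint catheter-like line on a flat background gains contrast.
img = np.full((32, 32), 100.0)
img[16, :] += 10.0                       # weak line feature
out = enhance(img)
assert out[16, 16] - out[0, 0] > img[16, 16] - img[0, 0]
```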
Yi-Qun, Xu; Wei, Liu; Xin-Ye, Ni
2016-10-01
This study employs dual-source computed tomography single-spectrum imaging to evaluate contrast agent artifact removal and the resulting improvement in the computational accuracy of radiotherapy treatment planning. A phantom containing the contrast agent was used in all experiments. The amounts of iodine in the contrast agent were 30, 15, 7.5, and 0.75 g/100 mL. Two images with different energy values were scanned and captured using dual-source computed tomography (80 and 140 kV). To obtain a fused image, the two groups of images were processed using single-energy spectrum imaging technology. The Pinnacle planning system was used to measure the computed tomography values of the contrast agent and the surrounding phantom tissue. The differences among radiotherapy treatment plans based on the 80-kV, 140-kV, and energy-spectrum images were analyzed. For the image with high iodine concentration, the quality of the energy-spectrum-fused image was the highest, followed by the 140-kV image; the 80-kV image was the worst. The difference in the radiotherapy treatment results among the 3 models was significant. When the concentration of iodine was 30 g/100 mL and the dose measurement point was 1 cm from the contrast agent, the deviation values (P) were 5.95% and 2.20% when treatment planning was based on 80 and 140 kV, respectively. When the concentration of iodine was 15 g/100 mL, the deviation values (P) were -2.64% and -1.69%. Dual-source computed tomography single-energy spectral imaging technology can remove contrast agent artifacts and thereby improve the calculated dose accuracy in radiotherapy treatment planning. © The Author(s) 2015.
Still-to-video face recognition in unconstrained environments
NASA Astrophysics Data System (ADS)
Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing
2015-02-01
Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearance and the limited number of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are included to avoid overfitting. To deal with the single-image-per-person problem, we exploit face variations learned from training sets to synthesize virtual samples for the gallery samples. We adopt a learning algorithm combining both affine/convex hull-based approaches and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method outperforms the state-of-the-art methods impressively.
Haston, Elspeth; Cubey, Robert; Pullan, Martin; Atkins, Hannah; Harris, David J
2012-01-01
Abstract Digitisation programmes in many institutes frequently involve disparate and irregular funding, diverse selection criteria and scope, with different members of staff managing and operating the processes. These factors have influenced the decision at the Royal Botanic Garden Edinburgh to develop an integrated workflow for the digitisation of herbarium specimens which is modular and scalable to enable a single overall workflow to be used for all digitisation projects. This integrated workflow is comprised of three principal elements: a specimen workflow, a data workflow and an image workflow. The specimen workflow is strongly linked to curatorial processes which will impact on the prioritisation, selection and preparation of the specimens. The importance of including a conservation element within the digitisation workflow is highlighted. The data workflow includes the concept of three main categories of collection data: label data, curatorial data and supplementary data. It is shown that each category of data has its own properties which influence the timing of data capture within the workflow. Development of software has been carried out for the rapid capture of curatorial data, and optical character recognition (OCR) software is being used to increase the efficiency of capturing label data and supplementary data. The large number and size of the images has necessitated the inclusion of automated systems within the image workflow. PMID:22859881
Mishra, Pankaj; Li, Ruijiang; Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H.
2014-01-01
Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) Eight digital eXtended CArdiac-Torso phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is (0.73 ± 0.63 mm) for the XCAT data and (0.90 ± 0.65 mm) for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model. 
This is the first method to estimate volumetric time-varying images from single MV cine EPID images, and has the potential to provide volumetric information with no additional imaging dose to the patient. PMID:25086523
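The PCA motion model in the two-step method above can be sketched with synthetic data: DVFs from the breathing phases are stacked, PCA separates spatial eigenvectors from temporal eigen-coefficients, and a new DVF is formed as the mean plus a coefficient-weighted sum of eigenvectors. In the paper the coefficients are tuned so the resulting DRR matches the cine EPID image; in this sketch they are simply free parameters, and the "DVFs" are random stand-ins rather than registration output:

```python
import numpy as np

# Toy stand-in for the 4DCT motion model: each row is a flattened DVF
# for one breathing phase (real DVFs come from deformable registration).
rng = np.random.default_rng(0)
phases, voxels = 10, 300
t = np.linspace(0, 2 * np.pi, phases)
basis = rng.normal(size=(2, voxels))                 # two true motion modes
dvfs = np.outer(np.sin(t), basis[0]) + np.outer(np.cos(t), basis[1])

# PCA of the DVFs: eigenvectors carry spatial motion, coefficients temporal.
mean = dvfs.mean(axis=0)
u, s, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
n_modes = 2
eigvecs = vt[:n_modes]                               # spatial eigenvectors

def dvf_from_coefficients(coeffs):
    """New DVF = mean + eigen-coefficients times eigenvectors."""
    return mean + coeffs @ eigvecs

# Any training phase is reproduced almost exactly by its two coefficients.
coeffs = (dvfs[3] - mean) @ eigvecs.T
assert np.allclose(dvf_from_coefficients(coeffs), dvfs[3], atol=1e-8)
```

The reconstructed DVF then deforms the reference 4DCT phase to yield the volumetric image corresponding to the EPID frame.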
NASA Astrophysics Data System (ADS)
Oiknine, Yaniv; August, Isaac Y.; Revah, Liat; Stern, Adrian
2016-05-01
Recently we introduced a Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) system. The system is based on a single liquid crystal (LC) cell and a parallel sensor array, where the LC cell performs the spectral encoding. Within the framework of compressive sensing, the CS-MUSI system is able to reconstruct ultra-spectral cubes from only ~10% of the samples required by a conventional system. Despite the compression, the technique is computationally demanding, because reconstruction of ultra-spectral images requires processing huge data cubes of gigavoxel size. Fortunately, the computational effort can be alleviated by using separable operations. An additional way to reduce the reconstruction effort is to perform the reconstructions on patches. In this work, we consider processing on various patch shapes. We present an experimental comparison between various patch shapes chosen to process the ultra-spectral data captured with the CS-MUSI system. The patches may be one-dimensional (1D), for which the reconstruction is carried out spatially pixel-wise; two-dimensional (2D), working on spatial rows/columns of the ultra-spectral cube; or three-dimensional (3D).
Comparative investigation on magnetic capture selectivity between single wires and a real matrix
NASA Astrophysics Data System (ADS)
Ren, Peng; Chen, Luzheng; Liu, Wenbo; Shao, Yanhai; Zeng, Jianwu
2018-03-01
High gradient magnetic separation (HGMS) achieves effective separation of fine, weakly magnetic minerals through a magnetic matrix. In practice, the matrix is made of numerous magnetic wires, so insight into the magnetic capture characteristics of single wires would provide a basic foundation for the optimal design and choice of a real matrix. The magnetic capture selectivity of cylindrical and rectangular single wires in concentrating ilmenite was investigated using a cyclic pulsating HGMS separator while varying its key operating parameters (magnetic induction, feed velocity, and pulsating frequency), and their capture selectivity characteristics were compared in parallel with that of a real 3.0 mm cylindrical matrix. It was found that the cylindrical single wires have superior capture selectivity to the rectangular ones; the single wires and the real matrix show essentially the same capture trends as the key operating parameters change, but the single wires have a much higher capture selectivity than the real matrix.
Adaptive foveated single-pixel imaging with dynamic supersampling
Phillips, David B.; Sun, Ming-Jie; Taylor, Jonathan M.; Edgar, Matthew P.; Barnett, Stephen M.; Gibson, Graham M.; Padgett, Miles J.
2017-01-01
In contrast to conventional multipixel cameras, single-pixel cameras capture images using a single detector that measures the correlations between the scene and a set of patterns. However, these systems typically exhibit low frame rates, because to fully sample a scene in this way requires at least the same number of correlation measurements as the number of pixels in the reconstructed image. To mitigate this, a range of compressive sensing techniques have been developed which use a priori knowledge to reconstruct images from an undersampled measurement set. Here, we take a different approach and adopt a strategy inspired by the foveated vision found in the animal kingdom—a framework that exploits the spatiotemporal redundancy of many dynamic scenes. In our system, a high-resolution foveal region tracks motion within the scene, yet unlike a simple zoom, every frame delivers new spatial information from across the entire field of view. This strategy rapidly records the detail of quickly changing features in the scene while simultaneously accumulating detail of more slowly evolving regions over several consecutive frames. This architecture provides video streams in which both the resolution and exposure time spatially vary and adapt dynamically in response to the evolution of the scene. The degree of local frame rate enhancement is scene-dependent, but here, we demonstrate a factor of 4, thereby helping to mitigate one of the main drawbacks of single-pixel imaging techniques. The methods described here complement existing compressive sensing approaches and may be applied to enhance computational imagers that rely on sequential correlation measurements. PMID:28439538
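The correlation-measurement model underlying single-pixel cameras can be sketched directly: each pattern yields one detector reading, the inner product of the scene with that pattern, and fully sampling an N-pixel scene needs at least N such readings (the foveated and compressive regimes described above relax this). Hadamard masks are used below as a common, well-conditioned pattern set; the abstract does not specify which patterns this system uses:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix; n a power of two."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

rng = np.random.default_rng(1)
n_pix = 16
scene = rng.random(n_pix)            # flattened 4x4 scene (illustrative)

# One detector value per displayed pattern (+/-1 masks are realized in
# practice as two complementary binary masks).
patterns = hadamard(n_pix)
measurements = patterns @ scene

# With a full orthogonal pattern set, recovery is exact: H H^T = n I.
recovered = patterns.T @ measurements / n_pix
assert np.allclose(recovered, scene)
```

Compressive variants keep only a subset of the rows and recover the scene with a sparsity prior instead of the exact inverse.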
The optics inside an automated single molecule array analyzer
NASA Astrophysics Data System (ADS)
McGuigan, William; Fournier, David R.; Watson, Gary W.; Walling, Les; Gigante, Bill; Duffy, David C.; Rissin, David M.; Kan, Cheuk W.; Meyer, Raymond E.; Piech, Tomasz; Fishburn, Matthew W.
2014-02-01
Quanterix and Stratec Biomedical have developed an instrument that enables the automated measurement of multiple proteins at concentrations ~1000 times lower than existing immunoassays. The instrument is based on Quanterix's proprietary Single Molecule Array technology (Simoa™) that facilitates the detection and quantification of biomarkers previously difficult to measure, thus opening up new applications in life science research and in-vitro diagnostics. Simoa is based on trapping individual beads in arrays of femtoliter-sized wells that, when imaged with sufficient resolution, allow for counting of single molecules associated with each bead. When used to capture and detect proteins, this approach is known as digital ELISA (enzyme-linked immunosorbent assay). The platform developed is a merger of many science and engineering disciplines. This paper concentrates on the optical technologies that have enabled the development of a fully automated single molecule analyzer. At the core of the system is a custom, wide field-of-view fluorescence microscope that images arrays of microwells containing single molecules bound to magnetic beads. A consumable disc containing 24 microstructure arrays was developed previously in collaboration with Sony DADC. The system cadence requirements, array dimensions, and requirement to detect single molecules presented significant optical challenges. Specifically, the wide field-of-view needed to image the entire array resulted in the need for a custom objective lens. Additionally, cost considerations for the system required a custom solution that leveraged the image processing capabilities. This paper will discuss the design considerations and resultant optical architecture that has enabled the development of an automated digital ELISA platform.
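The counting step behind digital ELISA rests on Poisson statistics (a standard relation for digital assays, not detailed in this abstract): molecules distribute randomly over many beads, so the average number of captured molecules per bead follows from the fraction of wells that light up. A sketch of that relation:

```python
import math

def molecules_per_bead(fraction_on):
    """Digital ELISA readout: beads capture molecules Poisson-randomly, so
        P(on) = 1 - exp(-lam)  =>  lam = -ln(1 - P(on)),
    where lam is the mean number of molecules per bead."""
    return -math.log(1.0 - fraction_on)

# At low occupancy the estimate is nearly linear in the on-fraction,
# which is what makes single-molecule counting quantitative.
lam = molecules_per_bead(0.10)
assert abs(lam - 0.10536) < 1e-4
```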
Joint Labeling Of Multiple Regions of Interest (Rois) By Enhanced Auto Context Models.
Kim, Minjeong; Wu, Guorong; Guo, Yanrong; Shen, Dinggang
2015-04-01
Accurate segmentation of a set of regions of interest (ROIs) in brain images is a key step in many neuroscience studies. Due to the complexity of image patterns, many learning-based segmentation methods have been proposed, including the auto context model (ACM), which can capture high-level contextual information for guiding segmentation. However, since the current ACM can only handle one ROI at a time, neighboring ROIs have to be labeled separately with different ACMs that are trained independently, without communicating with each other. To address this, we enhance the current single-ROI ACM to a multi-ROI learning ACM for joint labeling of multiple neighboring ROIs (called eACM). First, we extend the current independently trained single-ROI ACMs to a set of jointly trained cross-ROI ACMs, by simultaneously training ACMs for all spatially connected ROIs and letting them share their respective intermediate outputs for coordinated labeling of each image point. The context features in each ACM can then capture cross-ROI dependence information from the outputs of the other ACMs designed for neighboring ROIs. Second, we upgrade the output labeling map of each ACM with a multi-scale representation, so that both local and global context information can be used effectively to increase robustness in characterizing the geometric relationships among neighboring ROIs. Third, we integrate the eACM into a multi-atlas segmentation paradigm to accommodate high variations among subjects. Experiments on the LONI LPBA40 dataset show much better performance by our eACM compared to the conventional ACM.
Automated Meteor Detection by All-Sky Digital Camera Systems
NASA Astrophysics Data System (ADS)
Suk, Tomáš; Šimberová, Stanislava
2017-12-01
We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.
Preliminary results for mask metrology using spatial heterodyne interferometry
NASA Astrophysics Data System (ADS)
Bingham, Philip R.; Tobin, Kenneth; Bennett, Marylyn H.; Marmillion, Pat
2003-12-01
Spatial heterodyne interferometry (SHI) is an imaging technique that captures both the phase and amplitude of a complex wavefront in a single high-speed image. This technology was developed at the Oak Ridge National Laboratory (ORNL) and is currently being implemented for semiconductor wafer inspection by nLine Corporation. As with any system that measures phase, metrology and inspection of surface structures is possible by capturing a wavefront reflected from the surface. The interpretation of surface structure heights for metrology applications can become very difficult with the many layers of various materials used on semiconductor wafers, so inspection (defect detection) has been the primary focus for semiconductor wafers. However, masks used for photolithography typically contain only a couple of well-defined materials, opening the door to high-speed mask metrology in three dimensions in addition to inspection. Phase shift masks often contain structures etched out of the transparent substrate material for phase shifting. While these structures are difficult to inspect using only intensity, the phase and amplitude images captured with SHI can resolve them very well. The phase images also provide depth information that is crucial for these phase shift regions. Preliminary testing has been performed to determine the feasibility of SHI for high-speed, non-contact mask metrology using a prototype SHI system with 532 nm wavelength illumination, named the Visible Alpha Tool (VAT). These results show that the prototype SHI system is capable of performing critical dimension measurements on 400 nm lines with a repeatability of 1.4 nm and line height measurements with a repeatability of 0.26 nm. Additionally, initial imaging of an alternating aperture phase shift mask has shown the ability of SHI to discriminate between typical phase shift heights.
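The relation between measured phase and surface height that makes SHI metrology possible can be sketched for the reflective case: a feature of height h adds a round-trip optical path of 2h, hence a phase of (4π/λ)h. At the tool's 532 nm wavelength:

```python
import math

def height_from_phase(phase_rad, wavelength_nm=532.0):
    """Invert the reflection phase: phi = (4*pi/lambda) * h  =>
    h = phi * lambda / (4*pi). Valid within one unambiguous phase cycle."""
    return phase_rad * wavelength_nm / (4.0 * math.pi)

# A pi phase step at 532 nm corresponds to a 133 nm step height, the kind
# of quarter-wave feature found on etched phase-shift masks.
h = height_from_phase(math.pi)
assert abs(h - 133.0) < 1e-9
```

Phase wrapping limits the unambiguous height range to λ/4 per cycle, which is one reason the well-defined, shallow structures on masks suit this measurement better than multilayer wafers.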
Millius, Arthur; Watanabe, Naoki; Weiner, Orion D
2012-03-01
The SCAR/WAVE complex drives lamellipodium formation by enhancing actin nucleation by the Arp2/3 complex. Phosphoinositides and Rac activate the SCAR/WAVE complex, but how SCAR/WAVE and Arp2/3 complexes converge at sites of nucleation is unknown. We analyzed the single-molecule dynamics of WAVE2 and p40 (subunits of the SCAR/WAVE and Arp2/3 complexes, respectively) in XTC cells. We observed lateral diffusion of both proteins and captured the transition of p40 from diffusion to network incorporation. These results suggest that a diffusive 2D search facilitates binding of the Arp2/3 complex to actin filaments necessary for nucleation. After nucleation, the Arp2/3 complex integrates into the actin network and undergoes retrograde flow, which results in its broad distribution throughout the lamellipodium. By contrast, the SCAR/WAVE complex is more restricted to the cell periphery. However, with single-molecule imaging, we also observed WAVE2 molecules undergoing retrograde motion. WAVE2 and p40 have nearly identical speeds, lifetimes and sites of network incorporation. Inhibition of actin retrograde flow does not prevent WAVE2 association and disassociation with the membrane but does inhibit WAVE2 removal from the actin cortex. Our results suggest that membrane binding and diffusion expedites the recruitment of nucleation factors to a nucleation site independent of actin assembly, but after network incorporation, ongoing actin polymerization facilitates recycling of SCAR/WAVE and Arp2/3 complexes.
Kisielowski, C.; Frei, H.; Specht, P.; ...
2016-11-02
This article summarizes core aspects of beam-sample interactions in research that aims at exploiting the ability to detect single atoms at atomic resolution by mid-voltage transmission electron microscopy. Investigating the atomic structure of catalytic Co3O4 nanocrystals underscores how indispensable it is to rigorously control electron dose rates and total doses to understand native material properties on this scale. We apply in-line holography with variable dose rates to achieve this goal. Genuine object structures can be maintained if dose rates below ~100 e/Å²s are used and the contrast required for detection of single atoms is generated by capturing large image series. Threshold doses for the detection of single atoms are estimated. An increase of electron dose rates and total doses to common values for high resolution imaging of solids stimulates object excitations that restructure surfaces, interfaces, and defects and cause grain reorientation or growth. We observe a variety of previously unknown atom configurations in surface proximity of the Co3O4 spinel structure. These are hidden behind broadened diffraction patterns in reciprocal space but become visible in real space by solving the phase problem. Finally, an exposure of the Co3O4 spinel structure to water vapor or other gases induces drastic structure alterations that can be captured in this manner.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-15
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-831] Certain Electronic Devices for Capturing and Transmitting Images, and Components Thereof; Commission Determination Not To Review an Initial... certain electronic devices for capturing and transmitting images, and components thereof. The complaint...
Pāhoehoe flow cooling, discharge, and coverage rates from thermal image chronometry
Dehn, Jonathan; Hamilton, Christopher M.; Harris, A. J. L.; Herd, Richard A.; James, M.R.; Lodato, Luigi; Steffke, Andrea
2007-01-01
Theoretically- and empirically-derived cooling rates for active pāhoehoe lava flows show that surface cooling is controlled by conductive heat loss through a crust that is thickening with the square root of time. The model is based on a linear relationship that links log(time) with surface cooling. This predictable cooling behavior can be used to assess the age of recently emplaced sheet flows from their surface temperatures. Using a single thermal image, or image mosaic, this allows quantification of the variation in areal coverage rates and lava discharge rates over 48 hour periods prior to image capture. For pāhoehoe sheet flow at Kīlauea (Hawai`i) this gives coverage rates of 1–5 m²/min at discharge rates of 0.01–0.05 m³/s, increasing to ∼40 m²/min at 0.4–0.5 m³/s. Our thermal chronometry approach represents a quick and easy method of tracking flow advance over a three-day period using a single thermal snapshot.
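The log-linear cooling law described above can be inverted to date a flow surface from a single thermal image. A minimal sketch, with purely illustrative coefficients standing in for the calibrated values:

```python
# Hypothetical log-linear cooling law T(t) = a - b*log10(t), with t in minutes
# and T in deg C. The coefficients a and b below are illustrative, not the
# calibrated values from the study.
a, b = 500.0, 140.0

def surface_age_minutes(t_surface_c):
    """Invert the cooling law to estimate time since emplacement."""
    return 10.0 ** ((a - t_surface_c) / b)

# Under these coefficients, a 220 C surface is ~100 minutes old:
age = surface_age_minutes(220.0)
```

Binning a thermal image's pixels by the age computed this way, then differencing the binned areas, gives the areal coverage rates (and, with a flow thickness, the discharge rates) described above.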
Image super-resolution via adaptive filtering and regularization
NASA Astrophysics Data System (ADS)
Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming
2014-11-01
Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from the low-resolution image coupled with some prior knowledge as a regularization term. One classic method regularizes the image by total variation (TV) and/or wavelet or some other transform, which can introduce artifacts. To overcome these shortcomings, a new framework for single-image SR is proposed by utilizing an adaptive filter before regularization. The key to our model is that the adaptive filter is used to remove the spatial relevance among pixels first, and then only the high frequency (HF) part, which is sparser in the TV and transform domains, is considered as the regularization term. Concretely, through transforming the original model, the SR problem can be solved by two alternate iteration sub-problems. Before each iteration, the adaptive filter should be updated to estimate the initial HF. A high quality HF part and HR image can be obtained by solving the first and second sub-problem, respectively. In the experimental part, a set of remote sensing images captured by Landsat satellites are tested to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with the state-of-the-art methods.
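As a rough illustration of the filter-then-regularize idea (not the paper's adaptive-filter algorithm), the toy below alternates a data-consistency step with soft-thresholding of only the high-frequency residual left after a simple low-pass filter; the filter, operators, and parameters are all illustrative:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box low-pass filter (a crude stand-in for the adaptive filter)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def downsample(img, f=2):
    return img[::f, ::f]

def upsample(img, f=2):
    return np.kron(img, np.ones((f, f)))

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sr_sketch(lr, f=2, iters=30, step=0.5, lam=0.001):
    """Toy SR loop: back-project the low-resolution residual (data term),
    then shrink only the high-frequency part (the regularization term)."""
    hr = upsample(lr, f)
    for _ in range(iters):
        resid = lr - downsample(box_blur(hr), f)
        hr = hr + step * upsample(resid, f)
        low = box_blur(hr)
        hr = low + soft_threshold(hr - low, lam)
    return hr

rng = np.random.default_rng(0)
truth = box_blur(rng.random((32, 32)), 5)   # smooth synthetic HR scene
lr = downsample(box_blur(truth))            # simulated LR observation
recon = sr_sketch(lr)

err_naive = float(np.mean((lr - downsample(box_blur(upsample(lr)))) ** 2))
err_recon = float(np.mean((lr - downsample(box_blur(recon))) ** 2))
```

The reconstruction drives the data residual below that of plain pixel replication while regularizing only the sparser high-frequency part, mirroring the split the abstract describes.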
Status and outlook of CHIP-TRAP: The Central Michigan University high precision Penning trap
NASA Astrophysics Data System (ADS)
Redshaw, M.; Bryce, R. A.; Hawks, P.; Gamage, N. D.; Hunt, C.; Kandegedara, R. M. E. B.; Ratnayake, I. S.; Sharp, L.
2016-06-01
At Central Michigan University we are developing a high-precision Penning trap mass spectrometer (CHIP-TRAP) that will focus on measurements with long-lived radioactive isotopes. CHIP-TRAP will consist of a pair of hyperbolic precision-measurement Penning traps, and a cylindrical capture/filter trap in a 12 T magnetic field. Ions will be produced by external ion sources, including a laser ablation source, and transported to the capture trap at low energies enabling ions of a given m / q ratio to be selected via their time-of-flight. In the capture trap, contaminant ions will be removed with a mass-selective rf dipole excitation and the ion of interest will be transported to the measurement traps. A phase-sensitive image charge detection technique will be used for simultaneous cyclotron frequency measurements on single ions in the two precision traps, resulting in a reduction in statistical uncertainty due to magnetic field fluctuations.
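The measurement principle rests on the ideal-trap relation f_c = qB/(2πm), so a frequency ratio between two simultaneously trapped ions gives their mass ratio directly, with common-mode field fluctuations cancelling. A minimal sketch (the A = 100, singly charged ion is illustrative; the 12 T field matches CHIP-TRAP):

```python
import math

# Ideal Penning-trap relation: the true cyclotron frequency of a trapped ion is
#   f_c = q * B / (2 * pi * m).
Q_E = 1.602176634e-19      # elementary charge, C
AMU = 1.66053906660e-27    # atomic mass unit, kg

def cyclotron_frequency(mass_u, charge_e, b_tesla):
    """Cyclotron frequency in Hz for mass in u, charge in units of e, field in T."""
    return charge_e * Q_E * b_tesla / (2.0 * math.pi * mass_u * AMU)

# Illustrative A = 100, q = 1+ ion in a 12 T field (~1.8 MHz):
f_c = cyclotron_frequency(100.0, 1, 12.0)
```

Because f_c scales as 1/m, comparing the frequencies of two ions measured at the same instant, as in the two-trap phase-sensitive scheme above, suppresses the magnetic-field fluctuation term in the mass ratio.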
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, Andrew J.; Miller, Brian W.; Robinson, Sean M.
Imaging technology is generally considered too invasive for arms control inspections due to the concern that it cannot properly secure sensitive features of the inspected item. However, this same sensitive information, which could include direct information on the form and function of the items under inspection, could be used for robust arms control inspections. The single-pixel X-ray imager (SPXI) is introduced as a method to make such inspections, capturing the salient spatial information of an object in a secure manner while never forming an actual image. The method is built on the theory of compressive sensing and the single-pixel optical camera. The performance of the system is quantified here using simulated inspections of simple objects. Measures of the robustness and security of the method are introduced and used to determine how such an inspection would be made which can maintain high robustness and security. In particular, it is found that an inspection with low noise (<1%) and high undersampling (>256×) exhibits high robustness and security.
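A minimal sketch of the single-pixel measurement model together with a standard compressive-sensing solver (ISTA). The scene size, mask count, sparsity level, and solver are illustrative assumptions, not SPXI's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 256, 64                 # scene pixels (flattened), measurements (4x undersampled)

x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0      # sparse toy scene

A = rng.integers(0, 2, size=(m, n)).astype(float)  # one random binary mask per row
A = (A - 0.5) / np.sqrt(m)                         # zero-mean, scaled masks
y = A @ x_true                                     # one detector reading per mask

def ista(A, y, lam=0.01, iters=500):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + step * (A.T @ (y - A @ x))
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return x

x_hat = ista(A, y)
rel_err = float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The security property discussed above comes from the fact that only the mask-weighted sums y, never a focal-plane image, ever exist on the detector side.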
2017-01-01
We report an approach, named chemTEM, to follow chemical transformations at the single-molecule level with the electron beam of a transmission electron microscope (TEM) applied as both a tunable source of energy and a sub-angstrom imaging probe. Deposited on graphene, disk-shaped perchlorocoronene molecules are precluded from intermolecular interactions. This allows monomolecular transformations to be studied at the single-molecule level in real time and reveals chlorine elimination and reactive aryne formation as a key initial stage of multistep reactions initiated by the 80 keV e-beam. Under the same conditions, perchlorocoronene confined within a nanotube cavity, where the molecules are situated in very close proximity to each other, enables imaging of intermolecular reactions, starting with the Diels–Alder cycloaddition of a generated aryne, followed by rearrangement of the angular adduct to a planar polyaromatic structure and the formation of a perchlorinated zigzag nanoribbon of graphene as the final product. ChemTEM enables the entire process of polycondensation, including the formation of metastable intermediates, to be captured in a one-shot “movie”. A molecule with a similar size and shape but with a different chemical composition, octathio[8]circulene, under the same conditions undergoes another type of polycondensation via thiyl biradical generation and subsequent reaction leading to polythiophene nanoribbons with irregular edges incorporating bridging sulfur atoms. Graphene or carbon nanotubes supporting the individual molecules during chemTEM studies ensure that the elastic interactions of the molecules with the e-beam are the dominant forces that initiate and drive the reactions we image. Our ab initio DFT calculations explicitly incorporating the e-beam in the theoretical model correlate with the chemTEM observations and give a mechanism for direct control not only of the type of the reaction but also of the reaction rate. 
Selection of the appropriate e-beam energy and control of the dose rate in chemTEM enabled imaging of reactions on a time frame commensurate with TEM image capture rates, revealing atomistic mechanisms of previously unknown processes.
Jones, Christopher P; Brenner, Ceri M; Stitt, Camilla A; Armstrong, Chris; Rusby, Dean R; Mirfayzi, Seyed R; Wilson, Lucy A; Alejo, Aarón; Ahmed, Hamad; Allott, Ric; Butler, Nicholas M H; Clarke, Robert J; Haddock, David; Hernandez-Gomez, Cristina; Higginson, Adam; Murphy, Christopher; Notley, Margaret; Paraskevoulakos, Charilaos; Jowsey, John; McKenna, Paul; Neely, David; Kar, Satya; Scott, Thomas B
2016-11-15
A small scale sample nuclear waste package, consisting of a 28 mm diameter uranium penny encased in grout, was imaged by absorption contrast radiography using a single pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil, in order to generate a bright burst of highly penetrating X-rays (with energy >500 keV), with a source size of <0.5 mm. BAS-TR and BAS-SR image plates were used for image capture, alongside a newly developed Thallium-doped Caesium Iodide scintillator-based detector coupled to CCD chips. The uranium penny was clearly resolved to sub-mm accuracy over a 30 cm² scan area from a single shot acquisition. In addition, neutron generation was demonstrated in situ with the X-ray beam, with a single shot, thus demonstrating the potential for multi-modal criticality testing of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high density, nuclear material. With recent developments of high-power laser systems, to 10 Hz operation, a laser-driven multi-modal beamline for waste monitoring applications is envisioned. Copyright © 2016. Published by Elsevier B.V.
Relating transverse ray error and light fields in plenoptic camera images
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim; Tyo, J. Scott
2013-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
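The 2D-spatial/2D-angular decomposition described above amounts to a reshape of the raw sensor image. A sketch assuming an idealized square lenslet grid with an integer number of pixels per lenslet (all dimensions illustrative):

```python
import numpy as np

# Unpack a plenoptic sensor image into the 4D light field L[s, t, u, v]:
# (s, t) index the lenslet (spatial samples), (u, v) the pixel beneath it
# (angular samples, i.e. exit-pupil coordinates).
p = 8                      # pixels per lenslet
S, T = 16, 16              # lenslet grid

# Synthetic sensor frame standing in for a captured plenoptic image:
sensor = np.arange(S * p * T * p, dtype=float).reshape(S * p, T * p)

# (S*p, T*p) -> (S, p, T, p) -> (S, T, p, p)
light_field = sensor.reshape(S, p, T, p).transpose(0, 2, 1, 3)

# A sub-aperture image: one fixed angular sample (u, v) across all lenslets
sub_aperture = light_field[:, :, 3, 5]
```

Fixing (u, v) and varying (s, t) gives a perspective view through one pupil position, which is the same 4D parameterization the transverse ray error function lives on.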
Improved grid-noise removal in single-frame digital moiré 3D shape measurement
NASA Astrophysics Data System (ADS)
Mohammadi, Fatemeh; Kofman, Jonathan
2016-11-01
A single-frame grid-noise removal technique was developed for application in single-frame digital-moiré 3D shape measurement. The ability of the stationary wavelet transform (SWT) to prevent oscillation artifacts near discontinuities, and the ability of the Fourier transform (FFT) applied to wavelet coefficients to separate grid-noise from useful image information, were combined in a new technique, SWT-FFT, to remove grid-noise from moiré-pattern images generated by digital moiré. In comparison to previous grid-noise removal techniques in moiré, SWT-FFT avoids the requirement for mechanical translation of optical components and capture of multiple frames, to enable single-frame moiré-based measurement. Experiments using FFT, Discrete Wavelet Transform (DWT), DWT-FFT, and SWT-FFT were performed on moiré-pattern images containing grid noise, generated by digital moiré, for several test objects. SWT-FFT had the best performance in removing high-frequency grid-noise, both straight and curved lines, minimizing artifacts, and preserving the moiré pattern without blurring and degradation. SWT-FFT also had the lowest noise amplitude in the reconstructed height and lowest roughness index for all test objects, indicating best grid-noise removal in comparison to the other techniques.
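The Fourier-domain half of the technique can be illustrated in isolation: periodic grid noise concentrates into a few spectral peaks, which a small notch can remove. A numpy-only sketch of FFT notch filtering, without the SWT stage; the synthetic image and the 16-cycle grid frequency are illustrative:

```python
import numpy as np

# Smooth moiré-like content plus additive periodic "grid" noise.
N = 128
yy, xx = np.mgrid[0:N, 0:N]
image = np.sin(2 * np.pi * yy / 64.0)            # slowly varying content
grid = 0.5 * np.sin(2 * np.pi * xx * 16 / N)     # vertical grid lines, 16 cycles
noisy = image + grid

# Notch out the two conjugate grid peaks in the centred 2D spectrum.
F = np.fft.fftshift(np.fft.fft2(noisy))
c = N // 2
for k in (16, -16):
    F[c - 1:c + 2, c + k - 1:c + k + 2] = 0.0    # 3x3 notch around each peak
cleaned = np.real(np.fft.ifft2(np.fft.ifftshift(F)))

err_before = float(np.mean((noisy - image) ** 2))
err_after = float(np.mean((cleaned - image) ** 2))
```

In SWT-FFT this filtering is applied to the stationary-wavelet coefficients rather than to the image itself, which is what suppresses the oscillation artifacts near discontinuities that a direct FFT filter can introduce.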
The Quanta Image Sensor: Every Photon Counts
Fossum, Eric R.; Ma, Jiaju; Masoodian, Saleh; Anzagira, Leo; Zizza, Rachel
2016-01-01
The Quanta Image Sensor (QIS) was conceived when contemplating shrinking pixel sizes and storage capacities, and the steady increase in digital processing power. In the single-bit QIS, the output of each field is a binary bit plane, where each bit represents the presence or absence of at least one photoelectron in a photodetector. A series of bit planes is generated through high-speed readout, and a kernel or “cubicle” of bits (x, y, t) is used to create a single output image pixel. The size of the cubicle can be adjusted post-acquisition to optimize image quality. The specialized sub-diffraction-limit photodetectors in the QIS are referred to as “jots” and a QIS may have a gigajot or more, read out at 1000 fps, for a data rate exceeding 1 Tb/s. Basically, we are trying to count photons as they arrive at the sensor. This paper reviews the QIS concept and its imaging characteristics. Recent progress towards realizing the QIS for commercial and scientific purposes is discussed. This includes implementation of a pump-gate jot device in a 65 nm CIS BSI process yielding read noise as low as 0.22 e− r.m.s. and conversion gain as high as 420 µV/e−, power efficient readout electronics, currently as low as 0.4 pJ/b in the same process, creating high dynamic range images from jot data, and understanding the imaging characteristics of single-bit and multi-bit QIS devices. The QIS represents a possible major paradigm shift in image capture.
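The cubicle readout described above is simply a sum of bits over a small space-time kernel. A minimal sketch (bit-plane sizes, kernel, and hit probability all illustrative):

```python
import numpy as np

# Toy QIS data: a stack of binary bit planes (t, y, x); each bit marks whether
# a jot collected at least one photoelectron during that field.
rng = np.random.default_rng(3)
T, Y, X = 16, 64, 64
bit_planes = (rng.random((T, Y, X)) < 0.2).astype(np.uint8)

# Form one output pixel per (all fields) x 4 x 4 "cubicle" of bits.
ky, kx = 4, 4
image = bit_planes.reshape(T, Y // ky, ky, X // kx, kx).sum(axis=(0, 2, 4))
# image is (16, 16); each pixel counts 0..T*ky*kx = 256 jot hits
```

Because the bit planes are retained, the cubicle dimensions can be re-chosen after acquisition, trading spatial and temporal resolution against dynamic range exactly as the abstract notes.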
Wang, Lei; Pedersen, Peder C; Strong, Diane M; Tulu, Bengisu; Agu, Emmanuel; Ignotz, Ron; He, Qian
2015-08-07
For individuals with type 2 diabetes, foot ulcers represent a significant health issue. The aim of this study is to design and evaluate a wound assessment system to help wound clinics assess patients with foot ulcers in a way that complements their current visual examination and manual measurements of their foot ulcers. The physical components of the system consist of an image capture box, a smartphone for wound image capture and a laptop for analyzing the wound image. The wound image assessment algorithms calculate the overall wound area, color segmented wound areas, and a healing score, to provide a quantitative assessment of the wound healing status both for a single wound image and comparisons of subsequent images to an initial wound image. The system was evaluated by assessing foot ulcers for 12 patients in the Wound Clinic at University of Massachusetts Medical School. As performance measures, the Matthews correlation coefficient (MCC) value for the wound area determination algorithm tested on 32 foot ulcer images was .68. The clinical validity of our healing score algorithm relative to the experienced clinicians was measured by Krippendorff's alpha coefficient (KAC) and ranged from .42 to .81. Our system provides a promising real-time method for wound assessment based on image analysis. Clinical comparisons indicate that the optimized mean-shift-based algorithm is well suited for wound area determination. Clinical evaluation of our healing score algorithm shows its potential to provide clinicians with a quantitative method for evaluating wound healing status. © 2015 Diabetes Technology Society.
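The Matthews correlation coefficient used to score the wound-area algorithm is computed from the pixel-level confusion matrix. A minimal sketch with illustrative counts:

```python
import math

# MCC from true/false positive/negative pixel counts (values are illustrative,
# not taken from the study's 32 foot ulcer images).
def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

score = mcc(tp=700, tn=8600, fp=350, fn=350)   # ~0.63 for these counts
```

Unlike plain pixel accuracy, MCC stays informative when wound pixels are a small minority of the image, which is the typical case in these photographs.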
Improved resistivity imaging of groundwater solute plumes using POD-based inversion
NASA Astrophysics Data System (ADS)
Oware, E. K.; Moysey, S. M.; Khan, T.
2012-12-01
We propose a new approach for enforcing physics-based regularization in electrical resistivity imaging (ERI) problems. The approach utilizes a basis-constrained inversion where an optimal set of basis vectors is extracted from training data by Proper Orthogonal Decomposition (POD). The key aspect of the approach is that Monte Carlo simulation of flow and transport is used to generate a training dataset, thereby intrinsically capturing the physics of the underlying flow and transport models in a non-parametric form. POD allows for these training data to be projected onto a subspace of the original domain, resulting in the extraction of a basis for the inversion that captures characteristics of the groundwater flow and transport system, while simultaneously allowing for dimensionality reduction of the original problem in the projected space. We use two different synthetic transport scenarios in heterogeneous media to illustrate how the POD-based inversion compares with standard Tikhonov and coupled inversion. The first scenario had a single source zone leading to a unimodal solute plume (synthetic #1), whereas, the second scenario had two source zones that produced a bimodal plume (synthetic #2). For both coupled inversion and the POD approach, the conceptual flow and transport model used considered only a single source zone for both scenarios. Results were compared based on multiple metrics (concentration root-mean square error (RMSE), peak concentration, and total solute mass). In addition, results for POD inversion based on 3 different data densities (120, 300, and 560 data points) and varying number of selected basis images (100, 300, and 500) were compared. For synthetic #1, we found that all three methods provided qualitatively reasonable reproduction of the true plume. Quantitatively, the POD inversion performed best overall for each metric considered. 
Moreover, since synthetic #1 was consistent with the conceptual transport model, a small number of basis vectors (100) contained enough a priori information to constrain the inversion. Increasing the amount of data or number of selected basis images did not translate into significant improvement in imaging results. For synthetic #2, the RMSE and error in total mass were lowest for the POD inversion. However, the peak concentration was significantly overestimated by the POD approach. Regardless, the POD-based inversion was the only technique that could capture the bimodality of the plume in the reconstructed image, thus providing critical information that could be used to reconceptualize the transport problem. We also found that, in the case of synthetic #2, increasing the number of resistivity measurements and the number of selected basis vectors allowed for significant improvements in the reconstructed images.
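The POD step itself can be sketched with a toy plume model standing in for the Monte Carlo flow-and-transport runs; in the full method the reduced-space coefficients are estimated from resistivity data rather than by direct projection, so everything below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
nx = ny = 32
yy, xx = np.mgrid[0:ny, 0:nx]

def plume(cx, cy, sx, sy):
    """Toy Gaussian plume standing in for a transport-simulation snapshot."""
    return np.exp(-((xx - cx) ** 2 / (2 * sx ** 2)
                    + (yy - cy) ** 2 / (2 * sy ** 2)))

# Monte Carlo training set: random source locations/spreads, one column each.
snapshots = np.column_stack([
    plume(rng.uniform(8, 24), rng.uniform(8, 24),
          rng.uniform(2, 6), rng.uniform(2, 6)).ravel()
    for _ in range(200)
])

# POD via SVD: left singular vectors, ranked by energy, are the basis images.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :20]                      # keep 20 basis vectors (dim. reduction)

# Represent an unseen plume with just 20 coefficients.
target = plume(15.0, 18.0, 4.0, 3.0).ravel()
coeffs = basis.T @ target
rel_err = float(np.linalg.norm(basis @ coeffs - target)
                / np.linalg.norm(target))
```

Constraining the inversion to this low-dimensional basis is what injects the transport physics: any reconstructed image is, by construction, a combination of physically plausible plume shapes.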
Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung
2012-10-08
Speed enhancement of integral-imaging-based incoherent Fourier hologram capture using a graphics processing unit is reported. The integral-imaging-based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme on the graphics processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.
Improving the image discontinuous problem by using color temperature mapping method
NASA Astrophysics Data System (ADS)
Jeng, Wei-De; Mang, Ou-Yang; Lai, Chien-Cheng; Wu, Hsien-Ming
2011-09-01
This article mainly focuses on image processing for the radial imaging capsule endoscope (RICE). First, RICE was used to capture the images; in the experiment, a pig's intestines were imaged. However, the images captured by RICE were blurred, because RICE suffers from aberration problems in the image center, and low light uniformity further degrades image quality. Image processing can be used to address these problems. Therefore, images captured at different times can be connected using a Pearson correlation coefficient algorithm, and the color temperature mapping method can be used to improve the discontinuity problem in the connection region.
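The strip-matching step can be sketched as a search for the overlap width that maximizes the Pearson correlation coefficient; the synthetic scene and strip geometry below are illustrative stand-ins for consecutive RICE frames:

```python
import numpy as np

rng = np.random.default_rng(5)

def pearson(a, b):
    """Pearson correlation coefficient between two equally shaped patches."""
    a, b = a.ravel(), b.ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two strips cut from one synthetic scene, overlapping by 20 columns.
scene = rng.random((64, 120))
left, right = scene[:, :80], scene[:, 60:]

# Pick the candidate overlap width with the highest correlation.
best_overlap = max(range(5, 41),
                   key=lambda w: pearson(left[:, -w:], right[:, :w]))
```

Once the overlap is found, the color temperature mapping described above smooths the residual brightness/color mismatch across the seam.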
NASA Astrophysics Data System (ADS)
Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake
2003-07-01
4DVideo is creating a general purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges.) The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two- and three-dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated, stand-alone, data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end-user to capture and treat image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. 
The latter is particularly useful for high-speed applications where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front-ends," designed for digital cameras, are anticipated.
Single-Cell Analysis of [18F]Fluorodeoxyglucose Uptake by Droplet Radiofluidics.
Türkcan, Silvan; Nguyen, Julia; Vilalta, Marta; Shen, Bin; Chin, Frederick T; Pratx, Guillem; Abbyad, Paul
2015-07-07
Radiolabels can be used to detect small biomolecules with high sensitivity and specificity without interfering with the biochemical activity of the labeled molecule. For instance, the radiolabeled glucose analogue, [18F]fluorodeoxyglucose (FDG), is routinely used in positron emission tomography (PET) scans for cancer diagnosis, staging, and monitoring. However, despite their widespread usage, conventional radionuclide techniques are unable to measure the variability and modulation of FDG uptake in single cells. We present here a novel microfluidic technique, dubbed droplet radiofluidics, that can measure radiotracer uptake for single cells encapsulated into an array of microdroplets. The advantages of this approach are multiple. First, droplets can be quickly and easily positioned in a predetermined pattern for optimal imaging throughput. Second, droplet encapsulation reduces cell efflux as a confounding factor, because any effluxed radionuclide is trapped in the droplet. Last, multiplexed measurements can be performed using fluorescent labels. In this new approach, intracellular radiotracers are imaged on a conventional fluorescence microscope by capturing individual flashes of visible light that are produced as individual positrons, emitted during radioactive decay, traverse a scintillator plate placed below the cells. This method is used to measure the cell-to-cell heterogeneity in the uptake of tracers such as FDG in cell lines and cultured primary cells. The capacity of the platform to perform multiplexed measurements was demonstrated by measuring differential FDG uptake in single cells subjected to different incubation conditions and expressing different types of glucose transporters. This method opens many new avenues of research in basic cell biology and human disease by capturing the full range of stochastic variations in highly heterogeneous cell populations in a repeatable and high-throughput manner.
NASA Astrophysics Data System (ADS)
Großerueschkamp, Frederik; Bracht, Thilo; Diehl, Hanna C.; Kuepper, Claus; Ahrens, Maike; Kallenbach-Thieltges, Angela; Mosig, Axel; Eisenacher, Martin; Marcus, Katrin; Behrens, Thomas; Brüning, Thomas; Theegarten, Dirk; Sitek, Barbara; Gerwert, Klaus
2017-03-01
Diffuse malignant mesothelioma (DMM) is a heterogeneous malignant neoplasia manifesting with three subtypes: epithelioid, sarcomatoid and biphasic. DMM exhibit a high degree of spatial heterogeneity that complicates a thorough understanding of the underlying different molecular processes in each subtype. We present a novel approach to spatially resolve the heterogeneity of a tumour in a label-free manner by integrating FTIR imaging and laser capture microdissection (LCM). Subsequent proteome analysis of the dissected homogenous samples provides in addition molecular resolution. FTIR imaging resolves tumour subtypes within tissue thin-sections in an automated and label-free manner with accuracy of about 85% for DMM subtypes. Even in highly heterogeneous tissue structures, our label-free approach can identify small regions of interest, which can be dissected as homogeneous samples using LCM. Subsequent proteome analysis provides a location specific molecular characterization. Applied to DMM subtypes, we identify 142 differentially expressed proteins, including five protein biomarkers commonly used in DMM immunohistochemistry panels. Thus, FTIR imaging resolves not only morphological alteration within tissue but it resolves even alterations at the level of single proteins in tumour subtypes. Our fully automated workflow FTIR-guided LCM opens new avenues collecting homogeneous samples for precise and predictive biomarkers from omics studies.
A brain electrical signature of left-lateralized semantic activation from single words.
Koppehele-Gossel, Judith; Schnuerch, Robert; Gibbons, Henning
2016-01-01
Lesion and imaging studies consistently indicate a left-lateralization of semantic language processing in human temporo-parietal cortex. Surprisingly, electrocortical measures, which allow a direct assessment of brain activity and the tracking of cognitive functions with millisecond precision, have not yet been used to capture this hemispheric lateralization, at least with respect to posterior portions of this effect. Using event-related potentials, we employed a simple single-word reading paradigm to compare neural activity during three tasks requiring different degrees of semantic processing. As expected, we were able to derive a simple temporo-parietal left-right asymmetry index peaking around 300 ms into word processing that neatly tracks the degree of semantic activation. The validity of this measure in specifically capturing verbal semantic activation was further supported by a significant relation to verbal intelligence. We thus posit that it represents a promising tool to monitor verbal semantic processing in the brain with little technological effort and in a minimal experimental setup. Copyright © 2016 Elsevier Inc. All rights reserved.
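One generic form such a lateralization index can take is a normalized left-right difference of ERP amplitudes; the electrode averages, time window, and exact index form below are illustrative assumptions, not the paper's published definition:

```python
# Generic ERP lateralization index: normalized difference of mean amplitudes
# over homologous left and right temporo-parietal sites around 300 ms
# post-word-onset. Amplitudes (in microvolts) are hypothetical.
left_amp_uv = 4.2    # hypothetical mean amplitude, left temporo-parietal sites
right_amp_uv = 2.8   # hypothetical mean amplitude, right temporo-parietal sites

asym_index = (left_amp_uv - right_amp_uv) / (left_amp_uv + right_amp_uv)
# under this convention, positive values indicate left-lateralized activation
```

An index of this kind requires only a handful of electrodes and a single difference computation, consistent with the "minimal experimental setup" claim above.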
Accurate Morphology Preserving Segmentation of Overlapping Cells based on Active Contours
Molnar, Csaba; Jermyn, Ian H.; Kato, Zoltan; Rahkama, Vesa; Östling, Päivi; Mikkonen, Piia; Pietiäinen, Vilja; Horvath, Peter
2016-01-01
The identification of fluorescently stained cell nuclei is the basis of cell detection, segmentation, and feature extraction in high content microscopy experiments. The nuclear morphology of single cells is also one of the essential indicators of phenotypic variation. However, the cells used in experiments can lose their contact inhibition, and can therefore pile up on top of each other, making the detection of single cells extremely challenging using current segmentation methods. The model we present here can detect cell nuclei and their morphology even in high-confluency cell cultures with many overlapping cell nuclei. We combine the “gas of near circles” active contour model, which favors circular shapes but allows slight variations around them, with a new data model. This captures a common property of many microscopic imaging techniques: the intensities from superposed nuclei are additive, so that two overlapping nuclei, for example, have a total intensity that is approximately double the intensity of a single nucleus. We demonstrate the power of our method on microscopic images of cells, comparing the results with those obtained from a widely used approach, and with manual image segmentations by experts. PMID:27561654
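The additive-intensity data model can be illustrated with a tiny numpy sketch (synthetic Gaussian blobs as stand-ins for stained nuclei; all sizes and values are illustrative, not the paper's model parameters): where two nuclei superpose, the total intensity is roughly double that of one nucleus.

```python
import numpy as np

def gaussian_nucleus(shape, center, sigma=6.0, peak=1.0):
    """Render a single nucleus as a 2-D Gaussian intensity blob."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    r2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    return peak * np.exp(-r2 / (2.0 * sigma ** 2))

# Two nuclei whose centers are closer than their diameters: they overlap.
shape = (64, 64)
a = gaussian_nucleus(shape, (32, 28))
b = gaussian_nucleus(shape, (32, 36))

# Additive data model: superposed nuclei simply sum their intensities,
# so the overlap region is roughly twice as bright as a single nucleus.
image = a + b

midpoint = image[32, 32]   # inside the overlap of both nuclei
single_peak = a[32, 28]    # peak of one isolated nucleus
print(midpoint, single_peak)
```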
NASA Astrophysics Data System (ADS)
Bogan, Michael J.; Starodub, Dmitri; Hampton, Christina Y.; Sierra, Raymond G.
2010-10-01
The first of its kind, the Free electron LASer facility in Hamburg, FLASH, produces soft x-ray pulses with unprecedented properties (10 fs, 6.8-47 nm, 10^12 photons per pulse, 20 µm diameter). One of the seminal FLASH experiments is single-pulse coherent x-ray diffractive imaging (CXDI). CXDI utilizes the ultrafast and ultrabright pulses to overcome resolution limitations in x-ray microscopy imposed by x-ray-induced damage to the sample by 'diffracting before destroying' the sample on sub-picosecond timescales. For many lensless imaging algorithms used for CXDI it is convenient when the data satisfy an oversampling constraint that requires the sample to be an isolated object, i.e. an individual 'free-standing' portion of disordered matter delivered to the centre of the x-ray focus. By definition, this type of matter is an aerosol. This paper will describe the role of aerosol science methodologies used for the validation of the 'diffract before destroy' hypothesis and the execution of the first single-particle CXDI experiments being developed for biological imaging. FLASH CXDI now enables the highest resolution imaging of single micron-sized or smaller airborne particulate matter to date while preserving the native substrate-free state of the aerosol. Electron microscopy offers higher resolution for single-particle analysis but the aerosol must be captured on a substrate, potentially modifying the particle morphology. Thus, FLASH is poised to contribute significant advancements in our knowledge of aerosol morphology and dynamics. As an example, we simulate CXDI of combustion particle (soot) morphology and introduce the concept of extracting radius of gyration of fractal aggregates from single-pulse x-ray diffraction data. Future upgrades to FLASH will enable higher spatially and temporally resolved single-particle aerosol dynamics studies, filling a critical technological need in aerosol science and nanotechnology.
Many of the methodologies described for FLASH will directly translate to use at hard x-ray free electron lasers.
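The radius of gyration that such an analysis extracts from diffraction data has a simple real-space definition; the sketch below (illustrative monomer coordinates, not the paper's diffraction inversion) computes it directly for a toy soot-like aggregate.

```python
import numpy as np

def radius_of_gyration(points):
    """Rg of a particle aggregate: RMS distance of the monomers from the
    center of mass, the quantity a Guinier-type analysis of small-angle
    diffraction data estimates."""
    points = np.asarray(points, dtype=float)
    com = points.mean(axis=0)
    return np.sqrt(((points - com) ** 2).sum(axis=1).mean())

# Toy aggregate: a short chain of five monomers (unit spacing).
chain = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (2, 1, 0), (3, 1, 0)]
print(round(radius_of_gyration(chain), 4))  # → 1.1314
```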
Linking brain, mind and behavior.
Makeig, Scott; Gramann, Klaus; Jung, Tzyy-Ping; Sejnowski, Terrence J; Poizner, Howard
2009-08-01
Cortical brain areas and dynamics evolved to organize motor behavior in our three-dimensional environment also support more general human cognitive processes. Yet traditional brain imaging paradigms typically allow and record only minimal participant behavior, then reduce the recorded data to single map features of averaged responses. To more fully investigate the complex links between distributed brain dynamics and motivated natural behavior, we propose the development of wearable mobile brain/body imaging (MoBI) systems that continuously capture the wearer's high-density electrical brain and muscle signals, three-dimensional body movements, audiovisual scene and point of regard, plus new data-driven analysis methods to model their interrelationships. The new imaging modality should allow new insights into how spatially distributed brain dynamics support natural human cognition and agency.
Hybrid-modality ocular imaging using a clinical ultrasound system and nanosecond pulsed laser.
Lim, Hoong-Ta; Matham, Murukeshan Vadakke
2015-07-01
Hybrid optical modality imaging is a special type of multimodality imaging that has been used extensively in recent years to harness the strengths of different imaging methods and to furnish complementary information beyond that provided by any individual method. We present a hybrid-modality imaging system based on a commercial clinical ultrasound imaging (USI) system using a linear array ultrasound transducer (UST) and a tunable nanosecond pulsed laser as the source. The integrated system uses photoacoustic imaging (PAI) and USI for ocular imaging to provide complementary absorption and structural information of the eye. In this system, B-mode images from PAI and USI are acquired at 10 Hz and about 40 Hz, respectively. A linear array UST makes the system much faster than other ocular imaging systems that use a single-element UST to form B-mode images. The results show that the proposed instrumentation is able to incorporate PAI and USI in a single setup. The feasibility and efficiency of the developed probe system were illustrated using enucleated pig eyes as test samples. It was demonstrated that PAI could successfully capture photoacoustic signals from the iris, anterior lens surface, and posterior pole, while USI could accomplish the mapping of the eye to reveal structures such as the cornea, anterior chamber, lens, iris, and posterior pole. This system and the proposed methodology are expected to enable ocular disease diagnostic applications and can be used as a preclinical imaging system.
Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras
NASA Astrophysics Data System (ADS)
Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro
2018-03-01
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.
Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro
2012-09-10
We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The Fast Fourier transform is performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
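The per-plane FFT workload at the heart of hologram generation can be sketched in numpy with the standard angular-spectrum propagation method (a stand-in for the paper's CUFFT-based GPU pipeline; the grid size, wavelength, and distance below are illustrative):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z using the FFT-based
    angular-spectrum method: the kind of per-plane FFT work that systems
    like the one described offload to GPUs via CUFFT."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Transfer function of free space (evanescent components clipped).
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A small square aperture propagated 5 mm at 633 nm, 10 µm pixel pitch.
n, dx = 128, 10e-6
field = np.zeros((n, n), dtype=complex)
field[48:80, 48:80] = 1.0
out = angular_spectrum_propagate(field, 633e-9, dx, 5e-3)

# Free-space propagation is unitary: total energy is conserved.
ratio = np.sum(np.abs(out) ** 2) / np.sum(np.abs(field) ** 2)
print(round(ratio, 6))  # 1.0
```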
Online coupled camera pose estimation and dense reconstruction from video
Medioni, Gerard; Kang, Zhuoliang
2016-11-01
A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
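The "consistent projection" selection among multiple candidate model points can be sketched as a reprojection-error test (a simplified stand-in for the patent's method; the pose, focal length, and helper names below are hypothetical, and production systems typically use RANSAC-based PnP solvers such as OpenCV's `solvePnPRansac`):

```python
import numpy as np

def project(points_3d, R, t, f):
    """Pinhole projection of 3-D model points with rotation R,
    translation t, and focal length f (square pixels, centered axis)."""
    cam = points_3d @ R.T + t
    return f * cam[:, :2] / cam[:, 2:3]

def consistent_subset(image_pts, candidate_model_pts, R, t, f, tol=1.0):
    """For each image feature with several candidate model matches, keep
    the candidate whose projection lands closest, and accept the pair
    only if the reprojection error is within tol pixels."""
    chosen = []
    for img, candidates in zip(image_pts, candidate_model_pts):
        errs = np.linalg.norm(project(np.asarray(candidates), R, t, f) - img, axis=1)
        k = int(np.argmin(errs))
        if errs[k] <= tol:
            chosen.append((tuple(img), k))
    return chosen

# Ground-truth pose: identity rotation, camera 5 units back from the scene.
R, t, f = np.eye(3), np.array([0.0, 0.0, 5.0]), 800.0
model = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
image_pts = project(model, R, t, f)

# Each image feature has the true model point plus a decoy candidate.
candidates = [[m, m + np.array([0.5, 0.5, 0.0])] for m in model]
sel = consistent_subset(image_pts, candidates, R, t, f)
print(sel)  # every feature picks candidate index 0 (the true match)
```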
Kroll, Alexandra; Haramagatti, Chandrashekara R.; Lipinski, Hans-Gerd; Wiemann, Martin
2017-01-01
Darkfield and confocal laser scanning microscopy both allow for the simultaneous observation of live cells and single nanoparticles. Accordingly, a characterization of nanoparticle uptake and intracellular mobility appears possible within living cells. Single particle tracking allows the size of a particle diffusing close to a cell to be measured. However, within the more complex environment of a cell's cytoplasm, normal, confined, or anomalous diffusion may occur together with directed motion. In this work we present a method to automatically classify and segment single trajectories into their respective motion types. Single trajectories were found to contain more than one motion type. We trained a random forest with 9 different features. The average error over all motion types for synthetic trajectories was 7.2%. The software was successfully applied to trajectories serving as positive controls for normal and constrained diffusion. Trajectories captured by nanoparticle tracking analysis served as the positive control for normal diffusion, while nanoparticles inserted into a diblock copolymer membrane were used to generate constrained diffusion. Finally, we segmented trajectories of diffusing (nano-)particles in V79 cells captured with both darkfield and confocal laser scanning microscopy. The software, called “TraJClassifier”, is freely available as an ImageJ/Fiji plugin via https://git.io/v6uz2. PMID:28107406
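One classic trajectory feature of the kind such a classifier might use is the scaling exponent of the mean squared displacement; the sketch below (an illustrative feature, not the TraJClassifier feature set) shows how it separates normal from confined motion:

```python
import numpy as np

def msd_exponent(track, max_lag=10):
    """Log-log slope alpha of the mean squared displacement (MSD):
    alpha ~ 1 for normal diffusion, << 1 for confined motion,
    ~ 2 for directed transport."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((track[k:] - track[:-k]) ** 2, axis=1))
                    for k in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return slope

rng = np.random.default_rng(0)

# Normal diffusion: an unconstrained 2-D random walk.
free = np.cumsum(rng.normal(0, 1, size=(2000, 2)), axis=0)
# Fully confined motion: positions decorrelate inside a small box,
# so the MSD plateaus almost immediately.
confined = rng.uniform(-1, 1, size=(2000, 2))

print(round(msd_exponent(free), 2), round(msd_exponent(confined), 2))
```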
Current Status of Single Particle Imaging with X-ray Lasers
Sun, Zhibin; Fan, Jiadong; Li, Haoyuan; ...
2018-01-22
The advent of ultrafast X-ray free-electron lasers (XFELs) opens the tantalizing possibility of the atomic-resolution imaging of reproducible objects such as viruses, nanoparticles, single molecules, clusters, and perhaps biological cells, achieving a resolution for single particle imaging better than a few tens of nanometers. Improving upon this is a significant challenge which has been the focus of a global single particle imaging (SPI) initiative launched in December 2014 at the Linac Coherent Light Source (LCLS), SLAC National Accelerator Laboratory, USA. A roadmap was outlined, and significant multi-disciplinary effort has since been devoted to work on the technical challenges of SPI, such as radiation damage, beam characterization, beamline instrumentation and optics, sample preparation and delivery and algorithm development at multiple institutions involved in the SPI initiative. Currently, the SPI initiative has achieved 3D imaging of rice dwarf virus (RDV) and coliphage PR772 viruses at ~10 nm resolution by using soft X-ray FEL pulses at the Atomic Molecular and Optical (AMO) instrument of LCLS. Meanwhile, diffraction patterns with signal above noise up to the corner of the detector with a resolution of ~6 Ångström (Å) were also recorded with hard X-rays at the Coherent X-ray Imaging (CXI) instrument, also at LCLS. Achieving atomic resolution is truly a grand challenge and there is still a long way to go in light of recent developments in electron microscopy. However, the potential for studying dynamics at physiological conditions and capturing ultrafast biological, chemical and physical processes represents a tremendous potential application, attracting continued interest in pursuing further method development. In this paper, we give a brief introduction of SPI developments and look ahead to further method development.
High-Rate Data-Capture for an Airborne Lidar System
NASA Technical Reports Server (NTRS)
Valett, Susan; Hicks, Edward; Dabney, Philip; Harding, David
2012-01-01
A high-rate data system was required to capture the data for an airborne lidar system. A data system was developed that achieved up to 22 million (64-bit) events per second sustained data rate (1408 million bits per second), as well as short bursts (less than 4 s) at higher rates. All hardware used for the system was off the shelf, but carefully selected to achieve these rates. The system was used to capture laser fire, single-photon detection, and GPS data for the Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL). However, the system has applications for other laser altimeter systems (waveform-recording), mass spectroscopy, x-ray radiometry imaging, high-background-rate ranging lidar, and other similar areas where very high-speed data capture is needed. The data capture software was used for the SIMPL instrument that employs a micropulse, single-photon ranging measurement approach and has 16 data channels. The detected single photons are from two sources: those reflected from the target and solar background photons. The instrument is non-gated, so background photons are acquired for a range window of 13 km and can comprise many times the number of target photons. The highest background rate occurs when the atmosphere is clear, the Sun is high, and the target is a highly reflective surface such as snow. Under these conditions, the total data rate for the 16 channels combined is expected to be approximately 22 million events per second. For each photon detection event, the data capture software reads the relative time of receipt, with respect to a one-per-second absolute time pulse from a GPS receiver, from an event timer card with 0.1-ns precision, and records that information to a RAID (Redundant Array of Independent Disks) storage device. The relative time of laser pulse firings must also be read and recorded with the same precision. Each of the four event timer cards handles the throughput from four of the channels.
For each detection event, a flag is recorded that indicates the source channel. To accommodate the expected maximum count rate and also handle the other extreme of very low rates occurring during nighttime operations, the software requests a set amount of data from each of the event timer cards and buffers the data. The software notes if any of the cards did not return all the data requested and then accommodates that lower rate. The data is buffered to minimize the I/O overhead of writing the data to storage. Care was taken to optimize the reads from the cards, the speed of the I/O bus, and RAID configuration.
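The quoted throughput follows directly from the event size and rate; a quick check of the arithmetic:

```python
# Quoted rates: 22 million 64-bit events per second, sustained.
events_per_s = 22_000_000
bits_per_event = 64

bits_per_s = events_per_s * bits_per_event
print(bits_per_s / 1e6, "Mbit/s")    # 1408.0 Mbit/s, matching the text
print(bits_per_s / 8 / 1e6, "MB/s")  # 176.0 MB/s the RAID must sustain
```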
Development of a vision-based pH reading system
NASA Astrophysics Data System (ADS)
Hur, Min Goo; Kong, Young Bae; Lee, Eun Je; Park, Jeong Hoon; Yang, Seung Dae; Moon, Ha Jung; Lee, Dong Hoon
2015-10-01
pH paper is generally used for pH interpretation in the QC (quality control) process of radiopharmaceuticals. pH paper is easy to handle and useful for small samples such as radio-isotopes and radioisotope (RI)-labeled compounds for positron emission tomography (PET). However, pH-paper-based detecting methods may have some errors due to limitations of eyesight and inaccurate readings. In this paper, we report a new device for pH reading and related software. The proposed pH reading system is developed with a vision algorithm based on an RGB library. The pH reading system is divided into two parts. First is the reading device, which consists of a light source, a CCD camera and a data acquisition (DAQ) board. To improve the sensitivity, we utilize the three primary colors of the LED (light-emitting diode) in the reading device: using three colors is better than using a single white LED, because the paper's response at each primary wavelength can be assessed separately. The other part is a graphical user interface (GUI) program for the vision interface and report generation. The GUI program inserts the color codes of the pH paper into the database; in reading mode, the CCD camera then captures the pH paper and compares its color with the RGB database image. The software captures and reports information on the samples, such as pH results, captured images, and library images, and saves them as Excel files.
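The library-lookup step can be sketched as a nearest-neighbour match in RGB space (the library entries below are hypothetical illustrations, not real pH calibration data):

```python
import numpy as np

# Hypothetical calibration library: mean RGB of the pH paper at known
# pH values (example triples only; a real system would calibrate these).
LIBRARY = {
    4.0: (200, 120, 60),
    7.0: (120, 160, 80),
    10.0: (60, 100, 150),
}

def read_ph(sample_rgb):
    """Return the library pH whose reference colour is nearest to the
    captured sample colour (Euclidean distance in RGB space)."""
    sample = np.asarray(sample_rgb, dtype=float)
    return min(LIBRARY, key=lambda ph: np.linalg.norm(sample - LIBRARY[ph]))

print(read_ph((125, 155, 85)))  # → 7.0, the closest library entry
```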
Image processing system design for microcantilever-based optical readout infrared arrays
NASA Astrophysics Data System (ADS)
Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu
2012-12-01
Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory predicts that the technology offers high thermal detection sensitivity. It therefore has very broad application prospects in the field of high-performance infrared detection. This paper focuses on an image capturing and processing system for this new type of optical-readout uncooled infrared imaging technology based on MEMS. The image capturing and processing system consists of software and hardware. We build the core image processing hardware platform on TI's high-performance DSP chip, the TMS320DM642, and design the image capturing board around the MT9P031, Micron's high-frame-rate, low-power CMOS image sensor. Finally, we use Intel's LXT971A network transceiver to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design the video capture driver program based on TI's class/mini-driver model and the network output program based on the NDK kit for image capturing, processing, and transmission. Experiments show that the system achieves high capture resolution and fast processing speed, with network transmission speeds of up to 100 Mbps.
Intelligent image capture of cartridge cases for firearms examiners
NASA Astrophysics Data System (ADS)
Jones, Brett C.; Guerci, Joseph R.
1997-02-01
The FBI's DRUGFIRE™ system is a nationwide computerized networked image database of ballistic forensic evidence. This evidence includes images of cartridge cases and bullets obtained from both crime scenes and controlled test firings of seized weapons. Currently, the system is installed in over 80 forensic labs across the country and has enjoyed a high degree of success. In this paper, we discuss some of the issues and methods associated with providing a front-end semi-automated image capture system that simultaneously satisfies the often conflicting criteria of human examiners' visual perception and those associated with optimizing autonomous digital image correlation. Specifically, we detail the proposed processing chain of an intelligent image capture system (IICS), involving a real-time capture 'assistant' that assesses the quality of the image under test using a custom-designed neural network.
An Embedded Microretroreflector-Based Microfluidic Immunoassay Platform
Raja, Balakrishnan; Pascente, Carmen; Knoop, Jennifer; Shakarisaz, David; Sherlock, Tim; Kemper, Steven; Kourentzi, Katerina; Renzi, Ronald F.; Hatch, Anson V.; Olano, Juan; Peng, Bi-Hung; Ruchhoeft, Paul; Willson, Richard
2017-01-01
We present a microfluidic immunoassay platform based on the use of linear microretroreflectors embedded in a transparent polymer layer as an optical sensing surface, and micron-sized magnetic particles as light-blocking labels. Retroreflectors return light directly to its source and are highly detectable using inexpensive optics. The analyte is immuno-magnetically pre-concentrated from a sample and then captured on an antibody-modified microfluidic substrate comprised of embedded microretroreflectors, thereby blocking reflected light. Fluidic force discrimination is used to increase specificity of the assay, following which a difference imaging algorithm that can see single 3 μm magnetic particles without optical calibration is used to detect and quantify signal intensity from each sub-array of retroreflectors. We demonstrate the utility of embedded microretroreflectors as a new sensing modality through a proof-of-concept immunoassay for a small, obligate intracellular bacterial pathogen, Rickettsia conorii, the causative agent of Mediterranean Spotted Fever. The combination of large sensing area, optimized surface chemistry and microfluidic protocols, automated image capture and analysis, and high sensitivity of the difference imaging results in a sensitive immunoassay with a limit of detection of roughly 4000 R. conorii per mL. PMID:27025227
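The core of difference imaging, subtracting a particle-free reference frame and thresholding the drop in retroreflected intensity, can be sketched as follows (synthetic frames; the threshold and sizes are illustrative, and the published algorithm additionally quantifies signal per retroreflector sub-array):

```python
import numpy as np

def difference_image_count(reference, current, threshold=50):
    """Count pixels darkened by light-blocking magnetic labels: subtract
    the current frame from a particle-free reference of the bright
    retroreflector field and threshold the drop in reflected intensity."""
    drop = reference.astype(int) - current.astype(int)
    return int(np.count_nonzero(drop > threshold))

# Synthetic 20x20 bright retroreflector field; one bound 3x3-pixel
# "particle" blocks the retroreflected light in the current frame.
reference = np.full((20, 20), 220, dtype=np.uint8)
current = reference.copy()
current[8:11, 8:11] = 40

print(difference_image_count(reference, current))  # 9 darkened pixels
```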
PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.
Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David
2009-04-01
Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by a separate denoising process. This strategy generates many noise-caused color artifacts in the demosaicking step, which are hard to remove in the denoising step. Few denoising schemes that work directly on CFA images have been presented because of the difficulties arising from the red, green, and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
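The central PCA step can be sketched on a toy grayscale image (this omits the paper's CFA handling and spatial adaptivity; the patch size and component count are illustrative):

```python
import numpy as np

def pca_denoise_patches(noisy, patch=4, keep=2):
    """Toy patch-based PCA shrinkage: project non-overlapping patches
    onto their top principal components and reconstruct. The published
    method is spatially adaptive and CFA-aware; this only illustrates
    the core PCA idea."""
    h, w = noisy.shape
    ph, pw = h // patch, w // patch
    # Collect non-overlapping patches as row vectors.
    blocks = noisy.reshape(ph, patch, pw, patch).swapaxes(1, 2).reshape(-1, patch * patch)
    mean = blocks.mean(axis=0)
    centered = blocks - mean
    # PCA via SVD; keep only the leading components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:keep]
    denoised = centered @ basis.T @ basis + mean
    return denoised.reshape(ph, pw, patch, patch).swapaxes(1, 2).reshape(h, w)

rng = np.random.default_rng(1)
clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # smooth ramp
noisy = clean + rng.normal(0, 0.1, clean.shape)
out = pca_denoise_patches(noisy)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((out - clean) ** 2)
print(mse_denoised < mse_noisy)  # shrinkage removed most of the noise
```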
Tug-of-war lacunarity—A novel approach for estimating lacunarity
NASA Astrophysics Data System (ADS)
Reiss, Martin A.; Lemmerer, Birgit; Hanslmeier, Arnold; Ahammer, Helmut
2016-11-01
Modern instrumentation provides us with massive repositories of digital images that will likely only grow in the future. It has therefore become increasingly important to automate the analysis of digital images, e.g., with methods from pattern recognition. These methods aim to quantify the visual appearance of captured textures with quantitative measures. As such, lacunarity is a useful multi-scale measure of a texture's heterogeneity, but it demands high computational effort. Here we investigate a novel approach based on the tug-of-war algorithm, which estimates lacunarity in a single pass over the image. We computed lacunarity for theoretical and real-world sample images, and found that the investigated approach is able to estimate lacunarity with low uncertainty. We conclude that the proposed method combines low computational effort with high accuracy, and that it may have utility in the analysis of high-resolution images.
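For reference, the exact gliding-box lacunarity that the tug-of-war estimator approximates in a single pass can be computed directly at one scale (a textbook definition, not the paper's sketch-based algorithm):

```python
import numpy as np

def gliding_box_lacunarity(image, box=4):
    """Classical gliding-box lacunarity at one scale:
    Lambda(r) = E[M^2] / E[M]^2, where M is the mass (pixel sum) inside
    each box position. Homogeneous textures give Lambda ~ 1; gappy,
    heterogeneous textures give larger values."""
    h, w = image.shape
    masses = np.array([image[i:i + box, j:j + box].sum()
                       for i in range(h - box + 1)
                       for j in range(w - box + 1)], dtype=float)
    return masses.var() / masses.mean() ** 2 + 1.0

uniform = np.ones((32, 32))
clustered = np.zeros((32, 32))
clustered[:8, :8] = 16.0  # all mass in one corner: highly heterogeneous

print(gliding_box_lacunarity(uniform), gliding_box_lacunarity(clustered))
```

The exhaustive double loop is exactly the cost the tug-of-war sketching approach is designed to avoid on large images.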
Optic probe for multiple angle image capture and optional stereo imaging
Malone, Robert M.; Kaufman, Morris I.
2016-11-29
A probe including a multiple lens array is disclosed to measure the velocity distribution of a moving surface along many lines of sight. Laser light directed to the moving surface is reflected back from the surface, Doppler shifted, collected into the array, and then directed to detection equipment through optic fibers. The received light is mixed with reference laser light and, using photonic Doppler velocimetry, a continuous time record of the surface movement is obtained. An array of single-mode optical fibers provides an optic signal to the multiple lens array. Numerous fibers in a fiber array project numerous rays to establish measurement points at numerous different locations. One or more lens groups may be replaced with imaging lenses so that a stereo image of the moving surface can be recorded. Imaging a portion of the surface during initial travel can determine whether the surface is breaking up.
Scalable Coding of Plenoptic Images by Using a Sparse Set and Disparities.
Li, Yun; Sjostrom, Marten; Olsson, Roger; Jennehag, Ulf
2016-01-01
Focused plenoptic capturing is one of the light field capturing techniques. By placing a microlens array in front of the photosensor, focused plenoptic cameras capture both spatial and angular information of a scene within each microlens image and across microlens images. The capture results in a significant amount of redundant information, and the captured image usually has a large resolution. A coding scheme that removes the redundancy before coding can therefore be advantageous for efficient compression, transmission, and rendering. In this paper, we propose a lossy coding scheme to efficiently represent plenoptic images. The format contains a sparse image set and its associated disparities. The reconstruction is performed by disparity-based interpolation and inpainting, and the reconstructed image is later employed as a prediction reference for the coding of the full plenoptic image. As an outcome of the representation, the proposed scheme inherits a scalable structure with three layers. The results show that plenoptic images are compressed efficiently, with over 60 percent bit rate reduction compared with High Efficiency Video Coding intra coding, and over 20 percent compared with a High Efficiency Video Coding block copying mode.
Lattice algebra approach to multispectral analysis of ancient documents.
Valdiviezo-N, Juan C; Urcid, Gonzalo
2013-02-01
This paper introduces a lattice algebra procedure that can be used for the multispectral analysis of historical documents and artworks. Assuming the presence of linearly mixed spectral pixels captured in a multispectral scene, the proposed method computes the scaled min- and max-lattice associative memories to determine the purest pixels that best represent the spectra of single pigments. The estimation of fractional proportions of pure spectra at each image pixel is used to build pigment abundance maps that can be used for subsequent restoration of damaged parts. Application examples include multispectral images acquired from the Archimedes Palimpsest and a Mexican pre-Hispanic codex.
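Once the pure-pigment spectra (endmembers) are known, estimating fractional abundances under the linear mixing model reduces to least squares; a sketch with hypothetical spectra (the paper obtains the endmembers themselves via the min- and max-lattice associative memories, which is not shown here):

```python
import numpy as np

# Hypothetical endmember spectra (rows: pure pigments, columns: bands).
endmembers = np.array([
    [0.9, 0.7, 0.2, 0.1],   # e.g. an ink-like reflectance curve
    [0.2, 0.3, 0.8, 0.9],   # e.g. a parchment-like reflectance curve
])

def abundances(pixel, endmembers):
    """Least-squares fractional abundances under the linear mixing model
    pixel = fractions @ endmembers (endmembers assumed given)."""
    frac, *_ = np.linalg.lstsq(endmembers.T, pixel, rcond=None)
    return frac

# A pixel that is 30% ink and 70% parchment is recovered exactly.
mixed = 0.3 * endmembers[0] + 0.7 * endmembers[1]
frac = abundances(mixed, endmembers)
print(np.round(frac, 3))  # [0.3 0.7]
```

Mapping these fractions over every pixel yields the pigment abundance maps described in the abstract.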
Shrestha, Ravi; Mohammed, Shahed K; Hasan, Md Mehedi; Zhang, Xuechao; Wahid, Khan A
2016-08-01
Wireless capsule endoscopy (WCE) plays an important role in the diagnosis of gastrointestinal (GI) diseases by capturing images of the human small intestine. Accurate diagnosis of endoscopic images depends heavily on the quality of the captured images. Along with frame rate, the brightness of the image is an important parameter that influences image quality, which motivates the design of an efficient illumination system. Such a design involves the choice and placement of a proper light source and its ability to illuminate the GI surface with proper brightness. Light-emitting diodes (LEDs) are normally used as sources, with modulated pulses controlling the LEDs' brightness. In practice, instances of under- and over-illumination are very common in WCE: the former produces dark images and the latter produces bright images with high power consumption. In this paper, we propose a low-power and efficient illumination system based on an automated brightness algorithm. The scheme is adaptive in nature, i.e., the brightness level is controlled automatically in real time while the images are being captured. The captured images are segmented into four equal regions and the brightness level of each region is calculated. An adaptive sigmoid function is then used to find the optimized brightness level, and accordingly a new value of the duty cycle of the modulated pulse is generated to capture future images. The algorithm is fully implemented in a capsule prototype and tested with endoscopic images. Commercial capsules such as Pillcam and Mirocam were also used in the experiment. The results show that the proposed algorithm works well in controlling the brightness level according to the environmental condition, and as a result good quality images are captured at an average brightness level of 40%, which saves power consumption in the capsule.
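The region-averaging and sigmoid mapping can be sketched as follows (the target brightness, gain, and quadrant handling below are illustrative stand-ins, not the paper's calibrated adaptive sigmoid):

```python
import numpy as np

def next_duty_cycle(frame, target=128.0, gain=0.04, max_duty=1.0):
    """Sketch of adaptive LED control: average the brightness of four
    image quadrants, then map the deviation from the target brightness
    through a sigmoid to get the LED duty cycle for the next frame."""
    h, w = frame.shape
    quads = [frame[:h // 2, :w // 2], frame[:h // 2, w // 2:],
             frame[h // 2:, :w // 2], frame[h // 2:, w // 2:]]
    brightness = np.mean([q.mean() for q in quads])
    # Sigmoid of the brightness error: dark frames push duty toward 1,
    # bright frames toward 0, saving LED power.
    return max_duty / (1.0 + np.exp(gain * (brightness - target)))

dark = np.full((64, 64), 40.0)     # under-illuminated frame
bright = np.full((64, 64), 220.0)  # over-illuminated frame
print(round(next_duty_cycle(dark), 3), round(next_duty_cycle(bright), 3))
```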
Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen
2014-01-27
A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are already available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel-lines distance measurement system (PLDMS) have two merits. First, only one webcam is required for estimating the distance. Second, the set-up of IBDMS and PLDMS is easy: only one rectangular pattern of known dimensions is needed, e.g., a ground tile. Common and simple image processing techniques, e.g., background subtraction, are used to capture the robot in real time. Thus, for indoor robot localization, the proposed method does not need expensive high-resolution webcams or complicated pattern recognition methods, but only a few simple estimation formulas. The experimental results show that the proposed robot localization method is reliable and effective in an indoor environment.
Remote Measurements of Heart and Respiration Rates for Telemedicine
Qian, Yi; Tsien, Joe Z.
2013-01-01
Non-contact, low-cost measurements of heart and respiration rates are highly desirable for telemedicine. Here, we describe a novel technique to extract the blood volume pulse and respiratory wave from single-channel images captured by a video camera under both day and night conditions. The principle of our technique is to uncover the temporal dynamics of heart beat and breathing through delay-coordinate transformation and independent component analysis-based deconstruction of the single-channel images. Our method further achieves robust elimination of false positives by applying ratio-variation probability distribution filtering. Moreover, it enables a much-needed low-cost means of preventing sudden infant death syndrome in newborn infants and of detecting stroke and heart attack in the elderly population in home environments. This noncontact method can also be applied to a variety of animal model organisms for biomedical research. PMID:24115996
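The abstract does not spell out the delay-coordinate/ICA pipeline, but the core idea of recovering a periodic rate from a single-channel intensity trace can be illustrated with a much simpler stand-in: a band-limited spectral peak search over a plausible heart-rate band. All names and band limits below are assumptions for illustration:

```python
import numpy as np

def dominant_rate_bpm(signal, fps, lo=0.7, hi=3.0):
    """Simplified illustration (not the authors' full method): estimate a
    pulse rate as the dominant spectral peak of a detrended single-channel
    intensity trace, restricted to 0.7-3.0 Hz (42-180 beats per minute)."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                              # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(power[band])]
```

For example, a 1.2 Hz sinusoid sampled at 30 frames per second should be reported as roughly 72 beats per minute.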
Early melanoma diagnosis with mobile imaging.
Do, Thanh-Toan; Zhou, Yiren; Zheng, Haitian; Cheung, Ngai-Man; Koh, Dawn
2014-01-01
We research a mobile imaging system for early diagnosis of melanoma. Different from previous work, we focus on smartphone-captured images, and propose a detection system that runs entirely on the smartphone. Smartphone-captured images taken under loosely-controlled conditions introduce new challenges for melanoma detection, while processing performed on the smartphone is subject to computation and memory constraints. To address these challenges, we propose to localize the skin lesion by combining fast skin detection and fusion of two fast segmentation results. We propose new features to capture color variation and border irregularity which are useful for smartphone-captured images. We also propose a new feature selection criterion to select a small set of good features used in the final lightweight system. Our evaluation confirms the effectiveness of proposed algorithms and features. In addition, we present our system prototype which computes selected visual features from a user-captured skin lesion image, and analyzes them to estimate the likelihood of malignance, all on an off-the-shelf smartphone.
A Method for Imaging Oxygen Distribution and Respiration at a Microscopic Level of Resolution.
Rolletschek, Hardy; Liebsch, Gregor
2017-01-01
Conventional oxygen (micro-)sensors assess oxygen concentration within a particular region or across a transect of tissue, but provide no information about its two-dimensional distribution. Here, a novel imaging technology is presented in which an optical sensor foil (i.e., a planar optode) is attached to the surface of the sample. The sensor converts a fluorescent signal into an oxygen value. Since each single image captures an entire area of the sample surface, the system is able to deduce the distribution of oxygen at a resolution of a few micrometers. It can be deployed to dynamically monitor oxygen consumption, thereby providing a detailed respiration map at close to cellular resolution. Here, we demonstrate the application of the imaging tool to developing plant seeds; the protocol is explained step by step and some potential pitfalls are discussed.
Compressive spectral testbed imaging system based on thin-film color-patterned filter arrays.
Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R
2016-11-20
Compressive spectral imaging systems can reliably capture multispectral data using far fewer measurements than traditional scanning techniques. In this paper, a thin-film patterned filter array-based compressive spectral imager is demonstrated, including its optical design and implementation. The use of a patterned filter array entails a single-step, three-dimensional spatial-spectral coding of the input data cube, which provides greater flexibility in the selection of voxels being multiplexed on the sensor. The patterned filter array is designed and fabricated with micrometer-pitch thin films, referred to as pixelated filters, at three different wavelengths. The performance of the system is evaluated against reference measurements from a commercially available spectrometer and the visual quality of the reconstructed images. Different distributions of the pixelated filters, including random and optimized structures, are explored.
Evaluation of image deblurring methods via a classification metric
NASA Astrophysics Data System (ADS)
Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo
2012-09-01
The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.
Crystal surface analysis using matrix textural features classified by a probabilistic neural network
NASA Astrophysics Data System (ADS)
Sawyer, Curry R.; Quach, Viet; Nason, Donald; van den Berg, Lodewijk
1991-12-01
A system is under development in which surface quality of a growing bulk mercuric iodide crystal is monitored by video camera at regular intervals for early detection of growth irregularities. Mercuric iodide single crystals are employed in radiation detectors. A microcomputer system is used for image capture and processing. The digitized image is divided into multiple overlapping sub-images and features are extracted from each sub-image based on statistical measures of the gray tone distribution, according to the method of Haralick. Twenty parameters are derived from each sub-image and presented to a probabilistic neural network (PNN) for classification. This number of parameters was found to be optimal for the system. The PNN is a hierarchical, feed-forward network that can be rapidly reconfigured as additional training data become available. Training data is gathered by reviewing digital images of many crystals during their growth cycle and compiling two sets of images, those with and without irregularities.
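The gray-tone statistical features mentioned above follow Haralick's co-occurrence approach. A minimal sketch of two such features (contrast and energy) from a horizontal-offset gray-level co-occurrence matrix is shown below; a real system would derive the full twenty-parameter vector per sub-image, and the quantization level chosen here is an assumption:

```python
import numpy as np

def glcm_features(tile, levels=8):
    """Sketch of Haralick-style texture features from one sub-image:
    quantize gray levels, count horizontally adjacent pixel pairs into a
    co-occurrence matrix, and derive contrast and energy statistics."""
    q = (tile.astype(float) * levels / 256).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    p = glcm / glcm.sum()                         # normalize to probabilities
    idx = np.arange(levels)
    contrast = float(((idx[:, None] - idx[None, :]) ** 2 * p).sum())
    energy = float((p ** 2).sum())
    return contrast, energy
```

A perfectly uniform tile gives zero contrast and maximal energy, while a fine checkerboard gives large contrast, which is the kind of separation the classifier relies on.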
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toriello, Nicholas M.; Douglas, Erik S.; Mathies, Richard A.
A microchip that performs directed capture and chemical activation of surface-modified single cells has been developed. The cell-capture system comprises interdigitated gold electrodes microfabricated on a glass substrate within PDMS channels. The cell surface is labeled with thiol functional groups using endogenous RGD receptors, and adhesion to exposed gold pads on the electrodes is directed by applying a driving electric potential. Multiple cell types can thus be sequentially and selectively captured on desired electrodes. Single-cell capture efficiency is optimized by varying the duration of field application. Maximum single-cell capture is attained for the 10 min trial, with 63±9 percent (n=30) of the electrode pad rows having a single cell. In activation studies, single M1WT3 CHO cells loaded with the calcium-sensitive dye fluo-4 AM were captured; exposure to the muscarinic agonist carbachol increased the fluorescence to 220±74 percent (n=79) of the original intensity. These results demonstrate the ability to direct the adhesion of selected living single cells on electrodes in a microfluidic device and to analyze their response to chemical stimuli.
An electronic pan/tilt/zoom camera system
NASA Technical Reports Server (NTRS)
Zimmermann, Steve; Martin, H. Lee
1991-01-01
A camera system for omnidirectional image viewing applications that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts was developed. The imaging device is based on the fact that the image from a fisheye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different image magnifications and pan-tilt-rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.
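The core of such a transformation is an inverse mapping: for each pixel of the desired (pan, tilt, zoom) view, compute where to sample in the circular fisheye image. The sketch below assumes an equidistant fisheye model, one of several common models; the actual device's mapping and calibration are not specified in the abstract:

```python
import numpy as np

def fisheye_lookup(pan, tilt, u, v, f_out, r_max):
    """Illustrative inverse mapping for an equidistant fisheye: given a
    viewing direction (pan, tilt in radians) and an output-image pixel
    offset (u, v) at output focal length f_out, return the coordinates in
    the circular fisheye image (radius r_max) to sample from."""
    # Unit ray for the output pixel, then rotate it by tilt (about x)
    # and pan (about z) into the fisheye camera frame.
    ray = np.array([u, v, f_out], dtype=float)
    ray /= np.linalg.norm(ray)
    ct, st = np.cos(tilt), np.sin(tilt)
    cp, sp = np.cos(pan), np.sin(pan)
    rot_x = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    rot_z = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])
    x, y, z = rot_z @ rot_x @ ray
    theta = np.arccos(np.clip(z, -1.0, 1.0))   # angle from the optical axis
    phi = np.arctan2(y, x)
    r = r_max * theta / (np.pi / 2)            # equidistant projection model
    return r * np.cos(phi), r * np.sin(phi)
```

With pan = tilt = 0 the central output pixel maps to the fisheye center, and a 90-degree tilt maps it to the rim of the circular image, as expected for a hemispherical FOV.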
Processing, Cataloguing and Distribution of UAS Images in Near Real Time
NASA Astrophysics Data System (ADS)
Runkel, I.
2013-08-01
Why are UAS generating such hype? UAS make data capture flexible, fast, and easy. For many applications this is more important than a perfect photogrammetric aerial image block. To ensure that the advantage of fast data capture holds through to the end of the processing chain, all intermediate steps, such as data processing and data dissemination to the customer, need to be flexible and fast as well. GEOSYSTEMS has established the whole processing workflow as a server/client solution; this is the focus of the presentation. Depending on the image acquisition system, the image data can be downlinked during the flight to the data processing computer, or it is stored on a mobile device and connected to the data processing computer after the flight campaign. The image project manager reads the data from the device and georeferences the images according to the position data. The metadata is converted into an ISO-conformant format and subsequently all georeferenced images are catalogued in the raster data management system ERDAS APOLLO. APOLLO provides the data, i.e. the images, as OGC-conformant services to the customer. Within seconds the UAV images are ready to use for GIS applications, image processing, or direct interpretation via web applications, wherever you want. The whole processing chain is built in a generic manner and can be adapted to a multitude of applications. The UAV imagery can be processed and catalogued as single orthoimages or as an image mosaic. Furthermore, image data from various cameras can be fused. By using WPS (web processing services), image enhancement and image analysis workflows, such as change detection layers, can be calculated and provided to the image analysts. The WPS processing runs directly on the raster data management server; the image analyst needs no data and no software on his local computer. This workflow has proven to be fast, stable, and accurate.
It is designed to support time-critical applications for security demands: the images can be checked and interpreted in near real time. For sensitive areas, it offers the possibility to inform remote decision makers or interpretation experts and provide them with situational awareness, wherever they are. For monitoring and inspection tasks it speeds up the process of data capture and interpretation. The fully automated workflow of data pre-processing, georeferencing, cataloguing, and dissemination in near real time was developed based on the Intergraph products ERDAS IMAGINE, ERDAS APOLLO, and GEOSYSTEMS METAmorph!IT. It is offered as an adaptable solution by GEOSYSTEMS GmbH.
Comparison and evaluation of datasets for off-angle iris recognition
NASA Astrophysics Data System (ADS)
Kurtuncu, Osman M.; Cerme, Gamze N.; Karakaya, Mahmut
2016-05-01
In this paper, we investigated the publicly available iris recognition datasets and their data capture procedures to determine whether they are suitable for stand-off iris recognition research. The majority of iris recognition datasets include only frontal iris images. Even when a dataset includes off-angle iris images, the frontal and off-angle images are not captured at the same time. Comparison of frontal and off-angle iris images shows not only differences in gaze angle but also changes in pupil dilation and accommodation. In order to isolate the effect of gaze angle from other challenging issues, including dilation and accommodation, the frontal and off-angle iris images should be captured at the same time using two different cameras. Therefore, we developed an iris image acquisition platform using two cameras, where one camera captures a frontal iris image and the other captures the iris from off-angle. Based on the comparison of Hamming distances between frontal and off-angle iris images captured with the two-camera setup and the one-camera setup, we observed that the Hamming distance in the two-camera setup is smaller than in the one-camera setup, with differences ranging from 0.001 to 0.05. These results show that, for accurate results in off-angle iris recognition research, a two-camera setup is necessary to distinguish the challenging issues from each other.
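The Hamming distance referred to above is conventionally computed as a fractional disagreement over the bits that are valid in both iris codes. The following is the standard masked definition, assumed here rather than taken from the paper:

```python
import numpy as np

def iris_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between binary iris codes, counting
    only bit positions that are valid (unoccluded) in both masks."""
    valid = mask_a & mask_b            # bits usable in both codes
    disagree = (code_a ^ code_b) & valid
    return disagree.sum() / valid.sum()
```

Identical codes yield a distance of 0, and two codes disagreeing on 2 of 5 valid bits yield 0.4.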
Kiani, M A; Sim, K S; Nia, M E; Tso, C P
2015-05-01
A new technique based on cubic spline interpolation with Savitzky-Golay smoothing and a weighted least-squares error filter is developed for scanning electron microscope (SEM) images. A diversity of sample images was captured, and the performance is found to be better than that of the moving average and standard median filters with respect to eliminating noise. The technique can be implemented efficiently on real-time SEM images, with all data needed for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for an SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimate of the corresponding noise-free autocorrelation. In several test cases involving different images, the efficiency of the developed noise reduction filter proved significantly better than that of the other methods. Noise can be reduced efficiently, with an appropriate choice of scan rate, from real-time SEM images, without introducing corruption or increasing scanning time. © 2015 The Authors. Journal of Microscopy © 2015 Royal Microscopical Society.
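The autocorrelation-based SNR idea described above can be sketched in simplified one-dimensional form: the autocorrelation at zero lag contains both signal and noise power, while the noise-free value at zero lag is extrapolated from the first few nonzero lags (where uncorrelated noise does not contribute). The extrapolation scheme below is a deliberately crude linear one, not the paper's spline/Savitzky-Golay method:

```python
import numpy as np

def autocorr_snr(image):
    """Simplified single-image SNR estimate: extrapolate the signal
    autocorrelation back to lag zero from lags 1-3 and attribute the
    excess at lag zero to pixel-uncorrelated noise."""
    x = image.astype(float).ravel()
    x = x - x.mean()
    n = len(x)
    # Biased-corrected sample autocorrelation at lags 0..3.
    r = np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(4)])
    slope = (r[3] - r[1]) / 2.0          # local slope near lag zero
    r0_signal = r[1] - slope             # linear extrapolation to lag 0
    noise_var = max(r[0] - r0_signal, 1e-12)
    return r0_signal / noise_var
```

On a smooth signal, adding more noise should lower the estimated SNR, which is the basic sanity check for such an estimator.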
Light ray field capture using focal plane sweeping and its optical reconstruction using 3D displays.
Park, Jae-Hyeung; Lee, Sung-Keun; Jo, Na-Young; Kim, Hee-Jae; Kim, Yong-Soo; Lim, Hong-Gi
2014-10-20
We propose a method to capture the light ray field of a three-dimensional scene using focal plane sweeping. Multiple images are captured with a conventional camera at different focal distances spanning the three-dimensional scene. The captured images are then back-projected into four-dimensional spatio-angular space to obtain the light ray field. The obtained light ray field can be visualized either by digital processing or by optical reconstruction using various three-dimensional display techniques, including integral imaging, layered displays, and holography.
Performance assessment of a compressive sensing single-pixel imaging system
NASA Astrophysics Data System (ADS)
Du Bosq, Todd W.; Preece, Bradley L.
2017-04-01
Conventional sensors measure the light incident at each pixel in a focal plane array. Compressive sensing (CS) involves capturing a smaller number of unconventional measurements from the scene and then using a companion process to recover the image. CS has the potential to acquire imagery with information content equivalent to a large-format array while using smaller, cheaper, and lower-bandwidth components. However, the benefits of CS do not come without compromise. The chosen CS architecture must effectively balance physical considerations, reconstruction accuracy, and reconstruction speed to meet operational requirements. Performance modeling of CS imagers is challenging due to the complexity and nonlinearity of the system and reconstruction algorithm. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of a two-handheld-object target set was collected using a shortwave infrared single-pixel CS camera for various ranges and numbers of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled by mapping the nonlinear degradations to an equivalent linear shift-invariant model. Finally, the limitations of CS modeling techniques are discussed.
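The "companion process" that recovers an image from fewer measurements than pixels can be illustrated with a toy sparse-recovery solver. Orthogonal matching pursuit, shown below, is one of many CS reconstruction algorithms and is not necessarily the solver used for the camera in the paper; the measurement matrix and sparsity level are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Toy orthogonal matching pursuit: recover a k-sparse scene x from
    M < N single-pixel measurements y = A @ x by greedily selecting the
    column most correlated with the residual, then re-solving a least
    squares problem on the selected support."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

With a random Gaussian measurement matrix and a sufficiently sparse scene, the 64-pixel signal is recovered from 48 measurements, which is the essential CS trade illustrated by the single-pixel camera.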
Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains.
Onken, Arno; Liu, Jian K; Karunasekara, P P Chamanthi R; Delis, Ioannis; Gollisch, Tim; Panzeri, Stefano
2016-11-01
Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to compactly capture all the sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of tensor factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons), and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial tensor space-by-time decompositions provided low-dimensional, data-robust representations of spike trains that efficiently capture both their spatial and temporal information about sensory stimuli. Tensor decompositions with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative tensor decompositions worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing, on the ten-millisecond scale, about spatial details of natural images. This information could not be recovered from the spike counts of these cells. 
First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates. Together, these results highlight the importance of spike timing, and particularly of first-spike latencies, in retinal coding.
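The shape of a space-by-time decomposition can be made concrete with a minimal orthogonal (SVD-based) stand-in. This sketch only illustrates the factor structure (temporal modules, spatial modules, trial activations); the paper's actual methods include non-negative and tensor-specific algorithms, and the trial-activation formula here is a crude assumption:

```python
import numpy as np

def space_by_time_svd(spikes, n_mod=2):
    """Minimal stand-in for a space-by-time decomposition: stack the
    trials-by-time-by-neurons spike-count tensor into a matrix, take an
    SVD, and read off temporal modules, spatial modules, and rough
    per-trial activation coefficients (orthogonal factors only)."""
    trials, T, N = spikes.shape
    X = spikes.reshape(trials * T, N)             # concatenate trials in time
    U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    temporal = U[:, :n_mod].reshape(trials, T, n_mod)
    spatial = Vt[:n_mod]                          # (modules, neurons)
    coefs = temporal.mean(axis=1) * s[:n_mod]     # crude trial activations
    return temporal, spatial, coefs
```

The spatial modules come out orthonormal by construction, which is the orthogonality constraint the abstract contrasts with non-negative decompositions.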
Design and Construction of a Field Capable Snapshot Hyperspectral Imaging Spectrometer
NASA Technical Reports Server (NTRS)
Arik, Glenda H.
2005-01-01
The computed-tomography imaging spectrometer (CTIS) is a device that captures the spatial and spectral content of a rapidly evolving scene in a single image frame. The most recent CTIS design is optically all-reflective and uses as its dispersive element a state-of-the-art reflective computer-generated hologram (CGH). This project focuses on the instrument's transition from laboratory to field; the design will enable the CTIS to withstand a harsh desert environment. The system is modeled in optical design software using a tolerance analysis. The tolerances guide the design of the athermal mount and component parts. The parts are assembled into a working mount shell, where the performance of the mounts is tested for thermal integrity. An interferometric analysis of the reflective CGH is also performed.
Morimoto, Atsushi; Mogami, Toshifumi; Watanabe, Masaru; Iijima, Kazuki; Akiyama, Yasuyuki; Katayama, Koji; Futami, Toru; Yamamoto, Nobuyuki; Sawada, Takeshi; Koizumi, Fumiaki; Koh, Yasuhiro
2015-01-01
Development of a reliable platform and workflow to detect and capture a small number of mutation-bearing circulating tumor cells (CTCs) from a blood sample is necessary for the development of noninvasive cancer diagnosis. In this preclinical study, we aimed to develop a capture system for molecular characterization of single CTCs based on high-density dielectrophoretic microwell array technology. Spike-in experiments using lung cancer cell lines were conducted. The microwell array was used to capture spiked cancer cells, and captured single cells were subjected to whole genome amplification followed by sequencing. A high detection rate (70.2%-90.0%) and excellent linear performance (R2 = 0.8189-0.9999) were noted between the observed and expected numbers of tumor cells. The detection rate was markedly higher than that obtained using the CellSearch system in a blinded manner, suggesting the superior sensitivity of our system in detecting EpCAM-negative tumor cells. Isolation of single captured tumor cells, followed by detection of EGFR mutations, was achieved using Sanger sequencing. Using the microwell array, we established an efficient and convenient platform for the capture and characterization of single CTCs. The results of this proof-of-principle preclinical study indicate that the platform has potential for the molecular characterization of captured CTCs from patients.
NASA Astrophysics Data System (ADS)
Abu-Zaid, N. A. M.
2017-11-01
In many circumstances it is difficult for humans to reach certain areas, due to their topography, personal safety concerns, or security regulations in the country. Governments and individuals need to measure those areas and classify the green parts for reclamation. To solve this problem, this research proposes using a Phantom unmanned aerial vehicle to capture a digital image of the targeted area, and then a segmentation algorithm to separate the green space and calculate its area. Two problems had to be addressed. The first is the variable elevation at which an image is taken, which changes the physical area covered by each pixel; to overcome this, a fourth-degree polynomial was fit to experimental data. The second problem is the existence of several unconnected green areas in a single image when only one of them is of interest. To solve this, the probability of classifying the targeted area as green was increased, while the probability for untargeted sections was decreased by labeling parts of them as non-green. A practical rule was also devised to measure the target area in the digital image for comparison with field measurements and the polynomial fit.
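The elevation-to-pixel-area calibration and the green-pixel counting steps can be sketched as follows. The calibration points, the green-dominance threshold, and both function names are illustrative assumptions; the paper's actual fit coefficients and segmentation rule are not given in the abstract:

```python
import numpy as np

def pixel_area_model(elevations, areas_cm2):
    """Fit the fourth-degree polynomial relating flight elevation to the
    ground area covered by one pixel (calibration data assumed)."""
    return np.polynomial.Polynomial.fit(elevations, areas_cm2, 4)

def green_area(image_rgb, elevation, model, margin=20):
    """Count pixels whose green channel dominates red and blue by a
    margin, and convert the count to physical area via the fitted
    per-pixel area at the given elevation."""
    r = image_rgb[..., 0].astype(int)
    g = image_rgb[..., 1].astype(int)
    b = image_rgb[..., 2].astype(int)
    green = (g > r + margin) & (g > b + margin)
    return green.sum() * float(model(elevation))
```

For example, with a calibration in which per-pixel area grows quadratically with elevation, the quartic fit reproduces intermediate elevations exactly, and the measured area is simply the green-pixel count times that per-pixel area.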
NASA Astrophysics Data System (ADS)
Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur
2009-05-01
Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured in high dynamic range scenes with non-uniform lighting conditions. The fast image enhancement algorithm, which provides dynamic range compression while preserving local contrast and tonal rendition, is also a good candidate for real-time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the algorithm fails to produce color-constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback; hence, a different approach is required for the final color restoration. In this paper the latest version of the proposed algorithm, which addresses this issue, is presented. Results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.
NASA SOFIA Captures Images of the Planetary Nebula M2-9
2012-03-29
Researchers using NASA's Stratospheric Observatory for Infrared Astronomy (SOFIA) have captured infrared images of the last exhalations of a dying sun-like star. This image is of the planetary nebula M2-9.
Single-Cell RNA Sequencing of Glioblastoma Cells.
Sen, Rajeev; Dolgalev, Igor; Bayin, N Sumru; Heguy, Adriana; Tsirigos, Aris; Placantonakis, Dimitris G
2018-01-01
Single-cell RNA sequencing (sc-RNASeq) is a recently developed technique used to evaluate the transcriptome of individual cells. As opposed to conventional RNASeq in which entire populations are sequenced in bulk, sc-RNASeq can be beneficial when trying to better understand gene expression patterns in markedly heterogeneous populations of cells or when trying to identify transcriptional signatures of rare cells that may be underrepresented when using conventional bulk RNASeq. In this method, we describe the generation and analysis of cDNA libraries from single patient-derived glioblastoma cells using the C1 Fluidigm system. The protocol details the use of the C1 integrated fluidics circuit (IFC) for capturing, imaging and lysing cells; performing reverse transcription; and generating cDNA libraries that are ready for sequencing and analysis.
Single-Molecule Real-Time 3D Imaging of the Transcription Cycle by Modulation Interferometry.
Wang, Guanshi; Hauver, Jesse; Thomas, Zachary; Darst, Seth A; Pertsinidis, Alexandros
2016-12-15
Many essential cellular processes, such as gene control, employ elaborate mechanisms involving the coordination of large, multi-component molecular assemblies. Few structural biology tools presently have the combined spatial-temporal resolution and molecular specificity required to capture the movement, conformational changes, and subunit association-dissociation kinetics, three fundamental elements of how such intricate molecular machines work. Here, we report a 3D single-molecule super-resolution imaging study using modulation interferometry and phase-sensitive detection that achieves <2 nm axial localization precision, well below the size of the few-nanometer individual protein components. To illustrate the capability of this technique in probing the dynamics of complex macromolecular machines, we visualize the movement of individual multi-subunit E. coli RNA polymerases through the complete transcription cycle, dissect the kinetics of the initiation-elongation transition, and determine the fate of σ70 initiation factors during promoter escape. Modulation interferometry sets the stage for single-molecule studies of several hitherto difficult-to-investigate multi-molecular transactions that underlie genome regulation. Copyright © 2016 Elsevier Inc. All rights reserved.
Analysis of MCNP simulated gamma spectra of CdTe detectors for boron neutron capture therapy.
Winkler, Alexander; Koivunoro, Hanna; Savolainen, Sauli
2017-06-01
The next step in boron neutron capture therapy (BNCT) is real-time imaging of the boron concentration in healthy and tumor tissue. Monte Carlo simulations are employed to predict the detector response required to realize single-photon emission computed tomography in BNCT, but have failed to reproduce measured data for cadmium telluride detectors. In this study we have tested the gamma production cross-section data tables of commonly used libraries in the Monte Carlo code MCNP against measurements. The cross-section data table TENDL-2008-ACE reproduces the measured data best, whilst the commonly used ENDL92 and the other studied libraries do not include correct tables for gamma production from the cadmium neutron capture reaction occurring inside the detector. Furthermore, we discuss the size of the annihilation peaks in spectra obtained with cadmium telluride and germanium detectors. Copyright © 2017 Elsevier Ltd. All rights reserved.
The effect of multispectral image fusion enhancement on human efficiency.
Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M
2017-01-01
The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, where imagery captured in varying spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output to strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea of fused imagery always producing a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency data were shown to fluctuate based on sensor combination instead of fusion algorithm, suggesting the need for examining multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, provides not only a standard for testing fusion in direct relation to the visual system but also allows for comparable examination of fusion across its associated problem space of application.
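Ideal observer analysis summarizes human performance as efficiency, the squared ratio of human to ideal-observer sensitivity (d'). A minimal sketch of that statistic (the d' values below are invented for illustration, not taken from the study):

```python
def efficiency(d_human: float, d_ideal: float) -> float:
    """Observer efficiency: the squared ratio of human to ideal-observer d'."""
    return (d_human / d_ideal) ** 2

# Hypothetical sensitivities for one fusion condition
eff = efficiency(d_human=1.2, d_ideal=3.0)  # -> 0.16
```

Because the ideal observer extracts all task-relevant information from the stimulus, efficiency expresses how much of that information the human uses, which is what makes it a fair yardstick across fusion algorithms and sensor combinations.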
NASA Astrophysics Data System (ADS)
Cornelissen, Frans; De Backer, Steve; Lemeire, Jan; Torfs, Berf; Nuydens, Rony; Meert, Theo; Schelkens, Peter; Scheunders, Paul
2008-08-01
Peripheral neuropathy can be caused by diabetes or AIDS, or be a side-effect of chemotherapy. Fibered Fluorescence Microscopy (FFM) is a recently developed imaging modality using a fiber optic probe connected to a laser scanning unit. It allows for in-vivo scanning of small animal subjects by moving the probe along the tissue surface. In preclinical research, FFM enables non-invasive, longitudinal in-vivo assessment of intraepidermal nerve fibre density in various models of peripheral neuropathy. By moving the probe, FFM allows visualization of larger surfaces: during the movement, images are continuously captured, so an area larger than the field of view of the probe can be acquired. For analysis purposes, we need to obtain a single static image from the multiple overlapping frames. We introduce a mosaicing procedure for this kind of video sequence. Construction of mosaic images with sub-pixel alignment is indispensable and must be integrated into a globally consistent image alignment. An additional motivation for the mosaicing is the use of overlapping redundant information to improve the signal-to-noise ratio of the acquisition, because the individual frames tend to have both high noise levels and intensity inhomogeneities. For longitudinal analysis, mosaics captured at different times must be aligned as well. For alignment, global correlation-based matching is compared with interest point matching. Use of algorithms working on multiple CPUs (parallel processor/cluster/grid) is imperative for use in a screening model.
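The global correlation-based matching used for frame alignment can be sketched in one dimension: slide one intensity profile along the other and keep the integer offset that maximizes the normalized correlation of the overlap. This is a toy version; the actual FFM pipeline works on 2D frames with sub-pixel refinement.

```python
def best_shift(ref, frame, max_shift=3):
    """Integer offset of `frame` relative to `ref` maximizing overlap correlation.
    A result of s means frame[i] best matches ref[i + s]."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
        return num / den if den else 0.0
    scores = {}
    for s in range(-max_shift, max_shift + 1):
        # Crop both sequences to their overlap for this candidate offset
        a, b = (ref[s:], frame[:len(frame) - s]) if s >= 0 else (ref[:s], frame[-s:])
        if len(a) >= 2:
            scores[s] = corr(a, b)
    return max(scores, key=scores.get)

# Toy profiles: `frame` is `ref` translated right by two samples
ref = [0, 1, 4, 9, 4, 1, 0, 0]
frame = [0, 0, 0, 1, 4, 9, 4, 1]
shift = best_shift(ref, frame)  # -> -2
```

Once per-frame shifts are known, overlapping pixels can be averaged, which is the redundancy-based noise reduction the abstract mentions.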
A method for the automated processing and analysis of images of ULVWF-platelet strings.
Reeve, Scott R; Abbitt, Katherine B; Cruise, Thomas D; Hose, D Rodney; Lawford, Patricia V
2013-01-01
We present a method for identifying and analysing unusually large von Willebrand factor (ULVWF)-platelet strings in noisy low-quality images. The method requires relatively inexpensive, non-specialist equipment and allows multiple users to be employed in the capture of images. Images are subsequently enhanced and analysed, using custom-written software to perform the processing tasks. The formation and properties of ULVWF-platelet strings released in in vitro flow-based assays have recently become a popular research area. Endothelial cells are incorporated into a flow chamber, chemically stimulated to induce ULVWF release and perfused with isolated platelets which are able to bind to the ULVWF to form strings. The numbers and lengths of the strings released are related to characteristics of the flow. ULVWF-platelet strings are routinely identified by eye from video recordings captured during experiments and analysed manually using basic NIH image software to determine the number of strings and their lengths. This is a laborious, time-consuming task and a single experiment, often consisting of data from four to six dishes of endothelial cells, can take 2 or more days to analyse. The method described here allows analysis of the strings to provide data such as the number and length of strings, number of platelets per string and the distance between each platelet to be found. The software reduces analysis time, and more importantly removes user subjectivity, producing highly reproducible results with an error of less than 2% when compared with detailed manual analysis.
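The per-string measurements the software reports (platelet count, string length, inter-platelet distances) reduce to simple geometry once platelet centroids have been segmented. A minimal sketch with invented pixel coordinates:

```python
from math import dist  # Euclidean distance, Python 3.8+

def string_metrics(platelets):
    """Metrics for one ULVWF-platelet string from ordered platelet centroids:
    platelet count, total string length, and the spacing between neighbours."""
    gaps = [dist(p, q) for p, q in zip(platelets, platelets[1:])]
    return {"n_platelets": len(platelets), "length": sum(gaps), "gaps": gaps}

# Hypothetical string of four platelets on a straight line, 5 px apart
m = string_metrics([(0, 0), (3, 4), (6, 8), (9, 12)])
# -> {'n_platelets': 4, 'length': 15.0, 'gaps': [5.0, 5.0, 5.0]}
```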
NASA Astrophysics Data System (ADS)
Noda, Masafumi; Takahashi, Tomokazu; Deguchi, Daisuke; Ide, Ichiro; Murase, Hiroshi; Kojima, Yoshiko; Naito, Takashi
In this study, we propose a method for detecting road markings in images captured by an in-vehicle camera using a position-dependent classifier. Road markings are symbols painted on the road surface that help prevent traffic accidents and keep traffic flowing smoothly. Driver support systems that detect road markings, such as systems that warn the driver when a sign is overlooked or that assist in stopping the vehicle, are therefore required. Detecting road markings is difficult because their appearance, e.g., shape and resolution, changes with the actual traffic conditions. These variations depend on the positional relation between the vehicle and the road markings, and on the vehicle posture. Although the variations are quite large across an entire image, they are relatively small within a local area of the image. We therefore try to improve detection performance by taking these local appearance variations into account, using a position-dependent classifier to detect road markings in images captured by an in-vehicle camera. Further, to train the classifier efficiently, we propose a generative learning method that takes into consideration the positional relation between the vehicle and road markings, as well as the vehicle posture. Experimental results showed that the proposed method outperforms a method that uses a single classifier.
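The core idea of a position-dependent classifier can be caricatured as a lookup: the image is partitioned into regions, and each region gets its own decision rule tuned to the appearance variations typical there. The region names and thresholds below are invented for illustration, not the paper's design:

```python
def make_position_dependent_classifier(region_models):
    """Toy position-dependent classifier: each image region has its own
    decision rule (here, a minimum detection-score threshold)."""
    def classify(region, score):
        return score >= region_models[region]
    return classify

# Far-field markings appear small and low-resolution, so the top image
# region (hypothetically) uses a laxer threshold than the bottom region.
models = {"top": 0.3, "middle": 0.5, "bottom": 0.7}
classify = make_position_dependent_classifier(models)
hit_far = classify("top", 0.4)      # True
hit_near = classify("bottom", 0.4)  # False
```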
Bogolon-mediated electron capture by impurities in hybrid Bose-Fermi systems
NASA Astrophysics Data System (ADS)
Boev, M. V.; Kovalev, V. M.; Savenko, I. G.
2018-04-01
We investigate the processes of electron capture by a Coulomb impurity center residing in a hybrid system consisting of spatially separated two-dimensional layers of electron and Bose-condensed dipolar exciton gases coupled via the Coulomb forces. We calculate the probability of electron capture accompanied by the emission of a single Bogoliubov excitation (bogolon), similar to regular phonon-mediated scattering in solids. Furthermore, we study electron capture mediated by the emission of a pair of bogolons in a single capture event and show that these processes must not only be treated in the same order of perturbation theory, but also make a larger contribution than single-bogolon-mediated capture, in contrast with regular phonon scattering.
A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).
Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A
2013-01-01
The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which should facilitate the analysis of dynamic motion during facial animations. Copyright © 2012 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
High-resolution ophthalmic imaging system
Olivier, Scot S.; Carrano, Carmen J.
2007-12-04
A system for providing an improved resolution retina image comprising an imaging camera for capturing a retina image and a computer system operatively connected to the imaging camera, the computer producing short exposures of the retina image and providing speckle processing of the short exposures to provide the improved resolution retina image. The method comprises the steps of capturing a retina image, producing short exposures of the retina image, and speckle processing the short exposures of the retina image to provide the improved resolution retina image.
Dual-contrast agent photon-counting computed tomography of the heart: initial experience.
Symons, Rolf; Cork, Tyler E; Lakshmanan, Manu N; Evers, Robert; Davies-Venn, Cynthia; Rice, Kelly A; Thomas, Marvin L; Liu, Chia-Ying; Kappler, Steffen; Ulzheimer, Stefan; Sandfort, Veit; Bluemke, David A; Pourmorteza, Amir
2017-08-01
To determine the feasibility of dual-contrast agent imaging of the heart using photon-counting detector (PCD) computed tomography (CT) to simultaneously assess both first-pass and late enhancement of the myocardium. An occlusion-reperfusion canine model of myocardial infarction was used. Gadolinium-based contrast was injected 10 min prior to PCD CT. Iodinated contrast was infused immediately prior to PCD CT, thus capturing late gadolinium enhancement as well as first-pass iodine enhancement. Gadolinium and iodine maps were calculated using a linear material decomposition technique and compared to single-energy (conventional) images. PCD images were compared to in vivo and ex vivo magnetic resonance imaging (MRI) and histology. For infarct versus remote myocardium, contrast-to-noise ratio (CNR) was maximal on late enhancement gadolinium maps (CNR 9.0 ± 0.8, 6.6 ± 0.7, and 0.4 ± 0.4, p < 0.001 for gadolinium maps, single-energy images, and iodine maps, respectively). For infarct versus blood pool, CNR was maximum for iodine maps (CNR 11.8 ± 1.3, 3.8 ± 1.0, and 1.3 ± 0.4, p < 0.001 for iodine maps, gadolinium maps, and single-energy images, respectively). Combined first-pass iodine and late gadolinium maps allowed quantitative separation of blood pool, scar, and remote myocardium. MRI and histology analysis confirmed accurate PCD CT delineation of scar. Simultaneous multi-contrast agent cardiac imaging is feasible with photon-counting detector CT. These initial proof-of-concept results may provide incentives to develop new k-edge contrast agents, to investigate possible interactions between multiple simultaneously administered contrast agents, and to ultimately bring them to clinical practice.
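The linear material decomposition reduces, per pixel, to solving a small linear system: two energy-bin measurements against the known per-bin attenuation of the two contrast materials. A sketch with made-up attenuation values (not the study's calibration data):

```python
def decompose(measured, mu):
    """Solve the 2x2 linear system  measured = mu @ densities  for a
    two-bin, two-material (iodine/gadolinium) decomposition at one pixel.
    mu[i][j] is the attenuation of material j in energy bin i."""
    (a, b), (c, d) = mu
    det = a * d - b * c
    m1, m2 = measured
    # Cramer's rule for the two material densities
    return ((m1 * d - m2 * b) / det, (m2 * a - m1 * c) / det)

# Made-up attenuation matrix; pixel holds 2 units iodine, 1 unit gadolinium
mu = [[2.0, 5.0], [1.0, 8.0]]
pixel = (2 * 2.0 + 1 * 5.0, 2 * 1.0 + 1 * 8.0)  # forward model: (9.0, 10.0)
iodine, gado = decompose(pixel, mu)  # -> (2.0, 1.0)
```

In practice the per-bin attenuation matrix comes from calibration, and the same solve is applied at every pixel to produce the iodine and gadolinium maps.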
Bawankar, Pritam; Shanbhag, Nita; K., S. Smitha; Dhawan, Bodhraj; Palsule, Aratee; Kumar, Devesh; Chandel, Shailja
2017-01-01
Diabetic retinopathy (DR) is a leading cause of blindness among working-age adults. Early diagnosis through effective screening programs is likely to improve vision outcomes. The ETDRS seven-standard-field 35-mm stereoscopic color retinal imaging (ETDRS) of the dilated eye is elaborate and requires mydriasis, and is unsuitable for screening. We evaluated an image analysis application for the automated diagnosis of DR from non-mydriatic single-field images. Patients suffering from diabetes for at least 5 years were included if they were 18 years or older. Patients already diagnosed with DR were excluded. Physiologic mydriasis was achieved by placing the subjects in a dark room. Images were captured using a Bosch Mobile Eye Care fundus camera. The images were analyzed by the Retinal Imaging Bosch DR Algorithm for the diagnosis of DR. All subjects also subsequently underwent pharmacological mydriasis and ETDRS imaging. Non-mydriatic and mydriatic images were read by ophthalmologists. The ETDRS readings were used as the gold standard for calculating the sensitivity and specificity for the software. 564 consecutive subjects (1128 eyes) were recruited from six centers in India. Each subject was evaluated at a single outpatient visit. Forty-four of 1128 images (3.9%) could not be read by the algorithm, and were categorized as inconclusive. In four subjects, neither eye provided an acceptable image: these four subjects were excluded from the analysis. This left 560 subjects for analysis (1084 eyes). The algorithm correctly diagnosed 531 of 560 cases. The sensitivity, specificity, and positive and negative predictive values were 91%, 97%, 94%, and 95% respectively. The Bosch DR Algorithm shows favorable sensitivity and specificity in diagnosing DR from non-mydriatic images, and can greatly simplify screening for DR. This also has major implications for telemedicine in the use of screening for retinopathy in patients with diabetes mellitus. PMID:29281690
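The reported screening statistics follow directly from the 2x2 confusion matrix against the ETDRS gold standard. The cell counts below are illustrative only (the abstract gives rates, not the underlying counts):

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from screening counts
    (true/false positives and negatives against a gold standard)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Invented counts chosen to reproduce the headline rates
m = screening_metrics(tp=91, fp=6, tn=194, fn=9)
# m["sensitivity"] -> 0.91, m["specificity"] -> 0.97
```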
2014-01-03
With its C2 coronagraph instrument, NASA's satellite SOHO captured a blossoming coronal mass ejection (CME) as it roared into space from the right side of the Sun (Dec. 28, 2013). SOHO also produces running difference images and movies of the Sun's corona, in which the difference between one image and the next (taken about 10 minutes apart) is highlighted. This technique strongly emphasizes the changes that occurred. Here we have taken a single white light frame and shifted it back and forth with a running difference image taken at the same time to illustrate the effect. Credit: NASA/GSFC/SOHO NASA image use policy.
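A running difference image is simply the pixel-wise subtraction of consecutive frames: static structure cancels to zero and only changes survive. A toy sketch on 2x2 "frames" with invented intensity values:

```python
def running_difference(prev, curr):
    """Pixel-wise difference between consecutive frames; unchanged regions
    map to 0, so only the changes between frames are emphasized."""
    return [[c - p for p, c in zip(prow, crow)] for prow, crow in zip(prev, curr)]

frame_t0 = [[10, 10], [10, 10]]
frame_t1 = [[10, 25], [10, 10]]  # one brightening pixel (e.g., the CME front)
diff = running_difference(frame_t0, frame_t1)  # -> [[0, 15], [0, 0]]
```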
Innovative scheme for high-repetition-rate imaging of CN radical.
Satija, Aman; Ruesch, Morgan D; Powell, Michael S; Son, Steven F; Lucht, Robert P
2018-02-01
We have employed, to the best of our knowledge, a novel excitation scheme to perform the first high-repetition-rate planar laser-induced fluorescence (PLIF) measurements of the CN radical in combustion. The third harmonic of a Nd:YVO4 laser at 355 nm, owing to its relatively large linewidth, overlaps several R-branch transitions in the CN ground electronic state. Therefore, the 355 nm beam was employed to directly excite the CN transitions with good efficiency. The CN measurements were performed in premixed CH4-N2O flames with varying equivalence ratios. A detailed characterization of the high-speed CN PLIF imaging system is presented via its ability to capture statistical and dynamical information in these premixed flames. Single-shot CN PLIF images obtained over an HMX pellet undergoing self-supported deflagration are presented as an example of the imaging system being applied to characterizing the flame structure of energetic materials.
2014-02-06
The Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite captured this stunning view of Japan's four largest islands on February 20, 2004. The snow-covered southern arm of Hokkaido extends into the upper left corner. Honshu, Japan's largest island, curves across the center of the image. Shikoku, right, and Kyushu, left, form the southern tip of the group. Japan is mostly mountainous, and, as the dusting of snow in this image shows, is cold in the north and more tropical in the south. A single red dot marks the location of an active fire. Credit: Jeff Schmaltz, MODIS Rapid Response Team, NASA/GSFC NASA image use policy.
Ultrafast Imaging using Spectral Resonance Modulation
NASA Astrophysics Data System (ADS)
Huang, Eric; Ma, Qian; Liu, Zhaowei
2016-04-01
CCD cameras are ubiquitous in research labs, industry, and hospitals for a huge variety of applications, but there are many dynamic processes in nature that unfold too quickly to be captured. Although tradeoffs can be made between exposure time, sensitivity, and area of interest, ultimately the speed limit of a CCD camera is constrained by the electronic readout rate of the sensors. One potential way to improve the imaging speed is with compressive sensing (CS), a technique that allows for a reduction in the number of measurements needed to record an image. However, most CS imaging methods require spatial light modulators (SLMs), which are subject to mechanical speed limitations. Here, we demonstrate an etalon array based SLM without any moving elements that is unconstrained by either mechanical or electronic speed limitations. This novel spectral resonance modulator (SRM) shows great potential in an ultrafast compressive single pixel camera.
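A single-pixel compressive camera records one detector reading per modulation pattern rather than a full pixel array. The toy sketch below shows the measurement model and a matched-filter recovery that suffices for a scene with a single bright pixel; all patterns and values are invented, and real compressive sensing uses sparse (e.g., l1-based) solvers rather than this shortcut:

```python
def measure(scene, patterns):
    """Single-pixel measurements: one inner product of the scene with each
    SLM modulation pattern (patterns given as flat lists)."""
    return [sum(s * p for s, p in zip(scene, pat)) for pat in patterns]

def recover_one_spike(meas, patterns, n_pixels):
    """Toy recovery for a single-bright-pixel scene: pick the pixel whose
    pattern column correlates best with the measurement vector."""
    def score(j):
        return sum(m * pat[j] for m, pat in zip(meas, patterns))
    return max(range(n_pixels), key=score)

# Three +/-1 patterns over a 4-pixel scene: fewer measurements than pixels
patterns = [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1]]
scene = [0, 0, 7, 0]                        # single bright pixel at index 2
y = measure(scene, patterns)                # -> [7, 7, -7]
pixel = recover_one_spike(y, patterns, 4)   # -> 2
```

The point of the etalon-array SLM in the paper is to switch such patterns at optical rather than mechanical speeds; the reconstruction mathematics is unchanged.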
NASA Astrophysics Data System (ADS)
Serpell, Christopher J.; Rutte, Reida N.; Geraki, Kalotina; Pach, Elzbieta; Martincic, Markus; Kierkowicz, Magdalena; de Munari, Sonia; Wals, Kim; Raj, Ritu; Ballesteros, Belén; Tobias, Gerard; Anthony, Daniel C.; Davis, Benjamin G.
2016-10-01
The desire to study biology in situ has been aided by many imaging techniques. Among these, X-ray fluorescence (XRF) mapping permits observation of elemental distributions in a multichannel manner. However, XRF imaging is underused, in part, because of the difficulty in interpreting maps without an underlying cellular `blueprint'; this could be supplied using contrast agents. Carbon nanotubes (CNTs) can be filled with a wide range of inorganic materials, and thus can be used as `contrast agents' if biologically absent elements are encapsulated. Here we show that sealed single-walled CNTs filled with lead, barium and even krypton can be produced, and externally decorated with peptides to provide affinity for sub-cellular targets. The agents are able to highlight specific organelles in multiplexed XRF mapping, and are, in principle, a general and versatile tool for this and other modes of biological imaging.
Warburton, Bruce; Gormley, Andrew M
2015-01-01
Internationally, invasive vertebrate species pose a significant threat to biodiversity, agricultural production and human health. To manage these species a wide range of tools, including traps, are used. In New Zealand, brushtail possums (Trichosurus vulpecula), stoats (Mustela erminea), and ship rats (Rattus rattus) are invasive and there is an ongoing demand for cost-effective non-toxic methods for controlling these pests. Recently, traps with multiple-capture capability have been developed which, because they do not require regular operator-checking, are purported to be more cost-effective than traditional single-capture traps. However, when pest populations are being maintained at low densities (as is typical of orchestrated pest management programmes) it remains uncertain whether it is more cost-effective to use fewer multiple-capture traps or more single-capture traps. To address this uncertainty, we used an individual-based spatially explicit modelling approach to determine the likely maximum animal-captures per trap, given stated pest densities and defined times traps are left between checks. In the simulation, single- or multiple-capture traps were spaced according to best-practice pest-control guidelines. For possums with maintenance densities set at the lowest level (i.e. 0.5/ha), 98% of all simulated possums were captured with only a single-capacity trap set at each site. When possum density was increased to moderate levels of 3/ha, a capacity of three captures per trap caught 97% of all simulated possums. Results were similar for stoats, although only two potential captures per site were sufficient to capture 99% of simulated stoats. For rats, which were simulated at their typically higher densities, even a six-capture capacity per trap site resulted in only an 80% kill. Depending on target species, prevailing density and extent of immigration, the most cost-effective strategy for pest control in New Zealand might be to deploy several single-capture traps rather than investing in fewer, but more expensive, multiple-capture traps.
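The capacity question can be caricatured per trap site: animals encounter the trap one at a time, and a trap with capacity k stops catching after its k-th capture between checks. A heavily simplified Monte Carlo sketch (all parameters invented; the study's model is spatially explicit and far richer):

```python
import random

def fraction_caught(n_animals, capacity, p_encounter=1.0, n_runs=2000, seed=1):
    """Mean fraction of animals caught at one trap site between checks.
    Each animal independently encounters the trap with probability
    p_encounter; the trap saturates once `capacity` captures are made."""
    rng = random.Random(seed)
    caught_total = 0
    for _ in range(n_runs):
        caught = 0
        for _ in range(n_animals):
            if caught < capacity and rng.random() < p_encounter:
                caught += 1
        caught_total += caught
    return caught_total / (n_runs * n_animals)

# With every animal encountering the trap, capacity caps the catch outright:
half = fraction_caught(n_animals=6, capacity=3)  # 0.5
full = fraction_caught(n_animals=6, capacity=6)  # 1.0
```

Lowering `p_encounter` below 1 makes the trade-off the abstract describes visible: at low densities extra capacity sits unused, while at high densities it is the binding constraint.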
Development of a single-photon-counting camera with use of a triple-stacked micro-channel plate.
Yasuda, Naruomi; Suzuki, Hitoshi; Katafuchi, Tetsuro
2016-01-01
At the quantum-mechanical level, all substances (not merely electromagnetic waves such as light and X-rays) exhibit wave–particle duality. Whereas students of radiation science can easily understand the wave nature of electromagnetic waves, the particle (photon) nature may elude them. Therefore, to assist students in understanding the wave–particle duality of electromagnetic waves, we have developed a photon-counting camera that captures single photons in two-dimensional images. As an image intensifier, this camera has a triple-stacked micro-channel plate (MCP) with an amplification factor of 10^6. The ultra-low light of a single photon entering the camera is first converted to an electron through the photoelectric effect on the photocathode. The electron is intensified by the triple-stacked MCP and then converted to a visible light distribution, which is measured by a high-sensitivity complementary metal oxide semiconductor image sensor. Because it detects individual photons, the photon-counting camera is expected to provide students with a complete understanding of the particle nature of electromagnetic waves. Moreover, it measures ultra-weak light that cannot be detected by ordinary low-sensitivity cameras. Therefore, it is suitable for experimental research on scintillator luminescence, biophoton detection, and similar topics.
Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs).
Jaramillo, Carlos; Valenti, Roberto G; Guo, Ling; Xiao, Jizhong
2016-02-06
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to suit other catadioptric-based omnistereo vision applications under different circumstances.
ASTER Captures New Image of Pakistan Flooding
2010-08-20
NASA's Terra spacecraft captured this cloud-free image over the city of Sukkur, Pakistan, on Aug. 18, 2010. Sukkur, located in Sindh Province in southeastern Pakistan, is visible as the grey, urbanized area in the lower left center of the image.
A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks
NASA Astrophysics Data System (ADS)
Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei
2018-01-01
Image signals acquired by a wireless visual sensor network can be used to capture specific events, with event capture realized by image processing at the sink node. A distributed compressive sensing scheme is used to transmit these image signals from the camera nodes to the sink node. A measurement scheme and a joint reconstruction algorithm for these image signals are proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster head node, acting as the image decoder, can accurately co-reconstruct these image signals. Subjective visual quality and the reconstruction error rate are used to evaluate reconstructed image quality. Simulation results show that the joint reconstruction algorithm achieves higher image quality at the same image compression rate than the independent reconstruction algorithm.
Active 3D camera design for target capture on Mars orbit
NASA Astrophysics Data System (ADS)
Cottin, Pierre; Babin, François; Cantin, Daniel; Deslauriers, Adam; Sylvestre, Bruno
2010-04-01
During the ESA Mars Sample Return (MSR) mission, a sample canister launched from Mars will be autonomously captured by an orbiting satellite. We present the concept and the design of an active 3D camera supporting the orbiter navigation system during the rendezvous and capture phase. This camera aims at providing the range and bearing of a 20 cm diameter canister from 2 m to 5 km within a 20° field-of-view without moving parts (scannerless). The concept exploits the sensitivity and the gating capability of a gated intensified camera. It is supported by a pulsed source based on an array of laser diodes with adjustable amplitude and pulse duration (from nanoseconds to microseconds). The ranging capability is obtained by adequately controlling the timing between the acquisition of 2D images and the emission of the light pulses. Three modes of acquisition are identified to accommodate the different levels of ranging and bearing accuracy and the 3D data refresh rate. To come up with a single 3D image, each mode requires a different number of images to be processed. These modes can be applied to the different approach phases. The entire concept of operation of this camera is detailed with an emphasis on the extreme lighting conditions. Its uses for other space missions and terrestrial applications are also highlighted. This design is implemented in a prototype with shorter ranging capabilities for concept validation. Preliminary results obtained with this prototype are also presented. This work is financed by the Canadian Space Agency.
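Range gating ties the acquisition timing directly to geometry: light returning from range R arrives after a round-trip delay of 2R/c, so opening the intensifier gate at delay t selects returns from range ct/2. A minimal sketch of that arithmetic (illustrative numbers, not the flight design):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_delay_for_range(range_m):
    """Round-trip delay after pulse emission at which light from range_m returns."""
    return 2.0 * range_m / C

def range_from_delay(delay_s):
    """Range selected by opening the gate delay_s after pulse emission."""
    return C * delay_s / 2.0

t = gate_delay_for_range(1500.0)  # ~10 microseconds for a target at 1.5 km
r = range_from_delay(t)           # recovers ~1500.0 m
```

Stepping the gate delay between successive 2D exposures is what lets a scannerless gated camera build up range information without moving parts.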
Lights, Camera, Action! Antimicrobial Peptide Mechanisms Imaged in Space and Time
Choi, Heejun; Rangarajan, Nambirajan; Weisshaar, James C.
2015-01-01
Deeper understanding of the bacteriostatic and bactericidal mechanisms of antimicrobial peptides (AMPs) should help in the design of new antibacterial agents. Over several decades, a variety of biochemical assays have been applied to bulk bacterial cultures. While some of these bulk assays provide time resolution on the order of 1 min, they do not capture faster mechanistic events. Nor can they provide subcellular spatial information or discern cell-to-cell heterogeneity within the bacterial population. Single-cell, time-resolved imaging assays bring a completely new spatiotemporal dimension to AMP mechanistic studies. We review recent work that provides new insights into the timing, sequence, and spatial distribution of AMP-induced effects on bacterial cells. PMID:26691950
Impact of Destructive California Wildfire Captured by NASA Spacecraft
2016-07-01
The Erskine wildfire, northeast of Bakersfield, California, is the state's largest to date in 2016. After starting on June 23, the fire has consumed 47,000 acres (19,020 hectares), destroyed more than 250 single residences, and is responsible for two fatalities. As of June 30, the fire was 70 percent contained; full containment was estimated by July 5. This image, obtained June 30 by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument on NASA's Terra spacecraft, displays vegetation in red. The image covers an area of 19 by 21 miles (31 by 33 kilometers), and is located at 35.6 degrees north, 118.5 degrees west. http://photojournal.jpl.nasa.gov/catalog/PIA20741
NASA Astrophysics Data System (ADS)
Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.
2018-05-01
A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each one has different methodological bases, advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets, and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system - for a single camera analysis - was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square-error (RMSE), and 0.07 to 0.19 mm for the dual-camera analysis. To the best of the authors' knowledge, this work is the first to address the topic of DF stability analysis.
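The stability figures above are reported as RMSE of 3D coordinates, i.e., the same points reconstructed under two calibration sets and compared. A minimal sketch of that metric (illustrative code, not the authors' implementation):

```python
# 3D coordinate RMSE between two reconstructions of the same point set,
# e.g., one per calibration epoch. Illustrative sketch only.
import math

def rmse_3d(points_a, points_b):
    """Root-mean-square error between paired 3D points (iterables of (x, y, z))."""
    assert len(points_a) == len(points_b) and points_a
    sq = [(ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
          for (ax, ay, az), (bx, by, bz) in zip(points_a, points_b)]
    return math.sqrt(sum(sq) / len(sq))
```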
NASA Technical Reports Server (NTRS)
Vaughan, Andrew T. (Inventor); Riedel, Joseph E. (Inventor)
2016-01-01
A single, compact, lower power deep space positioning system (DPS) configured to determine a location of a spacecraft anywhere in the solar system, and provide state information relative to Earth, Sun, or any remote object. For example, the DPS includes a first camera and, possibly, a second camera configured to capture a plurality of navigation images to determine a state of a spacecraft in a solar system. The second camera is located behind, or adjacent to, a secondary reflector of a first camera in a body of a telescope.
Parallel Wavefront Analysis for a 4D Interferometer
NASA Technical Reports Server (NTRS)
Rao, Shanti R.
2011-01-01
This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology, and distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to capture several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement in a file on a disk. The available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time and faster by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data so that it is usable. In characterizing an adaptive optics system, like the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. A programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing. Idle time of the interferometer is minimized. This architecture is implemented in Python and JavaScript, and may be altered to fit a customer's needs.
A 3D photographic capsule endoscope system with full field of view
NASA Astrophysics Data System (ADS)
Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng
2013-09-01
Current capsule endoscopes use a single camera to capture images of the intestinal surface. Such a system can detect an abnormal point but cannot provide detailed information about it. Using two cameras can generate 3D images, but the visual plane changes as the capsule endoscope rotates, so the two cameras cannot capture the image information completely. To solve this problem, this research presents a new kind of capsule endoscope for capturing 3D images: a 3D photographic capsule endoscope system. The system uses three cameras to capture images in real time, which increases the viewing range up to 2.99 times with respect to a two-camera system. Combined with a 3D monitor, the system provides precise information about symptomatic points, helping doctors diagnose disease.
Near-Infrared Coloring via a Contrast-Preserving Mapping Model.
Chang-Hwan Son; Xiao-Ping Zhang
2017-11-01
Near-infrared gray images captured along with corresponding visible color images have recently proven useful for image restoration and classification. This paper introduces a new coloring method to add colors to near-infrared gray images based on a contrast-preserving mapping model. A naive coloring method directly adds the colors from the visible color image to the near-infrared gray image. However, this method results in an unrealistic image because of the discrepancies in the brightness and image structure between the captured near-infrared gray image and the visible color image. To solve the discrepancy problem, first, we present a new contrast-preserving mapping model to create a new near-infrared gray image with a similar appearance in the luminance plane to the visible color image, while preserving the contrast and details of the captured near-infrared gray image. Then, we develop a method to derive realistic colors that can be added to the newly created near-infrared gray image based on the proposed contrast-preserving mapping model. Experimental results show that the proposed new method not only preserves the local contrast and details of the captured near-infrared gray image, but also transfers the realistic colors from the visible color image to the newly created near-infrared gray image. It is also shown that the proposed near-infrared coloring can be used effectively for noise and haze removal, as well as local contrast enhancement.
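The paper's contrast-preserving mapping model is more elaborate than this, but the basic idea of giving the NIR luminance an appearance similar to the visible image while keeping NIR contrast structure can be illustrated with a simple mean/standard-deviation transfer (an illustrative simplification, not the paper's model):

```python
# Affine luminance transfer: give the NIR values the mean and spread of
# the visible luminance while preserving the NIR pixel ordering (and
# hence its local contrast structure). Illustrative sketch only.
import statistics

def match_luminance(nir, vis):
    """Map NIR gray values so their mean/std match the visible luminance."""
    m_n, s_n = statistics.mean(nir), statistics.pstdev(nir)
    m_v, s_v = statistics.mean(vis), statistics.pstdev(vis)
    scale = s_v / s_n if s_n else 1.0
    return [(p - m_n) * scale + m_v for p in nir]
```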
Machine Learning Based Single-Frame Super-Resolution Processing for Lensless Blood Cell Counting
Huang, Xiwei; Jiang, Yu; Liu, Xu; Xu, Hang; Han, Zhi; Rong, Hailong; Yang, Haiping; Yan, Mei; Yu, Hao
2016-01-01
A lensless blood cell counting system integrating microfluidic channel and a complementary metal oxide semiconductor (CMOS) image sensor is a promising technique to miniaturize the conventional optical lens based imaging system for point-of-care testing (POCT). However, such a system has limited resolution, making it imperative to improve resolution from the system-level using super-resolution (SR) processing. Yet, how to improve resolution towards better cell detection and recognition with low cost of processing resources and without degrading system throughput is still a challenge. In this article, two machine learning based single-frame SR processing types are proposed and compared for lensless blood cell counting, namely the Extreme Learning Machine based SR (ELMSR) and Convolutional Neural Network based SR (CNNSR). Moreover, lensless blood cell counting prototypes using commercial CMOS image sensors and custom designed backside-illuminated CMOS image sensors are demonstrated with ELMSR and CNNSR. When one captured low-resolution lensless cell image is input, an improved high-resolution cell image will be output. The experimental results show that the cell resolution is improved by 4×, and CNNSR has 9.5% improvement over the ELMSR on resolution enhancing performance. The cell counting results also match well with a commercial flow cytometer. Such ELMSR and CNNSR therefore have the potential for efficient resolution improvement in lensless blood cell counting systems towards POCT applications. PMID:27827837
Low-Speed Fingerprint Image Capture System User's Guide, June 1, 1993
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitus, B.R.; Goddard, J.S.; Jatko, W.B.
1993-06-01
The Low-Speed Fingerprint Image Capture System (LS-FICS) uses a Sun workstation controlling a Lenzar ElectroOptics Opacity 1000 imaging system to digitize fingerprint card images to support the Federal Bureau of Investigation's (FBI's) Automated Fingerprint Identification System (AFIS) program. The system also supports the operations performed by the Oak Ridge National Laboratory- (ORNL-) developed Image Transmission Network (ITN) prototype card scanning system. The input to the system is a single FBI fingerprint card of the agreed-upon standard format and a user-specified identification number. The output is a file formatted to be compatible with the National Institute of Standards and Technology (NIST) draft standard for fingerprint data exchange dated June 10, 1992. These NIST-compatible files contain the required print and text images. The LS-FICS is designed to provide the FBI with the capability of scanning fingerprint cards into a digital format. The FBI will replicate the system to generate a database of test images. The Host Workstation contains the image data paths and the compression algorithm. A local area network interface, disk storage, and tape drive are used for image storage and retrieval, and the Lenzar Opacity 1000 scanner is used to acquire the image. The scanner is capable of resolving 500 pixels/in. in both x and y directions. The print images are maintained in full 8-bit gray scale and compressed with an FBI-approved wavelet-based compression algorithm. The text fields are downsampled to 250 pixels/in. and 2-bit gray scale. The text images are then compressed using a lossless Huffman coding scheme. The text fields retrieved from the output files are easily interpreted when displayed on the screen. Detailed procedures are provided for system calibration and operation. Software tools are provided to verify proper system operation.
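The text-field pipeline described above (500 pixels/in. to 250 pixels/in., 8-bit to 2-bit gray) can be sketched as follows; this only illustrates the stated downsampling and quantization steps, not the LS-FICS code itself:

```python
# Illustrative sketch of the text-field preprocessing: halve resolution
# by averaging 2x2 blocks, then quantize 8-bit gray (0-255) to 2 bits (0-3).
def downsample_2x(img):
    """Average non-overlapping 2x2 blocks (500 ppi -> 250 ppi)."""
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) // 4
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

def to_2bit(value):
    """Keep only the top two bits of an 8-bit gray value."""
    return value >> 6
```

The reduced 2-bit text images would then feed a lossless entropy coder such as the Huffman scheme mentioned above.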
Live imaging of dense-core vesicles in primary cultured hippocampal neurons.
Kwinter, David M; Silverman, Michael A
2009-05-29
Observing and characterizing dynamic cellular processes can yield important information about cellular activity that cannot be gained from static images. Vital fluorescent probes, particularly green fluorescent protein (GFP) have revolutionized cell biology stemming from the ability to label specific intracellular compartments and cellular structures. For example, the live imaging of GFP (and its spectral variants) chimeras have allowed for a dynamic analysis of the cytoskeleton, organelle transport, and membrane dynamics in a multitude of organisms and cell types [1-3]. Although live imaging has become prevalent, this approach still poses many technical challenges, particularly in primary cultured neurons. One challenge is the expression of GFP-tagged proteins in post-mitotic neurons; the other is the ability to capture fluorescent images while minimizing phototoxicity, photobleaching, and maintaining general cell health. Here we provide a protocol that describes a lipid-based transfection method that yields a relatively low transfection rate (~0.5%), however is ideal for the imaging of fully polarized neurons. A low transfection rate is essential so that single axons and dendrites can be characterized as to their orientation to the cell body to confirm directionality of transport, i.e., anterograde v. retrograde. Our approach to imaging GFP expressing neurons relies on a standard wide-field fluorescent microscope outfitted with a CCD camera, image capture software, and a heated imaging chamber. We have imaged a wide variety of organelles or structures, for example, dense-core vesicles, mitochondria, growth cones, and actin without any special optics or excitation requirements other than a fluorescent light source. Additionally, spectrally-distinct, fluorescently labeled proteins, e.g., GFP and dsRed-tagged proteins, can be visualized near simultaneously to characterize co-transport or other coordinated cellular events. 
The imaging approach described here is flexible for a variety of imaging applications and can be adopted by a laboratory for relatively little cost provided a microscope is available.
NASA Spacecraft Captures Image of Brazil Flooding
2011-01-19
On Jan. 18, 2011, NASA Terra spacecraft captured this 3-D perspective image of the city of Nova Friburgo, Brazil. A week of torrential rains triggered a series of deadly mudslides and floods. More details about this image at the Photojournal.
NASA Astrophysics Data System (ADS)
Zahid, F.; Paulsson, M.; Polizzi, E.; Ghosh, A. W.; Siddiqui, L.; Datta, S.
2005-08-01
We present a transport model for molecular conduction involving an extended Hückel theoretical treatment of the molecular chemistry combined with a nonequilibrium Green's function treatment of quantum transport. The self-consistent potential is approximated by CNDO (complete neglect of differential overlap) method and the electrostatic effects of metallic leads (bias and image charges) are included through a three-dimensional finite element method. This allows us to capture spatial details of the electrostatic potential profile, including effects of charging, screening, and complicated electrode configurations employing only a single adjustable parameter to locate the Fermi energy. As this model is based on semiempirical methods it is computationally inexpensive and flexible compared to ab initio models, yet at the same time it is able to capture salient qualitative features as well as several relevant quantitative details of transport. We apply our model to investigate recent experimental data on alkane dithiol molecules obtained in a nanopore setup. We also present a comparison study of single molecule transistors and identify electronic properties that control their performance.
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Bryan, Thomas C. (Inventor); Book, Michael L. (Inventor)
2004-01-01
A method and system for processing an image including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data is selected from the image pixel data and linear spot segments are identified from the threshold pixel data selected. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of first and last pixels of a linear segment present in the captured image with respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear segment is saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value multiplied by that pixel's x-location).
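The segment-extraction idea in this claim can be sketched in software (the patent describes a hardware/firmware implementation; this Python version is illustrative only): threshold pixels are grouped into row-wise runs, and only each run's first/last positions, pixel sum, and x-weighted sum are retained.

```python
# Illustrative sketch: reduce a thresholded image to compact linear
# "spot segments" (first/last pixel, sum, x-weighted sum per run).
def extract_segments(image, threshold):
    """Return row-wise runs of pixels >= threshold as compact records."""
    segments = []
    for y, row in enumerate(image):
        x = 0
        while x < len(row):
            if row[x] >= threshold:
                start = x
                total = wsum = 0
                while x < len(row) and row[x] >= threshold:
                    total += row[x]          # sum of pixel values in the run
                    wsum += row[x] * x       # value weighted by x-location
                    x += 1
                segments.append({"first": (start, y), "last": (x - 1, y),
                                 "sum": total, "wsum": wsum})
            else:
                x += 1
    return segments
```

Tracking then reduces to comparing each segment's first/last positions across successive frames instead of storing whole images.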
Cine-servo lens technology for 4K broadcast and cinematography
NASA Astrophysics Data System (ADS)
Nurishi, Ryuji; Wakazono, Tsuyoshi; Usui, Fumiaki
2015-09-01
Central to the rapid evolution of 4K image capture technology in the past few years, deployment of large-format cameras with Super35mm Single Sensors is increasing in TV production for diverse shows such as dramas, documentaries, wildlife, and sports. While large format image capture has been the standard in the cinema world for quite some time, the recent experiences within the broadcast industry have revealed a variety of requirement differences for large format lenses compared to those of the cinema industry. A typical requirement for a broadcast lens is a considerably higher zoom ratio in order to avoid changing lenses in the middle of a live event, which is mostly not the case for traditional cinema productions. Another example is the need for compact size, light weight, and servo operability for a single camera operator shooting in a shoulder-mount ENG style. On the other hand, there are new requirements that are common to both worlds, such as smooth and seamless change in angle of view throughout the long zoom range, which potentially offers new image expression that never existed in the past. This paper will discuss the requirements from the two industries of cinema and broadcast, while at the same time introducing the new technologies and new optical design concepts applied to our latest "CINE-SERVO" lens series which presently consists of two models, CN7x17KAS-S and CN20x50IAS-H. It will further explain how Canon has realized 4K optical performance and fast servo control while simultaneously achieving compact size, light weight and high zoom ratio, by referring to patent-pending technologies such as the optical power layout, lens construction, and glass material combinations.
Fifty Years of Mars Imaging: from Mariner 4 to HiRISE
2017-11-20
This image from NASA's Mars Reconnaissance Orbiter (MRO) shows Mars' surface in detail. Mars has captured the imagination of astronomers for thousands of years, but it wasn't until the last half a century that we were able to capture images of its surface in detail. This particular site on Mars was first imaged in 1965 by the Mariner 4 spacecraft during the first successful fly-by mission to Mars. From an altitude of around 10,000 kilometers, this image (the ninth frame taken) achieved a resolution of approximately 1.25 kilometers per pixel. Since then, this location has been observed by six other visible cameras producing images with varying resolutions and sizes. This includes HiRISE (highlighted in yellow), which is the highest-resolution and has the smallest "footprint." This compilation, spanning Mariner 4 to HiRISE, shows each image at full-resolution. Beginning with Viking 1 and ending with our HiRISE image, this animation documents the historic imaging of a particular site on another world. In 1976, the Viking 1 orbiter began imaging Mars in unprecedented detail, and by 1980 had successfully mosaicked the planet at approximately 230 meters per pixel. In 1999, the Mars Orbiter Camera onboard the Mars Global Surveyor (1996) also imaged this site with its Wide Angle lens, at around 236 meters per pixel. This was followed by the Thermal Emission Imaging System on Mars Odyssey (2001), which also provided a visible camera producing the image we see here at 17 meters per pixel. Later in 2012, the High-Resolution Stereo Camera on the Mars Express orbiter (2003) captured this image of the surface at 25 meters per pixel. In 2010, the Context Camera on the Mars Reconnaissance Orbiter (2005) imaged this site at about 5 meters per pixel. Finally, in 2017, HiRISE acquired the highest resolution image of this location to date at 50 centimeters per pixel. 
When seen at this unprecedented scale, we can discern a crater floor strewn with small rocky deposits, boulders several meters across, and wind-blown deposits in the floors of small craters and depressions. This compilation of Mars images spanning over 50 years gives us a visual appreciation of the evolution of orbital Mars imaging over a single site. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 52.2 centimeters (20.6 inches) per pixel (with 2 x 2 binning); objects on the order of 156 centimeters (61.4 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22115
Water surface capturing by image processing
USDA-ARS?s Scientific Manuscript database
An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...
Jiang, Shaowei; Liao, Jun; Bian, Zichao; Guo, Kaikai; Zhang, Yongbing; Zheng, Guoan
2018-04-01
A whole slide imaging (WSI) system has recently been approved for primary diagnostic use in the US. The image quality and system throughput of WSI is largely determined by the autofocusing process. Traditional approaches acquire multiple images along the optical axis and maximize a figure of merit for autofocusing. Here we explore the use of deep convolution neural networks (CNNs) to predict the focal position of the acquired image without axial scanning. We investigate the autofocusing performance with three illumination settings: incoherent Köhler illumination, partially coherent illumination with two plane waves, and one-plane-wave illumination. We acquire ~130,000 images with different defocus distances as the training data set. Different defocus distances lead to different spatial features of the captured images. However, relying solely on the spatial information yields relatively poor autofocusing performance; it is better to extract defocus features from transform domains of the acquired image. For incoherent illumination, the Fourier cutoff frequency is directly related to the defocus distance. Similarly, autocorrelation peaks are directly related to the defocus distance for two-plane-wave illumination. In our implementation, we use the spatial image, the Fourier spectrum, the autocorrelation of the spatial image, and combinations thereof as the inputs for the CNNs. We show that the information from the transform domains can improve the performance and robustness of the autofocusing process. The resulting focusing error is ~0.5 µm, which is within the 0.8-µm depth-of-field range. The reported approach requires little hardware modification for conventional WSI systems and the images can be captured on the fly without focus map surveying. It may find applications in WSI and time-lapse microscopy. The transform- and multi-domain approaches may also provide new insights for developing microscopy-related deep-learning networks.
We have made our training and testing data set (~12 GB) open-source for the broad research community.
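The point that defocus is easier to read in transform domains than in raw pixels can be illustrated with a toy 1-D example: blurring suppresses the upper part of the Fourier band, so the energy there is a usable defocus cue. This sketch is purely illustrative (the paper feeds such transforms into CNNs; a brute-force DFT stands in for an FFT):

```python
# Toy demonstration: a box blur (stand-in for defocus) reduces the
# high-frequency Fourier energy of a signal. Illustrative sketch only.
import cmath

def dft_mag(signal):
    """Magnitudes of the discrete Fourier transform (O(n^2), toy sizes only)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def high_freq_energy(signal):
    """Energy in the upper half of the frequency band."""
    mags = dft_mag(signal)
    n = len(mags)
    return sum(m * m for m in mags[n // 4: 3 * n // 4])

def box_blur(signal, radius=1):
    """Circular moving-average blur, a crude stand-in for defocus."""
    n = len(signal)
    return [sum(signal[(i + d) % n] for d in range(-radius, radius + 1))
            / (2 * radius + 1) for i in range(n)]
```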
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovchinnikova, Olga S; Bhandari, Deepak; Lorenz, Matthias
2014-01-01
RATIONALE: Capture of material from a laser ablation plume into a continuous flow stream of solvent provides the means for uninterrupted sampling, transport and ionization of collected material for coupling with mass spectral analysis. Reported here is the use of vertically aligned transmission geometry laser ablation in combination with a new non-contact liquid vortex capture probe coupled with electrospray ionization for spot sampling and chemical imaging with mass spectrometry. Methods: A vertically aligned continuous flow liquid vortex capture probe was positioned directly underneath a sample surface in a transmission geometry laser ablation (355 nm, 10 Hz, 7 ns pulse width) setup to capture into solution the ablated material. The outlet of the vortex probe was coupled to the Turbo V ion source of an AB SCIEX TripleTOF 5600+ mass spectrometer. System operation and performance metrics were tested using inked patterns and thin tissue sections. Glass slides and slides designed especially for laser capture microdissection, viz., DIRECTOR slides and PEN 1.0 (polyethylene naphthalate) membrane slides, were used as sample substrates. Results: The estimated capture efficiency of laser ablated material was 24%, which was enabled by the use of a probe with large liquid surface area (~2.8 mm²) and with gravity to help direct ablated material vertically down towards the probe. The swirling vortex action of the liquid surface potentially enhanced capture and dissolution of not only particulates, but also gaseous products of the laser ablation. The use of DIRECTOR slides and PEN 1.0 (polyethylene naphthalate) membrane slides as sample substrates enabled effective ablation of a wide range of sample types (basic blue 7, polypropylene glycol, insulin and cytochrome c) without photodamage using a UV laser.
Imaging resolution of about 6 µm was demonstrated for stamped ink on DIRECTOR slides based on the ability to distinguish features present both in the optical and in the chemical image. This imaging resolution was 20 times better than the previous best reported results with laser ablation/liquid sample capture mass spectrometry imaging. Using thin sections of brain tissue, the chemical image of a selected lipid was obtained with an estimated imaging resolution of about 50 µm. Conclusions: A vertically aligned, transmission geometry laser ablation liquid vortex capture probe, electrospray ionization mass spectrometry system provides an effective means for spatially resolved spot sampling and imaging with mass spectrometry.
Ovchinnikova, Olga S; Bhandari, Deepak; Lorenz, Matthias; Van Berkel, Gary J
2014-08-15
Capture of material from a laser ablation plume into a continuous flow stream of solvent provides the means for uninterrupted sampling, transport and ionization of collected material for coupling with mass spectral analysis. Reported here is the use of vertically aligned transmission geometry laser ablation in combination with a new non-contact liquid vortex capture probe coupled with electrospray ionization for spot sampling and chemical imaging with mass spectrometry. A vertically aligned continuous flow liquid vortex capture probe was positioned directly underneath a sample surface in a transmission geometry laser ablation (355 nm, 10 Hz, 7 ns pulse width) setup to capture into solution the ablated material. The outlet of the vortex probe was coupled to the Turbo V™ ion source of an AB SCIEX TripleTOF 5600+ mass spectrometer. System operation and performance metrics were tested using inked patterns and thin tissue sections. Glass slides and slides designed especially for laser capture microdissection, viz., DIRECTOR® slides and PEN 1.0 (polyethylene naphthalate) membrane slides, were used as sample substrates. The estimated capture efficiency of laser-ablated material was 24%, which was enabled by the use of a probe with large liquid surface area (~2.8 mm²) and with gravity to help direct ablated material vertically down towards the probe. The swirling vortex action of the liquid surface potentially enhanced capture and dissolution not only of particulates, but also of gaseous products of the laser ablation. The use of DIRECTOR® slides and PEN 1.0 (polyethylene naphthalate) membrane slides as sample substrates enabled effective ablation of a wide range of sample types (basic blue 7, polypropylene glycol, insulin and cytochrome c) without photodamage using a UV laser. Imaging resolution of about 6 µm was demonstrated for stamped ink on DIRECTOR® slides based on the ability to distinguish features present both in the optical and in the chemical image.
This imaging resolution was 20 times better than the previous best reported results with laser ablation/liquid sample capture mass spectrometry imaging. Using thin sections of brain tissue the chemical image of a selected lipid was obtained with an estimated imaging resolution of about 50 µm. A vertically aligned, transmission geometry laser ablation liquid vortex capture probe, electrospray ionization mass spectrometry system provides an effective means for spatially resolved spot sampling and imaging with mass spectrometry. Published in 2014. This article is a U.S. Government work and is in the public domain in the USA.
An efficient intensity-based ready-to-use X-ray image stitcher.
Wang, Junchen; Zhang, Xiaohui; Sun, Zhen; Yuan, Fuzhen
2018-06-14
The limited field of view of the X-ray image intensifier makes it difficult to cover a large target area with a single X-ray image. X-ray image stitching techniques have been proposed to produce a panoramic X-ray image. This paper presents an efficient intensity-based X-ray image stitcher, which does not rely on accurate C-arm motion control or auxiliary devices and hence is ready to use in clinic. The stitcher consumes sequentially captured X-ray images with overlap areas and automatically produces a panoramic image. The gradient information for optimization of image alignment is obtained using a back-propagation scheme so that it is convenient to adopt various image warping models. The proposed stitcher has the following advantages over existing methods: (1) no additional hardware modification or auxiliary markers are needed; (2) it is more robust than feature-based approaches; (3) arbitrary warping models and shapes of the region of interest are supported; (4) seamless stitching is achieved using multi-band blending. Experiments have been performed to confirm the effectiveness of the proposed method. The proposed X-ray image stitcher is efficient, accurate and ready to use in clinic. Copyright © 2018 John Wiley & Sons, Ltd.
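The core of intensity-based alignment is searching for warp parameters that minimize an intensity mismatch over the image overlap. A toy version restricted to integer horizontal shifts with a mean squared-difference cost (the paper optimizes general warps with a back-propagation scheme; this sketch is only illustrative):

```python
# Illustrative intensity-based alignment: exhaustively try horizontal
# offsets of img_b in img_a's frame and keep the one with the smallest
# mean squared intensity difference over the overlap region.
def best_shift(img_a, img_b, max_shift):
    """Return the integer shift s minimizing mean SSD between
    img_a[:, s:] and the overlapping part of img_b."""
    width = len(img_a[0])
    best, best_cost = 0, float("inf")
    for s in range(max_shift + 1):
        cost = n = 0
        for row_a, row_b in zip(img_a, img_b):
            for x in range(width - s):
                d = row_a[x + s] - row_b[x]
                cost += d * d
                n += 1
        cost /= n  # mean, so larger overlaps are not penalized
        if cost < best_cost:
            best, best_cost = s, cost
    return best
```

A real stitcher would replace the brute-force search with gradient-based optimization over a richer warp model and blend the aligned images (e.g., multi-band blending, as above).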
A large-scale solar dynamics observatory image dataset for computer vision applications.
Kucuk, Ahmet; Banda, Juan M; Angryk, Rafal A
2017-01-01
The National Aeronautics and Space Administration (NASA) Solar Dynamics Observatory (SDO) mission has given us unprecedented insight into the Sun's activity. By capturing approximately 70,000 images a day, this mission has created one of the richest and largest repositories of solar image data available to mankind. With such massive amounts of information, researchers have been able to produce great advances in detecting solar events. In this resource, we compile SDO solar data into a single repository in order to provide the computer vision community with a standardized and curated large-scale dataset of several hundred thousand solar events found on high resolution solar images. This publicly available resource, along with the generation source code, will accelerate computer vision research on NASA's solar image data by reducing the amount of time spent performing data acquisition and curation from the multiple sources we have compiled. By improving the quality of the data with thorough curation, we anticipate wider adoption and interest from both the computer vision and solar physics communities.
Augusto's Sundial: Image-Based Modeling for Reverse Engineering Purposes
NASA Astrophysics Data System (ADS)
Baiocchi, V.; Barbarella, M.; Del Pizzo, S.; Giannone, F.; Troisi, S.; Piccaro, C.; Marcantonio, D.
2017-02-01
A photogrammetric survey of a unique archaeological site is reported in this paper. The survey was performed both with a panoramic image-based solution and by a classical procedure. The panoramic image-based solution employed a commercial system, the Trimble V10 Imaging Rover (IR). This instrument is an integrated camera system that captures 360-degree digital panoramas, composed of 12 images, with a single push. The direct comparison of the point clouds obtained with the traditional photogrammetric procedure and with the V10 stations, using the same GCP coordinates, was carried out in CloudCompare, open-source software that compares two point clouds and reports all the main statistical data. The site is a portion of the dial plate of the "Horologium Augusti", inaugurated in 9 B.C.E. in the area of Campo Marzio and still intact in its original position, in a cellar of a building in Rome, around 7 meters below the present ground level.
A deep learning framework for the automated inspection of complex dual-energy x-ray cargo imagery
NASA Astrophysics Data System (ADS)
Rogers, Thomas W.; Jaccard, Nicolas; Griffin, Lewis D.
2017-05-01
Previously, we investigated the use of Convolutional Neural Networks (CNNs) to detect so-called Small Metallic Threats (SMTs) hidden amongst legitimate goods inside a cargo container. We trained a CNN from scratch on data produced by a Threat Image Projection (TIP) framework that generates images with realistic variation to robustify performance. The system achieved 90% detection of containers that contained a single SMT, while raising 6% false positives on benign containers. The best CNN architecture used the raw high-energy image (single-energy) and its logarithm as input channels. Use of the logarithm improved performance, echoing studies on human operator performance; however, it is an unexpected result with CNNs. In this work, we (i) investigate methods to exploit the material information captured in dual-energy images, and (ii) introduce a new CNN training scheme that generates `spot-the-difference' benign and threat pairs on-the-fly. To the best of our knowledge, this is the first time that CNNs have been applied directly to raw dual-energy X-ray imagery, in any field. To exploit dual-energy, we experiment with adapting several physics-derived approaches to material discrimination from the cargo literature, and introduce three novel variants. We hypothesise that CNNs can implicitly learn about the material characteristics of objects from the raw dual-energy images, and use this to suppress false positives. The best-performing method is able to detect 95% of containers containing a single SMT, while raising 0.4% false positives on benign containers. This is a step-change improvement in performance over our prior work.
Colomb, Tristan; Dürr, Florian; Cuche, Etienne; Marquet, Pierre; Limberger, Hans G; Salathé, René-Paul; Depeursinge, Christian
2005-07-20
We present a digital holographic microscope that permits imaging of the polarization state. The technique results from the coupling of digital holographic microscopy and polarization digital holography. The interference between two orthogonally polarized reference waves and the wave transmitted by a microscopic sample, magnified by a microscope objective, is recorded on a CCD camera. The off-axis geometry permits two wavefronts to be reconstructed separately from this single hologram and used to image the object-wave Jones vector. We applied this technique to image the birefringence of a bent fiber. To evaluate the precision of the phase-difference measurement, the birefringence induced by internal stress in an optical fiber was measured and compared to a birefringence profile captured by a standard method developed to obtain high-resolution birefringence profiles of optical fibers.
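As a sketch of the reconstruction step described above: once the two orthogonally polarized wavefronts have been reconstructed from the single hologram, the per-pixel Jones vector is simply the pair of complex amplitudes, and the birefringence-induced retardation follows from their phase difference. The code below is a minimal, hypothetical illustration (the synthetic wavefronts and the π/4 retardation are assumptions, not data from the paper):

```python
import numpy as np

def jones_phase_difference(e_x, e_y):
    """Per-pixel phase difference between two reconstructed wavefronts.

    e_x, e_y: complex 2D arrays holding the object wave reconstructed with
    each of the two orthogonally polarized reference waves.  The pair
    (e_x, e_y) at each pixel is the (unnormalized) Jones vector; the phase
    of e_y relative to e_x maps the birefringence-induced retardation.
    """
    return np.angle(e_y * np.conj(e_x))

# Synthetic example: a uniform x-polarized wave and a copy retarded by pi/4.
e_x = np.ones((4, 4), dtype=complex)
e_y = np.exp(1j * np.pi / 4) * np.ones((4, 4))
delta = jones_phase_difference(e_x, e_y)
```

Real reconstructions would of course come from the off-axis hologram, not from synthetic arrays.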
NASA Astrophysics Data System (ADS)
Speed, C. M.; Swartz, J. M.; Gulick, S. P. S.; Goff, J.
2017-12-01
The Trinity River paleovalley is an offshore stratigraphic structure located on the inner continental shelf of the Gulf of Mexico, offshore Galveston, Texas. Its formation is linked to the paleo-Trinity system as it existed across the continental shelf during the last glacial period. Newly acquired high-resolution geophysical data have revealed more complexity in the valley morphology and shelf stratigraphy than was previously captured. Significantly, the paleo-Trinity River valley appears to change in its degree of confinement and relief relative to the surrounding strata. Proximal to the modern shoreline, the interpreted time-transgressive erosive surface formed by the paleo-river system is broad and rugose with no single valley, but just 5 km farther offshore the system appears to become confined to a 10-km-wide valley structure before becoming unconfined once again 30 km offshore. Fluvial stratigraphy in this region has a similar degree of complexity in morphology and preservation. A dense geophysical survey of several hundred km is planned for Fall 2017, which will provide unprecedented imaging of the paleovalley morphology and associated stratigraphy. Our analysis leverages robust chirp processing techniques that allow for imaging of strata on the decimeter scale. We will integrate our geophysical results with a wide array of both newly collected and previously published sediment cores. This approach will allow us to address several key questions regarding incised valley formation and preservation on glacial-interglacial timescales, including: to what extent do paleo-rivers remain confined within a single broad valley structure, what is the fluvial system's response to transgression, and what stratigraphy is created and preserved at the transition from fluvial to estuarine environments?
Our work illustrates that traditional models of incised valley formation and subsequent infilling potentially fail to capture the full breadth of dynamics of past river systems.
Slówko, Witold; Wiatrowski, Artur; Krysztof, Michał
2018-01-01
The paper considers some major problems of adapting the multi-detector method for three-dimensional (3D) imaging of wet bio-medical samples in the Variable Pressure/Environmental Scanning Electron Microscope (VP/ESEM). The described method belongs to the "single-view techniques", which create the 3D surface model from a sequence of 2D SEM images captured from a single viewpoint (along the electron beam axis) but illuminated from four directions. The basis of the method and the resulting requirements are given for the detector systems for secondary (SE) and backscattered electrons (BSE), as well as designs of systems that can work in variable conditions. The problems of SE detection with the Pressure Limiting Aperture (PLA) used as the signal collector are discussed with respect to secondary electron backscattering by a gaseous environment. However, the authors' attention is turned mainly to directional BSE detection, realized in two ways. High take-off-angle BSE were captured through the PLA with a quadruple semiconductor detector placed inside the intermediate chamber, while BSE starting at lower angles were detected by a four-fold ionization device working in the sample-chamber environment. The latter relied on converting highly energetic BSE into low-energy SE generated on the walls and in the gaseous environment of a deep discharge gap oriented along the BSE velocity direction. The converted BSE signal was amplified in an ionising avalanche developed in an electric field arranged transversally to the gap. The detector system operation is illustrated with numerous computer simulations and examples of experiments and 3D images. The latter were conducted in a JSM 840 microscope with combined detector-vacuum equipment that extends the capabilities of this high-vacuum instrument toward elevated pressures (over 1 kPa) and environmental conditions.
Reducing flicker due to ambient illumination in camera captured images
NASA Astrophysics Data System (ADS)
Kim, Minwoong; Bengtson, Kurt; Li, Lisa; Allebach, Jan P.
2013-02-01
The flicker artifact dealt with in this paper is the scanning distortion that arises when an image is captured by a digital camera using a CMOS imaging sensor with an electronic rolling shutter under strong ambient light sources powered by AC. This type of camera scans a target line-by-line within a frame, so time differences exist between the lines. This mechanism causes a captured image to be corrupted by the change of illumination, a phenomenon called the flicker artifact. The non-content area of the captured image is used to estimate a flicker signal, which is the key to compensating for the flicker artifact. The average signal of the non-content area taken along the scan direction has local extrema where the peaks of flicker exist. The locations of the extrema provide very useful information for estimating the distribution of pixel intensities that would be expected if the flicker artifact did not exist. The flicker-reduced images produced by our approach clearly demonstrate the reduced flicker artifact, based on visual observation.
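A minimal sketch of the core idea above, estimating the flicker signal from the non-content area: averaging a non-content strip along the scan direction yields a per-line gain that can be divided out. The margin width, the sinusoidal AC model, and the left-edge location of the non-content area are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def remove_line_flicker(img, margin=8):
    """Estimate and compensate per-line flicker using a non-content margin.

    img: 2D float array, rows = scan lines (rolling-shutter direction).
    margin: width in pixels of the non-content strip at the left edge,
    assumed to see only the (flickering) ambient illumination.
    """
    profile = img[:, :margin].mean(axis=1)   # one gain value per scan line
    gain = profile / profile.mean()          # normalize around 1.0
    return img / gain[:, None]               # undo the line-wise modulation

# Synthetic example: a flat scene modulated by a sinusoidal AC flicker.
rows = np.arange(100)
flicker = 1.0 + 0.3 * np.sin(2 * np.pi * rows / 25)
scene = np.full((100, 64), 128.0)
corrupted = scene * flicker[:, None]
restored = remove_line_flicker(corrupted, margin=8)
```

On this synthetic input the per-line gain is recovered exactly, so the restored image is flat again.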
NASA Technical Reports Server (NTRS)
Zhao, Minhua; Ming, Bin; Kim, Jae-Woo; Gibbons, Luke J.; Gu, Xiaohong; Nguyen, Tinh; Park, Cheol; Lillehei, Peter T.; Villarrubia, J. S.; Vladar, Andras E.
2015-01-01
Despite many studies of subsurface imaging of carbon nanotube (CNT)-polymer composites via scanning electron microscopy (SEM), significant controversy exists concerning the imaging depth and contrast mechanisms. We studied CNT-polyimide composites and, by three-dimensional reconstructions of captured stereo-pair images, determined that the maximum SEM imaging depth was typically hundreds of nanometers. The contrast mechanisms were investigated over a broad range of beam accelerating voltages from 0.3 to 30 kV, and ascribed to modulation by embedded CNTs of the effective secondary electron (SE) emission yield at the polymer surface. This modulation of the SE yield is due to non-uniform surface potential distribution resulting from current flows due to leakage and electron beam induced current. The importance of an external electric field on SEM subsurface imaging was also demonstrated. The insights gained from this study can be generally applied to SEM nondestructive subsurface imaging of conducting nanostructures embedded in dielectric matrices such as graphene-polymer composites, silicon-based single electron transistors, high resolution SEM overlay metrology or e-beam lithography, and have significant implications in nanotechnology.
Chen, Hui; Palmer, N; Dayton, M; Carpenter, A; Schneider, M B; Bell, P M; Bradley, D K; Claus, L D; Fang, L; Hilsabeck, T; Hohenberger, M; Jones, O S; Kilkenny, J D; Kimmel, M W; Robertson, G; Rochau, G; Sanchez, M O; Stahoviak, J W; Trotter, D C; Porter, J L
2016-11-01
A novel x-ray imager, which takes time-resolved gated images along a single line of sight, has been successfully implemented at the National Ignition Facility (NIF). This Gated Laser Entrance Hole diagnostic, G-LEH, incorporates a high-speed multi-frame CMOS x-ray imager developed by Sandia National Laboratories to upgrade the existing Static X-ray Imager diagnostic at NIF. The new diagnostic is capable of capturing two laser-entrance-hole images per shot on its 1024 × 448 pixel photodetector array, with integration times as short as 1.6 ns per frame. Since its implementation on NIF, the G-LEH diagnostic has successfully acquired images from various experimental campaigns, providing critical new information for understanding hohlraum performance in inertial confinement fusion (ICF) experiments, such as the size of the laser entrance hole vs. time, the growth of the laser-heated gold plasma bubble, the change in brightness of inner beam spots due to time-varying cross-beam energy transfer, and plasma instability growth near the hohlraum wall.
Expansion of the visual angle of a car rear-view image via an image mosaic algorithm
NASA Astrophysics Data System (ADS)
Wu, Zhuangwen; Zhu, Liangrong; Sun, Xincheng
2015-05-01
The rear-view image system is one of the active safety devices in cars and is widely applied in all types of vehicles and traffic-safety areas. However, previous studies by both domestic and foreign researchers were based on a single image-capture device used while reversing, so a blind area still remained for drivers. Even when multiple cameras were used to expand the visual angle of the car's rear-view image, the blind area remained because the different source images were not mosaicked together. To acquire an expanded visual angle for a car rear-view image, two charge-coupled device cameras with optical axes angled at 30 deg were mounted below the left and right fenders of a car in three lighting conditions (sunny outdoors, cloudy outdoors, and an underground garage) to capture rear-view heterologous images of the car. These rear-view heterologous images were then rapidly registered using the scale invariant feature transform (SIFT) algorithm. Combined with the random sample consensus (RANSAC) algorithm, the two heterologous images were finally mosaicked using the linear weighted gradated in-and-out fusion algorithm, and a seamless, visual-angle-expanded rear-view image was acquired. The four-index test results showed that the algorithms mosaic rear-view images well even in the underground-garage condition, where the average rate of correct matching was the lowest among the three conditions. Compared to the mean value method (MVM) and the segmental fusion method (SFM), the presented rear-view image mosaic algorithm had the best information preservation, the shortest computation time, and the most complete preservation of image detail features from the source images, and it also performed better in real time.
The method introduced in this paper provides a basis for research on expanding the visual angle of a car rear-view image in all-weather conditions.
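The "linear weighted gradated in-and-out" fusion step described above can be sketched as a linear cross-fade over the overlap region of two already-registered images. The fixed overlap width and the synthetic constant images below are assumptions for illustration; the paper's SIFT/RANSAC registration is taken as already done:

```python
import numpy as np

def gradated_in_out_blend(left, right, overlap):
    """Mosaic two aligned images with a linear 'gradated in-and-out' blend.

    left, right: 2D float arrays of equal height; the last `overlap` columns
    of `left` image the same strip as the first `overlap` columns of `right`
    (registration, e.g. via SIFT + RANSAC, is assumed already done).
    """
    w = np.linspace(0.0, 1.0, overlap)            # weight ramps 0 -> 1 across the seam
    seam = (1 - w) * left[:, -overlap:] + w * right[:, :overlap]
    return np.hstack([left[:, :-overlap], seam, right[:, overlap:]])

# Two constant images: the seam ramps smoothly from 100 to 200.
a = np.full((4, 10), 100.0)
b = np.full((4, 10), 200.0)
mosaic = gradated_in_out_blend(a, b, overlap=5)
```

The linear ramp gives each image full weight at its own side of the overlap and zero at the far side, which is what suppresses the visible seam.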
Dynamic quantitative analysis of adherent cell cultures by means of lens-free video microscopy
NASA Astrophysics Data System (ADS)
Allier, C.; Vincent, R.; Navarro, F.; Menneteau, M.; Ghenim, L.; Gidrol, X.; Bordy, T.; Hervé, L.; Cioni, O.; Bardin, S.; Bornens, M.; Usson, Y.; Morales, S.
2018-02-01
We present our implementation of a lens-free video microscopy setup for the monitoring of adherent cell cultures. We use multi-wavelength LED illumination together with a dedicated holographic reconstruction algorithm that allows for efficient removal of twin images from the reconstructed phase image at densities up to those of confluent cell cultures (>500 cells/mm²). We thereby demonstrate that lens-free video microscopy, with a large field of view (~30 mm²), can capture images of thousands of cells simultaneously and directly inside the incubator. It is then possible to trace and quantify single cells along several cell cycles. We thus prove that lens-free microscopy is a quantitative phase imaging technique enabling estimation of several metrics at the single-cell level as a function of time, for example the area, dry mass, maximum thickness, major axis length, and aspect ratio of each cell. Combined with cell tracking, it is then possible to extract important parameters such as the initial cell dry mass (just after cell division), the final cell dry mass (just before cell division), the average cell growth rate, and the cell cycle duration. As an example, we discuss the monitoring of a HeLa cell culture, which provided us with a dataset featuring more than 10,000 cell cycle tracks and more than 2 × 10⁶ cell morphological measurements in a single time-lapse.
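One of the metrics mentioned above, cell dry mass, is conventionally derived from a quantitative phase image via the specific refractive increment. A minimal sketch, assuming the standard monochromatic relation and a typical increment of ~0.18 µm³/pg (an assumed textbook value, not one taken from the paper):

```python
import numpy as np

def dry_mass_pg(phase, wavelength_um, pixel_area_um2, alpha_um3_per_pg=0.18):
    """Cell dry mass from a reconstructed quantitative phase image.

    phase: 2D array of optical phase (radians) over the segmented cell.
    Uses the standard relation OPD = phase * wavelength / (2*pi), then
    dry mass = integral(OPD) dA / alpha, where alpha is the specific
    refractive increment (~0.18 um^3/pg is a commonly assumed value).
    """
    opd = phase * wavelength_um / (2 * np.pi)   # optical path difference, um
    return opd.sum() * pixel_area_um2 / alpha_um3_per_pg

# Uniform toy cell: 100 pixels of 1 rad phase, 0.5 um illumination,
# 0.25 um^2 pixels.
phase = np.ones((10, 10))
mass = dry_mass_pg(phase, wavelength_um=0.5, pixel_area_um2=0.25)
```

In practice the phase image would first be segmented so only pixels belonging to one cell enter the sum.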
Ianuş, Andrada; Shemesh, Noam
2018-04-01
Diffusion MRI is confounded by the need to acquire at least two images separated by a repetition time, thereby thwarting the detection of rapid dynamic microstructural changes. The issue is exacerbated when diffusivity variations are accompanied by rapid changes in T2. The purpose of the present study is to accelerate diffusion MRI acquisitions such that both the reference and diffusion-weighted images necessary for quantitative diffusivity mapping are acquired in a single-shot experiment. A general methodology termed incomplete initial nutation diffusion imaging (INDI), capturing two diffusion contrasts in a single shot, is presented. This methodology creates a longitudinal magnetization reservoir that facilitates the successive acquisition of two images separated by only a few milliseconds. The theory behind INDI is presented, followed by proof-of-concept studies in water phantom, ex vivo, and in vivo experiments at 16.4 and 9.4 T. Mean diffusivities extracted from INDI were comparable with diffusion tensor imaging and two-shot isotropic diffusion encoding in the water phantom. In ex vivo mouse brain tissues, as well as in the in vivo mouse brain, mean diffusivities extracted from conventional isotropic diffusion encoding and INDI were in excellent agreement. Simulations for signal-to-noise considerations identified the regimes in which INDI is most beneficial. The INDI method accelerates diffusion MRI acquisition to single-shot mode, which can be of great importance for mapping dynamic microstructural properties in vivo without T2 bias. Magn Reson Med 79:2198-2204, 2018.
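The diffusivity mapping that INDI accelerates rests on the standard monoexponential signal model; given a reference and a diffusion-weighted image, the apparent diffusivity follows directly. A minimal sketch of that computation (the b-value and synthetic water-like signals are assumptions for illustration; the single-shot acquisition itself is not modeled):

```python
import numpy as np

def mean_diffusivity(s0, s_dw, b):
    """Apparent diffusivity map from a reference and a diffusion-weighted image.

    s0: non-weighted image; s_dw: diffusion-weighted image acquired with
    b-value `b` (s/mm^2).  This is the generic monoexponential model
    S = S0 * exp(-b * D), solved for D pixel-wise.
    """
    return -np.log(s_dw / s0) / b

# Water at ~2.0e-3 mm^2/s with b = 1000 s/mm^2 attenuates by exp(-2).
s0 = np.full((8, 8), 1000.0)
s_dw = s0 * np.exp(-1000 * 2.0e-3)
d_map = mean_diffusivity(s0, s_dw, b=1000)
```

Acquiring both `s0` and `s_dw` within milliseconds of each other, as INDI does, is what removes the T2 bias from such maps.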
Knowledge of healthcare professionals about rights of patients' images
Caires, Bianca Rodrigues; Lopes, Maria Carolina Barbosa Teixeira; Okuno, Meiry Fernanda Pinto; Vancini-Campanharo, Cássia Regina; Batista, Ruth Ester Assayag
2015-01-01
Objective To assess the knowledge of healthcare professionals about the capture and reproduction of images of patients in a hospital setting. Methods A cross-sectional, observational study among 360 healthcare professionals (nursing staff, physical therapists, and physicians) working at a teaching hospital in the city of São Paulo (SP). A questionnaire collecting sociodemographic information was distributed, and the data were correlated with the capture and reproduction of images at hospitals. Results Of the 360 respondents, 142 had captured images of patients in the last year, and 312 reported seeing other professionals taking photographs of patients. Of the participants who captured images, 61 said they used them for studies and presentation of clinical cases, and 168 professionals reported not knowing of any legislation in the Brazilian Penal Code regarding the collection and use of images. Conclusion There is a gap in the training of healthcare professionals regarding the use of patients' images. It is necessary to include subjects that address this theme in the syllabus of undergraduate courses, and healthcare organizations should regulate this issue. PMID:26267838
NASA Astrophysics Data System (ADS)
Prabhakaran, SP.; Ramesh Babu, R.; Sukumar, M.; Bhagavannarayana, G.; Ramamurthi, K.
2014-03-01
Growth of a bulk single crystal of 4-aminobenzophenone (4-ABP) in a vertical dynamic gradient freeze (VDGF) setup designed with an eight-zone furnace was investigated. The experimental parameters for the growth of the 4-ABP single crystal with respect to the design of the VDGF setup are discussed. The eight zones were used to generate multiple temperature gradients along the furnace, and a video imaging system helped to capture the real-time growth and the solid-liquid interface. A 4-ABP single crystal 18 mm in diameter and 40 mm in length was grown in this investigation. The structural and optical quality of the grown crystal was examined by high-resolution X-ray diffraction and UV-visible spectral analysis, respectively, and blue emission was also confirmed from the photoluminescence spectrum. The microhardness number of the crystal was estimated at different loads using a Vickers microhardness tester. The size and quality of the single crystal grown in the present investigation are compared with vertical-Bridgman-grown 4-ABP.
Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation
NASA Astrophysics Data System (ADS)
Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin
2018-04-01
Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information of a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We propose a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine the relevant information of the multi-focal images within a given stack into a single image, which is more informative and complete than any single image in the stack. In addition, the multi-focal images within a stack are fused along 3 orthogonal directions, and the multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors (texture, shape, different instances within the same class, and different classes of objects), we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier reaches a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed approach shows great potential in building an automated nematode taxonomy system for nematologists and is effective for classifying multi-focal image stacks.
Obstacle Detection and Avoidance of a Mobile Robotic Platform Using Active Depth Sensing
2014-06-01
At nearly one tenth the price of a laser range finder, the Xbox Kinect uses an infrared projector and camera to capture images of its environment in three dimensions. (The remainder of this record is figure-list residue from the source document, e.g. "Figure 9. RGB image captured by the camera on the Xbox Kinect.")
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, improving imaging quality in low-light shooting is one of users' main needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separated image sensors, which simultaneously captures both a color and a mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental color-plus-mono camera system, we demonstrate that the BJND-aware denoising and selective detail transfer are helpful in improving image quality during low-light shooting.
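The guided filter-based denoising mentioned above builds on the guided filter of He et al., in which a noisy channel is locally regressed on a clean guide so that guide edges are preserved while noise is smoothed. A minimal single-channel sketch; the box-filter radius and regularization `eps` are arbitrary choices, and the paper's adaptive parameter selection and BJND gating are not reproduced:

```python
import numpy as np

def box(a, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via cumulative sums."""
    k = 2 * r + 1
    p = np.pad(a, r, mode='edge')
    s = p.cumsum(axis=0)
    rows = s[k - 1:, :] - np.vstack([np.zeros((1, p.shape[1])), s[:-k, :]])
    s = rows.cumsum(axis=1)
    out = s[:, k - 1:] - np.hstack([np.zeros((rows.shape[0], 1)), s[:, :-k]])
    return out / (k * k)

def guided_filter(guide, src, r=4, eps=1e-3):
    """Guided filter: locally regress src on guide, so edges present in the
    guide are preserved while src is smoothed."""
    mean_i, mean_p = box(guide, r), box(src, r)
    var_i = box(guide * guide, r) - mean_i * mean_i
    cov_ip = box(guide * src, r) - mean_i * mean_p
    a = cov_ip / (var_i + eps)        # local linear coefficients
    b = mean_p - a * mean_i
    return box(a, r) * guide + box(b, r)

# Denoise a noisy 'color' channel using a clean 'mono' guide with a step edge.
rng = np.random.default_rng(0)
mono = np.hstack([np.zeros((16, 8)), np.ones((16, 8))])   # clean guide
color = mono + rng.normal(0, 0.05, mono.shape)            # noisy source
smoothed = guided_filter(mono, color, r=3, eps=1e-4)
```

In the dual-camera setting the mono image plays the role of `guide` and the low-light color channel the role of `src`.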
Russian Character Recognition using Self-Organizing Map
NASA Astrophysics Data System (ADS)
Gunawan, D.; Arisandi, D.; Ginting, F. M.; Rahmat, R. F.; Amalia, A.
2017-01-01
The World Tourism Organization (UNWTO) reported in 2014 that 28 million visitors visit Russia. Many visitors may have problems typing Russian words when using a digital dictionary. This is because the letters used in Russia and the countries around it, called Cyrillic, have different shapes than Latin letters, and visitors might not be familiar with Cyrillic. This research proposes an alternative way to input Cyrillic words. Instead of typing the Cyrillic words directly, a camera can be used to capture an image of the words as input. The captured image is cropped, then several pre-processing steps are applied, such as noise filtering, binary image processing, segmentation, and thinning. Next, the feature extraction process is applied to the image. Cyrillic letter recognition in the image is done by utilizing the Self-Organizing Map (SOM) algorithm. SOM successfully recognizes 89.09% of Cyrillic letters from computer-generated images, and 88.89% of Cyrillic letters from images captured by a smartphone's camera. For word recognition, SOM successfully recognized 292 words and partially recognized 58 words from images captured by the smartphone's camera. Therefore, the accuracy of word recognition using SOM is 83.42%.
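The SOM step above can be sketched as follows: neurons on a 2D lattice compete for each input feature vector, and the winner and its lattice neighbors are pulled toward it. The grid size, learning and neighborhood schedules, and toy 3D features below are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def train_som(data, grid=(5, 5), iters=500, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map training loop (a sketch of the algorithm)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    # Fixed 2D coordinates of each neuron on the map lattice.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        frac = t / iters
        lr = lr0 * (1 - frac)                 # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5     # shrinking neighborhood radius
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        nb = np.exp(-d2 / (2 * sigma ** 2))   # neighborhood kernel on the lattice
        weights += lr * nb[:, None] * (x - weights)
    return weights

def classify(weights, x):
    """Index of the best-matching unit for feature vector x."""
    return int(np.argmin(((weights - x) ** 2).sum(axis=1)))

# Two well-separated clusters: their best-matching units should differ.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.1, 0.02, (50, 3)),
                  rng.normal(0.9, 0.02, (50, 3))])
w = train_som(data)
```

For letter recognition, each trained unit would then be labeled with the letter class of the training vectors it wins most often.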
Multi-exposure high dynamic range image synthesis with camera shake correction
NASA Astrophysics Data System (ADS)
Li, Xudong; Chen, Yongfu; Jiang, Hongzhi; Zhao, Huijie
2017-10-01
Machine vision plays an important part in industrial online inspection. Owing to nonuniform illuminance conditions and variable working distances, the captured image tends to be over-exposed or under-exposed. As a result, when processing the image, for example for crack inspection, the algorithm complexity and computing time increase. Multi-exposure high dynamic range (HDR) image synthesis is used to improve the quality of images captured with a limited dynamic range. Inevitably, camera shake will result in a ghost effect, which blurs the synthesized image to some extent. However, existing exposure fusion algorithms assume that the input images are either perfectly aligned or captured in the same scene. These assumptions limit the application. At present, the widely used registration based on the Scale Invariant Feature Transform (SIFT) is usually time consuming. In order to rapidly obtain a high-quality HDR image without the ghost effect, we propose an efficient low dynamic range (LDR) image capturing approach and a registration method based on Oriented FAST and Rotated BRIEF (ORB) and histogram equalization, which can eliminate the illumination differences between the LDR images. The fusion is performed after alignment. The experimental results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Compared with other exposure fusion methods, our method is more efficient and can produce HDR images without the ghost effect by registering and fusing four multi-exposure images.
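The fusion-after-alignment step can be sketched with a per-pixel "well-exposedness" weighting, a simplified single-scale version of common exposure fusion (not the paper's exact method). The Gaussian weight around mid-gray and the grayscale inputs are assumptions for illustration; ORB-based registration is taken as already applied:

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Weighted exposure fusion of pre-registered LDR images.

    images: list of 2D grayscale arrays with values in [0, 1].  Each pixel
    is weighted by its 'well-exposedness' (a Gaussian around mid-gray),
    so saturated and underexposed pixels contribute little.
    """
    stack = np.stack(images)                               # (n, H, W)
    weight = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weight /= weight.sum(axis=0, keepdims=True)            # normalize per pixel
    return (weight * stack).sum(axis=0)

# Under-, mid-, and over-exposed constant frames: the fused value leans
# toward the best-exposed (mid-gray) frame.
frames = [np.full((4, 4), v) for v in (0.05, 0.5, 0.95)]
hdr = fuse_exposures(frames)
```

Production implementations typically apply this weighting in a multi-scale (Laplacian pyramid) fashion to avoid seams; the single-scale version keeps the idea visible.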
Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios
2017-02-01
Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.
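The pairing of Euclidean distance metrics with Monte Carlo simulation described above can be sketched as a null-hypothesis test: the observed mean nearest-neighbour distance of the segmented cells is compared with the same statistic for repeated uniform placements in the full imaged volume. The box geometry, simulation count, and clustering statistic below are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def mean_nn_distance(points):
    """Mean nearest-neighbour (Euclidean) distance within a 3D point set."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore each point's distance to itself
    return d.min(axis=1).mean()

def clustering_p_value(cells, volume, n_sim=200, seed=0):
    """Monte Carlo test of spatial clustering against a uniform null.

    cells: (n, 3) observed cell coordinates; volume: (3,) box dimensions.
    Simulates n_sim uniform placements of n cells in the full 3D volume and
    returns the fraction of simulations whose mean NN distance is <= the
    observed one (a small value => cells are more clustered than random).
    """
    rng = np.random.default_rng(seed)
    obs = mean_nn_distance(cells)
    null = [mean_nn_distance(rng.random((len(cells), 3)) * volume)
            for _ in range(n_sim)]
    return (np.array(null) <= obs).mean()

# A tight clump of 30 cells in a 100^3 box should look strongly clustered.
rng = np.random.default_rng(1)
clump = 50 + rng.normal(0, 1.0, (30, 3))
p = clustering_p_value(clump, volume=np.array([100.0, 100.0, 100.0]))
```

Using the full imaged volume as the simulation domain is what lets the entire captured 3D image, not just its central fraction, inform the statistic.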
Aerial image based die-to-model inspections of advanced technology masks
NASA Astrophysics Data System (ADS)
Kim, Jun; Lei, Wei-Guo; McCall, Joan; Zaatri, Suheil; Penn, Michael; Nagpal, Rajesh; Faivishevsky, Lev; Ben-Yishai, Michael; Danino, Udy; Tam, Aviram; Dassa, Oded; Balasubramanian, Vivek; Shah, Tejas H.; Wagner, Mark; Mangan, Shmoolik
2009-10-01
Die-to-Model (D2M) inspection is an innovative approach to running inspection based on mask design layout data. The D2M concept takes inspection from the traditional domain of the mask pattern to the preferred domain of the wafer aerial image. To achieve this, D2M transforms the mask layout database into a resist-plane aerial image, which in turn is compared to the aerial image of the mask captured by the inspection optics. D2M detection algorithms work similarly to an aerial D2D (die-to-die) inspection, but instead of comparing a die to another die, a die is compared to the aerial image model. D2M is used whenever D2D inspection is not practical (e.g., a single die) or when validation of mask conformity to design is needed, i.e., for printed pattern fidelity. D2M is of particular importance for the inspection of logic single-die masks, where no simplifying assumption of pattern periodicity may be made. The application can tailor the sensitivity to meet the needs at different locations, such as device area, scribe lines, and periphery. In this paper we present the first test results of the D2M mask inspection application at a mask shop. We describe the methodology of using D2M and review the practical aspects of D2M mask inspection.
Jandee, Kasemsak; Kaewkungwal, Jaranit; Khamsiriwatchara, Amnat; Lawpoolsri, Saranath; Wongwit, Waranya; Wansatid, Peerawat
2015-07-20
Entering data onto paper-based forms, then digitizing them, is a traditional data-management method that might result in poor data quality, especially when the secondary data are incomplete, illegible, or missing. Transcription errors from source documents to case report forms (CRFs) are common, and subsequently the errors pass from the CRFs to the electronic database. This study aimed to demonstrate the usefulness and to evaluate the effectiveness of mobile phone camera applications in capturing health-related data, aiming for data quality and completeness as compared to current routine practices exercised by government officials. In this study, the concept of "data entry via phone image capture" (DEPIC) was introduced and developed to capture data directly from source documents. This case study was based on immunization history data recorded in a mother and child health (MCH) logbook. The MCH logbooks (kept by parents) were updated whenever parents brought their children to health care facilities for immunization. Traditionally, health providers are supposed to key in duplicate information of the immunization history of each child; both on the MCH logbook, which is returned to the parents, and on the individual immunization history card, which is kept at the health care unit to be subsequently entered into the electronic health care information system (HCIS). In this study, DEPIC utilized the photographic functionality of mobile phones to capture images of all immunization-history records on logbook pages and to transcribe these records directly into the database using a data-entry screen corresponding to logbook data records. DEPIC data were then compared with HCIS data-points for quality, completeness, and consistency. As a proof-of-concept, DEPIC captured immunization history records of 363 ethnic children living in remote areas from their MCH logbooks. 
Comparison of the two databases, DEPIC versus HCIS, revealed differences in the completeness and consistency of immunization history records. Comparing the records of each logbook in the DEPIC and HCIS databases, 17.3% (63/363) of children had complete immunization history records in the DEPIC database, whereas no complete records were found in the HCIS database. Regarding actual vaccination dates, comparison of the records taken from the MCH logbooks with those in the HCIS found that 24.2% (88/363) of the children's records were entirely inconsistent. In addition, statistics derived from the DEPIC records showed higher immunization coverage and much greater compliance with the immunization schedule by age group than records derived from the HCIS database. DEPIC, the concept of collecting data via image capture directly from primary sources, has proven to be a useful data collection method in terms of completeness and consistency. In this study, DEPIC was implemented in the data collection of a single survey. The DEPIC concept, however, can easily be applied to other types of survey research, for example, collecting data on changes or trends based on image evidence over time. With its image evidence and audit trail features, DEPIC also has potential for use in clinical studies, since it offers improved data integrity and more reliable statistics for both health care and research settings.
Fu, Yu; Pedrini, Giancarlo
2014-01-01
In recent years, optical interferometry-based techniques have been widely used to perform noncontact measurement of dynamic deformation in different industrial areas. In these applications, various physical quantities need to be measured at every instant, and the Nyquist sampling theorem has to be satisfied along the time axis at each measurement point. Two types of techniques have been developed for such measurements: one is based on high-speed cameras, and the other uses a single photodetector. The limited measurement range along the time axis in camera-based technology is mainly due to the low capture rate, while photodetector-based technology can only measure a single point. In this paper, several aspects of these two technologies are discussed. For camera-based interferometry, the discussion includes the introduction of the carrier, the processing of the recorded images, phase extraction algorithms in various domains, and how to increase the temporal measurement range by using multiwavelength techniques. For detector-based interferometry, the discussion focuses mainly on single-point and multipoint laser Doppler vibrometers and their applications for measurement under extreme conditions. The results reflect the efforts made by researchers to improve the measurement capabilities of interferometry-based techniques to meet the requirements of industrial applications. PMID:24963503
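The Nyquist constraint mentioned above sets a hard lower bound on the camera frame rate. A minimal sketch (illustrative Python; the He-Ne wavelength and the Doppler-type fringe-rate relation are assumptions for the example, not taken from the paper):

```python
# For out-of-plane motion at velocity v, the interference signal on a pixel
# oscillates at roughly f = 2*v / wavelength; Nyquist then demands a camera
# frame rate above 2*f. Numbers here are purely illustrative.

WAVELENGTH = 632.8e-9  # assumed He-Ne laser wavelength in metres

def min_frame_rate(max_velocity_m_s: float, wavelength_m: float = WAVELENGTH) -> float:
    """Minimum camera frame rate (Hz) satisfying Nyquist for a surface
    moving at max_velocity_m_s along the line of sight."""
    signal_freq = 2.0 * max_velocity_m_s / wavelength_m  # fringe rate
    return 2.0 * signal_freq  # sample at least twice per fringe cycle

# Even a surface moving at 1 mm/s already demands a framing rate of ~6.3 kHz,
# which illustrates why the temporal measurement range is camera-limited:
rate = min_frame_rate(1e-3)
```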
3D digital image correlation using a single 3CCD colour camera and dichroic filter
NASA Astrophysics Data System (ADS)
Zhong, F. Q.; Shao, X. X.; Quan, C.
2018-04-01
In recent years, three-dimensional digital image correlation methods using a single colour camera have been reported. In this study, we propose a simplified system by employing a dichroic filter (DF) to replace the beam splitter and colour filters. The DF can be used to combine two views from different perspectives reflected by two planar mirrors and eliminate their interference. A 3CCD colour camera is then used to capture two different views simultaneously via its blue and red channels. Moreover, the measurement accuracy of the proposed method is higher since the effect of refraction is reduced. Experiments are carried out to verify the effectiveness of the proposed method. It is shown that the interference between the blue and red views is insignificant. In addition, the measurement accuracy of the proposed method is validated on the rigid body displacement. The experimental results demonstrate that the measurement accuracy of the proposed method is higher compared with the reported methods using a single colour camera. Finally, the proposed method is employed to measure the in- and out-of-plane displacements of a loaded plastic board. The re-projection errors of the proposed method are smaller than those of the reported methods using a single colour camera.
Super-resolution in a defocused plenoptic camera: a wave-optics-based approach.
Sahin, Erdem; Katkovnik, Vladimir; Gotchev, Atanas
2016-03-01
Plenoptic cameras enable the capture of a light field with a single device. However, with traditional light field rendering procedures, they can provide only low-resolution two-dimensional images. Super-resolution is considered to overcome this drawback. In this study, we present a super-resolution method for the defocused plenoptic camera (Plenoptic 1.0), where the imaging system is modeled using wave optics principles and utilizing low-resolution depth information of the scene. We are particularly interested in super-resolution of in-focus and near in-focus scene regions, which constitute the most challenging cases. The simulation results show that the employed wave-optics model makes super-resolution possible for such regions as long as sufficiently accurate depth information is available.
Dual-camera design for coded aperture snapshot spectral imaging.
Wang, Lizhi; Xiong, Zhiwei; Gao, Dahua; Shi, Guangming; Wu, Feng
2015-02-01
Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method.
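For intuition, the two simultaneous measurements can be sketched with a simplified single-disperser CASSI forward model (illustrative Python; the one-pixel-per-band shear and binary mask are simplifying assumptions, not the paper's exact optics):

```python
import numpy as np

def cassi_dual_measurements(cube, mask):
    """Sketch of the dual-camera measurement model: the CASSI arm codes each
    spectral band of the (h, w, bands) cube with the aperture mask and shears
    it by one pixel per band before summing onto the detector; the grayscale
    arm behind the beam splitter simply integrates the cube over wavelength."""
    h, w, bands = cube.shape
    cassi = np.zeros((h, w + bands - 1))
    for k in range(bands):
        cassi[:, k:k + w] += cube[..., k] * mask  # code, shear by k, accumulate
    gray = cube.sum(axis=2)  # panchromatic, uncoded side measurement
    return cassi, gray
```

The uncoded `gray` image is what constrains the otherwise severely underdetermined inversion of `cassi` back to the 3D cube.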
Note: Simple hysteresis parameter inspector for camera module with liquid lens
NASA Astrophysics Data System (ADS)
Chen, Po-Jui; Liao, Tai-Shan; Hwang, Chi-Hung
2010-05-01
A method to inspect the hysteresis parameter is presented in this article. The hysteresis of the whole camera module with a liquid lens can be measured, rather than merely that of a single lens. Because variation in focal length influences image quality, we propose using the sharpness of images captured from the camera module to evaluate hysteresis. Experiments reveal that the profile of the sharpness hysteresis corresponds to the contact-angle characteristic of the liquid lens. It can therefore be inferred that the hysteresis of the camera module is induced by the contact angle of the liquid lens. An inspection process takes only 20 s to complete. Thus, compared with other instruments, this inspection method is more suitable for integration into mass production lines for online quality assurance.
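The abstract does not specify which sharpness measure is used; a common stand-in is the variance of the discrete Laplacian, sketched below (illustrative Python, an assumption rather than the authors' metric):

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """Variance of the discrete Laplacian over the interior pixels of a 2-D
    grayscale array; higher values indicate a sharper (better-focused) image."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

# A crisp checkerboard scores far higher than a defocused (uniform) frame;
# sweeping the lens voltage up and down and recording this score traces out
# the sharpness-hysteresis profile described in the abstract.
sharp = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)
defocused = np.full((64, 64), 0.5)
```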
Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Previtali, M.; Roncoroni, F.
2018-05-01
360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.
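Photogrammetric processing of such cameras typically starts from the equirectangular projection, where each pixel maps to a viewing direction on the unit sphere. A minimal sketch (illustrative Python; the axis and sign conventions are assumptions and vary between software packages):

```python
import numpy as np

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit viewing direction.
    Longitude spans [-pi, pi] across the width; latitude spans
    [pi/2, -pi/2] down the height."""
    lon = (u / width - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / height) * np.pi
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

# The image centre looks along +x and the top row looks straight up (+z):
centre = pixel_to_ray(2880, 1440, 5760, 2880)
```

These unit rays are what a spherical-camera bundle adjustment operates on in place of the central-perspective projection model.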
Time of flight imaging through scattering environments (Conference Presentation)
NASA Astrophysics Data System (ADS)
Le, Toan H.; Breitbach, Eric C.; Jackson, Jonathan A.; Velten, Andreas
2017-02-01
Light scattering is a primary obstacle to imaging in many environments. On small scales, in biomedical microscopy and diffuse tomography, scattering is caused by tissue. On larger scales, scattering from dust and fog challenges vision systems for self-driving cars and naval remote imaging systems. We are developing scale models of scattering environments and investigating methods for improved imaging, particularly using time-of-flight transient information. With the emergence of single-photon avalanche diode (SPAD) detectors and fast semiconductor lasers, illumination and capture on picosecond timescales are becoming possible in inexpensive, compact, and robust devices. This opens up opportunities for new computational imaging techniques that make use of photon time of flight. Time-of-flight or range information is used in remote imaging scenarios in gated viewing, and in biomedical imaging in time-resolved diffuse tomography. In addition, spatial filtering is popular in biomedical scenarios with structured illumination and confocal microscopy. We present a combination of analytical, computational, and experimental models that allows us to develop and test imaging methods across scattering scenarios and scales. This framework will be used for proof-of-concept experiments to evaluate new computational imaging methods.
Optimization of compressive 4D-spatio-spectral snapshot imaging
NASA Astrophysics Data System (ADS)
Zhao, Xia; Feng, Weiyi; Lin, Lihua; Su, Wu; Xu, Guoqing
2017-10-01
In this paper, a modified 3D computational reconstruction method for the compressive 4D-spectro-volumetric snapshot imaging system is proposed for better sensing of the spectral information of 3D objects. In the design of the imaging system, a microlens array (MLA) is used to obtain a set of multi-view elemental images (EIs) of the 3D scenes. Then, these elemental images with one-dimensional spectral information and different perspectives are captured by the coded aperture snapshot spectral imager (CASSI), which senses the spectral data cube onto a compressive 2D measurement image. Finally, the depth images of 3D objects at arbitrary depths, like a focal stack, are computed by inversely mapping the elemental images according to geometrical optics. With the spectral estimation algorithm, the spectral information of 3D objects is also reconstructed. Using a shifted translation matrix, the contrast of the reconstruction result is further enhanced. Numerical simulation results verify the performance of the proposed method. The system can obtain both 3D spatial information and spectral data on 3D objects using only a single snapshot, which is valuable for agricultural harvesting robots and other 3D dynamic scenes.
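The "inverse mapping according to geometrical optics" is, at its core, a shift-and-sum back-projection of the elemental images onto a chosen depth plane. A minimal sketch (illustrative Python; the integer per-lens pixel shift is a simplifying assumption):

```python
import numpy as np

def reconstruct_plane(elemental_images, shift_px):
    """Back-project a grid of elemental images onto one depth plane by
    shifting each EI in proportion to its lens index and averaging.
    elemental_images: dict {(i, j): 2-D array}; shift_px: per-lens shift in
    pixels, which selects the in-focus depth."""
    acc = None
    for (i, j), ei in elemental_images.items():
        shifted = np.roll(np.roll(ei, i * shift_px, axis=0), j * shift_px, axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(elemental_images)
```

Scene points at the matching depth add coherently (their copies align under the shift), while points at other depths smear out, which is exactly how the focal stack is built plane by plane.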
Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung
2017-05-08
Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at long distances is growing in importance. Existing research using visible light cameras has mainly focused on human detection during daytime hours, when there is outside light; human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras, or thermal cameras, have been used. However, NIR illuminators have limitations in terms of illumination angle and distance, and the illuminator power must be adaptively adjusted depending on whether the object is close or far away. Thermal cameras remain costly, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at short distances in indoor environments, or on video-based methods that capture and process multiple images, which increases processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show that the method achieves high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.
Douglas, Erik S; Hsiao, Sonny C; Onoe, Hiroaki; Bertozzi, Carolyn R; Francis, Matthew B; Mathies, Richard A
2009-07-21
A microdevice is developed for DNA-barcode directed capture of single cells on an array of pH-sensitive microelectrodes for metabolic analysis. Cells are modified with membrane-bound single-stranded DNA, and specific single-cell capture is directed by the complementary strand bound in the sensor area of the iridium oxide pH microelectrodes within a microfluidic channel. This bifunctional microelectrode array is demonstrated for the pH monitoring and differentiation of primary T cells and Jurkat T lymphoma cells. Single Jurkat cells exhibited an extracellular acidification rate of 11 milli-pH min−1, while primary T cells exhibited only 2 milli-pH min−1. This system can be used to capture non-adherent cells specifically and to discriminate between visually similar healthy and cancerous cells in a heterogeneous ensemble based on their altered metabolic properties.
A new omni-directional multi-camera system for high resolution surveillance
NASA Astrophysics Data System (ADS)
Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf
2014-05-01
Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose are based on parabolic mirrors or fisheye lenses, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited to that of a single image sensor. Recently, the Panoptic camera approach that mimics the eyes of flying insects using multiple imagers has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high-resolution visible spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over (17,700×4,650) pixels (82.3 MP). Real-time video capturing capability is also verified at 30 fps for a resolution over (9,000×2,400) pixels (21.6 MP). The next generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain, such as large-perimeter object tracking, very-high-resolution depth map estimation, and high dynamic-range imaging, which are beyond standard stitching and panorama generation methods.
Large beam deflection using cascaded prism array
NASA Astrophysics Data System (ADS)
Wang, Wei-Chih; Tsui, Chi-Leung
2012-04-01
Endoscopes have been utilized in the medical field to observe the internals of the human body and assist in the diagnosis of diseases such as breathing disorders, internal bleeding, stomach ulcers, and urinary tract infections. Endoscopy is also utilized in biopsy procedures for the diagnosis of cancer. Conventional endoscopes suffer from a compromise between overall size and image quality, due to the sensor size required for acceptable image quality. To overcome the size constraint while maintaining captured image quality, we propose an electro-optic beam steering device based on a thermoplastic polymer, which has a small footprint (~5 mm × 5 mm) and can be easily fabricated using conventional hot-embossing and micro-fabrication techniques. The proposed device can be implemented as an imaging device inside endoscopes, allowing a reduction in the overall system size. In our previous work, a single-prism design was used to amplify the deflection generated by the index change of the thermoplastic polymer when a voltage is applied, yielding a deflection of 5.6°. To further amplify the deflection, a new design utilizing a cascaded three-prism array has been implemented, and a deflection angle of 29.2° is observed. The new design amplifies the beam deflection while keeping the advantage of the simple fabrication made possible by the thermoplastic polymer. Also, a photoresist-based collimator lens array has been added to shape and collimate the beam for high-quality imaging. The collimator is able to collimate the exiting beam to a 4 μm diameter over a distance of up to 25 mm, which potentially allows high-resolution image capture.
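The deflection figures quoted above follow from repeated application of Snell's law at each prism face; a minimal single-prism sketch (illustrative Python with example index and apex values, not the device's actual geometry or materials):

```python
import numpy as np

def prism_deviation(theta_i1, apex, n):
    """Total deviation of a ray through a single prism (all angles in
    radians): refract at the entrance face, cross to the exit face tilted
    by the apex angle, refract again. A cascade of prisms accumulates the
    per-prism deviations, which is the amplification idea described above."""
    theta_t1 = np.arcsin(np.sin(theta_i1) / n)   # Snell at entrance face
    theta_i2 = apex - theta_t1                   # geometry inside the prism
    theta_t2 = np.arcsin(n * np.sin(theta_i2))   # Snell at exit face
    return theta_i1 + theta_t2 - apex

# At minimum deviation the path is symmetric; for n = 1.5 and a 30 deg apex
# the deviation is about 15.7 deg, and three cascaded identical prisms would
# roughly triple that (ignoring inter-prism geometry).
dev = prism_deviation(np.deg2rad(22.836), np.deg2rad(30.0), 1.5)
```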
13-fold resolution gain through turbid layer via translated unknown speckle illumination
Guo, Kaikai; Zhang, Zibang; Jiang, Shaowei; Liao, Jun; Zhong, Jingang; Eldar, Yonina C.; Zheng, Guoan
2017-01-01
Fluorescence imaging through a turbid layer holds great promise for various biophotonics applications. Conventional wavefront shaping techniques aim to create and scan a focus spot through the turbid layer. Finding the correct input wavefront without direct access to the target plane remains a critical challenge. In this paper, we explore a new strategy for imaging through a turbid layer with a large field of view. In our setup, a fluorescence sample is sandwiched between two turbid layers. Instead of generating one focus spot via wavefront shaping, we use an unshaped beam to illuminate the turbid layer and generate an unknown speckle pattern at the target plane over a wide field of view. By tilting the input wavefront, we raster-scan the unknown speckle pattern via the memory effect and capture the corresponding low-resolution fluorescence images through the turbid layer. Different from wavefront-shaping-based single-spot scanning, the proposed approach employs many spots (i.e., speckles) in parallel to extend the field of view. Based on all captured images, we jointly recover the fluorescence object, the unknown optical transfer function of the turbid layer, the translated step size, and the unknown speckle pattern. Without direct access to the object plane or knowledge of the turbid layer, we demonstrate a 13-fold resolution gain through the turbid layer using the reported strategy. We also demonstrate the use of this technique to improve the resolution of a low numerical aperture objective lens, allowing both a large field of view and high resolution to be obtained at the same time. The reported method provides insight for developing new fluorescence imaging platforms and may find applications in deep-tissue imaging. PMID:29359102
JunoCam Images of Jupiter: A Juno Citizen Science Experiment
NASA Astrophysics Data System (ADS)
Hansen, Candice; Ravine, Michael; Bolton, Scott; Caplinger, Mike; Eichstadt, Gerald; Jensen, Elsa; Momary, Thomas W.; Orton, Glenn S.; Rogers, John
2017-10-01
The Juno mission to Jupiter carries a visible imager on its payload primarily for outreach. The vision of JunoCam’s outreach plan was for the public to participate in, not just observe, a science investigation. Four webpage components were developed for uploading and downloading comments and images, following the steps a traditional imaging team would do: Planning, Discussion, Voting, and Processing, hosted at https://missionjuno.swri.edu/junocam. Lightly processed and raw JunoCam data are posted. JunoCam images through broadband red, green and blue filters and a narrowband methane filter centered at 889 nm mounted directly on the detector. JunoCam is a push-frame imager with a 58 deg wide field of view covering a 1600 pixel width, and builds the second dimension of the image as the spacecraft rotates. This design enables capture of the entire pole of Jupiter in a single image at low emission angle when Juno is ~1 hour from perijove (closest approach). At perijove the wide field of view images are high-resolution while still capturing entire storms, e.g. the Great Red Spot. The public is invited to download JunoCam images, process them, and then upload their products. Over 2000 images have been uploaded to the JunoCam public image gallery. Contributions range from scientific quality to artful whimsy. Artistic works are inspired by Van Gogh and Monet. Works of whimsy include how Jupiter might look through the viewport of the Millennium Falcon, or to an angel perched on a lookout, or through a kaleidoscope. Citizen scientists have also engaged in serious quantitative analysis of the images, mapping images to storms and disruptions of the belts and zones that have been tracked from the earth. They are developing a phase function for Jupiter that allows the images to be flattened from the subsolar point to the terminator, and studying high hazes. 
Citizen scientists are also developing time-lapse movies, measuring wind flow, tracking circulation patterns in the circumpolar cyclones, and looking for lightning flashes. This effort has engaged the public, with a range of personal interests and considerable artistic and analytic talents. In return, we count our diverse public as partners in this endeavor.
NASA Astrophysics Data System (ADS)
Ou-Yang, Mang; Jeng, Wei-De; Wu, Yin-Yi; Dung, Lan-Rong; Wu, Hsien-Ming; Weng, Ping-Kuo; Huang, Ker-Jer; Chiu, Luan-Jiau
2012-05-01
This study investigates image processing using the radial imaging capsule endoscope (RICE) system. First, an experimental environment is established in which a simulated object with an approximately cylindrical shape is used, such that a triaxial platform can push the RICE into the sample and capture radial images. Four algorithms (mean absolute error, mean square error, Pearson correlation coefficient, and deformation processing) are then used to stitch the images together. The Pearson correlation coefficient method is the most effective algorithm because it yields the highest peak signal-to-noise ratio, higher than 80.69, compared to the original image. Furthermore, a living-animal experiment is carried out. Finally, the Pearson correlation coefficient method and vector deformation processing are used to stitch the images captured in the living-animal experiment. This method is very attractive because, unlike other methods in which two lenses are required to reconstruct the geometrical image, RICE uses only one lens and one mirror.
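A Pearson-correlation stitch, in essence, slides candidate overlaps and keeps the one with the highest correlation between the adjoining strips. A minimal sketch (illustrative Python, not the authors' implementation):

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two equally-shaped patches."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def best_overlap(left, right, max_overlap):
    """Choose the overlap width (in columns) that maximises the Pearson
    correlation between the right edge of `left` and the left edge of
    `right`; the two images are then blended over that strip."""
    scores = {w: pearson(left[:, -w:], right[:, :w])
              for w in range(4, max_overlap + 1)}
    return max(scores, key=scores.get)
```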
Three-dimensional particle tracking via tunable color-encoded multiplexing.
Duocastella, Martí; Theriault, Christian; Arnold, Craig B
2016-03-01
We present a novel 3D tracking approach capable of locating single particles with nanometric precision over wide axial ranges. Our method uses a fast acousto-optic liquid lens implemented in a bright field microscope to multiplex light based on color into different and selectable focal planes. By separating the red, green, and blue channels from an image captured with a color camera, information from up to three focal planes can be retrieved. Multiplane information from the particle diffraction rings enables precisely locating and tracking individual objects up to an axial range about 5 times larger than conventional single-plane approaches. We apply our method to the 3D visualization of the well-known coffee-stain phenomenon in evaporating water droplets.
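Retrieving the three focal planes amounts to splitting the colour channels of each frame; the sketch below pairs that split with a simple intensity-centroid localizer (illustrative Python; the authors localize from particle diffraction rings, so centroiding here is only a stand-in):

```python
import numpy as np

def localize_per_channel(rgb):
    """Split a colour-multiplexed frame into its red, green and blue
    focal-plane channels and localize the particle in each channel by
    intensity centroid, returning one (y, x) estimate per focal plane."""
    ys, xs = np.indices(rgb.shape[:2])
    out = []
    for c in range(3):
        ch = rgb[..., c].astype(float)
        w = ch.sum()
        out.append((float((ys * ch).sum() / w), float((xs * ch).sum() / w)))
    return out
```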
Improved wheal detection from skin prick test images
NASA Astrophysics Data System (ADS)
Bulan, Orhan
2014-03-01
Skin prick test is a commonly used method for diagnosis of allergic diseases (e.g., pollen allergy, food allergy, etc.) in allergy clinics. The results of this test are erythema and wheals provoked on the skin where the test is applied. The sensitivity of the patient to a specific allergen is determined by the physical size of the wheal, which can be estimated from images captured by digital cameras. Accurate wheal detection from these images is an important step for precise estimation of wheal size. In this paper, we propose a method for improved wheal detection on prick test images captured by digital cameras. Our method operates by first localizing the test region by detecting calibration marks drawn on the skin. The luminance variation across the localized region is eliminated by applying a color transformation from RGB to YCbCr and discarding the luminance channel. We enhance the contrast of the captured images for the purpose of wheal detection by performing principal component analysis on the blue-difference (Cb) and red-difference (Cr) color channels. We finally perform morphological operations on the contrast-enhanced image to detect the wheal in the image plane. Our experiments performed on images acquired from 36 different patients show the efficiency of the proposed method for wheal detection from skin prick test images captured in an uncontrolled environment.
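The colour pipeline described (RGB to YCbCr, drop luminance, PCA on Cb/Cr) can be sketched as follows (illustrative Python using the ITU-R BT.601 transform; the paper does not specify its exact conversion coefficients):

```python
import numpy as np

def chroma_pca_image(rgb):
    """Discard luminance and project the chroma channels onto their first
    principal component, giving a single contrast-enhanced map on which
    the wheal can be segmented by thresholding/morphology."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # BT.601 chroma channels (luminance Y is intentionally not computed):
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    x = np.stack([cb.ravel(), cr.ravel()], axis=1)
    x -= x.mean(axis=0)
    # First principal component via eigen-decomposition of the 2x2 covariance:
    cov = x.T @ x / len(x)
    vals, vecs = np.linalg.eigh(cov)
    pc1 = vecs[:, np.argmax(vals)]
    return (x @ pc1).reshape(rgb.shape[:2])
```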
Estimation of signal-dependent noise level function in transform domain via a sparse recovery model.
Yang, Jingyu; Gan, Ziqiao; Wu, Zhaoyang; Hou, Chunping
2015-05-01
This paper proposes a novel algorithm to estimate the noise level function (NLF) of signal-dependent noise (SDN) from a single image based on the sparse representation of NLFs. Noise level samples are estimated from the high-frequency discrete cosine transform (DCT) coefficients of nonlocal-grouped low-variation image patches. Then, an NLF recovery model based on the sparse representation of NLFs under a trained basis is constructed to recover NLF from the incomplete noise level samples. Confidence levels of the NLF samples are incorporated into the proposed model to promote reliable samples and weaken unreliable ones. We investigate the behavior of the estimation performance with respect to the block size, sampling rate, and confidence weighting. Simulation results on synthetic noisy images show that our method outperforms existing state-of-the-art schemes. The proposed method is evaluated on real noisy images captured by three types of commodity imaging devices, and shows consistently excellent SDN estimation performance. The estimated NLFs are incorporated into two well-known denoising schemes, nonlocal means and BM3D, and show significant improvements in denoising SDN-polluted images.
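The first step, sampling noise levels from the high-frequency DCT coefficients of low-variation patches, can be sketched as follows (illustrative Python; the choice of high-frequency band is an assumption, not the paper's exact threshold):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def noise_sigma_from_patch(patch):
    """Estimate the noise standard deviation of a low-variation patch from
    the RMS of its high-frequency 2-D DCT coefficients, where image content
    is weak and the noise dominates (the orthonormal DCT preserves the
    variance of white noise)."""
    d = dct_matrix(patch.shape[0])
    coeffs = d @ patch @ d.T
    i, j = np.indices(coeffs.shape)
    hf = coeffs[i + j > (coeffs.shape[0] + coeffs.shape[1]) // 2]
    return float(np.sqrt(np.mean(hf ** 2)))
```

Plotting these per-patch estimates against the patch means yields the incomplete noise-level samples that the sparse NLF recovery model then completes.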
Improved integral images compression based on multi-view extraction
NASA Astrophysics Data System (ADS)
Dricot, Antoine; Jung, Joel; Cagnazzo, Marco; Pesquet, Béatrice; Dufaux, Frédéric
2016-09-01
Integral imaging is a technology based on plenoptic photography that captures and samples the light-field of a scene through a micro-lens array. It provides views of the scene from several angles and therefore is foreseen as a key technology for future immersive video applications. However, integral images have a large resolution and a structure based on micro-images which is challenging to encode. A compression scheme for integral images based on view extraction has previously been proposed, with average BD-rate gains of 15.7% (up to 31.3%) reported over HEVC when using one single extracted view. As the efficiency of the scheme depends on a tradeoff between the bitrate required to encode the view and the quality of the image reconstructed from the view, it is proposed to increase the number of extracted views. Several configurations are tested with different positions and different number of extracted views. Compression efficiency is increased with average BD-rate gains of 22.2% (up to 31.1%) reported over the HEVC anchor, with a realistic runtime increase.
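BD-rate figures like those quoted above come from Bjøntegaard's metric: fit log-bitrate as a cubic in PSNR for both codecs and integrate the gap over the common quality range. A minimal sketch (illustrative Python, not the exact tool used by the authors):

```python
import numpy as np

def bd_rate(rates_anchor, psnr_anchor, rates_test, psnr_test):
    """Bjontegaard delta-rate: average bitrate difference (%) between two
    rate-distortion curves; negative values mean the test codec saves
    bitrate. Cubic fit of log-rate vs PSNR, integrated over the common
    PSNR interval."""
    la, lt = np.log(rates_anchor), np.log(rates_test)
    pa = np.polyfit(psnr_anchor, la, 3)
    pt = np.polyfit(psnr_test, lt, 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    ia = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
    it = np.polyval(np.polyint(pt), hi) - np.polyval(np.polyint(pt), lo)
    avg_diff = (it - ia) / (hi - lo)
    return float((np.exp(avg_diff) - 1.0) * 100.0)
```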
A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer.
Pagoulatos, N; Haynor, D R; Kim, Y
2001-09-01
We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.
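The geometric heart of N-fiducial calibration is a similar-triangles ratio: the scan plane cuts the two parallel wires and the diagonal of each "N" at three collinear image points, and the spacing of those points locates the middle intersection along the diagonal in phantom coordinates. A minimal sketch (illustrative Python; the coordinates are made up):

```python
import numpy as np

def nfiducial_phantom_point(pa, pm, pb, diag_start, diag_end):
    """The scan plane cuts an 'N' at three collinear image points: pa and pb
    on the parallel wires, pm on the diagonal. The ratio |pm-pa| / |pb-pa|
    is preserved in 3-D, so it locates the middle intersection along the
    diagonal wire, whose phantom-frame endpoints are known by construction."""
    pa, pm, pb = np.asarray(pa), np.asarray(pm), np.asarray(pb)
    r = np.linalg.norm(pm - pa) / np.linalg.norm(pb - pa)
    return np.asarray(diag_start) + r * (np.asarray(diag_end) - np.asarray(diag_start))
```

Each "N" thus yields one 3-D phantom point paired with a 2-D image point; with many such pairs from a single frame, the image-to-sensor transform follows from a standard least-squares point registration.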
Results From the New NIF Gated LEH imager
NASA Astrophysics Data System (ADS)
Chen, Hui; Amendt, P.; Barrios, M.; Bradley, D.; Casey, D.; Hinkel, D.; Berzak Hopkins, L.; Kilkenny, J.; Kritcher, A.; Landen, O.; Jones, O.; Ma, T.; Milovich, J.; Michel, P.; Moody, J.; Ralph, J.; Pak, A.; Palmer, N.; Schneider, M.
2016-10-01
A novel ns-gated Laser Entrance Hole (G-LEH) diagnostic has been successfully implemented at the National Ignition Facility (NIF). This diagnostic has successfully acquired images from various experimental campaigns, providing critical information for inertial confinement fusion experiments. The G-LEH diagnostic, which takes time-resolved gated images along a single line of sight, incorporates a high-speed multi-frame CMOS x-ray imager developed by Sandia National Laboratories into the existing Static X-ray Imager diagnostic at NIF. It is capable of capturing two laser-entrance-hole images per shot on its 1024×448-pixel photo-detector array, with integration times as short as 2 ns per frame. The results to be presented include the size of the laser entrance hole vs. time, the growth of the laser-heated gold plasma bubble, the change in brightness of inner beam spots due to time-varying cross-beam energy transfer, and plasma instability growth near the hohlraum wall. This work was performed under the auspices of the U.S. Department of Energy by LLNS, LLC, under Contract No. DE-AC52-07NA27344.
Phase object retrieval through scattering medium
NASA Astrophysics Data System (ADS)
Zhao, Ming; Zhao, Meijing; Wu, Houde; Xu, Wenhai
2018-05-01
Optical imaging through a scattering medium has been an interesting and important research topic, especially in the field of biomedical imaging. However, it remains a challenging task due to strong scattering. This paper proposes to recover a phase object behind the scattering medium from a single-shot speckle intensity image using calibrated transmission matrices (TMs). We construct the forward model as a non-linear mapping, since the intensity image loses the phase information, and then employ a generalized phase retrieval algorithm to recover the hidden object. Moreover, we show that a phase object can be reconstructed from a small portion of the speckle image captured by the camera. Simulations are performed to demonstrate the scheme and test its performance. Finally, in a real experiment, we measure the TMs of the scattering medium and then use them to reconstruct the hidden object. We show that a phase object of size 32 × 32 is retrieved from 150 × 150 speckle grains, which is only 1/50 of the speckle area. We believe the proposed method can benefit the community working on imaging through scattering media.
Precision platform for convex lens-induced confinement microscopy
NASA Astrophysics Data System (ADS)
Berard, Daniel; McFaul, Christopher M. J.; Leith, Jason S.; Arsenault, Adriel K. J.; Michaud, François; Leslie, Sabrina R.
2013-10-01
We present the conception, fabrication, and demonstration of a versatile, computer-controlled microscopy device which transforms a standard inverted fluorescence microscope into a precision single-molecule imaging station. The device uses the principle of convex lens-induced confinement [S. R. Leslie, A. P. Fields, and A. E. Cohen, Anal. Chem. 82, 6224 (2010)], which employs a tunable imaging chamber to enhance background rejection and extend diffusion-limited observation periods. Using nanopositioning stages, this device achieves repeatable and dynamic control over the geometry of the sample chamber on scales as small as the size of individual molecules, enabling regulation of their configurations and dynamics. Using microfluidics, this device enables serial insertion as well as sample recovery, facilitating temporally controlled, high-throughput measurements of multiple reagents. We report on the simulation and experimental characterization of this tunable chamber geometry, and its influence upon the diffusion and conformations of DNA molecules over extended observation periods. This new microscopy platform has the potential to capture, probe, and influence the configurations of single molecules, with dramatically improved imaging conditions in comparison to existing technologies. These capabilities are of immediate interest to a wide range of research and industry sectors in biotechnology, biophysics, materials, and chemistry.
Biomimetic machine vision system.
Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael
2005-01-01
Real-time application of digital imaging in machine vision systems has proven prohibitive within control systems that employ low-power single processors, without compromising the scope of vision or the resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based on the biological vision system of the common house fly. A single sensor, representing a single facet of the fly's eye, has been developed and incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, so that minimal data processing is needed to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied to solving real-world machine vision problems.
Jaramillo, Carlos; Valenti, Roberto G.; Guo, Ling; Xiao, Jizhong
2016-01-01
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor’s projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) obtained from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to suit other catadioptric-based omnistereo vision systems under different circumstances. PMID:26861351
Systems and Methods for Imaging of Falling Objects
NASA Technical Reports Server (NTRS)
Fallgatter, Cale (Inventor); Garrett, Tim (Inventor)
2014-01-01
Imaging of falling objects is described. Multiple images of a falling object can be captured substantially simultaneously using multiple cameras located at multiple angles around the falling object. An epipolar geometry of the captured images can be determined. The images can be rectified to parallelize epipolar lines of the epipolar geometry. Correspondence points between the images can be identified. At least a portion of the falling object can be digitally reconstructed using the identified correspondence points to create a digital reconstruction.
Design and implementation of a contactless multiple hand feature acquisition system
NASA Astrophysics Data System (ADS)
Zhao, Qiushi; Bu, Wei; Wu, Xiangqian; Zhang, David
2012-06-01
In this work, an integrated contactless multiple hand feature acquisition system is designed. The system can capture palmprint, palm vein, and palm dorsal vein images simultaneously. Moreover, the images are captured in a contactless manner; that is, users need not touch any part of the device during capture. Palmprint is imaged under visible illumination, while palm vein and palm dorsal vein are imaged under near-infrared (NIR) illumination. Capture is computer-controlled, and the whole process takes less than 1 second, which is sufficient for online biometric systems. Based on this device, this paper also implements a contactless hand-based multimodal biometric system. Palmprint, palm vein, palm dorsal vein, finger vein, and hand geometry features are extracted from the captured images. After similarity measurement, the matching scores are fused using a weighted-sum fusion rule. Experimental results show that although the verification accuracy of each single modality is not as high as the state of the art, the fused result is superior to most existing hand-based biometric systems. This result indicates that the proposed device is well suited to contactless multimodal hand-based biometrics.
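The weighted-sum score fusion step described above can be sketched as follows. The min-max normalization and the weight values are assumptions for illustration; the abstract does not specify the normalization scheme or the actual weights used:

```python
def minmax_normalize(scores):
    """Map a list of raw match scores to [0, 1] (assumed pre-fusion step)."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_weighted_sum(modality_scores, weights):
    """Weighted-sum fusion of per-modality match scores (e.g. palmprint,
    palm vein, palm dorsal vein, finger vein, hand geometry).
    Weights are hypothetical placeholders and assumed to sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * s for w, s in zip(weights, modality_scores))
```

A fused score can then be thresholded for verification, with per-modality weights tuned on a development set.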
NASA Technical Reports Server (NTRS)
2007-01-01
For several weeks in May and early June, daily satellite images of the North Atlantic Ocean west of Ireland have captured partial glimpses of luxuriant blooms of microscopic marine plants between patches of clouds. On June 4, 2007, the skies over the ocean cleared, displaying the sea's spring bloom in brilliant color. A bright blue bloom stretches north from the Mouth of the River Shannon and tapers off like a plume of blue smoke north of Clare Island. (In the large image, a second bloom is visible to the north, wrapping around County Donegal, on the island's northwestern tip.) The image was captured by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite. Cold, nutrient-stocked water often wells up to the surface from the deeper ocean along coastal shelves and at the edges of ocean currents. When it does, it delivers a boost of nutrients that fuel large blooms of single-celled plants collectively known as phytoplankton. The plants are the foundation of the marine food web, and their proliferation in this area of the North Atlantic explains why the waters of western Ireland support myriad fisheries and populations of large mammals like seals, whales, and dolphins. Like plants on land, phytoplankton make their food through photosynthesis, harnessing sunlight for energy using chlorophyll and other light-capturing pigments. The pigments change the way light reflects off the surface water, appearing as colorful swirls of turquoise and green against the darker blue of the ocean. Though individually tiny, collectively these plants play a big role in Earth's carbon and climate cycles; worldwide, they remove about as much carbon dioxide from the atmosphere during photosynthesis as land plants do. Satellites are the only way to map the occurrence of phytoplankton blooms across the global oceans on a regular basis. 
That kind of information is important not only to scientists who model carbon and climate, but also to biologists and fisheries managers who monitor the health of marine natural resources like coral reefs and fish populations.
Regulation of cell arrangement using a novel composite micropattern.
Liu, Xiaoyi; Liu, Yaoping; Zhao, Feng; Hun, Tingting; Li, Shan; Wang, Yuguang; Sun, Weijie; Wang, Wei; Sun, Yan; Fan, Yubo
2017-11-01
Micropatterning techniques have been used to control single-cell geometry in many studies; however, there has been no report of their use to control multicellular geometry, i.e., not only controlling single-cell geometry but also organizing those cells in a defined pattern. In this work, a composite protein micropattern is developed to control both cell shape and cell location simultaneously. The composite micropattern consists of a central circle 15 μm in diameter for single-cell capture, surrounded by small square arrays (3 μm × 3 μm) for cell spreading, which are in turn surrounded by a border 2 μm wide for restricting cell edges. The composite pattern yields two-cell and three-cell capture efficiencies of 32.1% ± 1.94% and 24.2% ± 2.89%, respectively, representing increases of 8.52% and 9.58% over the original patterns. Fluorescent imaging of cytoskeleton alignment demonstrates that actin gradually aligns parallel to the direction of the entire pattern arrangement, rather than to that of a single pattern. This indicates that cell arrangement is also an important factor in determining cell physiology. This composite micropattern could be a potential method to precisely control multiple cells for studies of cell junctions, cell interactions, and cell signal transduction, and eventually for tissue-rebuilding studies. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 105A: 3093-3101, 2017.
Georges, Marc P; Vandenrijt, Jean-François; Thizy, Cédric; Alexeenko, Igor; Pedrini, Giancarlo; Vollheim, Birgit; Lopez, Ion; Jorge, Iagoba; Rochet, Jonathan; Osten, Wolfgang
2014-10-20
Holographic interferometry at thermal wavelengths, combining a CO2 laser and digital hologram recording with a microbolometer-array-based camera, allows simultaneous capture of temperature and surface-shape information about objects. This is possible because the holograms are affected by the thermal background emitted by objects at room temperature. We explain the setup and the data processing that allow the two types of information to be decoupled. This natural data fusion can be used to advantage in a variety of nondestructive testing applications.
Image charge effects on electron capture by dust grains in dusty plasmas.
Jung, Y D; Tawara, H
2001-07-01
Electron-capture processes by negatively charged dust grains from hydrogenic ions in dusty plasmas are investigated in accordance with the classical Bohr-Lindhard model. The attractive interaction between the electron in a hydrogenic ion and its own image charge inside the dust grain is included to obtain the total interaction energy between the electron and the dust grain. The electron-capture radius is determined by the total interaction energy and the kinetic energy of the released electron in the frame of the projectile dust grain. The classical straight-line trajectory approximation is applied to the motion of the ion in order to visualize the electron-capture cross section as a function of the impact parameter, kinetic energy of the projectile ion, and dust charge. It is found that the image charge inside the dust grain plays a significant role in the electron-capture process near the surface of the dust grain. The electron-capture cross section is found to be quite sensitive to the collision energy and dust charge.
FRAP Analysis: Accounting for Bleaching during Image Capture
Wu, Jun; Shekhar, Nandini; Lele, Pushkar P.; Lele, Tanmay P.
2012-01-01
The analysis of Fluorescence Recovery After Photobleaching (FRAP) experiments involves mathematical modeling of the fluorescence recovery process. An important feature of FRAP experiments that tends to be ignored in the modeling is that there can be a significant loss of fluorescence due to bleaching during image capture. In this paper, we explicitly include the effects of bleaching during image capture in the model for the recovery process, instead of correcting for the effects of bleaching using reference measurements. Using experimental examples, we demonstrate the usefulness of such an approach in FRAP analysis. PMID:22912750
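A minimal sketch of the kind of model the abstract describes, where recovery kinetics are attenuated by bleaching incurred at each image capture. Both the single-exponential recovery form and the constant per-image bleaching fraction are illustrative assumptions, not the paper's actual model, and all parameter names are hypothetical:

```python
import math

def frap_model(t_points, f0, f_inf, k_rec, k_bleach):
    """Predicted FRAP intensities when each image capture itself bleaches
    a constant fraction k_bleach of the remaining fluorescence.
    t_points: acquisition times; f0: post-bleach intensity;
    f_inf: recovery plateau; k_rec: recovery rate constant."""
    intensities = []
    for n, t in enumerate(t_points):
        # Single-exponential recovery toward the plateau ...
        recovery = f0 + (f_inf - f0) * (1.0 - math.exp(-k_rec * t))
        # ... attenuated by the number of captures taken so far.
        intensities.append(recovery * (1.0 - k_bleach) ** n)
    return intensities
```

In practice such a model would be fit to measured recovery curves, so that k_bleach is estimated jointly with the recovery parameters instead of being corrected for via reference measurements.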
Free viewpoint TV and its international standardization
NASA Astrophysics Data System (ADS)
Tanimoto, Masayuki
2009-05-01
We have developed a new type of television named FTV (Free-viewpoint TV). FTV is an innovative visual medium that enables us to view a 3D scene by freely changing our viewpoint. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. We also realized FTV on a single PC and FTV with free listening-point audio. FTV is based on the ray-space method, which represents one ray in real space with one point in the ray-space. We have also developed new types of ray capture and display technologies, such as a 360-degree mirror-scan ray-capturing system and a 360-degree ray-reproducing display. MPEG regarded FTV as the most challenging 3D medium and started international standardization activities for FTV. The first phase of FTV is MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in March 2009. 3DV is a standard targeting a variety of 3D displays; it is expected to be completed within the next two years.
NASA Astrophysics Data System (ADS)
Shecter, Liat; Oiknine, Yaniv; August, Isaac; Stern, Adrian
2017-09-01
Recently we presented a Compressive Sensing Miniature Ultra-spectral Imaging System (CS-MUSI). This system consists of a single liquid crystal (LC) phase retarder as a spectral modulator and a grayscale sensor array that captures a multiplexed signal of the imaged scene. By designing the LC spectral modulator in compliance with compressive sensing (CS) guidelines and applying appropriate algorithms, we demonstrated reconstruction of spectral (hyper/ultra) datacubes from an order of magnitude fewer samples than taken by conventional sensors. The LC modulator is designed to have an effective width of a few tens of micrometers and is therefore prone to imperfections and spatial nonuniformity. In this work, we study this nonuniformity and present a mathematical algorithm that infers the spectral transmission over the entire cell area from only a few calibration measurements.
Scholkmann, Felix; Holper, Lisa; Wolf, Ursula; Wolf, Martin
2013-11-27
Since the first demonstration of how to simultaneously measure brain activity using functional magnetic resonance imaging (fMRI) on two subjects about 10 years ago, a new paradigm in neuroscience is emerging: measuring brain activity from two or more people simultaneously, termed "hyperscanning". The hyperscanning approach has the potential to reveal inter-personal brain mechanisms underlying interaction-mediated brain-to-brain coupling. These mechanisms are engaged during real social interactions, and cannot be captured using single-subject recordings. In particular, functional near-infrared imaging (fNIRI) hyperscanning is a promising new method, offering a cost-effective, easy to apply and reliable technology to measure inter-personal interactions in a natural context. In this short review we report on fNIRI hyperscanning studies published so far and summarize opportunities and challenges for future studies.
Light field rendering with omni-directional camera
NASA Astrophysics Data System (ADS)
Todoroki, Hiroshi; Saito, Hideo
2003-06-01
This paper presents an approach to capturing the visual appearance of a real environment such as the interior of a room. We propose a method for generating arbitrary-viewpoint images by building a light field with an omni-directional camera, which can capture wide surroundings. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror mounted above it, so that luminosity over 360 degrees of the surroundings can be captured in one image. We apply the light field method, a technique of image-based rendering (IBR), to generate the arbitrary-viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera to construct the light field, so that many view-direction images can be collected. Our method thus allows the user to explore a wide scene, achieving a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our lab's interior with an omni-directional camera and successfully generated arbitrary-viewpoint images for a virtual tour of the environment.
High Density Aerial Image Matching: State-Of and Future Prospects
NASA Astrophysics Data System (ADS)
Haala, N.; Cavegn, S.
2016-06-01
Ongoing innovations in matching algorithms are continuously improving the quality of geometric surface representations generated automatically from aerial images. This development motivated the launch of the joint ISPRS/EuroSDR project "Benchmark on High Density Aerial Image Matching", which aims at evaluating photogrammetric 3D data capture in view of current developments in dense multi-view stereo image matching. Originally, the test aimed at image-based DSM computation from conventional aerial image flights for different land-use and image-block configurations. The second phase then put an additional focus on high-quality, high-resolution 3D geometric data capture in complex urban areas. This includes both the extension of the test scenario to oblique aerial image flights and the generation of filtered point clouds as an additional output of the respective multi-view reconstruction. The paper uses the preliminary outcomes of the benchmark to demonstrate the state of the art in airborne image matching, with a special focus on high-quality geometric data capture in urban scenarios.
Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments
NASA Astrophysics Data System (ADS)
Munsky, Brian; Shepherd, Douglas
2014-03-01
Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.
NASA CloudSat Captures Hurricane Daniel Transformation
2006-07-25
Hurricane Daniel intensified between July 18 and July 23. NASA's new CloudSat satellite was able to capture and confirm this transformation in its side-view images of Hurricane Daniel, as seen in this series of images.
3D reconstruction based on light field images
NASA Astrophysics Data System (ADS)
Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei
2018-04-01
This paper proposes a method for reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. The work first extracts the sub-aperture images from the light field images and uses the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. The structure-from-motion (SFM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a sparse 3D point cloud. The method shows that 3D reconstruction can be implemented with only two light field camera captures, rather than the dozen or more captures required by traditional cameras. This avoids the time-consuming, laborious process of 3D reconstruction with traditional digital cameras, achieving a more rapid, convenient, and accurate reconstruction.
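The triangulation at the heart of such a reconstruction can be illustrated with a toy rectified two-view case, where depth follows Z = f·B/disparity. This is a simplified stand-in for the SIFT-plus-SFM pipeline the paper describes, and the focal length and baseline values are hypothetical:

```python
def triangulate_rectified(f_px, baseline, matches):
    """Recover 3D points from feature matches between two rectified views
    (e.g. two sub-aperture images): Z = f*B/disparity, then back-project.
    matches: [((xl, yl), (xr, yr)), ...] pixel coordinates with the
    principal point at the origin. f_px: focal length in pixels;
    baseline: camera separation in scene units."""
    points = []
    for (xl, yl), (xr, yr) in matches:
        d = xl - xr                      # horizontal disparity
        if d <= 0:
            continue                     # degenerate or bad match: skip
        z = f_px * baseline / d
        points.append((xl * z / f_px, yl * z / f_px, z))
    return points
```

A full SFM pipeline additionally estimates the relative camera pose from the matches before triangulating, and refines everything with bundle adjustment.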
Chaudhery, Vikram; Huang, Cheng-Sheng; Pokhriyal, Anusha; Polans, James; Cunningham, Brian T.
2011-01-01
By combining photonic crystal label-free biosensor imaging with photonic crystal enhanced fluorescence, it is possible to selectively enhance the fluorescence emission from regions of the PC surface based upon the density of immobilized capture molecules. A label-free image of the capture molecules enables determination of optimal coupling conditions of the laser used for fluorescence imaging of the photonic crystal surface on a pixel-by-pixel basis, allowing maximization of fluorescence enhancement factor from regions incorporating a biomolecule capture spot and minimization of background autofluorescence from areas between capture spots. This capability significantly improves the contrast of enhanced fluorescent images, and when applied to an antibody protein microarray, provides a substantial advantage over conventional fluorescence microscopy. Using the new approach, we demonstrate detection limits as low as 0.97 pg/ml for a representative protein biomarker in buffer. PMID:22109210
Dielectrophoretic Capture and Genetic Analysis of Single Neuroblastoma Tumor Cells
Carpenter, Erica L.; Rader, JulieAnn; Ruden, Jacob; Rappaport, Eric F.; Hunter, Kristen N.; Hallberg, Paul L.; Krytska, Kate; O’Dwyer, Peter J.; Mosse, Yael P.
2014-01-01
Our understanding of the diversity of cells that escape the primary tumor and seed micrometastases remains rudimentary, and approaches for studying circulating and disseminated tumor cells have been limited by low throughput and sensitivity, reliance on single parameter sorting, and a focus on enumeration rather than phenotypic and genetic characterization. Here, we utilize a highly sensitive microfluidic and dielectrophoretic approach for the isolation and genetic analysis of individual tumor cells. We employed fluorescence labeling to isolate 208 single cells from spiking experiments conducted with 11 cell lines, including 8 neuroblastoma cell lines, and achieved a capture sensitivity of 1 tumor cell per 10^6 white blood cells (WBCs). Sample fixation or freezing had no detectable effect on cell capture. Point mutations were accurately detected in the whole genome amplification product of captured single tumor cells but not in negative control WBCs. We applied this approach to capture 144 single tumor cells from 10 bone marrow samples of patients suffering from neuroblastoma. In this pediatric malignancy, high-risk patients often exhibit widespread hematogenous metastasis, but access to primary tumor can be difficult or impossible. Here, we used flow-based sorting to pre-enrich samples with tumor involvement below 0.02%. For all patients for whom a mutation in the Anaplastic Lymphoma Kinase gene had already been detected in their primary tumor, the same mutation was detected in single cells from their marrow. These findings demonstrate a novel, non-invasive, and adaptable method for the capture and genetic analysis of single tumor cells from cancer patients. PMID:25133137
High dynamic range bio-molecular ion microscopy with the Timepix detector.
Jungmann, Julia H; MacAleese, Luke; Visser, Jan; Vrakking, Marc J J; Heeren, Ron M A
2011-10-15
Highly parallel, active pixel detectors enable novel detection capabilities for large biomolecules in time-of-flight (TOF) based mass spectrometry imaging (MSI). In this work, a 512 × 512 pixel, bare Timepix assembly combined with chevron microchannel plates (MCP) captures time-resolved images of several m/z species in a single measurement. Mass-resolved ion images from Timepix measurements of peptide and protein standards demonstrate the capability to return both mass-spectral and localization information of biologically relevant analytes from matrix-assisted laser desorption ionization (MALDI) on a commercial ion microscope. The use of a MCP-Timepix assembly delivers an increased dynamic range of several orders of magnitude. The Timepix returns defined mass spectra already at subsaturation MCP gains, which prolongs the MCP lifetime and allows the gain to be optimized for image quality. The Timepix peak resolution is only limited by the resolution of the in-pixel measurement clock. Oligomers of the protein ubiquitin were measured up to 78 kDa. © 2011 American Chemical Society
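The m/z assignment in TOF-based mass spectrometry imaging follows from the flight-time relation m/z ∝ t², since m/z = 2eU(t/L)² for ions accelerated through potential U over flight length L. A minimal converter is sketched below; the voltage and flight-length inputs are assumed example parameters, and a real instrument's calibration is considerably more involved:

```python
# Physical constants (CODATA values).
ELEMENTARY_CHARGE = 1.602176634e-19   # C
ATOMIC_MASS_UNIT = 1.66053906660e-27  # kg

def mz_from_tof(t_s, accel_voltage, flight_len):
    """Convert a measured flight time (s) to m/z in Da per unit charge,
    assuming a simple linear TOF geometry: m/z = 2*e*U*(t/L)^2.
    accel_voltage in volts, flight_len in meters (illustrative inputs)."""
    mass_per_charge = 2.0 * ELEMENTARY_CHARGE * accel_voltage * (t_s / flight_len) ** 2
    return mass_per_charge / ATOMIC_MASS_UNIT
```

Because arrival time maps monotonically to m/z, the in-pixel clock measurement in the Timepix directly yields a mass spectrum per pixel, with peak resolution limited by the clock resolution as the abstract notes.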
Pan, Han; Jing, Zhongliang; Qiao, Lingfeng; Li, Minzhe
2017-09-25
Image restoration is a difficult and challenging problem in various imaging applications. Despite the benefits of a single overcomplete dictionary, several challenges remain in capturing the geometric structure of the image of interest. To more accurately represent the local structures of the underlying signals, we propose a new problem formulation for sparse representation with a block-orthogonal constraint. There are three contributions. First, a framework for discriminative structured dictionary learning is proposed, which leads to a smooth manifold structure and quotient search spaces. Second, an alternating minimization scheme is proposed that takes both the cost function and the constraints into account. This is achieved by iteratively alternating between updating the block structure of the dictionary, defined on the Grassmann manifold, and automatically sparsifying the dictionary atoms. Third, Riemannian conjugate gradient is used to track local subspaces efficiently with a convergence guarantee. Extensive experiments on various datasets demonstrate that the proposed method outperforms state-of-the-art methods on the removal of mixed Gaussian-impulse noise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, J; Yoon, D; Suh, T
2014-06-01
Purpose: The aim of the proposed system is to confirm the feasibility of extracting two types of images from one positron emission tomography (PET) module with an insertable collimator for brain tumor treatment during BNCT. Methods: Data for the PET module, neutron source, and collimator were entered into the Monte Carlo n-particle extended (MCNPX) source code. The coincidence events were first compiled on the PET detector, and then the prompt gamma-ray events were collected after neutron emission by using a single photon emission computed tomography (SPECT) collimator on the PET. Full width at half maximum (FWHM) values were obtained from the energy spectrum to collect effective events for the reconstructed image. To permit straightforward evaluation of the images, five boron regions in a brain phantom were used. The image profiles were extracted from the region of interest (ROI) of the phantom. The image was reconstructed using the ordered subsets expectation maximization (OSEM) reconstruction algorithm. The image profiles and the receiver operating characteristic (ROC) curve were compiled for quantitative analysis of the two kinds of reconstructed images. Results: The prompt gamma-ray energy peak at 478 keV appeared in the energy spectrum with a FWHM of 41 keV (6.4%). On the basis of the ROC curves for Regions A through E, the differences in the area under the curve (AUC) between the PET and SPECT images were found to be 10.2%, 11.7%, 8.2% (center, Region C), 12.6%, and 10.5%, respectively. Conclusion: We attempted to acquire the PET and SPECT images simultaneously using only PET, without an additional isotope. Single photon images were acquired using an insertable collimator on a PET detector.
This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, Information and Communication Technologies (ICT) and Future Planning (MSIP) (Grant No. 2009 00420) and the Radiation Technology R&D program (Grant No. 2013M2A2A7043498), Republic of Korea.
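The ROC-based comparison of the PET and SPECT images above reduces to computing areas under (FPR, TPR) curves. A minimal trapezoidal AUC sketch follows; it is a generic implementation of the standard calculation, not the authors' analysis code:

```python
def roc_auc(points):
    """Area under an ROC curve by the trapezoidal rule.
    points: (false_positive_rate, true_positive_rate) pairs; the list
    should span from (0, 0) to (1, 1) for a complete curve."""
    pts = sorted(points)  # order by increasing FPR
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area
```

Differences between two modalities' AUCs, as reported per region in the abstract, are then simple subtractions of these areas.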
NASA Astrophysics Data System (ADS)
Hales, Brian; Katabuchi, Tatsuya; Igashira, Masayuki; Terada, Kazushi; Hayashizaki, Noriyosu; Kobayashi, Tooru
2017-12-01
A test version of a prompt-gamma single photon emission computed tomography (PG-SPECT) system for boron neutron capture therapy (BNCT), using a CdZnTe (CZT) semiconductor detector with a secondary BGO anti-Compton suppression detector, has been designed. A phantom with a healthy-tissue region of pure water and two tumor regions of 5 wt% borated polyethylene was irradiated to a fluence of 1.3 × 10⁹ n/cm². The numbers of 478 keV foreground, background, and net counts were measured for each detector position and angle. Using only experimentally measured net counts, an image of the 478 keV production from the ¹⁰B(n,α)⁷Li* reaction was reconstructed. Using Monte Carlo simulation and the experimentally measured background counts, the reliability of the system under clinically accurate parameters was extrapolated. After extrapolation, it was found that the value of the maximum-value pixel in the reconstructed 478 keV γ-ray production image overestimates the simulated production by an average of 9.2%, and that the standard deviation associated with the same value is 11.4%.
Investigation of sparsity metrics for autofocusing in digital holographic microscopy
NASA Astrophysics Data System (ADS)
Fan, Xin; Healy, John J.; Hennelly, Bryan M.
2017-05-01
Digital holographic microscopy (DHM) is an optoelectronic technique that is made up of two parts: (i) the recording of the interference pattern of the diffraction pattern of an object and a known reference wavefield using a digital camera and (ii) the numerical reconstruction of the complex object wavefield using the recorded interferogram and a distance parameter as input. The latter is based on the simulation of optical propagation from the camera plane to a plane at any arbitrary distance from the camera. A key advantage of DHM over conventional microscopy is that both the phase and intensity information of the object can be recovered at any distance, using only one capture, and this facilitates the recording of scenes that may change dynamically and that may otherwise go in and out of focus. Autofocusing using traditional microscopy requires mechanical movement of the translation stage or the microscope objective, and multiple image captures that are then compared using some metric. Autofocusing in DHM is similar, except that the sequence of intensity images, to which the metric is applied, is generated numerically from a single capture. We recently investigated the application of a number of sparsity metrics for DHM autofocusing and in this paper we extend this work to include more such metrics, and apply them over a greater range of biological diatom cells and magnification/numerical apertures. We demonstrate for the first time that these metrics may be grouped together according to matching behavior following high pass filtering.
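The numerical autofocus described above can be sketched with a simple sparsity metric. The example below is an illustrative stand-in for the paper's metrics: it uses a synthetic blur stack rather than true numerical propagation, and scores each slice by the l1/l2 ratio of its gradient magnitude, which is smallest when edges are sparse, i.e. in focus:

```python
import numpy as np

def grad_l1_over_l2(img):
    """Sparsity metric on the gradient: l1/l2 is small when the
    gradient is sparse (few strong edges), i.e. when in focus."""
    gy, gx = np.gradient(img.astype(float))
    g = np.hypot(gx, gy).ravel()
    return g.sum() / np.sqrt((g ** 2).sum())

# Synthetic "reconstruction stack": a sharp square object, with
# increasingly blurred copies standing in for propagation to
# out-of-focus depths.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0

def box_blur(a, k):
    for _ in range(k):                       # crude repeated 5-point blur
        a = (a + np.roll(a, 1, 0) + np.roll(a, -1, 0)
               + np.roll(a, 1, 1) + np.roll(a, -1, 1)) / 5.0
    return a

stack = [box_blur(img.copy(), k) for k in (6, 3, 0, 3, 6)]
scores = [grad_l1_over_l2(s) for s in stack]
best = int(np.argmin(scores))   # sparsest gradient -> in-focus slice
```

In DHM the stack would instead be generated by angular-spectrum propagation of the reconstructed field to a range of distances.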
NASA Astrophysics Data System (ADS)
Wang, Binbin; Socolofsky, Scott A.
2015-10-01
Development, testing, and application of a deep-sea, high-speed, stereoscopic imaging system are presented. The new system is designed for field-ready deployment, focusing on measurement of the characteristics of natural seep bubbles and droplets with high-speed and high-resolution image capture. The stereo view configuration allows precise evaluation of the physical scale of the moving particles in image pairs. Two laboratory validation experiments (a continuous bubble chain and an airstone bubble plume) were carried out to test the calibration procedure, performance of image processing and bubble matching algorithms, three-dimensional viewing, and estimation of bubble size distribution and volumetric flow rate. The results showed that the stereo view was able to improve the individual bubble size measurement over the single-camera view by up to 90% in the two validation cases, with the single-camera being biased toward overestimation of the flow rate. We also present the first application of this imaging system in a study of natural gas seeps in the Gulf of Mexico. The high-speed images reveal the rigidity of the transparent bubble interface, indicating the presence of clathrate hydrate skins on the natural gas bubbles near the source (lowest measurement 1.3 m above the vent). We estimated the dominant bubble size at the seep site Sleeping Dragon in Mississippi Canyon block 118 to be in the range of 2-4 mm and the volumetric flow rate to be 0.2-0.3 L/min during our measurements from 17 to 21 July 2014.
Accelerated x-ray scatter projection imaging using multiple continuously moving pencil beams
NASA Astrophysics Data System (ADS)
Dydula, Christopher; Belev, George; Johns, Paul C.
2017-03-01
Coherent x-ray scatter varies with angle and photon energy in a manner dependent on the chemical composition of the scattering material, even for amorphous materials. Therefore, images generated from scattered photons can have much higher contrast than conventional projection radiographs. We are developing a scatter projection imaging prototype at the BioMedical Imaging and Therapy (BMIT) facility of the Canadian Light Source (CLS) synchrotron in Saskatoon, Canada. The best images are obtained using step-and-shoot scanning with a single pencil beam and area detector to capture sequentially the scatter pattern for each primary beam location on the sample. Primary x-ray transmission is recorded simultaneously using photodiodes. The technological challenge is to acquire the scatter data in a reasonable time. Using multiple pencil beams producing partially-overlapping scatter patterns reduces acquisition time but increases complexity due to the need for a disentangling algorithm to extract the data. Continuous sample motion, rather than step-and-shoot, also reduces acquisition time at the expense of introducing motion blur. With a five-beam (33.2 keV, 3.5 mm² beam area) continuous sample motion configuration, a rectangular array of 12 × 100 pixels with 1 mm sampling width has been acquired in 0.4 minutes (3000 pixels per minute). The acquisition speed is 38 times the speed for single-beam step-and-shoot. A system model has been developed to calculate detected scatter patterns given the material composition of the object to be imaged. Our prototype development, image acquisition of a plastic phantom and modelling are described.
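The quoted throughput figures can be checked with a line of arithmetic:

```python
# Throughput check for the five-beam continuous-motion configuration
# (numbers taken from the abstract above).
pixels = 12 * 100                 # rectangular array of 12 x 100 pixels
acq_minutes = 0.4
rate = pixels / acq_minutes       # pixels acquired per minute
single_beam_rate = rate / 38      # implied single-beam step-and-shoot rate
```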
Empirical single sample quantification of bias and variance in Q-ball imaging.
Hainline, Allison E; Nath, Vishwesh; Parvathaneni, Prasanna; Blaber, Justin A; Schilling, Kurt G; Anderson, Adam W; Kang, Hakmook; Landman, Bennett A
2018-02-06
The bias and variance of high angular resolution diffusion imaging methods have not been thoroughly explored in the literature and may benefit from the simulation extrapolation (SIMEX) and bootstrap techniques to estimate bias and variance of high angular resolution diffusion imaging metrics. The SIMEX approach is well established in the statistics literature and uses simulation of increasingly noisy data to extrapolate back to a hypothetical case with no noise. The bias of calculated metrics can then be computed by subtracting the SIMEX estimate from the original pointwise measurement. The SIMEX technique has been studied in the context of diffusion imaging to accurately capture the bias in fractional anisotropy measurements in DTI. Herein, we extend the application of SIMEX and bootstrap approaches to characterize bias and variance in metrics obtained from a Q-ball imaging reconstruction of high angular resolution diffusion imaging data. The results demonstrate that SIMEX and bootstrap approaches provide consistent estimates of the bias and variance of generalized fractional anisotropy, respectively. The RMSE for the generalized fractional anisotropy estimates shows a 7% decrease in white matter and an 8% decrease in gray matter when compared with the observed generalized fractional anisotropy estimates. On average, the bootstrap technique results in SD estimates that are approximately 97% of the true variation in white matter, and 86% in gray matter. Both SIMEX and bootstrap methods are flexible, estimate population characteristics based on single scans, and may be extended for bias and variance estimation on a variety of high angular resolution diffusion imaging metrics. © 2018 International Society for Magnetic Resonance in Medicine.
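The SIMEX idea described above can be sketched on a toy problem. The block below is illustrative, not the paper's diffusion pipeline: it estimates the noise-induced bias of a mean-square metric by re-measuring with extra noise at levels λ, fitting a curve, and extrapolating back to λ = -1 (the hypothetical no-noise case):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: the "truth" is a constant 2.0; the measurement adds
# Gaussian noise, which biases a mean-square metric upward.
sigma = 0.5
observed = 2.0 + rng.normal(0.0, sigma, 500)

def metric(x):
    return np.mean(x ** 2)   # expectation is 4 + (1 + lam) * sigma^2

# SIMEX: add extra noise at levels lam >= 0, average over resamples,
# then extrapolate the fitted trend back to lam = -1.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = np.array([
    np.mean([metric(observed + rng.normal(0.0, np.sqrt(lam) * sigma, 500))
             for _ in range(200)])
    for lam in lams
])

coef = np.polyfit(lams, est, 2)      # quadratic extrapolant
simex = np.polyval(coef, -1.0)       # SIMEX (bias-corrected) estimate
bias = metric(observed) - simex      # estimated noise-induced bias
```

Here the true bias is σ² = 0.25, so the extrapolated estimate should land near it; the same resampling machinery is what a bootstrap reuses for variance estimation.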
High efficient optical remote sensing images acquisition for nano-satellite-framework
NASA Astrophysics Data System (ADS)
Li, Feng; Xin, Lei; Liu, Yang; Fu, Jie; Liu, Yuhong; Guo, Yi
2017-09-01
It is more difficult and challenging to implement Nano-satellite (NanoSat) based optical Earth observation missions than conventional satellite missions because of the limitations on volume, weight, and power consumption. In general, an image compression unit is a necessary onboard module to save data transmission bandwidth and disk space, since it removes redundant information from the captured images. In this paper, a new image acquisition framework is proposed for NanoSat based optical Earth observation applications. The entire image acquisition and compression process can be integrated into the photodetector array chip, so that the chip's output data are already compressed. An extra image compression unit is therefore no longer needed, and the power, volume, and weight that a conventional onboard compression unit would consume are saved. The advantages of the proposed framework are that image acquisition and image compression are combined into a single step; it can easily be built in a CMOS architecture; a quick view can be provided without reconstruction; and, for a given compression ratio, the reconstructed image quality is much better than that of compressive sensing (CS) based methods. The framework holds promise to be widely used in the future.
Perceptual quality prediction on authentically distorted images using a bag of features approach
Ghadiyaram, Deepti; Bovik, Alan C.
2017-01-01
Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it achieves prediction power better than that of other leading models. PMID:28129417
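The natural-scene-statistics feature maps referred to above are typically built from normalized luminance. A minimal sketch (one common feature map chosen as an illustration, not the authors' exact bag of features) of mean-subtracted contrast-normalized (MSCN) coefficients and two summary statistics:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
img = rng.random((128, 128))   # stand-in for a luminance image in [0, 1]

# MSCN coefficients: subtract a local Gaussian-weighted mean and
# divide by a local contrast estimate (C stabilizes flat regions).
C = 1.0 / 255.0
mu = gaussian_filter(img, sigma=7 / 6)
var = gaussian_filter(img ** 2, sigma=7 / 6) - mu ** 2
sd = np.sqrt(np.maximum(var, 0.0))
mscn = (img - mu) / (sd + C)

# Two of the summary statistics such models feed to a regressor:
# the variance and an (excess-free) kurtosis of the MSCN map.
feat_var = mscn.var()
feat_kurt = np.mean((mscn - mscn.mean()) ** 4) / (feat_var ** 2 + 1e-12)
```

Distortions shift the distribution of MSCN coefficients away from its near-Gaussian shape on pristine images, which is what makes such statistics predictive of perceived quality.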
Multi-scale Pore Imaging Techniques to Characterise Heterogeneity Effects on Flow in Carbonate Rock
NASA Astrophysics Data System (ADS)
Shah, S. M.
2017-12-01
Digital rock analysis and pore-scale studies have become an essential tool in the oil and gas industry to understand and predict petrophysical and multiphase flow properties for the assessment and exploitation of hydrocarbon reserves. Carbonate reservoirs, which account for the majority of the world's hydrocarbon reserves, are well known for their heterogeneity and multiscale pore characteristics. Pore sizes in carbonate rock can vary over orders of magnitude, and the geometry and topology of pores at different scales have a great impact on flow properties. A pore-scale study typically comprises two key procedures: 3D pore-scale imaging and numerical modelling. The fundamental problem in pore-scale imaging and modelling is how to represent and model the range of scales encountered in porous media, from the pore scale up to macroscopic petrophysical and multiphase flow properties. However, due to the trade-off between image size and resolution, the desired detail is rarely captured at all relevant length scales using any single imaging technique. Similarly, direct simulations of transport properties in heterogeneous rocks with broad pore size distributions are prohibitively expensive computationally. In this study, we present the advances and review the practical limitations of different imaging techniques, from core scale (1 mm) using Medical Computed Tomography (CT) to pore scale (10 nm - 50 µm) using Micro-CT, Confocal Laser Scanning Microscopy (CLSM), and Focussed Ion Beam (FIB) imaging, to characterise the complex pore structure of Ketton carbonate rock. The effect of pore structure and connectivity on flow properties is investigated using the obtained pore-scale images of Ketton carbonate with Pore Network and Lattice-Boltzmann simulation methods, in comparison with experimental data.
We also shed new light on the existence and size of the Representative Element of Volume (REV) capturing the different scales of heterogeneity from the pore-scale imaging.
DNA Motion Capture Reveals the Mechanical Properties of DNA at the Mesoscale
Price, Allen C.; Pilkiewicz, Kevin R.; Graham, Thomas G.W.; Song, Dan; Eaves, Joel D.; Loparo, Joseph J.
2015-01-01
Single-molecule studies probing the end-to-end extension of long DNAs have established that the mechanical properties of DNA are well described by a wormlike chain force law, a polymer model where persistence length is the only adjustable parameter. We present a DNA motion-capture technique in which DNA molecules are labeled with fluorescent quantum dots at specific sites along the DNA contour and their positions are imaged. Tracking these positions in time allows us to characterize how segments within a long DNA are extended by flow and how fluctuations within the molecule are correlated. Utilizing a linear response theory of small fluctuations, we extract elastic forces for the different, ∼2-μm-long segments along the DNA backbone. We find that the average force-extension behavior of the segments can be well described by a wormlike chain force law with an anomalously small persistence length. PMID:25992731
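The wormlike-chain force law mentioned above has a standard interpolation form due to Marko and Siggia, with the persistence length as the only adjustable parameter. A small sketch (illustrative parameter values; kT ≈ 4.114 pN·nm at room temperature):

```python
import numpy as np

def wlc_force(x, L, P, kT=4.114):
    """Marko-Siggia interpolation for the wormlike-chain entropic
    force (pN) at extension x of a chain with contour length L and
    persistence length P (x, L, P in the same length units)."""
    r = x / L
    return (kT / P) * (1.0 / (4.0 * (1.0 - r) ** 2) - 0.25 + r)

# Example: DNA-like persistence length P = 50 nm (illustrative),
# evaluated at 50% fractional extension.
f_half = wlc_force(x=0.5, L=1.0, P=50.0)
```

The force diverges as the extension approaches the contour length, which is why fits to segment fluctuations are sensitive to the effective persistence length.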
NASA Astrophysics Data System (ADS)
Saar, Martin O.
2011-11-01
Understanding the fluid dynamics of supercritical carbon dioxide (CO2) in brine-filled porous media is important for predictions of CO2 flow and brine displacement during geologic CO2 sequestration and during geothermal energy capture using sequestered CO2 as the subsurface heat extraction fluid. We investigate multiphase fluid flow in porous media employing particle image velocimetry experiments and lattice-Boltzmann fluid flow simulations at the pore scale. In particular, we are interested in the motion of a drop (representing a CO2 bubble) through an orifice in a plate, representing a simplified porous medium. In addition, we study single-phase/multicomponent reactive transport experimentally by injecting water with dissolved CO2 into rocks/sediments typically considered for CO2 sequestration to investigate how resultant fluid-mineral reactions modify permeability fields. Finally, we investigate numerically subsurface CO2 and heat transport at the geologic formation scale.
A robust molecular probe for Ångstrom-scale analytics in liquids
Nirmalraj, Peter; Thompson, Damien; Dimitrakopoulos, Christos; Gotsmann, Bernd; Dumcenco, Dumitru; Kis, Andras; Riel, Heike
2016-01-01
Traditionally, nanomaterial profiling using a single-molecule-terminated scanning probe is performed at the vacuum–solid interface often at a few Kelvin, but is not a notion immediately associated with the liquid–solid interface at room temperature. Here, using a scanning tunnelling probe functionalized with a single C60 molecule stabilized in a high-density liquid, we resolve low-dimensional surface defects, atomic interfaces and capture Ångstrom-level bond-length variations in single-layer graphene and MoS2. Atom-by-atom controllable imaging contrast is demonstrated at room temperature and the electronic structure of the C60–metal probe complex within the encompassing liquid molecules is clarified using density functional theory. Our findings demonstrate that operating a robust single-molecular probe is not restricted to ultra-high vacuum and cryogenic settings. Hence the scope of high-precision analytics can be extended towards resolving sub-molecular features of organic elements and gauging ambient compatibility of emerging layered materials with atomic-scale sensitivity under experimentally less stringent conditions. PMID:27516157
NASA Astrophysics Data System (ADS)
La Mantia, David; Kumara, Nuwan; Kayani, Asghar; Simon, Anna; Tanis, John
2016-05-01
Total cross sections for single and double capture, as well as the corresponding cross sections for capture resulting in the emission of an Ar K x ray, were measured. This work was performed at Western Michigan University using the tandem Van de Graaff accelerator. A 45 MeV beam of fully-stripped fluorine ions was collided with argon gas in a differentially pumped cell. Surface barrier detectors were used to observe the charge-changed projectiles, and a Si(Li) x-ray detector, placed at 90° to the incident beam, was used to measure coincidences with Ar K x rays. The total capture cross sections are compared to previously measured cross sections in the existing literature. The coincidence cross sections, considerably smaller than the total cross sections, are found to be nearly equal for single and double capture, in contrast to the total cross sections, which vary by about an order of magnitude. Possible reasons for this behavior are discussed. Supported in part by the NSF.
A Study of Light Level Effect on the Accuracy of Image Processing-based Tomato Grading
NASA Astrophysics Data System (ADS)
Prijatna, D.; Muhaemin, M.; Wulandari, R. P.; Herwanto, T.; Saukat, M.; Sugandi, W. K.
2018-05-01
Image processing methods have been used in non-destructive tests of agricultural products. Compared to manual methods, image processing may produce more objective and consistent results. The image capturing box installed in the currently used tomato grading machine (TEP-4) is equipped with four fluorescent lamps to illuminate the processed tomatoes. Since the performance of any lamp decreases once its service time has exceeded its lifetime, it is predicted that this will affect tomato classification. The objective of this study was to determine the minimum light level at which classification accuracy is affected. The study was conducted by varying the light level from minimum to maximum on tomatoes in the image capturing box and then investigating its effect on image characteristics. Research results showed that light intensity affects two variables that are important for classification, namely the area and color of the captured image. The image processing program was able to determine correctly the weight and classification of tomatoes when the light level was 30 lx to 140 lx.
Registration of Large Motion Blurred Images
2016-05-09
in handling the dynamics of the capturing system, for example, a drone. CMOS sensors, used in recent times, when employed in these cameras produce two types of blur in the captured image when there is camera motion during exposure. However, contemporary CMOS sensors employ an electronic rolling shutter (RS
Sparsity-based image monitoring of crystal size distribution during crystallization
NASA Astrophysics Data System (ADS)
Liu, Tao; Huo, Yan; Ma, Cai Y.; Wang, Xue Z.
2017-07-01
To facilitate monitoring of crystal size distribution (CSD) during a crystallization process using an in-situ imaging system, a sparsity-based image analysis method is proposed for real-time implementation. To cope with image degradation arising from in-situ measurement subject to particle motion, solution turbulence, and uneven illumination background in the crystallizer, a sparse representation of a real-time captured crystal image is developed using an in-situ image dictionary established in advance, such that the noise components in the captured image can be efficiently removed. Subsequently, the edges of a crystal shape in a captured image are determined in terms of the salience information defined from the denoised crystal images. These edges are used to derive a blur kernel for reconstruction of a denoised image. A non-blind deconvolution algorithm is given for the real-time reconstruction. Consequently, image segmentation can be easily performed for evaluation of the CSD. The crystal image dictionary and blur kernels are updated in a timely manner according to the imaging conditions to improve the restoration efficiency. An experimental study on the cooling crystallization of α-type L-glutamic acid (LGA) is presented to demonstrate the effectiveness and merit of the proposed method.
Ultrafast chirped optical waveform recorder using a time microscope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, Corey Vincent
2015-04-21
A new technique for capturing both the amplitude and phase of an optical waveform is presented. This technique can capture signals with many THz of bandwidth in a single shot (e.g., temporal resolution of about 44 fs), or be operated repetitively at a high rate. That is, each temporal window (or frame) is captured single-shot, in real time, and the process may be run once or repeated at a high rate. By also including a variety of possible demultiplexing techniques, this process is scalable to recording continuous signals.
Mouse blood vessel imaging by in-line x-ray phase-contrast imaging
NASA Astrophysics Data System (ADS)
Zhang, Xi; Liu, Xiao-Song; Yang, Xin-Rong; Chen, Shao-Liang; Zhu, Pei-Ping; Yuan, Qing-Xi
2008-10-01
It is virtually impossible to observe blood vessels by conventional x-ray imaging techniques without using contrast agents. In addition, such x-ray systems are typically incapable of detecting vessels with diameters less than 200 µm. Here we show that vessels as small as 30 µm could be detected using in-line phase-contrast x-ray imaging without the use of contrast agents. Image quality was greatly improved by replacing resident blood with physiological saline. Furthermore, an entire branch of the portal vein from the main axial portal vein to the eighth generation of branching could be captured in a single phase-contrast image. Prior to our work, detection of 30 µm diameter blood vessels could only be achieved using x-ray interferometry, which requires sophisticated x-ray optics. Our results thus demonstrate that in-line phase-contrast x-ray imaging, using physiological saline as a contrast agent, provides an alternative to the interferometric method that can be much more easily implemented and also offers the advantage of a larger field of view. A possible application of this methodology is in animal tumor models, where it can be used to observe tumor angiogenesis and the treatment effects of antineoplastic agents.
OpenComet: An automated tool for comet assay image analysis
Gyori, Benjamin M.; Venkatachalam, Gireedhar; Thiagarajan, P.S.; Hsu, David; Clement, Marie-Veronique
2014-01-01
Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time. PMID:24624335
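Intensity-profile-based head segmentation of the kind described above can be sketched simply. The block below is a simplified illustration, not OpenComet's actual algorithm: it locates the comet head as the contiguous span around the profile peak above half maximum, and reports the remaining signal as a tail fraction (the made-up profile stands in for a column-summed comet image):

```python
import numpy as np

# Hypothetical 1-D intensity profile along a comet's axis.
profile = np.array([0, 1, 2, 8, 30, 60, 55, 20, 12, 9, 6, 3, 1, 0], float)

peak = int(profile.argmax())
thresh = 0.5 * profile[peak]
above = profile >= thresh

# Walk outward from the peak while pixels stay above threshold,
# giving the contiguous head region.
left = peak
while left > 0 and above[left - 1]:
    left -= 1
right = peak
while right < len(profile) - 1 and above[right + 1]:
    right += 1

head = (left, right)                                   # head span (indices)
tail_frac = profile[right + 1:].sum() / profile.sum()  # "tail DNA" fraction
```

A tail fraction like this is the kind of per-comet measurement from which DNA-damage statistics are aggregated.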
Not looking yourself: The cost of self-selecting photographs for identity verification.
White, David; Burton, Amy L; Kemp, Richard I
2016-05-01
Photo-identification is based on the premise that photographs are representative of facial appearance. However, previous studies show that ratings of likeness vary across different photographs of the same face, suggesting that some images capture identity better than others. Two experiments were designed to examine the relationship between likeness judgments and face matching accuracy. In Experiment 1, we compared unfamiliar face matching accuracy for self-selected and other-selected high-likeness images. Surprisingly, images selected by previously unfamiliar viewers - after very limited exposure to a target face - were more accurately matched than self-selected images chosen by the target identity themselves. Results also revealed extremely low inter-rater agreement in ratings of likeness across participants, suggesting that perceptions of image resemblance are inherently unstable. In Experiment 2, we tested whether the cost of self-selection can be explained by this general disagreement in likeness judgments between individual raters. We find that averaging across rankings by multiple raters produces image selections that provide superior identification accuracy. However, the benefit of other-selection persisted for single raters, suggesting that inaccurate representations of self interfere with our ability to judge which images faithfully represent our current appearance. © 2015 The British Psychological Society.
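The rank-averaging result in Experiment 2 can be illustrated directly: combine individual raters' likeness rankings by mean rank and select the photo with the lowest average (the rank matrix below is made up for illustration):

```python
import numpy as np

# Rows = raters, columns = candidate photos of the same face;
# entries are likeness ranks (1 = most like the person).
ranks = np.array([[2, 1, 3, 4],
                  [1, 3, 2, 4],
                  [3, 1, 4, 2],
                  [2, 1, 4, 3]])

mean_rank = ranks.mean(axis=0)          # per-photo average rank
selected = int(np.argmin(mean_rank))    # consensus high-likeness photo
```

Averaging washes out idiosyncratic disagreements between raters, which is why the aggregated selection outperforms any single rater's choice.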
Noise-free accurate count of microbial colonies by time-lapse shadow image analysis.
Ogawa, Hiroyuki; Nasu, Senshi; Takeshige, Motomu; Funabashi, Hisakage; Saito, Mikako; Matsuoka, Hideaki
2012-12-01
Microbial colonies in food matrices could be counted accurately by a novel noise-free method based on time-lapse shadow image analysis. An agar plate containing many clusters of microbial colonies and/or meat fragments was trans-illuminated to project their 2-dimensional (2D) shadow images on a color CCD camera. The 2D shadow images of every cluster distributed within a 3-mm thick agar layer were captured in focus simultaneously by means of a multiple focusing system, and were then converted to 3-dimensional (3D) shadow images. By time-lapse analysis of the 3D shadow images, it was determined whether each cluster comprised single or multiple colonies or a meat fragment. The analytical precision was high enough to be able to distinguish a microbial colony from a meat fragment, to recognize an oval image as two colonies contacting each other, and to detect microbial colonies hidden under a food fragment. The detection of hidden colonies is its outstanding performance in comparison with other systems. The present system attained accuracy for counting fewer than 5 colonies and is therefore of practical importance. Copyright © 2012 Elsevier B.V. All rights reserved.
Ng, David C; Tamura, Hideki; Tokuda, Takashi; Yamamoto, Akio; Matsuo, Masamichi; Nunoshita, Masahiro; Ishikawa, Yasuyuki; Shiosaka, Sadao; Ohta, Jun
2006-09-30
The aim of the present study is to demonstrate the application of complementary metal-oxide semiconductor (CMOS) imaging technology for studying the mouse brain. By using a dedicated CMOS image sensor, we have successfully imaged and measured brain serine protease activity in vivo, in real-time, and for an extended period of time. We have developed a biofluorescence imaging device by packaging the CMOS image sensor, which enabled an on-chip imaging configuration. In this configuration, no optics are required; an excitation filter is applied onto the sensor to replace the filter cube block found in conventional fluorescence microscopes. The fully packaged device measures 350 µm thick × 2.7 mm wide, consists of an array of 176 × 144 pixels, and is small enough for measurement inside a single hemisphere of the mouse brain, while still providing sufficient imaging resolution. In the experiment, intraperitoneally injected kainic acid induced upregulation of serine protease activity in the brain. These events were captured in real time by imaging and measuring the fluorescence from a fluorogenic substrate that detected this activity. The entire device, which weighs less than 1% of the body weight of the mouse, holds promise for studying freely moving animals.
Dual tracer imaging of SPECT and PET probes in living mice using a sequential protocol
Chapman, Sarah E; Diener, Justin M; Sasser, Todd A; Correcher, Carlos; González, Antonio J; Avermaete, Tony Van; Leevy, W Matthew
2012-01-01
Over the past 20 years, multimodal imaging strategies have motivated the fusion of Positron Emission Tomography (PET) or Single Photon Emission Computed Tomography (SPECT) scans with an X-ray computed tomography (CT) image to provide anatomical information, as well as a framework with which molecular and functional images may be co-registered. Recently, pre-clinical nuclear imaging technology has evolved to capture multiple SPECT or multiple PET tracers to further enhance the information content gathered within an imaging experiment. However, the use of SPECT and PET probes together, in the same animal, has remained a challenge. Here we describe a straightforward method using an integrated trimodal imaging system and a sequential dosing/acquisition protocol to achieve dual tracer imaging with 99mTc and 18F isotopes, along with anatomical CT, on an individual specimen. Dosing and imaging is completed so that minimal animal manipulations are required, full trimodal fusion is conserved, and tracer crosstalk including down-scatter of the PET tracer in SPECT mode is avoided. This technique will enhance the ability of preclinical researchers to detect multiple disease targets and perform functional, molecular, and anatomical imaging on individual specimens to increase the information content gathered within longitudinal in vivo studies. PMID:23145357
Smartphone snapshot mapping of skin chromophores under triple-wavelength laser illumination
NASA Astrophysics Data System (ADS)
Spigulis, Janis; Oshina, Ilze; Berzina, Anna; Bykov, Alexander
2017-09-01
Chromophore distribution maps are useful tools for assessing the severity of skin malformations and for monitoring skin recovery after burns, surgeries, and other interventions. The chromophore maps can be obtained by processing several spectral images of skin, e.g., captured by hyperspectral or multispectral cameras over seconds or even minutes. To avoid motion artifacts and simplify the procedure, a single-snapshot technique for mapping melanin, oxyhemoglobin, and deoxyhemoglobin of in-vivo skin by a smartphone under simultaneous three-wavelength (448-532-659 nm) laser illumination is proposed and examined. Three monochromatic spectral images corresponding to the illumination wavelengths were extracted from the smartphone camera RGB image data set, accounting for crosstalk between the RGB detection bands. The spectral images were further processed according to Beer's law in a three-chromophore approximation. Photon absorption path lengths in skin at the exploited wavelengths were estimated by means of Monte Carlo simulations. The technique was validated clinically on three kinds of skin lesions: nevi, hemangiomas, and seborrheic keratosis. The design of the developed add-on laser illumination system, image-processing details, and the results of clinical measurements are presented and discussed.
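In a three-chromophore approximation, the per-pixel estimate reduces to solving a small linear system under Beer's law. A minimal numpy sketch; the extinction/path-length matrix below is an illustrative placeholder, not clinical data:

```python
import numpy as np

# Hypothetical matrix E[i, j]: effective absorption of chromophore j
# (melanin, HbO2, Hb) at wavelength i (448, 532, 659 nm), already scaled
# by the Monte-Carlo photon path lengths. Illustrative values only.
E = np.array([[1.2, 0.9, 0.7],
              [0.6, 1.1, 0.8],
              [0.9, 0.3, 0.2]])

def unmix_chromophores(absorbance, E):
    """Solve the Beer's-law system A = E @ c for every pixel.

    absorbance: (3, H, W) array of -log(I / I0) at the three wavelengths.
    Returns a (3, H, W) array of relative chromophore concentrations."""
    flat = absorbance.reshape(3, -1)      # one 3-vector per pixel
    conc = np.linalg.solve(E, flat)       # 3x3 solve, all pixels at once
    return conc.reshape(absorbance.shape)

# Round trip on synthetic data
rng = np.random.default_rng(0)
true_c = rng.random((3, 4, 4))
A = np.einsum('ij,jhw->ihw', E, true_c)
recovered = unmix_chromophores(A, E)
```

In practice the matrix must be well-conditioned, which is why the three laser wavelengths are chosen where the chromophore spectra differ strongly.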
Infrared imaging results of an excited planar jet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrington, R.B.
1991-12-01
Planar jets are used for many applications including heating, cooling, and ventilation. Generally such a jet is designed to provide good mixing within an enclosure. In building applications, the jet provides both thermal comfort and adequate indoor air quality. Increased mixing rates may lead to lower short-circuiting of conditioned air, elimination of dead zones within the occupied zone, reduced energy costs, increased occupant comfort, and higher indoor air quality. This paper discusses using an infrared imaging system to show the effect of excitation of a jet on the spread angle and on the jet mixing efficiency. Infrared imaging captures a large number of data points in real time (over 50,000 data points per image), providing significant advantages over single-point measurements. We used a screen mesh with a time constant of approximately 0.3 seconds as a target for the infrared camera to detect temperature variations in the jet. The infrared images show increased jet spread due to excitation of the jet. Digital data reduction and analysis show changes in jet isotherms and quantify the increased mixing caused by excitation. 17 refs., 20 figs.
Full-color high-definition CGH reconstructing hybrid scenes of physical and virtual objects
NASA Astrophysics Data System (ADS)
Tsuchiyama, Yasuhiro; Matsushima, Kyoji; Nakahara, Sumio; Yamaguchi, Masahiro; Sakamoto, Yuji
2017-03-01
High-definition CGHs can reconstruct high-quality 3D images comparable to those of conventional optical holography. However, it was difficult to exhibit full-color images reconstructed by these high-definition CGHs, because three CGHs for the RGB colors and a bulky image combiner were needed to produce full-color images. Recently, we reported a novel technique for full-color reconstruction using RGB color filters, similar to those used in liquid-crystal panels. This technique allows us to produce full-color high-definition CGHs composed of a single plate and to place them on exhibition. In this paper, we use the technique to demonstrate full-color CGHs that reconstruct hybrid scenes comprising real physical objects and CG-modeled virtual objects. Here, the wave field of the physical object is obtained from dense multi-viewpoint images by employing the ray-sampling (RS) plane technique. In addition to the technique for full-color capturing and reconstruction of real object fields, the principle and simulation technique for full-color CGHs using RGB color filters are presented.
Coded aperture solution for improving the performance of traffic enforcement cameras
NASA Astrophysics Data System (ADS)
Masoudifar, Mina; Pourreza, Hamid Reza
2016-10-01
A coded aperture camera is proposed for automatic license plate recognition (ALPR) systems. It captures images using a noncircular aperture. The aperture pattern is designed for the rapid acquisition of high-resolution images while preserving high spatial frequencies of defocused regions. It is obtained by minimizing an objective function, which computes the expected value of perceptual deblurring error. The imaging conditions and camera sensor specifications are also considered in the proposed function. The designed aperture improves the depth of field (DoF) and subsequently ALPR performance. The captured images can be directly analyzed by the ALPR software up to a specific depth, which is 13 m in our case, though it is 11 m for the circular aperture. Moreover, since the deblurring results of images captured by our aperture yield fewer artifacts than those captured by the circular aperture, images can be first deblurred and then analyzed by the ALPR software. In this way, the DoF and recognition rate can be improved at the same time. Our case study shows that the proposed camera can improve the DoF up to 17 m while it is limited to 11 m in the conventional aperture.
Czarnuch, Stephen; Mihailidis, Alex
2015-03-27
We present the development and evaluation of a robust hand tracker based on single overhead depth images for use in the COACH, an assistive technology for people with dementia. The new hand tracker was designed to overcome limitations experienced by the COACH in previous clinical trials. We train a random decision forest classifier using ∼5000 manually labeled, unbalanced training images. Hand positions from the classifier are translated into task actions based on proximity to environmental objects. Tracker performance is evaluated using a large set of ∼24 000 manually labeled images captured from 41 participants in a fully functional washroom, and compared to the system's previous colour-based hand tracker. Precision and recall were 0.994 and 0.938 for the depth tracker compared to 0.981 and 0.822 for the colour tracker with the current data, and 0.989 and 0.466 in the previous study. The improved tracking performance supports integration of the depth-based tracker into the COACH toward unsupervised, real-world trials. Implications for Rehabilitation: The COACH is an intelligent assistive technology that can enable people with cognitive disabilities to stay at home longer, supporting the concept of aging-in-place. Automated prompting systems, a type of intelligent assistive technology, can help to support the independent completion of activities of daily living, increasing the independence of people with cognitive disabilities while reducing the burden of care experienced by caregivers. Robust motion tracking using depth imaging supports the development of intelligent assistive technologies like the COACH. Robust motion tracking also has application to other forms of assistive technologies including gaming, human-computer interaction and automated assessments.
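For reference, the quoted precision and recall figures follow from the standard confusion-matrix definitions; a minimal sketch with hypothetical counts chosen to reproduce the depth tracker's numbers:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts (not from the study) matching 0.994 / 0.938
p, r = precision_recall(tp=938, fp=6, fn=62)
```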
An Example-Based Super-Resolution Algorithm for Selfie Images
William, Jino Hans; Venkateswaran, N.; Narayanan, Srinath; Ramachandran, Sandeep
2016-01-01
A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, fine details are inherently missing. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine details and are used as an exemplar to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior which learns the correspondence between the LR-HR patch pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and recovery processes. The proposed algorithm is evaluated for its efficiency and effectiveness, both qualitatively and quantitatively, against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, requiring less than 3 seconds to super-resolve an LR selfie, and effective, preserving sharp details without introducing counterfeit fine details. PMID:27064500
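A generic example-based scheme in this spirit learns a least-squares operator from LR/HR patch pairs of the exemplar. The numpy sketch below uses plain vectorized patch regression as a simplification (the paper's MVR specifically avoids vectorization); nearest-neighbour downsampling stands in for the camera's LR path:

```python
import numpy as np

def extract_patch_pairs(hr, scale=2, p=4):
    """Build LR/HR training patch pairs from a single HR exemplar image."""
    lr = hr[::scale, ::scale]                 # simulated low-resolution view
    X, Y = [], []
    for i in range(0, lr.shape[0] - p + 1, p):
        for j in range(0, lr.shape[1] - p + 1, p):
            X.append(lr[i:i + p, j:j + p].ravel())
            Y.append(hr[scale * i:scale * (i + p),
                        scale * j:scale * (j + p)].ravel())
    return np.array(X), np.array(Y)

rng = np.random.default_rng(0)
hr_exemplar = rng.random((64, 64))            # stand-in rear-camera image
X, Y = extract_patch_pairs(hr_exemplar)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)     # learned patch-level operator

def super_resolve_patch(lr_patch, W, scale=2, p=4):
    """Apply the learned operator to one LR patch."""
    return (lr_patch.ravel() @ W).reshape(scale * p, scale * p)
```

A full pipeline would tile the LR selfie into overlapping patches, apply the operator to each, and blend the outputs.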
NASA Astrophysics Data System (ADS)
Li, Zhengyan; Zgadzaj, Rafal; Wang, Xiaoming; Reed, Stephen; Dong, Peng; Downer, Michael C.
2010-11-01
We demonstrate a prototype Frequency Domain Streak Camera (FDSC) that can capture the picosecond time evolution of a plasma accelerator structure in a single shot. In our prototype FDSC, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index "bubble" in fused silica glass, supplementing a conventional Frequency Domain Holographic (FDH) probe-reference pair that co-propagates with the "bubble". Frequency Domain Tomography (FDT) generalizes the FDSC by probing the "bubble" from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Temporal and angular multiplexing methods improve data storage and processing capability, enabling a compact FDT system with a single spectrometer.
High quality image-pair-based deblurring method using edge mask and improved residual deconvolution
NASA Astrophysics Data System (ADS)
Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting
2017-04-01
Image deconvolution is a challenging task in the field of image processing. Using an image pair can provide a better restored image than deblurring from a single blurred image. In this paper, a high-quality image-pair-based deblurring method is presented using an improved Richardson-Lucy (RL) algorithm and the gain-controlled residual deconvolution technique. The input image pair includes a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain the preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary deblurring result then serves as the basic latent image, and the gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further control the ringing effects around the edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the preliminary deblurring result to the recovered residual image. An optical experimental vibration platform is set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework obtains superior performance in both subjective and objective assessments and has wide application in many image deblurring fields.
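For orientation, the core RL iteration can be sketched in numpy; the paper's edge-mask weighting and gain-controlled residual stages, which modulate this update, are omitted, and the FFT convolution assumes periodic boundaries:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def conv_same(img, psf):
    """Circular 'same' convolution with a small centred PSF."""
    pad = np.zeros_like(img)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(ifft2(fft2(img) * fft2(pad)))

def richardson_lucy(blurred, psf, iters=30):
    """Plain RL iteration: est *= correlate(observed / reblurred, psf)."""
    est = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(iters):
        ratio = blurred / (conv_same(est, psf) + 1e-12)
        est = est * conv_same(ratio, psf_flip)
    return est

# Synthetic check: deblurring a box-blurred bright square
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
psf = np.full((3, 3), 1.0 / 9.0)
blurred = conv_same(img, psf)
restored = richardson_lucy(blurred, psf)
```

The edge mask in the paper suppresses the update near strong edges to reduce ringing; here the bare iteration already sharpens the noiseless test image.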
Design of a Single-Cell Positioning Controller Using Electroosmotic Flow and Image Processing
Ay, Chyung; Young, Chao-Wang; Chen, Jhong-Yin
2013-01-01
The objective of the current research was not only to provide a fast and automatic positioning platform for single cells, but also to improve biomolecular manipulation techniques. In this study, an automatic platform for cell positioning using electroosmotic flow and image processing technology was designed. The platform was developed using a PCI image acquisition interface card for capturing images from a microscope and then transferring them to a computer with human-machine interface software. This software was designed in LabVIEW (Laboratory Virtual Instrument Engineering Workbench), a graphical language, to find cell positions and view the driving trace, with the fuzzy logic method used to control the voltage or duration of the electric field. In experiments on real human leukemic cells (U-937), the success rate of cell positioning by controlling the voltage factor reaches 100% within 5 s. Greater precision is obtained when controlling the time factor, whereby the success rate reaches 100% within 28 s. Advantages in both high speed and high precision are attained when these two control methods are combined. The control speed with the combined method is about 5.18 times greater than that achieved by the time method, and the control precision with the combined method is more than five times greater than that achieved by the voltage method. PMID:23698272
Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed
NASA Astrophysics Data System (ADS)
Walsh, Alex J.; Beier, Hope T.
2016-03-01
Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging using laser scanning microscopes. However, TCSPC is inherently slow, making it ineffective for capturing rapid events: at most one photon is detected per laser pulse, which imposes long acquisition times and requires low photon detection rates to avoid biasing measurements towards short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument response deconvolution and fluorescence lifetime exponential decay estimation. Instrument response deconvolution and fluorescence exponential decay estimation can be performed in several ways, including iterative least squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence decay analysis techniques for estimating double exponential decays across many data characteristics, including various lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low-photon-count data. Such a technique reduces the number of photons required for accurate component estimation when lifetime values are known, such as for commercial fluorescent dyes and FRET experiments, and improves imaging speed 10-fold.
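The temporal-binning idea can be illustrated in numpy. A noiseless mono-exponential decay and a log-linear fit stand in for the noisy double-exponential fits compared in the paper; the point is that summing adjacent time bins trades time resolution for photons per bin while preserving the lifetime:

```python
import numpy as np

def rebin(decay, n_bins):
    """Sum adjacent time bins, reducing the histogram to n_bins bins."""
    decay = decay[: len(decay) // n_bins * n_bins]
    return decay.reshape(n_bins, -1).sum(axis=1)

def fit_lifetime(decay, bin_width):
    """Mono-exponential lifetime by log-linear least squares (a stand-in
    for the iterative double-exponential fits compared in the paper)."""
    t = np.arange(len(decay)) * bin_width
    slope, _ = np.polyfit(t, np.log(decay), 1)
    return -1.0 / slope

dt, tau = 0.05, 2.5                                 # ns per bin, true lifetime
counts = 1e4 * np.exp(-np.arange(256) * dt / tau)   # noiseless decay histogram
coarse = rebin(counts, 32)                          # 256 -> 32 bins (8x binning)
tau_hat = fit_lifetime(coarse, dt * 8)              # bin width grows 8x
```

Because the sum of an exponential over each coarse bin is itself exponential in the bin index, the recovered lifetime is unchanged by binning; with Poisson noise, the coarser bins simply carry more photons each.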
The Cooking and Pneumonia Study (CAPS) in Malawi: Implementation of Remote Source Data Verification
Weston, William; Smedley, James; Bennett, Andrew; Mortimer, Kevin
2016-01-01
Background Source data verification (SDV) is a data monitoring procedure which compares the original records with the Case Report Form (CRF). Traditionally, on-site SDV relies on monitors making multiple visits to study sites, requiring extensive resources. The Cooking And Pneumonia Study (CAPS) is a 24-month village-level cluster randomized controlled trial assessing the effectiveness of an advanced cook-stove intervention in preventing pneumonia in children under five in rural Malawi (www.capstudy.org). CAPS used smartphones to capture digital images of the original records on an electronic CRF (eCRF). In the present study, descriptive statistics are used to report the experience of electronic data capture with remote SDV in a challenging research setting in rural Malawi. Methods At three-monthly intervals, fieldworkers, who were employed by CAPS, captured pneumonia data from the original records onto the eCRF. Fieldworkers also captured digital images of the original records. Once Internet connectivity was available, the data captured on the eCRF and the digital images of the original records were uploaded to a web-based SDV application. This enabled SDV to be conducted remotely from the UK. We conducted SDV of the pneumonia data (occurrence, severity, and clinical indicators) recorded in the eCRF against the data in the digital images of the original records. Results 664 episodes of pneumonia were recorded after 6 months of follow-up. Of these 664 episodes, 611 (92%) had a finding of pneumonia in the original records. All digital images of the original records were clear and legible. Conclusion Electronic data capture using eCRFs on mobile technology is feasible in rural Malawi. Capturing digital images of the original records in the field allows remote SDV to be conducted efficiently and securely without requiring additional field visits. We recommend these approaches in similar settings, especially those with health endpoints. PMID:27355447
In-cylinder air-flow characteristics of different intake port geometries using tomographic PIV
NASA Astrophysics Data System (ADS)
Agarwal, Avinash Kumar; Gadekar, Suresh; Singh, Akhilendra Pratap
2017-09-01
For improving the in-cylinder flow characteristics of intake air/charge and strengthening the turbulence intensity, specific intake port geometries have shown significant potential in compression ignition engines. In this experimental study, effects of intake port geometries on air-flow characteristics were investigated using tomographic particle imaging velocimetry (TPIV). Experiments were performed using three configurations, namely, swirl port open (SPO), tangential port open (TPO), and both ports open (BPO), in a single-cylinder optical research engine. Flow investigations were carried out in a volumetric section located in the middle of the intake and exhaust valves. Particle imaging velocimetry (PIV) images were captured using two high-speed cameras at a crank angle resolution of 2° in the intake and compression strokes. The captured PIV images were then pre-processed and post-processed to obtain the final air-flow-field. Effects of the two intake ports on the flow-field are presented in terms of air velocity, vorticity, average absolute velocity, and turbulent kinetic energy. Analysis of these flow-fields suggests the dominance of the swirl port over the tangential port in the BPO configuration and a higher rate of flow energy dissipation in the TPO configuration compared to the SPO and BPO configurations. These findings of the TPIV investigations were experimentally verified by combustion and particulate characteristics of the test engine in the thermal cylinder head configuration. Combustion results showed that the SPO configuration resulted in superior combustion among all three port configurations. Particulate characteristics showed that the TPO configuration resulted in higher particulate emissions compared to the other port configurations.
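The derived quantities named above follow directly from the measured velocity fields. A numpy sketch on a uniform grid (assumed unit spacings, not the study's actual vector resolution):

```python
import numpy as np

def vorticity_z(u, v, dx, dy):
    """Out-of-plane vorticity dv/dx - du/dy on a uniform grid."""
    return np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)

def tke(u_s, v_s, w_s):
    """Turbulent kinetic energy 0.5*(u'^2 + v'^2 + w'^2) from an ensemble
    of instantaneous fields (axis 0 indexes engine cycles/samples)."""
    fluct = lambda q: q - q.mean(axis=0, keepdims=True)
    return 0.5 * (fluct(u_s) ** 2 + fluct(v_s) ** 2
                  + fluct(w_s) ** 2).mean(axis=0)

# Sanity check: solid-body rotation u = -y, v = x has vorticity 2 everywhere
yy, xx = np.mgrid[0:8, 0:8]
u, v = -yy.astype(float), xx.astype(float)
omega = vorticity_z(u, v, 1.0, 1.0)
```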
Preliminary experiments on quantification of skin condition
NASA Astrophysics Data System (ADS)
Kitajima, Kenzo; Iyatomi, Hitoshi
2014-03-01
In this study, we investigated a preliminary method for assessing skin conditions, such as moisturizing property and fineness of the skin, using image analysis alone. We captured facial images from volunteer subjects aged between their 30s and 60s with a Pocket Micro (R) device (Scalar Co., Japan). This device has two image capturing modes: the normal mode and the non-reflection mode, the latter using an equipped polarization filter. We captured skin images from a total of 68 spots on subjects' faces using both modes (i.e., a total of 136 skin images). The moisture-retaining property of the skin and a subjective evaluation score of skin fineness on a 5-point scale were also obtained in advance for each case as a gold standard (their means and SDs were 35.15 +/- 3.22 (μS) and 3.45 +/- 1.17, respectively). We extracted a total of 107 image features from each image and built linear regression models estimating the abovementioned criteria with stepwise feature selection. The developed model for estimating skin moisture achieved an MSE of 1.92 (μS) with 6 selected parameters, while the model for skin fineness achieved an MSE of 0.51 scale points with 7 parameters under leave-one-out cross validation. We confirmed that the developed models predicted the moisture-retaining property and fineness of the skin appropriately from captured images alone.
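The leave-one-out validation used here is simple to reproduce. A numpy-only sketch for an ordinary least-squares model on synthetic features (the stepwise feature selection itself is omitted):

```python
import numpy as np

def loocv_mse(X, y):
    """Leave-one-out cross-validated MSE of an OLS model with intercept,
    mirroring the validation scheme described in the abstract."""
    n = len(y)
    Xb = np.column_stack([np.ones(n), X])   # add intercept column
    errs = []
    for i in range(n):
        keep = np.arange(n) != i            # hold out sample i
        beta, *_ = np.linalg.lstsq(Xb[keep], y[keep], rcond=None)
        errs.append((y[i] - Xb[i] @ beta) ** 2)
    return float(np.mean(errs))

rng = np.random.default_rng(0)
X = rng.random((20, 3))                     # stand-in image features
y = X @ np.array([1.0, 2.0, 3.0]) + 0.5     # exactly linear target
mse = loocv_mse(X, y)                       # ~0 for a noiseless linear target
```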
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2009-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because operator involvement is occasionally needed, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development of fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software, which automatically performed digital enhancement of the images. The digitally enhanced images of the corneal endothelium were then transformed using the fast Fourier transform (FFT). Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier transformed images. The data obtained from each Fourier transformed image were used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well-known diffraction theory. Results in the form of estimated cell densities of the corneal endothelium were obtained using the fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a relatively strong correlation was found.
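The key step, reading a characteristic spatial frequency off the Fourier transform, can be sketched in numpy. A cosine grating with a known period stands in for the cell mosaic; the mapping from the spectral ring radius to cells/mm² depends on the microscope calibration, which is not shown:

```python
import numpy as np

def dominant_frequency_radius(img):
    """Radius (cycles per image) of the strongest non-DC peak in the 2-D
    power spectrum; for a regular mosaic this is proportional to the
    linear cell density (absolute density needs the imaging calibration)."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    yy, xx = np.indices(F.shape)
    r = np.hypot(yy - cy, xx - cx)
    return r[np.unravel_index(np.argmax(F), F.shape)]

# Synthetic "mosaic": a grating with 8 cycles across the image
N = 128
x = np.arange(N)
mosaic = np.tile(np.cos(2 * np.pi * 8 * x / N), (N, 1))
radius = dominant_frequency_radius(mosaic)
```

A real endothelial mosaic produces a diffuse ring rather than sharp peaks, so a practical implementation averages the spectrum over angle before locating the radial maximum.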
Effective Fingerprint Quality Estimation for Diverse Capture Sensors
Xie, Shan Juan; Yoon, Sook; Shin, Jinwook; Park, Dong Sun
2010-01-01
Recognizing the quality of fingerprints in advance can be beneficial for improving the performance of fingerprint recognition systems. The representative features for assessing the quality of fingerprint images are known to vary across different types of capture sensors. In this paper, an effective quality estimation system that can be adapted to different types of capture sensors is designed by modifying and combining a set of features including orientation certainty, local orientation quality, and consistency. The proposed system extracts basic features and generates next-level features that are applicable to various types of capture sensors. The system then uses a Support Vector Machine (SVM) classifier to determine whether or not an image should be accepted as input to the recognition system. The experimental results show that the proposed method performs better than previous methods in terms of accuracy. In addition, the proposed method is able to eliminate residue images from optical and capacitive sensors, and coarse images from thermal sensors. PMID:22163632
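One of the basic features named, orientation certainty, is commonly computed from the eigenvalues of the local gradient covariance. A numpy sketch of that single feature (the feature combination and the SVM stage are omitted):

```python
import numpy as np

def orientation_certainty(block):
    """Eigenvalue contrast of the gradient covariance of an image block:
    near 1 for a strongly oriented ridge pattern, low for isotropic noise."""
    gy, gx = np.gradient(block.astype(float))
    C = np.cov(np.vstack([gx.ravel(), gy.ravel()]))
    ev = np.linalg.eigvalsh(C)                 # ascending eigenvalues
    return (ev[1] - ev[0]) / (ev[1] + ev[0] + 1e-12)

# Strongly oriented "ridges" vs. an unoriented noise block
x = np.arange(32)
ridge = np.tile(np.cos(2 * np.pi * 4 * x / 32), (32, 1))
rng = np.random.default_rng(0)
noise = rng.random((32, 32))
```

In a full system this would be computed per block, pooled over the image, and fed with the other features into the classifier.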
Restoration of motion blurred images
NASA Astrophysics Data System (ADS)
Gaxiola, Leopoldo N.; Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.
2017-08-01
Image restoration is a classic problem in image processing. Image degradations can occur for several reasons, for instance, imperfections of imaging systems, quantization errors, atmospheric turbulence, and relative motion between the camera and objects, among others. Motion blur is a typical degradation in dynamic imaging systems. In this work, we present a method to estimate the parameters of linear motion blur degradation from a captured blurred image. The proposed method is based on analyzing the frequency spectrum of the captured image to first estimate the degradation parameters and then restore the image with a linear filter. The performance of the proposed method is evaluated by processing synthetic and real-life images. The obtained results are characterized in terms of image restoration accuracy given by an objective criterion.
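The frequency-domain idea can be sketched for horizontal box-motion blur, whose MTF is a sinc with zeros at multiples of N/L. This noiseless 1-D sketch uses an exact-zero threshold; a real implementation would use robust peak or cepstrum detection instead:

```python
import numpy as np

def estimate_blur_length(blurred_row, thresh=1e-9):
    """Blur extent from the first zero of the magnitude spectrum: a box
    kernel of length L has sinc MTF zeros at multiples of N/L. The exact
    threshold suits noiseless synthetic data only."""
    N = len(blurred_row)
    mag = np.abs(np.fft.fft(blurred_row))
    mag = mag / mag[0]                             # normalize by DC
    f0 = np.argmax(mag[1 : N // 2] < thresh) + 1   # first spectral zero
    return N / f0

rng = np.random.default_rng(1)
N, L = 256, 8
row = rng.random(N)                   # one scan line of the sharp scene
kernel = np.zeros(N)
kernel[:L] = 1.0 / L                  # horizontal box-motion blur of length 8
blurred = np.real(np.fft.ifft(np.fft.fft(row) * np.fft.fft(kernel)))
L_hat = estimate_blur_length(blurred)
```

Once the blur length (and, in 2-D, the blur angle) is known, the image can be restored with a linear filter such as a Wiener filter built from the same kernel.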
Wave analysis of a plenoptic system and its applications
NASA Astrophysics Data System (ADS)
Shroff, Sapna A.; Berkner, Kathrin
2013-03-01
Traditional imaging systems directly image a 2D object plane onto the sensor. Plenoptic imaging systems contain a lenslet array at the conventional image plane and a sensor at the back focal plane of the lenslet array. In this configuration the data captured at the sensor is not a direct image of the object. Each lenslet effectively images the aperture of the main imaging lens at the sensor. Therefore the sensor data retains angular light-field information which can be used for a posteriori digital computation of multi-angle images and axially refocused images. If a filter array, containing spectral filters or neutral density or polarization filters, is placed at the pupil aperture of the main imaging lens, then each lenslet images the filters onto the sensor. This enables the digital separation of multiple filter modalities, giving single-snapshot, multi-modal images. Due to the diversity of potential applications of plenoptic systems, their investigation is increasing. As the application space moves towards microscopes and other complex systems, and as pixel sizes become smaller, the consideration of diffraction effects in these systems becomes increasingly important. We discuss a plenoptic system and its wave propagation analysis for both coherent and incoherent imaging. We simulate a system response using our analysis and discuss various applications of the system response pertaining to plenoptic system design, implementation and calibration.
NASA Astrophysics Data System (ADS)
Brown, Christopher M.; Maggio-Price, Lillian; Seibel, Eric J.
2007-02-01
Scanning fiber endoscope (SFE) technology has shown promise as a minimally invasive optical imaging tool. To date, it is capable of capturing full-color 500-line images, at 15 Hz frame rate in vivo, as a 1.6 mm diameter endoscope. The SFE uses a single-mode optical fiber actuated at mechanical resonance to scan a light spot over tissue while backscattered or fluorescent light at each pixel is detected in time series using several multimode optical fibers. We are extending the capability of the SFE from an RGB reflectance imaging device to a diagnostic tool by imaging laser induced fluorescence (LIF) in tissue, allowing for correlation of endogenous fluorescence to tissue state. Design of the SFE for diagnostic imaging is guided by a comparison of single point spectra acquired from an inflammatory bowel disease (IBD) model to tissue histology evaluated by a pathologist. LIF spectra were acquired by illuminating tissue with a 405 nm light source and detecting intrinsic fluorescence with a multimode optical fiber. The IBD model used in this study was mdr1a-/- mice, where IBD was modulated by infection with Helicobacter bilis. IBD lesions in the mouse model ranged from mild to marked hyperplasia and dysplasia, from the distal colon to the cecum. A principal components analysis (PCA) was conducted on single point spectra of control and IBD tissue. PCA allowed for differentiation between healthy and dysplastic tissue, indicating that emission wavelengths from 620 - 650 nm were best able to differentiate diseased tissue and inflammation from normal healthy tissue.
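The PCA step can be reproduced with a plain SVD. Synthetic two-group "spectra" with a hypothetical disease-related offset stand in for the measured emission spectra:

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """Scores of each spectrum on the leading principal components
    (SVD of the mean-centred data matrix)."""
    X = spectra - spectra.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

rng = np.random.default_rng(0)
base = rng.random(30)                                   # stand-in emission spectrum
healthy = base + 0.01 * rng.standard_normal((10, 30))
# hypothetical disease signature: a uniform extra emission offset
diseased = base + 0.5 + 0.01 * rng.standard_normal((10, 30))
scores = pca_scores(np.vstack([healthy, diseased]))
```

With a consistent spectral difference between groups, the first component separates them cleanly, which is the behaviour reported for the 620 - 650 nm band.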
High speed all optical shear wave imaging optical coherence elastography (Conference Presentation)
NASA Astrophysics Data System (ADS)
Song, Shaozhen; Hsieh, Bao-Yu; Wei, Wei; Shen, Tueng; O'Donnell, Matthew; Wang, Ruikang K.
2016-03-01
Optical Coherence Elastography (OCE) is a non-invasive testing modality that maps the mechanical properties of soft tissues with high sensitivity and spatial resolution using phase-sensitive optical coherence tomography (PhS-OCT). Shear wave OCE (SW-OCE) is a leading technique that relies on the speed of propagating shear waves to provide quantitative elastography. Previous shear wave imaging OCT techniques are based on repeated M-B scans, which have several drawbacks such as long acquisition times and repeated wave stimulations. Recent development of Fourier-domain mode-locked high-speed swept-source OCT systems has enabled sufficient speed to perform kHz B-scan rate OCT imaging. Here we propose ultra-high-speed imaging that captures single-shot transient shear wave propagation to perform SW-OCE. The frame rate of shear wave imaging is 16 kHz, at an A-line rate of ~1.62 MHz, which allows the detection of high-frequency shear waves of up to 8 kHz. The shear wave is generated photothermal-acoustically by an ultraviolet pulsed laser, which requires no contact with the OCE subject while launching high-frequency shear waves that carry rich localized elasticity information. The image acquisition and processing can be performed at video rate, which enables real-time 3D elastography. SW-OCE measurements are demonstrated on tissue-mimicking phantoms and porcine ocular tissue. This approach opens up the feasibility of performing real-time 3D SW-OCE in clinical applications, to obtain high-resolution localized quantitative measurements of tissue biomechanical properties.
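The quantitative link from measured wave speed to elasticity is the standard relation μ = ρc². A small sketch; the density value is an assumed soft-tissue figure, not from the abstract:

```python
def tissue_moduli(c_shear, rho=1000.0):
    """Shear modulus mu = rho * c^2 (Pa) from shear wave speed (m/s);
    Young's modulus E ~ 3*mu for nearly incompressible soft tissue.
    rho = 1000 kg/m^3 is an assumed soft-tissue density."""
    mu = rho * c_shear ** 2
    return mu, 3.0 * mu

mu, E = tissue_moduli(2.0)   # a 2 m/s shear wave -> mu = 4 kPa
```

This is why mapping the local shear wave speed frame by frame yields a quantitative elasticity map rather than only relative stiffness contrast.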
Wang, E; Babbey, C M; Dunn, K W
2005-05-01
Fluorescence microscopy of the dynamics of living cells presents a special challenge to a microscope imaging system, simultaneously requiring both high spatial resolution and high temporal resolution, but with illumination levels low enough to prevent fluorophore damage and cytotoxicity. We have compared the high-speed Yokogawa CSU10 spinning disc confocal system with several conventional single-point scanning confocal (SPSC) microscopes, using the relationship between image signal-to-noise ratio and fluorophore photobleaching as an index of system efficiency. These studies demonstrate that the efficiency of the CSU10 consistently exceeds that of the SPSC systems. The high efficiency of the CSU10 means that quality images can be collected with much lower levels of illumination; the CSU10 was capable of achieving the maximum signal-to-noise of an SPSC system at illumination levels that incur photobleaching at only 1/15th of the rate of the SPSC system. Although some of the relative efficiency of the CSU10 system may be attributed to the use of a CCD rather than a photomultiplier detector system, our analyses indicate that high-speed imaging with the SPSC system is limited by fluorescence saturation at the high levels of illumination frequently needed to collect images at high frame rates. The high speed, high efficiency and freedom from fluorescence saturation combine to make the CSU10 effective for extended imaging of living cells at rates capable of capturing the three-dimensional motion of endosomes moving up to several micrometres per second.
Evaluation of Particle Image Velocimetry Measurement Using Multi-wavelength Illumination
NASA Astrophysics Data System (ADS)
Lai, HC; Chew, TF; Razak, NA
2018-05-01
In past decades, particle image velocimetry (PIV) has been widely used in measuring fluid flow, and much research has been done to improve the PIV technique. Many studies have investigated high-power light-emitting diodes (HPLEDs) as a replacement for the traditional laser illumination system in PIV. As an extension of this research, two HPLEDs with different wavelengths are introduced here as the PIV illumination system. The objective of this research is to use dual-colour LEDs to directly replace the laser illumination system, so that a single frame can be captured by a normal camera instead of a high-speed camera. The dual-colour HPLED PIV supports a single-frame, double-pulse mode, which enables the velocity vectors of the particles to be plotted after correlation. An illumination system was designed, fabricated, and evaluated by measuring water flow in a small tank. The results indicate that HPLEDs promise several advantages in terms of cost, safety and performance, and have high potential to be developed into an alternative illumination source for PIV in the near future.
Droplet-based microfluidics platform for measurement of rapid erythrocyte water transport
Jin, Byung-Ju; Esteva-Font, Cristina; Verkman, A.S.
2015-01-01
Cell membrane water permeability is an important determinant of epithelial fluid secretion, tissue swelling, angiogenesis, tumor spread and other biological processes. Cellular water channels, the aquaporins, are important drug targets. Water permeability is generally measured from the kinetics of cell volume change in response to an osmotic gradient. Here, we developed a microfluidics platform in which cells expressing a cytoplasmic, volume-sensing fluorescent dye are rapidly subjected to an osmotic gradient by solution mixing inside a ~ 0.1 nL droplet surrounded by oil. Solution mixing time was < 10 ms. Osmotic water permeability was deduced from a single, time-integrated fluorescence image of an observation area in which time after mixing is determined by spatial position. Water permeability was accurately measured in aquaporin-expressing erythrocytes with half-times for osmotic equilibration down to < 50 ms. Compared with conventional water permeability measurements using costly stopped-flow instrumentation, the microfluidics platform here utilizes sub-microliter blood sample volume, does not suffer from mixing artifact, and replaces challenging kinetic measurements by a single image capture using a standard laboratory fluorescence microscope. PMID:26159099
NASA Astrophysics Data System (ADS)
Shiraishi, Yuhki; Takeda, Fumiaki
In this research, we have developed a sorting system for fishes, which comprises a conveyance part, an image capture part, and a sorting part. In the conveyance part, we have developed an independent conveyance system in order to separate one fish from an intertwined group of fishes. After the image of the separated fish is captured in the image capture part, a rotation-invariant feature is extracted using the two-dimensional fast Fourier transform: the mean value of the power spectrum over all points at the same distance from the origin of the spectral field. The fishes are then classified by three-layered feed-forward neural networks. The experimental results show that the developed system classifies three kinds of fishes captured at various angles with a classification ratio of 98.95% for 1044 captured images of five fishes. Further experiments show a classification ratio of 90.7% for 300 fishes using the 10-fold cross-validation method.
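The rotation-invariant feature described above (the mean FFT power over all spectrum points at the same distance from the origin) can be sketched in a few lines of NumPy; the image size and bin count are illustrative assumptions:

```python
import numpy as np

def radial_power_spectrum(image, n_bins=32):
    """Rotation-invariant feature: mean FFT power at each distance
    from the origin of the (centered) 2-D power spectrum."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2)         # distance from spectrum origin
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return sums / np.maximum(counts, 1)          # mean power per radius ring

# A rotated copy of an image yields the same feature vector, since rotating
# the image rotates its power spectrum without changing radial distances.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
f1 = radial_power_spectrum(img)
f2 = radial_power_spectrum(np.rot90(img))        # exact 90-degree rotation
```

For an exact 90° rotation the two feature vectors match to floating-point precision, which is the property that lets the classifier handle fish captured at various angles.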
Capture and X-ray diffraction studies of protein microcrystals in a microfluidic trap array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyubimov, Artem Y.; Stanford University, Stanford, CA 94305
A microfluidic platform has been developed for the capture and X-ray analysis of protein microcrystals, affording a means to improve the efficiency of XFEL and synchrotron experiments. X-ray free-electron lasers (XFELs) promise to enable the collection of interpretable diffraction data from samples that are refractory to data collection at synchrotron sources. At present, however, more efficient sample-delivery methods that minimize the consumption of microcrystalline material are needed to allow the application of XFEL sources to a wide range of challenging structural targets of biological importance. Here, a microfluidic chip is presented in which microcrystals can be captured at fixed, addressable points in a trap array from a small volume (<10 µl) of a pre-existing slurry grown off-chip. The device can be mounted on a standard goniostat for conducting diffraction experiments at room temperature without the need for flash-cooling. Proof-of-principle tests with a model system (hen egg-white lysozyme) demonstrated the high efficiency of the microfluidic approach for crystal harvesting, with only 265 single-crystal still images providing sufficient data to determine and refine the structure of the protein. This work shows that microfluidic capture devices can be readily used to facilitate data collection from protein microcrystals grown in traditional laboratory formats, enabling analysis when cryopreservation is problematic or when only small numbers of crystals are available. Such microfluidic capture devices may also be useful for data collection at synchrotron sources.
Daugherty, Bethany L; Schap, TusaRebecca E; Ettienne-Gittens, Reynolette; Zhu, Fengqing M; Bosch, Marc; Delp, Edward J; Ebert, David S; Kerr, Deborah A; Boushey, Carol J
2012-04-13
The development of a mobile telephone food record has the potential to ameliorate much of the burden associated with current methods of dietary assessment. When using the mobile telephone food record, respondents capture an image of their foods and beverages before and after eating. Methods of image analysis and volume estimation allow for automatic identification and volume estimation of foods. To obtain a suitable image, all foods and beverages and a fiducial marker must be included in the image. Our objectives were to evaluate a defined set of skills among adolescents and adults when using the mobile telephone food record to capture images, and to compare the perceptions and preferences of adults and adolescents regarding their use of the mobile telephone food record. We recruited 135 volunteers (78 adolescents, 57 adults) to use the mobile telephone food record for one or two meals under controlled conditions. Volunteers received instruction for using the mobile telephone food record prior to their first meal, captured images of foods and beverages before and after eating, and participated in a feedback session. We used chi-square for comparisons of the set of skills, preferences, and perceptions between the adults and adolescents, and the McNemar test for comparisons within the adolescents and adults. Adults were more likely than adolescents to include all foods and beverages in the before and after images, but both age groups had difficulty including the entire fiducial marker. Compared with adolescents, significantly more adults had to capture more than one image before (38% vs 58%, P = .03) and after (25% vs 50%, P = .008) meal session 1 to obtain a suitable image. Despite being less efficient when using the mobile telephone food record, adults were more likely than adolescents to perceive remembering to capture images as easy (P < .001).
A majority of both age groups were able to follow the defined set of skills; however, adults were less efficient when using the mobile telephone food record. Additional interactive training will likely be necessary for all users to provide extra practice in capturing images before entering a free-living situation. These results will inform age-specific development of the mobile telephone food record that may translate to a more accurate method of dietary assessment.
Matsushima, Kyoji
2008-07-01
Rotational transformation based on coordinate rotation in Fourier space is a useful technique for simulating wave field propagation between nonparallel planes. This technique is characterized by fast computation because the transformation only requires executing a fast Fourier transform twice and a single interpolation. It is proved that the formula of the rotational transformation mathematically satisfies the Helmholtz equation. Moreover, to verify the formulation and its usefulness in wave optics, it is also demonstrated that the transformation makes it possible to reconstruct an image on arbitrarily tilted planes from a wave field captured experimentally by using digital holography.
Method and apparatus to monitor a beam of ionizing radiation
Blackburn, Brandon W.; Chichester, David L.; Watson, Scott M.; Johnson, James T.; Kinlaw, Mathew T.
2015-06-02
Methods and apparatus to capture images of fluorescence generated by ionizing radiation and determine a position of a beam of ionizing radiation generating the fluorescence from the captured images. In one embodiment, the fluorescence is the result of ionization and recombination of nitrogen in air.
ERIC Educational Resources Information Center
Mathematics Teacher, 2004
2004-01-01
Some inexpensive or free ways to capture and use images in one's work are described. The first tip demonstrates methods that use the built-in capabilities of the Macintosh and Windows-based PC operating systems, and the second tip describes methods for capturing and creating images using SnagIt.
2016-06-25
The equipment used in this procedure includes an Ann Arbor distortion tester with a 50-line grating reticule and an IQeye 720 digital video camera with a 12... In order to digitally capture images of the distortion in an optical sample, the IQeye 720 video camera with a 12... was used with the Ann Arbor distortion tester, and the images were imported into MATLAB. (Figure 8: computer interface for capturing images seen by the IQeye 720 camera.) Once an image was
Jiang, Chao; Zhang, Hongyan; Wang, Jia; Wang, Yaru; He, Heng; Liu, Rui; Zhou, Fangyuan; Deng, Jialiang; Li, Pengcheng; Luo, Qingming
2011-11-01
Laser speckle imaging (LSI) is a noninvasive and full-field optical imaging technique which produces two-dimensional blood flow maps of tissues from the raw laser speckle images captured by a CCD camera without scanning. We present a hardware-friendly algorithm for the real-time processing of laser speckle imaging. The algorithm is developed and optimized specifically for LSI processing in the field programmable gate array (FPGA). Based on this algorithm, we designed a dedicated hardware processor for real-time LSI in FPGA. The pipeline processing scheme and parallel computing architecture are introduced into the design of this LSI hardware processor. When the LSI hardware processor is implemented in the FPGA running at the maximum frequency of 130 MHz, up to 85 raw images with the resolution of 640×480 pixels can be processed per second. Meanwhile, we also present a system on chip (SOC) solution for LSI processing by integrating the CCD controller, memory controller, LSI hardware processor, and LCD display controller into a single FPGA chip. This SOC solution also can be used to produce an application specific integrated circuit for LSI processing.
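The per-pixel quantity an LSI processor like the one above computes is the spatial speckle contrast, K = σ/μ over a small sliding window. Below is a minimal NumPy sketch of that computation, assuming a 7×7 window and synthetic speckle frames; the FPGA pipeline and parallel architecture themselves are not reproduced:

```python
import numpy as np

def speckle_contrast(raw, win=7):
    """Spatial laser speckle contrast: K = std/mean over a sliding
    win x win window around each pixel of a raw speckle frame."""
    pad = win // 2
    padded = np.pad(raw.astype(float), pad, mode='reflect')
    # All win x win windows as a (H, W, win, win) view, then per-window stats.
    windows = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    mean = windows.mean(axis=(-1, -2))
    std = windows.std(axis=(-1, -2))
    return std / np.maximum(mean, 1e-12)

# Illustrative frame: left half is static speckle (contrast near 1);
# right half emulates flow by averaging 16 independent speckle realizations
# during the exposure, which lowers the contrast.
rng = np.random.default_rng(2)
static = rng.exponential(1.0, (64, 64))
flow = np.mean([rng.exponential(1.0, (64, 64)) for _ in range(16)], axis=0)
frame = np.hstack([static, flow])
K = speckle_contrast(frame)
k_static = K[:, :56].mean()                      # away from the half boundary
k_flow = K[:, 72:].mean()
```

Lower contrast in the "flow" half is what lets the contrast map be read as a two-dimensional blood flow map.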
Deep learning application: rubbish classification with aid of an android device
NASA Astrophysics Data System (ADS)
Liu, Sijiang; Jiang, Bo; Zhan, Jie
2017-06-01
Deep learning is currently a very hot topic in pattern recognition and artificial intelligence research. Aiming at the practical problem that people often do not know which category a given piece of rubbish belongs to, and building on the powerful image classification ability of deep learning methods, we have designed a prototype system to help users classify rubbish. First, the CaffeNet model was adopted for training our classification network on the ImageNet dataset, and the trained network was deployed on a web server. Second, an Android app was developed for users to capture images of unclassified rubbish, upload them to the web server for analysis, and retrieve the feedback, so that users can conveniently obtain classification guidance on an Android device. Tests on our prototype system show that an image of a single type of rubbish in its original shape can be classified reliably, while an image containing several kinds of rubbish, or rubbish with a changed shape, may fail to yield a correct classification. Nevertheless, the system shows promising auxiliary value for rubbish classification if the network training strategy is optimized further.
Using local correlation tracking to recover solar spectral information from a slitless spectrograph
NASA Astrophysics Data System (ADS)
Courrier, Hans T.; Kankelborg, Charles C.
2018-01-01
The Multi-Order Solar EUV Spectrograph (MOSES) is a sounding rocket instrument that utilizes a concave spherical diffraction grating to form simultaneous images in the diffraction orders m=0, +1, and -1. MOSES is designed to capture high-resolution cotemporal spectral and spatial information of solar features over a large two-dimensional field of view. Our goal is to estimate the Doppler shift as a function of position for every MOSES exposure. Since the instrument is designed to operate without an entrance slit, this requires disentangling overlapping spectral and spatial information in the m=±1 images. Dispersion in these images leads to a field-dependent displacement that is proportional to Doppler shift. We identify these Doppler shift-induced displacements for the single bright emission line in the instrument passband by comparing images from each spectral order. We demonstrate the use of local correlation tracking as a means to quantify these differences between a pair of cotemporal image orders. The resulting vector displacement field is interpreted as a measurement of the Doppler shift. Since three image orders are available, we generate three Doppler maps from each exposure. These may be compared to produce an error estimate.
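The local correlation tracking step, which quantifies the field-dependent displacement between a pair of image orders, can be sketched as a window-by-window search for the shift that maximizes normalized correlation. The window size, search range, and test texture below are illustrative assumptions, not MOSES parameters:

```python
import numpy as np

def local_shift(ref, img, cy, cx, win=16, search=4):
    """Find the integer (dy, dx) displacement of the window centered at
    (cy, cx) that maximizes normalized correlation between ref and img
    (one step of a local correlation tracking pass)."""
    h = win // 2
    tpl = ref[cy - h:cy + h, cx - h:cx + h]
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img[cy - h + dy:cy + h + dy, cx - h + dx:cx + h + dx]
            # Zero-mean normalized correlation score (1.0 = perfect match).
            score = np.sum((tpl - tpl.mean()) * (cand - cand.mean()))
            score /= (tpl.std() * cand.std() * tpl.size + 1e-12)
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx

# Shift a random texture by two pixels along one axis and recover the
# displacement; in MOSES, such a displacement along the dispersion axis
# is proportional to the Doppler shift.
rng = np.random.default_rng(3)
ref = rng.random((64, 64))
img = np.roll(ref, 2, axis=1)                    # known (0, +2) pixel shift
dy, dx = local_shift(ref, img, 32, 32)
```

Repeating the search over a grid of window centers yields the vector displacement field that the abstract interprets as a Doppler map.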
Multi-Contrast Imaging and Digital Refocusing on a Mobile Microscope with a Domed LED Array.
Phillips, Zachary F; D'Ambrosio, Michael V; Tian, Lei; Rulison, Jared J; Patel, Hurshal S; Sadras, Nitin; Gande, Aditya V; Switz, Neil A; Fletcher, Daniel A; Waller, Laura
2015-01-01
We demonstrate the design and application of an add-on device for improving the diagnostic and research capabilities of CellScope--a low-cost, smartphone-based point-of-care microscope. We replace the single LED illumination of the original CellScope with a programmable domed LED array. By leveraging recent advances in computational illumination, this new device enables simultaneous multi-contrast imaging with brightfield, darkfield, and phase imaging modes. Further, we scan through illumination angles to capture lightfield datasets, which can be used to recover 3D intensity and phase images without any hardware changes. This digital refocusing procedure can be used for either 3D imaging or software-only focus correction, reducing the need for precise mechanical focusing during field experiments. All acquisition and processing is performed on the mobile phone and controlled through a smartphone application, making the computational microscope compact and portable. Using multiple samples and different objective magnifications, we demonstrate that the performance of our device is comparable to that of a commercial microscope. This unique device platform extends the field imaging capabilities of CellScope, opening up new clinical and research possibilities.
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
Imaging samples in silica aerogel using an experimental point spread function.
White, Amanda J; Ebel, Denton S
2015-02-01
Light microscopy is a powerful tool that allows for many types of samples to be examined in a rapid, easy, and nondestructive manner. Subsequent image analysis, however, is compromised by distortion of signal by instrument optics. Deconvolution of images prior to analysis allows for the recovery of lost information by procedures that utilize either a theoretically or experimentally calculated point spread function (PSF). Using a laser scanning confocal microscope (LSCM), we have imaged whole impact tracks of comet particles captured in silica aerogel, a low density, porous SiO2 solid, by the NASA Stardust mission. In order to understand the dynamical interactions between the particles and the aerogel, precise grain location and track volume measurement are required. We report a method for measuring an experimental PSF suitable for three-dimensional deconvolution of imaged particles in aerogel. Using fluorescent beads manufactured into Stardust flight-grade aerogel, we have applied a deconvolution technique standard in the biological sciences to confocal images of whole Stardust tracks. The incorporation of an experimentally measured PSF allows for better quantitative measurements of the size and location of single grains in aerogel and more accurate measurements of track morphology.
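Deconvolution with a measured PSF, as described above, is commonly implemented with the Richardson-Lucy algorithm. The sketch below is a minimal 2-D NumPy version with a synthetic Gaussian PSF standing in for the experimentally measured one; it illustrates the technique, not the authors' exact pipeline:

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=30):
    """Richardson-Lucy deconvolution via FFT-based convolution.
    psf plays the role of the (experimentally measured) point spread
    function, centered in its array."""
    psf = psf / psf.sum()
    otf = np.fft.rfft2(np.fft.ifftshift(psf), s=blurred.shape)
    est = np.full_like(blurred, blurred.mean())  # flat positive initial guess
    for _ in range(n_iter):
        conv = np.fft.irfft2(otf * np.fft.rfft2(est), s=blurred.shape)
        ratio = blurred / np.maximum(conv, 1e-12)
        # Correlation with the PSF (conjugate OTF) is the RL update step.
        est *= np.fft.irfft2(np.conj(otf) * np.fft.rfft2(ratio),
                             s=blurred.shape)
    return est

# Synthetic "grain": a single bright point blurred by a Gaussian PSF.
y, x = np.mgrid[:64, :64]
psf = np.exp(-((y - 32) ** 2 + (x - 32) ** 2) / (2 * 2.0 ** 2))
truth = np.zeros((64, 64))
truth[40, 20] = 1.0
otf0 = np.fft.rfft2(np.fft.ifftshift(psf / psf.sum()), s=truth.shape)
blurred = np.fft.irfft2(otf0 * np.fft.rfft2(truth), s=truth.shape)
restored = richardson_lucy(blurred, psf)
```

After a few tens of iterations the restored image concentrates the grain's energy back toward its true location, which is what sharpens size and position measurements in the aerogel tracks.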
Presence capture cameras - a new challenge to the image quality
NASA Astrophysics Data System (ADS)
Peltoketo, Veli-Tapani
2016-04-01
Commercial presence capture cameras are coming to market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras still face the same quality issues as previous generations of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system that can record 3D audio-visual reality as it is must have several camera modules, several microphones, and especially technology that can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features remain valid for presence capture cameras: color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the co-operation of several cameras adds a new dimension to these quality factors, and new quality features can be validated as well. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? This work describes the quality factors that remain valid for presence capture cameras and establishes their importance. Moreover, the new challenges of presence capture cameras are investigated from the image and video quality point of view, including consideration of how well current measurement methods can be applied to presence capture cameras.
NASA Astrophysics Data System (ADS)
Yoon, Soweon; Jung, Ho Gi; Park, Kang Ryoung; Kim, Jaihie
2009-03-01
Although iris recognition is one of the most accurate biometric technologies, it has not yet been widely used in practical applications. This is mainly due to user inconvenience during the image acquisition phase; specifically, users must adjust their eye position within a small capture volume at a close distance from the system. To overcome these problems, we propose a novel iris image acquisition system that provides users with an unconstrained environment: a large operating range, tolerance of movement from a standing posture, and capture of good-quality iris images in an acceptable time. The proposed system makes three contributions compared with previous work: (1) the capture volume is significantly increased by using a pan-tilt-zoom (PTZ) camera guided by light stripe projection, (2) the iris location in the large capture volume is found quickly through 1-D vertical face searching from the user's horizontal position obtained by the light stripe projection, and (3) zooming and focusing on the user's irises at a distance are accurate and fast, using the 3-D position of the face estimated by the light stripe projection and the PTZ camera. Experimental results show that the proposed system can capture good-quality iris images in 2.479 s on average at a distance of 1.5 to 3 m, while allowing a limited amount of movement by the user.
Aflague, Tanisha F; Boushey, Carol J; Guerrero, Rachael T Leon; Ahmad, Ziad; Kerr, Deborah A; Delp, Edward J
2015-06-02
Children's readiness to use technology supports the idea of children using mobile applications for dietary assessment. Our goal was to determine whether children aged 3-10 years could successfully use the mobile food record (mFR) to capture a usable image pair or pairs. Children in Sample 1 were tasked to use the mFR to capture an image pair of one eating occasion while attending summer camp. For Sample 2, children were tasked to record all eating occasions for two consecutive days at two time periods that were two to four weeks apart. Trained analysts evaluated the images. In Sample 1, 90% (57/63) captured one usable image pair, and all children (63/63) returned the mFR undamaged. Of the 62 children who gave feedback, 89% reported that the mFR was easy to use, 87% were willing to use the mFR again, and 94% found the fiducial marker easy to manage. Children in Sample 2 used the mFR at least one day at Time 1 (59/63, 94%), at Time 2 (49/63, 78%), and at both times (47/63, 75%). This latter group captured a mean (± SD) of 6.21 ± 4.65 and 5.65 ± 3.26 image pairs at Time 1 and Time 2, respectively. The results support the potential for children to independently record dietary intakes using the mFR.
Hu, Ying S; Zhu, Quan; Elkins, Keri; Tse, Kevin; Li, Yu; Fitzpatrick, James A J; Verma, Inder M; Cang, Hu
2013-01-01
Heterochromatin in the nucleus of human embryonic cells plays an important role in the epigenetic regulation of gene expression. The architecture of heterochromatin and its dynamic organization remain elusive because of the lack of fast and high-resolution deep-cell imaging tools. We enable this task by advancing instrumental and algorithmic implementation of the localization-based super-resolution technique. We present light-sheet Bayesian super-resolution microscopy (LSBM). We adapt light-sheet illumination for super-resolution imaging by using a novel prism-coupled condenser design to illuminate a thin slice of the nucleus with high signal-to-noise ratio. Coupled with a Bayesian algorithm that resolves overlapping fluorophores from high-density areas, we show, for the first time, nanoscopic features of the heterochromatin structure in both fixed and live human embryonic stem cells. The enhanced temporal resolution allows capturing the dynamic change of heterochromatin with a lateral resolution of 50-60 nm on a time scale of 2.3 s. Light-sheet Bayesian microscopy opens up broad new possibilities of probing nanometer-scale nuclear structures and real-time sub-cellular processes and other previously difficult-to-access intracellular regions of living cells at the single-molecule, and single cell level.
A two-magnet strategy for improved mixing and capture from biofluids
Doyle, Andrew B.; Haselton, Frederick R.
2016-01-01
Magnetic beads are a popular method for concentrating biomolecules from solution and have been more recently used in multistep pre-arrayed microfluidic cartridges. Typical processing strategies rely on a single magnet, resulting in a tight cluster of beads and requiring long incubation times to achieve high capture efficiencies, especially in highly viscous patient samples. This report describes a two-magnet strategy to improve the interaction of the bead surface with the surrounding fluid inside of a pre-arrayed, self-contained assay-in-a-tube. In the two-magnet system, target biomarker capture occurs at a rate three times faster than the single-magnet system. In clinically relevant biomatrices, we find a 2.5-fold improvement in biomarker capture at lower sample viscosities with the two-magnet system. In addition, we observe a 20% increase in the amount of protein captured at high viscosity for the two-magnet configuration relative to the single magnet approach. The two-magnet approach offers a means to achieve higher biomolecule extraction yields and shorter assay times in magnetic capture assays and in self-contained processor designs. PMID:27158286
Retinal Image Quality Assessment for Spaceflight-Induced Vision Impairment Study
NASA Technical Reports Server (NTRS)
Vu, Amanda Cadao; Raghunandan, Sneha; Vyas, Ruchi; Radhakrishnan, Krishnan; Taibbi, Giovanni; Vizzeri, Gianmarco; Grant, Maria; Chalam, Kakarla; Parsons-Wingerter, Patricia
2015-01-01
Long-term exposure to space microgravity poses significant risks for visual impairment. Evidence suggests such vision changes are linked to cephalad fluid shifts, prompting a need to directly quantify microgravity-induced retinal vascular changes. The quality of retinal images used for such vascular remodeling analysis, however, is dependent on imaging methodology. For our exploratory study, we hypothesized that retinal images captured using fluorescein imaging methodologies would be of higher quality in comparison to images captured without fluorescein. A semi-automated image quality assessment was developed using Vessel Generation Analysis (VESGEN) software and MATLAB® image analysis toolboxes. An analysis of ten images found that the fluorescein imaging modality provided a 36% increase in overall image quality (two-tailed p=0.089) in comparison to nonfluorescein imaging techniques.
Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images
Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong
2015-01-01
In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages. First, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and infrared (IR) camera; the use of low-resolution images results in a smaller database, and because the vein pattern images are captured through the invisible IR spectrum, antispoofing is improved. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands; the proposed method fuses decomposition coefficients at different decomposed levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically, so no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%, demonstrating the validity and excellent performance of the proposed method compared with other methods. PMID:26703596
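Step (2), wavelet-domain image fusion, can be sketched with a single-level Haar transform. The fusion rule below (average the approximation band, keep the larger-magnitude detail coefficients) is one common choice and not necessarily the paper's hybrid rule; the inputs are random stand-ins for registered palmprint and vein images:

```python
import numpy as np

def haar2(img):
    """Single-level 2-D Haar DWT -> (LL, LH, HL, HH) subbands."""
    a = (img[0::2] + img[1::2]) / 2.0            # row-pair averages
    d = (img[0::2] - img[1::2]) / 2.0            # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse(img1, img2):
    """Fuse two registered images: average the approximation band and
    keep the larger-magnitude detail coefficients from either image."""
    b1, b2 = haar2(img1), haar2(img2)
    ll = (b1[0] + b2[0]) / 2.0
    details = [np.where(np.abs(c1) >= np.abs(c2), c1, c2)
               for c1, c2 in zip(b1[1:], b2[1:])]
    return ihaar2(ll, *details)

rng = np.random.default_rng(4)
palm = rng.random((32, 32))                      # stand-in palmprint image
vein = rng.random((32, 32))                      # stand-in vein pattern image
fused = fuse(palm, vein)
```

Keeping the stronger detail coefficients preserves line-like features (ridges and veins) from both modalities in the fused image, which is what step (3) then extracts.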
Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images.
Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong
2015-12-12
In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the discrete wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages. First, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and an infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images differ, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposition levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically, so no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%, demonstrating the validity and excellent performance of the proposed method compared with other methods.
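The wavelet-based fusion in step (2) can be illustrated with a single-level 2-D Haar transform: the low-frequency approximation bands are averaged, while the larger-magnitude detail coefficients are kept from either modality. This is a minimal numpy-only sketch of one plausible fusing rule, not the authors' exact hybrid rule; `haar2`, `ihaar2` and `fuse` are illustrative names.

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform (orthonormal); expects even dimensions."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)  # row-wise averages
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)  # row-wise differences
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2)      # approximation band
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)      # horizontal details
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2)      # vertical details
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)      # diagonal details
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    lo[0::2] = (ll + lh) / np.sqrt(2)
    lo[1::2] = (ll - lh) / np.sqrt(2)
    hi = np.empty_like(lo)
    hi[0::2] = (hl + hh) / np.sqrt(2)
    hi[1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2] = (lo + hi) / np.sqrt(2)
    x[:, 1::2] = (lo - hi) / np.sqrt(2)
    return x

def fuse(a, b):
    """Average the approximation bands; keep max-magnitude detail coefficients."""
    A, B = haar2(a.astype(float)), haar2(b.astype(float))
    pick = lambda u, v: np.where(np.abs(u) >= np.abs(v), u, v)
    ll = 0.5 * (A[0] + B[0])
    return ihaar2(ll, *(pick(u, v) for u, v in zip(A[1:], B[1:])))
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check of the transform's invertibility.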
Peña, José M; Torres-Sánchez, Jorge; Serrano-Pérez, Angélica; de Castro, Ana I; López-Granados, Francisca
2015-03-06
In order to optimize the application of herbicides in weed-crop systems, accurate and timely weed maps of the crop-field are required. In this context, this investigation quantified the efficacy and limitations of remote images collected with an unmanned aerial vehicle (UAV) for early detection of weed seedlings. The ability to discriminate weeds was significantly affected by the imagery spectral (type of camera), spatial (flight altitude) and temporal (the date of the study) resolutions. The colour-infrared images captured at 40 m and 50 days after sowing (date 2), when plants had 5-6 true leaves, had the highest weed detection accuracy (up to 91%). At this flight altitude, the images captured before date 2 had slightly better results than the images captured later. However, this trend changed in the visible-light images captured at 60 m and higher, which had notably better results on date 3 (57 days after sowing) because of the larger size of the weed plants. Our results showed the requirements on spectral and spatial resolutions needed to generate a suitable weed map early in the growing season, as well as the best moment for the UAV image acquisition, with the ultimate objective of applying site-specific weed management operations.
Historic Methods for Capturing Magnetic Field Images
ERIC Educational Resources Information Center
Kwan, Alistair
2016-01-01
I investigated two late 19th-century methods for capturing magnetic field images from iron filings for historical insight into the pedagogy of hands-on physics education methods, and to flesh out teaching and learning practicalities tacit in the historical record. Both methods offer opportunities for close sensory engagement in data-collection…
A design of real time image capturing and processing system using Texas Instrument's processor
NASA Astrophysics Data System (ADS)
Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng
2007-09-01
In this work, we developed and implemented an image capturing and processing system equipped with the capability of capturing images from an input video in real time. The input video can come from a PC, a video camcorder or a DVD player. We developed two modes of operation. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and displayed on the PC. In the second mode, the current captured image from the video camcorder (or DVD player) is processed on the board but displayed on an LCD monitor. The major difference between our system and existing conventional systems is that the image-processing functions are performed on the board instead of the PC, so that they can be used for further developments on the board. The user controls the operations of the board through a Graphical User Interface (GUI) on the PC. To achieve smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX TM) technology to create a link between them. For image processing, we developed three main groups of functions: (1) Point Processing; (2) Filtering; and (3) Others. Point Processing includes rotation, negation and mirroring. The Filtering category provides median, adaptive, smoothing and sharpening filters in the spatial domain. The Others category provides auto-contrast adjustment, edge detection, segmentation and sepia color; these functions either add an effect to the image or enhance it. We developed and implemented our system using the C/C# programming languages on the TMS320DM642 (DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. The results demonstrate that our system is adequate for real-time image capturing. It can also be applied to areas such as medical imaging and video surveillance.
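The Point Processing and Filtering groups described above correspond to simple array operations. A minimal Python/numpy sketch of the underlying arithmetic (the TI implementation is in C/C# on the DSP; function names here are illustrative):

```python
import numpy as np

def negate(img):
    """Point processing: invert 8-bit intensities (image negation)."""
    return 255 - img

def mirror(img):
    """Point processing: horizontal mirroring."""
    return img[:, ::-1]

def rotate90(img):
    """Point processing: rotate 90 degrees counter-clockwise."""
    return np.rot90(img)

def median_filter(img, k=3):
    """Filtering: k x k median filter over an edge-padded copy of the image."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

On the DM642 these operations would run on the DSP core pixel by pixel; the sketch only illustrates the arithmetic, not the board-side implementation.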
The UBIRIS.v2: a database of visible wavelength iris images captured on-the-move and at-a-distance.
Proença, Hugo; Filipe, Sílvio; Santos, Ricardo; Oliveira, João; Alexandre, Luís A
2010-08-01
The iris is regarded as one of the most useful traits for biometric recognition, and the dissemination of nationwide iris-based recognition systems is imminent. However, currently deployed systems rely on heavy imaging constraints to capture near-infrared images of sufficient quality. Moreover, all publicly available iris image databases contain data corresponding to such imaging constraints and are therefore suitable only for evaluating methods designed to operate in this type of environment. The main purpose of this paper is to announce the availability of the UBIRIS.v2 database, a multisession iris image database which singularly contains data captured in the visible wavelength, at-a-distance (between four and eight meters) and on-the-move. This database is freely available to researchers concerned with visible wavelength iris recognition and will be useful in assessing the feasibility and specifying the constraints of this type of biometric recognition.
Pollen, Alex A; Nowakowski, Tomasz J; Shuga, Joe; Wang, Xiaohui; Leyrat, Anne A; Lui, Jan H; Li, Nianzhen; Szpankowski, Lukasz; Fowler, Brian; Chen, Peilin; Ramalingam, Naveen; Sun, Gang; Thu, Myo; Norris, Michael; Lebofsky, Ronald; Toppani, Dominique; Kemp, Darnell W; Wong, Michael; Clerkson, Barry; Jones, Brittnee N; Wu, Shiquan; Knutsson, Lawrence; Alvarado, Beatriz; Wang, Jing; Weaver, Lesley S; May, Andrew P; Jones, Robert C; Unger, Marc A; Kriegstein, Arnold R; West, Jay A A
2014-10-01
Large-scale surveys of single-cell gene expression have the potential to reveal rare cell populations and lineage relationships but require efficient methods for cell capture and mRNA sequencing. Although cellular barcoding strategies allow parallel sequencing of single cells at ultra-low depths, the limitations of shallow sequencing have not been investigated directly. By capturing 301 single cells from 11 populations using microfluidics and analyzing single-cell transcriptomes across downsampled sequencing depths, we demonstrate that shallow single-cell mRNA sequencing (~50,000 reads per cell) is sufficient for unbiased cell-type classification and biomarker identification. In the developing cortex, we identify diverse cell types, including multiple progenitor and neuronal subtypes, and we identify EGR1 and FOS as previously unreported candidate targets of Notch signaling in human but not mouse radial glia. Our strategy establishes an efficient method for unbiased analysis and comparison of cell populations from heterogeneous tissue by microfluidic single-cell capture and low-coverage sequencing of many cells.
N-Way FRET Microscopy of Multiple Protein-Protein Interactions in Live Cells
Hoppe, Adam D.; Scott, Brandon L.; Welliver, Timothy P.; Straight, Samuel W.; Swanson, Joel A.
2013-01-01
Fluorescence Resonance Energy Transfer (FRET) microscopy has emerged as a powerful tool to visualize nanoscale protein-protein interactions while capturing their microscale organization and millisecond dynamics. Recently, FRET microscopy was extended to imaging of multiple donor-acceptor pairs, thereby enabling visualization of multiple biochemical events within a single living cell. These methods require numerous equations that must be defined on a case-by-case basis. Here, we present a universal multispectral microscopy method (N-Way FRET) to enable quantitative imaging for any number of interacting and non-interacting FRET pairs. This approach redefines linear unmixing to incorporate the excitation and emission couplings created by FRET, which cannot be accounted for in conventional linear unmixing. Experiments on a three-fluorophore system using blue, yellow and red fluorescent proteins validate the method in living cells. In addition, we propose a simple linear algebra scheme for error propagation from input data to estimate the uncertainty in the computed FRET images. We demonstrate the strength of this approach by monitoring the oligomerization of three FP-tagged HIV Gag proteins whose tight association in the viral capsid is readily observed. Replacement of one FP-Gag molecule with a lipid raft-targeted FP allowed direct observation of Gag oligomerization with no association between FP-Gag and raft-targeted FP. The N-Way FRET method provides a new toolbox for capturing multiple molecular processes with high spatial and temporal resolution in living cells. PMID:23762252
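The linear-unmixing core of multispectral FRET methods can be illustrated with ordinary least squares: given the reference emission spectrum of each fluorophore as a column of a matrix, per-pixel abundances follow from an overdetermined linear solve. This sketch deliberately omits the FRET excitation/emission coupling terms that distinguish N-Way FRET from conventional unmixing; `unmix` is an illustrative name.

```python
import numpy as np

def unmix(measured, signatures):
    """Conventional linear unmixing by least squares: solve
    signatures @ abundances ~= measured for per-species abundances.
    `signatures` holds one reference spectrum per column; `measured`
    is the multispectral intensity vector for one pixel."""
    abundances, *_ = np.linalg.lstsq(signatures, measured, rcond=None)
    return abundances
```

For noiseless data generated from the reference spectra, the solve recovers the abundances exactly; with noise, the same linear-algebra machinery supports the error-propagation scheme the abstract mentions.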
NASA Astrophysics Data System (ADS)
Shah, S. M.; Crawshaw, J. P.; Gray, F.; Yang, J.; Boek, E. S.
2017-06-01
In the last decade, the study of fluid flow in porous media has developed considerably due to the combination of X-ray Micro Computed Tomography (micro-CT) and advances in computational methods for solving complex fluid flow equations directly or indirectly on reconstructed three-dimensional pore space images. In this study, we calculate porosity and single phase permeability using micro-CT imaging and Lattice Boltzmann (LB) simulations for 8 different porous media: beadpacks (with bead sizes 50 μm and 350 μm), sandpacks (LV60 and HST95), sandstones (Berea, Clashach and Doddington) and a carbonate (Ketton). Combining the observed porosity and calculated single phase permeability, we shed new light on the existence and size of the Representative Element of Volume (REV) capturing the different scales of heterogeneity from the pore-scale imaging. Our study applies the concept of the 'Convex Hull' to calculate the REV by considering the two main macroscopic petrophysical parameters, porosity and single phase permeability, simultaneously. The shape of the hull can be used to identify strong correlation between the parameters or greatly differing convergence rates. To further enhance computational efficiency we note that the area of the convex hull (for well-chosen parameters such as the log of the permeability and the porosity) decays exponentially with sub-sample size so that only a few small simulations are needed to determine the system size needed to calculate the parameters to high accuracy (small convex hull area). Finally we propose using a characteristic length such as the pore size to choose an efficient absolute voxel size for the numerical rock.
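The convex-hull criterion can be reproduced with scipy: gather (porosity, log-permeability) pairs computed on repeated sub-samples of one size and measure the area of their convex hull; the area should decay roughly exponentially as the sub-sample size approaches the REV. A sketch assuming the per-sub-sample values are already available (the function name is illustrative):

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_area(porosities, log_perms):
    """Area of the convex hull of (porosity, log-permeability) points
    from repeated sub-samples at one sub-sample size; a small area
    indicates convergence toward a representative elementary volume."""
    pts = np.column_stack([porosities, log_perms])
    return ConvexHull(pts).volume  # for 2-D input, .volume is the enclosed area
```

Repeating this over increasing sub-sample sizes and fitting an exponential decay gives the system size needed for a target accuracy, as the abstract describes.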
Portable LED-induced autofluorescence imager with a probe of L shape for oral cancer diagnosis
NASA Astrophysics Data System (ADS)
Huang, Ting-Wei; Lee, Yu-Cheng; Cheng, Nai-Lun; Yan, Yung-Jhe; Chiang, Hou-Chi; Chiou, Jin-Chern; Mang, Ou-Yang
2015-08-01
The difference in spectral distribution between the excited fluorescence of epithelial lesions and that of normal cells is one method for cancer diagnosis. In our previous work, we developed a portable LED-induced autofluorescence (LIAF) imager containing multiple wavelengths of LED excitation light and multiple filters to capture ex-vivo oral tissue autofluorescence images. Our portable system for the detection of oral cancer has a probe in front of the lens that fixes the object distance. This probe is conical, which makes it inconvenient for doctors to capture oral images at an appropriate viewing angle, and subjects must open their mouths uncomfortably wide. Therefore, we propose an L-shaped probe containing a mirror, which allows doctors to capture images at right angles without requiring subjects to strain. In addition, a glass plate is placed in the probe to prevent liquid from entering the device, but light reflected directly from the glass plate causes light spots in the images. We therefore set the glass plate in front of the LEDs to avoid the light spots: when the distance between the glass plate and the LED module plane is less than a critical value, the light spots caused by the glass plate are prevented. Experiments show that images captured with the new probe, in which the glass plate is placed at the back end, contain no light spots.
Design and performance of single photon APD focal plane arrays for 3-D LADAR imaging
NASA Astrophysics Data System (ADS)
Itzler, Mark A.; Entwistle, Mark; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir; Zalud, Peter F.; Senko, Tom; Tower, John; Ferraro, Joseph
2010-08-01
We describe the design, fabrication, and performance of focal plane arrays (FPAs) for use in 3-D LADAR imaging applications requiring single photon sensitivity. These 32 × 32 FPAs provide high-efficiency single photon sensitivity for three-dimensional LADAR imaging applications at 1064 nm. Our GmAPD arrays are designed using a planar-passivated avalanche photodiode device platform with buried p-n junctions that has demonstrated excellent performance uniformity, operational stability, and long-term reliability. The core of the FPA is a chip stack formed by hybridizing the GmAPD photodiode array to a custom CMOS read-out integrated circuit (ROIC) and attaching a precision-aligned GaP microlens array (MLA) to the back-illuminated detector array. Each ROIC pixel includes an active quenching circuit governing Geiger-mode operation of the corresponding avalanche photodiode pixel as well as a pseudo-random counter to capture per-pixel time-of-flight timestamps in each frame. The FPA has been designed to operate at frame rates as high as 186 kHz for 2 μs range gates. Effective single photon detection efficiencies as high as 40% (including all optical transmission and MLA losses) are achieved for dark count rates below 20 kHz. For these planar-geometry diffused-junction GmAPDs, isolation trenches are used to reduce crosstalk due to hot carrier luminescence effects during avalanche events, and we present details of the crosstalk performance for different operating conditions. Direct measurement of temporal probability distribution functions due to cumulative timing uncertainties of the GmAPDs and ROIC circuitry has demonstrated a FWHM timing jitter as low as 265 ps (standard deviation is ~100 ps).
NASA Astrophysics Data System (ADS)
Lowrance, John L.; Mastrocola, V. J.; Renda, George F.; Swain, Pradyumna K.; Kabra, R.; Bhaskaran, Mahalingham; Tower, John R.; Levine, Peter A.
2004-02-01
This paper describes the architecture, process technology, and performance of a family of high burst rate CCDs. These imagers employ high-speed, low-lag photo-detectors with local storage at each photo-detector to achieve image capture at rates greater than 10^6 frames per second. One imager has a 64 x 64 pixel array with 12 frames of storage. A second imager has an 80 x 160 array with 28 frames of storage, and the third imager has a 64 x 64 pixel array with 300 frames of storage. Application areas include capture of rapid mechanical motion, optical wavefront sensing, fluid cavitation research, combustion studies, plasma research and wind-tunnel-based gas dynamics research.
Comparison of three-dimensional surface-imaging systems.
Tzou, Chieh-Han John; Artner, Nicole M; Pona, Igor; Hold, Alina; Placheta, Eva; Kropatsch, Walter G; Frey, Manfred
2014-04-01
In recent decades, three-dimensional (3D) surface-imaging technologies have gained popularity worldwide, but because most published articles that mention them are technical, clinicians often have difficulty gaining a proper understanding of them. This article aims to provide the reader with relevant information on 3D surface-imaging systems. In it, we compare the most recent technologies to reveal their differences. We have accessed five international companies with the latest technologies in 3D surface-imaging systems: 3dMD, Axisthree, Canfield, Crisalix and Dimensional Imaging (Di3D; in alphabetical order). We evaluated their technical equipment, independent validation studies and corporate backgrounds. The fastest capturing devices are the 3dMD and Di3D systems, capable of capturing images within 1.5 and 1 ms, respectively. All companies provide software for tissue modifications. Additionally, 3dMD, Canfield and Di3D can fuse computed tomography (CT)/cone-beam computed tomography (CBCT) images into their 3D surface-imaging data. 3dMD and Di3D provide 4D capture systems, which allow capturing the movement of a 3D surface over time. Crisalix greatly differs from the other four systems as it is purely web based and realised via cloud computing. 3D surface-imaging systems are becoming important in today's plastic surgical set-ups, taking surgeons to a new level of communication with patients, surgical planning and outcome evaluation. The technologies used in 3D surface-imaging systems and their intended fields of application vary among the companies evaluated. Potential users should define their requirements and the intended role of 3D surface-imaging systems in their clinical and research environments before making the final decision to purchase. Copyright © 2014 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery.
Ratliff, Bradley M; LaCasse, Charles F; Tyo, J Scott
2009-05-25
Microgrid polarimeters are composed of an array of micro-polarizing elements overlaid upon an FPA sensor. In the past decade systems have been designed and built in all regions of the optical spectrum. These systems have rugged, compact designs and the ability to obtain a complete set of polarimetric measurements during a single image capture. However, these systems acquire the polarization measurements through spatial modulation and each measurement has a varying instantaneous field-of-view (IFOV). When these measurements are combined to estimate the polarization images, strong edge artifacts are present that severely degrade the estimated polarization imagery. These artifacts can be reduced when interpolation strategies are first applied to the intensity data prior to Stokes vector estimation. Here we formally study IFOV error and the performance of several bilinear interpolation strategies used for reducing it.
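One common family of interpolation strategies for microgrid data is bilinear demosaicking of each polarizer channel prior to Stokes estimation, which reduces the IFOV mismatch between the four spatially offset measurements. The sketch below assumes a hypothetical 2 x 2 superpixel layout (0°/45° on the top row, 135°/90° below) and is not the specific scheme evaluated in the paper:

```python
import numpy as np

# Assumed (hypothetical) 2x2 microgrid layout, tiled across the FPA:
# 0 deg / 45 deg on the top row, 135 deg / 90 deg on the bottom row.
OFFSETS = {0: (0, 0), 45: (0, 1), 135: (1, 0), 90: (1, 1)}
# Bilinear kernel for a channel sampled on every other pixel in each axis.
KERNEL = np.array([[0.25, 0.5, 0.25],
                   [0.5,  1.0, 0.5],
                   [0.25, 0.5, 0.25]])

def interp_channel(raw, offset):
    """Fill in missing samples of one polarizer orientation by convolving
    its sparse samples with the bilinear kernel (edges are approximate)."""
    sparse = np.zeros_like(raw, dtype=float)
    sparse[offset[0]::2, offset[1]::2] = raw[offset[0]::2, offset[1]::2]
    pad = np.pad(sparse, 1)
    out = np.zeros_like(sparse)
    for dy in range(3):
        for dx in range(3):
            out += KERNEL[dy, dx] * pad[dy:dy + raw.shape[0], dx:dx + raw.shape[1]]
    return out

def stokes(raw):
    """Estimate linear Stokes images S0, S1, S2 from interpolated channels."""
    I = {a: interp_channel(raw, off) for a, off in OFFSETS.items()}
    s0 = 0.5 * (I[0] + I[90] + I[45] + I[135])
    s1 = I[0] - I[90]
    s2 = I[45] - I[135]
    return s0, s1, s2
```

For a uniform unpolarized scene the interpolated channels agree in the interior, so S1 and S2 vanish there; without interpolation, the raw IFOV offsets produce the strong edge artifacts the paper analyzes.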
Real-time Mesoscale Visualization of Dynamic Damage and Reaction in Energetic Materials under Impact
NASA Astrophysics Data System (ADS)
Chen, Wayne; Harr, Michael; Kerschen, Nicholas; Maris, Jesus; Guo, Zherui; Parab, Niranjan; Sun, Tao; Fezzaa, Kamel; Son, Steven
Energetic materials may be subjected to impact and vibration loading. Under these dynamic loadings, local stress or strain concentrations may lead to the formation of hot spots and unintended reaction. To visualize the dynamic damage and reaction processes in polymer bonded energetic crystals under dynamic compressive loading, a high speed X-ray phase contrast imaging setup was synchronized with a Kolsky bar and a light gas gun. Controlled compressive loading was applied to PBX specimens with single or multiple energetic crystal particles, and impact-induced damage and reaction processes were captured using the high speed X-ray imaging setup. Impact velocities were systematically varied to explore the critical conditions for reaction. At lower loading rates, ultrasonic excitations were also applied to progressively damage the crystals, eventually leading to reaction. AFOSR, ONR.
Biodegradable nano-films for capture and non-invasive release of circulating tumor cells.
Li, Wei; Reátegui, Eduardo; Park, Myoung-Hwan; Castleberry, Steven; Deng, Jason Z; Hsu, Bryan; Mayner, Sarah; Jensen, Anne E; Sequist, Lecia V; Maheswaran, Shyamala; Haber, Daniel A; Toner, Mehmet; Stott, Shannon L; Hammond, Paula T
2015-10-01
Selective isolation and purification of circulating tumor cells (CTCs) from whole blood is an important capability for both clinical medicine and biological research. Current techniques to perform this task place the isolated cells under excessive stresses that reduce cell viability, and potentially induce phenotype change, therefore losing valuable information about the isolated cells. We present a biodegradable nano-film coating on the surface of a microfluidic chip, which can be used to effectively capture as well as non-invasively release cancer cell lines such as PC-3, LNCaP, DU 145, H1650 and H1975. We have applied layer-by-layer (LbL) assembly to create a library of ultrathin coatings using a broad range of materials through complementary interactions. By developing an LbL nano-film coating with an affinity-based cell-capture surface that is capable of selectively isolating cancer cells from whole blood, and that can be rapidly degraded on command, we are able to gently isolate cancer cells and recover them without compromising cell viability or proliferative potential. Our approach has the capability to overcome practical hurdles and provide viable cancer cells for downstream analyses, such as live cell imaging, single cell genomics, and in vitro cell culture of recovered cells. Furthermore, CTCs from cancer patients were also captured, identified, and successfully released using the LbL-modified microchips. Published by Elsevier Ltd.
Capture and X-ray diffraction studies of protein microcrystals in a microfluidic trap array
Lyubimov, Artem Y.; Murray, Thomas D.; Koehl, Antoine; ...
2015-03-27
X-ray free-electron lasers (XFELs) promise to enable the collection of interpretable diffraction data from samples that are refractory to data collection at synchrotron sources. At present, however, more efficient sample-delivery methods that minimize the consumption of microcrystalline material are needed to allow the application of XFEL sources to a wide range of challenging structural targets of biological importance. Here, a microfluidic chip is presented in which microcrystals can be captured at fixed, addressable points in a trap array from a small volume (<10 µl) of a pre-existing slurry grown off-chip. The device can be mounted on a standard goniostat for conducting diffraction experiments at room temperature without the need for flash-cooling. Proof-of-principle tests with a model system (hen egg-white lysozyme) demonstrated the high efficiency of the microfluidic approach for crystal harvesting, permitting the collection of sufficient data from only 265 single-crystal still images to permit determination and refinement of the structure of the protein. This work shows that microfluidic capture devices can be readily used to facilitate data collection from protein microcrystals grown in traditional laboratory formats, enabling analysis when cryopreservation is problematic or when only small numbers of crystals are available. Such microfluidic capture devices may also be useful for data collection at synchrotron sources.
NASA Astrophysics Data System (ADS)
Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Van Esch, P.; Zeitelhack, K.
2012-08-01
A custom and fully interactive simulation package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations) has been developed to optimize the design and operation conditions of secondary scintillation Anger-camera type gaseous detectors for thermal neutron imaging. The simulation code accounts for all physical processes related to the neutron capture, energy deposition pattern, drift of electrons of the primary ionization and secondary scintillation. The photons are traced considering the wavelength-resolved refraction and transmission of the output window. Photo-detection accounts for the wavelength-resolved quantum efficiency, angular response, area sensitivity, gain and single-photoelectron spectra of the photomultipliers (PMTs). The package allows for several geometrical shapes of the PMT photocathode (round, hexagonal and square) and offers a flexible PMT array configuration: up to 100 PMTs in a custom arrangement with the square or hexagonal packing. Several read-out patterns of the PMT array are implemented. Reconstruction of the neutron capture position (projection on the plane of the light emission) is performed using the center of gravity, maximum likelihood or weighted least squares algorithm. Simulation results reproduce well the preliminary results obtained with a small-scale detector prototype. ANTS executables can be downloaded from http://coimbra.lip.pt/~andrei/.
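Of the three reconstruction algorithms the abstract names, the center-of-gravity method is the simplest: the projected capture position is estimated as the signal-weighted mean of the PMT center coordinates. A minimal sketch (function and variable names are illustrative, not from ANTS):

```python
import numpy as np

def center_of_gravity(pmt_centers, signals):
    """Anger-camera centroid: estimate the (x, y) projection of the
    neutron capture position as the signal-weighted mean of the PMT
    center coordinates."""
    centers = np.asarray(pmt_centers, dtype=float)  # shape (n_pmts, 2)
    w = np.asarray(signals, dtype=float)            # per-PMT amplitudes
    return (centers * w[:, None]).sum(axis=0) / w.sum()
```

The centroid estimate is biased near the detector edges, which is why the package also offers maximum-likelihood and weighted-least-squares reconstruction.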
Immunomagnetic Nano-Screening Chip for Circulating Tumor Cells Detection in Blood
NASA Astrophysics Data System (ADS)
Horton, A. P.; Lane, N.; Tam, J.; Sokolov, K.; Garner, H. R.; Uhr, J. W.; Zhang, X. J.
2010-03-01
We present a novel method for diagnosing cancer at an early stage via a blood test. Early diagnosis is high on the agenda of oncologists because of significant evidence that it will result in a higher cure rate. The capture of circulating tumor cells (CTCs), which are known to escape from carcinomas at an early stage, offers such an opportunity. We design, fabricate and optimize a nanomagnetic screening chip that captures CTCs in a microfluidic channel, and further integrate the nano-chip with a new multispectral imaging system so that it can quantify different tumor markers and automate the entire instrument. Specifically, hybrid plasmonic (Fe2O3-core, Au-shell) nanoparticles with high magnetic susceptibility, conjugated with a collection of antibodies chosen to target breast cancer CTCs, will be used for effective immunomagnetic CTC isolation. Greatly increased sensitivity over previous attempts is demonstrated by decreasing the length scale for interactions between the magnetic-nanoparticle-tagged CTCs and the isolating magnetic field, while increasing the effective cross-sectional area over which this interaction takes place. The screening chip is integrated with a novel hyperspectral microscopic imaging (HMI) platform capable of recording the entire emission spectrum in a single-pass evaluation. The combined system will precisely quantify up to 10 tumor markers on CTCs.
Himalia, a Small Moon of Jupiter
NASA Technical Reports Server (NTRS)
2001-01-01
NASA's Cassini spacecraft captured images of Himalia, the brightest of Jupiter's outer moons, on Dec. 19, 2000, from a distance of 4.4 million kilometers (2.7 million miles). This near-infrared image, with a resolution of about 27 kilometers (17 miles) per pixel, indicates that the side of Himalia facing the spacecraft is roughly 160 kilometers (100 miles) in the up-down direction. Himalia probably has a non-spherical shape. Scientists believe it is a body captured into orbit around Jupiter, most likely an irregularly shaped asteroid. In the main frame, an arrow indicates Himalia. North is up. The inset shows the little moon magnified by a factor of 10, plus a graphic indicating Himalia's size and the direction of lighting (with sunlight coming from the left). Cassini's pictures of Himalia were taken during a brief period when Cassini's attitude was stabilized by thrusters instead of by a steadier reaction-wheel system. No spacecraft or telescope had previously shown any of Jupiter's outer moons as more than a star-like single dot. Cassini is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Cassini mission for NASA's Office of Space Science, Washington, D.C.